WorldWideScience

Sample records for source code units

  1. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
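
    A minimal sketch of the syndrome idea behind such schemes (not the paper's sum-product decoder or its doping mechanism): the encoder transmits only the syndrome of the source under a toy parity-check matrix, and the decoder searches the indexed coset for the sequence closest to its side information. All sizes, the matrix H, and the 10% bit-flip correlation are arbitrary choices for illustration.

      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      n, k = 12, 6                             # source length, syndrome length
      H = rng.integers(0, 2, size=(k, n))      # toy binary parity-check matrix

      x = rng.integers(0, 2, size=n)           # source sequence at the encoder
      y = (x + (rng.random(n) < 0.1)) % 2      # correlated side information at the decoder

      s = H @ x % 2                            # encoder transmits only the k-bit syndrome

      # Decoder: find the lowest-weight pattern e such that y + e lies in the
      # coset indexed by s; succeeds when the true noise is the lightest option.
      x_hat = None
      for w in range(n + 1):
          for idx in combinations(range(n), w):
              e = np.zeros(n, dtype=int)
              e[list(idx)] = 1
              if np.array_equal(H @ ((y + e) % 2) % 2, s):
                  x_hat = (y + e) % 2
                  break
          if x_hat is not None:
              break

      print("recovered correctly:", np.array_equal(x_hat, x))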

  2. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural inter-unit analysis, which allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), as well as to support complex assembly systems of projects. Since the selected Clang Static Analyzer uses source code directly as input data, we need to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even for such complicated projects. This system does not depend on the inter-procedural analysis method and allows the arbitrary change of its algorithm.
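
    The flavor of cross-unit merging can be illustrated with a much simpler analogue than the Clang/C++ machinery described above: the hypothetical Python script below parses several files as "translation units", collects top-level function definitions, and flags conflicting duplicates.

      import ast
      import sys
      from collections import defaultdict

      # Gather top-level function definitions across several "translation units"
      # (here, Python files given on the command line) and report conflicts.
      defs = defaultdict(list)
      for path in sys.argv[1:]:
          with open(path) as f:
              tree = ast.parse(f.read(), filename=path)
          for node in tree.body:                     # top-level definitions only
              if isinstance(node, ast.FunctionDef):
                  defs[node.name].append((path, node.lineno))

      for name, sites in sorted(defs.items()):
          if len(sites) > 1:
              print(f"conflicting definitions of {name!r}:")
              for path, line in sites:
                  print(f"  {path}:{line}")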

  3. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, shifting processing steps conventionally performed at the video encoder side to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...

  5. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
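
    A minimal sketch of systematic LDGM encoding, with arbitrary toy dimensions: the generator extension is kept sparse (low density), so parity computation stays cheap. The decoder structures and the concatenated scheme from the paper are not shown.

      import numpy as np

      rng = np.random.default_rng(1)
      k, m = 16, 8                      # message bits, parity bits
      col_weight = 2                    # "low density": each parity checks few message bits

      # Sparse k-by-m generator extension: col_weight ones per parity column.
      Gp = np.zeros((k, m), dtype=int)
      for j in range(m):
          Gp[rng.choice(k, size=col_weight, replace=False), j] = 1

      u = rng.integers(0, 2, size=k)    # message
      p = u @ Gp % 2                    # parity bits: cheap because Gp is sparse
      codeword = np.concatenate([u, p]) # systematic codeword [u | p]
      print(codeword)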

  6. Development of Level-2 PSA Technology: A Development of the Database of the Parametric Source Term for Kori Unit 1 Using the MAAP4 Code

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chang Soon; Mun, Ju Hyun; Yun, Jeong Ick; Cho, Young Hoo; Kim, Chong Uk [Seoul National University, Seoul (Korea, Republic of)

    1997-07-15

    To quantify the severe accident source term of the parametric model method, the uncertainty of the parameters should be analyzed. Generally, to analyze the uncertainties, the cumulative distribution functions (CDFs) of the parameters are derived. This report introduces a method for deriving the CDFs of the basic parameters FCOR, FVES and FDCH. The source term calculation tool is MAAP version 4.0. In the MAAP code, there are model parameters to account for uncertain physical and/or chemical phenomena. In general, the parameters do not have a point value but a range. Considering this point, the input values of the model parameters influencing each parameter are sampled using Latin hypercube sampling (LHS). The calculation results are then shown in cumulative distribution form. For a case study, the CDFs of FCOR, FVES and FDCH of Kori unit 1 are derived. The target scenarios for the calculation are those whose initiating events are a large LOCA, a small LOCA and a transient, respectively. It is found that the distributions of this study are consistent with those of NUREG-1150 and are proven to be adequate in assessing the uncertainties in the severe accident source term of Kori unit 1. 15 refs., 27 tabs., 4 figs. (author)
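
    A minimal sketch of the sampling step, with a throwaway function standing in for an actual MAAP 4.0 run and made-up parameter ranges: Latin hypercube sampling draws one stratified value per equal-probability bin for each parameter, and the empirical CDF of the output is read off the sorted results.

      import numpy as np

      rng = np.random.default_rng(2)
      n_runs, n_params = 50, 3

      # Latin hypercube: one stratified sample per equal-probability bin, per parameter.
      u = (np.arange(n_runs)[:, None] + rng.random((n_runs, n_params))) / n_runs
      for j in range(n_params):
          rng.shuffle(u[:, j])          # decouple the bins across parameters

      lo = np.array([0.1, 0.5, 1.0])    # illustrative parameter ranges,
      hi = np.array([1.0, 2.0, 10.0])   # not actual MAAP model parameters
      inputs = lo + u * (hi - lo)

      # Stand-in for one MAAP run per sampled input: any scalar output.
      out = inputs[:, 0] * inputs[:, 1] / inputs[:, 2]

      # Empirical CDF of the output over the sampled runs.
      xs = np.sort(out)
      cdf = np.arange(1, n_runs + 1) / n_runs
      print(list(zip(np.round(xs[:5], 3), cdf[:5])))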

  7. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes...

  8. Quasi-cyclic unit memory convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Paaske, Erik; Ballan, Mark

    1990-01-01

    Unit memory convolutional codes with generator matrices, which are composed of circulant submatrices, are introduced. This structure facilitates the analysis of, and efficient search for, good codes. Equivalences among such codes and some of the basic structural properties are discussed. In particular, catastrophic encoders and minimal encoders are characterized and dual codes treated. Further, various distance measures are discussed, and a number of good codes, some of which result from efficient computer search and some of which result from known block codes, are presented...

  9. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feed-back. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  10. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant

  11. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images, in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination
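
    A one-dimensional numerical sketch of the decoding principle, using a random ±1 code rather than the Fresnel zone pattern discussed above: because the code's autocorrelation is sharply peaked, cross-correlating the raw image with the code recovers the object up to small sidelobes.

      import numpy as np

      rng = np.random.default_rng(3)
      code = rng.choice([-1.0, 1.0], size=255)   # code with sharply peaked autocorrelation
      obj = np.zeros(64)
      obj[[10, 30, 31]] = [1.0, 0.5, 0.8]        # sparse 1-D "transmission object"

      raw = np.convolve(obj, code)               # raw image cast by the coded source
      dec = np.correlate(raw, code, mode="valid") / code.size   # decode by cross-correlation

      print(np.round(dec[[10, 30, 31]], 2))      # peaks recover the object, plus sidelobes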

  12. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  13. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  14. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
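
    Modularity metrics of the kind described vary widely; the toy score below (not the Carleton metric or toolkit) simply measures the fraction of dependency edges that stay inside a top-level package, on a made-up dependency list.

      from collections import defaultdict

      # Toy dependency edges between modules; package = prefix before the first dot.
      deps = [
          ("server.http", "server.router"),
          ("server.router", "server.handlers"),
          ("server.handlers", "db.query"),
          ("db.query", "db.pool"),
          ("ui.view", "server.http"),
      ]

      def package(module):
          return module.split(".")[0]

      intra = sum(1 for a, b in deps if package(a) == package(b))
      score = intra / len(deps)   # 1.0 = fully intra-package, 0.0 = all coupling crosses packages
      print(f"intra-package dependency ratio: {score:.2f}")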

  15. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...

  16. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.

  17. Development of DUST: A computer code that calculates release rates from a LLW disposal unit

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1992-01-01

    Performance assessment of a Low-Level Waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the disposal unit source term). The major physical processes that influence the source term are water flow, container degradation, waste form leaching, and radionuclide transport. A computer code, DUST (Disposal Unit Source Term) has been developed which incorporates these processes in a unified manner. The DUST code improves upon existing codes as it has the capability to model multiple container failure times, multiple waste form release properties, and radionuclide specific transport properties. Verification studies performed on the code are discussed

  18. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  19. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No. 1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
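
    A minimal sketch of the CDF-and-sample pattern the two codes implement, with a random vector standing in for a real 3D power map: tabulate the cumulative distribution once, then draw source cells by inverse-CDF lookup.

      import numpy as np

      rng = np.random.default_rng(4)
      power = rng.random(1000)              # stand-in for a 3D power map, flattened

      pdf = power / power.sum()
      cdf = np.cumsum(pdf)                  # what a source generation code would tabulate

      # Source particle sampling: invert the CDF with a binary search per particle.
      cells = np.searchsorted(cdf, rng.random(100_000))

      # Sampled frequencies track the power shape.
      counts = np.bincount(cells, minlength=power.size)
      print(counts[:5], np.round(pdf[:5] * 100_000).astype(int))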

  20. Modeling of the CTEx subcritical unit using MCNPX code

    International Nuclear Information System (INIS)

    Santos, Avelino; Silva, Ademir X. da; Rebello, Wilson F.; Cunha, Victor L. Lassance

    2011-01-01

    The present work aims at simulating the subcritical unit of the Army Technology Center (CTEx), namely the ARGUS pile (subcritical uranium-graphite arrangement), using the computational code MCNPX. Once such modeling is finished, it could be used in k-effective calculations for systems using natural uranium as fuel, for instance. ARGUS is a subcritical assembly which uses reactor-grade graphite as the moderator of fission neutrons and metallic uranium fuel rods with aluminum cladding. The pile is driven by an Am-Be spontaneous neutron source. In order to achieve a higher value for keff, a higher concentration of U235 can be proposed, provided keff safely remains below one. (author)

  1. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in the alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
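
    Method (3) can be sketched numerically: for a Poisson number of events with mean λ, the multi-event distribution follows from the single-event spectrum via f = IFFT(exp(λ(FFT(f1) − 1))). The Gaussian single-event spectrum and the λ below are illustrative only, not MICRODOSE 1 inputs.

      import numpy as np

      N = 2048
      z = np.arange(N)
      f1 = np.exp(-((z - 40.0) ** 2) / (2 * 8.0 ** 2))   # illustrative single-event spectrum
      f1[0] = 0.0
      f1 /= f1.sum()                                     # discrete pmf of f1(z)

      lam = 2.5                                          # mean number of events
      # Compound Poisson via the characteristic function: f = exp(lam * (F1 - 1)).
      f = np.fft.ifft(np.exp(lam * (np.fft.fft(f1) - 1.0))).real

      print(f[0], np.exp(-lam))   # mass at z = 0 equals the no-event probability
      print(f.sum())              # ~1.0: a valid probability distribution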

  2. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  3. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting or an author identified by their style or prose, programmers can be identified by their code. Provided a labelled training set of code samples, the techniques used in stylometry can identify the author of a piece of code. (Report figures, not reproduced here: Fig. 1, an example code sample from the grant (Caliskan-Islam et al. 2015); Fig. 2, the corresponding abstract syntax tree from the de-anonymizing programmers' paper (Caliskan-Islam et al.).)
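
    A crude stand-in for the approach, assuming nothing about the report's actual feature set: fingerprint each sample by a histogram of abstract-syntax-tree node types and compare fingerprints with cosine similarity.

      import ast
      from collections import Counter

      def profile(source):
          """Histogram of AST node types: a crude stylometric fingerprint."""
          return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

      def cosine(a, b):
          dot = sum(a[k] * b[k] for k in set(a) | set(b))
          norm = lambda c: sum(v * v for v in c.values()) ** 0.5
          return dot / (norm(a) * norm(b))

      sample_a = "def f(xs):\n    return [x * x for x in xs]"
      sample_b = "def g(ys):\n    return [y + 1 for y in ys]"
      sample_c = "total = 0\nfor i in range(10):\n    total += i"

      print(cosine(profile(sample_a), profile(sample_b)))   # similar styles score higher
      print(cosine(profile(sample_a), profile(sample_c)))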

  4. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  5. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X......, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low......-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information....

  6. Country Report on Building Energy Codes in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Halverson, Mark A.; Shui, Bin; Evans, Meredydd

    2009-04-30

    This report is part of a series of reports on building energy efficiency codes in countries associated with the Asia-Pacific Partnership (APP): Australia, South Korea, Japan, China, India, and the United States of America (U.S.). This report gives an overview of the development of building energy codes in the U.S., including national energy policies related to building energy codes, the history of building energy codes, and recent national projects and activities to promote building energy codes. The report also provides a review of current building energy codes (covering the building envelope, HVAC, lighting, and water heating) for commercial and residential buildings in the U.S.

  7. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  8. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  9. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  10. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  11. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  12. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  13. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
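
    For reference, the classic Blahut-Arimoto iteration for the plain rate-distortion function (without the action and cost extensions the paper adds) is only a few lines; the binary-Hamming example below recovers R(D) = 1 − h(D).

      import numpy as np

      def blahut_arimoto(p_x, d, beta, iters=500):
          """Classic rate-distortion Blahut-Arimoto; returns (rate in bits, distortion)."""
          q = np.full(d.shape[1], 1.0 / d.shape[1])    # reproduction marginal q(y)
          for _ in range(iters):
              w = q * np.exp(-beta * d)                # unnormalised conditional q(y|x)
              w /= w.sum(axis=1, keepdims=True)
              q = p_x @ w                              # re-estimate the marginal
          rate = np.sum(p_x[:, None] * w * np.log2(w / q))
          dist = np.sum(p_x[:, None] * w * d)
          return rate, dist

      p_x = np.array([0.5, 0.5])            # uniform binary source
      d = 1.0 - np.eye(2)                   # Hamming distortion
      rate, dist = blahut_arimoto(p_x, d, beta=4.0)
      h = -dist * np.log2(dist) - (1 - dist) * np.log2(1 - dist)
      print(round(rate, 4), round(1 - h, 4))   # rate matches 1 - h(D) for this source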

  14. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  15. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  16. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  17. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  18. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  19. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    [Report excerpt: fragments of an example RPM spec file, including %pre, %prep, %setup, and %build sections that run ./autogen.sh and ./configure --with-db=/apps/db --with-libpq=/apps/postgres, invoke make, and stage files under $RPM_BUILD_ROOT (umask 0077, mkdir -p $RPM_BUILD_ROOT/usr/local/bin).]

  20. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  1. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal-to-Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and the SNR values obtained from simulations. Possible reasons for the lower SNR obtained in the experimental CAI study are discussed.

  2. Aeroelastic code development activities in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Wright, A.D. [National Renewable Energy Lab., Golden, Colorado (United States)

    1996-09-01

    Designing wind turbines to be fatigue resistant and to have long lifetimes at minimal cost is a major goal of the federal wind program and the wind industry in the United States. To achieve this goal, we must be able to predict critical loads for a wide variety of different wind turbines operating under extreme conditions. The codes used for wind turbine dynamic analysis must be able to analyze a wide range of different wind turbine configurations as well as rapidly predict the loads due to turbulent wind inflow with a minimal set of degrees of freedom. Code development activities in the US have taken a two-pronged approach in order to satisfy both of these criteria: (1) development of a multi-purpose code which can be used to analyze a wide variety of wind turbine configurations without having to develop new equations of motion with each configuration change, and (2) development of specialized codes with minimal sets of specific degrees of freedom for analysis of two- and three-bladed horizontal axis wind turbines and calculation of machine loads due to turbulent inflow. In the first method we have adapted a commercial multi-body dynamics simulation package for wind turbine analysis. In the second approach we are developing specialized codes with limited degrees of freedom, usually specified in the modal domain. This paper will summarize progress to date in the development, validation, and application of these codes. (au) 13 refs.

  3. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
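
    The simpler of the two ingredients, coset coding, can be sketched with scalars: the encoder sends only x mod M for each pixel, and the decoder snaps to the coset member nearest the co-located pixel of a correlated band. The values below are made up; decoding is exact whenever the inter-band difference stays below M/2.

      import numpy as np

      M = 16                                    # coset size: log2(M) bits sent per pixel
      x = np.array([137, 142, 150, 154])        # pixels of the current band
      y = x + np.array([3, -5, 2, -4])          # co-located pixels of a correlated band

      idx = x % M                               # encoder output: coset indices only

      # Decoder: choose the member of coset idx closest to the side information y.
      x_hat = y + ((idx - y + M // 2) % M) - M // 2

      print(np.array_equal(x_hat, x))           # exact as long as |x - y| < M/2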

  4. Evaluation of Yonggwang unit 4 cycle 5 using SPNOVA code

    International Nuclear Information System (INIS)

    Choi, Y. S.; Cha, K. H.; Lee, E. K.; Park, M. K.

    2004-01-01

    A core follow calculation of Yonggwang (YGN) unit 4 cycle 5 is performed to evaluate whether the SPNOVA code is applicable to the Korean standard nuclear power plant (KSNP). SPNOVA consists of the BEPREPN and ANC codes, which represent the in-core detector and neutronics models, respectively. The SPNOVA core depletion model is compared and verified against ANC depletion results in terms of critical boron concentration (CBC), peaking factor (Fq) and radial power distribution. For YGN 4, SPNOVA predicts a CBC about 30 ppm lower than that of ROCS. The Fq and radial power distributions calculated by SPNOVA are conservatively higher than the values predicted by ROCS. The SPNOVA predictions are also compared with measurement data from snapshots and CECOR core calculations. It is reasonable to accept SPNOVA for analyzing the KSNP. The SPNOVA model for the KSNP will be used to develop the new in-core detector of platinum and vanadium

  5. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  6. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), the European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  7. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
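
    A minimal counter in this spirit (a rough sketch, not the study's toolchain — a regex scan will miscount occurrences inside strings and comments, where a real static analyzer would not):

      import re
      import sys
      from collections import Counter

      # Functions flagged as unsafe in the study (memcpy, strlen, strcpy, ...).
      UNSAFE = ("strcpy", "strcat", "sprintf", "strcmp", "strlen", "memcpy", "gets")
      CALL = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

      counts = Counter()
      for path in sys.argv[1:]:             # pass C/C++ source files on the command line
          with open(path, errors="ignore") as f:
              counts.update(m.group(1) for m in CALL.finditer(f.read()))

      for name, n in counts.most_common():
          print(f"{name:8s} {n}")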

  8. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  9. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
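
    The flavor of source code transformation can be shown on a deliberately tiny scale (Tangent itself handles full Python/NumPy functions; this toy only differentiates +/* expressions and needs Python 3.9+ for ast.unparse):

      import ast

      def derive(expr, var="x"):
          """Return source text for d(expr)/d(var); supports +, *, names, constants."""
          def d(n):
              if isinstance(n, ast.BinOp) and isinstance(n.op, ast.Add):
                  return f"({d(n.left)} + {d(n.right)})"
              if isinstance(n, ast.BinOp) and isinstance(n.op, ast.Mult):
                  fl, fr = ast.unparse(n.left), ast.unparse(n.right)
                  return f"({d(n.left)} * {fr} + {fl} * {d(n.right)})"   # product rule
              if isinstance(n, ast.Name):
                  return "1" if n.id == var else "0"
              if isinstance(n, ast.Constant):
                  return "0"
              raise NotImplementedError(ast.dump(n))
          return d(ast.parse(expr, mode="eval").body)

      src = derive("x * x + 3 * x")
      print(src)                        # ((1 * x + x * 1) + (0 * x + 3 * 1))
      print(eval(src, {"x": 5.0}))      # 13.0 = d/dx(x^2 + 3x) at x = 5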

  10. Analysis of radiation field distribution in Yonggwang unit 3 with MCNP code

    International Nuclear Information System (INIS)

    Lee, Cheol Woo; Ha, Wi Ho; Shin, Chang Ho; Kim, Soon Young; Kim, Jong Kyung

    2004-01-01

    A radiation field analysis is performed for the inside of the containment building of a nuclear power plant (NPP) using the well-known MCNP code. The target NPP in this study is Yonggwang Unit 3 Cycle 8. In this work, whole transport calculations were done using MCNPX 2.4.0 due to the functional benefits, such as Mesh Tally, that the code provides. The neutron spectra released from the operating reactor core were first evaluated as a radiation source term, and then dose distributions in the work areas of the NPP were calculated

  11. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    We consider the case of two correlated sources, X and Y. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by the source X over an additive white Gaussian noise (AWGN) channel when the output of the other source Y is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the source X. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  12. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  13. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  14. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of the STCP runs on SDC computer systems, but as it has been written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-850 version of the STCP contains five codes: MARCH3, TRAPMELT, TCCA, VANESA and NAVA. The example presented in this report considers a small-LOCA accident in a PWR-type reactor. (M.I.)

  15. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    Science.gov (United States)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.

  16. Comparative analysis of design codes for timber bridges in Canada, the United States, and Europe

    Science.gov (United States)

    James Wacker; James (Scott) Groenier

    2010-01-01

    The United States recently completed its transition from the allowable stress design code to the load and resistance factor design (LRFD) reliability-based code for the design of most highway bridges. For an international perspective on LRFD-based bridge codes, a comparative analysis is presented: the study addressed the national codes of the United States, Canada, and...

  17. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment model approach, has been developed to calculate the near-field source term of a High-Level-Waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock

  18. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  19. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or at the lowest possible distortion given a specified bit rate.... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources which are sources that can... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...

  20. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
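
    A minimal Python sketch of the paired structural metric described above: submissions are tokenised so that renamed identifiers and literals compare equal, and a pair is scored by its longest common token substring. The crude regular-expression lexer and token classes are assumptions for illustration, not the internals of MOSS or JPlag.

        import re

        def tokenize(source: str) -> list:
            """Collapse identifiers and numbers so renaming cannot hide copying."""
            tokens = []
            for tok in re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source):
                if tok[0].isdigit():
                    tokens.append("NUM")
                elif tok[0].isalpha() or tok[0] == "_":
                    tokens.append("ID")
                else:
                    tokens.append(tok)   # operators and punctuation kept verbatim
            return tokens

        def longest_common_substring(a: list, b: list) -> int:
            """Length of the longest run of identical consecutive tokens (DP)."""
            best, prev = 0, [0] * (len(b) + 1)
            for i in range(1, len(a) + 1):
                cur = [0] * (len(b) + 1)
                for j in range(1, len(b) + 1):
                    if a[i - 1] == b[j - 1]:
                        cur[j] = prev[j - 1] + 1
                        best = max(best, cur[j])
                prev = cur
            return best

        s1 = tokenize("total = total + prices[i];")
        s2 = tokenize("sum = sum + cost[j];")
        print(longest_common_substring(s1, s2))  # -> 9: the token streams match end to end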

  1. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures; on the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without great effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verifying the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study in which we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
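
    The paper transforms C++ ASTs into Prolog facts. As a hedged, language-neutral sketch of that step, the Python below walks a tree from the standard ast module and emits node/2 and edge/2 facts that a Prolog rule base could query; the fact names, and the use of Python source instead of C++, are illustrative assumptions.

        import ast

        def emit_facts(source: str) -> list:
            """Turn an abstract syntax tree into Prolog-style facts:
            node(Id, Kind) for every AST node and edge(Parent, Child)
            for every parent-child relation."""
            tree = ast.parse(source)
            ids = {id(n): i for i, n in enumerate(ast.walk(tree))}
            facts = []
            for node in ast.walk(tree):
                facts.append("node(n%d, %s)." % (ids[id(node)], type(node).__name__.lower()))
                for child in ast.iter_child_nodes(node):
                    facts.append("edge(n%d, n%d)." % (ids[id(node)], ids[id(child)]))
            return facts

        for fact in emit_facts("x = f(1)"):
            print(fact)   # e.g. node(n0, module). edge(n0, n1). ...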

  2. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and an ability for precise source locations. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS or HEAO type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented

  3. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  5. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation

  6. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.

  7. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  8. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.

  9. MHD code using multi graphical processing units: SMAUG+

    Science.gov (United States)

    Gyenge, N.; Griffiths, M. K.; Erdélyi, R.

    2018-01-01

    This paper introduces the Sheffield Magnetohydrodynamics Algorithm Using GPUs (SMAUG+), an advanced numerical code for solving magnetohydrodynamic (MHD) problems using multi-GPU systems. Multi-GPU systems facilitate the development of accelerated codes and enable us to investigate larger model sizes and/or more detailed computational domain resolutions. This is a significant advancement over the parent single-GPU MHD code, SMAUG (Griffiths et al., 2015). Here, we demonstrate the validity of the SMAUG+ code, describe the parallelisation techniques and investigate performance benchmarks. The initial configuration of the Orszag-Tang vortex simulations is distributed among 4, 16, 64 and 100 GPUs. Furthermore, different simulation box resolutions are applied: 1000 × 1000, 2044 × 2044, 4000 × 4000 and 8000 × 8000. We also tested the code with the Brio-Wu shock tube simulations with a model size of 800, employing up to 10 GPUs. Based on the test results, we observed speed-ups and slowdowns, depending on the granularity and the communication overhead of certain parallel tasks. The main aim of the code development is to provide a massively parallel code without the memory limitation of a single GPU. By using our code, the applied model size can be significantly increased. We demonstrate that we are able to successfully compute numerically valid and large 2D MHD problems.
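
    As a hedged sketch of the domain decomposition underlying such multi-GPU codes, the NumPy fragment below splits a 2-D grid into strips padded with one-cell halo rows, the data that neighbouring workers must exchange each step. The clamped boundary treatment is an assumption for illustration, not SMAUG+'s actual scheme.

        import numpy as np

        def split_with_halos(grid: np.ndarray, n_parts: int) -> list:
            """Decompose a 2-D grid into strips, each padded with one halo
            row from its neighbour (edges are clamped for simplicity)."""
            strips = np.array_split(grid, n_parts, axis=0)
            padded = []
            for i, s in enumerate(strips):
                top = strips[i - 1][-1:] if i > 0 else s[:1]
                bottom = strips[i + 1][:1] if i < n_parts - 1 else s[-1:]
                padded.append(np.vstack([top, s, bottom]))
            return padded

        parts = split_with_halos(np.arange(64.0).reshape(8, 8), 4)
        print([p.shape for p in parts])   # [(4, 8), (4, 8), (4, 8), (4, 8)]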

  10. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and to the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  11. Pesticide Information Sources in the United States.

    Science.gov (United States)

    Alston, Patricia Gayle

    1992-01-01

    Presents an overview of electronic and published sources on pesticides. Includes sources such as databases, CD-ROMs, books, journals, brochures, pamphlets, fact sheets, hotlines, courses, electronic mail, and electronic bulletin boards. (MCO)

  12. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    This paper describes the application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed. (Auth.)

  13. Application of containment codes to LMFBRs in the United States

    International Nuclear Information System (INIS)

    Chang, Y.W.

    1977-01-01

    The application of containment codes to predict the response of the fast reactor containment and the primary piping loops to HCDAs is described. Five sample problems are given to illustrate their applications. The first problem deals with the response of the primary containment to an HCDA. The second problem deals with the coolant flow in the reactor lower plenum. The third problem concerns sodium spillage and slug impact. The fourth problem deals with the response of a piping loop. The fifth problem analyzes the response of a reactor head closure. Application of codes in parametric studies and comparison of code predictions with experiments are also discussed

  14. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  15. Development of a 14-digit Hydrologic Unit Code Numbering System for South Carolina

    Science.gov (United States)

    Bower, David E.; Lowry, Claude; Lowery, Mark A.; Hurley, Noel M.

    1999-01-01

    A Hydrologic Unit Map showing the cataloging units, watersheds, and subwatersheds of South Carolina has been developed by the U.S. Geological Survey in cooperation with the South Carolina Department of Health and Environmental Control, funded through a U.S. Environmental Protection Agency 319 Grant, and the U.S. Department of Agriculture, Natural Resources Conservation Service. These delineations represent 8-, 11-, and 14-digit Hydrologic Unit Codes, respectively. This map presents information on drainage, hydrography, and hydrologic boundaries of the water-resources regions, subregions, accounting units, cataloging units, watersheds, and subwatersheds. The source maps for the basin delineations are 1:24,000-scale 7.5-minute series topographic maps and the base maps are from 1:100,000-scale Digital Line Graphs; however, the data are published at a scale of 1:500,000. In addition, an electronic version of the data is provided on a compact disc.Of the 1,022 subwatersheds delineated for this project, 1,004 range in size from 3,000 to 40,000 acres (4.69 to 62.5 square miles). Seventeen subwatersheds are smaller than 3,000 acres and one subwatershed, located on St. Helena Island, is larger than 40,000 acres.This map and its associated codes provide a standardized base for use by water-resource managers and planners in locating, storing, retrieving, and exchanging hydrologic data. In addition, the map can be used for cataloging water-data acquisition activities, geographically organizing hydrologic data, and planning and describing water-use and related land-use activities.
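
    The nesting described above is carried by the digits themselves: each level's code is a prefix of the next, with 2, 4, 6, 8, 11 and 14 digits for region through subwatershed. A small Python sketch of splitting a 14-digit code into those levels follows; the example code value is hypothetical.

        def parse_huc14(huc: str) -> dict:
            """Split a 14-digit hydrologic unit code into its nested levels;
            prefix lengths follow the region / subregion / accounting unit /
            cataloging unit / watershed / subwatershed hierarchy."""
            if len(huc) != 14 or not huc.isdigit():
                raise ValueError("expected a 14-digit numeric HUC")
            levels = (("region", 2), ("subregion", 4), ("accounting_unit", 6),
                      ("cataloging_unit", 8), ("watershed", 11), ("subwatershed", 14))
            return {name: huc[:n] for name, n in levels}

        print(parse_huc14("03050103020010"))  # hypothetical South Carolina code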

  16. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  17. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  18. Wood construction codes issues in the United States

    Science.gov (United States)

    Douglas R. Rammer

    2006-01-01

    The current wood construction codes find their origin in the 1935 Wood Handbook: Wood as an Engineering Material published by the USDA Forest Service. Many of the current design recommendations can be traced back to statements from this book. Since that time, a series of developments, both historical and recent, has led to a multi-layered system for the use of wood products in...

  19. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first ...

  20. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    The abstract in this record is garbled by extraction; the recoverable fragments state that compliance with ISO/IEC 17025 demonstrates technical competence for the type of tests and calibrations SCALe undertakes, that a SCALe analysis indicates whether a software system conforms to a CERT secure coding standard, that conforming systems are expected to be more secure than non-conforming systems although no study has yet demonstrated this, and that conformity assessment is performed in accordance with ISO/IEC 17000.

  1. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  2. Private Source Funding for FCS Units

    Science.gov (United States)

    Winchip, Susan M.

    2004-01-01

    Financial difficulties have prompted institutions of higher education to explore private sources of funding. In recent years, public institutions have significantly increased their focus on private giving, with several campaigns having more than $1 billion as a goal. Family and consumer sciences (FCS) professionals need to be actively involved in…

  3. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach to genetic engineering. The first section discusses the greater flexibility in regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question of whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life, understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  4. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  5. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
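
    The abstract does not spell out the three proposed schemes. As a hedged sketch of a standard building block for authenticated code dissemination, the Python below chains packets with per-packet hashes anchored in a single value that would be signed; SHA-256 and in-order delivery are assumptions, and this is not the authors' dynamic-packet-size construction.

        import hashlib

        def chain_packets(packets: list) -> tuple:
            """Append to each packet the hash of its successor, so a receiver
            can verify packet i against the hash carried by the already
            verified packet i-1. Only the returned root needs an expensive
            signature; per-packet checks cost a single hash each."""
            next_hash = b""
            chained = []
            for payload in reversed(packets):
                blob = payload + next_hash
                next_hash = hashlib.sha256(blob).digest()
                chained.append(blob)
            chained.reverse()
            return next_hash, chained   # (root to be signed, packets to send)

        def verify_stream(root: bytes, chained: list) -> bool:
            expected = root
            for blob in chained:
                if hashlib.sha256(blob).digest() != expected:
                    return False
                expected = blob[-32:]   # hash of the next packet rides along
            return True

        root, stream = chain_packets([b"chunk-%d" % i for i in range(4)])
        print(verify_stream(root, stream))   # True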

  7. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary “./configure” builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  8. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  9. ERP correlates of source memory: unitized source information increases familiarity-based retrieval.

    Science.gov (United States)

    Diana, Rachel A; Van den Boom, Wijnand; Yonelinas, Andrew P; Ranganath, Charan

    2011-01-07

    Source memory tests typically require subjects to make decisions about the context in which an item was encoded and are thought to depend on recollection of details from the study episode. Although it is generally believed that familiarity does not contribute to source memory, recent behavioral studies have suggested that familiarity may also support source recognition when item and source information are integrated, or "unitized," during study (Diana, Yonelinas, and Ranganath, 2008). However, an alternative explanation of these behavioral findings is that unitization affects the manner in which recollection contributes to performance, rather than increasing familiarity-based source memory. To discriminate between these possibilities, we conducted an event-related potential (ERP) study testing the hypothesis that unitization increases the contribution of familiarity to source recognition. Participants studied associations between words and background colors using tasks that either encouraged or discouraged unitization. ERPs were recorded during a source memory test for background color. The results revealed two distinct neural correlates of source recognition: a frontally distributed positivity that was associated with familiarity-based source memory in the high-unitization condition only and a parietally distributed positivity that was associated with recollection-based source memory in both the high- and low-unitization conditions. The ERP and behavioral findings provide converging evidence for the idea that familiarity can contribute to source recognition, particularly when source information is encoded as an item detail. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Unitals and ovals of symmetric block designs in LDPC and space-time coding

    Science.gov (United States)

    Andriamanalimanana, Bruno R.

    2004-08-01

    An approach to the design of LDPC (low density parity check) error-correction and space-time modulation codes involves starting with known mathematical and combinatorial structures, and deriving code properties from structure properties. This paper reports on an investigation of unital and oval configurations within generic symmetric combinatorial designs, not just classical projective planes, as the underlying structure for classes of space-time LDPC outer codes. Of particular interest are the encoding and iterative (sum-product) decoding gains that these codes may provide. Various small-length cases have been numerically implemented in Java and Matlab for a number of channel models.

  11. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  12. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  13. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost

  14. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  15. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  16. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, media and detector; this concept underlies a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources, and compared the results with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
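
    To make the effective solid angle concept concrete, here is a hedged Monte Carlo sketch for a point source facing a circular detector behind a uniform absorber: each direction is weighted by its attenuation along the slant path. The single-absorber geometry is a simplification for illustration, not the ESA code's algorithm.

        import math, random

        def effective_solid_angle(det_radius, dist, mu, thickness, n=200_000):
            """MC estimate of the solid angle subtended by a circular detector
            face at `dist` below a point source, with each direction weighted
            by exp(-mu * slant_path) through an absorber of given thickness."""
            acc = 0.0
            for _ in range(n):
                cos_t = 1.0 - random.random()          # uniform in (0, 1]
                if dist * math.sqrt(1 - cos_t**2) / cos_t <= det_radius:
                    acc += math.exp(-mu * thickness / cos_t)
            return 2 * math.pi * acc / n               # hemisphere = 2*pi sr

        # With mu = 0 this reduces to the purely geometric solid angle
        # (about 0.12 sr for this radius/distance combination).
        print(effective_solid_angle(2.0, 10.0, 0.0, 1.0))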

  17. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used that is developed independently of the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  18. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package (STCP) in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak from the metallic containment of power reactors to the external environment during a severe reactor accident. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used; the example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  19. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  20. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serial concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
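
    The binning concept admits a tiny worked example. The Python sketch below uses the (7,4) Hamming code: the encoder transmits only the 3-bit syndrome of its 7-bit word (the bin index), and the decoder returns the bin member closest to its side information. The brute-force search stands in for the structured SF-ISF machinery of the paper.

        import numpy as np

        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])   # (7,4) Hamming parity checks

        def syndrome_former(x):
            """Compress 7 bits to their 3-bit syndrome (the bin index)."""
            return H.dot(x) % 2

        def decode(s, y):
            """Return the member of bin `s` closest to the side information y;
            exact whenever x and y differ in at most one position."""
            best, best_d = None, 8
            for i in range(128):
                x = np.array([(i >> k) & 1 for k in range(7)])
                if np.array_equal(syndrome_former(x), s):
                    d = int(np.sum(x != y))
                    if d < best_d:
                        best, best_d = x, d
            return best

        x = np.array([1, 0, 1, 1, 0, 0, 1])
        y = x.copy(); y[2] ^= 1                 # side info with one bit flipped
        print(np.array_equal(decode(syndrome_former(x), y), x))   # True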

  1. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until a query returns the code in question as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
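
    The study's minimal-word criterion lends itself to a compact sketch. The Python below applies it to a three-entry toy code list with a deliberately naive relevance score; both the score function and the sample descriptions are assumptions, not any of the evaluated engines.

        from itertools import combinations

        def score(query, text):
            """Toy relevance score: how many query words occur in the text."""
            words = text.lower().split()
            return sum(words.count(w) for w in query)

        def minimal_query(target, codes):
            """Smallest word combination from the target's own description
            that ranks the target first among all codes."""
            words = codes[target].lower().split()
            for k in range(1, len(words) + 1):
                for combo in combinations(words, k):
                    best = max(codes, key=lambda c: score(combo, codes[c]))
                    if best == target and score(combo, codes[target]) > 0:
                        return combo
            return tuple(words)

        codes = {"J45": "asthma", "J21": "acute bronchiolitis",
                 "J20": "acute bronchitis"}
        print(minimal_query("J20", codes))   # ('bronchitis',)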

  2. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  3. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  4. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms in the first seven years of sampling are presented and discussed. The application of lysimeter data to performance assessment source term models is presented. Initial results from the use of the data in two models are discussed

  5. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport, via the groundwater, of the released radionuclides to the biosphere. The processes accounted for in quantifying release rates to a groundwater migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment

  6. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  7. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts, etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
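
    The 19 feature dimensions are not enumerated in the abstract. As a hedged illustration of the lexical and layout end of such a feature vector, the Python below computes four simple style metrics from a source string; the particular metrics are assumptions chosen for illustration.

        def layout_metrics(code: str) -> dict:
            """Four toy lexical/layout features of the kind fed to a classifier
            for authorship attribution (illustrative, not the paper's set)."""
            lines = code.splitlines() or [""]
            n = len(lines)
            return {
                "avg_line_len": sum(len(l) for l in lines) / n,
                "blank_line_ratio": sum(1 for l in lines if not l.strip()) / n,
                "tab_indented_ratio": sum(1 for l in lines if l.startswith("\t")) / n,
                "brace_on_own_line": sum(1 for l in lines if l.strip() == "{") / n,
            }

        print(layout_metrics("int f(int x)\n{\n\treturn x + 1;\n}\n"))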

  8. The United States initiative for international radioactive source management (ISRM)

    International Nuclear Information System (INIS)

    Naraine, N.; Karhnak, J.

    1999-01-01

    The United States takes seriously the potential problems from uncontrolled radioactive sources. To address these problems, the United States Department of State is leading the development of an initiative for International Radioactive Source Management (ISRM). The Department of State, through a number of Federal and state agencies, regulatory bodies and private industry, will endeavor to provide coordinated support to the international community, particularly through the IAEA, to assist in the development and implementation of risk-based clearance levels to support the import/export of radioactive contaminated metals and the tracking, management, identification, remediation, and disposition of 'lost sources' entering nation states and targeted industries. The United States believes that the international control of radioactive sources is critical in avoiding widespread contamination of the world metal supply. Thus the initiative has four objectives: (1) protect sources from becoming lost (tracking and management); (2) identify primary locations where sources have been lost (stop future losses); (3) locate lost sources (monitor and retrieve); and (4) educate and train (deploy knowledge and technology). A number of efforts already underway in the United States support the overall initiative. The EPA has provided a grant to the Conference of Radiation Control Program Directors (CRCPD) to develop a nation-wide program for the disposition of orphaned radioactive sources. This program now has internet visibility and a toll-free telephone number to call for assistance in the disposal of sources. The Nuclear Regulatory Commission (NRC), the Department of Energy (DOE), and other government agencies as well as private companies are assisting CRCPD in this program. The NRC has begun a program to improve control of radioactive sources in the United States, and also intends to promulgate a regulation defining conditions for the release of materials from licensed facilities. The DOE is

  9. Dose mapping in working space of KORI unit 1 using MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C. W.; Shin, C. H.; Kim, J. G. [Hanyang University, Seoul (Korea, Republic of); Kim, S. Y. [Innovative Techonology Center for Radiation Safety, Seoul (Korea, Republic of)

    2004-07-01

    Radiation field analysis in nuclear power plants mainly depends on actual measurements. In this study, an analysis using computational calculation is performed to overcome the limits of measurement and to provide initial information for unfolding. Radiation field mapping is performed, which makes it possible to analyze the trends of the radiation field over the whole space. Using the MCNPX code, the inside of the containment building is modeled for KORI unit 1 cycle 21 under operation. Applying the neutron spectrum from the operating reactor as a radiation source, the ambient doses are calculated over the whole space inside the containment building for neutron and photon fields. Dose mapping is performed for three spaces, 6-20, 20-44, and 44-70 ft from the bottom of the containment building. The radiation distribution in the dose maps shows the effects of the structures and materials of components. With these dose maps, the radiation field analysis covers the region near the detector position. Analysis and prediction are possible for radiation fields from other radiation sources or operating cycles.

  10. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  11. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  12. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  13. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as of the spectrum of all the neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a
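
    As a rough illustration of the SDEF-based approach taken by DDT, the hypothetical helper below writes a minimal MCNP source definition for an isotropic, monoenergetic 14.1 MeV DT point source. The actual codes additionally compute the T(d,n)4He energy-angle correlation and the deuteron slowing-down in the target, which this sketch deliberately omits.

```python
# Hypothetical sketch (not the DDT code itself): emit a minimal MCNP
# SDEF card for a DT neutron point source. Real source generators
# compute the full T(d,n)4He energy-angle correlation; here the source
# is reduced to an isotropic, monoenergetic 14.1 MeV point source.

def write_dt_sdef(path, energy_mev=14.1, position=(0.0, 0.0, 0.0)):
    """Write a one-card MCNP source definition file."""
    x, y, z = position
    card = f"SDEF PAR=1 ERG={energy_mev} POS={x} {y} {z}"  # PAR=1: neutron
    with open(path, "w") as fh:
        fh.write(card + "\n")

write_dt_sdef("dt_source.i")
```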

  14. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  15. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  16. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  17. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  18. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B. 4. The

  19. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. A more effective accident prevention system will thus help to promote a safety culture and to gain public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for protection against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface system between FADAS and a source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors

  20. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, California 94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
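
    The flavor of this optimization can be conveyed with a toy calculation (all models below are illustrative assumptions, not the paper's 3D subband codec or channel model): sweep the split of a fixed bit budget between source coding and channel protection, and keep the split with the lowest expected distortion.

```python
# Toy joint source/channel bit allocation: split a fixed per-sample bit
# budget between source coding and channel protection so that expected
# distortion is minimized. Assumed models (not the paper's): an
# exponential rate-distortion curve and a geometric decoding-failure
# probability that decays with the number of protection bits.

SIGMA2 = 1.0        # source variance = distortion when decoding fails
EPS = 0.3           # channel unreliability parameter (assumed)
R_TOTAL = 8         # total bits per sample

def expected_distortion(r_s):
    r_c = R_TOTAL - r_s                   # bits spent on protection
    p_fail = EPS ** (1 + r_c)             # assumed failure probability
    d_src = SIGMA2 * 2.0 ** (-2.0 * r_s)  # distortion when decoded
    return p_fail * SIGMA2 + (1.0 - p_fail) * d_src

r_best = min(range(R_TOTAL + 1), key=expected_distortion)
print(f"source bits: {r_best}, expected distortion: "
      f"{expected_distortion(r_best):.4f}")
```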

  1. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers to understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs.... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  2. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  3. Thermal-hydraulic analysis of the Three Mile Island Unit 2 reactor accident with THALES code

    International Nuclear Information System (INIS)

    Hashimoto, Kazuichiro; Soda, Kunihisa

    1991-10-01

    The OECD Nuclear Energy Agency (NEA) has established a Task Group in the Committee on the Safety of Nuclear Installations (CSNI) to perform an analysis of the Three Mile Island Unit 2 (TMI-2) accident as a standard problem to benchmark severe accident computer codes and to assess the capability of the codes. The TMI-2 Analysis Exercise was performed at the Japan Atomic Energy Research Institute (JAERI) using the THALES-PM1/TMI (Thermal-Hydraulic Analysis of Loss-of-Coolant, Emergency Core Cooling and Severe Core Damage) code. The purpose of the analysis is to verify the capability of the THALES-PM1/TMI code to describe accident progression in the actual plant. The present paper describes the final result of the TMI-2 Analysis Exercise performed at JAERI. (author)

  4. The new orphaned radioactive sources program in the United States

    International Nuclear Information System (INIS)

    Naraine, N.; Karhnak, J.M.

    1998-01-01

    Exposure of the public to uncontrolled radioactive sources has become a significant concern to the United States (US) Government because of the continuous increase in the number of sources that are being found, sometimes without proper radiation markings. This problem is primarily due to inadequate control, insufficient accountability, and improper disposal of radioactive materials. The US Environmental Protection Agency (EPA) has funded a cooperative 'orphaned' source initiative with the Conference of Radiation Control Program Directors (CRCPD) to bring unwanted sources under control and thus reduce the potential for unnecessary exposure of the public, workers and the environment. The program is being developed through the cooperative efforts of government agencies and industry, and will provide a quick and efficient method to bring orphaned sources under control and out of potentially dangerous situations. (author)

  5. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  6. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes into account aspects other than the similarity between programs. WASTK first transforms the source code of a program into an abstract syntax tree and then gets the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
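
    A drastically simplified sketch of the idea is given below: a TF-IDF-weighted bag of AST node types with cosine similarity stands in for WASTK's full tree kernel, but it shows how ubiquitous (e.g., instructor-supplied) constructs get down-weighted.

```python
import ast
import math
from collections import Counter

def node_types(source):
    """Multiset of AST node-type names for one program."""
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def tfidf_similarity(sources, i, j):
    """Cosine similarity between programs i and j, with node types
    down-weighted when they occur in many programs (IDF), echoing
    WASTK's down-weighting of trivial boilerplate nodes."""
    bags = [node_types(s) for s in sources]
    n = len(bags)
    idf = {t: math.log(n / sum(1 for b in bags if t in b))
           for t in bags[i] | bags[j]}
    vi = {t: bags[i][t] * idf[t] for t in bags[i]}
    vj = {t: bags[j][t] * idf[t] for t in bags[j]}
    dot = sum(vi[t] * vj.get(t, 0.0) for t in vi)
    norm = (math.sqrt(sum(v * v for v in vi.values()))
            * math.sqrt(sum(v * v for v in vj.values())))
    return dot / norm if norm else 0.0

progs = ["def f(x):\n    return x + 1",
         "def g(y):\n    return y + 2",
         "print('hello')"]
print(tfidf_similarity(progs, 0, 1))  # near 1: structurally identical
```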

  7. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  8. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  9. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available Among UML-related books, there are many manuals explaining the language specification. Only some of these books concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from the system requirements specification to structural source code, useful while developing an information system.

  10. Clean Energy in City Codes: A Baseline Analysis of Municipal Codification across the United States

    Energy Technology Data Exchange (ETDEWEB)

    Cook, Jeffrey J. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Aznar, Alexandra [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dane, Alexander [National Renewable Energy Lab. (NREL), Golden, CO (United States); Day, Megan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Mathur, Sivani [National Renewable Energy Lab. (NREL), Golden, CO (United States); Doris, Elizabeth [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2016-12-01

    Municipal governments in the United States are well positioned to influence clean energy (energy efficiency and alternative energy) and transportation technology and strategy implementation within their jurisdictions through planning, programs, and codification. Municipal governments are leveraging planning processes and programs to shape their energy futures. There is limited understanding in the literature related to codification, the primary way that municipal governments enact enforceable policies. The authors fill the gap in the literature by documenting the status of municipal codification of clean energy and transportation across the United States. More directly, we leverage online databases of municipal codes to develop national and state-specific representative samples of municipal governments by population size. Our analysis finds that municipal governments with the authority to set residential building energy codes within their jurisdictions frequently do so. In some cases, communities set codes higher than their respective state governments. Examination of codes across the nation indicates that municipal governments are employing their code as a policy mechanism to address clean energy and transportation.

  11. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model the performance of wave energy converters in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
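
    For context, the Cummins formulation that WEC-Sim solves can be sketched in a single degree of freedom as follows (generic textbook form; WEC-Sim solves the corresponding 6-DOF matrix system):

        (m + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,\mathrm{d}\tau + C\,x(t) = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)

    where m is the body mass, A_\infty the infinite-frequency added mass, K(t) the radiation impulse-response kernel, C the hydrostatic stiffness, and F_exc and F_PTO the wave-excitation and power-take-off forces.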

  12. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)
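
    In standard notation, such a source expansion reads (generic normalization assumed; the abstract does not spell out the convention used in TORT-TD):

        q(\mathbf{r},\boldsymbol{\Omega},t) \approx \sum_{l=0}^{L}\frac{2l+1}{4\pi}\sum_{m=-l}^{l} q_{lm}(\mathbf{r},t)\,Y_{lm}(\boldsymbol{\Omega})

    so the anisotropic external source enters the fixed-source term with the same moment structure as the scattering source.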

  13. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities, which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  14. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses, arising from the use of borehole logging equipment containing sealed radioactive sources, as low as reasonably achievable and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used through wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  15. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  16. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    Full Text Available In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
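
    The projection-and-quantize core of such a hash can be sketched as follows (the sign quantization below is a simplified stand-in for the LDPC syndrome coding and distributed-source decoding described above):

```python
import numpy as np

# Sketch of the hashing idea: random projections of a time-frequency
# representation, then 1-bit quantization. The actual scheme transmits
# LDPC syndrome bits of the projections and decodes them with
# distributed source coding tools; sign quantization stands in here.

def spectrogram(x, win=256):
    frames = x[: len(x) // win * win].reshape(-1, win)
    return np.abs(np.fft.rfft(frames, axis=1))

def audio_hash(x, n_proj=64, seed=42):
    s = spectrogram(x).ravel()
    # fixed seed so provider and user share the projection matrix
    proj = np.random.default_rng(seed).standard_normal((n_proj, s.size))
    return (proj @ s > 0).astype(np.uint8)   # compact 1-bit signature

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)                # stand-in for an audio clip
tampered = x.copy()
tampered[1000:1100] += 2.0                   # sparse, localized attack
h_ref, h_recv = audio_hash(x), audio_hash(tampered)
print("hash bits flipped:", int(np.sum(h_ref != h_recv)))
```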

  17. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data-vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
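
    A minimal sketch of a replicator network as described above (a multilayer perceptron with three hidden layers trained to reproduce its input; layer sizes and training details are illustrative assumptions) might look like this:

```python
import torch
import torch.nn as nn

# Minimal replicator-network sketch: an MLP with three hidden layers
# (64, 3, 64) trained to reproduce its input. The narrow middle layer,
# in the limit described above, yields the "natural coordinates" of
# the data manifold. All sizes here are illustrative assumptions.

class Replicator(nn.Module):
    def __init__(self, dim_in=32, dim_manifold=3):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Linear(dim_in, 64), nn.Tanh(),
            nn.Linear(64, dim_manifold), nn.Tanh())  # natural coordinates
        self.decode = nn.Sequential(
            nn.Linear(dim_manifold, 64), nn.Tanh(),
            nn.Linear(64, dim_in))
    def forward(self, x):
        return self.decode(self.encode(x))

net = Replicator()
x = torch.randn(128, 32)                 # vectors from some source
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                     # minimize mean squared error
    opt.zero_grad()
    loss = ((net(x) - x) ** 2).mean()
    loss.backward()
    opt.step()
```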

  18. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    Science.gov (United States)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  19. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
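
    The Watt spectra mentioned above are conventionally written as

        \chi(E) = C\,e^{-E/a}\,\sinh\sqrt{bE}

    with nuclide-specific parameters a and b and a normalization constant C; SOURCES-3A evaluates such spectra with tabulated parameters for each of the 43 actinides.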

  20. Time-dependent anisotropic distributed source capability in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  1. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction
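
    In generic coded-mask notation (ours, not necessarily the authors'), the sky image is recovered by cross-correlating the detector image D with a decoding array G matched to the mask pattern M:

        \hat{S} = D \star G, \quad \text{with } G \text{ chosen so that } M \star G \approx \delta

    For a point source at finite distance d_s from the mask, the mask shadow on the detector is magnified by m = (d_s + d_{md})/d_s, where d_{md} is the mask-detector separation; decoding with an array rescaled by m is one way to remove the defocusing artifacts discussed above.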

  2. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
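
    The design criterion named above can be written out as follows (a sketch; the exact form of D_min depends on the particular HDA scheme). For Rayleigh fading, the CSNR γ is exponentially distributed, and the encoder minimizes

        \mathrm{AMMSE} = \int_0^{\infty} D_{\min}(\gamma)\,p(\gamma)\,\mathrm{d}\gamma, \qquad p(\gamma)=\frac{1}{\bar{\gamma}}\,e^{-\gamma/\bar{\gamma}},

    jointly over the channel-code outage CSNR and the digital-analog power split.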

  3. Ethical Responsibilities: An Empirical Analysis Of The Ethical Codes Of The Top 100 Companies In The United Kingdom

    OpenAIRE

    Sarah D. Stanwick; Peter A. Stanwick

    2011-01-01

    In response to ethical dilemmas faced by companies around the globe, companies are developing or refining their ethical codes. Many of these companies communicate these codes to their stakeholders through the company's corporate social responsibility (CSR) report. This paper examines the ethics codes of the top 100 companies (based on market capitalization) in the United Kingdom. A sample of CSR reports for these companies is examined to determine if the company includes its ethical code in th...

  4. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGA and/or DSP processors. The implementation is based on VEditor, a free-license program; the work presented in this paper thus supplements and extends that free-license tool. The introduction briefly characterizes the tools available on the market which aid the design of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are then presented, such as the programming extension concept and the results of the activities of the formatter, refactorer, code hider, and other new additions to the VEditor program.

  5. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    International Nuclear Information System (INIS)

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-01-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  6. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions beg the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies' incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  7. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java under the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  8. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code-based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  9. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  10. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  11. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
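
    A bare-bones sketch of such a 1:2 analog map (an Archimedean bi-spiral with brute-force nearest-point decoding; the spiral parameter is an illustrative assumption, and the papers' optimized maps and power normalization are omitted) is:

```python
import numpy as np

# Sketch of a 1:2 Shannon-Kotel'nikov analog map: one source sample s
# is stretched onto an Archimedean bi-spiral, producing two channel
# symbols (x1, x2). Decoding searches for the nearest spiral point.
# DELTA trades weak-noise accuracy against anomalous errors; its value
# here is an illustrative assumption.

DELTA = 0.15                      # spiral tightness (assumed)

def encode(s):
    phi = s / DELTA               # arc position grows with the source value
    return np.array([DELTA * phi * np.cos(phi),
                     DELTA * phi * np.sin(phi)])

def decode(y, grid=np.linspace(-3, 3, 20001)):
    # brute-force ML decoding: nearest spiral point to the received pair
    pts = np.stack([encode(s) for s in grid])
    return grid[np.argmin(np.sum((pts - y) ** 2, axis=1))]

s = 1.2
y = encode(s) + 0.05 * np.random.default_rng(1).standard_normal(2)
print(decode(y))                  # close to 1.2 for mild channel noise
```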

  12. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  13. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples, carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among the sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).
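
    The sink-side rate feedback described above can be sketched compactly. The following is a minimal illustration under our own assumptions, not the authors' implementation: the correlation-to-rate mapping, function names, and parameters are all ours. The sink estimates the correlation among decoded cluster samples and maps it to the number of bits each member should use in the next round.

      import numpy as np

      def update_compression_rates(decoded_samples, max_rate_bits=8):
          """Hypothetical sink-side rate update for a D-DSC-style scheme.

          decoded_samples: 2-D array, one row per sensor in the cluster.
          Returns the bits per sample each member should send next round.
          """
          # Estimate the average pairwise correlation among the sensors.
          corr = np.corrcoef(decoded_samples)
          n = corr.shape[0]
          mean_corr = (corr.sum() - n) / (n * (n - 1))  # off-diagonal mean
          # Higher correlation -> more of each sample is predictable from
          # the side information, so fewer bits are needed (crude linear map).
          rate = int(round(max_rate_bits * (1.0 - max(0.0, mean_corr))))
          return max(1, rate)

      # Example: strongly correlated readings call for a low rate.
      rng = np.random.default_rng(0)
      common = rng.normal(size=100)
      samples = np.stack([common + 0.1 * rng.normal(size=100) for _ in range(5)])
      print(update_compression_rates(samples))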

  14. Challenges with secondary use of multi-source water-quality data in the United States

    Science.gov (United States)

    Sprague, Lori A.; Oelsner, Gretchen P.; Argue, Denise M.

    2017-01-01

    Combining water-quality data from multiple sources can help counterbalance diminishing resources for stream monitoring in the United States and lead to important regional and national insights that would not otherwise be possible. Individual monitoring organizations understand their own data very well, but issues can arise when their data are combined with data from other organizations that have used different methods for reporting the same common metadata elements. Such use of multi-source data is termed “secondary use”—the use of data beyond the original intent determined by the organization that collected the data. In this study, we surveyed more than 25 million nutrient records collected by 488 organizations in the United States since 1899 to identify major inconsistencies in metadata elements that limit the secondary use of multi-source data. Nearly 14.5 million of these records had missing or ambiguous information for one or more key metadata elements, including (in decreasing order of records affected) sample fraction, chemical form, parameter name, units of measurement, precise numerical value, and remark codes. As a result, metadata harmonization to make secondary use of these multi-source data will be time consuming, expensive, and inexact. Different data users may make different assumptions about the same ambiguous data, potentially resulting in different conclusions about important environmental issues. The value of these ambiguous data is estimated at $US12 billion, a substantial collective investment by water-resource organizations in the United States. By comparison, the value of unambiguous data is estimated at $US8.2 billion. The ambiguous data could be preserved for uses beyond the original intent by developing and implementing standardized metadata practices for future and legacy water-quality data throughout the United States.
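
    A first practical step toward the standardized metadata practices the authors recommend is automatically flagging records whose key metadata elements are missing or ambiguous. A minimal sketch in Python, with hypothetical field names (the surveyed databases use their own schemas):

      # Key metadata elements identified in the study, in decreasing
      # order of records affected.
      KEY_ELEMENTS = ["sample_fraction", "chemical_form", "parameter_name",
                      "units", "value", "remark_code"]

      def ambiguous_elements(record):
          """Return the key metadata elements that are missing or blank."""
          return [e for e in KEY_ELEMENTS
                  if record.get(e) in (None, "", "unknown")]

      records = [
          {"parameter_name": "nitrate", "units": "mg/L", "value": 0.42,
           "sample_fraction": "dissolved", "chemical_form": "as N",
           "remark_code": "none"},
          {"parameter_name": "nitrate", "units": "", "value": 1.3},
      ]
      for r in records:
          missing = ambiguous_elements(r)
          print("ok" if not missing else f"ambiguous: {missing}")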

  15. Contributions to the qualification of the Tripoli Monte Carlo code against critical experiments and to the study of neutronic interaction between fissile units

    International Nuclear Information System (INIS)

    Nouri, A.

    1994-01-01

    Criticality studies in the nuclear fuel cycle are based on the Monte Carlo method. These codes use multigroup cross sections, which can be verified against experimental configurations or against reference codes such as Tripoli 2. In the Tripoli 2 code, the nuclear data carry attached uncertainties, which calls for validation against critical experiments; this is one of the aims of this thesis. To calculate the keff of interacting fissile units we have used the multigroup Monte Carlo code Moret, which exhibits convergence problems. A new estimator of reaction rates permits a better approximation of the neutron exchange between units, and a new importance function has been tested. 2 annexes

  16. Stable Isotope Identification of Nitrogen Sources for United ...

    Science.gov (United States)

    We used natural abundance stable isotope data to evaluate nitrogen sources to U.S. west coast estuaries. We collected δ15N of macroalgae data and supplemented this with available data from the literature for estuaries from Mexico to Alaska. Stable isotope ratios of green macroalgae were compared to δ15N of dissolved inorganic nitrogen of oceanic and watershed end members. There was a latitudinal gradient in δ15N of macroalgae, with southern estuaries being 7 per mil heavier than northern estuaries. Gradients in isotope data were compared to nitrogen sources estimated by the USGS using the SPARROW model. In California estuaries, the elevation of isotope data appeared to be related to anthropogenic nitrogen sources. In Oregon systems, the nitrogen levels of streams flowing into the estuaries are related to forest cover, rather than to developed land classes. In addition, the δ15N of macroalgae suggested that the ocean and nitrogen-fixing trees in the watersheds were the dominant nitrogen sources. There was also a strong gradient in δ15N of macroalgae, with heavier sites located near the estuary mouth. In some Oregon estuaries, there was an elevation of δ15N above marine end members in the vicinity of wastewater treatment facility discharge locations, suggesting isotopes may be useful for distinguishing inputs along an estuarine gradient. Nutrients are the leading cause of water quality impairments in the United States, and as a result too

  17. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 mainframes as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability for that event is approximately 4.5 x 10^-6. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  18. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 in the frame of the Shared Cost Action Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). This work produced tools that allow both code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal hydraulic conditions; for the development of ESTER it was therefore important to investigate how thermal hydraulic code systems could be integrated with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal hydraulic code system ATHLET and ESTER. As a result of the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de]

  19. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)
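
    For readers unfamiliar with hyperbolic divergence cleaning, the essential mechanism can be caricatured on a flat 1-D toy problem: an auxiliary scalar field psi is coupled to the magnetic field so that divergence errors propagate away and are damped, rather than accumulating in place. The sketch below is only this flat-spacetime, Dedner-style caricature under our own parameter choices, not the GRMHD scheme implemented in GRHydro:

      import numpy as np

      def clean_divergence(Bx, dx, dt, ch=1.0, cp=0.3, steps=100):
          """Toy 1-D hyperbolic (Dedner-style) divergence cleaning:
              dt Bx  + dx psi     = 0
              dt psi + ch^2 dx Bx = -(ch^2/cp^2) psi
          Divergence errors are advected at speed ch and damped,
          instead of sitting in place."""
          psi = np.zeros_like(Bx)
          damp = np.exp(-dt * ch**2 / cp**2)  # exact integration of the sink term
          for _ in range(steps):
              Bx_new = Bx - dt * np.gradient(psi, dx)
              psi = (psi - dt * ch**2 * np.gradient(Bx, dx)) * damp
              Bx = Bx_new
          return Bx, psi

      x = np.linspace(-1.0, 1.0, 201)
      Bx = np.exp(-50 * x**2)             # a numerically generated "monopole" bump
      div0 = np.abs(np.gradient(Bx, x[1] - x[0])).max()
      Bx, psi = clean_divergence(Bx, dx=x[1] - x[0], dt=0.001)
      div1 = np.abs(np.gradient(Bx, x[1] - x[0])).max()
      print(div0, div1)                   # the maximum divergence error decreases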

  20. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99, and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty

  1. Mobile source pollution control in the United States and China

    International Nuclear Information System (INIS)

    Menz, Fredric C

    2002-01-01

    This paper reviews policies for the control of mobile source pollution and their potential application in China. The first section of the paper reviews the U.S. experience with mobile source pollution control since regulations were first established in the Clean Air Act of 1970. Highlights in the policy and trends in vehicle emissions over the 1970 to 2000 time period are discussed. The second section of the paper discusses the range of policy instruments that could be used to control vehicle pollution, ranging from traditional direct regulations to market-based instruments. Experiences with the use of economic incentives in the United States and elsewhere are also discussed. The third section of the paper discusses possible implications of the U.S. experience for controlling vehicle pollution in China. While market-based instruments might be particularly appropriate for use in several aspects of China's pollution control policies, important differences between the institutional structures in China and the United States suggest that they should be phased in gradually. The paper closes with concluding remarks. (author)

  2. Mobile source pollution control in the United States and China

    Energy Technology Data Exchange (ETDEWEB)

    Menz, Fredric C

    2002-07-01

    This paper reviews policies for the control of mobile source pollution and their potential application in China. The first section of the paper reviews the U.S. experience with mobile source pollution control since regulations were first established in the Clean Air Act of 1970. Highlights in the policy and trends in vehicle emissions over the 1970 to 2000 time period are discussed. The second section of the paper discusses the range of policy instruments that could be used to control vehicle pollution, ranging from traditional direct regulations to market-based instruments. Experiences with the use of economic incentives in the United States and elsewhere are also discussed. The third section of the paper discusses possible implications of the U.S. experience for controlling vehicle pollution in China. While market-based instruments might be particularly appropriate for use in several aspects of China's pollution control policies, important differences between the institutional structures in China and the United States suggest that they should be phased in gradually. The paper closes with concluding remarks. (author)

  3. Spread-out Bragg peak and monitor units calculation with the Monte Carlo Code MCNPX

    International Nuclear Information System (INIS)

    Herault, J.; Iborra, N.; Serrano, B.; Chauvel, P.

    2007-01-01

    The aim of this work was to study the dosimetric potential of the Monte Carlo code MCNPX applied to the protontherapy field. For a series of clinical configurations, a comparison between simulated and experimental data was carried out, using the proton beam line of the MEDICYC isochronous cyclotron installed in the Centre Antoine Lacassagne in Nice. The dosimetric quantities tested were depth-dose distributions, output factors, and monitor units. For each parameter, the simulation reproduced the experiment accurately, which attests to the quality of the choices made both in the geometrical description and in the physics parameters for beam definition. These encouraging results enable us today to consider a simplification of quality control measurements in the future. Monitor unit calculation is planned to be carried out with pre-established Monte Carlo simulation data. The measurement, which was until now our main patient dose calibration system, will be progressively replaced by computation based on the MCNPX code. This determination of monitor units will be controlled by an independent semi-empirical calculation.

  4. Calculation of the real states of Ignalina NPP Unit 1 and Unit 2 RBMK-1500 reactors in the verification process of QUABOX/CUBBOX code

    International Nuclear Information System (INIS)

    Bubelis, E.; Pabarcius, R.; Demcenko, M.

    2001-01-01

    Calculations of the main neutron-physical characteristics of the RBMK-1500 reactors of Ignalina NPP Unit 1 and Unit 2 were performed, taking real reactor core states as the basis for these calculations. Comparison of the calculation results obtained using the QUABOX/CUBBOX code with experimental data and with the calculation results obtained using the STEPAN code showed that all the main neutron-physical characteristics of the reactors of Unit 1 and Unit 2 of Ignalina NPP are within the safe deviation range of the analyzed parameters, and that the reactors of Ignalina NPP, during the process of reactor core composition change, are operated in a safe and stable manner. (author)

  5. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
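
    The mixed-integer structure described above (continuous transmit powers, discrete source and channel coding rates) can be handled in Particle Swarm Optimization by letting particles move in a continuous space and rounding the rate coordinates to the nearest admissible index when evaluating the objective. A generic sketch with a made-up cost function (the paper's actual objective is the received video distortion; all names and constants below are ours):

      import numpy as np

      rng = np.random.default_rng(1)
      RATES = np.array([0.25, 0.5, 0.75])     # admissible coding-rate values

      def cost(power, rate_idx):
          """Stand-in objective; the real one models received video quality."""
          rate = RATES[rate_idx]
          return np.sum(1.0 / (power * (1.2 - rate))) + 0.05 * np.sum(power)

      def pso(n_nodes=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
          dim = 2 * n_nodes                   # one power + one rate index per node
          lo = np.array([0.1] * n_nodes + [0] * n_nodes)
          hi = np.array([5.0] * n_nodes + [len(RATES) - 1] * n_nodes)
          x = rng.uniform(lo, hi, (n_particles, dim))
          v = np.zeros_like(x)
          def f(p):
              # Round the discrete coordinates only when evaluating.
              idx = np.clip(np.rint(p[n_nodes:]), 0, len(RATES) - 1).astype(int)
              return cost(p[:n_nodes], idx)
          pbest, pval = x.copy(), np.array([f(p) for p in x])
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              val = np.array([f(p) for p in x])
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()

      best, best_cost = pso()
      print(best_cost)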

  6. Integrating industry nuclear codes and standards into United States Department of Energy facilities

    Energy Technology Data Exchange (ETDEWEB)

    Jacox, J.

    1995-02-01

    Recently the United States Department of Energy (DOE) has mandated that facilities under its jurisdiction use various industry Codes and Standards developed for civilian power reactors that operate under U.S. Nuclear Regulatory Commission license. While this is a major step forward in putting all our nuclear facilities under common technical standards, there are always problems associated with implementing such advances. This paper will discuss some of the advantages and problems experienced to date. These include the universal challenge of educating new users of any technical document, errors repeated from the NRC-licensed facilities over the years, and some unique problems specific to DOE facilities.

  7. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected the cinema. The indexical relation and realistic effect with the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro’s consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  8. Monte Carlo Simulation of stepping source in afterloading intracavitary brachytherapy for GZP6 unit

    International Nuclear Information System (INIS)

    Toossi, M.T.B.; Abdollahi, M.; Ghorbani, M.

    2010-01-01

    Full text: A stepping source in brachytherapy systems is used to treat a target lesion longer than the effective treatment length of the source. Dose calculation accuracy plays a vital role in the outcome of brachytherapy treatment. In this study, the stepping source (channel 6) of the GZP6 brachytherapy unit was modeled by Monte Carlo simulation combined with a matrix shift method. The stepping source of GZP6 was simulated with the Monte Carlo MCNPX code. A mesh tally (type 1) was employed for absorbed dose calculation in a cylindrical water phantom. 5 x 10^8 photon histories were scored, and a 0.2% statistical uncertainty was obtained in the Monte Carlo calculations. Dose distributions were obtained by our matrix shift method for esophageal cancer tumor lengths of 8 and 10 cm. Isodose curves produced by simulation and by the TPS were superimposed to estimate the differences. Results: Comparison of Monte Carlo and TPS dose distributions shows that in the longitudinal direction (the direction of source movement) the two are comparable. In the transverse direction, dose differences of 7 and 5% were observed for esophageal tumor lengths of 8 and 10 cm, respectively. Conclusions: Although the results show that the maximum difference between Monte Carlo and TPS calculations is about 7%, considering that the certified activity is given with ±10% uncertainty, an error of the order of 20% for the Monte Carlo calculation would be reasonable. It can be suggested that the accuracy of the dose distribution produced by the TPS is acceptable for clinical applications. (author)

  9. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  10. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  11. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
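
    The three release options map onto simple rate expressions. A schematic rendering under stated assumptions (a constant leach rate for the first-order option, instantaneous sorption equilibrium for the desorption option, and a fixed release duration for the uniform option; the function names and parameters are ours, not the code's):

      import numpy as np

      def first_order_release(inventory0, leach_rate, t):
          """Option 1: release rate proportional to the remaining inventory,
          R(t) = mu * I0 * exp(-mu * t), with user-specified leach rate mu."""
          return leach_rate * inventory0 * np.exp(-leach_rate * t)

      def equilibrium_desorption_conc(inventory, kd, water_vol, solid_mass):
          """Option 2: aqueous concentration set by the distribution
          coefficient Kd, C = M / (V_w + Kd * M_s)."""
          return inventory / (water_vol + kd * solid_mass)

      def uniform_release(inventory0, duration, t):
          """Option 3: a constant fraction of the initially contaminated
          material released per unit time until the duration is exhausted."""
          return np.where(t <= duration, inventory0 / duration, 0.0)

      t = np.linspace(0, 100, 5)  # years
      print(first_order_release(1.0, 0.05, t))
      print(equilibrium_desorption_conc(1.0, kd=10.0, water_vol=2.0, solid_mass=50.0))
      print(uniform_release(1.0, duration=40.0, t=t))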

  12. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
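
    In symbols, the analogy pairs the rate-distortion function with a free energy. A schematic statement of the correspondence in LaTeX, using our own notation rather than the paper's:

      % Rate-distortion function of a memoryless source X under a
      % distortion measure d, with reproduction Y:
      %   R(D) = min over P(Y|X) with E[d(X,Y)] <= D of I(X;Y).
      % The analogy described above reads, schematically,
      \begin{align}
        \Delta F(D) \;\propto\; R(D),
        \qquad
        f(D) \;\propto\; \frac{\mathrm{d}R(D)}{\mathrm{d}D},
      \end{align}
      % e.g. for a Gaussian source with squared-error distortion,
      % R(D) = (1/2)\log(\sigma^2/D), so the "force" scales as -1/(2D).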

  13. Evolvability Is an Evolved Ability: The Coding Concept as the Arch-Unit of Natural Selection.

    Science.gov (United States)

    Janković, Srdja; Ćirković, Milan M

    2016-03-01

    Physical processes that characterize living matter are qualitatively distinct in that they involve encoding and transfer of specific types of information. Such information plays an active part in the control of events that are ultimately linked to the capacity of the system to persist and multiply. This algorithmicity of life is a key prerequisite for its Darwinian evolution, driven by natural selection acting upon stochastically arising variations of the encoded information. The concept of evolvability attempts to define the total capacity of a system to evolve new encoded traits under appropriate conditions, i.e., the accessible section of total morphological space. Since this is dependent on previously evolved regulatory networks that govern information flow in the system, evolvability itself may be regarded as an evolved ability. The way information is physically written, read and modified in living cells (the "coding concept") has not changed substantially during the whole history of the Earth's biosphere. This biosphere, be it alone or one of many, is, accordingly, itself a product of natural selection, since the overall evolvability conferred by its coding concept (nucleic acids as information carriers with the "rulebook of meanings" provided by codons, as well as all the subsystems that regulate various conditional information-reading modes) certainly played a key role in enabling this biosphere to survive up to the present, through alterations of planetary conditions, including at least five catastrophic events linked to major mass extinctions. We submit that, whatever the actual prebiotic physical and chemical processes may have been on our home planet, or may, in principle, occur at some time and place in the Universe, a particular coding concept, with its respective potential to give rise to a biosphere, or class of biospheres, of a certain evolvability, may itself be regarded as a unit (indeed the arch-unit) of natural selection.

  14. Modification of an x-ray diffraction unit to comply with the NH&MRC code of practice

    International Nuclear Information System (INIS)

    Ibbetson, V.J.; Young, J.G.

    2004-01-01

    X-ray analysis units are commonly used in research and industrial laboratories throughout Australia. Despite a well-established Code of Practice and working protocols for the safe use of such units, there are all too many stories of users by-passing safety features, significantly increasing the risk of accidental exposure to the primary X-ray beam. Since the output of such units may be as high as 300 Gy s^-1, such accidental exposures could have very serious consequences. Australian Radiation Services Pty Ltd undertook a compliance audit of an X-ray diffraction unit with respect to the NH&MRC Code of Practice for protection against ionising radiation emitted from X-ray analysis equipment. This paper discusses the findings from the initial inspection and the modifications recommended for the XRD unit to ensure compliance with the Code, without unnecessarily restricting its use. Copyright (2004) Australasian Radiation Protection Society Inc

  15. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are obtained from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references

  16. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  17. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
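
    The core idea, lossy-coding the operator itself so that matrix-vector products become cheap, can be illustrated with an orthonormal transform plus coefficient thresholding. This is only the idea in miniature: the paper designs its transforms and quantization specifically for the stray-light kernel, whereas the DCT, threshold, and smooth test kernel below are our stand-ins.

      import numpy as np
      from scipy.fft import dct, idct
      from scipy.sparse import csr_matrix

      def code_operator(A, keep=0.02):
          """Lossy-code a dense operator A as a sparse set of 2-D DCT
          coefficients: C = D A D^T with all but the largest ~2% zeroed."""
          C = dct(dct(A, axis=0, norm="ortho"), axis=1, norm="ortho")
          thresh = np.quantile(np.abs(C), 1.0 - keep)
          C[np.abs(C) < thresh] = 0.0
          return csr_matrix(C)

      def apply_coded(C, x):
          """y = A x computed as D^T (C (D x)): one sparse product plus two
          fast transforms instead of a dense n^2 multiply."""
          return idct(C @ dct(x, norm="ortho"), norm="ortho")

      n = 256
      i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
      A = np.exp(-((i - j) / 25.0) ** 2)    # smooth, dense stray-light-like kernel
      x = np.random.default_rng(2).normal(size=n)
      C = code_operator(A)
      err = np.linalg.norm(apply_coded(C, x) - A @ x) / np.linalg.norm(A @ x)
      print(err)  # small relative error with ~2% of the coefficients retained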

  18. DUSTMS-D: DISPOSAL UNIT SOURCE TERM - MULTIPLE SPECIES - DISTRIBUTED FAILURE DATA INPUT GUIDE.

    Energy Technology Data Exchange (ETDEWEB)

    SULLIVAN, T.M.

    2006-01-01

    Performance assessment of a low-level waste (LLW) disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). Many of these physical processes are influenced by the design of the disposal facility (e.g., how the engineered barriers control infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This has been done and the resulting models have been incorporated into the computer code DUST-MS (Disposal Unit Source Term-Multiple Species). The DUST-MS computer code is designed to model water flow, container degradation, release of contaminants from the wasteform to the contacting solution and transport through the subsurface media. Water flow through the facility over time is modeled using tabular input. Container degradation models include three types of failure rates: (a) instantaneous (all containers in a control volume fail at once), (b) uniformly distributed failures (containers fail at a linear rate between a specified starting and ending time), and (c) Gaussian failure rates (containers fail at a rate determined by a mean failure time, standard deviation and Gaussian distribution). Wasteform release models include four release mechanisms: (a) rinse with partitioning (inventory is released instantly upon container failure subject to equilibrium partitioning (sorption) with
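
    The three container-failure options translate directly into cumulative failure fractions over time. A sketch under our own naming conventions (the code's actual input format differs):

      import math

      def failed_fraction(t, mode, start=None, end=None, mean=None, std=None):
          """Cumulative fraction of containers failed by time t for the three
          DUST-MS-style options: instantaneous, uniform, and Gaussian."""
          if mode == "instantaneous":       # all fail at once at `start`
              return 1.0 if t >= start else 0.0
          if mode == "uniform":             # linear rate between start and end
              if t <= start:
                  return 0.0
              return min(1.0, (t - start) / (end - start))
          if mode == "gaussian":            # normal CDF with given mean/std
              return 0.5 * (1.0 + math.erf((t - mean) / (std * math.sqrt(2.0))))
          raise ValueError(mode)

      for t in (0, 50, 100, 200):
          print(t,
                failed_fraction(t, "instantaneous", start=100),
                failed_fraction(t, "uniform", start=50, end=150),
                round(failed_fraction(t, "gaussian", mean=100, std=25), 3))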

  19. Comprehensive trends assessment of nitrogen sources and loads to estuaries of the coterminous United States

    Science.gov (United States)

    Sources of nitrogen and phosphorus to estuaries and estuarine watersheds of the coterminous United States have been compiled from a variety of publicly available data sources (1985 – 2015). Atmospheric loading was obtained from two sources. Modelled and interpolated meas...

  20. 50 CFR Table 1 to Subpart H of... - Pacific Salmon EFH Identified by USGS Hydrologic Unit Code (HUC)

    Science.gov (United States)

    2010-10-01

    Table 1 to Subpart H of Part 660 identifies Pacific Salmon EFH by USGS Hydrologic Unit Code (HUC); its columns are USGS HUC, State(s), Hydrologic Unit name, species, and boundary. Example entries: HUC 18010206 (CA/OR, Upper Klamath River, Chinook and coho salmon, Iron Gate Dam); HUC 18010207 (CA, Shasta River, Chinook...)

  1. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources [fr]

  2. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design; hand calculations are needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components

  3. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity; most rely on the lexical token sequence extracted from source code. From our point of view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.
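
    Local alignment of low-level instruction sequences can be sketched with a Smith-Waterman-style dynamic program. The adaptive variant used in the paper adjusts its scoring; the fixed-score skeleton below only conveys the mechanism (the scores and the CIL-like token streams are our assumptions):

      def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
          """Smith-Waterman-style local alignment over instruction opcodes.
          Returns the best local-similarity score between sequences a and b."""
          rows, cols = len(a) + 1, len(b) + 1
          prev = [0] * cols
          best = 0
          for i in range(1, rows):
              curr = [0] * cols
              for j in range(1, cols):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  curr[j] = max(0,                 # local: never go negative
                                prev[j - 1] + s,   # align a[i-1] with b[j-1]
                                prev[j] + gap,     # gap in b
                                curr[j - 1] + gap) # gap in a
                  best = max(best, curr[j])
              prev = curr
          return best

      # Two CIL-like opcode streams; renamed variables in a plagiarized
      # copy disappear at this level of representation.
      orig = ["ldarg.0", "ldarg.1", "add", "stloc.0", "ldloc.0", "ret"]
      copy = ["ldarg.1", "ldarg.0", "add", "stloc.0", "ldloc.0", "ret"]
      print(local_alignment_score(orig, copy))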

  4. An Effective Transform Unit Size Decision Method for High Efficiency Video Coding

    Directory of Open Access Journals (Sweden)

    Chou-Chen Wang

    2014-01-01

    Full Text Available High efficiency video coding (HEVC) is the latest video coding standard. HEVC can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) is adopted to prune the residual quadtree (RQT) at an early stage based on the number of nonzero DCT coefficients (called NNZ-ETDA) to accelerate the encoding process. However, the NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, in order to further improve the performance of the NNZ-ETDA, we propose an adaptive RQT-depth decision for the NNZ-ETDA (called ARD-NNZ-ETDA) by exploiting the characteristics of the high temporal-spatial correlation that exists in natural video sequences. Simulation results show that the proposed method can achieve a time improving ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1), with insignificant loss of image quality. Compared with the NNZ-ETDA, the proposed method can further achieve an average TIR of about 8.29%~17.92%.
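
    The NNZ test itself is simple: transform and quantize a residual block, count the nonzero coefficients, and stop splitting the TU early when the count falls below a threshold. A toy sketch using a DCT as a stand-in for the HEVC core transform (the threshold and quantization step here are illustrative, not the paper's values):

      import numpy as np
      from scipy.fft import dctn

      def nnz_early_tu_decision(residual, qstep=8.0, nnz_thresh=4):
          """Return (stop_splitting, nnz): stop RQT splitting early when the
          number of nonzero quantized transform coefficients is small."""
          coeffs = dctn(residual, norm="ortho")  # stand-in for the core transform
          quantized = np.rint(coeffs / qstep)
          nnz = int(np.count_nonzero(quantized))
          return nnz <= nnz_thresh, nnz

      rng = np.random.default_rng(3)
      flat_block = 0.5 * rng.normal(size=(8, 8))   # low-activity residual
      busy_block = 20.0 * rng.normal(size=(8, 8))  # high-activity residual
      print(nnz_early_tu_decision(flat_block))     # (True, small nnz)
      print(nnz_early_tu_decision(busy_block))     # (False, large nnz)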

  5. Increasing Trend of Fatal Falls in Older Adults in the United States, 1992 to 2005: Coding Practice or Reporting Quality?

    Science.gov (United States)

    Kharrazi, Rebekah J; Nash, Denis; Mielenz, Thelma J

    2015-09-01

    To investigate whether changes in death certificate coding and reporting practices explain part or all of the recent increase in the rate of fatal falls in adults aged 65 and older in the United States. Trends in coding and reporting practices of fatal falls were evaluated under mortality coding schemes for International Classification of Diseases (ICD), Ninth Revision (1992-1998) and Tenth Revision (1999-2005). United States, 1992 to 2005. Individuals aged 65 and older with falls listed as the underlying cause of death (UCD) on their death certificates. The primary outcome was annual fatal falls rates per 100,000 U.S. residents aged 65 and older. Coding practice was assessed through analysis of trends in rates of specific UCD fall ICD e-codes over time. Reporting quality was assessed by examining changes in the location on the death certificate where fall e-codes were reported, in particular, the percentage of fall e-codes recorded in the proper location on the death certificate. Fatal falls rates increased over both time periods: 1992 to 1998 and 1999 to 2005. A single falls e-code was responsible for the increasing trend of fatal falls overall from 1992 to 1998 (E888, other and unspecified fall) and from 1999 to 2005 (W18, other falls on the same level), whereas trends for other falls e-codes remained stable. Reporting quality improved steadily throughout the study period. Better reporting quality, not coding practices, contributed to the increasing rate of fatal falls in older adults in the United States from 1992 to 2005. © 2015, Copyright the Authors Journal compilation © 2015, The American Geriatrics Society.

  6. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons made between expectations for others and their own behaviors are interesting in their complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  7. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code to RIES showed a significant lack of security-awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and the aspects need to be examined much sooner than right before an election.

  8. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
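
    The side-information step, predicting the current band from previously decoded bands with a multilinear regression, is easy to sketch; the DSC machinery itself (syndrome coding of the quantized residual) is summarized here only by how much smaller the prediction residual is than the band. A minimal sketch under our own assumptions, not the paper's codec:

      import numpy as np

      def side_information(prev_bands, current_band):
          """Least-squares multilinear prediction of one spectral band from
          the previous bands; the decoder would use this as side information."""
          X = np.column_stack([b.ravel() for b in prev_bands])
          X = np.column_stack([X, np.ones(X.shape[0])])  # affine term
          coef, *_ = np.linalg.lstsq(X, current_band.ravel(), rcond=None)
          return (X @ coef).reshape(current_band.shape)

      rng = np.random.default_rng(4)
      base = rng.normal(size=(32, 32))
      # Neighboring bands of a hyperspectral cube are highly correlated.
      bands = [base + 0.05 * rng.normal(size=(32, 32)) for _ in range(4)]
      pred = side_information(bands[:3], bands[3])
      residual = bands[3] - pred
      print(np.std(bands[3]), np.std(residual))  # residual needs far fewer bits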

  9. Evaluation of SPACE code for simulation of inadvertent opening of spray valve in Shin Kori unit 1

    International Nuclear Information System (INIS)

    Kim, Seyun; Youn, Bumsoo

    2013-01-01

    SPACE code is expected to be applied to the safety analysis for LOCA (Loss of Coolant Accident) and Non-LOCA scenarios. The SPACE code solves two-fluid, three-field governing equations and is programmed in the C++ language using object-oriented concepts. To evaluate its capability to analyze transient phenomena in an actual nuclear power plant, an inadvertent opening of a spray valve during the startup test phase of Shin Kori unit 1 was simulated with the SPACE code.

  10. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  11. Codes of Journalism Ethics in Russia and the United States: Traditions and the Current Practice of Application

    OpenAIRE

    Bykov, Aleksei Yuryevich; Georgieva, Elena Savova; Danilova, Yuliya Sokratovna; Baychik, Anna Vitalyevna

    2016-01-01

    The purpose of the article is to identify the main categories stated in the codes of journalism ethics in Russia and the United States, as well as the principles of their practical application. As a part of the comparative analysis of the codes of the journalism organizations of the two countries, we identify factors affecting the adoption and contents of the documents and the approaches to the regulation of different areas of professional activity which were reflected in these documents. The...

  12. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, where prior works present two problems: first, recommender systems based on these works cannot learn from the collaboration of programmers; second, assessments carried out on these systems show low precision and recall, and in some systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method which solves these problems.

  13. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. In addition, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental measurements, is a new experience for Iranian researchers that establishes confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.
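
    The foil-activation arithmetic behind such a measurement is compact: the 197Au(n,gamma)198Au activity ties the measured activity back to the fluence rate through the activation equation. A schematic calculation with illustrative numbers (the cross section and half-life are standard values; the foil mass, irradiation time, and measured activity are made up, not this experiment's data):

      import math

      # Illustrative 197Au(n,gamma)198Au activation arithmetic.
      sigma = 98.65e-24           # thermal capture cross section, cm^2 (~98.65 b)
      half_life = 2.6947 * 86400  # 198Au half-life, s
      lam = math.log(2) / half_life
      foil_mass = 0.1             # g (made-up)
      N = foil_mass / 196.97 * 6.022e23  # number of 197Au atoms in the foil
      t_irr = 24 * 3600           # irradiation time, s (made-up)
      A_measured = 50.0           # measured 198Au activity at end of irradiation, Bq

      # A = phi * sigma * N * (1 - exp(-lambda * t_irr))  =>  solve for phi.
      phi = A_measured / (sigma * N * (1.0 - math.exp(-lam * t_irr)))
      print(f"thermal fluence rate ~ {phi:.3e} n/cm^2/s")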

  14. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected life of the reactor core) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for fuel burnup calculations in the MNSR because it is newer, has a larger isotope library, and is more accurate. (author)

  15. Sources of ultrafine particles in the Eastern United States

    Science.gov (United States)

    Posner, Laura N.; Pandis, Spyros N.

    2015-06-01

    Source contributions to ultrafine particle number concentrations for a summertime period in the Eastern U.S. are investigated using the chemical transport model PMCAMx-UF. New source-resolved number emissions inventories are developed for biomass burning, dust, gasoline automobiles, industrial sources, non-road and on-road diesel. According to the inventory for this summertime period in the Eastern U.S., gasoline automobiles are responsible for 40% of the ultrafine particle number emissions, followed by industrial sources (33%), non-road diesel (16%), on-road diesel (10%), and 1% from biomass burning and dust. With these emissions as input, the chemical transport model PMCAMx-UF reproduces observed ultrafine particle number concentrations (N3-100) in Pittsburgh with an error of 12%. For this summertime period in the Eastern U.S., nucleation is predicted to be the source of more than 90% of the total particle number concentrations. The source contributions to primary particle number concentrations are on average similar to those of their source emissions contributions: gasoline is predicted to contribute 36% of the total particle number concentrations, followed by industrial sources (31%), non-road diesel (18%), on-road diesel (10%), biomass burning (1%), and long-range transport (4%). For this summertime period in Pittsburgh, number source apportionment predictions for particles larger than 3 nm in diameter (traffic 65%, other combustion sources 35%) are consistent with measurement-based source apportionment (traffic 60%, combustion sources 40%).

  16. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle is time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes because of delays in coding. This paper presents a metamodel for software implementation that gives rise to a development tool for automatic generation of source code, making the development pattern transparent to the programmer and significantly reducing the time spent coding the artifacts that make up the software.

  17. Lost opportunities: Modeling commercial building energy code adoption in the United States

    International Nuclear Information System (INIS)

    Nelson, Hal T.

    2012-01-01

    This paper models the adoption of commercial building energy codes in the US between 1977 and 2006. Energy code adoption typically results in an increase in aggregate social welfare by cost-effectively reducing energy expenditures. Using a Cox proportional hazards model, I test whether relative state funding, a new, objective, multivariate regression-derived measure of government capacity, as well as a vector of control variables commonly used in comparative state research, predict commercial building energy code adoption. The research shows little political influence over historical commercial building energy code adoption in the sample. Colder climates and higher electricity prices also do not predict more frequent code adoptions. I do find evidence that high-government-capacity states are 60 percent more likely than low-capacity states to adopt commercial building energy codes in the following year. Wealthier states are also more likely to adopt commercial codes. Policy recommendations to increase building code adoption include increasing access to low-cost capital for the private sector and providing noncompetitive block grants to the states from the federal government. - Highlights: ► Models the adoption of commercial building energy codes from 1977 to 2006 in the US. ► Little political influence over historical building energy code adoption. ► High-capacity states are over 60 percent more likely than low-capacity states to adopt codes. ► Wealthier states are more likely to adopt commercial codes. ► Access to capital and technical assistance is critical to increase code adoption.
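
    As a rough illustration of the survival-analysis setup described above, the following minimal sketch fits a Cox proportional hazards model to a toy state-level panel; the column names and data values are illustrative inventions, not the study's dataset:

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical data: one row per state, with time-to-adoption in years,
        # an event flag (1 = adopted a commercial energy code), and covariates.
        df = pd.DataFrame({
            "years_to_adoption": [3, 12, 7, 30, 5, 18, 9, 22],
            "adopted":           [1, 1, 1, 0, 1, 0, 1, 1],  # 0 = censored
            "gov_capacity":      [1.2, -0.3, 0.8, -1.1, 0.5, -0.7, 0.9, -0.2],
            "income_per_capita": [48.0, 39.5, 44.2, 36.8, 51.3, 38.1, 46.7, 41.0],
        })

        cph = CoxPHFitter()
        cph.fit(df, duration_col="years_to_adoption", event_col="adopted")

        # exp(coef) > 1 means the covariate raises the adoption hazard;
        # e.g. a hazard ratio of 1.6 would match "60 percent more likely".
        cph.print_summary()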

  18. The needs for brachytherapy source calibrations in the United States

    International Nuclear Information System (INIS)

    Coursey, B.M.; Goodman, L.J.; Hoppes, D.D.; Loevinger, R.; McLaughlin, W.L.; Soares, C.G.; Weaver, J.T.

    1992-01-01

    Brachytherapy sources of beta and gamma radiation ('brachy' is from the Greek, meaning 'near') have a long history of use in interstitial, intracavitary, intraluminal, and ocular radiation therapy. In the past, the US national standards for these sources were often specified in terms of activity or milligram-radium equivalent. With the introduction of new radionuclide sources to replace radium, source strength calibrations are now expressed as air kerma rate at a meter. In this paper, we review the NIST standards for brachytherapy sources, list some of the common radionuclides and source encapsulations in use in the US radiology community, and describe the latest NIST work, in collaboration with several US medical institutions, on a method of two- and three-dimensional dose mapping of brachytherapy sources using radiochromic films. (orig.)

  19. Commercial and Industrial Solid Waste Incineration Units (CISWI): New Source Performance Standards (NSPS) and Emission Guidelines (EG) for Existing Sources

    Science.gov (United States)

    Learn about the New Source Performance Standards (NSPS) for commercial and industrial solid waste incineration (CISWI) units, including emission guidelines and compliance times for the rule. Read the rule history and summary, and find supporting documents.

  20. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in source term estimations by a large computer code, such as MELCOR or MAAP, is an essential part of current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique to the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as the principal tool for an overall uncertainty analysis in source term quantifications, with the LHS used in the calculation of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions with nonzero skewness.
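
    As a rough illustration of the Latin hypercube sampling step described above (a minimal sketch; the parameter names and ranges are illustrative, not those of the MAAP study):

        import numpy as np
        from scipy.stats import qmc

        # Three uncertain severe-accident input parameters (invented names/ranges).
        lower = np.array([0.1, 300.0, 1e-4])
        upper = np.array([0.9, 900.0, 1e-2])

        sampler = qmc.LatinHypercube(d=3, seed=42)
        unit_samples = sampler.random(n=100)        # 100 stratified points in [0, 1)^3
        inputs = qmc.scale(unit_samples, lower, upper)

        # Each row of `inputs` would drive one run of the severe-accident code;
        # the resulting release fractions form the output sample for the
        # regression (SRC/SRRC) screening and the empirical cdf.
        print(inputs[:5])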

  1. Genetic coding and united-hypercomplex systems in the models of algebraic biology.

    Science.gov (United States)

    Petoukhov, Sergey V

    2017-08-01

    Structured alphabets of DNA and RNA in their matrix form of representation are connected with Walsh functions and a new type of system of multidimensional numbers. This type generalizes systems of complex numbers and hypercomplex numbers, which serve as the basis of mathematical natural sciences and many technologies. The new systems of multi-dimensional numbers have interesting mathematical properties and are called, in the general case, "systems of united-hypercomplex numbers" (or briefly "U-hypercomplex numbers"). They can be widely used in models of multi-parametrical systems in the fields of algebraic biology, artificial life, devices of biologically inspired artificial intelligence, etc. In particular, an application of U-hypercomplex numbers reveals hidden properties of genetic alphabets under cyclic permutations in their doublets and triplets. Special attention is devoted to the author's hypothesis of multiple languages in DNA sequences, related to an ensemble of U-numerical sub-alphabets. This genetic multilingualism is considered an important factor providing the noise-immunity properties of multi-channel genetic coding. Our results attest to the conformity of the algebraic properties of the U-numerical systems with the phenomenological properties of the DNA alphabets and with the complementary structure of the DNA double helix. It seems that in the modeling field of algebraic biology, the genetic-informational organization of living bodies can be considered as a set of united-hypercomplex numbers, in some association with the famous slogan of Pythagoras, "the numbers rule the world". Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Qualification of McCARD/MASTER Code System for Yonggwang Unit 4

    International Nuclear Information System (INIS)

    Park, Ho Jin; Shim, Hyung Jin; Joo, Han Gyu; Kim, Chang Hyo

    2011-01-01

    Recently, we developed a new two-step procedure based on Monte Carlo (MC) methods. In this procedure, one generates the few-group constants, including the few-group diffusion constants, by the MC method augmented by the critical spectrum, which is provided by the solution to the homogeneous zero-dimensional B1 equation. In order to examine the quality of the few-group constants generated by the MC method, we combined MASTER with McCARD to form the McCARD/MASTER code system for two-step core neutronics calculations. In fictitious PWR system problems, the core design parameters calculated by the two-step McCARD/MASTER analysis agree well with those from direct MC calculations. In this paper, a neutronics design analysis for the initial core of Yonggwang Nuclear Unit 4 (YGN4) is conducted using the McCARD/MASTER two-step procedure to examine the quality of the two-group constants from McCARD for a real PWR core problem. The nuclear design report and measured data are chosen as the reference solutions for comparison.

  3. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations, for each source type the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher-energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  4. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
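
    For reference, the quantities compared in the two records above enter the dose-rate calculation through the standard AAPM TG-43 formalism (line-source form):

        \dot{D}(r,\theta) = S_K \, \Lambda \, \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)} \, g_L(r) \, F(r,\theta), \qquad (r_0,\theta_0) = (1\ \text{cm},\, 90^\circ)

    where S_K is the air-kerma strength, Λ the dose rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function.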

  5. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori Nuclear Power Plant (NPP) Unit 1 and the increasing number of aged NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor NPP. In addition, to check the applicability of the CRUDTRAN code to a Canada Deuterium Uranium (CANDU) reactor NPP, the code was assessed using domestic onsite data. With the assessment results it was possible to predict trends over the operating cycles, and the values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposure of operators in high-radiation areas and to support planning of decommissioning processes for the primary system.

  6. Instruction in Specialized Braille Codes, Abacus, and Tactile Graphics at Universities in the United States and Canada

    Science.gov (United States)

    Rosenblum, L. Penny; Smith, Derrick

    2012-01-01

    Introduction: This study gathered data on methods and materials that are used to teach the Nemeth braille code, computer braille, foreign-language braille, and music braille in 26 university programs in the United States and Canada that prepare teachers of students with visual impairments. Information about instruction in the abacus and the…

  7. Fabrication of californium-252 sources in the United Kingdom

    International Nuclear Information System (INIS)

    Ainsworth, A.; Brady, M.W.; Thornett, W.H.

    1975-01-01

    The advent of californium-252 in weighable quantities and at a reasonable price has caused some rethinking among neutron source suppliers. To explore this market, the Radiochemical Center Ltd. has purchased 2 mg of californium-252 and subdivided it into a wide range of sources. To take advantage of its high specific neutron emission, a small double-welded stainless steel capsule, 7.8 mm diameter x 10 mm high, was chosen for stock sources; this entailed the use of a specially developed microdispensing technique. The apparatus and procedure for subdividing milligram amounts of californium-252 are described, and some details of our experience in processing these one-milligram shipments are given. One hundred sources with activities from 200 micrograms to 0.01 microgram have been produced, with small losses. Measurements of neutron spectra, gamma spectra, and dose rates from encapsulated sources have confirmed published data. Though it is early days, little industrial interest in californium-252 sources has been detected; most of the sources have so far been required for research into activation analysis, and two examples of this are given. (U.S.)

  8. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes, generated by spreading coherent data pulses in time, can result from multiple reflections in the interferometers that superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chips when the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  10. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, the readability of source code is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  11. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  12. Potential for unconventional energy sources for the United Kingdom

    Energy Technology Data Exchange (ETDEWEB)

    Leighton, L H; Wright, J K; Syrett, J J

    1977-01-01

    The unconventional sources considered are solar energy, wind power, wave and tidal power, and geothermal heat. Their potential contribution to energy supply in the UK is being assessed as part of a wider exercise aimed at formulating a national energy R and D strategy sufficiently robust to be valid for a wide range of possible future conditions. For each of the sources considered, the present state of knowledge of the magnitude of the potential resource base is outlined, and the inherent characteristics of each are discussed in terms of environmental impact and of estimated cost relative to conventional technology. With respect to the latter, attention is drawn to the inherent variability of most of the sources, which imposes upon them a cost penalty for back-up plant and/or large-scale storage if firm power is to be assured. The progress that has been made in drawing up, for each of the sources, a national R and D program compatible with the assessment of development potential is outlined, and a tentative estimate is made of the maximum credible contribution the sources could make to energy supply in the UK by the end of the century. The concluding paragraphs deal with the prospects for the next century and indicate that the long-term uncertainties on energy supply justify a determined effort to convert the most promising of the unconventional sources into the well-researched technological options that may be needed.

  13. Dosimetric comparison between the microSelectron HDR 192Ir v2 source and the BEBIG 60Co source for HDR brachytherapy using the EGSnrc Monte Carlo transport code

    International Nuclear Information System (INIS)

    Anwarul Islam, M.; Akramuzzaman, M.M.; Zakaria, G.A.

    2012-01-01

    Manufacturing of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources are flexible for smaller-diameter applicators and are also suitable for interstitial implants. Miniaturized 60Co HDR sources are now available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir, and high dose rate brachytherapy sources with longer half-lives are a pragmatic solution for developing countries from an economic point of view. This study aims to compare the TG-43U1 dosimetric parameters of the new BEBIG 60Co HDR and the new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using the EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and the new BEBIG 60Co HDR source. Air-kerma strength per unit source activity, calculated in dry air, is 9.698x10^-8 U Bq^-1 (±0.55%) and 3.039x10^-7 U Bq^-1 (±0.41%) for the two sources, respectively. The calculated dose rate constants per unit air-kerma strength in water are 1.116 cGy h^-1 U^-1 (±0.12%) and 1.097 cGy h^-1 U^-1 (±0.12%), respectively. The values of the radial dose function for distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the 192Ir source. The anisotropy values increase sharply towards the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than for the other source. Tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation is noted at 5 cm radial distance in this regard when comparing the two sources, except for lung tissue. The true dose rates are calculated considering both photon and electron transport using...

  14. Simulation analysis on accident at Fukushima Daiichi Nuclear Power Plant Unit 2 by SAMPSON code

    International Nuclear Information System (INIS)

    Takahashi, Atsuo; Pellegrini, Marco; Mizouchi, Hideo; Suzuki, Hiroaki; Naitoh, Masanori

    2015-01-01

    The accident at the Fukushima Daiichi Nuclear Power Plant Unit 2 has been investigated with the severe accident analysis code SAMPSON, using more realistic boundary conditions and newly introduced models. In Unit 2, the Reactor Core Isolation Cooling system (RCIC) is thought to have worked for an unexpectedly long time (about 70 hours) without batteries, apparently owing to a balance between the water injected by the RCIC pump and the mixture of steam and water supplied to the RCIC turbine. To confirm the RCIC working condition and reproduce the measured plant properties, such as pressure and water level in the reactor pressure vessel (RPV), we introduced a two-phase turbine-driven pump model into SAMPSON. In this model, the mass flow rate of water injected by the RCIC was calculated from the mass flow rate of steam in the extracted two-phase flow, the steam generated from flashing of water in the extracted two-phase flow, and the turbine efficiency degradation caused by the mixture of steam and water flowing to the RCIC turbine. To reproduce the drywell (DW) pressure, we assumed that the torus room was flooded by the tsunami and that heat was removed from the suppression chamber to the sea water. Simulation results by SAMPSON basically agree with the measured values, such as the pressures in the RPV and the DW, until several days after the scram. However, some contradictions between the simulation results and the measured values still remain and are under consideration: for example, the inversion of the RPV pressure observed at 10 hours after scram occurred at 14 hours in the simulation, and the DW pressure behaved differently between simulation and measurement when the SRV started periodic operation at 71 hours. In the current calculation, the model for core material falling to the lower plenum was modified so that debris is not retained at the core plate, based on observation of the XR2-1 experiment. Additionally, a model of RPV failure by melting of the penetrating pipe...

  15. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  16. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
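
    As a rough illustration of the kind of stiff kinetics calculation such a code performs (a minimal sketch with an invented two-species network, not VULCAN's C-H-O network or solver):

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy network: A <-> B with forward rate k1 and reverse rate k2,
        # relaxing toward chemical equilibrium. VULCAN integrates the same
        # kind of stiff ODE system for ~300 reactions, with eddy-diffusion
        # transport terms added.
        k1, k2 = 1.0, 0.2

        def rates(t, n):
            nA, nB = n
            r = k1 * nA - k2 * nB
            return [-r, r]

        sol = solve_ivp(rates, (0.0, 20.0), [1.0, 0.0], method="BDF", rtol=1e-8)
        print("equilibrium A/B:", sol.y[0, -1] / sol.y[1, -1], "expected:", k2 / k1)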

  17. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C
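
    As a rough illustration of the straight-line Gaussian plume model on which ANEMOS is based (a minimal sketch; the dispersion coefficients are passed in directly rather than derived from stability classes, and the numbers are illustrative):

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Straight-line Gaussian plume with ground reflection.

            Q: release rate (e.g. Bq/s), u: wind speed at release height (m/s),
            H: effective release height (m), y: crosswind offset (m);
            sigma_y and sigma_z are the dispersion coefficients evaluated
            at the downwind distance of interest.
            """
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - H)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + H)**2 / (2.0 * sigma_z**2)))  # image-source reflection
            return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Ground-level centerline concentration for a 30 m stack, with sigmas
        # evaluated at roughly 1 km downwind:
        print(gaussian_plume(Q=1.0e6, u=4.0, y=0.0, z=0.0,
                             H=30.0, sigma_y=80.0, sigma_z=40.0))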

  19. ERP correlates of source memory: Unitized source information increases familiarity-based retrieval

    OpenAIRE

    Diana, Rachel A.; Van den Boom, Wijnand; Yonelinas, Andrew P.; Ranganath, Charan

    2010-01-01

    Source memory tests typically require subjects to make decisions about the context in which an item was encoded and are thought to depend on recollection of details from the study episode. Although it is generally believed that familiarity does not contribute to source memory, recent behavioral studies have suggested that familiarity may also support source recognition when item and source information are integrated, or “unitized”, during study (Diana, Yonelinas, and Ranganath 2008). However,...

  20. The influence of time units on the flexibility of the spatial numerical association of response codes effect.

    Science.gov (United States)

    Zhao, Tingting; He, Xianyou; Zhao, Xueru; Huang, Jianrui; Zhang, Wei; Wu, Shuang; Chen, Qi

    2018-05-01

    The Spatial Numerical/Temporal Association of Response Codes (SNARC/STEARC) effects are considered evidence of the association between number or time and space, respectively. Since the SNARC effect was proposed by Dehaene, Bossini, and Giraux in 1993, several studies have suggested that task and cultural factors can affect its flexibility. This study explored the influence of time units on the flexibility of the SNARC effect using Arabic numbers suffixed with time units in magnitude comparison tasks. Experiment 1 replicated the SNARC effect for numbers and the STEARC effect for time units. Experiment 2 explored the flexibility of the SNARC effect when numbers were attached to time units, such that the time units either conflicted with the numerical magnitude or were the same or different across the comparison. Experiment 3 explored whether the SNARC effect for numbers was stable when numbers were near the transition between two adjacent time units. The results indicate that the SNARC effect was flexible when the numbers were suffixed with time units: time units influenced the direction of the SNARC effect in a way that could not be accounted for by the mathematical differences between the time units and numbers. This suggests that the SNARC effect is not obligatory and can be easily adapted or inhibited based on the current context. © 2017 The Authors. British Journal of Psychology published by John Wiley & Sons Ltd on behalf of the British Psychological Society.

  1. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of the MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermal hydraulics, fission product and structural material release, and vapour and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison with experimental results are stated. Sensitivity calculations dealing with chemical speciation upon release, vertical-line aerosol deposition, and steam generator aerosol deposition were then performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application named COCOA, as well as with the CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested; the models themselves are also discussed.

  2. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of points' names coding or more complex attribute table creation).
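
    As a rough illustration of sorting coded survey points into per-layer groups (a minimal sketch with an invented LAYER.NUMBER naming convention, not necessarily the coding scheme v.in.survey expects):

        from collections import defaultdict

        # One survey point per line: "name easting northing elevation".
        raw = """WALL.1 500012.3 1200045.1 12.5
        WALL.2 500013.1 1200046.0 12.4
        TRENCH.1 500020.7 1200050.2 11.9"""

        layers = defaultdict(list)
        for line in raw.splitlines():
            name, e, n, h = line.split()
            layer = name.split(".")[0]   # the code prefix selects the target layer
            layers[layer].append((float(e), float(n), float(h)))

        for layer, points in layers.items():
            print(layer, points)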

  3. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  4. Northeast Hub Partners and United Salts Single Source Determination

    Science.gov (United States)

    This document may be of assistance in applying the New Source Review (NSR) air permitting regulations including the Prevention of Significant Deterioration (PSD) requirements. This document is part of the NSR Policy and Guidance Database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.

  5. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling or even supra-configuration modelling for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data, so it is possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  6. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene, and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample, and the results agreed reasonably well with observed values: the maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases up to a factor of two, which allows the detection system position to be optimised for in-situ PGAA.

  7. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, with varying approximations of recycling, reveal marked sensitivity of both the results and the convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As is general for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred to potentially improve convergence further. (orig.)

  8. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 125I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published 125I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGy h^-1 U^-1 (±1.73%) and 0.965 cGy h^-1 U^-1 (±1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within ±4%) than MCNP4c and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.

  9. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and hardware/software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE); the concept targets automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  10. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: Knowledge of the neutron spectra in a PET cyclotron is important for optimizing the radiation protection of workers and members of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during 18F production. The source term and the corresponding radiation field were estimated for the bombardment of an H2(18)O target with protons of 75 μA current and 16.5 MeV energy. The simulated flux values were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer, and the mean neutron energies also differed from those reported by GE Healthcare. It is recommended to investigate other cross-section data and to use the code's own physical models for a complete characterization of the radiation source term. (Author)

  11. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, and proof of authorship in court. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
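
    As a rough illustration of byte-level n-gram profiling (a minimal sketch: the profile size L and the overlap-based similarity are illustrative stand-ins for the simplified profile and similarity measure proposed in the paper):

        from collections import Counter

        def profile(source: bytes, n: int = 3, L: int = 1500) -> set:
            """Return the L most frequent byte n-grams of a source file."""
            grams = Counter(source[i:i + n] for i in range(len(source) - n + 1))
            return {g for g, _ in grams.most_common(L)}

        def similarity(p1: set, p2: set) -> float:
            """Overlap of two simplified profiles (higher = more alike)."""
            return len(p1 & p2) / max(len(p1), len(p2))

        # Attribute a disputed sample to the candidate author with the closest profile.
        author_profiles = {
            "alice": profile(b"for(int i=0;i<n;i++){sum+=a[i];}"),
            "bob":   profile(b"while n:\n    total += n\n    n -= 1\n"),
        }
        disputed = profile(b"for(int j=0;j<m;j++){tot+=b[j];}")
        print(max(author_profiles,
                  key=lambda a: similarity(author_profiles[a], disputed)))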

  12. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup and allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
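
    As a rough illustration of representing a fragment of a hierarchical classification in XML and reading it back (a minimal sketch; the element and attribute names are invented, not those of the authors' ICD-10 schema):

        import xml.etree.ElementTree as ET

        ICD_FRAGMENT = """
        <chapter code="IX" title="Diseases of the circulatory system">
          <block code="I10-I15" title="Hypertensive diseases">
            <category code="I10" title="Essential (primary) hypertension"/>
          </block>
        </chapter>
        """

        root = ET.fromstring(ICD_FRAGMENT)

        # Walk the hierarchy and print each code with its depth, preserving the
        # parent-child semantics that a flat code list would lose.
        def walk(elem, depth=0):
            print("  " * depth, elem.get("code"), "-", elem.get("title"))
            for child in elem:
                walk(child, depth + 1)

        walk(root)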

  13. Nonlinear pre-coding apparatus of multi-antenna system, has pre-coding unit that extents original constellation points of modulated symbols to several constellation points by using limited perturbation vector

    DEFF Research Database (Denmark)

    2008-01-01

    Coding/modulating units (200-1 to 200-N) output modulated symbols by modulating coded bit streams according to a certain modulation scheme. The limited perturbation vector is calculated using the distribution of perturbation vectors. The pre-coding unit extends the original constellation points of the modulated symbols to several constellation points by using the limited perturbation vector.

  14. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). ME coding exploits redundant bits to save power over the RF link with on-off keying modulation. The relations are presented and discussed for several levels of errors expected in the employed channel, in terms of the bit error rate and the SNR versus the number of users (receivers).
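
    As a rough illustration of the minimum-energy coding idea (a minimal sketch: frequent symbols get codewords of low Hamming weight so that an on-off-keying transmitter radiates less often; the codebook construction here is illustrative):

        from itertools import product

        def me_codebook(num_symbols: int, length: int) -> list:
            """Codewords of the given length, ordered by Hamming weight (fewest 1s first)."""
            words = sorted(product("01", repeat=length), key=lambda w: w.count("1"))
            return ["".join(w) for w in words[:num_symbols]]

        # Four source symbols, ranked most to least probable, mapped to 3-bit
        # codewords: the most probable symbol costs zero transmitted pulses.
        symbols = ["s0", "s1", "s2", "s3"]
        print(dict(zip(symbols, me_codebook(4, 3))))
        # {'s0': '000', 's1': '001', 's2': '010', 's3': '100'}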

  15. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics of the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electrons. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using a Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contrib...

  16. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.
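
    As a rough illustration of how such near-source factors are applied in practice (a minimal sketch; the interpolation follows the code's intent, but the breakpoint values below are transcribed from memory of UBC-97 Tables 16-S and 16-T and must be verified against the published code before any real use):

        import numpy as np

        # Closest-distance-to-fault breakpoints (km) and factors; ILLUSTRATIVE values.
        NA_TABLE = {"A": ([2.0, 5.0, 10.0], [1.5, 1.2, 1.0]),
                    "B": ([2.0, 5.0, 10.0], [1.3, 1.0, 1.0])}
        NV_TABLE = {"A": ([2.0, 5.0, 10.0, 15.0], [2.0, 1.6, 1.2, 1.0]),
                    "B": ([2.0, 5.0, 10.0, 15.0], [1.6, 1.2, 1.0, 1.0])}

        def near_source_factor(table, fault_type: str, dist_km: float) -> float:
            d, f = table[fault_type]
            return float(np.interp(dist_km, d, f))  # clamps outside the breakpoints

        print(near_source_factor(NA_TABLE, "A", 4.0))   # Na for a Type A fault at 4 km
        print(near_source_factor(NV_TABLE, "B", 12.0))  # Nv for a Type B fault at 12 km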

  17. Modeled Sources, Transport, and Accumulation of Dissolved Solids in Water Resources of the Southwestern United States.

    Science.gov (United States)

    Anning, David W

    2011-10-01

    Information on important source areas for dissolved solids in streams of the southwestern United States, the relative share of deliveries of dissolved solids to streams from natural and human sources, and the potential for salt accumulation in soil or groundwater was developed using a SPAtially Referenced Regressions On Watershed attributes (SPARROW) model. Predicted area-normalized reach-catchment delivery rates of dissolved solids to streams ranged from ... Salton Sea accounting unit.

  18. Nitrogen deposition to the United States: distribution, sources, and processes

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    Full Text Available We simulate nitrogen deposition over the US in 2006–2008 using the GEOS-Chem global chemical transport model at 1/2°×2/3° horizontal resolution over North America and adjacent oceans. US emissions of NOx and NH3 in the model are 6.7 and 2.9 Tg N a−1, respectively, each including a 20% natural contribution. Ammonia emissions are a factor of 3 lower in winter than in summer, providing a good match to US network observations of NHx (≡NH3 gas + ammonium aerosol) and ammonium wet deposition fluxes. Model comparisons to observed deposition fluxes and surface air concentrations of oxidized nitrogen species (NOy) show overall good agreement but excessive wintertime HNO3 production over the US Midwest and Northeast. This suggests that the model overestimates N2O5 hydrolysis in aerosols; a possible factor is inhibition by aerosol nitrate. Model results indicate a total nitrogen deposition flux of 6.5 Tg N a−1 over the contiguous US, including 4.2 as NOy and 2.3 as NHx. Domestic anthropogenic, foreign anthropogenic, and natural sources contribute 78%, 6%, and 16%, respectively, of total nitrogen deposition over the contiguous US in the model. The domestic anthropogenic contribution generally exceeds 70% in the east and in populated areas of the west, and is typically 50–70% in remote areas of the west. Total nitrogen deposition in the model exceeds 10 kg N ha−1 a−1 over 35% of the contiguous US.

  19. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, which is a horizontal upward-facing round plate heated to a high temperature, are examined using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF distribution of temperature fluctuations, agree well with measurements, indicating that the present simulations are capable of appropriately reproducing the turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary and streaky flow patterns appear near the edge, and these patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structures is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. These structures are the source of peculiar turbulence characteristics whose prediction is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  20. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class, the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale.
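    As a concrete illustration of "starting a code", the sketch below shows a common textbook heuristic for choosing an on-scale initial step: balance the solution scale against the derivative scale, then cap the result with an accuracy-based estimate for the method order. This is a generic heuristic written for the example, not the specific selection algorithm of the cited paper.

        import numpy as np

        def initial_step(f, t0, y0, order, atol=1e-8, rtol=1e-6):
            """Heuristic on-scale initial step size for an ODE solver."""
            scale = atol + rtol * np.abs(y0)
            d0 = np.linalg.norm(y0 / scale)          # solution scale
            d1 = np.linalg.norm(f(t0, y0) / scale)   # derivative scale
            h = 0.01 * d0 / d1 if d0 > 1e-5 and d1 > 1e-5 else 1e-6
            # Cap with an accuracy-based estimate for a method of this order.
            return min(h, (0.01 / max(d1, 1e-15)) ** (1.0 / (order + 1)))

        # Example: the stiff scalar problem y' = -50 y, y(0) = 1, 4th-order method.
        print(initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0]), order=4))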

  1. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
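    To illustrate what a beacon looks like, the toy sketch below scans Java source for the classic three-statement swap through a temporary variable, a well-known beacon for sorting algorithms. A single regular expression is far cruder than the schema- and beacon-based matching of the paper; the snippet and its names are invented for illustration.

        import re

        # The swap idiom "t = a; a = b; b = t;" as a beacon, after a crude
        # normalization that rewrites index expressions like arr[i] to arr_i.
        SWAP_BEACON = re.compile(
            r"(\w+)\s*=\s*(\w+)\s*;\s*"   # temp = a;
            r"\2\s*=\s*(\w+)\s*;\s*"      # a = b;
            r"\3\s*=\s*\1\s*;"            # b = temp;
        )

        java_src = """
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
        """

        normalized = re.sub(r"\w+\[\w+\]",
                            lambda m: m.group(0).replace("[", "_").replace("]", ""),
                            java_src)
        print(bool(SWAP_BEACON.search(normalized)))  # True -> swap beacon found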

  2. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
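    Since the record points to a public Python package, a brief usage sketch may help. The attribute names and parameter values below follow the package's documentation for a WASP-43b-like configuration and are indicative only; check the current SPIDERMAN documentation before relying on any of them.

        import numpy as np
        import spiderman as sp

        # Phase-curve sketch with the physically motivated "zhang" brightness map.
        params = sp.ModelParams(brightness_model="zhang")
        params.n_layers = 5        # discretization of the planet disc
        params.t0 = 200.0          # transit epoch (days)
        params.per = 0.81347753    # orbital period (days)
        params.a_abs = 0.01526     # semi-major axis (AU)
        params.inc = 82.33         # inclination (degrees)
        params.ecc = 0.0           # eccentricity
        params.w = 90.0            # argument of periastron (degrees)
        params.rp = 0.1595         # planet-to-star radius ratio
        params.a = 4.855           # semi-major axis over stellar radius
        params.p_u1 = 0.0          # planet limb darkening (none)
        params.p_u2 = 0.0
        params.xi = 0.3            # radiative/advective timescale ratio
        params.T_n = 1128.0        # nightside temperature (K)
        params.delta_T = 942.0     # day-night temperature contrast (K)
        params.T_s = 4520.0        # stellar effective temperature (K)
        params.l1 = 1.1e-6         # bandpass lower edge (m)
        params.l2 = 1.7e-6         # bandpass upper edge (m)

        t = params.t0 + np.linspace(0.0, params.per, 1000)  # one full orbit
        lc = params.lightcurve(t)  # relative flux over the orbit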

  3. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  4. Audit calculations of accidents analysis for second unit of Ignalina NPP with ATHLET code

    International Nuclear Information System (INIS)

    Adomavicius, A.; Belousov, A.; Ognerubov, V.

    2004-01-01

    The background of the audit calculations of thermal-hydraulic processes within the framework of the RSR-2 project is presented. Assumptions for the analysis and modeling of the design-basis accident - an RBMK-1500 group distribution header break - are presented. Audit calculations with the ATHLET code and an evaluation of the results were carried out. (author)

  5. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
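    The double-difference operation described above reduces to two first-difference passes, and its inverse to a cumulative sum followed by re-adding the reference set. A minimal sketch, assuming two equal-length integer data sets and the cross-delta-then-adjacent-delta ordering (the record also covers the reverse ordering):

        import numpy as np

        def double_difference(band_a, band_b):
            """Cross-delta between two correlated data sets, then adjacent-delta."""
            cross = band_b.astype(np.int64) - band_a.astype(np.int64)  # cross-delta
            return np.diff(cross, prepend=0)                           # adjacent-delta

        def inverse_double_difference(dd, band_a):
            """Post-decoding: undo the adjacent-delta, then the cross-delta."""
            return band_a.astype(np.int64) + np.cumsum(dd)

        a = np.array([10, 12, 15, 20])   # first source (e.g. one spectral band)
        b = np.array([11, 14, 18, 22])   # correlated second source
        dd = double_difference(a, b)     # -> [1, 1, 1, -1], narrow distribution
        assert np.array_equal(inverse_double_difference(dd, a), b)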

  6. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high-power spallation target are of general interest in the design of an Accelerator Driven System (ADS), and in this context MEGAPIE is one of the key experiments. Target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Transients such as unregulated cooling of the target, loss of heat sink, a trip of the main electromagnetic pump of the LBE loop, and an unprotected proton beam trip were studied with TRAC/AAA and compared to the results obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation database of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  7. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  8. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use could also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license and benefits from regular updates and improvements. (paper)
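    The core operation of any such spectra-production code is the summation of broadened lines from a transition database onto a fine wavenumber grid. The sketch below does this with a simple Lorentz profile and invented line parameters; kspectrum itself uses Voigt profiles and pressure- and temperature-dependent parameters from databases such as HITRAN, so this illustrates the idea rather than the implementation.

        import numpy as np

        def lorentz_absorption(nu, centers, strengths, gamma):
            """k(nu) = sum_i S_i * (gamma/pi) / ((nu - nu0_i)^2 + gamma^2)."""
            nu = nu[:, None]  # broadcast the grid against the line list
            return np.sum(strengths * (gamma / np.pi)
                          / ((nu - centers) ** 2 + gamma ** 2), axis=1)

        grid = np.linspace(2000.0, 2010.0, 5000)   # cm^-1, high resolution
        centers = np.array([2002.5, 2006.0])       # line positions (cm^-1), toy
        strengths = np.array([1.0, 0.4])           # line intensities, toy units
        k = lorentz_absorption(grid, centers, strengths, gamma=0.07)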

  9. Determining Market Categorization of United States Zip Codes for Purposes of Army Recruiting

    Science.gov (United States)

    2016-06-01

    ZIP codes have different densities of potential recruits; the Army uses commercial market segmentation data to analyze markets and past accessions and to assign recruiters and quotas to maximize production. This has led the Army Recruiting Command to rely on proprietary data with 66 market segments per ZIP code for market analysis and for predicting recruiting potential.

  10. Computation of a Canadian SCWR unit cell with deterministic and Monte Carlo codes

    International Nuclear Information System (INIS)

    Harrisson, G.; Marleau, G.

    2012-01-01

    The Canadian SCWR has the potential to achieve the goals that the generation IV nuclear reactors must meet. As part of the optimization process for this design concept, lattice cell calculations are routinely performed using deterministic codes. In this study, the first step (self-shielding treatment) of the computation scheme developed with the deterministic code DRAGON for the Canadian SCWR has been validated. Some options available in the module responsible for the resonance self-shielding calculation in DRAGON 3.06 and different microscopic cross-section libraries based on the ENDF/B-VII.0 evaluated nuclear data file have been tested and compared to a reference calculation performed with the Monte Carlo code SERPENT under the same conditions. Compared to SERPENT, DRAGON underestimates the infinite multiplication factor in all cases. In general, the original Stammler model with the Livolant-Jeanpierre approximations is the most appropriate self-shielding option to use in this case study. In addition, the 89-group WIMS-AECL library for slightly enriched uranium and the 172-group WLUP library for a mixture of plutonium and thorium give the results most consistent with those of SERPENT. (authors)

  11. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the pointwise four-energy-group neutron flux distributions in the MNSR versus the radial, angular and axial directions. Good agreement is noticed between the measured and the calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  12. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for the treatment of malignancies of the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for the treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% at an angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems consider an isotropic dose distribution around the source, and this causes significant errors in treatment planning which are not negligible, especially for a large number of sources inside the applicator.
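    The anisotropy measure used above is a simple normalization: the dose at each (distance, angle) point divided by the dose at the same distance in the transverse plane (90 degrees). A minimal sketch follows; the dose values are hypothetical placeholders standing in for the MCNP4C tallies.

        import numpy as np

        angles = np.array([5.0, 45.0, 90.0, 135.0, 165.0])   # degrees
        doses = {                                 # toy tallies, keyed by distance (cm)
            5.0: np.array([0.70, 0.95, 1.00, 0.96, 0.942]),
            7.0: np.array([0.72, 0.96, 1.00, 0.95, 0.949]),
        }

        for r_cm, d in doses.items():
            ref = d[angles == 90.0][0]         # transverse-plane reference dose
            print(r_cm, np.round(d / ref, 3))  # anisotropy ratio at each angle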

  13. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  14. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations have been done for the most common natural radionuclides in soil, i.e. the 238 U and 232 Th series and 40 K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors.
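    The effective dose referred to here is the tissue-weighted sum of organ equivalent doses, E = sum over tissues T of w_T * H_T. The sketch below applies a truncated, illustrative subset of the ICRP 60 tissue weighting factors (the set underlying ICRP 74) to placeholder organ doses; a real calculation uses the full weight set and the organ doses tallied in the phantom.

        # Illustrative subset of ICRP 60 tissue weighting factors.
        TISSUE_WEIGHTS = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12, "liver": 0.05}

        def effective_dose(organ_doses_sv, weights=TISSUE_WEIGHTS):
            """E = sum_T w_T * H_T over the tissues present in the weight set."""
            return sum(w * organ_doses_sv[t] for t, w in weights.items())

        # Placeholder organ equivalent doses (Sv), standing in for phantom tallies.
        H_T = {"gonads": 1.2e-9, "lung": 0.9e-9, "stomach": 1.0e-9, "liver": 1.1e-9}
        print(f"E = {effective_dose(H_T):.3e} Sv")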

  15. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models shown below. a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or sodium coolant. c) Modified model for bubble dynamics in the coolant. The computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  16. The Small Area Health Statistics Unit: a national facility for investigating health around point sources of environmental pollution in the United Kingdom.

    Science.gov (United States)

    Elliott, P; Westlake, A J; Hills, M; Kleinschmidt, I; Rodrigues, L; McGale, P; Marshall, K; Rose, G

    1992-01-01

    STUDY OBJECTIVE--The Small Area Health Statistics Unit (SAHSU) was established at the London School of Hygiene and Tropical Medicine in response to a recommendation of the enquiry into the increased incidence of childhood leukaemia near Sellafield, the nuclear reprocessing plant in West Cumbria. The aim of this paper was to describe the Unit's methods for the investigation of health around point sources of environmental pollution in the United Kingdom. DESIGN--Routine data, currently including deaths and cancer registrations, are held in a large national database which uses a postcode-based retrieval system to locate cases geographically and link them to the underlying census enumeration districts, and hence to their populations at risk. The main outcome measures were comparisons of observed/expected ratios (based on national rates) within bands delineated by concentric circles around point sources of environmental pollution located anywhere in Britain. MAIN RESULTS--The system is illustrated by a study of mortality from mesothelioma and asbestosis near the Plymouth naval dockyards during 1981-87. Within a 3 km radius of the docks the mortality rate for mesothelioma was higher than the national rate by a factor of 8.4, and that for asbestosis was higher by a factor of 13.6. CONCLUSIONS--SAHSU is a new national facility which is rapidly able to provide rates of mortality and cancer incidence for arbitrary circles drawn around any point in Britain. The example around Plymouth of mesothelioma and asbestosis demonstrates the ability of the system to detect an unusual excess of disease in a small locality, although in this case the findings are likely to be related to occupational rather than environmental exposure. PMID:1431704
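    The unit's core computation is the comparison of observed with expected event counts within bands delineated by concentric circles, the expected counts being derived from national rates and the population at risk. A minimal sketch with entirely hypothetical numbers:

        import numpy as np

        national_rate = 2.0e-5             # cases per person-year (hypothetical)
        band_edges_km = [0, 1, 2, 3]       # concentric circle radii
        person_years = np.array([5.0e4, 1.5e5, 3.0e5])  # population at risk per band
        observed = np.array([9, 14, 18])   # observed cases per band (hypothetical)

        expected = national_rate * person_years
        for i, ratio in enumerate(observed / expected):
            print(f"{band_edges_km[i]}-{band_edges_km[i + 1]} km: O/E = {ratio:.1f}")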

  17. The IPEM code of practice for determination of the reference air kerma rate for HDR 192Ir brachytherapy sources based on the NPL air kerma standard

    International Nuclear Information System (INIS)

    Bidmead, A M; Sander, T; Nutbrown, R F; Locks, S M; Lee, C D; Aird, E G A; Flynn, A

    2010-01-01

    This paper contains the recommendations of the high dose rate (HDR) brachytherapy working party of the UK Institute of Physics and Engineering in Medicine (IPEM). The recommendations consist of a Code of Practice (COP) for the UK for measuring the reference air kerma rate (RAKR) of HDR 192 Ir brachytherapy sources. In 2004, the National Physical Laboratory (NPL) commissioned a primary standard for the realization of the RAKR of HDR 192 Ir brachytherapy sources. This has meant that it is now possible to calibrate ionization chambers directly traceable to an air kerma standard using an 192 Ir source (Sander and Nutbrown 2006 NPL Report DQL-RD 004 (Teddington: NPL) http://publications.npl.co.uk). In order to use the source specification in terms of either the RAKR, K̇_R (ICRU 1985 ICRU Report No 38 (Washington, DC: ICRU); ICRU 1997 ICRU Report No 58 (Bethesda, MD: ICRU)), or the air kerma strength, S_K (Nath et al 1995 Med. Phys. 22 209-34), it has been necessary to develop algorithms that can calculate the dose at any point around brachytherapy sources within the patient tissues. The AAPM TG-43 protocol (Nath et al 1995 Med. Phys. 22 209-34) and the 2004 update TG-43U1 (Rivard et al 2004 Med. Phys. 31 633-74) have been developed more fully than any other protocol and are widely used in commercial treatment planning systems. Since the TG-43 formalism uses the quantity air kerma strength, whereas this COP uses the RAKR, a unit conversion from RAKR to air kerma strength is included in the appendix to this COP. It is recommended that the measured RAKR, determined with a calibrated well chamber traceable to the NPL 192 Ir primary standard, is used in the treatment planning system. The measurement uncertainty in the source calibration based on the system described in this COP has been reduced considerably compared to other methods based on interpolation techniques.
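    The unit conversion mentioned above is numerically trivial: S_K = K̇_R x d_ref^2 with d_ref = 1 m, so an RAKR of 1 µGy/h at 1 m corresponds to an air kerma strength of 1 U, where 1 U = 1 µGy m^2/h = 1 cGy cm^2/h. A short sketch; the example source strength is only indicative of a fresh HDR 192 Ir source.

        def rakr_to_air_kerma_strength(rakr_uGy_per_h, d_ref_m=1.0):
            """Return S_K in U, given the RAKR in uGy/h at the reference distance."""
            return rakr_uGy_per_h * d_ref_m ** 2

        rakr = 40800.0   # uGy/h at 1 m, roughly a fresh ~370 GBq HDR 192Ir source
        print(f"S_K = {rakr_to_air_kerma_strength(rakr):.0f} U")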

  18. The National Spallation Neutron Source Collaboration: Towards a new pulsed neutron source in the United States

    International Nuclear Information System (INIS)

    Appleton, B.R.; Ball, J.B.; Alonso, J.R.; Gough, R.A.; Weng, W.T.; Jason, A.

    1996-01-01

    The US Department of Energy has commissioned Oak Ridge National Laboratory to initiate the conceptual design for a next-generation pulsed spallation neutron source. Current expectation is for a construction start in FY 1998, with commencement of operations in 2004. For this project, ORNL has entered into a collaborative arrangement with LBNL, BNL, LANL (and most recently ANL). The conceptual design study is now well underway, building on the strong base of the extensive work already performed by various Laboratories, as well as input from the user community (from special BESAC subpanels). Study progress, including accelerator configuration and plans for resolution of critical issues, is reported in this paper

  19. Title 16 united states code §55 and its implications for management of concession facilities in Yosemite National Park

    Science.gov (United States)

    Lemons, John

    1987-08-01

    Yosemite National Park is one of the nation's most scenic and ecologically/geologically important parks. Unfortunately, the park is subject to extensive development of concession facilities and associated high levels of visitor use. Those concerned with preservation of the park's resources have attempted to limit the types and extent of such facilities to reduce adverse impacts. Strictly speaking, resolution of the preservation versus use controversy must be based on whether the National Park Service is adhering to its legislative mandate to regulate development and use in the parks. The common interpretation of legislative mandates for national parks, including Yosemite, is that they call for a difficult balancing between the conflicting goals of preservation and use. Accordingly, although concession developments cause significant impacts, they usually have been interpreted to be within the legal discretion allowed the secretary of the interior. However, the usual interpretations of the meanings of legislative mandates for Yosemite National Park have not considered Title 16 United States Code §55, which is a very restrictive statute limiting concession facilities. Many of the limitations imposed on concession facilities by the plain language of the statute have been exceeded. If it can be shown that 16 United States Code §55 is a valid statute, the policy implications for park management in Yosemite National Park would be considerable — namely, that significant reductions in concession facilities could be required. This article examines whether the statute can reasonably be thought to be valid and encourages others to conduct further examination of this question.

  20. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some - if not most - legacy codes far below the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules that deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and some of the physics problems tackled with SMILEI.

  1. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, and the formation of secondary droplets or thin liquid films. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces and other factors that a set of dimensionless criteria can account for. In the present research, we considered a droplet and a pool consisting of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations, and review their suitability for simulating the free-surface flows that may take place after a droplet impact on a pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties through the Reynolds number and the Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
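    The dimensionless criteria referred to above are quick to evaluate, as in the sketch below for a water-like droplet (the property and impact values are illustrative only).

        # Dimensionless groups controlling droplet-impact modes.
        rho, mu, sigma, g = 1000.0, 1.0e-3, 0.072, 9.81   # SI, water-like liquid
        D, U = 2.0e-3, 5.0                     # droplet diameter (m) and speed (m/s)

        Re = rho * U * D / mu          # Reynolds: inertia vs viscosity
        We = rho * U ** 2 * D / sigma  # Weber: inertia vs surface tension
        Fr = U ** 2 / (g * D)          # Froude: inertia vs gravity

        # A large Froude number justifies neglecting gravity, as in the study.
        print(f"Re = {Re:.0f}, We = {We:.0f}, Fr = {Fr:.0f}")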

  2. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software if the benefits he gained from having it justified the cost of programming. Nevertheless, each agent is tempted to free ride and wait for others to develop the software instead...

  3. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs for use on different computers: CHECKR-6.13, FIZCON-6.13, GETMAT-6.13, INTER-6.13, LISTEF-6, PLOTEF-6, PSYCHE-6 and STANEF-6.13.

  4. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The developer tools and software API of the open-source MIS OpenMRS are reviewed. The results of refactoring the code of the dialog subsystem of the CDSS platform, implemented as a module for the open-source MIS OpenMRS, are presented. The structure of the information model of the database of the CDSS dialog subsystem was updated in accordance with MIS OpenMRS requirements. The Model-View-Controller (MVC) based architecture of the CDSS dialog subsystem was re-implemented in the Java programming language using the Spring and Hibernate frameworks. An MIS OpenMRS Encounter portlet form for integrating the CDSS dialog subsystem is developed as an extension. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for the interaction of the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented using AJAX technology via the jQuery library.

  5. Use of EGS4 codes system for the evaluation of electron contamination in telecobalt therapy unit

    International Nuclear Information System (INIS)

    Bernal, B.; Alfonso, R.

    1995-01-01

    The cobalt-60 beams employed in radiotherapy usually have some electron contamination, depending mainly on the selected field size, the diaphragm-to-skin distance and the features of the collimation system. The electron component of a Theratron 780C cobalt unit was evaluated by means of Monte Carlo techniques using the EGS4 code system, which simulates radiation transport in any material and geometry. The radiation transport in the unit head was simulated, as well as the absorbed dose in a water phantom, so the fraction of the surface dose due to electrons could be computed. Measurements from 0 to 5 mm depth were carried out in order to confirm our calculations, finding good agreement with them. Several PMMA filters of different thicknesses were analyzed to study their role in reducing the electron contamination; an optimal thickness of around 5 mm was found.

  6. Development of the Computer Code to Determine Individual Radionuclides in the Rad-waste Container for Ulchin Units 3 and 4

    Energy Technology Data Exchange (ETDEWEB)

    Kang, D.W.; Chi, J.H.; Goh, E.O. [Korea Electric Power Research Institute, Taejon (Korea)

    2001-07-01

    A computer program, RASSAY, was developed to accurately evaluate the activities of various nuclides in the rad-waste containers for Ulchin units 3 and 4. This is the final report of the project "Development of the Computer Code to Determine Individual Radionuclides in the Rad-waste Container for Ulchin Units 3 and 4" and includes the following: 1) the structure of the computer code RASSAY; 2) an example of a surface dose calculation by computer simulation using the MCNP code; 3) methods of sampling and activity measurement of various rad-wastes. (author). 21 refs., 35 figs., 6 tabs.

  7. An alternative technique for simulating volumetric cylindrical sources in MORSE code calculations

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of two-dimensional calculations, with (r,z) discrete ordinates results transformed into source data, and three-dimensional Monte Carlo calculations. An alternative technique for this procedure is presented. The alternative proved effective when applied to a sample problem. (F.E.) [pt

  8. Nonpoint and Point Sources of Nitrogen in Major Watersheds of the United States

    Science.gov (United States)

    Puckett, Larry J.

    1994-01-01

    Estimates of nonpoint and point sources of nitrogen were made for 107 watersheds located in the U.S. Geological Survey's National Water-Quality Assessment Program study units throughout the conterminous United States. The proportions of nitrogen originating from fertilizer, manure, atmospheric deposition, sewage, and industrial sources were found to vary with climate, hydrologic conditions, land use, population, and physiography. Fertilizer sources of nitrogen are proportionally greater in agricultural areas of the West and the Midwest than in other parts of the Nation. Animal manure contributes large proportions of nitrogen in the South and parts of the Northeast. Atmospheric deposition of nitrogen is generally greatest in areas of greatest precipitation, such as the Northeast. Point sources (sewage and industrial) generally are predominant in watersheds near cities, where they may account for large proportions of the nitrogen in streams. The transport of nitrogen in streams increases as amounts of precipitation and runoff increase and is greatest in the Northeastern United States. Because no single nonpoint nitrogen source is dominant everywhere, approaches to control nitrogen must vary throughout the Nation. Watershed-based approaches to understanding nonpoint and point sources of contamination, as used by the National Water-Quality Assessment Program, will aid water-quality and environmental managers to devise methods to reduce nitrogen pollution.

  9. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  10. Basic design of the HANARO cold neutron source using MCNP code

    International Nuclear Information System (INIS)

    Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil

    2005-01-01

    The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range below 5 meV using liquid hydrogen at around 21.6 K as the moderator. The primary goal of the CNS design is to maximize the cold neutron flux at wavelengths of around 2–12 Å and to minimize the nuclear heat load. In this paper, the basic design of the HANARO CNS is described.

  11. Factors for Microbial Carbon Sources in Organic and Mineral Soils from Eastern United States Deciduous Forests

    Energy Technology Data Exchange (ETDEWEB)

    Stitt, Caroline R. [Mills College, Oakland, CA (United States)

    2013-09-16

    Forest soils represent a large portion of global terrestrial carbon; however, which soil carbon sources are used by soil microbes and respired as carbon dioxide (CO2) is not well known. This study will focus on characterizing microbial carbon sources from organic and mineral soils from four eastern United States deciduous forests using a unique radiocarbon (14C) tracer. Results from the dark incubation of organic and mineral soils are heavily influenced by site characteristics when incubated at optimal microbial activity temperature. Sites with considerable differences in temperature, texture, and location differ in carbon source attribution, indicating that site characteristics play a role in soil respiration.

  12. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    Science.gov (United States)

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is known to be an illness which brings marked disability among elderly individuals. At times, patients living with dementia also experience non-cognitive symptoms, which include hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option before adjuvant pharmacological options are considered. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the utilization of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open-source repositories in bioinformatics studies. The authors explain how they tapped into and made use of an open-source code repository in the development of a personalized M-health reminiscence therapy innovation for patients living with dementia. The availability of open-source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable the rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent in the iterative process.

  13. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  14. Changing priorities of codes and standards: An A/E's perspective for operating units and new generation

    International Nuclear Information System (INIS)

    Meyers, B.L.; Jackson, R.W.; Morowski, B.D.

    1994-01-01

    As the nuclear power industry has shifted emphasis from the construction of new plants to the reliability and maintenance of operating units, the industry's commitment to safety has been well guarded and maintained. Many other important indicators of nuclear industry performance are also positive. Unfortunately, by some projections, as many as 25 operating nuclear units could prematurely shut down because of increasing O&M and total operating costs. The immediate impact of higher generating costs on the nuclear industry is evident. However, when viewed over the longer term, high generating costs will also affect license renewals, progress in the development of advanced light water reactor designs, and prospects for a return to the building of new plants. Today's challenge is to leverage the expertise and contribution of the nuclear industry partner organizations to steadily improve the work processes and methods necessary to reduce operating costs, to achieve higher levels in the performance of operating units, and to maintain high standards of technical excellence and safety. From the experience and perspective of an A/E and partner in the nuclear industry, this paper discusses the changing priorities of codes and standards as they relate to opportunities for the communication of lessons learned and for improving responsiveness to industry needs

  15. SURFACE AND LIGHTNING SOURCES OF NITROGEN OXIDES OVER THE UNITED STATES: MAGNITUDES, CHEMICAL EVOLUTION, AND OUTFLOW

    Science.gov (United States)

    We use observations from two aircraft during the ICARTT campaign over the eastern United States and North Atlantic during summer 2004, interpreted with a global 3-D model of tropospheric chemistry (GEOS-Chem) to test current understanding of regional sources, chemical evolution...

  16. 100 Area source operable unit focused feasibility study report. Draft A

    International Nuclear Information System (INIS)

    1994-09-01

    In accordance with the Hanford Past-Practice Strategy (HPPS), a focused feasibility study (FFS) is performed for those waste sites which have been identified as candidates for interim remedial measures (IRM) based on information contained in applicable work plans and limited field investigations (LFI). The FFS process for the 100 Area source operable units will be conducted in two stages. This report, hereafter referred to as the Process Document, documents the first stage of the process. In this stage, IRM alternatives are developed and analyzed on the basis of waste site groups associated with the 100 Area source operable units. The second stage, site-specific evaluation of the IRM alternatives presented in this Process Document, is documented in a series of operable unit-specific reports. The objective of the FFS (this Process Document and subsequent operable unit-specific reports) is to provide decision makers with sufficient information to allow appropriate and timely selection of IRM for sites associated with the 100 Area source operable units. Accordingly, the following information is presented: a presentation of remedial action objectives; a description of 100 Area waste site groups and associated group profiles; a description of IRM alternatives; and detailed and comparative analyses of the IRM alternatives

  17. Geochemical evidence for diversity of dust sources in the southwestern United States

    Science.gov (United States)

    Reheis, M.C.; Budahn, J.R.; Lamothe, P.J.

    2002-01-01

    Several potential dust sources, including generic sources of sparsely vegetated alluvium, playa deposits, and anthropogenic emissions, as well as the area around Owens Lake, California, affect the composition of modern dust in the southwestern United States. A comparison of geochemical analyses of modern and old (a few thousand years) dust with samples of potential local sources suggests that dusts reflect four primary sources: (1) alluvial sediments (represented by Hf, K, Rb, Zr, and rare-earth elements), (2) playas, most of which produce calcareous dust (Sr, associated with Ca), (3) the area of Owens (dry) Lake, a human-induced playa (As, Ba, Li, Pb, Sb, and Sr), and (4) anthropogenic and/or volcanic emissions (As, Cr, Ni, and Sb). A comparison of dust and source samples with previous analyses shows that Owens (dry) Lake and mining wastes from the adjacent Cerro Gordo mining district are the primary sources of As, Ba, Li, and Pb in dusts from Owens Valley. Decreases in the dust contents of As, Ba, and Sb with distance from Owens Valley suggest that dust from southern Owens Valley is being transported at least 400 km to the east. Samples of old dust that accumulated before European settlement are distinctly lower in As, Ba, and Sb abundances relative to modern dust, likely due to modern transport of dust from Owens Valley. Thus, southern Owens Valley appears to be an important, geochemically distinct point source for regional dust in the southwestern United States. Copyright © 2002 Elsevier Science Ltd.

  18. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution method applied in different areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open-source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries and pericardium. The proposed method is feasible for obtaining refined meshes in an acceptable time and with the quality required for simulations using FEM
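    As an indication of what such an open-source meshing pipeline looks like, the sketch below builds a tetrahedral mesh with the Gmsh Python API, one widely used open-source generator. Gmsh stands in here for the kind of tool the record describes; the authors' actual toolchain and the cardiac geometries are not reproduced, and the sphere is only a placeholder for a chamber surface.

        import gmsh

        gmsh.initialize()
        gmsh.model.add("toy_chamber")
        gmsh.model.occ.addSphere(0, 0, 0, 1.0)   # placeholder closed surface
        gmsh.model.occ.synchronize()
        gmsh.option.setNumber("Mesh.CharacteristicLengthMax", 0.2)  # target edge size
        gmsh.model.mesh.generate(3)              # 3 -> volumetric (tetrahedral) mesh
        gmsh.write("toy_chamber.msh")
        gmsh.finalize()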

  19. 40 CFR 60.1025 - Do subpart E new source performance standards also apply to my municipal waste combustion unit?

    Science.gov (United States)

    2010-07-01

    Excerpt from Title 40 (Protection of Environment), Standards of Performance for New Stationary Sources, Standards of Performance for Small Municipal Waste Combustion Units: "If this subpart AAAA applies to your municipal waste combustion unit, then..."

  20. Integrity Analysis of Turbine Building for the MSLB Using GOTHIC code for Wolsong NPP Unit 2

    International Nuclear Information System (INIS)

    Ko, Bong-Jin; Jin, Dong-Sik; Kim, Jong-Hyun; Han, Sang-Koo; Choi, Hoon; Kho, Dong-Wook

    2015-01-01

    A break in the piping between the steam generators and the turbine can lead to rapid loss of secondary circuit inventory. A break inside the turbine building leads to pressure differentials between different areas of the turbine building. In order to improve the environmental protection of various components within the turbine building, a wall has been erected which effectively separates the area in which these components are housed from the rest of the turbine building. Relief panels installed in the turbine building ensure that the pressure differential across the wall would be less than that required to jeopardize the wall integrity. The turbine building service wing is excluded from the scope of this analysis. It is further assumed that any doors in the heavy wall are as strong as the wall itself, with no gaps or leakage around the doors. For the full scope safety analysis of turbine building for Wolsong NPP unit 2, input decks for the various objectives, which can be read by GOTHIC 7.2a, are developed and tested for the steady state simulation. The input data files provide simplified representations of the geometric layout of the turbine building (volumes, dimensions, flow paths, doors, panels, etc.) and the performance characteristics of the various turbine building subsystems

  1. Integrity Analysis of Turbine Building for the MSLB Using GOTHIC code for Wolsong NPP Unit 2

    Energy Technology Data Exchange (ETDEWEB)

    Ko, Bong-Jin; Jin, Dong-Sik; Kim, Jong-Hyun; Han, Sang-Koo [ACT, Daejeon (Korea, Republic of); Choi, Hoon; Kho, Dong-Wook [KHNP-CRI, Daejeon (Korea, Republic of)

    2015-05-15

    A break in the piping between the steam generators and the turbine can lead to rapid loss of secondary circuit inventory. A break inside the turbine building leads to pressure differentials between different areas of the turbine building. In order to improve the environmental protection of various components within the turbine building, a wall has been erected which effectively separates the area in which these components are housed from the rest of the turbine building. Relief panels installed in the turbine building ensure that the pressure differential across the wall would be less than that required to jeopardize the wall integrity. The turbine building service wing is excluded from the scope of this analysis. It is further assumed that any doors in the heavy wall are as strong as the wall itself, with no gaps or leakage around the doors. For the full scope safety analysis of turbine building for Wolsong NPP unit 2, input decks for the various objectives, which can be read by GOTHIC 7.2a, are developed and tested for the steady state simulation. The input data files provide simplified representations of the geometric layout of the turbine building (volumes, dimensions, flow paths, doors, panels, etc.) and the performance characteristics of the various turbine building subsystems.

  2. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders.   Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partne...

  3. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  4. New Source Code: Spelman Women Transforming the Grid of Science and Technology

    Science.gov (United States)

    Okonkwo, Holly

    From a seminary for newly freedwomen in the 19th century "Deep South" of the United States to a "Model Institution for Excellence" in undergraduate science, technology, engineering, and math education, the narrative of Spelman College is a critical piece in understanding the overall history and socially constructed nature of science and higher education in the U.S. Making a place for science at Spelman College disrupts and redefines the presumed and acceptable roles of African American women in science and their social, political and economic engagements in U.S. society as a whole. Over the course of 16 months, I explore the narrative experiences of members of the Spelman campus community and immerse myself in the environment to experience becoming a member of a scientific community that asserts a place for women of African descent in science and technology and perceives this positionality as positive, powerful and the locus of agency. My intention is to offer this research as an in-depth ethnographic presentation of intentional science learning, knowledge production and practice as lived experiences at the multiple intersections of the constructs of race, gender, positionality and U.S. science itself. In this research, I am motivated to move the contemporary discourse on diversifying science, technology, engineering and mathematics fields in the U.S. academy beyond the chronicling of women of African descent as statistical rarities over time, and beyond the deficit frameworks that theoretically encapsulate their narratives. The findings of this research demonstrate that Spelman students, staff and alumni are themselves the cultural capital that validates Spelman's identity as a place and its institutional mission, and they are at the core of the institutional success of the college. It is a personal mission as much as it is an institutional mission, which is precisely what makes it powerful.

  5. Surface-water nutrient conditions and sources in the United States Pacific Northwest

    Science.gov (United States)

    Wise, D.R.; Johnson, H.M.

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.

  6. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  7. Reactor units for power supply to the Russian Arctic regions: Priority assessment of nuclear energy sources

    Directory of Open Access Journals (Sweden)

    Mel'nikov N. N.

    2017-03-01

    Under conditions where small nuclear power plants (SNPP) are competitive and feasible for supplying power to remote and inaccessible regions, competition arises among nuclear energy sources, caused by the wide range of proposals for solving the problem of power supply to different consumers in the decentralized area of the Russian Arctic power complex. The paper suggests a methodological approach for expert assessment of the priority of small power reactor units based on the application of a point system. The priority types of the reactor units have been determined based on evaluation of each unit's conformity to the following criteria: the level of referentiality and degree of readiness of the reactor unit for implementation; the duration of the fuel cycle, which largely determines the autonomy level of the nuclear energy source; the possibility of creating a modular block structure of the SNPP; the maximum weight of a single transported piece of equipment for the reactor unit; and the service life of the main equipment. Within the proposed methodological approach the authors have performed a preliminary ranking of the reactor units according to the various criteria, which allows the relative difference and priority of small nuclear power plant projects aimed at energy supply to the Russian Arctic to be determined quantitatively. To assess the sensitivity of the ranking results to the parameters of the point system, the authors have considered five-point and ten-point scales under variations of the importance (weights) of the different criteria. The paper presents the results of the preliminary ranking, which distinguish the following types of reactor units in order of their priority: ABV-6E (ABV-6M), "Uniterm" and SVBR-10 in the energy range up to 20 MW; RITM-200 (RITM-200M), KLT-40S and SVBR-100 in the energy range above 20 MW.
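    The point-system ranking described above reduces to a weighted sum of criterion scores. The sketch below illustrates that arithmetic only; the weights and the unit scores are invented placeholders, not the paper's data.

        # Hedged sketch of a point-system ranking; all numbers are placeholders.
        weights = {"referentiality": 0.30, "fuel_cycle": 0.25, "modularity": 0.15,
                   "transport_weight": 0.15, "service_life": 0.15}

        # Hypothetical 5-point scores for two of the units named in the abstract.
        scores = {
            "ABV-6E":  {"referentiality": 4, "fuel_cycle": 4, "modularity": 5,
                        "transport_weight": 4, "service_life": 3},
            "SVBR-10": {"referentiality": 3, "fuel_cycle": 5, "modularity": 4,
                        "transport_weight": 3, "service_life": 4},
        }

        def weighted_score(unit):
            return sum(weights[c] * s for c, s in scores[unit].items())

        for unit in sorted(scores, key=weighted_score, reverse=True):
            print(f"{unit}: {weighted_score(unit):.2f}")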

  8. Risk analysis of NPP in multi-unit site for configuration of AAC power source

    International Nuclear Information System (INIS)

    Kim, Myung Ki

    2000-01-01

    Because of the difficulties in finding new sites for nuclear power plants, more units are being added to existing sites. In these multi-unit sites, appropriate countermeasures should be established to cope with a potential station blackout (SBO) accident. Currently, installation of an additional diesel generator (DG) is considered to ensure an alternative AC power source, but it has not yet been decided how many DGs should be installed in a multi-unit site. In this paper, a risk-informed decision-making method, which evaluates the reliability of the electrical system, the core damage frequency, and the site average core damage frequency, is introduced to determine the suitable number of DGs in a multi-unit site. The analysis results show that installing two DGs lowered the site average core damage frequency by 1.4% compared to one DG in a six-unit site. In light of the risk-informed decision criteria in regulatory guide 1.174, there is no difference in safety between the two alternatives. It is concluded that one emergency diesel generator sufficiently guarantees safety against station blackout of nuclear power plants in a multi-unit site. (author)

  9. Development of a method to evaluate shared alternate AC power source effects in multi-unit nuclear power plants

    International Nuclear Information System (INIS)

    Jung, Woo Sik; Yang, Joon Eun

    2003-07-01

    In order to accurately evaluate the Station BlackOut (SBO) event frequency of a multi-unit nuclear power plant that has a shared Alternate AC (AAC) power source, an approach has been developed which accommodates the complex inter-unit behavior of the shared AAC power source under multi-unit Loss Of Offsite Power (LOOP) conditions. The approach is illustrated for two cases, 2 units and 4 units at a single site, and generalized for a multi-unit site. Furthermore, the SBO frequency of the first unit of the 2-unit site is quantified. The SBO frequency at the target unit of a Probabilistic Safety Assessment (PSA) could be underestimated if the inter-unit dependency of the shared AAC power source is not properly modeled. The effect of the inter-unit behavior of the shared AAC power source on the SBO frequency is not negligible, depending on the Common Cause Failure (CCF) characteristics among the AC power sources. The methodology suggested in the present report is believed to be very useful in evaluating the SBO frequency and the core damage frequency resulting from the SBO event. This approach is also applicable to the probabilistic evaluation of other shared systems in a multi-unit nuclear power plant.
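    A toy numerical sketch of the inter-unit dependency discussed above, for a 2-unit site sharing one AAC diesel generator. All probabilities are illustrative placeholders, not the report's data, and the 50:50 allocation of the shared AAC DG under concurrent demand is an assumption of this sketch, not the report's model.

        f_loop  = 1e-2   # multi-unit LOOP frequency per year (placeholder)
        p_edg   = 5e-3   # probability the unit's own emergency DGs all fail
        p_aac   = 5e-2   # failure-on-demand probability of the shared AAC DG
        p_other = 5e-3   # probability the other unit also loses its own EDGs

        # Independent treatment: unit 1 always gets the AAC DG if it works.
        sbo_no_sharing = f_loop * p_edg * p_aac

        # Shared treatment: the AAC DG is also unavailable to unit 1 when the
        # other unit demands it simultaneously and wins the (assumed) coin flip.
        p_aac_unavailable = p_aac + (1 - p_aac) * p_other * 0.5
        sbo_shared = f_loop * p_edg * p_aac_unavailable

        print(f"SBO frequency, sharing ignored:  {sbo_no_sharing:.2e} /yr")
        print(f"SBO frequency, sharing modeled:  {sbo_shared:.2e} /yr")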

  10. Are biogenic emissions a significant source of summertime atmospheric toluene in the rural Northeastern United States?

    OpenAIRE

    M. L. White; R. S. Russo; Y. Zhou; J. L. Ambrose; K. Haase; E. K. Frinak; R. K. Varner; O. W. Wingenter; H. Mao; R. Talbot; B. C. Sive

    2009-01-01

    Summertime atmospheric toluene enhancements at Thompson Farm in the rural northeastern United States were unexpected and resulted in a toluene/benzene seasonal pattern that was distinctly different from that of other anthropogenic volatile organic compounds. Consequently, three hydrocarbon sources were investigated for potential contributions to the enhancements during 2004–2006. These included: (1) increased warm season fuel evaporation coupled with changes in reformulated gasoline (RFG) con...

  11. Dissolved-solids sources, loads, yields, and concentrations in streams of the conterminous United States

    Science.gov (United States)

    Anning, David W.; Flynn, Marilyn E.

    2014-01-01

    Recent studies have shown that excessive dissolved-solids concentrations in water can have adverse effects on the environment and on agricultural, domestic, municipal, and industrial water users. Such effects motivated the U.S. Geological Survey’s National Water Quality Assessment Program to develop a SPAtially-Referenced Regression on Watershed Attributes (SPARROW) model that has improved the understanding of sources, loads, yields, and concentrations of dissolved solids in streams of the conterminous United States.

  12. Assessment of gamma irradiation heating and damage in miniature neutron source reactor vessel using computational methods and SRIM - TRIM code

    International Nuclear Information System (INIS)

    Appiah-Ofori, F. F.

    2014-07-01

    The effects of gamma radiation heating and irradiation damage in the reactor vessel of Ghana Research Reactor 1 (GHARR-1), a Miniature Neutron Source Reactor, were assessed using an implicit control-volume finite-difference numerical computation and validated by the SRIM-TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat that interacts with and is absorbed completely by the interior surface of the MNSR vessel, which affects its performance through the induced displacement damage. This displacement damage is a result of lattice defects being created, which impair the vessel through the formation of point defect clusters such as vacancies and interstitials; these can grow into dislocation loops and networks, voids and bubbles, causing changes through the thickness of the vessel. The microscopic defects produced in the vessel due to γ-radiation are referred to as radiation damage, while the modifications these defects produce in the macroscopic properties of the vessel are known as radiation effects. These radiation damage effects are of major concern for materials used in nuclear energy production. In this study, the overall objective was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel by a well-developed mathematical model, with analytical and numerical solutions simulating the radiation damage in the vessel. The SRIM-TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control-volume finite-difference method was used to determine the temperature profile within the vessel due to γ-radiation heating. The methodology adopted in assessing γ-radiation heating in the vessel involved development of the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, using both an analytical and an implicit control-volume finite-difference approach to determine the maximum temperature and
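    The one-dimensional steady-state conduction problem named above can be sketched compactly. The snippet below solves d/dx(k dT/dx) + q = 0 with fixed surface temperatures on a node-centered grid, in the spirit of the implicit control-volume finite-difference method; the thickness, conductivity, heating rate and boundary temperatures are illustrative, not GHARR-1 data.

        import numpy as np

        L, n = 0.01, 50             # wall thickness [m], number of nodes (placeholders)
        k, q = 16.0, 5.0e6          # conductivity [W/(m K)], heating [W/m^3]
        T_in, T_out = 350.0, 320.0  # fixed surface temperatures [K]

        dx = L / n
        A = np.zeros((n, n))
        b = np.full(n, -q * dx**2 / k)      # discrete: T[i-1] - 2T[i] + T[i+1] = -q dx^2 / k
        for i in range(n):
            A[i, i] = -2.0
            if i > 0:
                A[i, i - 1] = 1.0
            if i < n - 1:
                A[i, i + 1] = 1.0
        b[0]  -= T_in                       # Dirichlet boundaries folded into the RHS
        b[-1] -= T_out
        T = np.linalg.solve(A, b)
        print(f"peak temperature: {T.max():.1f} K")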

  13. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm^-1 resolution in the opacity function (≲ 10^3 points per wavenumber bin) may result in errors ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct "null-hypothesis" models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
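    The diffusivity-factor finding quoted above is easy to check numerically: in the pure-absorption limit the exact flux transmission through a layer of optical depth τ is 2E3(τ), which a diffusivity factor D approximates as exp(-Dτ). A short check (our own illustration, not part of HELIOS):

        import numpy as np
        from scipy.special import expn

        taus  = np.array([0.05, 0.1, 0.5, 1.0, 2.0])
        exact = 2.0 * expn(3, taus)              # exact flux transmission 2*E3(tau)
        for D in (1.66, 2.0):                    # classic 1.66 vs the value quoted above
            err = np.max(np.abs(np.exp(-D * taus) - exact) / exact)
            print(f"D = {D}: max relative error {err:.1%}")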

  14. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    International Nuclear Information System (INIS)

    O'Connor, Evan; Ott, Christian D

    2010-01-01

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  15. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, Evan; Ott, Christian D, E-mail: evanoc@tapir.caltech.ed, E-mail: cott@tapir.caltech.ed [TAPIR, Mail Code 350-17, California Institute of Technology, Pasadena, CA 91125 (United States)

    2010-06-07

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.
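    GR1D itself is far beyond a snippet, but the finite-volume machinery the abstract names (slope reconstruction plus an interface flux) can be illustrated on scalar advection u_t + a u_x = 0 with piecewise-linear, minmod-limited slopes and an upwind flux; the grid, profile and parameters below are arbitrary illustration choices, not GR1D internals.

        import numpy as np

        def minmod(a, b):
            return np.where(a * b > 0,
                            np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

        n, a, cfl = 200, 1.0, 0.5
        x  = np.linspace(0, 1, n, endpoint=False)
        u0 = np.exp(-200 * (x - 0.3) ** 2)       # initial profile, periodic box
        u, dx = u0.copy(), 1.0 / n
        dt = cfl * dx / a

        for _ in range(200):
            s  = minmod(np.roll(u, -1) - u, u - np.roll(u, 1)) / dx  # limited slopes
            uL = u + 0.5 * dx * s          # reconstructed left state at i+1/2
            flux = a * uL                  # upwind interface flux (a > 0)
            u -= dt / dx * (flux - np.roll(flux, 1))

        print(f"total 'mass' drift: {abs(u.sum() - u0.sum()):.2e}")   # conservative by construction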

  16. Three Mile Island Unit 1 Main Steam Line Break Three-Dimensional Neutronics/Thermal-Hydraulics Analysis: Application of Different Coupled Codes

    International Nuclear Information System (INIS)

    D'Auria, Francesco; Moreno, Jose Luis Gago; Galassi, Giorgio Maria; Grgic, Davor; Spadoni, Antonino

    2003-01-01

    A comprehensive analysis of the double-ended main steam line break (MSLB) accident assumed to occur in the Babcock and Wilcox Three Mile Island Unit 1 (TMI-1) has been carried out at the Dipartimento di Ingegneria Meccanica, Nucleare e della Produzione of the University of Pisa, Italy, in cooperation with the University of Zagreb, Croatia. The overall activity has been completed within the framework of the participation in the Organization for Economic Cooperation and Development-Committee on the Safety of Nuclear Installations-Nuclear Science Committee pressurized water reactor MSLB benchmark. Thermal-hydraulic system codes (various versions of Relap5), three-dimensional (3-D) neutronics codes (Parcs, Quabbox, and Nestle), and one subchannel code (Cobra) have been adopted for the analysis. Results from the following codes (or code versions) are assumed as reference: (1) Relap5/mod3.2.2, beta version, coupled with the 3-D neutron kinetics Parcs code (parallel virtual machine (PVM) coupling); (2) Relap5/mod3.2.2, gamma version, coupled with the 3-D neutron kinetics Quabbox code (direct coupling); (3) the Relap5/3D code coupled with the 3-D neutron kinetics Nestle code. The influence of PVM and of direct coupling is also discussed. Boundary and initial conditions of the system, including those relevant to the fuel status, have been supplied by Pennsylvania State University in cooperation with GPU Nuclear Corporation (the utility, owner of TMI) and the U.S. Nuclear Regulatory Commission. The comparison among the results obtained by adopting the same thermal-hydraulic nodalization and the various coupled code versions is discussed in this paper. The capability of the control rods to recover the accident has been demonstrated in all the cases, as well as the capability of all the codes to predict the time evolution of the assigned transient. However, one stuck control rod caused some 'recriticality' or 'return to power' whose magnitude is largely affected by boundary and initial conditions.

  17. Documentation of Source Code.

    Science.gov (United States)

    1988-05-12

    the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen

  18. Error Correcting Codes I. Applications of Elementary Algebra to Information Theory. Modules and Monographs in Undergraduate Mathematics and Its Applications Project. UMAP Unit 346.

    Science.gov (United States)

    Rice, Bart F.; Wilde, Carroll O.

    It is noted that with the prominence of computers in today's technological society, digital communication systems have become widely used in a variety of applications. Some of the problems that arise in digital communications systems are described. This unit presents the problem of correcting errors in such systems. Error correcting codes are…
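    As a concrete instance of the unit's subject, the sketch below (our illustration, not material from the UMAP module) encodes 4 data bits with the classic Hamming(7,4) code and corrects a single flipped bit via the syndrome.

        import numpy as np

        # Hamming(7,4): G = [I | P], H = [P^T | I]; all arithmetic over GF(2).
        G = np.array([[1,0,0,0,1,1,0],
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        def encode(data4):
            return (np.array(data4) @ G) % 2

        def correct(word7):
            syndrome = (H @ word7) % 2
            if syndrome.any():        # nonzero syndrome: it equals the column of H
                pos = np.where((H.T == syndrome).all(axis=1))[0][0]
                word7 = word7.copy()
                word7[pos] ^= 1       # flip the located bit back
            return word7

        cw = encode([1, 0, 1, 1])
        cw[2] ^= 1                    # flip one bit "in transit"
        assert (correct(cw) == encode([1, 0, 1, 1])).all()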

  19. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between AFEs for objects described by mesh models and objects described using solid volumes of GEANT4. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required to perform the dosimetric calculations, especially for energies below 100 keV.

  20. Effects of demographic factors and information sources on United States consumer perceptions of animal welfare.

    Science.gov (United States)

    McKendree, M G S; Croney, C C; Widmar, N J O

    2014-07-01

    As consumers have become more interested in understanding how their food is produced, scrutiny and criticism have increased regarding intensified food animal production methods. Resolution of public concerns about animal agricultural practices depends on understanding the myriad factors that provide the basis for those concerns. An online survey of 798 U.S. households was conducted to investigate relationships between household characteristics (demographics, geographic location, and experiences) and the level of concern for animal welfare, as well as the sources used to obtain information on the subject. Because recent media attention has focused on animal care practices used in the U.S. swine industry, respondents were also asked specific questions pertaining to their perceptions of pig management practices and welfare issues and their corresponding pork purchasing behavior. Respondents reporting higher levels of concern about animal welfare were more frequently female, younger, and self-reported members of the Democratic Party. Fourteen percent of respondents reported a reduction in pork consumption because of animal welfare concerns, with an average reduction of 56%. Over half of the respondents (56%) did not have a primary source for animal welfare information; those who identified a primary information source most commonly used information provided by animal protection organizations, the Humane Society of the United States (HSUS), and People for the Ethical Treatment of Animals (PETA). Midwest participants were significantly less concerned (at the 5% significance level) about domestic livestock animal welfare and more frequently reported not having a source for animal welfare information than those from other regions of the United States. Overall, the U.S. livestock and poultry industries and other organizations affiliated with animal agriculture appear to be less frequently used as public sources of information on animal welfare than popular animal protection organizations. Improved

  1. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
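    A toy rendering of the vector network coding operation described above: an intermediate node combines two incoming length-L packets with L x L coding matrices over a finite field. GF(257) and the random matrices are illustration choices of this sketch, not the paper's construction.

        import numpy as np

        p, L = 257, 4                       # prime field GF(p) and packet length
        rng = np.random.default_rng(0)
        x1, x2 = rng.integers(0, p, L), rng.integers(0, p, L)    # incoming packets
        A,  B  = rng.integers(0, p, (L, L)), rng.integers(0, p, (L, L))  # coding matrices

        y = (A @ x1 + B @ x2) % p           # coded packet forwarded downstream
        print(y)

    A sink that gathers enough independent combinations recovers x1 and x2 by inverting the stacked coding matrices modulo p, the block-wise analogue of solving for scalar coding coefficients.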

  2. Target-ion source unit ionization efficiency measurement by method of stable ion beam implantation

    CERN Document Server

    Panteleev, V.N; Fedorov, D.V; Moroz, F.V; Orlov, S.Yu; Volkov, Yu.M

    The ionization efficiency is one of the most important parameters of a target-ion source system used on-line for the production of exotic radioactive beams. Determining the ionization efficiency value as a characteristic of a target-ion source unit at the stage of its qualification before on-line use is a very important step in the preparation for an on-line experiment. At the IRIS facility (Petersburg Nuclear Physics Institute, Gatchina) a reliable and rather precise method of measuring the target-ion source unit ionization efficiency by stable beam implantation has been developed. The method exploits an off-line mass-separator for the implantation of ion beams of selected stable isotopes of different elements into a tantalum foil placed inside the Faraday cup in the focal plane of the mass-separator. The amount of implanted ions has been measured with high accuracy by the current integrator connected to the Faraday cup. After the implantation of needed a...
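    The bookkeeping behind such a measurement is simple and worth making explicit. In the sketch below the collected charge and the ion count are invented numbers, and singly charged ions are assumed.

        E_CHARGE = 1.602e-19   # coulombs; singly charged ions assumed

        collected_charge = 3.2e-9             # C, from the current integrator (invented)
        atoms_implanted  = collected_charge / E_CHARGE
        ions_out_online  = 4.0e9              # ions counted after ionization (invented)

        print(f"{atoms_implanted:.2e} atoms implanted")
        print(f"ionization efficiency: {ions_out_online / atoms_implanted:.1%}")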

  3. Analysis of source spectra, attenuation, and site effects from central and eastern United States earthquakes

    International Nuclear Information System (INIS)

    Lindley, G.

    1998-02-01

    This report describes the results from three studies of source spectra, attenuation, and site effects of central and eastern United States earthquakes. In the first study, source parameter estimates taken from 27 previous studies were combined to test the assumption that the earthquake stress drop is roughly a constant, independent of earthquake size. 200 estimates of stress drop and seismic moment from eastern North American earthquakes were combined. It was found that the estimated stress drop from the 27 studies increases approximately as the square root of the seismic moment, from about 3 bars at 10^20 dyne-cm to 690 bars at 10^25 dyne-cm. These results do not support the assumption of a constant stress drop when estimating ground motion parameters from eastern North American earthquakes. In the second study, broadband seismograms recorded by the United States National Seismograph Network and cooperating stations were analysed to determine Q_Lg as a function of frequency in five regions: the northeastern US, southeastern US, central US, northern Basin and Range, and California and western Nevada. In the third study, spectral analysis was used to estimate the anelastic attenuation of four regional phases and the source parameters of 27 earthquakes, including the m_b 5.6 West Texas earthquake of 14 April 1995.
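    The square-root claim can be checked directly from the two quoted endpoints:

        import math

        # implied exponent b in stress_drop ∝ moment**b between the two endpoints
        b = math.log(690 / 3) / math.log(1e25 / 1e20)
        print(f"implied exponent: {b:.2f}")   # ~0.47, close to a square root (0.5)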

  4. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

    The moderator system of CANDU, a prototype PHWR (pressurized heavy-water reactor), has been modeled in multiple dimensions for computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested on modeled hydrothermal systems of heavy-water reactors. Two commercial codes, COMSOL Multiphysics and ANSYS-CFX, along with OpenFOAM, an open-source code, are applied to various simplified and practical problems. All the implemented computational codes are tested on a benchmark problem, the STERN laboratory experiment, with precise modeling of the tubes, and compared with each other as well as with the measured data and with a porous model based on the experimental correlation of pressure drop. The effect of the turbulence model is also discussed for these low Reynolds number flows. As a result, the codes are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  5. Sources of HO x and production of ozone in the upper troposphere over the United States

    OpenAIRE

    Jaeglé, L.; Jacob, Daniel James; Brune, W. H.; Tan, D.; Faloona, I. C.; Weinheimer, A. J.; Ridley, B. A.; Campos, T. L.; Sachse, G. W.

    1998-01-01

    The sources of HOx (OH+peroxy radicals) and the associated production of ozone at 8–12 km over the United States are examined by modeling observations of OH, HO2, NO, and other species during the SUCCESS aircraft campaign in April–May 1996. The HOx concentrations measured in SUCCESS are up to a factor of 3 higher than can be calculated from oxidation of water vapor and photolysis of acetone. The highest discrepancy was seen in the outflow of a convective storm. We show that convective injecti...

  6. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media, based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance on machines ranging from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documents in a public repository.
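    OpenSWPC is written in Fortran 2003, but the velocity-stress staggered-grid finite difference idea at its core can be sketched in one dimension for an SH wave (rho v_t = s_x, s_t = mu v_x). The sketch below is our own illustration in Python with arbitrary material parameters, not an excerpt from OpenSWPC.

        import numpy as np

        n, dx, dt = 400, 10.0, 1e-3          # grid points, spacing [m], step [s]
        rho, vs   = 2500.0, 3000.0           # density [kg/m^3], shear speed [m/s]
        mu = rho * vs**2                     # shear modulus; CFL = vs*dt/dx = 0.3
        v  = np.zeros(n); s = np.zeros(n)    # particle velocity, shear stress
        v[n // 2] = 1.0                      # impulsive "single force" style source

        for _ in range(300):
            s[:-1] += dt * mu * np.diff(v) / dx     # stress lives on half-nodes
            v[1:]  += dt * np.diff(s) / dx / rho    # velocity update from stress gradient
        print(f"peak |v| after 0.3 s: {np.abs(v).max():.3e}")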

  7. Determination of Noise Level and Its Sources in the Neonatal Intensive Care Unit and Neonatal Ward

    Directory of Open Access Journals (Sweden)

    Mahdi Jahangir Blourchian

    2015-12-01

    Background: In neonatal intensive care units (NICUs), different sound intensities and frequencies are produced by different sources, which may exert undesirable physiological effects on the infants. The aim of this study was to determine the noise level and its sources in the NICU and neonatal ward of Al-Zahra Hospital of Rasht, Iran. Methods: In this descriptive cross-sectional study, the intensity of the sounds generated by the internal and external sources in the NICU and neonatal ward was measured using a sound level meter device. The sound produced by each of the sources was individually calculated. Data were analyzed with descriptive and analytical statistics, using SPSS version 19. Results: The mean noise levels in six rooms and a hallway during morning, afternoon and night shifts with the electromechanical devices turned on were 61.67±4.5, 61.32±4.32 and 60.71±4.56 dB, respectively. Moreover, with the devices turned off, the mean noise levels during morning, afternoon and night shifts were 64.97±2.6, 60.6±1.29 and 57.91±4.73 dB, respectively. The differences between the mean noise levels in the neonatal wards (standard noise level=45 dB) during each shift with the electromechanical devices turned on and off were statistically significant (P=0.002 and P

  8. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1995-01-01

    This paper describes calculations performed to validate MCNP-DSP, the modified version of the MCNP code, with respect to: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency analysis parameters of the MCNP-DSP code.

  9. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize the fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover those levels of detail as accurately as possible within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to get precise results. Validation tests were performed on 12 configurations of 3×3 clusters, where simulations in transport theory and in diffusion theory followed by pin-power reconstruction were compared. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. The results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
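    The de-homogenization step itself is conceptually simple: the smooth homogenized nodal power is modulated by pre-computed pin form functions from the lattice calculation. The 3x3 form-function values below are invented for illustration, not DRAGON5 output.

        import numpy as np

        form = np.array([[1.05, 0.98, 1.05],
                         [0.98, 0.88, 0.98],    # e.g. a central water hole depresses power
                         [1.05, 0.98, 1.05]])
        form /= form.mean()             # form functions are normalized to mean one

        node_power = 12.0               # homogenized power of the node (placeholder)
        pin_power = node_power * form / form.size   # reconstructed pin-by-pin powers
        print(pin_power.round(3))
        print(pin_power.sum())          # sums back to the node power exactly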

  10. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For the study, the behavior mechanism of corrosion products in the primary system of Kori unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed based on domestic plant data, with the CRUDTRAN code used for the prediction. The predicted radionuclide inventory in the primary system is expected to serve as baseline data for estimating the volume of radioactive wastes when decommissioning nuclear power plants in the future, and as an important criterion for classifying the level of radioactive wastes used to compute their quantity. The results are also expected to be utilized in reducing the radiation exposure of workers performing maintenance and repairs in high radiation areas and in selecting decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct the source term assessment for other NPP types, such as CANDU and OPR-1000, in addition to the Westinghouse type nuclear plants.

  11. Nuclear thermal source transfer unit, post-blast soil sample drying system

    International Nuclear Information System (INIS)

    Wiser, Ralph S.; Valencia, Matthew J

    2017-01-01

    Los Alamos National Laboratory states that its mission is "To solve national security challenges through scientific excellence." The Science Undergraduate Laboratory Internship (SULI) program exists to engage undergraduate students in STEM work by providing opportunities to work at DOE facilities. As an undergraduate mechanical engineering intern under the SULI program at Los Alamos during the fall semester of 2016, I had the opportunity to contribute to the mission of the Laboratory while developing skills in a STEM discipline. I worked with Technology Applications, an engineering group that supports non-proliferation, counter-terrorism, and emergency response missions. This group specializes in tool design, weapons engineering, rapid prototyping, and mission training. I assisted with two major projects during my appointment at Los Alamos. The first was a thermal source transportation unit, intended to safely contain a nuclear thermal source during transit. The second was a soil drying unit for use in nuclear post-blast field sample collection. These projects have given me invaluable experience working alongside a team of professional engineers. Skills developed include modeling, simulation, group design, product and system design, and product testing.

  12. Nuclear thermal source transfer unit, post-blast soil sample drying system

    Energy Technology Data Exchange (ETDEWEB)

    Wiser, Ralph S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Valencia, Matthew J [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-03

    Los Alamos National Laboratory states that its mission is "To solve national security challenges through scientific excellence." The Science Undergraduate Laboratory Internship (SULI) program exists to engage undergraduate students in STEM work by providing opportunities to work at DOE facilities. As an undergraduate mechanical engineering intern under the SULI program at Los Alamos during the fall semester of 2016, I had the opportunity to contribute to the mission of the Laboratory while developing skills in a STEM discipline. I worked with Technology Applications, an engineering group that supports non-proliferation, counter-terrorism, and emergency response missions. This group specializes in tool design, weapons engineering, rapid prototyping, and mission training. I assisted with two major projects during my appointment at Los Alamos. The first was a thermal source transportation unit, intended to safely contain a nuclear thermal source during transit. The second was a soil drying unit for use in nuclear post-blast field sample collection. These projects have given me invaluable experience working alongside a team of professional engineers. Skills developed include modeling, simulation, group design, product and system design, and product testing.

  13. Quad Cities Unit 2 Main Steam Line Acoustic Source Identification and Load Reduction

    International Nuclear Information System (INIS)

    DeBoo, Guy; Ramsden, Kevin; Gesior, Roman

    2006-01-01

    The Quad Cities Units 1 and 2 have a history of steam line vibration issues. The implementation of an Extended Power Up-rate resulted in significant increases in steam line vibration as well as acoustic loading of the steam dryers, which led to equipment failures and fatigue cracking of the dryers. This paper discusses the results of extensive data collection on the Quad Cities Unit 2 replacement dryer and the Main Steam Lines. These data were taken with the intent of identifying acoustic sources in the steam system. Review of the data confirmed that vortex shedding coupled with column resonance in the relief and safety valve stub pipes was the principal source of large-magnitude acoustic loads in the main steam system. Modifications were developed in sub-scale testing to alter the acoustic properties of the valve standpipes and add acoustic damping to the system. The modifications developed and installed consisted of acoustic side branches that were attached to the Electromatic Relief Valve (ERV) and Main Steam Safety Valve (MSSV) attachment pipes. Subsequent post-modification testing was performed in-plant to confirm the effectiveness of the modifications. The modifications have been demonstrated to reduce vibration loads at full Extended Power Up-rate (EPU) conditions to levels below those at Original Licensed Thermal Power (OLTP). (authors)
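    The column resonance named above is the quarter-wave mode of a closed side branch, f = c/(4L). With rough, illustrative numbers (our assumptions, not Quad Cities data):

        c = 480.0    # approximate sound speed in saturated steam [m/s] (assumption)
        L = 0.75     # standpipe length [m] (assumption)
        print(f"quarter-wave frequency ~ {c / (4 * L):.0f} Hz")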

  14. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult

  15. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
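    The quantity such a code tabulates follows the MIRD-style sum over decay radiations, S(T←S) ∝ Σ_i n_i E_i AF_i(T←S) / m_T, with n_i the yield per decay, E_i the mean energy and AF_i the absorbed fraction. A minimal sketch with invented decay data and the unit-conversion constant omitted:

        radiations = [                     # (type, yield per decay, mean E [MeV], AF)
            ("alpha", 1.00, 5.30, 0.0),    # alphas stop inside the source organ
            ("beta",  0.85, 0.25, 0.01),
            ("gamma", 0.90, 0.66, 0.12),
        ]
        m_target_g = 310.0                 # target organ mass in grams (placeholder)

        energy_per_decay = sum(n * E * af for _, n, E, af in radiations)
        print(f"{energy_per_decay / m_target_g:.3e} MeV per gram per decay")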

  16. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

    During the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, two distinct but interrelated subject areas were identified: the prevention of accidents involving radiation sources and the prevention of theft or any other unauthorized use of radioactive materials. What analysis reveals, rather, is that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus emerged into the arena of international relations as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources. (N.C.)

  17. Point sources of emerging contaminants along the Colorado River Basin: Source water for the arid Southwestern United States

    Science.gov (United States)

    Jones-Lepp, Tammy L.; Sanchez, Charles; Alvarez, David A.; Wilson, Doyle C.; Taniguchi-Fu, Randi-Laurant

    2012-01-01

    Emerging contaminants (ECs) (e.g., pharmaceuticals, illicit drugs, personal care products) have been detected in waters across the United States. The objective of this study was to evaluate point sources of ECs along the Colorado River, from the headwaters in Colorado to the Gulf of California. At selected locations in the Colorado River Basin (sites in Colorado, Utah, Nevada, Arizona, and California), waste stream tributaries and receiving surface waters were sampled using either grab sampling or polar organic chemical integrative samplers (POCIS). The grab samples were extracted using solid-phase cartridge extraction (SPE), and the POCIS sorbents were transferred into empty SPEs and eluted with methanol. All extracts were prepared for, and analyzed by, liquid chromatography–electrospray-ion trap mass spectrometry (LC–ESI-ITMS). Log D_OW values were calculated for all ECs in the study and compared to the empirical data collected. POCIS extracts were screened for the presence of estrogenic chemicals using the yeast estrogen screen (YES) assay. Extracts from the 2008 POCIS deployment in the Las Vegas Wash showed the second highest estrogenicity response. In the grab samples, azithromycin (an antibiotic) was detected in all but one urban waste stream, with concentrations ranging from 30 ng/L to 2800 ng/L. Concentration levels of azithromycin, methamphetamine and pseudoephedrine showed temporal variation from the Tucson WWTP. Those ECs that were detected in the main surface water channels (those that are diverted for urban use and irrigation along the Colorado River) were in the region of the limit-of-detection (e.g., 10 ng/L), but most were below detection limits.

  18. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

    Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km of depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are within reasonable ranges. To conduct the reservoir modeling and simulations, an open-source, finite-element-based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
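    The flow-rate dependence noted in point 3) follows directly from the energy balance P = mdot * c_p * (T_production - T_injection). A back-of-envelope sketch with illustrative numbers, not the paper's reservoir parameters:

        mdot = 40.0              # water mass flow rate [kg/s] (placeholder)
        cp   = 4200.0            # specific heat of water [J/(kg K)]
        T_production, T_injection = 180.0, 60.0   # degrees C (placeholders)

        P_thermal  = mdot * cp * (T_production - T_injection)   # watts
        P_electric = 0.10 * P_thermal    # assumed 10% thermal-to-electric conversion
        print(f"thermal {P_thermal/1e6:.1f} MW, electric ~{P_electric/1e6:.1f} MW")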

  19. Quantifying the sources of ozone, fine particulate matter, and regional haze in the Southeastern United States.

    Science.gov (United States)

    Odman, M Talat; Hu, Yongtao; Russell, Armistead G; Hanedar, Asude; Boylan, James W; Brewer, Patricia F

    2009-07-01

    A detailed sensitivity analysis was conducted to quantify the contributions of various emission sources to ozone (O3), fine particulate matter (PM2.5), and regional haze in the Southeastern United States. O3 and particulate matter (PM) levels were estimated using the Community Multiscale Air Quality (CMAQ) modeling system and light extinction values were calculated from modeled PM concentrations. First, the base case was established using the emission projections for the year 2009. Then, in each model run, SO2, primary carbon (PC), NH3, NO(x) or VOC emissions from a particular source category in a certain geographic area were reduced by 30% and the responses were determined by calculating the difference between the results of the reduced emission case and the base case. The sensitivity of summertime O3 to VOC emissions is small in the Southeast and ground-level NO(x) controls are generally more beneficial than elevated NO(x) controls (per unit mass of emissions reduced). SO2 emission reduction is the most beneficial control strategy in reducing summertime PM2.5 levels and improving visibility in the Southeast and electric generating utilities are the single largest source of SO2. Controlling PC emissions can be very effective locally, especially in winter. Reducing NH3 emissions is an effective strategy to reduce wintertime ammonium nitrate (NO3NH4) levels and improve visibility; NO(x) emissions reductions are not as effective. The results presented here will help the development of specific emission control strategies for future attainment of the National Ambient Air Quality Standards in the region.
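    The brute-force sensitivity procedure described above differences a perturbed run against the base case and, to first order, scales the result up to a full zero-out of the source. Schematically (the concentrations below are placeholders, not CMAQ output):

        base_pm25    = 14.2   # ug/m3, base-case 2009 run (placeholder)
        reduced_pm25 = 13.1   # ug/m3, same run with one source's SO2 cut 30% (placeholder)

        # first-order extrapolation of the 30% perturbation to a 100% reduction
        contribution = (base_pm25 - reduced_pm25) / 0.30
        print(f"estimated PM2.5 from this source: {contribution:.1f} ug/m3")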

  20. Salmonellosis outbreaks in the United States due to fresh produce: sources and potential intervention measures.

    Science.gov (United States)

    Hanning, Irene B; Nutt, J D; Ricke, Steven C

    2009-01-01

    Foodborne Salmonella spp. is a leading cause of foodborne illness in the United States each year. Traditionally, most cases of salmonellosis were thought to originate from meat and poultry products. However, an increasing number of salmonellosis outbreaks are occurring as a result of contaminated produce. Several produce items specifically have been identified in outbreaks, and the ability of Salmonella to attach or internalize into vegetables and fruits may be factors that make these produce items more likely to be sources of Salmonella. In addition, environmental factors including contaminated water sources used to irrigate and wash produce crops have been implicated in a large number of outbreaks. Salmonella is carried by both domesticated and wild animals and can contaminate freshwater by direct or indirect contact. In some cases, direct contact of produce or seeds with contaminated manure or animal wastes can lead to contaminated crops. This review examines outbreaks of Salmonella due to contaminated produce, the potential sources of Salmonella, and possible control measures to prevent contamination of produce.

  1. Atmospheric Nitrogen Deposition in the Western United States: Sources, Sinks and Changes over Time

    Science.gov (United States)

    Anderson, Sarah Marie

    Anthropogenic activities have greatly modified the way nitrogen moves through the atmosphere and terrestrial and aquatic environments. Excess reactive nitrogen generated through fossil fuel combustion, industrial fixation, and intensification of agriculture is not confined to anthropogenic systems but leaks into natural ecosystems, with consequences including acidification, eutrophication, and biodiversity loss. A better understanding of where excess nitrogen originates and how that changes over time is crucial to identifying when, where, and to what degree environmental impacts occur. A major route into ecosystems for excess nitrogen is through atmospheric deposition. Excess nitrogen is emitted to the atmosphere, where it can be transported great distances before being deposited back to the Earth's surface. Analyzing the composition of atmospheric nitrogen deposition and of biological indicators that reflect deposition can provide insight into the emission sources, as well as into the processes and atmospheric chemistry that occur during transport and what drives variation in these sources and processes. Chapter 1 provides a review and proof of concept of lichens as biological indicators and of how their elemental and stable isotope composition can elucidate variation in amounts and emission sources of nitrogen over space and time. Information on amounts and emission sources of nitrogen deposition helps inform natural resources and land management decisions by helping to identify potentially impacted areas and the causes of those impacts. Chapter 2 demonstrates that herbaria lichen specimens and field lichen samples reflect historical changes in atmospheric nitrogen deposition from urban and agricultural sources across the western United States. Nitrogen deposition increases throughout most of the 20th century because of multiple types of emission sources, until the implementation of the Clean Air Act Amendments of 1990 eventually decreases nitrogen deposition around the turn of

  2. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from
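    An illustrative instance of test-code refactoring (our example in pytest, not taken from the paper): duplicated setup in two tests is extracted into a shared fixture.

        import pytest

        # before: each test repeats its own setup
        def test_deposit_before():
            account = {"balance": 0}
            account["balance"] += 10
            assert account["balance"] == 10

        # after: the duplicated setup is extracted into a fixture
        @pytest.fixture
        def account():
            return {"balance": 0}

        def test_deposit(account):
            account["balance"] += 10
            assert account["balance"] == 10

        def test_withdraw(account):
            account["balance"] -= 5
            assert account["balance"] == -5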

  3. Application of Phasor Measurement Units for Protection of Distribution Networks with High Penetration of Photovoltaic Sources

    Science.gov (United States)

    Meskin, Matin

    The integration of distributed generation (DG) units at the distribution level, as a reasonable alternative to costly network expansion, is increasing to meet the growth in demand. This integration brings many advantages to consumers and power grids, but it also gives rise to new challenges in protection and control. Recent research has brought to light the negative effects of DG units on short circuit currents and overcurrent (OC) protection systems in distribution networks. Change in the direction of fault current flow, increment or decrement of the fault current magnitude, protection blindness, feeder sympathy trips, nuisance trips of interrupting devices, and the disruption of coordination between protective devices are some potential impacts of DG unit integration. Among other types of DG units, the integration of renewable energy resources into the electric grid has seen vast improvement in recent years. In particular, the interconnection of photovoltaic (PV) sources to medium voltage (MV) distribution networks has experienced a rapid increase in the last decade. In this work, the effect of PV sources on conventional OC relays in MV distribution networks is shown. It is indicated that PV output fluctuation, due to changes in solar radiation, causes the magnitude and direction of the current to change haphazardly. These variations may result in the poor operation of OC relays as the main protective devices in MV distribution networks. In other words, due to the bi-directional power flow and the fluctuation of current magnitude occurring in the presence of PV sources, a specific setting of OC relays is difficult to realize. Therefore, OC relays may trip even under normal conditions. To improve OC relay operation, a voltage-dependent overcurrent protection is proposed. Although this new method prevents the OC relay from maloperating, its ability to detect earth faults and high impedance faults is poor. Thus, a
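    The voltage-dependent idea can be caricatured in a few lines: the relay trips only when high current coincides with a depressed voltage, so PV-driven current swings at healthy voltage do not cause a trip. The thresholds below are illustrative placeholders, not the thesis' settings.

        def should_trip(i_pu, v_pu, i_pickup=1.5, v_restraint=0.85):
            """Trip only if overcurrent coincides with a voltage dip."""
            return i_pu > i_pickup and v_pu < v_restraint

        print(should_trip(1.8, 0.60))   # fault: high current, collapsed voltage -> True
        print(should_trip(1.8, 0.98))   # PV in-feed swing at healthy voltage -> False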

  4. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...
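
    To make the matrix-combination step concrete, here is a minimal sketch of what an intermediate node computes, working over a prime field GF(p) for simplicity. The paper's algorithms also choose the field size and the matrices; here they are random placeholders.

    ```python
    import numpy as np

    p, L = 257, 4                        # assumed prime field size and vector length
    rng = np.random.default_rng(0)

    x1 = rng.integers(0, p, L)           # incoming packet (vector) on edge 1
    x2 = rng.integers(0, p, L)           # incoming packet (vector) on edge 2
    M1 = rng.integers(0, p, (L, L))      # L x L coding matrix for edge 1
    M2 = rng.integers(0, p, (L, L))      # L x L coding matrix for edge 2

    # The outgoing packet is a matrix combination of the inputs, generalizing
    # the scalar coding coefficients of classical network coding.
    y = (M1 @ x1 + M2 @ x2) % p
    print(y)
    ```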

  5. Evaluation of the methodology for dose calculation in microdosimetry with electrons sources using the MCNP5 Code

    International Nuclear Information System (INIS)

    Cintra, Felipe Belonsi de

    2010-01-01

    This study made a comparison between some of the major transport codes that employ the stochastic Monte Carlo approach for dosimetric calculations in nuclear medicine. We analyzed in detail the various physical and numerical models used by the MCNP5 code in relation to codes such as EGS and Penelope. The potential and limitations of MCNP5 for solving microdosimetry problems were highlighted. The condensed-history methodology used by MCNP resulted in lower values for the energy deposition calculation. This reflects a known feature of condensed histories: the method underestimates both the number of collisions along the trajectory of the electron and the number of secondary particles created. The use of transport codes such as MCNP and Penelope at micrometer scales received special attention in this work. Class I and class II codes were studied and their main resources were exploited in order to transport electrons, which are of particular importance in dosimetry. It is expected that the evaluation of the available methodologies mentioned here contributes to a better understanding of the behavior of these codes, especially for this class of problems, common in microdosimetry. (author)

  6. Automatic examination of nuclear reactor vessels with focused search units. Status and typical application to inspections performed in accordance with ASME code

    International Nuclear Information System (INIS)

    Verger, B.; Saglio, R.

    1981-05-01

    The use of focused search units in nuclear reactor vessel examinations has significantly increased the capability of flaw indication detection and characterization. These search units in particular allow a more accurate sizing of indications and a more efficient follow-up of their history. In this respect, they are a unique tool in the area of safety and reliability of installations. It was this type of search unit which was adopted to perform the examinations required within the scope of in-service inspections of all PWR reactors of the French nuclear program. This paper summarizes the results gathered through the 41 examinations performed over the last five years. A typical application of focused search units in automated inspections performed in accordance with ASME code requirements on PWR nuclear reactor vessels is then described

  7. Isotopes, Inventories and Seasonality: Unraveling Methane Source Distribution in the Complex Landscapes of the United Kingdom.

    Science.gov (United States)

    Lowry, D.; Fisher, R. E.; Zazzeri, G.; Lanoisellé, M.; France, J.; Allen, G.; Nisbet, E. G.

    2017-12-01

    Unlike the big open landscapes of many continents, with large area sources dominated by one particular methane emission type that can be isotopically characterized by flight measurements and sampling, the complex patchwork of urban, fossil and agricultural methane sources across NW Europe requires detailed ground surveys for characterization (Zazzeri et al., 2017). Here we outline the findings from multiple seasonal urban and rural measurement campaigns in the United Kingdom. These surveys aim to: 1) assess source distribution and baseline in regions of planned fracking, and relate these to on-site continuous baseline climatology; 2) characterize spatial and seasonal differences in the isotopic signatures of the UNFCCC source categories; and 3) assess the spatial validity of the 1 × 1 km UK inventory for large continuous emitters, proposed point sources, and seasonal / ephemeral emissions. The UK inventory suggests that 90% of methane emissions are from 3 source categories: ruminants, landfill and gas distribution. Bag sampling and GC-IRMS δ13C analysis shows that landfill gives a constant signature of -57 ±3 ‰ throughout the year. Fugitive gas emissions are consistent regionally, depending on the North Sea supply regions feeding the network (-41 ± 2 ‰ in N England, -37 ± 2 ‰ in SE England). Ruminant, mostly cattle, emissions are far more complex, as the animals spend winters in barns and summers in fields, but they are essentially a mix of 2 end members, breath at -68 ±3 ‰ and manure at -51 ±3 ‰, resulting in broad summer field emission plumes of -64 ‰ and point winter barn emission plumes of -58 ‰. The inventory correctly locates emission hotspots from landfill, larger sewage treatment plants and gas compressor stations, giving a broad overview of emission distribution for regional model validation. Mobile surveys are adding an extra layer of detail to this which, combined with isotopic characterization, has identified the spatial distribution of gas pipe leaks
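
    A quick sanity check of the two end-member interpretation quoted above can be done with a standard isotope mass balance. This is a back-of-envelope illustration using the abstract's numbers, not a calculation from the paper.

    ```python
    # Two end-member mixing: d_mix = f*d_breath + (1 - f)*d_manure,
    # solved for the breath fraction f (all values in permil, from the abstract).
    d_breath, d_manure = -68.0, -51.0

    def breath_fraction(d_mix):
        return (d_mix - d_manure) / (d_breath - d_manure)

    print(breath_fraction(-64.0))  # summer field plumes: ~0.76 breath-derived
    print(breath_fraction(-58.0))  # winter barn plumes:  ~0.41 breath-derived
    ```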

  8. Validation of the Open Source Code_Aster Software Used in the Modal Analysis of the Fluid-filled Cylindrical Shell

    Directory of Open Access Journals (Sweden)

    B D. Kashfutdinov

    2017-01-01

    The paper deals with a modal analysis of an elastic cylindrical shell with a clamped bottom, partially filled with fluid, in the open source Code_Aster software using the finite element method. Natural frequencies and modes obtained in Code_Aster are compared to experimental and theoretical data. The aim of this paper is to show that Code_Aster has all the necessary tools for solving fluid-structure interaction problems and can be used in industrial projects as an alternative to commercial software. The available free pre- and post-processors with a graphical user interface that are compatible with Code_Aster allow creating complex models and processing the results. The paper presents new validation results of the open source Code_Aster software used to calculate the small natural modes of a cylindrical shell partially filled with a non-viscous compressible barotropic fluid under a gravity field. The displacement of the middle surface of the thin shell and the displacement of the fluid relative to the equilibrium position are described by a coupled hydro-elasticity problem. The fluid flow is considered potential. The finite element method (FEM) is used, and the features of the computational model are described. The resolution equation has symmetric block matrices. For comparison of the results, the well-known modal analysis problem of a cylindrical shell with a flat non-deformable bottom, filled with a compressible fluid, is discussed. The numerical parameters of the scheme were chosen in accordance with well-known experimental and analytical data. Three cases were taken into account: an empty, a partially filled and a completely filled cylindrical shell. The frequencies obtained with Code_Aster are in good agreement with those from experiment and the analytical solution, as well as with FEM results obtained in other software; the deviation from experiment and from the analytical solution is approximately the same as for the other software. The obtained results extend the set of validation tests for

  9. Radiation Tolerance Qualification Tests of the Final Source Interface Unit for the ALICE Experiment

    CERN Document Server

    Dénes, E; Futó, E; Kerék, A; Kiss, T; Molnár, J; Novák, D; Soós, C; Tölyhi, T; Van de Vyvre, P

    2007-01-01

    The ALICE Detector Data Link (DDL) is a high-speed optical link designed to interface the readout electronics of the ALICE sub-detectors to the DAQ computers. The Source Interface Unit (SIU) of the DDL will operate in a radiation environment. Previous tests showed that configuration loss of SRAM-based FPGA devices may happen and that the frequency of undetected data errors in the FPGA user memory area is also not acceptable. Therefore, we redesigned the SIU card using another FPGA based on flash technology. In order to detect bit errors in the user memory, we added parity check logic to the design. The new SIU has been extensively tested using neutron and proton irradiation to verify its radiation tolerance. In this paper we summarize the design changes, present the final design, and report the results of the radiation tolerance measurements on the final card.

  10. Replacement of the moderator cell unit of JRR-3's cold neutron source facility

    International Nuclear Information System (INIS)

    Hazawa, Tomoya; Nagahori, Kazuhisa; Kusunoki, Tsuyoshi

    2006-10-01

    The moderator cell of the JRR-3 cold neutron source (CNS) facility converts thermal neutrons into cold neutrons by passing them through cold liquid hydrogen. The cold neutrons are used for materials and life science research such as neutron scattering. The CNS has been operated since the start of JRR-3 operation in 1990. The moderator cell containing the liquid hydrogen is made of stainless steel, and its irradiation lifetime is limited to 7 years due to irradiation embrittlement. The first replacement was done using a spare part made in France. The replacement work of 2006 was carried out using a domestically produced moderator cell unit. The following technologies were developed for the moderator cell unit production: 1) black surface treatment of the moderator cell to increase radiative heat transfer; 2) bending technology for concentric triple tubes consisting of an inner tube, an outer tube and a vacuum insulation tube; 3) manufacturing techniques for the moderator cell's complicated shapes. The replacement work was carried out according to detailed, planned work procedures. As a result, the number of working days was reduced to 80% of the previous replacement, and the radiation dose was also reduced owing to the shorter working time. It was verified by measurement of neutron characteristics that the replaced moderator cell has the same performance as the old one. The domestic manufacturing of the moderator cell succeeded, and the replacement cost was reduced through the development of domestic production technology. (author)

  11. Inventory of power plants in the United States. [By state within standard Federal Regions, using county codes

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-12-01

    The purpose of this inventory of power plants is to provide a ready reference for planners whose focus is on the state, standard Federal region, and/or national level. Thus the inventory is compiled alphabetically by state within standard Federal regions. The units are listed alphabetically within electric utility systems which in turn are listed alphabetically within states. The locations are identified to county level according to the Federal Information Processing Standards Publication Counties and County Equivalents of the States of the United States. Data compiled include existing and projected electrical generation units, jointly owned units, and projected construction units.

  12. United States‐Mexican border watershed assessment: Modeling nonpoint source pollution in Ambos Nogales

    Science.gov (United States)

    Norman, Laura M.

    2007-01-01

    Ecological considerations need to be interwoven with economic policy and planning along the United States‐Mexican border. Non‐point source pollution can have significant implications for the availability of potable water and the continued health of borderland ecosystems in arid lands. However, environmental assessments in this region present a host of unique issues and problems. A common obstacle to the solution of these problems is the integration of data with different resolutions, naming conventions, and quality to create a consistent database across the binational study area. This report presents a simple modeling approach to predict nonpoint source pollution that can be used for border watersheds. The modeling approach links a hillslope-scale erosion‐prediction model and a spatially derived sediment‐delivery model within a geographic information system to estimate erosion, sediment yield, and sediment deposition across the Ambos Nogales watershed in Sonora, Mexico, and Arizona. This paper discusses the procedures used for creating a watershed database to apply the models and presents an example of the modeling approach applied to a conservation‐planning problem.

  13. Real-time speckle variance swept-source optical coherence tomography using a graphics processing unit.

    Science.gov (United States)

    Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D

    2012-07-01

    Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
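
    The SV computation itself reduces to an interframe variance at each pixel, which is why it parallelizes so well on a GPU. A minimal NumPy sketch of that step follows; the frame count and image size match the abstract, and the random array is only a stand-in for registered structural frames.

    ```python
    import numpy as np

    N, H, W = 4, 512, 512
    frames = np.random.rand(N, H, W)  # stand-in for N registered OCT frames

    # Speckle variance image: per-pixel intensity variance across the frame
    # axis. Flowing blood decorrelates between frames and yields high
    # variance, while static tissue yields low variance.
    sv = frames.var(axis=0)
    ```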

  14. Quality of Source Water from Public-Supply Wells in the United States, 1993-2007

    Science.gov (United States)

    Toccalino, Patricia L.; Norman, Julia E.; Hitt, Kerie J.

    2010-01-01

    More than one-third of the Nation's population receives their drinking water from public water systems that use groundwater as their source. The U.S. Geological Survey (USGS) sampled untreated source water from 932 public-supply wells, hereafter referred to as public wells, as part of multiple groundwater assessments conducted across the Nation during 1993-2007. The objectives of this study were to evaluate (1) contaminant occurrence in source water from public wells and the potential significance of contaminant concentrations to human health, (2) national and regional distributions of groundwater quality, and (3) the occurrence and characteristics of contaminant mixtures. Treated finished water was not sampled. The 932 public wells are widely distributed nationally, include wells in selected parts of 41 states, and withdraw water from parts of 30 regionally extensive aquifers used for public water supply. These wells are distributed among 629 unique public water systems (less than 1 percent of all groundwater-supplied public water systems in the United States), but the wells were randomly selected within the sampled hydrogeologic settings to represent typical aquifer conditions. Samples from the 629 systems represent source water used by one-quarter of the U.S. population served by groundwater-supplied public water systems, or about 9 percent of the entire U.S. population in 2008. One groundwater sample was collected prior to treatment or blending from each of the 932 public wells and analyzed for as many as six water-quality properties and 215 contaminants. Consistent with the terminology used in the Safe Drinking Water Act (SDWA), all constituents analyzed in water samples in this study are referred to as 'contaminants'. More contaminant groups were assessed in this study than in any previous national study of public wells and included major ions, nutrients, radionuclides, trace elements, pesticide compounds, volatile organic compounds (VOCs), and fecal

  15. Matlab Source Code for Species Transport through Nafion Membranes in Direct Ethanol, Direct Methanol, and Direct Glucose Fuel Cells

    OpenAIRE

    JH, Summerfield; MW, Manley

    2016-01-01

    A simple simulation of chemical species movement is presented. The species traverse a Nafion membrane in a fuel cell. Three cells are examined: direct methanol, direct ethanol, and direct glucose. The species are tracked using excess proton concentration, electric field strength, and voltage. The Matlab computer code is provided.
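
    The record above only names the tracked quantities, so as a rough illustration of the kind of calculation involved, here is a hedged 1D drift-diffusion (Nernst-Planck) sketch for proton transport under a constant electric field. All parameter values, the constant-field assumption, and the boundary conditions are illustrative choices, not taken from the paper; the original code is Matlab, while this sketch is Python.

    ```python
    import numpy as np

    D = 9.3e-9          # proton diffusivity in water, m^2/s (approximate)
    E = 1.0e4           # assumed constant field across the membrane, V/m
    z, F, R, T = 1, 96485.0, 8.314, 298.0
    v = z * F * D * E / (R * T)     # electromigration drift velocity, m/s

    nx, dx = 200, 1e-7              # 20 um domain
    dt = 0.2 * dx**2 / D            # explicit time step with stability margin
    c = np.zeros(nx)
    c[0] = 1.0                      # fixed concentration at the feed side

    for _ in range(5000):
        lap  = (np.roll(c, -1) - 2*c + np.roll(c, 1)) / dx**2   # diffusion
        grad = (np.roll(c, -1) - np.roll(c, 1)) / (2*dx)        # advection
        c[1:-1] += dt * (D*lap - v*grad)[1:-1]  # update interior points only
        c[0], c[-1] = 1.0, 0.0                  # boundary conditions
    ```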

  16. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on studies of cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work addresses the problem of running accurate simulations of experimental devices inside a reactor reflector that remain valid for parametric studies. On one hand, deterministic codes have reasonable computation times but pose problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but they need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone within the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be carried out with a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for condensed matter research, makes it possible to obtain fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the core, where neutrons are created, to the exit of the neutron guides, at the material samples. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  17. Surviving "Payment by Results": a simple method of improving clinical coding in burn specialised services in the United Kingdom.

    Science.gov (United States)

    Wallis, Katy L; Malic, Claudia C; Littlewood, Sonia L; Judkins, Keith; Phipps, Alan R

    2009-03-01

    Coding of inpatient episodes plays an important role in determining the financial remuneration of a clinical service; insufficient or incomplete data may have very significant consequences for its viability. We created a document that improves the coding process in our Burns Centre. At the Yorkshire Regional Burns Centre, an inpatient summary sheet was designed to prospectively record and present essential information on a daily basis, for use in the coding process. The level of care was also recorded. A 3-month audit was conducted to assess the efficacy of the new forms. Forty-nine patients were admitted to the Burns Centre, with a mean age of 27.6 years and TBSA ranging from 0.5% to 65%. The total stay in the Burns Centre was 758 days, of which 22% were at level B3-B5 and 39% at level B2. The use of the new discharge document identified potential income of about GBP 500,000 at our local daily tariffs for high dependency and intensive care. The new form is able to ensure a high quality of coding, with a possible direct impact on the financial resources accrued for burn care.

  18. Recruitment and rate coding organisation for soleus motor units across entire range of voluntary isometric plantar flexions.

    Science.gov (United States)

    Oya, Tomomichi; Riek, Stephan; Cresswell, Andrew G

    2009-10-01

    Unlike for upper limb muscles, it remains undocumented how motor units in the soleus muscle are organised in terms of recruitment range and discharge rates with respect to their recruitment and de-recruitment thresholds. The possible influence of neuromodulation, such as persistent inward currents (PICs), on lower limb motor unit recruitment and discharge rates has also yet to be reported. To address these issues, electromyographic (EMG) activity from the soleus muscle was recorded using selective branched-wire intramuscular electrodes during ramp-and-hold contractions with intensities up to maximal voluntary contraction (MVC). The multiple single motor unit activities were then derived using a decomposition technique. The onset-offset hysteresis of motor unit discharge, i.e. the difference between recruitment and de-recruitment thresholds, as well as the PIC magnitude calculated by a paired motor unit analysis, were used to examine the neuromodulatory effects on discharge behaviours such as minimum firing rate, peak firing rate and degree of increase in firing rate. Forty-two clearly identified motor units from five subjects revealed that soleus motor units are recruited progressively from rest to contraction strengths close to 95% of MVC, with low-threshold motor units discharging action potentials more slowly at recruitment, and with a lower peak rate, than later-recruited high-threshold units. This observation is in contrast to the 'onion skin phenomenon' often reported for upper limb muscles. Based on positive correlations of the peak discharge rates, initial rates and recruitment order of the units with the magnitude of the onset-offset hysteresis, and not with the PIC contribution, we conclude that discharge behaviours among motor units appear to be related to variation in an intrinsic property other than PICs.

  19. Are biogenic emissions a significant source of summertime atmospheric toluene in the rural Northeastern United States?

    Directory of Open Access Journals (Sweden)

    M. L. White

    2009-01-01

    Summertime atmospheric toluene enhancements at Thompson Farm in the rural northeastern United States were unexpected and resulted in a toluene/benzene seasonal pattern that was distinctly different from that of other anthropogenic volatile organic compounds. Consequently, three hydrocarbon sources were investigated for potential contributions to the enhancements during 2004–2006. These included: (1) increased warm season fuel evaporation, coupled with changes in reformulated gasoline (RFG) content to meet US EPA summertime volatility standards, (2) local industrial emissions and (3) local vegetative emissions. The contribution of fuel evaporation emissions to summer toluene mixing ratios was estimated to range from 16 to 30 pptv/day, and did not fully account for the observed enhancements (20–50 pptv) in 2004–2006. Static chamber measurements of alfalfa, a crop at Thompson Farm, and dynamic branch enclosure measurements of loblolly pine trees in North Carolina suggested vegetative emissions of 5 and 12 pptv/day for crops and coniferous trees, respectively. Toluene emission rates from alfalfa are potentially much larger, as these plants were only sampled at the end of the growing season. Measured biogenic fluxes were of the same order of magnitude as the influence from gasoline evaporation and industrial sources (regional industrial emissions estimated at 7 pptv/day) and indicated that local vegetative emissions make a significant contribution to summertime toluene enhancements. Additional studies are needed to characterize the variability and factors controlling toluene emissions from alfalfa and other vegetation types throughout the growing season.
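
    Tallying the per-day contributions quoted in the abstract shows why the vegetative term matters; this is simple arithmetic on the reported numbers, taking their comparability at face value.

    ```python
    # All values in pptv/day, as quoted in the abstract.
    fuel_evap_lo, fuel_evap_hi = 16, 30   # gasoline evaporation estimate
    crops, conifers, industrial = 5, 12, 7

    # Vegetative emissions (5 + 12 = 17) rival the industrial estimate and a
    # large share of the fuel-evaporation range.
    print(fuel_evap_lo + crops + conifers + industrial)  # low total:  40
    print(fuel_evap_hi + crops + conifers + industrial)  # high total: 54
    ```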

  20. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    Science.gov (United States)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  1. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    Science.gov (United States)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests was performed primarily for code and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  2. Assessment of RELAP5/MOD2 code using loss of offsite power transient data of KNU [Korea Nuclear Unit] No. 1 Plant

    International Nuclear Information System (INIS)

    Chung, Bud-Dong; Kim, Hho-Jung

    1990-04-01

    This report presents a code assessment study based on a real plant transient that occurred on June 9, 1981 at KNU #1 (Korea Nuclear Unit No. 1), a two-loop Westinghouse PWR plant of 587 MWe. The loss of offsite power transient occurred at 77.5% reactor power with a 0.5%/hr power ramp. The plant data were collected from available on-line plant records and computer diagnostics. The transient was simulated with RELAP5/MOD2/36.05 and the results were compared with the plant data to assess the code's weaknesses and strengths. Some nodalization studies were performed to contribute to developing a guideline for PWR nodalization for transient analysis. 5 refs., 18 figs., 3 tabs

  3. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    Science.gov (United States)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of service (QoS) guarantees in real-time communication are significantly important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. However, the existing frame-by-frame approach of the Moving Pictures Expert Group (MPEG) cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on the end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.

  4. PURDU-WINCOF: A computer code for establishing the performance of a fan-compressor unit with water ingestion

    Science.gov (United States)

    Leonardo, M.; Tsuchiya, T.; Murthy, S. N. B.

    1982-01-01

    A model for predicting the performance of a multi-spool axial-flow compressor with a fan during operation with water ingestion was developed incorporating several two-phase fluid flow effects as follows: (1) ingestion of water, (2) droplet interaction with blades and resulting changes in blade characteristics, (3) redistribution of water and water vapor due to centrifugal action, (4) heat and mass transfer processes, and (5) droplet size adjustment due to mass transfer and mechanical stability considerations. A computer program, called the PURDU-WINCOF code, was generated based on the model utilizing a one-dimensional formulation. An illustrative case serves to show the manner in which the code can be utilized and the nature of the results obtained.

  5. A regional modeling framework of phosphorus sources and transport in streams of the southeastern United States

    Science.gov (United States)

    Garcia, Ana Maria.; Hoos, Anne B.; Terziotti, Silvia

    2011-01-01

    We applied the SPARROW model to estimate phosphorus transport from catchments to stream reaches and subsequent delivery to major receiving water bodies in the Southeastern United States (U.S.). We show that six source variables and five land-to-water transport variables are significant (p < 0.05) in explaining 67% of the variability in long-term log-transformed mean annual phosphorus yields. Three land-to-water variables are a subset of landscape characteristics that have been used as transport factors in phosphorus indices developed by state agencies and are identified through experimental research as influencing land-to-water phosphorus transport at field and plot scales. Two land-to-water variables – soil organic matter and soil pH – are associated with phosphorus sorption, a significant finding given that most state-developed phosphorus indices do not explicitly contain variables for sorption processes. Our findings for Southeastern U.S. streams emphasize the importance of accounting for phosphorus present in the soil profile to predict attainable instream water quality. Regional estimates of phosphorus associated with soil-parent rock were highly significant in explaining instream phosphorus yield variability. Model predictions associate 31% of phosphorus delivered to receiving water bodies to geology and the highest total phosphorus yields in the Southeast were catchments with already high background levels that have been impacted by human activity.
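
    For orientation, a SPARROW-type model predicts catchment yield as a sum of source terms attenuated by exponential land-to-water delivery factors. The toy sketch below shows this general functional form with placeholder variables and made-up coefficients; it is not the study's calibrated model, and stream attenuation is omitted.

    ```python
    import numpy as np

    def sparrow_yield(sources, betas, landscape, thetas):
        """Toy SPARROW-style mean annual yield: source terms scaled by their
        coefficients, attenuated by an exponential land-to-water delivery
        factor (form simplified from the actual model)."""
        delivery = np.exp(-np.dot(thetas, landscape))
        return np.dot(betas, sources) * delivery

    # e.g. two hypothetical sources (fertilizer input, geologic background P)
    # and two land-to-water variables (soil organic matter, soil pH term):
    print(sparrow_yield(sources=np.array([120.0, 45.0]),
                        betas=np.array([0.02, 0.05]),
                        landscape=np.array([3.1, 0.8]),
                        thetas=np.array([0.10, 0.25])))
    ```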

  6. Development of an expert system for tsunami warning: a unit source approach

    International Nuclear Information System (INIS)

    Roshan, A.D.; Pisharady, Ajai S.; Bishnoi, L.R.; Shah, Meet

    2015-01-01

    The coastal region of India has experienced tsunamis since historical times, and many nuclear facilities, including nuclear power plants (NPPs), located along the coast are thus exposed to tsunami hazards. For the safety of these facilities as well as of the public, it is necessary to predict whether a recorded earthquake event can generate a tsunami and to evaluate the tsunami hazard posed by the earthquake. To address these concerns, this work designs an expert system for tsunami warning for the Indian coast, with emphasis on the evaluation of tsunami heights and arrival times at various nuclear facility sites. The expert system identifies the possibility or otherwise of a tsunamigenic event based on earthquake data inputs. Rupture parameters are worked out for the event, and unit tsunami source estimates, which are available as a precomputed database, are combined appropriately to estimate the wave heights and times of arrival at desired locations along the coast. The system also predicts tsunami wave heights at some pre-defined locations such as nuclear power plant (NPP) and other nuclear facility sites. The time of arrival of the first wave along the Indian coast is also evaluated
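
    The unit source approach amounts to a linear superposition: precomputed responses for unit slip on each source patch are scaled by the slip inferred from the rupture parameters and summed at each coastal site. The sketch below illustrates this with synthetic stand-in waveforms; the patch count, slip values, and the 0.05 m detection threshold are invented for illustration.

    ```python
    import numpy as np

    t = np.linspace(0, 7200, 721)          # two hours, 10 s resolution
    n_patches = 3

    # Stand-ins for the precomputed unit-source wave height responses at one
    # coastal site (in a real system these come from a hydrodynamic database).
    unit_waveforms = np.array([np.sin(2 * np.pi * (t - 600 * k) / 1800)
                               * (t > 600 * k) for k in range(n_patches)])

    slip = np.array([1.8, 2.4, 0.7])       # slip (m) inferred for each patch

    eta = slip @ unit_waveforms            # linear superposition of responses
    arrival = t[np.argmax(np.abs(eta) > 0.05)]   # first-wave arrival estimate
    print(f"peak height {np.abs(eta).max():.2f} m, first arrival {arrival:.0f} s")
    ```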

  7. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities Project

    Energy Technology Data Exchange (ETDEWEB)

    Kevin J. Coppersmith; Lawrence A. Salomone; Chris W. Fuller; Laura L. Glaser; Kathryn L. Hanson; Ross D. Hartleb; William R. Lettis; Scott C. Lindvall; Stephen M. McDuffie; Robin K. McGuire; Gerry L. Stirewalt; Gabriel R. Toro; Robert R. Youngs; David L. Slayter; Serkan B. Bozkurt; Randolph J. Cumbest; Valentina Montaldo Falero; Roseanne C. Perman; Allison M. Shumway; Frank H. Syms; Martitia (Tish) P. Tuttle

    2012-01-31

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986), and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  8. Central and Eastern United States (CEUS) Seismic Source Characterization (SSC) for Nuclear Facilities

    International Nuclear Information System (INIS)

    Coppersmith, Kevin J.; Salomone, Lawrence A.; Fuller, Chris W.; Glaser, Laura L.; Hanson, Kathryn L.; Hartleb, Ross D.; Lettis, William R.; Lindvall, Scott C.; McDuffie, Stephen M.; McGuire, Robin K.; Stirewalt, Gerry L.; Toro, Gabriel R.; Youngs, Robert R.; Slayter, David L.; Bozkurt, Serkan B.; Cumbest, Randolph J.; Falero, Valentina Montaldo; Perman, Roseanne C.; Shumway, Allison M.; Syms, Frank H.; Tuttle, Martitia P.

    2012-01-01

    This report describes a new seismic source characterization (SSC) model for the Central and Eastern United States (CEUS). It will replace the Seismic Hazard Methodology for the Central and Eastern United States, EPRI Report NP-4726 (July 1986), and the Seismic Hazard Characterization of 69 Nuclear Plant Sites East of the Rocky Mountains, Lawrence Livermore National Laboratory Model (Bernreuter et al., 1989). The objective of the CEUS SSC Project is to develop a new seismic source model for the CEUS using a Senior Seismic Hazard Analysis Committee (SSHAC) Level 3 assessment process. The goal of the SSHAC process is to represent the center, body, and range of technically defensible interpretations of the available data, models, and methods. Input to a probabilistic seismic hazard analysis (PSHA) consists of both seismic source characterization and ground motion characterization. These two components are used to calculate probabilistic hazard results (or seismic hazard curves) at a particular site. This report provides a new seismic source model. Results and Findings: The product of this report is a regional CEUS SSC model. This model includes consideration of an updated database, full assessment and incorporation of uncertainties, and the range of diverse technical interpretations from the larger technical community. The SSC model will be widely applicable to the entire CEUS, so this project uses a ground motion model that includes generic variations to allow for a range of representative site conditions (deep soil, shallow soil, hard rock). Hazard and sensitivity calculations were conducted at seven test sites representative of different CEUS hazard environments. Challenges and Objectives: The regional CEUS SSC model will be of value to readers who are involved in PSHA work and who wish to use an updated SSC model. This model is based on a comprehensive and traceable process, in accordance with SSHAC guidelines in NUREG/CR-6372, Recommendations for Probabilistic

  9. Investigation of some possible changes in Am-Be neutron source configuration in order to increase the thermal neutron flux using Monte Carlo code

    Science.gov (United States)

    Basiri, H.; Tavakoli-Anbaran, H.

    2018-01-01

    The Am-Be neutron source is based on the (α, n) reaction and generates neutrons in the energy range of 0-11 MeV. Since thermal neutrons are widely used in different fields, in this work we investigate how to improve the source configuration in order to increase the thermal flux. The suggested changes include a spherical moderator instead of the common cylindrical geometry, a reflector layer, and an appropriate selection of materials, in order to achieve the maximum thermal flux. All calculations were done using the MCNP Monte Carlo code. Our final results indicated that a spherical paraffin moderator with a beryllium reflector layer can efficiently increase the thermal neutron flux of the Am-Be source.

  10. An improvement of estimation method of source term to the environment for interfacing system LOCA for typical PWR using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Ahn, Kwang Il [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    Interfacing-system loss-of-coolant accident (ISLOCA) has been identified as the most hazardous accident scenario in typical PWR plants. The present study, as an effort to improve knowledge of the source term to the environment during an ISLOCA, focuses on an improvement of the estimation method. The improvement takes into account the effects of the broken pipeline and of the auxiliary building structures relevant to an ISLOCA. The source term to the environment was estimated for the OPR-1000 plants with the MELCOR code, version 1.8.6. The key features of the source term showed that a massive amount of fission products was released from the beginning of core degradation until the vessel breach. The released amount of fission products may be affected by the broken pipeline and the auxiliary building structure associated with the release pathway.

  11. Overview of Development and Deployment of Codes, Standards and Regulations Affecting Energy Storage System Safety in the United States

    Energy Technology Data Exchange (ETDEWEB)

    Conover, David R.

    2014-08-22

    This report acquaints stakeholders and interested parties involved in the development and/or deployment of energy storage systems (ESS) with the subject of safety-related codes, standards and regulations (CSRs). It is hoped that users of this document gain a more in-depth and uniform understanding of safety-related CSR development and deployment, fostering improved communications among all ESS stakeholders and the collaboration needed to achieve more timely acceptance and approval of safe ESS technology through appropriate CSRs.

  12. Development and verification of a leningrad NPP unit 1 living PSA model in the INL SAPHIRE code format for prompt operational safety level monitoring

    International Nuclear Information System (INIS)

    Bronislav, Vinnikov

    2007-01-01

    The first part of the paper presents the results of work carried out in complete conformity with the Technical Assignment developed by the Leningrad Nuclear Power Plant. The initial scientific and technical information, contained in the In-Depth Safety Assessment Reports, was given to the author: graphical fault trees of the safety systems and auxiliary technical systems, event trees for the necessary number of initiating events, and information about the failure probabilities of the basic components of the nuclear unit. On the basis of this information, fed into the USA Idaho National Laboratory (INL) SAPHIRE code, we developed an electronic version of the database of failure probabilities of the components of the technical systems. We then developed electronic versions of the necessary fault trees and event trees, and finally carried out the linkage of the event trees. This work resulted in the Living PSA (LPSA, Living Probabilistic Safety Assessment) model of Leningrad NPP Unit 1. The LPSA model is fully adapted to the USA INL SAPHIRE Risk Monitor. The second part of the paper presents an analysis of fire consequences in various places of Leningrad NPP Unit 1. The computations were carried out with the LPSA model developed in the SAPHIRE code format. On the basis of the computations, the order of priority for the implementation of fire prevention measures was established. (author)
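
    The quantification step behind such a model is conceptually simple once the fault trees are reduced to minimal cut sets. The sketch below shows the standard rare-event approximation with invented basic-event names and probabilities; it illustrates the PSA arithmetic, not the SAPHIRE code or the Leningrad model.

    ```python
    # Basic-event probabilities (placeholders, not plant data).
    basic_events = {"DG_FAILS": 1e-2, "PUMP_A": 3e-3, "PUMP_B": 3e-3,
                    "CCF_PUMPS": 1e-4}

    # Each minimal cut set is a conjunction of basic events that fails the
    # top event (e.g. loss of a safety function).
    cut_sets = [("DG_FAILS", "PUMP_A", "PUMP_B"), ("DG_FAILS", "CCF_PUMPS")]

    def cut_set_prob(cs):
        p = 1.0
        for event in cs:
            p *= basic_events[event]
        return p

    # Rare-event approximation: the top-event probability is roughly the sum
    # of the cut-set probabilities.
    top = sum(cut_set_prob(cs) for cs in cut_sets)
    print(f"top event probability ~ {top:.2e}")  # ~1.1e-06
    ```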

  13. SIMCRI: a simple computer code for calculating nuclear criticality parameters

    International Nuclear Information System (INIS)

    Nakamaru, Shou-ichi; Sugawara, Nobuhiko; Naito, Yoshitaka; Katakura, Jun-ichi; Okuno, Hiroshi.

    1986-03-01

    This is a user's manual for the simple criticality calculation code SIMCRI. The code has been developed to facilitate criticality calculations on a single unit of nuclear fuel. SIMCRI makes an extensive survey with little computing time. The cross section library MGCL used by SIMCRI is the same as that of the Monte Carlo criticality code KENOIV; it is, therefore, easy to compare the results of the two codes. SIMCRI solves eigenvalue problems and fixed source problems based on the one-space-point B1 equation. The results include the infinite and effective multiplication factors, critical buckling, migration area, diffusion coefficient and so on. SIMCRI is included in the criticality safety evaluation code system JACS. (author)
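
    The quantities SIMCRI reports are tied together by the standard one-group diffusion relation (a textbook identity, not something quoted from the manual): with infinite multiplication factor k∞, migration area M², and geometric buckling B_g²,

    ```latex
    k_{\mathrm{eff}} = \frac{k_\infty}{1 + M^2 B_g^2},
    \qquad
    B_c^2 = \frac{k_\infty - 1}{M^2},
    ```

    where B_c² is the critical buckling at which k_eff = 1.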

  14. Calculations of fuel burn-up and radionuclide inventory in the syrian miniature neutron source reactor using the WIMSD4 code

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    Calculations of the fuel burn-up and radionuclide inventory in the Miniature Neutron Source Reactor after 10 years (the expected life of the reactor core) of reactor operating time are presented in this paper. The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor versus reactor operating time for the 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations and radioactivities of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are calculated using the WIMSD4 code as well
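
    For scale, uranium consumption can be estimated from the energy released using a standard rule of thumb: roughly 1.05 g of U-235 fissioned per MWd, or about 1.24 g consumed including radiative capture. The duty cycle below is an invented assumption purely for illustration, since MNSR-type reactors run intermittently; none of these numbers come from the paper.

    ```python
    # Order-of-magnitude estimate, not a result from the paper.
    P_MW  = 30e-3              # 30 kW operating power
    hours = 2.5 * 250 * 10     # assumed duty cycle: 2.5 h/day, 250 d/yr, 10 yr
    E_MWd = P_MW * hours / 24  # total energy released in megawatt-days

    u235_consumed_g = 1.24 * E_MWd   # fission + capture rule of thumb
    print(f"~{E_MWd:.1f} MWd released, ~{u235_consumed_g:.1f} g U-235 consumed")
    ```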

  15. 22 CFR 228.13 - Special source rules requiring procurement from the United States.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 1 2010-04-01 Special source rules requiring procurement from... ON SOURCE, ORIGIN AND NATIONALITY FOR COMMODITIES AND SERVICES FINANCED BY USAID; Conditions Governing Source and Nationality of Commodity Procurement Transactions for USAID Financing; § 228.13 Special source...

  16. European inter-comparison of Monte Carlo codes users for the uncertainty calculation of the kerma in air beside a caesium-137 source

    Energy Technology Data Exchange (ETDEWEB)

    De Carlan, L.; Bordy, J.M.; Gouriou, J. [CEA Saclay, LIST, Laboratoire National Henri Becquerel, Laboratoire de Metrologie de la Dose 91 - Gif-sur-Yvette (France)

    2010-07-01

    Within the frame of the CONRAD European project (Coordination Network for Radiation Dosimetry), and more precisely within a work group devoted to uncertainty assessment in computational dosimetry and aiming at comparing different approaches, the authors report the simulation of an irradiator containing a caesium-137 source to calculate the kerma in air as well as its uncertainty due to different parameters. They present the problem geometry, recall the studied issues (kerma uncertainty, influence of the source capsule, influence of the collimator, influence of the air volume surrounding the source), indicate the codes which have been used (MCNP, Fluka, Penelope, etc.), and discuss the results obtained for the first issue

  17. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    International Nuclear Information System (INIS)

    Liu, T; Lin, H; Xu, X; Su, L; Shi, C; Tang, X; Bednarz, B

    2016-01-01

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three Xeon Phi 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  18. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Su, L [John Hopkins University, Baltimore, MD (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States); Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore software micro-optimization methods. Methods: The patient-specific source of the Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA's database, partial geometry information of the jaw and MLC, as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by the ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam case, while on three Xeon Phi 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  19. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted at maximizing computational efficiency; the code is designed to run on shared-memory multi-core workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files are needed), and these comments are parsed by the doxygen free software, producing high-quality html and latex documentation pages; the distributed versioning system referred to as git

  20. Detrital zircon provenance of the Hartselle Sandstone Unit, Southeastern USA: Insights into sediment source, paleogeography, and setting

    Science.gov (United States)

    Harthy, M. A.; Gifford, J.

    2017-12-01

    The Hartselle Sandstone is an excellent example of an oil sand, a resource rich in bitumen. The unit is a light-colored, thick-bedded to massive quartzose sandstone that is widespread across an area from Georgia in the east to Mississippi in the west, and from Alabama in the south to Kentucky as a northern border. Formation thickness ranges from 0 to more than 150 feet. The unit has been stratigraphically dated to the Middle-Upper Mississippian. One hypothesis suggests that the sandstone formed from the geological remains of barrier islands located in the ocean between Gondwana and Laurentia. The Hartselle is thought to have formed by the movement of waves and currents along the shoreline, which carried sand and concentrated it into a set of northwest-to-southeast trending barrier islands. Transgression-regression events shifted the islands back and forth relative to the position of the shoreline, leading to the large areal extent of the unit. However, the current data are not enough to explain the geographical position of the Hartselle sandstone unit, as it does not run parallel to the ancient shoreline. Another mystery is the source of the sand: some believe the source was from the south (Gondwana) and others that it eroded from the north (Laurentia). Detrital zircon provenance analysis will address the uncertainty in sediment source. We will compare zircon U-Pb age spectra to possible Laurentian and Gondwanan source areas to discriminate between these possibilities. In addition, the age of the youngest detrital zircon population will provide additional constraints on the maximum depositional age of the unit. These detrital ages will also help us to understand the tectonic setting at the time of Hartselle deposition. Lastly, we aim to explain the widespread nature of the unit and the processes involved in its formation. Taken together, these interpretations will illuminate the age, depositional and tectonic setting of a

  1. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO), with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at stand-off distances through improved sensitivity, while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5 x 5 x 2 in³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24 x 2.5 x 3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed are described along with their performance. It is shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as from a Global Positioning System (GPS) and an Inertial Navigation System (INS), must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Algorithms were also developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4); the simulations have been well validated against measured data. Results of the image reconstruction algorithms at various speeds and distances are presented as well as
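
    As background on the coded-aperture half of the reconstruction, the toy sketch below shows the classic correlation decoding step: the detector image is the scene convolved with the mask pattern, so correlating it with a decoding array concentrates the source back into a peak. The mask, scene and array sizes are invented, and a random mask only decodes approximately (its autocorrelation is not a perfect delta), unlike the optimized mask families often used in practice; this is not the SORDS algorithm itself.

    ```python
    import numpy as np
    from scipy.signal import convolve2d, correlate2d

    rng = np.random.default_rng(0)
    mask = rng.integers(0, 2, (7, 5)).astype(float)   # stand-in random mask
    scene = np.zeros((7, 5))
    scene[3, 2] = 100.0                               # single point source

    # Forward model: detector counts = scene convolved (cyclically) with mask.
    detector = convolve2d(scene, mask, mode="same", boundary="wrap")

    # Decoding: correlate with the mask; the autocorrelation peak marks the
    # source position (up to a centering offset), on top of flat sidelobes.
    recon = correlate2d(detector, mask, mode="same", boundary="wrap")
    print(np.unravel_index(np.argmax(recon), recon.shape))
    ```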

  2. Compressed air as a source of inhaled oxidants in intensive care units.

    Science.gov (United States)

    Thibeault, D W; Rezaiekhaligh, M H; Ekekezie, I; Truog, W E

    1999-01-01

    Exhaled gas from mechanically ventilated preterm infants was found to have similar oxidant concentrations regardless of lung disease, leading to the hypothesis that wall-outlet gases were an oxidant source. Oxidants in compressed room air and oxygen from wall outlets were assessed in three hospitals. Samples were collected by flowing wall-outlet gas through a heated humidifier and an ice-packed condenser. Nitric oxide (NO) was measured in intensive care room air and in compressed air, with and without a charcoal filter, using a Sievers NOA280 nitric oxide analyzer (Boulder, CO). Oxidants were measured by spectrophotometry and expressed as nMol equivalents of H2O2/mL. The quantity of oxidant was also expressed as the amount of Vitamin C (nMol/mL) that had to be added until the oxidant was nondetectable; this quantity of Vitamin C was in turn expressed in Trolox Equivalent Antioxidant Capacity (TEAC) units (mMol/L). Free and total chlorine were measured with a chlorine photometer. Oxidants were not found in compressed oxygen and were found in compressed air only when the compression method used tap water. At a compressed room air gas flow of 1.5 L/min, the total volume of condensate was 20.2 +/- 1 mL/hr. The oxidant concentration was 1.52 +/- 0.09 nMol equivalents of H2O2/mL of sample and 30.8 +/- 1.2 nMol/hr, 17.9% of that found in tap water. Oxidant reduction required 2.05 +/- 0.12 nMol/mL Vitamin C (1.78 +/- 0.1 x 10(-3) TEAC units). Free and total chlorine in tap water were 0.3 +/- 0.02 mg/mL and 2.9 +/- 0.002 mg/mL, respectively. Outlet gas contained 0.4 +/- 0.06 mg/mL total and 0.07 +/- 0.01 mg/mL free chlorine, both about 14% of the tap water values. When a charcoal filter was installed in the hospital with oxidants in its compressed air, the oxidants were completely removed. Nursery room air contained 12.4 +/- 0.5 ppb NO; compressed wall air without a charcoal filter, 8.1 +/- 0.1 ppb; and compressed air with a charcoal filter, 12.5 +/- 0.5 ppb. A charcoal filter does not remove NO. (Table

  3. Present status of reactor physics in the United States and Japan-I. 5. Development of the MVP Monte Carlo Code at JAERI

    International Nuclear Information System (INIS)

    Mori, T.; Okumura, K.; Nagaya, Y.

    2001-01-01

    The MVP general-purpose continuous-energy Monte Carlo code for neutron and photon transport calculations, together with its multigroup version GMVP, has been under development since the late 1980s at the Japan Atomic Energy Research Institute (JAERI). The two codes were initially designed for vector supercomputers, and a parallel processing capability was later added for several platforms, including workstation clusters. The first versions of the codes were released for domestic use in 1994, with cross-section libraries based on JENDL, ENDF/B, etc. Since then, many functions have been added for production use. Special features and main capabilities are as follows: (1) vectorization and parallelization; (2) combinatorial geometry with a multiple-lattice capability and a statistical geometry model; (3) the probability table method for the unresolved resonance range; (4) realistic calculations of power reactors at arbitrary temperatures; (5) depletion calculations; (6) perturbation calculations for eigenvalue (keff) problems; (7) tallies useful for improving the multigroup method, such as effective macroscopic and/or microscopic cross sections. The MVP code is widely used in Japan, especially in the field of reactor physics analysis. Recently, development work has concentrated on applying the code to accelerator-driven subcritical reactors. For this purpose, functions for high-energy particle transport and for simulations of the Feynman-alpha experiment (noise analysis) have been added. As a first step in extending the energy range and the particle types treated by MVP, the physics model of neutron reactions was modified to treat the (z, anything) reaction (MT = 5) in the ENDF-6 format. As a benchmark test of the modified MVP code, the TIARA shielding experiment on iron with quasi-monoenergetic p-7Li neutrons at Ep = 68 MeV was analyzed using the LA-150 cross-section library. In all the calculations, the measured spectrum of the source
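
    A central building block of any continuous-energy Monte Carlo transport code of this kind is sampling the free-flight distance between collisions from an exponential distribution, s = -ln(xi)/Sigma_t. The following is a textbook illustration of that sampling step, not MVP's actual implementation; the cross-section value is invented.

        import numpy as np

        rng = np.random.default_rng(42)

        sigma_t = 0.35          # hypothetical total macroscopic cross section, 1/cm
        n_histories = 100_000

        # Sample free-flight distances: s = -ln(xi) / Sigma_t, with xi ~ U(0,1).
        xi = rng.random(n_histories)
        flight = -np.log(xi) / sigma_t

        # The sample mean should converge to the mean free path 1/Sigma_t.
        print(flight.mean(), 1.0 / sigma_t)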

  4. Concentrations and Sources of Airborne Particles in a Neonatal Intensive Care Unit

    Science.gov (United States)

    Licina, Dusan; Bhangar, Seema; Brooks, Brandon; Baker, Robyn; Firek, Brian; Tang, Xiaochen; Morowitz, Michael J.; Banfield, Jillian F.; Nazaroff, William W.

    2016-01-01

    Premature infants in neonatal intensive care units (NICUs) have underdeveloped immune systems, making them susceptible to adverse health consequences from air pollutant exposure. Little is known about the sources of indoor airborne particles that contribute to the exposure of premature infants in the NICU environment. In this study, we monitored the spatial and temporal variations of airborne particulate matter concentrations along with other indoor environmental parameters and human occupancy. The experiments were conducted over one year in a private-style NICU. The NICU was served by a central heating, ventilation and air-conditioning (HVAC) system equipped with an economizer and a high-efficiency particle filtration system. The following parameters were measured continuously during weekdays with 1-min resolution: particles larger than 0.3 μm resolved into 6 size groups, CO2 level, dry-bulb temperature and relative humidity, and presence or absence of occupants. Altogether, over sixteen periods of a few weeks each, measurements were conducted in rooms occupied by premature infants. In parallel, a second monitoring station was operated in a nearby hallway or at the local nurses’ station. The monitoring data suggest a strong link between indoor particle concentrations and human occupancy. Particle peaks from occupancy were clearly discernible among larger particles and imperceptible for submicron (0.3–1 μm) particles. The mean indoor particle mass concentration averaged across the size range 0.3–10 μm during occupied periods was 1.9 μg/m3, approximately 2.5 times the concentration during unoccupied periods (0.8 μg/m3). Contributions of within-room emissions to total PM10 mass in the baby rooms averaged 37–81%. Near-room indoor emissions and outdoor sources contributed 18–59% and 1–5%, respectively. Airborne particle levels in the size range 1–10 μm showed a strong dependence on human activities, indicating the importance of indoor
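
    The occupied/unoccupied contrast reported above is essentially a conditional mean of the 1-min particle time series on an occupancy flag. A small pandas sketch of that computation follows; the column names and values are hypothetical, not the study's data.

        import pandas as pd

        # Hypothetical 1-min monitoring data: a PM mass column (ug/m3) and a
        # boolean occupancy flag, as logged by the monitoring stations.
        df = pd.DataFrame({
            "pm_0p3_10_ugm3": [0.7, 0.9, 2.1, 2.4, 1.8, 0.8],
            "occupied":       [False, False, True, True, True, False],
        })

        means = df.groupby("occupied")["pm_0p3_10_ugm3"].mean()
        ratio = means[True] / means[False]
        print(means)
        print(f"occupied/unoccupied ratio: {ratio:.1f}")  # ~2.5x in the study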

  5. Concentrations and Sources of Airborne Particles in a Neonatal Intensive Care Unit.

    Directory of Open Access Journals (Sweden)

    Dusan Licina

    Full Text Available Premature infants in neonatal intensive care units (NICUs) have underdeveloped immune systems, making them susceptible to adverse health consequences from air pollutant exposure. Little is known about the sources of indoor airborne particles that contribute to the exposure of premature infants in the NICU environment. In this study, we monitored the spatial and temporal variations of airborne particulate matter concentrations along with other indoor environmental parameters and human occupancy. The experiments were conducted over one year in a private-style NICU. The NICU was served by a central heating, ventilation and air-conditioning (HVAC) system equipped with an economizer and a high-efficiency particle filtration system. The following parameters were measured continuously during weekdays with 1-min resolution: particles larger than 0.3 μm resolved into 6 size groups, CO2 level, dry-bulb temperature and relative humidity, and presence or absence of occupants. Altogether, over sixteen periods of a few weeks each, measurements were conducted in rooms occupied by premature infants. In parallel, a second monitoring station was operated in a nearby hallway or at the local nurses' station. The monitoring data suggest a strong link between indoor particle concentrations and human occupancy. Particle peaks from occupancy were clearly discernible among larger particles and imperceptible for submicron (0.3-1 μm) particles. The mean indoor particle mass concentration averaged across the size range 0.3-10 μm during occupied periods was 1.9 μg/m3, approximately 2.5 times the concentration during unoccupied periods (0.8 μg/m3). Contributions of within-room emissions to total PM10 mass in the baby rooms averaged 37-81%. Near-room indoor emissions and outdoor sources contributed 18-59% and 1-5%, respectively. Airborne particle levels in the size range 1-10 μm showed a strong dependence on human activities, indicating the importance of indoor

  6. Core Follow Calculation for Palo Verde Unit 1 in Cycles 1 through 4 using DeCART2D/MASTER4.0 Code System

    International Nuclear Information System (INIS)

    Jeong, Hee Jeong; Choi, Yonghee; Kim, Sungmin; Lee, Kyunghoon

    2017-01-01

    To verify and validate the DeCART2D/MASTER4.0 design system, core follow calculations of Palo Verde Unit 1 (PV-1) in Cycles 1 through 4 were performed. The calculation results are compared with measured data and will be used in the generation of bias and uncertainty factors for the DeCART2D/MASTER4.0 design system. The DeCART2D/MASTER code system has been developed at KAERI for PWR (pressurized water reactor) core design, including SMRs (small modular reactors). In the core follow calculations of Palo Verde Unit 1 in Cycles 1 through 4, reactivities, assembly powers and startup parameters such as EPC, RW, ITC and IBW are compared with the measured data.

  7. Radioactive waste evacuation of the sources of a low dose rate brachytherapy unit

    International Nuclear Information System (INIS)

    Serrada, A.; Huerga, C.; Santa Olalla, I.; Vicedo, A.; Corredoira, E.; Plaza, R.; Vidal, J.; Tellez, M.

    2006-01-01

    Introduction: The start-up authorization of a second-class radioactive installation makes the installation's operator and supervisor responsible for its safety. The specifications established in the authorization, which are mandatory, prescribe several actions, among them hermeticity tests of the radioactive sources and radiological control of environmental dosimetry. It is necessary to optimize the time spent on each activity, managing it as reasonably as possible. An important point to take into account is to keep and control only that radioactive or radiological equipment which, even if still working, performs appropriately for patient treatment. Material and Method: La Paz hospital has an intracavitary low-dose-rate (LDR) brachytherapy unit, Curietron model. The Radioprotection Department proposed removing the unit from service because of its age, and this was approved by the Commission of Guarantee and Quality Control. Different solutions for decommissioning the unit were considered; the option finally chosen as the most convenient for the installation was to manage directly the withdrawal of the radioactive material, which consisted of seven Cs-137 probes, model CsM1, with a total nominal certified activity of 7770 MBq (210 mCi) as of May 2005. The inner storage elements of the Curietron and the transport and storage safe, built with depleted uranium, also had to be considered radioactive waste. To accomplish this aim, an evacuation container was designed, made of a low-melting-point alloy (MCP-96), which fulfills the transport conditions imposed by ENRESA (Empresa Nacional de Residuos Radiactivos, S.A.). A theoretical calculation was performed to estimate the shield thickness required to meet the demanded contact dose rate. The accuracy of these calculations has been verified using TL dosimetry. Results: The radiation levels during the extraction of the radioactive probes and their transfer to
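
    The shield-thickness estimate mentioned in the abstract amounts, in its simplest form, to solving the exponential attenuation law I = I0 * B * exp(-mu*t) for t. Below is a rough Python sketch under the crude assumptions of a buildup factor of 1 and invented dose rates and attenuation coefficient; a real design would use proper Cs-137 data for the MCP-96 alloy and verify with dosimetry, as the authors did.

        import math

        def shield_thickness(rate_unshielded, rate_target, mu, buildup=1.0):
            """Thickness t solving rate_target = rate_unshielded * B * exp(-mu*t)."""
            return math.log(buildup * rate_unshielded / rate_target) / mu

        # Invented numbers for illustration only.
        mu = 0.9            # linear attenuation coefficient, 1/cm, at 662 keV
        print(shield_thickness(rate_unshielded=500.0,  # uSv/h in contact, bare
                               rate_target=2.0,        # uSv/h allowed in contact
                               mu=mu))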

  8. Patient examinations using electrical impedance tomography—sources of interference in the intensive care unit

    International Nuclear Information System (INIS)

    Frerichs, Inéz; Pulletz, Sven; Elke, Gunnar; Gawelczyk, Barbara; Frerichs, Alexander; Weiler, Norbert

    2011-01-01

    Electrical impedance tomography (EIT) is expected to become a valuable tool for monitoring mechanically ventilated patients because of its ability to continuously assess regional lung ventilation and aeration. Several sources of interference with EIT examinations exist in intensive care units (ICU). Our objectives are to demonstrate how some medical nursing and monitoring devices interfere with EIT measurements and modify the EIT scans and waveforms, which approaches can be applied to minimize these effects, and how possible misinterpretation can be avoided. We present four cases of EIT examinations of adult ICU patients. Two of the patients were subjected to pulsation therapy using a pulsating air-suspension mattress while being ventilated by high-frequency oscillatory or conventional pressure-controlled ventilation, respectively. The EIT signal modulation synchronous with the pulsating wave was 2.3 times larger than the periodic modulation synchronous with the heart rate and the high-frequency oscillations. During conventional ventilation, the pulsating mattress induced an EIT signal fluctuation with a magnitude corresponding to about 20% of the patient's tidal volume. In the third patient, the interference with the EIT examination was caused by continuous cardiac output monitoring. The last patient's examination was disturbed by impedance pneumography, whose excitation currents were of a frequency similar to that of the EIT. In all subjects, the generation of functional EIT scans was compromised and interpretation of regional ventilation was impossible. Discontinuation of pulsation therapy and of continuous cardiac output and impedance respiration monitoring immediately improved the EIT signal and scan quality. Offline processing of the disturbed data using frequency filtering enabled partial retrieval of the relevant information. We conclude that thoracic EIT examinations in the ICU require cautious interpretation because of possible mechanical and electromagnetic
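
    The offline frequency filtering mentioned at the end can be as simple as band-passing the EIT waveforms around the ventilation frequency so that mattress-pulsation and cardiac-frequency components are suppressed. A sketch with SciPy follows; the frame rate, signal frequencies and cut-offs are assumptions chosen only for illustration.

        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 25.0                      # assumed EIT frame rate, frames/s
        f_vent = 0.25                  # ventilation rate ~15 breaths/min

        # 4th-order Butterworth band-pass around the ventilation frequency.
        b, a = butter(4, [0.5 * f_vent, 2.0 * f_vent], btype="bandpass", fs=fs)

        t = np.arange(0, 60, 1 / fs)
        signal = (np.sin(2 * np.pi * f_vent * t)        # ventilation component
                  + 2.3 * np.sin(2 * np.pi * 1.0 * t)   # mattress pulsation
                  + 0.4 * np.sin(2 * np.pi * 1.5 * t))  # cardiac-related

        ventilation_only = filtfilt(b, a, signal)       # zero-phase filtering
        print(signal.std(), ventilation_only.std())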

  9. High Incidence and Levels of Ochratoxin A in Wines Sourced from the United States.

    Science.gov (United States)

    De Jesus, Christopher Lawrence; Bartley, Amanda; Welch, Aaron Z; Berry, John P

    2017-12-21

    Ochratoxin A (OTA) is one of the most prevalent mycotoxin contaminants of food crops. Among the agricultural products consequently contaminated by OTA is wine. In the present study, a sample of wines sourced from the United States was assessed for OTA. Wines were primarily analyzed by high-performance liquid chromatography with fluorescence detection (HPLC-FD) coupled to a liquid-liquid extraction (LLE) technique which was developed and validated as a simplified sample preparation approach. More than 85% of the wines evaluated were found to contain OTA at levels above the limit of detection (LOD = 0.1 µg/L), and 76% were above the limit of quantitation (LOQ = 0.3 µg/L) for the LLE/HPLC-FD method. More than two-thirds of the wines above the LOQ were found to exceed 1 µg/L. Complementary analysis by HPLC coupled to tandem mass spectrometry (HPLC-MS/MS) confirmed OTA in 74% of the OTA-positive wines (i.e., >LOQ by HPLC-FD). Overall, both the occurrence and the measured levels of OTA were generally high relative to previous assessments of OTA in wine, and two of the wines were above the only current (European Union) regulatory limit of two parts per billion (ppb, ~2 µg/L). Possible trends with respect to geographical region and/or growing climate are noted. As the first assessment of U.S. wines in more than a decade, the overall high occurrence and levels of OTA in wine, and the possible geographic and climatic trends, point to a need for regular surveillance of wines, as well as investigation of the relevant contributors to OTA occurrence, toward mitigating contamination and exposure risks.

  10. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials or materials contaminated with natural or artificial radionuclides have been detected in scrap metal destined for recycling. The melting of a source during the process could have economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 tons of contaminated steel in one piece is a major problem, so it is of great importance to develop a methodology that allows the activity distribution inside a volume of steel to be predicted. In previous work we were able to distinguish between cases where the source is disseminated over the entire cylinder and cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts in the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source; four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius; and the overall contaminated cylinder), trapped inside two types of matrix, concrete and stainless steel. The methodology has been shown to predict and distinguish the distribution of a source inside a material accurately, roughly independently of the matrix and density considered. (authors)
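
    Recovering the calibration function r = aR² + bR + c from a handful of simulated (R, r) pairs is a small least-squares problem. A NumPy sketch follows, with invented count ratios standing in for the simulated spectra.

        import numpy as np

        # Hypothetical ratios R of counts in two spectral regions, one per
        # simulated source geometry (point, 5, 10, 15, 20 cm, full cylinder).
        R = np.array([0.42, 0.55, 0.71, 0.90, 1.12, 1.38])
        r = np.array([0.0,  5.0,  10.0, 15.0, 20.0, 25.0])  # source radius, cm

        a, b, c = np.polyfit(R, r, deg=2)     # least-squares fit of r = aR^2+bR+c
        print(a, b, c)
        print(np.polyval([a, b, c], 0.8))     # predicted radius for a new ratio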

  11. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    Energy Technology Data Exchange (ETDEWEB)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I. [Instituto Tecnologico e Nuclear, Dpto. Proteccao Radiologica e Seguranca Nuclear, Sacavem (Portugal)

    2006-07-01

    Orphan sources, activated materials or materials contaminated with natural or artificial radionuclides have been detected in scrap metal destined for recycling. The melting of a source during the process could have economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 tons of contaminated steel in one piece is a major problem, so it is of great importance to develop a methodology that allows the activity distribution inside a volume of steel to be predicted. In previous work we were able to distinguish between cases where the source is disseminated over the entire cylinder and cases where it is concentrated in different volumes. Now the main goal is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR² + bR + c, where R is the ratio between the counts in the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source; four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius; and the overall contaminated cylinder), trapped inside two types of matrix, concrete and stainless steel. The methodology has been shown to predict and distinguish the distribution of a source inside a material accurately, roughly independently of the matrix and density considered. (authors)

  12. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Science.gov (United States)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulations for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  13. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Directory of Open Access Journals (Sweden)

    Lin Hui

    2017-01-01

    Full Text Available Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulations for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  14. 49 CFR 578.6 - Civil penalties for violations of specified provisions of Title 49 of the United States Code.

    Science.gov (United States)

    2010-10-01

    ... vehicle distributed in commerce for sale in the United States that willfully fails to attach the label... for a civil penalty of not more than $140,000 a day for each violation. (h) Automobile fuel economy... penalty of $5.50 multiplied by each .1 of a mile a gallon by which the applicable average fuel economy...

  15. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in a performance gain if memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need to rewrite the develop...

  16. Growth and Identification of Bacteria in N-Halamine Dental Unit Waterline Tubing Using an Ultrapure Water Source

    Science.gov (United States)

    Porteous, Nuala; Luo, Jie; Hererra, Monica; Schoolfield, John; Sun, Yuyu

    2011-01-01

    This study examined bacterial growth and type on biofilm-controlling dental unit waterline (DUWL) tubing (T) and control manufacturer's tubing (C) in a laboratory DUWL model using ultrapure source water that was cycled through the lines. Sections of the tubing lines were detached and examined for biofilm growth using SEM imaging at six sampling periods. Bacteria from the inside surfaces of T and C, the source unit, and the reservoir were cultured and enumerated. At six months, organisms were identified molecularly from the alignment matches obtained from the top three BLAST searches for the 16S region. There was a 1–3 log increase in organism growth in a clean, nonsterile reservoir within an hour. Biofilm was established on the inside surfaces of C within three weeks, but not on T. Proteobacteria and Sphingomonas spp. were identified in the source reservoir and the C line, and a variation of these genera was found in the T line. PMID:22220171

  17. A study of physics of sub-critical multiplicative systems driven by sources and the utilization of deterministic codes in calculation of this systems

    International Nuclear Information System (INIS)

    Antunes, Alberi

    2008-01-01

    This work presents the physics of source-driven subcritical systems (ADS). It reviews some static and kinetic reactor physics parameters that are important in the evaluation and definition of these systems when the reactor is subcritical. The objective is to demonstrate that these parameters differ from their values in a critical reactor. Moreover, the work shows the differences observed in the parameters for different calculation models. Two calculation methodologies are presented in this dissertation, those of Gandini and Salvatores and of Dulla, and several parameters are calculated with each. The ANISN deterministic transport code is used in the calculations in order to compare these parameters. Some parameters are also calculated for a subcritical configuration of the IPEN-MB-01 reactor driven by an external source. The conclusions drawn from the calculations are presented at the end of the work. (author)
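
    One parameter that cleanly separates source-driven operation from the critical case is the source multiplication factor M = 1/(1 - keff), which grows without bound as keff approaches unity. A small numeric illustration:

        def source_multiplication(k_eff):
            """Total neutron multiplication of an external source, M = 1/(1-k)."""
            if not 0.0 <= k_eff < 1.0:
                raise ValueError("subcritical systems require 0 <= k_eff < 1")
            return 1.0 / (1.0 - k_eff)

        for k in (0.90, 0.95, 0.98, 0.995):
            print(f"k_eff = {k:.3f}  ->  M = {source_multiplication(k):7.1f}")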

  18. Calculation of gamma ray dose buildup factors in water for isotropic point, plane mono directional and line sources using MCNP code

    International Nuclear Information System (INIS)

    Atak, H.; Celikten, O. S.; Tombakoglu, M.

    2009-01-01

    Gamma-ray dose buildup factors in water for isotropic point, plane monodirectional and infinite/finite line sources were calculated using the MCNP code. The buildup factors were determined for gamma-ray energies of 1, 2, 3 and 4 MeV and for shield thicknesses of 1, 2, 4 and 7 mean free paths. The calculated buildup factors were then fitted in the Taylor and Berger forms. For the line sources, a buildup factor table was also constructed using the Sievert function and the Taylor-form constants derived in this study, for comparison with the Monte Carlo results. All buildup factors were compared with tabulated data from the literature. In order to reduce the statistical errors on the buildup factors, the 'forced collision' option was used in the MCNP calculations.
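
    The Taylor form referred to here represents the buildup factor as a sum of two exponentials in optical depth, B(x) = A·exp(-α1·x) + (1 - A)·exp(-α2·x) with x in mean free paths, so fitting it to Monte Carlo points is a small nonlinear least-squares problem. A SciPy sketch with invented buildup values (not the paper's data):

        import numpy as np
        from scipy.optimize import curve_fit

        def taylor_form(x, A, a1, a2):
            """Taylor buildup form: B = A*exp(-a1*x) + (1-A)*exp(-a2*x)."""
            return A * np.exp(-a1 * x) + (1.0 - A) * np.exp(-a2 * x)

        # Hypothetical Monte Carlo buildup factors at 1, 2, 4 and 7 mfp.
        mfp = np.array([1.0, 2.0, 4.0, 7.0])
        B   = np.array([2.1, 3.6, 7.2, 14.5])

        popt, _ = curve_fit(taylor_form, mfp, B, p0=(10.0, -0.1, 0.1))
        print(dict(zip(("A", "alpha1", "alpha2"), popt)))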

  19. Source convergence diagnostics using Boltzmann entropy criterion application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in keff calculations plays a central role in the ability to discriminate between false and true convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and we then generalize them using the expression of the Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code, enhanced with this new criterion. (authors)
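
    The entropy diagnostic reduces the binned fission source to one scalar per cycle, H = -Σ p_i ln p_i, and declares the source converged once H stops drifting. A minimal sketch of the Shannon version (the Boltzmann-entropy generalization discussed in the paper is not shown):

        import numpy as np

        def source_entropy(counts):
            """Shannon entropy of a binned fission-source distribution."""
            p = np.asarray(counts, dtype=float)
            p = p / p.sum()
            p = p[p > 0.0]                  # 0 * ln(0) contributes nothing
            return float(-(p * np.log(p)).sum())

        # Toy cycles: the source spreads out from one mesh cell; H should
        # flatten once the source shape stops changing cycle to cycle.
        cycles = [[100, 0, 0, 0], [60, 30, 8, 2],
                  [40, 30, 20, 10], [38, 31, 20, 11]]
        for i, c in enumerate(cycles):
            print(f"cycle {i}: H = {source_entropy(c):.3f}")

    In practice such a trace is plotted against the cycle index, and active tallying begins only after the entropy has stabilized.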

  20. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States that is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross-section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner, with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  1. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even among the academic community, new theoretical tools can remain underutilized because of the investment of time and resources required to understand and implement them. This surely limits the frequency with which new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions began in July 2015 as a collaboration between Johns Hopkins University and the Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open-source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. Eighteen participants were selected for the non-public components based on their responses to an application, and were asked to fill out a course evaluation at the end of the short course and again several months later. These evaluations, along with participation in the forum and ongoing contact with the organizer, suggest strengths and weaknesses in this combination of components for assisting participants in adopting new tools.

  2. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  3. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and water-moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow distribution among parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, constant or variable in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  4. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code in a way that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods, such as third-party libraries and platforms' interfaces. These are important for understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials relating to the production of the artwork, I would like to explore the materiality of code that goes beyond technical

  5. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses sources of radiation from the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, along with a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the limited resources of federal and state health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  6. Influence of source configuration on spectral composition of gamma-ray beams from 60Co teletherapy units

    International Nuclear Information System (INIS)

    Ehrlich, M.; Soares, C.G.; Jackson, B.; Lanoue, P.

    1978-01-01

    Measurements were made of the photon spectra of simulated 60Co teletherapy beams. Various source configurations and source environments of practical interest were employed for this purpose. The sources consisted of activated cobalt pellets packed into steel capsules. Several combinations of capsule diameters and heights of pellet layers were used. The spacer materials filling the remaining capsule volume were chosen to be of high, intermediate, or low atomic number. The capsules could be inserted, one at a time, into a compartment of either tungsten or brass, simulating the central section of popular commercial 60Co teletherapy units. There was also some choice in the atomic number of the structural elements holding the capsules in place in the compartment. The largest contribution of scattered photons was found to occur when the materials close to the source pellets had atomic numbers in the vicinity of 30. (author)

  7. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the XSOR source term estimation codes. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of the releases, the rates of energy release, and the elevation of the releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. They also provide the capability to explore phenomena, and the associated uncertainties, that are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of the source terms.

  8. A national reconnaissance for pharmaceuticals and other organic wastewater contaminants in the United States - II) Untreated drinking water sources

    Science.gov (United States)

    Focazio, M.J.; Kolpin, D.W.; Barnes, K.K.; Furlong, E.T.; Meyer, M.T.; Zaugg, S.D.; Barber, L.B.; Thurman, M.E.

    2008-01-01

    Numerous studies have shown that a variety of manufactured and natural organic compounds, such as pharmaceuticals, steroids, surfactants, flame retardants, fragrances, plasticizers and other chemicals often associated with wastewaters, have been detected in the vicinity of municipal wastewater discharges and livestock agricultural facilities. To provide new data and insights about the environmental presence of some of these chemicals in untreated sources of drinking water in the United States, targeted sites were sampled and analyzed for 100 analytes with sub-part-per-billion detection capabilities. The sites included 25 ground- and 49 surface-water sources of drinking water serving populations ranging from one family to over 8 million people.

  9. Sources of Social Support among International College Students in the United States

    Science.gov (United States)

    Bhochhibhoya, Amir; Dong, Yue; Branscum, Paul

    2017-01-01

    International students are challenged by the abrupt change in their social support. The purpose of this study was to operationalize different sources of social support and evaluate determinants of mental health among international students (n = 328). An instrument was developed to measure four distinct sources of social support. Repeated measures…

  10. Communities with Source Separated Organics Programs, United States, 2015, EPA Region 9

    Data.gov (United States)

    U.S. Environmental Protection Agency — This GIS dataset contains polygon features that represent communities with residential organics collection programs in the United States. EPA used US Census Bureau...

  11. 26 CFR 1.862-1 - Income specifically from sources without the United States.

    Science.gov (United States)

    2010-04-01

    ... formulas, goodwill, trademarks, trade brands, franchises, and other like property; (v) Gains, profits, and income from the sale of real property located without the United States; and (vi) Gains, profits, and...

  12. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, such as the dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained with three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low- and high-energy brachytherapy sources. Then the cross-section library of the MCNP4C code was changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the revised MCNP4C code were compared with those of the other codes. Results: The results of these investigations indicate that for high-energy sources the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low-energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained with MCNP4C and the two other codes. The differences between the g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with the revised MCNP4C and MCNPX were similar; however, the maximum difference between the results obtained with the MCNP5 and revised MCNP4C codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for dosimetry of low-energy brachytherapy sources can cause large errors. It is therefore recommended not to use this code for low-energy sources unless its cross-section library is changed. Since the results obtained with the revised MCNP4C and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.

  13. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of an ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under different output service schemes.

  14. mPUMA: a computational approach to microbiota analysis by de novo assembly of operational taxonomic units based on protein-coding barcode sequences.

    Science.gov (United States)

    Links, Matthew G; Chaban, Bonnie; Hemmingsen, Sean M; Muirhead, Kevin; Hill, Janet E

    2013-08-15

    Formation of operational taxonomic units (OTUs) is a common approach to data aggregation in microbial ecology studies based on amplification and sequencing of individual gene targets. The de novo assembly of OTU sequences has recently been demonstrated as an alternative to widely used clustering methods, providing robust information from experimental data alone, without any reliance on an external reference database. Here we introduce mPUMA (microbial Profiling Using Metagenomic Assembly, http://mpuma.sourceforge.net), a software package for identification and analysis of protein-coding barcode sequence data. It was developed originally for Cpn60 universal target sequences (also known as GroEL or Hsp60). Using an unattended process that is independent of external reference sequences, mPUMA forms OTUs by DNA sequence assembly and is capable of tracking OTU abundance. mPUMA processes microbial profiles both in terms of the direct DNA sequence and in terms of the translated amino acid sequence for protein-coding barcodes. By forming OTUs and calculating abundance through an assembly approach, mPUMA is capable of generating inputs for several popular microbiota analysis tools. Using SFF data from sequencing of a synthetic community of Cpn60 sequences derived from the human vaginal microbiome, we demonstrate that mPUMA can faithfully reconstruct all expected OTU sequences and produce compositional profiles consistent with the actual community structure.

  15. Utilizing GPUs to Accelerate Turbomachinery CFD Codes

    Science.gov (United States)

    MacCalla, Weylin; Kulkarni, Sameer

    2016-01-01

    GPU computing has established itself as a way to accelerate parallel codes in the high-performance computing world. This work focuses on speeding up APNASA, a legacy CFD code used at NASA Glenn Research Center, while also drawing conclusions about the nature of GPU computing and the requirements for making GPGPU worthwhile on legacy codes. Rewriting and restructuring of the source code were avoided to limit the introduction of new bugs. The code was profiled and investigated for parallelization potential, and OpenACC directives were then used to mark the parallel parts of the code. The use of OpenACC directives was not able to reduce the runtime of APNASA on either the NVIDIA Tesla discrete graphics card or the AMD accelerated processing unit. Additionally, it was found that to justify the use of GPGPU, the amount of parallel work done within a kernel would have to greatly exceed the work done by any one portion of the APNASA code. It was determined that for an application like APNASA to be accelerated on the GPU, it should not be modular in nature, and the parallel portions of the code must contain a large fraction of the code's computation time.

  16. Computer code determination of tolerable accel current and voltage limits during startup of an 80 kV MFTF sustaining neutral beam source

    International Nuclear Information System (INIS)

    Mayhall, D.J.; Eckard, R.D.

    1979-01-01

    We have used a Lawrence Livermore Laboratory (LLL) version of the WOLF ion source extractor design computer code to determine tolerable accel current and voltage limits during startup of a prototype 80 kV Mirror Fusion Test Facility (MFTF) sustaining neutral beam source. Arc current limits are also estimated. The source extractor has gaps of 0.236, 0.721, and 0.155 cm. The effective ion mass is 2.77 AMU. The measured optimum accel current density is 0.266 A/cm². The gradient grid electrode runs at 5/6 Va (accel voltage). The suppressor electrode voltage is zero for Va < 3 kV and -3 kV for Va ≥ 3 kV. The accel current density for optimum beam divergence is obtained for 1 kV ≤ Va ≤ 80 kV, as are the beam divergence and emittance.
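
    A first-order sanity check on accel current at a given voltage is the Child-Langmuir space-charge limit for an ideal planar gap, J = (4ε0/9)·sqrt(2q/m)·V^(3/2)/d². The sketch below applies it to the record's first gap and effective ion mass; this single-gap bound is only a rough guide (it ignores the multi-electrode optics that WOLF actually solves), so the number it prints should not be read as the code's result.

        import math

        EPS0 = 8.854e-12      # vacuum permittivity, F/m
        Q    = 1.602e-19      # elementary charge, C
        AMU  = 1.661e-27      # atomic mass unit, kg

        def child_langmuir_j(voltage, gap, mass_amu):
            """Space-charge-limited current density (A/m^2) for a planar diode."""
            m = mass_amu * AMU
            return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * Q / m) \
                   * voltage**1.5 / gap**2

        # 80 kV across the 0.236 cm first gap, effective ion mass 2.77 AMU.
        j = child_langmuir_j(80e3, 0.236e-2, 2.77)
        print(f"{j / 1e4:.2f} A/cm^2")   # ideal single-gap upper bound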

  17. CONTAIN code calculations of the effects on the source term of CsI to I2 conversion due to severe hydrogen burns

    International Nuclear Information System (INIS)

    Valdez, G.D.; Williams, D.C.

    1986-01-01

    In experiments conducted at Sandia National Laboratories, large amounts of elemental iodine were produced when CsI-Al2O3 aerosol was exposed to hydrogen/air combustion. To evaluate some of the implications of this iodide conversion (observed to occur with up to 75% efficiency) for the severe accident source term, computational simulations of representative accident sequences were conducted with the CONTAIN code. The following conclusions can be drawn from this preliminary source term assessment: (1) If the containment sprays are inoperative during the accident, or are failed by the hydrogen burn, the late-time source term is almost tripled when the iodide is converted to I2. (2) With the sprays active, the amount released without conversion of the CsI aerosol is 63% higher than in the case where conversion occurs. (3) In the case where CsI is converted to I2, continued operation of the sprays reduces the release by a factor of 40 relative to the case in which the sprays fail at the time of the hydrogen burn. When there is no conversion, the reduction factor for continued spray operation is about 9, relative to the failed-spray case.

  18. Site investigation SFR. Rock type coding, overview geological mapping and identification of rock units and possible deformation zones in drill cores from the construction of SFR

    Energy Technology Data Exchange (ETDEWEB)

    Petersson, Jesper (Vattenfall Power Consultant AB, Stockholm (Sweden)); Curtis, Philip; Bockgaard, Niclas (Golder Associates AB (Sweden)); Mattsson, Haakan (GeoVista AB, Luleaa (Sweden))

    2011-01-15

    This report presents the rock type coding, overview lithological mapping and identification of rock units and possible deformation zones in drill cores from 32 boreholes associated with the construction of SFR. This work can be seen as complementary to the single-hole interpretations of other, older SFR boreholes reported earlier in /Petersson and Andersson 2010/: KFR04, KFR08, KFR09, KFR13, KFR35, KFR36, KFR54, KFR55, KFR7A, KFR7B and KFR7C. Due to deficiencies in the available material, the necessary activities have deviated somewhat from the established methodologies used during the recent Forsmark site investigations for the final repository for spent nuclear fuel. The aim of the current work has been, wherever possible, to allow the incorporation of all relevant material from older boreholes in the ongoing SFR geological modelling work in spite of these deficiencies. The activities include: - Rock type coding of the original geological mapping according to the nomenclature used during the preceding Forsmark site investigation. As part of the Forsmark site investigation, such rock type coding has already been performed on most of the old SFR boreholes where the original geological mapping results were available. This earlier work has been complemented by rock type coding of two further boreholes: KFR01 and KFR02. - Lithological overview mapping, including documentation of (1) rock types, (2) ductile and brittle-ductile deformation and (3) alteration, for drill cores from eleven of the boreholes for which no original geological borehole mapping was available (KFR31, KFR32, KFR34, KFR37, KFR38, KFR51, KFR69, KFR70, KFR71, KFR72 and KFR89). - Identification of possible deformation zones and merging of similar rock types into rock units. This follows SKB's established criteria and methodology of the geological single-hole interpretation (SHI) process wherever possible. Deviations from the standard SHI process are associated with the lack of data, for example BIPS images

  19. Site investigation SFR. Rock type coding, overview geological mapping and identification of rock units and possible deformation zones in drill cores from the construction of SFR

    International Nuclear Information System (INIS)

    Petersson, Jesper; Curtis, Philip; Bockgaard, Niclas; Mattsson, Haakan

    2011-01-01

    This report presents the rock type coding, overview lithological mapping and identification of rock units and possible deformation zones in drill cores from 32 boreholes associated with the construction of SFR. This work can be seen as complementary to the single-hole interpretations of other, older SFR boreholes reported earlier in /Petersson and Andersson 2010/: KFR04, KFR08, KFR09, KFR13, KFR35, KFR36, KFR54, KFR55, KFR7A, KFR7B and KFR7C. Due to deficiencies in the available material, the necessary activities have deviated somewhat from the established methodologies used during the recent Forsmark site investigations for the final repository for spent nuclear fuel. The aim of the current work has been, wherever possible, to allow the incorporation of all relevant material from older boreholes in the ongoing SFR geological modelling work in spite of these deficiencies. The activities include: - Rock type coding of the original geological mapping according to the nomenclature used during the preceding Forsmark site investigation. As part of the Forsmark site investigation, such rock type coding has already been performed on most of the old SFR boreholes where the original geological mapping results were available. This earlier work has been complemented by rock type coding of two further boreholes: KFR01 and KFR02. - Lithological overview mapping, including documentation of (1) rock types, (2) ductile and brittle-ductile deformation and (3) alteration, for drill cores from eleven of the boreholes for which no original geological borehole mapping was available (KFR31, KFR32, KFR34, KFR37, KFR38, KFR51, KFR69, KFR70, KFR71, KFR72 and KFR89). - Identification of possible deformation zones and merging of similar rock types into rock units. This follows SKB's established criteria and methodology of the geological single-hole interpretation (SHI) process wherever possible. Deviations from the standard SHI process are associated with the lack of data, for example BIPS images, or a

  20. Constraining wintertime sources of inorganic chlorine over the northeast United States

    Science.gov (United States)

    Haskins, J.; Jaegle, L.; Shah, V.; Lopez-Hilfiker, F.; Lee, B. H.; Campuzano Jost, P.; Schroder, J. C.; Day, D. A.; Fiddler, M. N.; Holloway, J. S.; Sullivan, A.; Veres, P. R.; Weber, R. J.; Dibb, J. E.; Brown, S. S.; Jimenez, J. L.; Thornton, J. A.

    2017-12-01

    Wintertime multiphase chlorine chemistry is thought to play a significant role in the regional distribution of oxidants, the lifetime of VOCs, and the transport of NOx downwind of urban sources. However, the sources and chemistry of reactive chlorine remain highly uncertain. During the WINTER 2015 aircraft campaign, the inorganic chlorine budget was dominated by HCl (g) and total particulate chloride, which together accounted for greater than 85% of the total chlorine budget within the boundary layer. The total concentration of inorganic chlorine compounds was 1014 pptv over marine regions and 609 pptv over continental regions, with variability driven by changes in meteorological conditions, particle liquid water content, particle pH, and proximity to large anthropogenic sources. However, displacement of particle chloride was often not a large enough source to fully explain the concentrations of gas-phase Cly compounds. We use the GEOS-Chem global chemical transport model to simulate the emissions, gas-particle partitioning, and downwind transport and deposition of Cly during winter. Simulated concentrations of HCl, particle chloride, and other dominant Cly compounds are compared to measurements made during the WINTER aircraft campaign. The relative roles of Cly sources from sea-salt aerosol and from anthropogenic sources such as power plants, biomass burning and road salt are explored.

  1. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 µCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
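
    For orientation, the tabulated quantity has the general MIRD-style form (standard notation shown here for context; the report itself defines its exact terms):

      S(X \leftarrow Y) = k \, \frac{\sum_i \Delta_i \, \phi_i(X \leftarrow Y)}{m_X}

    where Δ_i is the mean energy emitted per nuclear transition by radiation type i, φ_i(X ← Y) is the fraction of the energy emitted in source organ Y that is absorbed in target organ X, m_X is the target organ mass, and k is a units-conversion constant. The expansion described above amounts to extending the sum over i to alpha and spontaneous-fission emissions.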

  2. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometry, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a data base of buildup factor data, which contains only ordinary concrete of density 2.3 g/cm³. In order to extend the data base to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV for thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A Capo's formula fit of the buildup factor data, compatible with QAD-CGGP, has been attempted
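
    For context, point-kernel codes such as QAD-CGGP sum, over discretized source points, an attenuated kernel of the generic form (standard point-kernel notation, not quoted from the paper):

      D(r) = \frac{S \, \kappa(E)}{4 \pi r^2} \, B(E, \mu r) \, e^{-\mu r}

    where S is the point-source strength, μ the linear attenuation coefficient of the shield material, κ(E) a flux-to-dose conversion factor, and B(E, μr) the dose buildup factor generated here. Capo's representation fits B as a low-order polynomial in the optical thickness μr with energy-dependent coefficients, which is what makes the MCNP-generated data directly usable by the code.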

  3. The diverging paths of German and United States policies for renewable energy: Sources of difference

    International Nuclear Information System (INIS)

    Laird, Frank N.; Stefes, Christoph

    2009-01-01

    The United States and Germany started out with very similar policies for renewable energy after the energy crisis of the 1970s. By the year 2000 they were on very different policy paths and, as a result, the German renewable energy industry has moved well ahead of that in the United States, both in terms of installed capacity in the country and in terms of creating a highly successful export market. In this paper, we reject some of the conventional explanations for this difference. Instead, these differences arise from the intersection of contingent historical events with the distinctive institutional and social structures that affect policy making in each country. Our analysis of the historical path-dependent dynamics of each country suggests that those who wish to further renewable energy policy in the United States need to take into account these institutional and social factors so that they will better be able to exploit the next set of favorable historical circumstances.

  4. Calculations of the thermal and fast neutron fluxes in the Syrian miniature neutron source reactor using the MCNP-4C code.

    Science.gov (United States)

    Khattab, K; Sulieman, I

    2009-04-01

    The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). The continuous energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method (¹⁹⁷Au(n,γ)¹⁹⁸Au and ⁵⁹Co(n,γ)⁶⁰Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.
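
    As background, a simplified form of the activation relation underlying such foil measurements (generic textbook form, ignoring the epithermal and self-shielding corrections that the multiple-foil method is designed to handle) is:

      \phi_{\mathrm{th}} \approx \frac{A_{sat}}{N \, \sigma_{act}}, \qquad
      A_{sat} = \frac{\lambda \, C}{\varepsilon \, I_\gamma \, (1 - e^{-\lambda t_{irr}}) \, e^{-\lambda t_d} \, (1 - e^{-\lambda t_c})}

    where C is the net count accumulated over counting time t_c, ε the detector efficiency, I_γ the gamma emission probability, λ the decay constant, t_irr the irradiation time, t_d the decay time before counting, N the number of target atoms in the foil, and σ_act the activation cross section.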

  5. Microcontroller based motion control interface unit for double slit type beam emittance monitor for H- ion source

    International Nuclear Information System (INIS)

    Holikatti, A.C.; Jain, Rahul; Karnewar, A.K.; Sonawane, B.B.; Maurya, N.K.; Puntambekar, T.A.

    2015-01-01

    The Indian Spallation Neutron Source (ISNS), proposed to be developed at RRCAT, will use a 1 GeV H⁻ linac and an accumulator ring to produce a high flux of pulsed neutrons via the spallation process. The development of the front end of the 1 GeV H⁻ linac for ISNS is in progress at RRCAT, for which a pulsed H⁻ ion source of 50 keV energy and 30 mA current with a pulse width of 500 μs has been developed at RRCAT. In this paper, we present the design and development of a microcontroller based motion control interface unit for the double slit type beam emittance monitor for the H⁻ ion source. This is an interceptive type of beam diagnostic device, which is used for the quantitative measurement of transverse emittance and beam intensity profile
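
    For orientation, the quantity such a monitor ultimately yields is the RMS transverse emittance, computed from the second moments of the measured intensity map I(x, x'), where x is the front-slit position and x' the angle resolved by the rear slit. A minimal sketch of that reduction (generic formulation; names and data layout are illustrative, not taken from the paper):

      import numpy as np

      def rms_emittance(x, xp, intensity):
          # x:         front-slit positions (m)
          # xp:        divergence angles resolved by the rear slit (rad)
          # intensity: 2D map I[j, k] measured at (x[j], xp[k])
          w = intensity / intensity.sum()              # normalized weights
          X, XP = np.meshgrid(x, xp, indexing='ij')
          mx, mxp = (w * X).sum(), (w * XP).sum()      # beam centroid
          sxx = (w * (X - mx) ** 2).sum()              # <x^2>
          spp = (w * (XP - mxp) ** 2).sum()            # <x'^2>
          sxp = (w * (X - mx) * (XP - mxp)).sum()      # <x x'>
          return np.sqrt(sxx * spp - sxp ** 2)         # RMS emittance, m·rad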

  6. Qualitative risk assessment for the 100-FR-1 source operable unit

    International Nuclear Information System (INIS)

    Corporation, I.T.

    1994-08-01

    This report provides the qualitative risk assessment (QRA) for the waste sites associated with the 100-FR-1 Operable Unit. The QRA is an evaluation of risk for a predefined set of human and ecological exposure scenarios. It is not intended to replace or be a substitute for a baseline risk assessment. The QRA is streamlined to consider only two human health scenarios (frequent- and occasional-use) with four exposure pathways (soil ingestion, fugitive dust inhalation, inhalation of volatile organics, and external radiation exposure) and a limited ecological evaluation. The use of these scenarios and pathways was agreed to by the 100 Area Tri-Party unit managers

  7. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  8. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground based terrestrial power plants. Such plants would make a significant contribution toward a drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric AC generators. However, the advantages of transmitting power in high voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternative heat source, the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid 1960s, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the Thorium → U233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also to provide thermal energy for hydrogen (H2) production during off-peak periods. This approach will both supply electric power by using environmentally clean nuclear heat, which does not generate greenhouse gases, and also provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. This is

  9. Case studies using the United States Coast Guard's Oil Identification System for petroleum spill source identification

    International Nuclear Information System (INIS)

    Grosser, P.W.; Castellano, F.P.

    1993-01-01

    The Oil Identification System (OIS) was developed in the 1970s at the Coast Guard Research and Development Center to determine the unique, intrinsic properties which would allow the matching of a spilled oil with its correct source. The Central Oil Identification Laboratory (COIL) was established in 1978 as the operating facility to implement the OIS. The OIS encompasses four analytical methods: thin layer chromatography, fluorescence spectroscopy, infrared spectroscopy and gas chromatography. A sample can be studied according to each individual method, or a multi-method approach can be chosen if no single technique gives unequivocal results. Combined, these methods are more than 99% effective. The authors recently utilized the OIS and the COIL for three petroleum spill investigations in New York. As part of the investigation to determine the source(s) of several different petroleum product spills, OIS was conducted along with a review of groundwater sample chromatograms

  10. Unraveling the sources of ground level ozone in the Intermountain Western United States using Pb isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Christensen, John N. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Weiss-Penzias, Peter [University of California at Santa Cruz, Santa Cruz, CA (United States); Fine, Rebekka [University of Nevada, Reno, NV (United States); McDade, Charles E.; Trzepla, Krystyna [University of California at Davis, Crocker Nuclear Laboratory, Davis, CA (United States); Brown, Shaun T. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States); Gustin, Mae Sexauer [University of Nevada, Reno, NV (United States)

    2015-10-15

    Ozone as an atmospheric pollutant is largely produced by anthropogenic precursors and can significantly impact human and ecosystem health, and climate. The U.S. Environmental Protection Agency has recently proposed lowering the ozone standard from 75 ppbv (MDA8 = Maximum Daily 8-Hour Average) to between 65 and 70 ppbv. This will result in remote areas of the Intermountain West, including many U.S. National Parks, being out of compliance, despite a lack of significant local sources. We used Pb isotope fingerprinting and back-trajectory analysis to distinguish sources of imported ozone to Great Basin National Park in eastern Nevada. During discrete Chinese Pb events (> 1.1 ng/m³ and > 80% Asian Pb), trans-Pacific transported ozone was 5 ± 5.5 ppbv above 19-year averages for those dates. In contrast, concentrations during regional transport from the Los Angeles and Las Vegas areas were 15 ± 2 ppbv above the long-term averages, and those characterized by high-altitude transport 3 days prior to sampling were 19 ± 4 ppbv above. However, over the study period the contribution of trans-Pacific transported ozone increased at a rate of 0.8 ± 0.3 ppbv/year, suggesting that Asian inputs will exceed regional and high altitude sources by 2015–2020. All of these sources will impact regulatory compliance with a new ozone standard, given the increasing global background. - Highlights: • Ozone can significantly impact human and ecosystem health and climate. • Pb isotopes and back-trajectory analysis were used to distinguish sources of O₃. • Baseline concentrations in the Western US are ~ 54 ppbv. • During discrete Asia events O₃ increased by 5 ± 5.5 ppbv and during S CA events by 15 ± 2 ppbv. • Data indicate that Asian ozone inputs will exceed other sources by 2015–2020.

  11. Synthesizer for decoding a coded short wave length irradiation

    International Nuclear Information System (INIS)

    1976-01-01

    The system uses a point irradiation source, typically an X-ray emitter, which illuminates a three dimensional object consisting of a set of parallel planes, each of which acts as a source of coded information. The secondary source images are superimposed on a common flat screen. The decoding system comprises an input light-screen detector, a picture screen amplifier, a beam deflector, an output picture screen, an optical focussing unit including three lenses, a masking unit, an output light-screen detector and a video signal reproduction unit of cathode ray tube form, or similar, to create a three dimensional image of the object. (G.C.)

  12. Office of Codes and Standards resource book. Section 1, Building energy codes and standards

    Energy Technology Data Exchange (ETDEWEB)

    Hattrup, M.P.

    1995-01-01

    The US Department of Energy's (DOE's) Office of Codes and Standards has developed this Resource Book to provide: A discussion of DOE involvement in building codes and standards; a current and accurate set of descriptions of residential, commercial, and Federal building codes and standards; information on State contacts, State code status, State building construction unit volume, and State needs; and a list of stakeholders in the building energy codes and standards arena. The Resource Book is considered an evolving document and will be updated occasionally. Users are requested to submit additional data (e.g., more current, widely accepted, and/or documented data) and suggested changes to the address listed below. Please provide sources for all data provided.

  13. Pollution Sources and Mortality Rates across Rural-Urban Areas in the United States

    Science.gov (United States)

    Hendryx, Michael; Fedorko, Evan; Halverson, Joel

    2010-01-01

    Purpose: To conduct an assessment of rural environmental pollution sources and associated population mortality rates. Methods: The design is a secondary analysis of county-level data from the Environmental Protection Agency (EPA), Department of Agriculture, National Land Cover Dataset, Energy Information Administration, Centers for Disease Control…

  14. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code; the combinatorial to coding identity of units, the non-combinatorial to coding production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  15. Factors associated with sources, transport, and fate of volatile organic compounds and their mixtures in aquifers of the United States

    Science.gov (United States)

    Squillace, P.J.; Moran, M.J.

    2007-01-01

    Factors associated with sources, transport, and fate of volatile organic compounds (VOCs) in groundwater from aquifers throughout the United States were evaluated using statistical methods. Samples were collected from 1631 wells throughout the conterminous United States between 1996 and 2002 as part of the National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey. Water samples from wells completed in aquifers used to supply drinking water were analyzed for more than 50 VOCs. Wells were primarily rural domestic water supplies (1184), followed by public water supplies (216); the remaining wells (231) supplied a variety of uses. The median well depth was 50 meters. Age-date information shows that about 60% of the samples had a fraction of water recharged after 1953. Chloroform, toluene, 1,2,4-trimethylbenzene, and perchloroethene were some of the frequently detected VOCs. Concentrations generally were less than 1 µg/L. Source factors include, in order of importance, general land-use activity, septic/sewer density, and sites where large concentrations of VOCs are potentially released, such as leaking underground storage tanks. About 10% of all samples had VOC mixtures that were associated with concentrated sources; 20% were associated with dispersed sources. Important transport factors included well/screen depth, precipitation/groundwater recharge, air temperature, and various soil characteristics. Dissolved oxygen was strongly associated with VOCs and represents the fate of many VOCs in groundwater. Well type (domestic or public water supply) was also an important explanatory factor. Results of multiple analyses show the importance of (1) accounting for both dispersed and concentrated sources of VOCs, (2) measuring dissolved oxygen when sampling wells to help explain the fate of VOCs, and (3) limiting the type of wells sampled in monitoring networks to avoid unnecessary variance in the data, or controlling for this variance during data analysis.

  16. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    Science.gov (United States)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height, in real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 × 50 km²) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of the synthetic waveforms corresponding to the seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: in the case of the Arica tide station an error (from the maximum height of the observed and simulated waveforms) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error was at Chimbote with 53.5%; however, due to the low amplitude of the Chimbote wave (<1 m), the overestimation in this case is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
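
    The core of such a forecast is linear superposition: each pre-computed unit-source waveform is scaled by the slip assigned to that unit source and the contributions are summed. A minimal sketch of the idea (hypothetical data layout; the paper's database format is not specified in the abstract):

      import numpy as np

      def forecast_waveform(green_functions, slips):
          # green_functions: dict, unit-source id -> np.ndarray (sea level vs.
          #                  time at one tide station for 1 m of slip)
          # slips: dict, unit-source id -> slip in metres from the source model
          ids = list(slips)
          wave = np.zeros_like(green_functions[ids[0]])
          for uid in ids:
              wave += slips[uid] * green_functions[uid]   # linear superposition
          return wave

      # Arrival time and maximum height then follow directly from the result,
      # e.g. h_max = wave.max() and t_max = dt * np.argmax(wave).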

  17. Local source impacts on primary and secondary aerosols in the Midwestern United States

    Science.gov (United States)

    Jayarathne, Thilina; Rathnayake, Chathurika M.; Stone, Elizabeth A.

    2016-04-01

    Atmospheric particulate matter (PM) exhibits heterogeneity in composition across urban areas, leading to poor representation of outdoor air pollutants in human exposure assessments. To examine heterogeneity in PM composition and sources across an urban area, fine particulate matter samples (PM2.5) were chemically profiled in Iowa City, IA from 25 August to 10 November 2011 at two monitoring stations. The urban site is the federal reference monitoring (FRM) station in the city center and the peri-urban site is located 8.0 km to the west on the city edge. Measurements of PM2.5 carbonaceous aerosol, inorganic ions, molecular markers for primary sources, and secondary organic aerosol (SOA) tracers were used to assess statistical differences in composition and sources across the two sites. PM2.5 mass ranged from 3 to 26 μg m⁻³ during this period, averaging 11.2 ± 4.9 μg m⁻³ (n = 71). Major components of PM2.5 at the urban site included organic carbon (OC; 22%), ammonium (14%), sulfate (13%), nitrate (7%), calcium (2.9%), and elemental carbon (EC; 2.2%). Periods of elevated PM were driven by increases in ammonium, sulfate, and SOA tracers that coincided with hot and dry conditions and southerly winds. Chemical mass balance (CMB) modeling was used to apportion OC to primary sources; biomass burning, vegetative detritus, diesel engines, and gasoline engines accounted for 28% of OC at the urban site and 24% of OC at the peri-urban site. Secondary organic carbon from isoprene and monoterpene SOA accounted for an additional 13% and 6% of OC at the urban and peri-urban sites, respectively. Differences in biogenic SOA across the two sites were associated with enhanced combustion activities in the urban area and higher aerosol acidity at the urban site. Major PM constituents (e.g., OC, ammonium, sulfate) were generally well-represented by a single monitoring station, indicating a regional source influence. Meanwhile, nitrate, biomass burning, food cooking, suspended dust, and

  18. Source contributions to United States ozone and particulate matter over five decades from 1970 to 2020

    Science.gov (United States)

    Nopmongcol, Uarporn; Alvarez, Yesica; Jung, Jaegun; Grant, John; Kumar, Naresh; Yarwood, Greg

    2017-10-01

    Evaluating long-term air quality trends can demonstrate the effectiveness of control strategies and guide future air quality management planning. Observations have shown that ozone (O3) and fine particulate matter (PM2.5) in the US have declined since as early as 1980 in some areas. But observation trends alone cannot separate the effects of changes in local and global emissions on US air quality, which are important to air quality planners. This study uses a regional model (CAMx) nested within a global model (GEOS-Chem) to characterize regional changes in O3 and PM2.5 due to intercontinental transport and local/regional emissions, representing six modeling years within five decades (1970-2020). We use the CAMx Source Apportionment Technology (OSAT/PSAT) to estimate contributions from 6 source sectors in 7 source regions plus 6 other groups for a total of 48 tagged contributions. On-road mobile sources consistently make the largest U.S. anthropogenic emissions contribution to O3 in all cities examined, even though they decline substantially from 1970 to 2005 and also from 2005 to 2020. Off-road mobile source contributions increase from 1970 to 2005 and then decrease after 2005 in all of the cities. The boundary conditions, mostly from intercontinental transport, contribute more than 20 ppb to high maximum daily 8-h average (MDA8) O3 for all six years. We found that lowering NOx emissions raises O3 formation efficiency (OFE) across all emission categories, which will limit the potential O3 benefits of local NOx strategies in the near future. PM2.5 benefited from the adoption of control devices between 1970 and 1980, has continued to decline through 2005, and is expected to decline further by 2020. Area sources such as residential, commercial and fugitive dust emissions stand out as making large contributions to PM2.5 that are not declining. Inter-regional transport is less important in 2020 than 1990 for both pollutants.

  19. Inconsistencies among secondary sources of Chukar Partridge (Alectoris chukar) introductions to the United States

    OpenAIRE

    Moulton, Michael P.; Cropper, Wendell P.; Broz, Andrew J.

    2015-01-01

    The propagule pressure hypothesis asserts that the number of individuals released is the key determinant of whether an introduction will succeed or not. It remains to be shown whether propagule pressure is more important than either species-level or site-level factors in determining the fate of an introduction. Studies claiming to show that propagule pressure is the primary determinant of introduction success must assume that the historical record as reported by secondary sources is complete ...

  20. Inconsistencies among secondary sources of Chukar Partridge (Alectoris chukar) introductions to the United States

    Directory of Open Access Journals (Sweden)

    Michael P. Moulton

    2015-11-01

    Full Text Available The propagule pressure hypothesis asserts that the number of individuals released is the key determinant of whether an introduction will succeed or not. It remains to be shown whether propagule pressure is more important than either species-level or site-level factors in determining the fate of an introduction. Studies claiming to show that propagule pressure is the primary determinant of introduction success must assume that the historical record as reported by secondary sources is complete and accurate. Here, we examine a widely introduced game bird, the Chukar (Alectoris chukar), introduced to the USA. We compare the records reported by two secondary sources (Long, 1981; Lever, 1987) to those in a primary source (Christensen, 1970) and to a recent study by Sol et al. (2012). Numerous inconsistencies exist in the records reported by Sol et al. (2012), Long (1981) and Lever (1987) when compared to the primary record of Christensen (1970). As reported by Christensen (1970), very large numbers of Chukars were released unsuccessfully in some states. Our results strongly imply that factors other than sheer numbers are more important. Site-to-site differences are the most likely explanation for the variation in success.

  1. Background PM2.5 source apportionment in the remote Northwestern United States

    Science.gov (United States)

    Hadley, Odelle L.

    2017-10-01

    This study used the Environmental Protection Agency's positive matrix factorization model (EPA PMF5.0) to identify five primary source factors contributing to the ambient PM2.5 concentrations at Cheeka Peak Atmospheric Observatory (CPO), Neah Bay WA between January 2011 and December 2014. CPO is home to both an IMPROVE (Interagency Monitoring for Protected Visual Environments) and a NCore multi-pollutant monitoring site. Chemically resolved particulate data from the IMPROVE site was the input data to EPA PMF5.0 and the resulting source factors were derived solely from these data. Solutions from the model were analyzed in context with trace gas and meteorological data collected at the NCore site located roughly 10 m away. Seasonal and long-term trends were analyzed for all five factors and provide the first complete source apportionment analysis of PM2.5 at this remote location. The first factor, identified as marine-traffic residual fuel oil (RFO), was the highest contributor to PM2.5 during late summer. Over the 4-year analysis, the RFO percent contribution to total PM2.5 declined. This is consistent with previous studies and may be attributed to regulations restricting the sulfur content of ship fuel. Biomass combustion emissions (BMC) and sea salt were the largest PM2.5 sources observed at CPO in winter, accounting for over 80% of the fine particulate. BMC accounted for a large percent of the fine particulate pollution when winds were easterly, or continental. Sea salt was the dominant winter factor when winds blew from the west. Measured trace carbon monoxide (CO) and reactive nitrogen species (NOy) were most strongly correlated with the BMC factor and continental winds. The fourth factor was identified as aged crustal material, or dust. In all three years, dust peaked in the spring and was associated exclusively with north-easterly winds. The last factor was identified as aged sea salt mixed with nitrate, sulfate, and other components common to RFO and BMC
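
    EPA PMF factorizes the samples-by-species concentration matrix X into non-negative factor contributions G and factor profiles F (X ≈ GF), weighting the residuals by measurement uncertainty. As a rough illustrative stand-in only (plain non-negative matrix factorization; sklearn's NMF omits the per-value uncertainty weighting and rotational controls that define EPA PMF5.0, and the matrix here is synthetic):

      import numpy as np
      from sklearn.decomposition import NMF

      # X: hypothetical samples-by-species matrix of chemically resolved
      # concentrations, standing in for the IMPROVE input data
      rng = np.random.default_rng(0)
      X = np.abs(rng.normal(1.0, 0.3, size=(200, 30)))

      model = NMF(n_components=5, init='nndsvda', max_iter=500, random_state=0)
      G = model.fit_transform(X)   # factor contributions per sample (time series)
      F = model.components_        # factor chemical profiles (source signatures)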

  2. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the potential law, has been proved also in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale independent or scale dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided in unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlante. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of
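
    In the notation standard to this literature (shown for orientation, following the form used by Porté-Agel et al. [1]; not text quoted from this abstract), the Smagorinsky closure and the scale-dependence parameter are

      \nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad |\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
      \beta = \frac{C_s^2(2\Delta)}{C_s^2(\Delta)}

    The SM prescribes a constant C_s; the SIDM computes C_s² dynamically from a test filter under the assumption of scale invariance (β = 1); the SDDM uses a second test-filter level to determine β as well, letting the coefficient vary with scale, which matters near walls where the grid filter Δ approaches the limits of the inertial range.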

  3. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    International Nuclear Information System (INIS)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F.; Piro, Anthony L.

    2015-01-01

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙

  4. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    Energy Technology Data Exchange (ETDEWEB)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F. [TAPIR, Walter Burke Institute for Theoretical Physics, MC 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Piro, Anthony L., E-mail: morozvs@tapir.caltech.edu [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2015-11-20

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all or nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  5. Quality traits of pork semimembranosus and triceps brachii muscles sourced from the United States and Mexico.

    Science.gov (United States)

    Delgado-Suárez, E J; Rubio-Lozano, M S; Toledo-López, V M; Torrescano-Urrutia, G R; Ponce-Alquicira, E; Huerta-Leidenz, N

    2016-12-01

    The study included fresh pork semimembranosus (SM, n=289) and triceps brachii (TB, n=283) muscles sourced from meat packers of Mexico and the USA. Samples were analyzed for moisture, protein, and fat content, pH, shear force (WBSF), cook loss, water holding capacity (WHC), instrumental color, emulsion capacity (EC) and stability (ES), and consumer sensory ratings. SM from the USA had lower WBSF (P < 0.05) across countries. TB from Mexico had higher (P < 0.05) values; pork exhibits better technological properties, while country of origin has less effect on consumer acceptability. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. A Multiagent Energy Management System for a Small Microgrid Equipped with Power Sources and Energy Storage Units

    Science.gov (United States)

    Radziszewska, Weronika; Nahorski, Zbigniew

    An Energy Management System (EMS) for a small microgrid is presented, with both demand and production side management. The microgrid is equipped with renewable and controllable power sources (like a micro gas turbine) and energy storage units (batteries and flywheels). The energy load is partially scheduled to avoid extreme peaks of power demand and, where possible, to match the forecasted energy supply from the renewable power sources. To balance the energy in the network on line, a multiagent system is used. Intelligent agents of each device proactively act towards balancing the energy in the network while at the same time optimizing the cost of operation of the whole system. A semi-market mechanism is used to match demand and production of energy. Simulations show that the time to reach a balanced state does not exceed 1 s, which is fast enough to allow proper balancing actions to be executed, e.g. changing the operating point of a controllable energy source. Simulators of sources and consumption devices were implemented in order to carry out exhaustive tests.
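
    The balancing step can be pictured as a merit-order negotiation: each device agent offers adjustable power at some cost, and the cheapest flexibility is dispatched until the imbalance is covered. A toy sketch of that idea (an illustrative design only; the paper's agents negotiate through a semi-market protocol whose details are not given in the abstract):

      from dataclasses import dataclass

      @dataclass
      class DeviceAgent:
          name: str
          power: float    # current set-point, kW (+ production, - consumption)
          p_min: float    # lower limit of the set-point
          p_max: float    # upper limit of the set-point
          cost: float     # cost per kW of adjustment

      def balance(agents, imbalance, tol=1e-6):
          # Cover a power imbalance (demand minus production, kW) at least cost.
          for agent in sorted(agents, key=lambda a: a.cost):
              if abs(imbalance) <= tol:
                  break
              if imbalance > 0:                       # shortage: raise output
                  step = min(agent.p_max - agent.power, imbalance)
              else:                                   # surplus: lower output
                  step = -min(agent.power - agent.p_min, -imbalance)
              agent.power += step
              imbalance -= step
          return imbalance                            # residual, 0 when balanced

      # e.g. balance([DeviceAgent('turbine', 5.0, 0.0, 30.0, 0.2),
      #               DeviceAgent('battery', 0.0, -10.0, 10.0, 0.1)], 12.0)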

  7. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight in the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo)

  8. Source Code Analysis Laboratory (SCALe)

    Science.gov (United States)

    2012-04-01

    products (including services) and processes. The agency has also published ISO/IEC 17025:2005, General Requirements for the Competence of Testing... SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National... assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005, General requirements for the competence of testing and

  9. Deaths Attributable to Diabetes in the United States: Comparison of Data Sources and Estimation Approaches.

    Science.gov (United States)

    Stokes, Andrew; Preston, Samuel H

    2017-01-01

    The goal of this research was to identify the fraction of deaths attributable to diabetes in the United States. We estimated population attributable fractions (PAF) for cohorts aged 30-84 who were surveyed in the National Health Interview Survey (NHIS) between 1997 and 2009 (N = 282,322) and in the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2010 (N = 21,814). Cohort members were followed prospectively for mortality through 2011. We identified diabetes status using self-reported diagnoses in both NHIS and NHANES and using HbA1c in NHANES. Hazard ratios associated with diabetes were estimated using Cox models adjusted for age, sex, race/ethnicity, educational attainment, and smoking status. We found a high degree of consistency between data sets and definitions of diabetes in the hazard ratios, estimates of diabetes prevalence, and estimates of the proportion of deaths attributable to diabetes. The proportion of deaths attributable to diabetes was estimated to be 11.5% using self-reports in NHIS, 11.7% using self-reports in NHANES, and 11.8% using HbA1c in NHANES. Among the sub-groups that we examined, the PAF was highest among obese persons at 19.4%. The proportion of deaths in which diabetes was assigned as the underlying cause of death (3.3-3.7%) severely understated the contribution of diabetes to mortality in the United States. Diabetes may represent a more prominent factor in American mortality than is commonly appreciated, reinforcing the need for robust population-level interventions aimed at diabetes prevention and care.
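
    The abstract does not spell out the estimator; one standard way such attributable fractions are formed combines the exposure prevalence p with the adjusted hazard ratio HR via Levin's formula (shown here for orientation only; the authors' exact approach may differ, e.g. in how prevalence and hazards are weighted across strata):

      PAF = \frac{p \,(HR - 1)}{1 + p\,(HR - 1)}

    For example, a prevalence of 10% and an adjusted HR of 2 would give PAF = 0.1 × 1 / (1 + 0.1 × 1) ≈ 9.1% of deaths attributable to the exposure.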

  10. Deaths Attributable to Diabetes in the United States: Comparison of Data Sources and Estimation Approaches.

    Directory of Open Access Journals (Sweden)

    Andrew Stokes

    Full Text Available The goal of this research was to identify the fraction of deaths attributable to diabetes in the United States. We estimated population attributable fractions (PAF) for cohorts aged 30-84 who were surveyed in the National Health Interview Survey (NHIS) between 1997 and 2009 (N = 282,322) and in the National Health and Nutrition Examination Survey (NHANES) between 1999 and 2010 (N = 21,814). Cohort members were followed prospectively for mortality through 2011. We identified diabetes status using self-reported diagnoses in both NHIS and NHANES and using HbA1c in NHANES. Hazard ratios associated with diabetes were estimated using Cox models adjusted for age, sex, race/ethnicity, educational attainment, and smoking status. We found a high degree of consistency between data sets and definitions of diabetes in the hazard ratios, estimates of diabetes prevalence, and estimates of the proportion of deaths attributable to diabetes. The proportion of deaths attributable to diabetes was estimated to be 11.5% using self-reports in NHIS, 11.7% using self-reports in NHANES, and 11.8% using HbA1c in NHANES. Among the sub-groups that we examined, the PAF was highest among obese persons at 19.4%. The proportion of deaths in which diabetes was assigned as the underlying cause of death (3.3-3.7%) severely understated the contribution of diabetes to mortality in the United States. Diabetes may represent a more prominent factor in American mortality than is commonly appreciated, reinforcing the need for robust population-level interventions aimed at diabetes prevention and care.

  11. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated using the example of the WIMSD code, which belongs among the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes a basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  12. Current status of control of radiation sources and radioactive materials in the United Republic of Tanzania

    International Nuclear Information System (INIS)

    Nyaruba, M.M.; Mompome, W.K.

    2001-01-01

    A Protection from Radiation Act was enacted in Tanzania in 1983 to regulate the use of ionizing radiation and protect people against its dangers. The Act established a regulatory authority known as the National Radiation Commission (NRC), which is the corporate body that enforces the law and regulations. As of the beginning of 2000, the NRC had an inventory of 200 radiation installations and 324 radiation sources and radioactive materials in the country, and provided personnel monitoring services to 665 radiation workers. However, due to the trade liberalization currently being experienced in the country, an increase in the number of radiation practices is observed yearly. To cope with the situation, the whole system of notification, authorization, registration and licensing needs to be improved. The improvement has now started by amending the existing Protection from Radiation Act. (author)

  13. Non-Linear Transmission Line (NLTL) Microwave Source Lecture Notes the United States Particle Accelerator School

    Energy Technology Data Exchange (ETDEWEB)

    Russell, Steven J. [Los Alamos National Laboratory; Carlsten, Bruce E. [Los Alamos National Laboratory

    2012-06-26

    We will quickly go through the history of non-linear transmission lines (NLTLs). We will describe how they work, how they are modeled and how they are designed. Note that the field of high power NLTL microwave sources is still under development, so this is just a snapshot of their current state. Topics discussed are: (1) Introduction to solitons and the KdV equation; (2) The lumped element non-linear transmission line; (3) Solution of the KdV equation; (4) Non-linear transmission lines at microwave frequencies; (5) Numerical methods for NLTL analysis; (6) Unipolar versus bipolar input; (7) High power NLTL pioneers; (8) Resistive versus reactive load; (9) Non-linear dielectrics; and (10) Effect of losses.
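
    For reference (the standard textbook form of the equation named in topics 1 and 3, not reproduced from the notes themselves), the KdV equation and its single-soliton solution, whose speed c grows with amplitude, are

      \frac{\partial u}{\partial t} + 6\,u\,\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0,
      \qquad
      u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\right)

    The amplitude-dependent speed is what lets a large input pulse on an NLTL steepen and break up into a train of sharp solitons, the basis of the microwave generation the notes discuss.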

  14. Simulation of the turbine trip of Unit 1 of the Laguna Verde nuclear power plant using the code Simulate-3K

    International Nuclear Information System (INIS)

    Alegria A, A.; Filio L, C.; Ortiz V, J.

    2017-09-01

    In order to compare the results obtained from the model developed at the Comision Nacional de Seguridad Nuclear y Salvaguardias (CNSNS) with the Simulate-3K (S3K) code against those reported by the plant process computer (SIIP), a simulation was carried out of the turbine trip transient caused by the trip of the main generator on low differential pressure of its seal oil, with automatic scram, in Unit 1 of the Laguna Verde nuclear power plant at 87% of nominal power during operation cycle 16. Since the reactor was brought to a safe shutdown by the scram, it was enough to simulate 20 seconds to observe the maximum pressure increase with S3K. In this work, the following parameters are shown and compared: the neutron flux, the thermal power, the pressure in the dome, the flow at the entrance to the core, the steam flow leaving the vessel, and the minimum critical power ratio (MCPR). The neutron flux from the average power range monitors of the nuclear power plant was compared with the S3K detector model. Finally, the MCPR was calculated with a correlation different from that of the fuel supplier, and its deviation from its safety limit was determined. In conclusion, the results obtained show the current state of the model for the simulation of reactivity transients and the areas of opportunity for consolidating this tool in support of the refueling licensing process at the CNSNS. (Author)

  15. Comparison of 2015 Medicare relative value units for gender-specific procedures: Gynecologic and gynecologic-oncologic versus urologic CPT coding. Has time healed gender-worth?

    Science.gov (United States)

    Benoit, M F; Ma, J F; Upperman, B A

    2017-02-01

    In 1992, Congress implemented a relative value unit (RVU) payment system to set reimbursement for all procedures covered by Medicare. In 1997, data supported that a significant gender bias existed in reimbursement for gynecologic compared to urologic procedures. The present study was performed to compare work and total RVUs for gender-specific procedures effective January 2015 and to evaluate whether time has healed the gender-based RVU worth. Using the 2015 CPT codes, we compared work and total RVUs for 50 pairs of gender-specific procedures. We also evaluated 2015 procedure-related provider compensation. The groups were matched so that the procedures were anatomically similar. We also compared the 2015 RVU and fee schedules to those of 1997. Evaluation of work RVUs for the paired procedures revealed that in 36 cases (72%), the male procedure had a higher wRVU and tRVU than the paired female procedure. For total fee/reimbursement, 42 (84%) male-based procedures were compensated at a higher rate than the paired female procedures. On average, male-specific surgeries were reimbursed at an amount 27.67% higher than that for female-specific surgeries. Work RVUs for female-based procedures have increased minimally from 1997 to 2015. Time and effort have trended towards resolution of some gender-related procedure-worth discrepancies, but there are still significant RVU and compensation differences that should be further reviewed and modified, as surgical time and effort highly correlate. Copyright © 2016. Published by Elsevier Inc.

  16. Study of the source-detector system geometry using the MCNP-X code in the flowrate measurement with radioactive tracers

    Energy Technology Data Exchange (ETDEWEB)

    Avilan Puertas, Eddie, E-mail: epuertas@nuclear.ufrj.br [Universidad Central de Venezuela (UCV), Facultad de Ingenieria, Departamento de Fisica Aplicada, Caracas (Venezuela, Bolivarian Republic of); Braz, Delson, E-mail: delson@lin.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandao, Luis E.; Salgado, Cesar M., E-mail: brandao@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    The use of radioactive tracers for flow rate measurement applies to a great variety of situations; however, the accuracy of the technique is highly dependent on an adequate choice of the experimental measurement conditions. To measure the flow rate of fluids in partially filled ducts, it is necessary to measure the fluid flow velocity and the fluid height. The flow velocity can be measured with the cross-correlation function, and the fluid level with a fluid level meter system. One of the error factors when measuring flow rate is the correct setting of the source-detector geometry of the fluid level meter system. The goal of the present work is to establish, by means of MCNP-X code simulations, the experimental parameters for measuring the fluid level. The experimental tests will be carried out in a flow system consisting of a 10 mm diameter acrylic tube, with water and oil as fluids. The radioactive tracer to be used is ⁸²Br, and for the detection two 1″ NaI(Tl) scintillator detectors will be employed, shielded with collimators of 0.5 cm and 1 cm circular aperture diameter. (author)
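
    The velocity step of such a measurement is conceptually simple: two detectors a known distance apart record the tracer cloud passing, and the transit time is the lag that maximizes their cross-correlation. A minimal sketch (illustrative names and signal layout; not the authors' code):

      import numpy as np

      def flow_velocity(sig_up, sig_down, dt, distance):
          # sig_up, sig_down: count-rate time series from the upstream and
          # downstream detectors, sampled every dt seconds, detectors spaced
          # `distance` metres apart along the duct
          a = sig_up - sig_up.mean()
          b = sig_down - sig_down.mean()
          xcorr = np.correlate(b, a, mode='full')     # downstream vs. upstream
          lag = np.argmax(xcorr) - (len(a) - 1)       # delay in samples
          return distance / (lag * dt)                # velocity, m/s

    The fluid height from the level-gauging system then converts this velocity into a volumetric flow rate through the partially filled cross-section.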

  17. Heroin-related overdose: The unexplored influences of markets, marketing and source-types in the United States

    Science.gov (United States)

    Mars, Sarah G.; Fessel, Jason N.; Bourgois, Philippe; Montero, Fernando; Karandinos, George; Ciccarone, Daniel

    2015-01-01

    Heroin overdose, more accurately termed ‘heroin-related overdose’ due to the frequent involvement of other drugs, is the leading cause of mortality among regular heroin users. (Degenhardt et al., 2010) Heroin injectors are at greater risk of hospital admission for heroin-related overdose (HOD) in the eastern United States, where Colombian-sourced powder heroin is sold, than in the western US, where black ‘tar’ heroin predominates. (Unick et al., 2014) This paper examines under-researched influences on HOD, both fatal and non-fatal, using data from a qualitative study of injecting drug users of black tar heroin in San Francisco and powder heroin in Philadelphia. Data were collected through in-depth, semi-structured interviews carried out in 2012 that were conducted against a background of longer-term participant-observation, ethnographic studies of drug users and dealers in Philadelphia (2007–12) and of users in San Francisco (1994–2007, 2012). Our findings suggest three types of previously unconsidered influences on overdose risk that arise both from structural socio-economic factors and from the physical properties of the heroin source-types: 1) retail market structure, including information flow between users; 2) marketing techniques such as branding, free samples and pricing; and 3) differences in the physical characteristics of the two major heroin source forms and how they affect injecting techniques and vascular health. Although chosen for their contrasting source-forms, we found that the two cities have contrasting dominant models of drug retailing: San Francisco respondents tended to buy through private dealers and Philadelphia respondents frequented an open-air street market where heroin is branded and free samples are distributed, although each city included both types of drug sales. These market structures and marketing techniques shape the availability of information regarding heroin potency and its dissemination among users who tend to seek out

  18. Heroin-related overdose: The unexplored influences of markets, marketing and source-types in the United States.

    Science.gov (United States)

    Mars, Sarah G; Fessel, Jason N; Bourgois, Philippe; Montero, Fernando; Karandinos, George; Ciccarone, Daniel

    2015-09-01

    Heroin overdose, more accurately termed 'heroin-related overdose' due to the frequent involvement of other drugs, is the leading cause of mortality among regular heroin users. (Degenhardt et al., 2010) Heroin injectors are at greater risk of hospital admission for heroin-related overdose (HOD) in the eastern United States, where Colombian-sourced powder heroin is sold, than in the western US, where black 'tar' heroin predominates. (Unick et al., 2014) This paper examines under-researched influences on HOD, both fatal and non-fatal, using data from a qualitative study of injecting drug users of black tar heroin in San Francisco and powder heroin in Philadelphia. Data were collected through in-depth, semi-structured interviews carried out in 2012 that were conducted against a background of longer-term participant-observation, ethnographic studies of drug users and dealers in Philadelphia (2007-12) and of users in San Francisco (1994-2007, 2012). Our findings suggest three types of previously unconsidered influences on overdose risk that arise both from structural socio-economic factors and from the physical properties of the heroin source-types: 1) retail market structure, including information flow between users; 2) marketing techniques such as branding, free samples and pricing; and 3) differences in the physical characteristics of the two major heroin source forms and how they affect injecting techniques and vascular health. Although chosen for their contrasting source-forms, we found that the two cities have contrasting dominant models of drug retailing: San Francisco respondents tended to buy through private dealers and Philadelphia respondents frequented an open-air street market where heroin is branded and free samples are distributed, although each city included both types of drug sales. These market structures and marketing techniques shape the availability of information regarding heroin potency and its dissemination among users who tend to seek out the

  19. Neutron and photon measurements through concrete from a 15 GeV electron beam on a target-comparison with models and calculations. [Intermediate energy source term, Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, T M [Stanford Linear Accelerator Center, CA (USA)

    1979-02-15

    Measurements of neutron and photon dose equivalents from a 15 GeV electron beam striking an iron target inside a scale model of a PEP IR hall are described, and compared with analytic-empirical calculations and with the Monte Carlo code, MORSE. The MORSE code is able to predict both absolute neutron and photon dose equivalents for geometries where the shield is relatively thin, but fails as the shield thickness is increased. An intermediate energy source term is postulated for analytic-empirical neutron shielding calculations to go along with the giant resonance and high energy terms, and a new source term due to neutron capture is postulated for analytic-empirical photon shielding calculations. The source strengths for each energy source term, and each type, are given from analysis of the measurements.

  20. Sources of ozone and sulfate in northeastern United States. Final report

    International Nuclear Information System (INIS)

    Husain, L.

    1981-01-01

    Daily measurements of ⁷Be, ³²P, ³³P, O₃, SO₄²⁻ and certain trace elements were made at Whiteface Mountain, NY during summer 1977 and June 1978–December 1979. The results were used to delineate the sources of O₃. It is shown that when the ⁷Be concentration is combined with the ⁷Be/³²P ratio they are reliable tracers of stratospheric O₃, whereas SO₄²⁻ is a promising tracer of transported anthropogenic O₃. On days of high ⁷Be concentrations, ⁷Be, O₃ and ⁷Be/³²P ratios peaked together, whereas the sulfate concentrations were very low. A day later, the SO₄²⁻ concentration increased to an unusually high level, while ⁷Be, the ⁷Be/³²P ratio and, to a much lesser degree, O₃ decreased. From these results it is inferred that the subsidence of stratospheric air enhances surface O₃ concentrations. Furthermore, the backside of the weather system responsible for this subsidence also favors the advection of polluted air containing anthropogenic O₃ to this site. Therefore, a stratospheric intrusion may intensify a surface photochemical O₃ episode. However, based on approx. 500 days of ⁷Be and O₃ data, the stratosphere does not appear to be directly responsible for the episodic high O₃ (approx. 100 ppbv). To study surface transport of pollutants, concentrations of SO₄²⁻ and trace elements were compared with backward surface air trajectories. This analysis showed that episodic high concentrations were associated with air transported from midwestern states

  1. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated.
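    As a rough illustration of that last step, here is a minimal sketch of escalating an overnight cost to a construction start date and adding interest during construction; the base cost, rates, and spending profile are assumptions for illustration, not CONCEPT data or its actual methodology:

    # Escalate a base (overnight) cost and add interest during construction
    # (IDC). Uniform annual spending with a half-year convention is assumed.
    def escalated_cost(base_cost_musd, years_to_start, esc_rate):
        return base_cost_musd * (1 + esc_rate) ** years_to_start

    def interest_during_construction(cost_musd, build_years, int_rate):
        annual = cost_musd / build_years
        return sum(
            annual * ((1 + int_rate) ** (build_years - y - 0.5) - 1)
            for y in range(build_years)
        )

    base = 1200.0   # M$, hypothetical overnight cost of a 1300 MWe plant
    cost = escalated_cost(base, years_to_start=3, esc_rate=0.05)
    idc = interest_during_construction(cost, build_years=6, int_rate=0.08)
    print(f"escalated: {cost:.0f} M$, IDC: {idc:.0f} M$, total: {cost + idc:.0f} M$")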

  2. Sources of Legal Regulation of Mergers, Acquisitions, Consolidations, Joint Stock Companies in Russia and Corporations in the United States

    Directory of Open Access Journals (Sweden)

    Stanislav E. Kuzmin

    2015-01-01

    Full Text Available The article outlines general characteristics of the sources of law regulating relations associated with mergers, consolidations and acquisitions of joint stock companies in Russia and corporations in the United States, respectively in the Russian legislation and in the legislation of the United States and individual States. Both in Russia and in the USA there is a constitutional separation of powers between the Federal authorities and the Subjects of the Federation/States respectively. In both countries legal regulation of mergers and acquisitions of corporations is carried out primarily by a number of laws. These laws fall into three main groups: securities laws, antitrust (competition) laws, and civil and joint-stock legislation in Russia and corporate laws in the US. All three groups are federal laws in Russia, while in the US the first two are federal as well, but the last consists of state laws. It is necessary to highlight the important role of judicial decisions in the United States in the legal regulation of mergers, acquisitions and takeovers in comparison with Russia, which is due to the differences in the legal systems of the states in question. However, although Russia is not a case-law state, legal acts such as resolutions of the Plenum of the Supreme Commercial Court will undoubtedly have an impact on law enforcement practice and, consequently, on the regulation of the relevant relations. Of particular importance are the findings of the Constitutional Court, whose decisions may cancel acts or their separate provisions provided they are recognized as unconstitutional; such acts are repealed. Decisions of courts and other bodies based on acts or their separate provisions recognized by the Constitutional Court of the Russian Federation as unconstitutional are not subject to execution and shall be revised in accordance with the Federal law. The US case law implies the existence of a hierarchy of precedents according to which decisions adopted by the

  3. Performance of double source boiler with coal-fired and solar power tower heat for supercritical power generating unit

    International Nuclear Information System (INIS)

    Zhang, Maolong; Du, Xiaoze; Pang, Liping; Xu, Chao; Yang, Lijun

    2016-01-01

    An approach for the high-efficiency utilization of solar energy was proposed, in which the highly concentrated heat received by the solar tower is integrated into the supercritical coal-fired boiler. Two schemes, in which solar energy is used to heat superheated steam or subcooled feed water, were presented. The thermodynamic and heat transfer models were established. For a practical 660 MW supercritical power generating unit, the standard coal consumption of power generation could be decreased by more than 17 g/kWh by such a double source boiler. The drawbacks of both schemes were identified and then remedied by adding a flue gas bypass to the boiler. It can also be concluded that the maximum solar contributions of the two schemes to the gross power generation are 6.11% and 4.90%, respectively. The solar power efficiency of the re-modified designs was demonstrated to be superior to that of PS10. In terms of turbine efficiency, comparison with the Solar Two plant, which has a similar initial temperature, found that the efficiency of Scheme I was 5.25% higher than that of Solar Two, while Scheme II retained an advantage as well. Additionally, in both schemes with the flue gas bypass, the thermal efficiency of the boiler could also be improved when the medium was extracted. - Highlights: • Highly concentrated solar tower heat is integrated into the supercritical coal-fired boiler. • The double source boiler can use solar energy to heat superheated steam or subcooled feed water. • Power generating coal consumption can be reduced by more than 17 g/kWh by the double source boiler. • The solar contribution of the double source boiler to the gross power generation can be as high as 6.11%.
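    A quick back-of-envelope check of the headline coal figure (a sketch; the capacity factor is an assumed value, not taken from the paper):

    # Annual standard coal saved by a 17 g/kWh reduction on a 660 MW unit.
    power_mw = 660.0
    saving_g_per_kwh = 17.0
    capacity_factor = 0.6          # assumed for illustration
    hours_per_year = 8760.0

    energy_kwh = power_mw * 1e3 * hours_per_year * capacity_factor
    coal_saved_tonnes = energy_kwh * saving_g_per_kwh / 1e6
    print(f"~{coal_saved_tonnes:,.0f} t of standard coal saved per year")
    # -> roughly 59,000 t/yr under these assumptions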

  4. Gender pay gap and employment sector: sources of earnings disparities in the United States, 1970-2010.

    Science.gov (United States)

    Mandel, Hadas; Semyonov, Moshe

    2014-10-01

    Using data from the IPUMS-USA, the present research focuses on trends in the gender earnings gap in the United States between 1970 and 2010. The major goal of this article is to understand the sources of the convergence in men's and women's earnings in the public and private sectors as well as the stagnation of this trend in the new millennium. For this purpose, we delineate temporal changes in the role played by major sources of the gap. Several components are identified: the portion of the gap attributed to gender differences in human-capital resources; labor supply; sociodemographic attributes; occupational segregation; and the unexplained portion of the gap. The findings reveal a substantial reduction in the gross gender earnings gap in both sectors of the economy. Most of the decline is attributed to the reduction in the unexplained portion of the gap, implying a significant decline in economic discrimination against women. In contrast to discrimination, the role played by human capital and personal attributes in explaining the gender pay gap is relatively small in both sectors. Differences between the two sectors are not only in the size and pace of the reduction but also in the significance of the two major sources of the gap. Working hours have become the most important factor with respect to gender pay inequality in both sectors, although much more dominantly in the private sector. The declining gender segregation may explain the decreased impact of occupations on the gender pay gap in the private sector. In the public sector, by contrast, gender segregation still accounts for a substantial portion of the gap. The findings are discussed in light of the theoretical literature on sources of gender economic inequality and in light of the recent stagnation of the trend.

  5. Trusting Social Media as a Source of Health Information: Online Surveys Comparing the United States, Korea, and Hong Kong.

    Science.gov (United States)

    Song, Hayeon; Omori, Kikuko; Kim, Jihyun; Tenzek, Kelly E; Morey Hawkins, Jennifer; Lin, Wan-Ying; Kim, Yong-Chan; Jung, Joo-Young

    2016-03-14

    The Internet has increasingly become a popular source of health information by connecting individuals with health content, experts, and support. More and more, individuals turn to social media and Internet sites to share health information and experiences. Although online health information seeking occurs worldwide, limited empirical studies exist examining cross-cultural differences in perceptions about user-generated, experience-based information compared to expertise-based information sources. To investigate if cultural variations exist in patterns of online health information seeking, specifically in perceptions of online health information sources. It was hypothesized that Koreans and Hongkongers, compared to Americans, would be more likely to trust and use experience-based knowledge shared in social Internet sites, such as social media and online support groups. Conversely, Americans, compared to Koreans and Hongkongers, would value expertise-based knowledge prepared and approved by doctors or professional health providers more. Survey questionnaires were developed in English first and then translated into Korean and Chinese. The back-translation method ensured the standardization of questions. Surveys were administered using a standardized recruitment strategy and data collection methods. A total of 826 participants living in metropolitan areas from the United States (n=301), Korea (n=179), and Hong Kong (n=337) participated in the study. We found significant cultural differences in information processing preferences for online health information. A planned contrast test revealed that Koreans and Hongkongers showed more trust in experience-based health information sources (blogs: t451.50=11.21, P<.001; social networking sites [SNS]: t466.75=11.36, P<.001) and also reported using blogs (t515.31=6.67, P<.001) and SNS (t529.22=4.51, P<.001) more frequently than Americans. Americans showed a stronger preference for using expertise-based information sources (eg, Web

  6. Closure simulation of the MSIV of Unit 1 of the Laguna Verde nuclear power plant using the Simulate 3K code

    International Nuclear Information System (INIS)

    Alegria A, A.

    2015-09-01

    In this paper the simulation of the closure transient of all main steam isolation valves (MSIV) was performed with the Simulate-3K (S-3K) code for Unit 1 of the Laguna Verde nuclear power plant (NPP-LV), which operates at a thermal power of 2317 MWt, corresponding to cycle 15 of operation. The set points for the performance of systems correspond to those set out in the transient analysis: 3 seconds for the closure of all MSIV; the start of Scram when 121% of the neutron flux is reached, relative to the baseline before the transient; the opening in pairs of the safety relief valves (SRV) in relief mode when the pressure set point is reached; the trip of the feedwater flow seconds after the start of the closure of the MSIV; and the trip of the recirculation water pumps when a dome pressure of 1048 psig is reached. The simulation time was 57 seconds, with the first 50 used to reach the steady state, from which the closure of all MSIV starts. The behavior of the dome pressure, thermal power, neutron flux, collapsed water level, core inlet flow, steam flow leaving the vessel and flow through the SRV are analyzed, together with the fuel temperature, the minimum critical power ratio, the readings of the instrumentation systems and the reactivities. Instrumentation systems were implemented to analyze the neutron flux; these consist of 96 local power range monitors (LPRM) located at different radial and axial positions of the core and 4 channels of average power range monitors, each grouping 24 LPRM. The LPRM response to the change of neutron flux in the center of the core, at different axial positions, is also shown. Finally, the results show that the MCPR safety limit is not exceeded. (Author)

  7. Review and evaluation of the Millstone Unit 3 probabilistic safety study. Containment failure modes, radiological source - terms and offsite consequences

    International Nuclear Information System (INIS)

    Khatib-Rahbar, M.; Pratt, W.; Ludewig, H.

    1985-09-01

    A technical review and evaluation of the Millstone Unit 3 probabilistic safety study has been performed. It was determined that: (1) long-term damage indices (latent fatalities, person-rem, etc.) are dominated by late failure of the containment; (2) short-term damage indices (early fatalities, etc.) are dominated by bypass sequences for internally initiated events, while severe seismic sequences can also contribute significantly to early damage indices. These overall estimates of severe accident risk are extremely low compared with other societal sources of risk. Furthermore, the risks for Millstone-3 are comparable to risks from other nuclear plants at high population sites. Seismically induced accidents dominate the severe accident risks at Millstone-3. Potential mitigative features were shown not to be cost-effective for internal events. Value-impact analysis for seismic events showed that a manually actuated containment spray system might be cost-effective.

  8. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper discusses five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code are presented, a sample application is given, and the present status and future development plans are discussed.

  9. Discharge source coupled to a deceleration unit for anion beam generation: Application to H{sub 2}{sup −} photodetachment

    Energy Technology Data Exchange (ETDEWEB)

    Rudnev, V.; Ureña, A. González [Unidad de Láseres y Haces Moleculares, Instituto Pluridisciplinar, Universidad Complutense, Juan XXIII-1, Madrid 28040 (Spain)

    2013-12-15

    A cathode discharge source coupled to a deceleration unit for anion beam generation is described. The discharge source, made of stainless steel or duralumin electrodes and Macor insulators, is attached to the exit nozzle valve plate at one end, and to an Einzel lens at the other end. Subsequently, a cylindrical retardation unit is attached to the Einzel lens to decelerate the ions in order to optimize the laser beam interaction time required for spectroscopic investigations. The compact device is able to produce beam intensities of the order of 2 × 10{sup 12} anions/cm{sup 2} s and 20 μrad of angular divergence with kinetic energies ranging from 30 to 120 eV. Using distinct gas mixtures for the supersonic expansion together with a linear time-of-flight spectrometer, anions of great relevance in molecular astrophysics like, for example, H{sub 2}{sup −}, C{sub 3}H{sup −}, C{sub 2}{sup −}, C{sub 2}H{sup −}, HCN{sub 2}{sup −}, CO{sub 2}{sup −}, CO{sub 2}H{sup −}, C{sub 4}{sup −}, C{sub 4}H{sup −}, C{sub 5}H{sub 4}{sup −}, C{sub 5}H{sub 6}{sup −}, C{sub 7}N{sup −}, and C{sub 10}N{sup −} were produced. Finally, in order to demonstrate the capability of the experimental technique, the photodetachment cross-section of the metastable H{sub 2}{sup −}, predominantly in the (v = 0, J = 26) state, was measured following laser excitation at λ{sub exc}= 565 nm, obtaining a value of σ{sub ph}= 0.04 Å{sup 2}. To the best of our knowledge, this is the first time that this anion cross-section has been measured.

  10. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the ``unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the introduction of the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
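    For context, the unique decipherability property that coding partitions relax can itself be tested algorithmically for finite codes. Below is a minimal sketch of the classical Sardinas-Patterson test, included as background; it is not the partition algorithm of the paper:

    def dangling(a, b):
        """Suffixes w such that some u in a and v in b satisfy u + w == v."""
        return {v[len(u):] for u in a for v in b if v != u and v.startswith(u)}

    def is_uniquely_decipherable(code):
        """Sardinas-Patterson test for a finite code given as a set of strings."""
        code = set(code)
        s = dangling(code, code)        # S_1: dangling suffixes of the code
        seen = set()
        while s:
            if s & code:                # a residual equals a codeword: ambiguous
                return False
            seen |= s
            s = (dangling(code, s) | dangling(s, code)) - seen
        return True

    # {0, 01, 10} is not UD ("010" parses as 0.10 or 01.0); {0, 10, 110} is UD.
    print(is_uniquely_decipherable({"0", "01", "10"}))   # -> False
    print(is_uniquely_decipherable({"0", "10", "110"}))  # -> True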

  11. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    Energy Technology Data Exchange (ETDEWEB)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Grimm, Simon L. [University of Bern, Space Research and Planetary Sciences, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Mordasini, Christoph; Oreshenko, Maria; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Bonnefoy, Mickaël [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Ehrenreich, David, E-mail: baptiste.lavie@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Observatoire de l’Université de Genève, 51 chemin des Maillettes, 1290, Sauverny (Switzerland)

    2017-09-01

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).
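    The model comparison step rests on Bayes factors computed from the nested-sampling evidences; a generic sketch of how such a comparison works (the log-evidence values below are invented for illustration, not the paper's results):

    # Compare two retrieval models by their nested-sampling log-evidences.
    logZ_unconstrained = -1052.3   # hypothetical ln Z, unconstrained chemistry
    logZ_equilibrium   = -1060.1   # hypothetical ln Z, equilibrium chemistry

    ln_bayes_factor = logZ_unconstrained - logZ_equilibrium
    print(f"ln K = {ln_bayes_factor:.1f}")
    # On the usual Jeffreys-type scale, ln K above roughly 5 is read as
    # strong evidence for the first model, which is how "chemical
    # equilibrium is disfavored" conclusions are typically phrased.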

  12. Simulation of equivalent dose due to accidental electron beam loss in Indus-1 and Indus-2 synchrotron radiation sources using FLUKA code

    International Nuclear Information System (INIS)

    Sahani, P.K.; Dev, Vipin; Singh, Gurnam; Haridas, G.; Thakkar, K.K.; Sarkar, P.K.; Sharma, D.N.

    2008-01-01

    Indus-1 and Indus-2 are two synchrotron radiation sources at the Raja Ramanna Centre for Advanced Technology (RRCAT), India. The stored electron energies in Indus-1 and Indus-2 are 450 MeV and 2.5 GeV, respectively. During operation of the storage ring, accidental electron beam loss may occur in addition to normal beam losses. The Bremsstrahlung radiation produced by these beam losses creates a major radiation hazard in these high energy electron accelerators. FLUKA, the Monte Carlo radiation transport code, is used to simulate the accidental beam loss. The simulation was carried out to estimate the equivalent dose likely to be received by a trapped person close to the storage ring. Depth dose profiles in a water phantom for the 450 MeV and 2.5 GeV electron beams were generated, from which the percentage of energy absorbed in a 30 cm water phantom (analogous to the human body) was calculated. The simulation showed that the percentage energy deposition in the phantom is about 19% for 450 MeV electrons and 4.3% for 2.5 GeV electrons. The dose build-up factors in the 30 cm water phantom for the 450 MeV and 2.5 GeV electron beams are found to be 1.85 and 2.94, respectively. Based on the depth dose profile, dose equivalent indices of 0.026 Sv and 1.08 Sv are likely to be received by a trapped person near the storage ring in Indus-1 and Indus-2, respectively. (author)
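    The quoted absorbed fractions translate into energy deposited per lost electron as follows (a simple worked check; only the percentages come from the abstract):

    # Energy deposited in a 30 cm water phantom per lost electron.
    for name, e_mev, frac in [("Indus-1", 450.0, 0.19),
                              ("Indus-2", 2500.0, 0.043)]:
        print(f"{name}: {e_mev * frac:.0f} MeV deposited per lost electron")
    # Higher-energy electrons deposit a smaller fraction: the electromagnetic
    # shower penetrates deeper, so more energy escapes a 30 cm phantom.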

  13. REE enrichment in granite-derived regolith deposits of the southeast United States: Prospective source rocks and accumulation processes

    Science.gov (United States)

    Foley, Nora K.; Ayuso, Robert A.; Simandl, G.J.; Neetz, M.

    2015-01-01

    The Southeastern United States contains numerous anorogenic, or A-type, granites, which constitute promising source rocks for REE-enriched ion adsorption clay deposits due to their inherently high concentrations of REE. These granites have undergone a long history of chemical weathering, resulting in thick granite-derived regoliths, akin to those of South China, which supply virtually all heavy REE and Y, and a significant portion of light REE to global markets. Detailed comparisons of granite regolith profiles formed on the Stewartsville and Striped Rock plutons, and the Robertson River batholith (Virginia) indicate that REE are mobile and can attain grades comparable to those of deposits currently mined in China. A REE-enriched parent, either A-type or I-type (highly fractionated igneous type) granite, is thought to be critical for generating the high concentrations of REE in regolith profiles. One prominent feature we recognize in many granites and mineralized regoliths is the tetrad behaviour displayed in REE chondrite-normalized patterns. Tetrad patterns in granite and regolith result from processes that promote the redistribution, enrichment, and fractionation of REE, such as late- to post- magmatic alteration of granite and silicate hydrolysis in the regolith. Thus, REE patterns showing tetrad effects may be a key for discriminating highly prospective source rocks and regoliths with potential for REE ion adsorption clay deposits.

  14. Dietary sources of energy, solid fats, and added sugars among children and adolescents in the United States.

    Science.gov (United States)

    Reedy, Jill; Krebs-Smith, Susan M

    2010-10-01

    The objective of this research was to identify top dietary sources of energy, solid fats, and added sugars among 2- to 18-year-olds in the United States. Data from the National Health and Nutrition Examination Survey, a cross-sectional study, were used to examine food sources (percentage contribution and mean intake with standard errors) of total energy (data from 2005-2006) and energy from solid fats and added sugars (data from 2003-2004). Differences were investigated by age, sex, race/ethnicity, and family income, and the consumption of empty calories-defined as the sum of energy from solid fats and added sugars-was compared with the corresponding discretionary calorie allowance. The top sources of energy for 2- to 18-year-olds were grain desserts (138 kcal/day), pizza (136 kcal/day), and soda (118 kcal/day). Sugar-sweetened beverages (soda and fruit drinks combined) provided 173 kcal/day. Major contributors varied by age, sex, race/ethnicity, and income. Nearly 40% of total energy consumed (798 of 2,027 kcal/day) by 2- to 18-year-olds were in the form of empty calories (433 kcal from solid fat and 365 kcal from added sugars). Consumption of empty calories far exceeded the corresponding discretionary calorie allowance for all sex-age groups (which range from 8% to 20%). Half of empty calories came from six foods: soda, fruit drinks, dairy desserts, grain desserts, pizza, and whole milk. There is an overlap between the major sources of energy and empty calories: soda, grain desserts, pizza, and whole milk. The landscape of choices available to children and adolescents must change to provide fewer unhealthy foods and more healthy foods with less energy. Identifying top sources of energy and empty calories can provide targets for changes in the marketplace and food environment. However, product reformulation alone is not sufficient-the flow of empty calories into the food supply must be reduced.

  15. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x, written as ...
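    Since only a fragment of this record survives, a minimal self-contained sketch of the Huffman procedure it introduces may be useful; the symbol probabilities are illustrative:

    import heapq

    def huffman_code(probs):
        """Build a Huffman code for a source with the given symbol probabilities."""
        # Heap entries: (probability, tiebreak, {symbol: codeword-so-far}).
        heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
        heapq.heapify(heap)
        n = len(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)   # two least probable subtrees
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, n, merged))
            n += 1
        return heap[0][2]

    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    # -> e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'}; the codeword
    #    lengths equal -log2(p), so the expected length matches the entropy.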

  16. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tab.

  17. Calculations of fuel burn up and radionuclide inventories in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION codes

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor as a function of the reactor operating time for 10, 20, and 30 kW operating power levels. The uranium burnup rate and burnup percentage, the amounts of the plutonium isotopes, the concentrations and radioactivities of the fission products and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are calculated using the WIMSD4 code as well. The CITATION code is used to calculate the changes in the effective multiplication factor of the reactor. (author)

  18. Unclassified Source Term and Radionuclide Data for Corrective Action Unit 98: Frenchman Flat Nevada Test Site, Nevada, Rev. No.: 0

    Energy Technology Data Exchange (ETDEWEB)

    Farnham, Irene

    2005-09-01

    Frenchman Flat is one of several areas of the Nevada Test Site (NTS) used for underground nuclear testing (Figure 1-1). These nuclear tests resulted in groundwater contamination in the vicinity of the underground test areas. As a result, the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) is currently conducting a corrective action investigation (CAI) of the Frenchman Flat underground test areas. Since 1996, the Nevada Division of Environmental Protection (NDEP) has regulated NNSA/NSO corrective actions through the ''Federal Facility Agreement and Consent Order'' ([FFACO], 1996). Appendix VI of the FFACO agreement, ''Corrective Action Strategy'', was revised on December 7, 2000, and describes the processes that will be used to complete corrective actions, including those in the Underground Test Area (UGTA) Project. The individual locations covered by the agreement are known as corrective action sites (CASs), which are grouped into corrective action units (CAUs). The UGTA CASs are grouped geographically into five CAUs: Frenchman Flat, Central Pahute Mesa, Western Pahute Mesa, Yucca Flat/Climax Mine, and Rainier Mesa/Shoshone Mountain (Figure 1-1). These CAUs have distinctly different contaminant source, geologic, and hydrogeologic characteristics related to their location (FFACO, 1996). The Frenchman Flat CAU consists of 10 CASs located in the northern part of Area 5 and the southern part of Area 11 (Figure 1-1). This report documents the evaluation of the information and data available on the unclassified source term and radionuclide contamination for Frenchman Flat, CAU 98. The methodology used to estimate hydrologic source terms (HSTs) for the Frenchman Flat CAU is also documented. The HST of an underground nuclear test is the portion of the total inventory of radionuclides that is released over time into the groundwater following the test. The total residual inventory

  19. Unclassified Source Term and Radionuclide Data for Corrective Action Unit 98: Frenchman Flat Nevada Test Site, Nevada

    International Nuclear Information System (INIS)

    Farnham, Irene

    2005-01-01

    Frenchman Flat is one of several areas of the Nevada Test Site (NTS) used for underground nuclear testing (Figure 1-1). These nuclear tests resulted in groundwater contamination in the vicinity of the underground test areas. As a result, the U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Site Office (NNSA/NSO) is currently conducting a corrective action investigation (CAI) of the Frenchman Flat underground test areas. Since 1996, the Nevada Division of Environmental Protection (NDEP) has regulated NNSA/NSO corrective actions through the ''Federal Facility Agreement and Consent Order'' ([FFACO], 1996). Appendix VI of the FFACO agreement, ''Corrective Action Strategy'', was revised on December 7, 2000, and describes the processes that will be used to complete corrective actions, including those in the Underground Test Area (UGTA) Project. The individual locations covered by the agreement are known as corrective action sites (CASs), which are grouped into corrective action units (CAUs). The UGTA CASs are grouped geographically into five CAUs: Frenchman Flat, Central Pahute Mesa, Western Pahute Mesa, Yucca Flat/Climax Mine, and Rainier Mesa/Shoshone Mountain (Figure 1-1). These CAUs have distinctly different contaminant source, geologic, and hydrogeologic characteristics related to their location (FFACO, 1996). The Frenchman Flat CAU consists of 10 CASs located in the northern part of Area 5 and the southern part of Area 11 (Figure 1-1). This report documents the evaluation of the information and data available on the unclassified source term and radionuclide contamination for Frenchman Flat, CAU 98. The methodology used to estimate hydrologic source terms (HSTs) for the Frenchman Flat CAU is also documented. The HST of an underground nuclear test is the portion of the total inventory of radionuclides that is released over time into the groundwater following the test. The total residual inventory of radionuclides associated with one or

  20. Quantifying underreporting of law-enforcement-related deaths in United States vital statistics and news-media-based data sources: A capture-recapture analysis.

    Science.gov (United States)

    Feldman, Justin M; Gruskin, Sofia; Coull, Brent A; Krieger, Nancy

    2017-10-01

    Prior research suggests that United States governmental sources documenting the number of law-enforcement-related deaths (i.e., fatalities due to injuries inflicted by law enforcement officers) undercount these incidents. The National Vital Statistics System (NVSS), administered by the federal government and based on state death certificate data, identifies such deaths by assigning them diagnostic codes corresponding to "legal intervention" in accordance with the International Classification of Diseases-10th Revision (ICD-10). Newer, nongovernmental databases track law-enforcement-related deaths by compiling news media reports and provide an opportunity to assess the magnitude and determinants of suspected NVSS underreporting. Our a priori hypotheses were that underreporting by the NVSS would exceed that by the news media sources, and that underreporting rates would be higher for decedents of color versus white, decedents in lower versus higher income counties, decedents killed by non-firearm (e.g., Taser) versus firearm mechanisms, and deaths recorded by a medical examiner versus coroner. We created a new US-wide dataset by matching cases reported in a nongovernmental, news-media-based dataset produced by the newspaper The Guardian, The Counted, to identifiable NVSS mortality records for 2015. We conducted 2 main analyses for this cross-sectional study: (1) an estimate of the total number of deaths and the proportion unreported by each source using capture-recapture analysis and (2) an assessment of correlates of underreporting of law-enforcement-related deaths (demographic characteristics of the decedent, mechanism of death, death investigator type [medical examiner versus coroner], county median income, and county urbanicity) in the NVSS using multilevel logistic regression. We estimated that the total number of law-enforcement-related deaths in 2015 was 1,166 (95% CI: 1,153, 1,184). There were 599 deaths reported in The Counted only, 36 reported in the NVSS
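    The capture-recapture step can be reproduced with the Lincoln-Petersen estimator (and Chapman's small-sample correction). In the sketch below, the counts unique to each source come from the abstract, while the number matched in both sources is truncated out of this excerpt, so an assumed value consistent with the reported total of about 1,166 is used:

    # Two-source capture-recapture estimate of total law-enforcement-related
    # deaths. m (deaths matched in BOTH sources) is an assumed value chosen
    # to be consistent with the reported total, not a figure quoted here.
    counted_only = 599
    nvss_only = 36
    m = 487                          # assumed overlap

    n1 = counted_only + m            # total captured by The Counted
    n2 = nvss_only + m               # total captured by the NVSS
    lincoln_petersen = n1 * n2 / m
    chapman = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    print(f"LP estimate: {lincoln_petersen:.0f}, Chapman: {chapman:.0f}")

    for name, n in [("NVSS", n2), ("The Counted", n1)]:
        print(f"{name} captured {n / lincoln_petersen:.0%} of estimated deaths")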

  1. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.

  2. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

    Full Text Available We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) codes in spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the most important noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can achieve −10 dBm, which is less than that required by the other systems.

  3. Challenges to code status discussions for pediatric patients.

    Directory of Open Access Journals (Sweden)

    Katherine E Kruse

    Full Text Available In the context of serious or life-limiting illness, pediatric patients and their families are faced with difficult decisions surrounding appropriate resuscitation efforts in the event of a cardiopulmonary arrest. Code status orders are one way to inform end-of-life medical decision making. The objectives of this study are to evaluate the extent to which pediatric providers have knowledge of code status options and explore the association of provider role with (1) knowledge of code status options, (2) perception of timing of code status discussions, (3) perception of family receptivity to code status discussions, and (4) comfort carrying out code status discussions. Nurses, trainees (residents and fellows), and attending physicians from pediatric units where code status discussions typically occur completed a short survey questionnaire regarding their knowledge of code status options and perceptions surrounding code status discussions. The setting was a single center, quaternary care children's hospital. 203 nurses, 31 trainees, and 29 attending physicians in 4 high-acuity pediatric units responded to the survey (N = 263, 90% response rate). Based on an objective knowledge measure, providers demonstrate poor understanding of available code status options, with only 22% of providers able to enumerate more than two of four available code status options. In contrast, provider groups self-report high levels of familiarity with available code status options, with attending physicians reporting significantly higher levels than nurses and trainees (p = 0.0125). Nurses and attending physicians show significantly different perceptions of code status discussion timing, with the majority of nurses (63.4%) perceiving discussions as occurring "too late" or "much too late" and the majority of attending physicians (55.6%) perceiving the timing as "about right" (p < 0.0001). Attending physicians report significantly higher comfort having code status discussions with families than do nurses or trainees.

  4. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...
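    A toy noiseless instance may make the setup concrete; this uses plain binary XOR coding rather than the lattice construction the paper develops, and assumes each receiver's side information is all messages but one:

    # One sender, K = 3 messages; every receiver wants all messages and
    # already knows two of them. A single XOR broadcast serves everyone.
    msgs = [0b1011, 0b0110, 0b1100]          # three 4-bit messages
    coded = msgs[0] ^ msgs[1] ^ msgs[2]      # the one broadcast symbol

    for i in range(3):                       # receiver i is missing msgs[i]
        recovered = coded
        for j, m in enumerate(msgs):
            if j != i:                       # XOR out the side information
                recovered ^= m
        assert recovered == msgs[i]
    print("each receiver recovers its missing message from one broadcast")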

  5. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  6. Anthropogenic organic compounds in source water of select community water systems in the United States, 2002-10

    Science.gov (United States)

    Valder, Joshua F.; Delzer, Gregory C.; Kingsbury, James A.; Hopple, Jessica A.; Price, Curtis V.; Bender, David A.

    2014-01-01

    Drinking water delivered by community water systems (CWSs) comes from one or both of two sources: surface water and groundwater. Source water is raw, untreated water used by CWSs and is usually treated before distribution to consumers. Beginning in 2002, the U.S. Geological Survey’s (USGS) National Water-Quality Assessment Program initiated Source Water-Quality Assessments (SWQAs) at select CWSs across the United States, primarily to characterize the occurrence of a large number of anthropogenic organic compounds that are predominantly unregulated by the U.S. Environmental Protection Agency. Source-water samples from CWSs were collected during 2002–10 from 20 surface-water sites (river intakes) and during 2002–09 from 448 groundwater sites (supply wells). River intakes were sampled approximately 16 times during a 1-year sampling period, and supply wells were sampled once. Samples were monitored for 265 anthropogenic organic compounds. An additional 3 herbicides and 16 herbicide degradates were monitored in samples collected from 8 river intakes and 118 supply wells in areas where these compounds likely have been used. Thirty-seven compounds have an established U.S. Environmental Protection Agency (EPA) Maximum Contaminant Level (MCL) for drinking water, 123 have USGS Health-Based Screening Levels (HBSLs), and 29 are included on the EPA Contaminant Candidate List 3. All compounds detected in source water were evaluated both with and without an assessment level and were grouped into 13 categories (hereafter termed as “use groups”) based on their primary use or source. The CWS sites were characterized in a national context using an extract of the EPA Safe Drinking Water Information System to develop spatially derived and system-specific ancillary data. Community water system information is contained in the EPA Public Supply Database, which includes 2,016 active river intakes and 112,099 active supply wells. Ancillary variables including population served

  7. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
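    The stabilized linear inversion at the heart of INVERT can be sketched generically as Tikhonov-damped least squares; the operator, data, and damping below are synthetic placeholders, not Nevada Test Site data or the code's actual formulation:

    import numpy as np

    # Solve G m = d for a density model m from noisy gravity data d, with
    # Tikhonov damping to stabilize the ill-conditioned normal equations.
    rng = np.random.default_rng(0)
    G = rng.normal(size=(40, 25))                 # forward operator
    m_true = rng.normal(size=25)
    d = G @ m_true + 0.05 * rng.normal(size=40)   # noisy Bouguer anomalies

    lam = 0.1                                     # stabilization parameter
    m_est = np.linalg.solve(G.T @ G + lam * np.eye(25), G.T @ d)
    print("relative model misfit:",
          np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))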

  8. Radioactive releases of nuclear power plants: the code ASTEC

    International Nuclear Information System (INIS)

    Sdouz, G.; Pachole, M.

    1999-11-01

    In order to adopt potential countermeasures to protect the population during the course of an accident in a nuclear power plant, a fast prediction of the radiation exposure is necessary. The basic input value for such a dispersion calculation is the source term, which is the description of the physical and chemical behavior of the released radioactive nuclides. Based on a source term database, a pilot system has been developed to determine a relevant source term and to generate the input file for the dispersion code TAMOS of the Zentralanstalt fuer Meteorologie und Geodynamik (ZAMG). This file can be sent directly as an e-mail attachment to the TAMOS user for further processing. The source terms for 56 European nuclear power plant units are included in the pilot version of the code ASTEC (Austrian Source Term Estimation Code). The use of the system is demonstrated with an example based on an accident in the unit TEMELIN-1. In order to calculate typical core inventories for the database, the international computer code ORIGEN 2.1 was installed and applied. The report concludes with a discussion of optimal data transfer. (author)

  9. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  10. Health Literacy and Preferences for Sources of Child Health Information of Mothers With Infants in the Neonatal Intensive Care Unit.

    Science.gov (United States)

    Skeens, Kristen; Logsdon, M Cynthia; Stikes, Reetta; Ryan, Lesa; Sparks, Kathryn; Hayes, Pauline; Myers, John; Davis, Deborah Winders

    2016-08-01

    Parents of infants hospitalized in the neonatal intensive care unit (NICU) frequently need guidance to prepare them for the care and health promotion of their child after hospital discharge. The health literacy of the parents should be considered so that education can be tailored to meet their needs. It is also important to understand the parents' preferences for how, and from whom, they receive education. The purpose of this study was to identify health literacy levels of parents of infants in an NICU and preferences for who they want to provide them with education. An exploratory, descriptive design was used to assess participant health literacy and preferences for obtaining child health information. Only mothers (no fathers) with babies in the NICU were available to complete the survey. Mean participant age was 26.4 years (SD = 6.7). Participants had a mean Rapid Estimate of Adult Literacy in Medicine, Revised, score of 5.64 (SD = 2.4), indicating a low level of health literacy. Questions regarding when to administer medication were correctly answered by 69% of participants. Proper medication dosage was understood by 92% of participants; however, only 30% were able to correctly convert measurements. One-on-one discussions with a physician were the preferred source of health information for 80% of participants. The current exploratory study provides new information that will help inform the development of future studies and increase awareness of nurses regarding health literacy and the specific types of skills for which parents need the most help.

  11. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list...... decoding algorithm for matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units....

  12. Software testing and source code for the calculation of clearance values. Final report; Erprobung von Software und Quellcode zur Berechnung von Freigabewerten. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Meyering, Henrich

    2016-11-15

    The GRS research project was aimed at testing the appropriateness of the software package ''residual radioactivity'' (RESRAD) for the calculation of clearance values according to German and European regulations. Comparative evaluations were performed between RESRAD-OFFSITE, the code SiWa-PRO DSS used by GRS, and the GRS program code ARTM. It is recommended to use RESRAD-OFFSITE for comparative calculations. The dose-relevant air-path dispersion of radionuclides should not be modeled using RESRAD-OFFSITE; the use of ARTM is recommended instead. The sensitivity analysis integrated into RESRAD-OFFSITE allows a fast identification of crucial parameters.

  13. GAMMA-CLOUD: a computer code for calculating gamma-exposure due to a radioactive cloud released from a point source

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, O [Chugoku Electric Power Co. Inc., Hiroshima (Japan); Sawaguchi, Y; Kaneko, M

    1979-03-01

    A computer code, designated GAMMA-CLOUD, has been developed by specialists of electric power companies to meet requests from the companies for a unified means of calculating annual external doses from routine releases of radioactive gaseous effluents from nuclear power plants, based on the Japan Atomic Energy Commission's guides for environmental dose evaluation. GAMMA-CLOUD is written in FORTRAN and its required capacity is less than 100 kilobytes. The average gamma exposure at an observation point can be calculated within a few minutes with precision comparable to other existing codes.
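    One ingredient such a code must evaluate is the activity concentration in the plume, which is then folded with gamma dose-rate kernels over the cloud volume; a minimal Gaussian plume sketch with illustrative parameters (not GAMMA-CLOUD's actual model or data) follows:

    import math

    def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
        """Activity concentration (Bq/m^3) from a continuous point release.

        Q: release rate (Bq/s), u: wind speed (m/s), H: effective release
        height (m); includes the standard ground-reflection term.
        """
        return (Q / (2 * math.pi * u * sigma_y * sigma_z)
                * math.exp(-y**2 / (2 * sigma_y**2))
                * (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                   + math.exp(-(z + H)**2 / (2 * sigma_z**2))))

    chi = plume_concentration(Q=1e9, u=3.0, sigma_y=60.0, sigma_z=30.0,
                              y=0.0, z=0.0, H=50.0)
    print(f"ground-level concentration: {chi:.3e} Bq/m^3")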

  14. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
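    The entropy at the center of the Noiseless Coding Theorem is straightforward to compute; a small worked example with illustrative probabilities:

    import math

    # H(X) = -sum p(x) log2 p(x), the lower bound on expected codeword
    # length (in bits/symbol) for any uniquely decipherable code.
    def entropy(probs):
        return -sum(p * math.log2(p) for p in probs if p > 0)

    source = [0.5, 0.25, 0.125, 0.125]
    print(f"H(X) = {entropy(source):.3f} bits/symbol")   # -> 1.750
    # A Huffman code for these probabilities attains this bound exactly,
    # since every probability is a power of two.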

  15. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  16. Comparing the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Van Rompaey, B.; Zaidman, A.E.; Van Deursen, A.; Demeyer, S.

    2008-01-01

    This paper represents an extension to our previous work: Mining software repositories to study coevolution of production & test code. Proceedings of the International Conference on Software Testing, Verification, and Validation (ICST), IEEE Computer Society, 2008; doi:10.1109/ICST.2008.47

  17. To report the obtained results in the simulation with the FCS-11 and Presto codes of the two first operation cycles of the Laguna Verde Unit 1 reactor

    International Nuclear Information System (INIS)

    Montes T, J.L.; Moran L, J.M.; Cortes C, C.C.

    1990-08-01

    The objective of this work is to establish a preliminary methodology for carrying out fuel reload analyses for the Laguna Verde Unit 1 reactor, by evaluating the state of the reactor core in its first two operation cycles using the FCS2 and Presto-B codes. (Author)

  18. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
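    The levelization property is exactly the condition that the package dependency graph admits a topological order; a minimal sketch with invented package names (not the actual EAP package set):

    from graphlib import TopologicalSorter, CycleError

    # Map each package to the packages it uses; levelized means acyclic.
    deps = {
        "app":     {"physics", "mesh"},
        "physics": {"mesh", "utils"},
        "mesh":    {"utils"},
        "utils":   set(),
    }

    try:
        order = list(TopologicalSorter(deps).static_order())
        print("levelized build order:", order)   # lower levels come first
    except CycleError as e:
        print("not levelized, cycle found:", e.args[1])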

  19. The Text of the Master Agreement between the Agency and the United States of America Governing Sales of Source, By-Product and Special Nuclear Materials for Research Purposes

    International Nuclear Information System (INIS)

    1974-01-01

    The text of the Master Agreement Governing Sales of Source, By-Product and Special Nuclear Materials for Research Purposes, which has been concluded between the Agency and the Government of the United States of America, is reproduced herein for the information of all Members.

  20. Microbial pathogens in source and treated waters from drinking water treatment plants in the United States and implications for human health

    Science.gov (United States)

    An occurrence survey was conducted on selected pathogens in source and treated drinking water collected from 25 drinking water treatment plants (DWTPs) in the United States. Water samples were analyzed for the protozoa Giardia and Cryptosporidium (EPA Method 1623); the fungi Aspe...

  1. Attitudes of U.S. retailers toward China, Canada, and the United States as manufacturing sources for furniture: an assessment of competitive priorities

    Science.gov (United States)

    Urs Buehlmann; Matthew Bumgardner; Torsten Lihra; Mary Frye

    2006-01-01

    While much has been written regarding the declining global competitiveness of U.S. furniture manufacturing and the subsequent loss of domestic market share and jobs, less is known about the role of retailers in furniture importing. This study investigated the attitudes of U.S. furniture retailers toward China, Canada, and the United States as manufacturing sources for...

  2. Dietary sources of methylated arsenic species in urine of the United States population, NHANES 2003-2010.

    Directory of Open Access Journals (Sweden)

    B Rey deCastro

    Full Text Available BACKGROUND: Arsenic is a ubiquitous element linked to carcinogenicity, neurotoxicity, as well as adverse respiratory, gastrointestinal, hepatic, and dermal health effects. OBJECTIVE: Identify dietary sources of speciated arsenic: monomethylarsonic acid (MMA) and dimethylarsinic acid (DMA). METHODS: Age-stratified, sample-weighted regression of NHANES (National Health and Nutrition Examination Survey) 2003-2010 data (∼8,300 participants ≥6 years old) characterized the association between urinary arsenic species and the additional mass consumed of USDA-standardized food groups (24-hour dietary recall data), controlling for potential confounders. RESULTS: For all arsenic species, the rank-order of age strata for median urinary molar concentration was children 6-11 years > adults 20-84 years > adolescents 12-19 years, and for all age strata, the rank-order was DMA > MMA. Median urinary molar concentrations of methylated arsenic species ranged from 0.56 to 3.52 µmol/mol creatinine. Statistically significant increases in urinary arsenic species were associated with increased consumption of: fish (DMA); fruits (DMA, MMA); grain products (DMA, MMA); legumes, nuts, seeds (DMA); meat, poultry (DMA); rice (DMA, MMA); rice cakes/crackers (DMA, MMA); and sugars, sweets, beverages (MMA); and, for adults, rice beverage/milk (DMA, MMA). In addition, based on US (United States) median and 90th percentile consumption rates of each food group, exposure from the following food groups was highlighted: fish; fruits; grain products; legumes, nuts, seeds; meat, poultry; and sugars, sweets, beverages. CONCLUSIONS: In a nationally representative sample of the US civilian, noninstitutionalized population, fish (adults), rice (children), and rice cakes/crackers (adolescents) had the largest associations with urinary DMA. For MMA, rice beverage/milk (adults) and rice cakes/crackers (children, adolescents) had the largest associations.

  3. Other Solid Waste Incineration (OSWI) Units Standards of Performance for New Stationary Sources and Emission Guidelines for Existing Sources Fact Sheets

    Science.gov (United States)

    This page contains November 2005 and November 2006 fact sheets with information regarding the final and proposed NSPS and Emission Guidelines for Existing Sources for OSWI. These documents provide a summary of the information for this regulation.

  4. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence is generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.
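
    The syndrome-based Slepian-Wolf step mentioned above can be shown in miniature. The sketch below (Python) uses a toy Hamming-code parity-check matrix in place of the long LDPC codes the authors use, and exhaustive coset search in place of belief propagation; all sizes and values are illustrative assumptions, not the paper's design.

      import itertools
      import numpy as np

      # Toy (7,4) Hamming parity-check matrix; practical Slepian-Wolf
      # systems use long LDPC codes decoded by belief propagation.
      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def encode(x):
          # The encoder transmits only the syndrome of the source bits.
          return H.dot(x) % 2

      def decode(syndrome, side_info):
          # Return the word in the syndrome's coset closest to the side information.
          best, best_dist = None, len(side_info) + 1
          for cand in itertools.product((0, 1), repeat=H.shape[1]):
              cand = np.array(cand)
              if np.array_equal(H.dot(cand) % 2, syndrome):
                  dist = int(np.sum(cand != side_info))
                  if dist < best_dist:
                      best, best_dist = cand, dist
          return best

      x = np.array([1, 0, 1, 1, 0, 0, 1])   # source bits at the encoder
      y = np.array([1, 0, 1, 1, 1, 0, 1])   # correlated side information at the decoder
      print(decode(encode(x), y))           # -> [1 0 1 1 0 0 1]: 3 syndrome bits replace 7

    Because the toy code's minimum distance is 3, the decoder recovers the source exactly whenever the side information differs from it in at most one position.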

  5. The OpenPMU Platform for Open Source Phasor Measurements

    OpenAIRE

    Laverty, David M.; Best, Robert J.; Brogan, Paul; Al-Khatib, Iyad; Vanfretti, Luigi; Morrow, D John

    2013-01-01

    OpenPMU is an open platform for the development of phasor measurement unit (PMU) technology. A need has been identified for an open-source alternative to commercial PMU devices tailored to the needs of the university researcher and for enabling the development of new synchrophasor instruments from this foundation. OpenPMU achieves this through open-source hardware design specifications and software source code, allowing duplicates of the OpenPMU to be fabricated under open-source licenses. Th...
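
    As a rough sketch of what such an instrument computes, the snippet below estimates a synchrophasor (RMS magnitude and phase) from one cycle of samples using a single-bin DFT. The sampling rate, nominal frequency and estimator are assumptions chosen for illustration, not OpenPMU's actual processing chain.

      import numpy as np

      def phasor(samples, samples_per_cycle):
          # Single-bin DFT over one nominal cycle -> RMS magnitude, phase in degrees.
          n = np.arange(samples_per_cycle)
          w = np.exp(-2j * np.pi * n / samples_per_cycle)
          X = np.sqrt(2) / samples_per_cycle * np.sum(samples[:samples_per_cycle] * w)
          return abs(X), np.angle(X, deg=True)

      fs, f0 = 12800, 50                    # assumed sampling rate (Hz) and nominal frequency (Hz)
      t = np.arange(fs // f0) / fs
      v = 230 * np.sqrt(2) * np.cos(2 * np.pi * f0 * t + np.radians(30))
      print(phasor(v, fs // f0))            # ~ (230.0, 30.0)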

  6. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  7. Responsibilities of the exporting state derived from the application of the code of conduct on the safety and security of radioactive sources and the guidance on the import and export

    International Nuclear Information System (INIS)

    Vidal, Dora

    2008-01-01

    Full text: 'The exporting state in deciding whether to authorize an export of radioactive sources should satisfy itself, insofar as practicable: 1) That the recipient is authorized by the importing state to receive and possess the source in accordance with its laws and regulations; 2) That the importing state has the appropriate technical and administrative capability, resources and regulatory structure needed for the management of the source(s) in a manner consistent with the guidance in the code, and consider, based upon available information: i) Whether the recipient has been engaged in clandestine or illegal procurement of radioactive sources; ii) Whether an import or export authorization for radioactive sources has been denied to the recipient or importing state, or whether the recipient or importing state has diverted for purposes inconsistent with the code any import or export of radioactive sources previously authorized; and iii) The risk of diversion or malicious acts involving radioactive sources'. Once it has decided to authorize the export, it should also take 'appropriate steps to ensure that the export is conducted in a manner consistent with existing relevant international standards relating to the transport of radioactive materials and the importing State is notified in advance of each shipment'. The Guidance has made a considerable effort in setting out the requirements that the importing State has to fulfill, and it is the exporting State which has to verify, satisfy itself, and consider whether these requirements are in place. The responsibility of the exporting State is remarkable in that it must analyze the export from the point of view of the capabilities of the importing State to manage the sources, both for their intended use and even once they are no longer useful. This paper has the intention of bringing to reflection the responsibility of the exporting State in relation to those radioactive sources that are exported or are to be exported and have to fulfill with the

  8. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    Full Text Available As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  9. United States Congressional Districts from LEGIS source data, Geographic NAD83, LOSCO (2004) [us_congress_LEGIS_2001

    Data.gov (United States)

    Louisiana Geographic Information Center — United States Congressional Districts. The district boundaries are the result of legislative acts and redistricting. Reapportionment (redistricting) occurs during...

  10. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. To construct the converter of a continuous code corresponding to the input signal amplitude into the Grey code, use is made of the regularity with which units and zeroes recur in each bit of the Grey code as the pulse count of the continuous code changes continuously. The converter is built from elements of the 155 series; the pulse rate of the continuous code at the converter input is 25 MHz.
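
    The regularity the converter exploits is easiest to see in the standard software conversions between binary and Gray code, sketched below; this illustrates the code itself, not the hardware circuit described above.

      def binary_to_gray(b: int) -> int:
          # Adjacent values differ in exactly one bit of the result.
          return b ^ (b >> 1)

      def gray_to_binary(g: int) -> int:
          # Undo the XOR cascade by folding in successively shifted copies.
          b = 0
          while g:
              b ^= g
              g >>= 1
          return b

      for i in range(8):
          print(i, format(binary_to_gray(i), '03b'))
      # prints 000 001 011 010 110 111 101 100: one bit changes per step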

  11. The effectiveness of environmental strategies on noise reduction in a pediatric intensive care unit: creation of single-patient bedrooms and reducing noise sources.

    Science.gov (United States)

    Kol, Emine; Aydın, Perihan; Dursun, Oguz

    2015-07-01

    Noise is a substantial problem for both patients and healthcare workers in hospitals. This study aimed to determine the effectiveness of environmental strategies (creating single-patient rooms and reducing noise sources) for noise reduction in a pediatric intensive care unit. Noise measurement in the unit was conducted in two phases. In the first phase, measurements aimed at determining the unit's present noise level were performed over 4 weeks in December 2013. During the month following the first measurement phase, the intensive care unit (ICU) was moved to a new location and noise-reducing strategies were implemented. The second phase, in May 2014, measured noise levels in the newly constructed environment. The noise levels before and after the environmental changes were 72.6 dB-A and 56 dB-A, respectively, a statistically significant reduction; noise-reducing strategies can thus be effective in controlling environmental noise in the ICU. © 2015, Wiley Periodicals, Inc.

  12. Conceptual design of a FGM thermoelectric energy conversion system for high temperature heat source. 1. Design of thermoelectric energy conversion unit

    International Nuclear Information System (INIS)

    Kambe, Mitsuru; Teraki, Junichi; Hirano, Toru.

    1996-01-01

    Thermoelectric (TE) power conversion has attracted attention as a candidate direct energy conversion system for high temperature heat sources, to meet the various power requirements of the next century. A concept for an energy conversion unit using TE cell elements combined with FGM compliant pads has been presented to achieve high thermal energy density as well as high energy conversion efficiency. An energy conversion unit consists of 8 couples of P-N cell elements sandwiched between two FGM compliant pads. Performance analysis revealed that the power generated by this unit was 11 watts, nearly ten times that of a conventional unit of the same size. An energy conversion efficiency of 12% was expected based on the assumption of ZT = 1. All the members of the compliant pads as well as the TE cells could be bonded together to avoid thermal resistance. (author)
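
    For orientation, the quoted figures are consistent with the textbook efficiency of an ideal thermoelectric generator; in the sketch below the hot- and cold-side temperatures are assumptions chosen for illustration, not values taken from the paper.

      from math import sqrt

      def te_efficiency(t_hot, t_cold, zt):
          # Carnot factor times the thermoelectric material factor.
          carnot = 1.0 - t_cold / t_hot
          m = sqrt(1.0 + zt)
          return carnot * (m - 1.0) / (m + t_cold / t_hot)

      # Assumed 800 K / 350 K operating pair at ZT = 1:
      print(te_efficiency(800.0, 350.0, 1.0))   # ~0.126, close to the 12% quoted above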

  13. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  14. Investigation of the Effects of Tissue Inhomogeneities on the Dosimetric Parameters of a Cs-137 Brachytherapy Source using the MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Mehdi Zehtabian

    2010-09-01

    Full Text Available Introduction: Brachytherapy is the use of small encapsulated radioactive sources in the close vicinity of tumors. Various methods are used to obtain the dose distribution around brachytherapy sources. TG-43 is a dosimetry protocol proposed by the AAPM for determining dose distributions around brachytherapy sources. The goal of this study is to update this protocol for the presence of bone and air inhomogeneities. Material and Methods: To update the dose rate constant parameter of the TG-43 formalism, MCNP4C simulations were performed in phantoms composed of water-bone and water-air combinations. The values of dose at different distances from the source in both homogeneous and inhomogeneous phantoms were estimated in spherical tally cells of 0.5 mm radius using the F6 tally. Results: The percentages of dose reduction in the presence of air and bone inhomogeneities for the Cs-137 source were found to be 4% and 10%, respectively. Therefore, the updated dose rate constant (Λ) will also decrease by the same percentages. Discussion and Conclusion: It can easily be concluded that such dose variations are more noticeable when using lower energy sources such as Pd-103 or I-125.
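
    A minimal sketch of how an updated dose rate constant propagates through a TG-43-style calculation is given below (point-source approximation with the anisotropy function omitted; the Sk, Λ and g(r) values are illustrative placeholders, not data from the study).

      def dose_rate(sk, lam, r_cm, g_of_r, reduction=0.0):
          # TG-43 point-source approximation, 1 cm reference distance:
          # dose rate = Sk * Lambda * (r0/r)^2 * g(r), with Lambda scaled
          # down by the reduction found for the inhomogeneous phantom.
          lam_updated = lam * (1.0 - reduction)
          return sk * lam_updated * (1.0 / r_cm) ** 2 * g_of_r

      sk, lam = 10.0, 0.972   # illustrative air-kerma strength (U) and Lambda (cGy/h/U)
      print(dose_rate(sk, lam, 2.0, 0.95))                  # homogeneous water
      print(dose_rate(sk, lam, 2.0, 0.95, reduction=0.10))  # bone present: Lambda 10% lower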

  15. Electronic Contracts and the Personal data Protection of the Consumer: Sources Dialogue Between the Consumer Protection Code and the Internet Civil Mark.

    Directory of Open Access Journals (Sweden)

    Rosane Leal Da Silva

    2016-10-01

    Full Text Available This paper analyzes the protection of consumers' personal data and their vulnerability in interactive electronic contracts, aiming to identify means of defense. To this end, it uses a deductive approach, starting from electronic contracting to discuss the legal protection of the consumer in light of the capture and processing of personal data by the supplier. Considering the absence of a law on personal data, it concludes that electronic contracting increases consumer vulnerability, which requires applying the principles of the Consumer Protection Code, supplemented by the Internet Civil Mark in relation to privacy protection.

  16. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of a digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term Speech Coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
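
    A classic concrete instance of digital speech (waveform) coding is mu-law companding as used in telephony; the sketch below shows the continuous compress/expand characteristic (bit-exact G.711 uses segmented lookup tables instead).

      import numpy as np

      MU = 255.0   # companding constant used in North American/Japanese telephony

      def compress(x):
          # Map x in [-1, 1] to a companded value in [-1, 1], expanding
          # resolution near zero, where speech energy concentrates.
          return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

      def expand(y):
          return np.sign(y) * (np.power(1.0 + MU, np.abs(y)) - 1.0) / MU

      x = np.linspace(-1.0, 1.0, 5)
      q = np.round(compress(x) * 127) / 127   # 8-bit quantization of the companded signal
      print(expand(q))                        # quantization error is smallest near zero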

  17. Integrated computer codes for nuclear power plant severe accident analysis

    International Nuclear Information System (INIS)

    Jordanov, I.; Khristov, Y.

    1995-01-01

    This overview contains a description of the Modular Accident Analysis Program (MAAP), the ICARE computer code and the Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of the STCP implementations on VAX and IBM systems. In order to improve accuracy, a double precision version of the MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature and hydrogen mass are presented. 5 refs., 10 figs., 2 tabs

  18. Integrated computer codes for nuclear power plant severe accident analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jordanov, I; Khristov, Y [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika

    1996-12-31

    This overview contains a description of the Modular Accident Analysis Program (MAAP), the ICARE computer code and the Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of the STCP implementations on VAX and IBM systems. In order to improve accuracy, a double precision version of the MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature and hydrogen mass are presented. 5 refs., 10 figs., 2 tabs.

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
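
    A generic membership test makes the Tanner construction concrete: a word is a codeword exactly when its restriction to every check node's neighbourhood lies in the component code. The toy instance below uses a single parity check as the component code; the optimal codes of the article instead use cyclic component codes on a subgraph of the point-line incidence structure of A(2,q).

      def in_tanner_code(word, checks, component_ok):
          # 'checks' lists, for each check node, the indices of its
          # neighbouring variable nodes in the Tanner graph.
          return all(component_ok([word[i] for i in nbrs]) for nbrs in checks)

      def even_parity(bits):
          # Component code: accept words of even Hamming weight.
          return sum(bits) % 2 == 0

      checks = [(0, 1, 2), (2, 3, 4), (4, 5, 0)]
      print(in_tanner_code([1, 1, 0, 0, 0, 1], checks, even_parity))   # True
      print(in_tanner_code([1, 0, 0, 0, 0, 1], checks, even_parity))   # False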

  20. A study of the reverse cycle defrosting performance on a multi-circuit outdoor coil unit in an air source heat pump – Part I: Experiments

    International Nuclear Information System (INIS)

    Qu, Minglu; Xia, Liang; Deng, Shiming; Jiang, Yiqiang

    2012-01-01

    Highlights: ► We experimentally study the defrosting performance of a multi-circuit outdoor coil unit in an ASHP unit. ► We find that defrosting is quicker on the airside of the upper circuits than on the lower circuits. ► We discuss the effects of the melted frost flowing downwards along the outdoor coil surface on defrosting performance. -- Abstract: When an air source heat pump (ASHP) unit operates in heating mode, frost can accumulate on the surface of its finned outdoor coil, which normally has multiple parallel circuits on its refrigerant side for minimized refrigerant pressure loss and enhanced heat transfer efficiency. On its airside, however, there is usually no segmentation corresponding to the number of refrigerant circuits. Frosting deteriorates the operation and energy efficiency of the ASHP unit, and periodic defrosting becomes necessary. Currently, the most widely used standard defrosting method for ASHPs is reverse cycle defrosting. This paper, the first part of a two-part series, reports on the experimental part of a study of the reverse cycle defrosting performance of a multi-circuit outdoor coil unit in an experimental 6.5 kW heating capacity residential ASHP unit. Firstly, the experimental ASHP unit is described and the experimental procedures are detailed. Secondly, the experimental results are reported. This is followed by a discussion of the effects of the melted frost flowing downwards along a multi-circuit outdoor coil surface on defrosting performance. Finally, an evaluation of the defrosting efficiency of the experimental ASHP unit is provided. In the second part of the series, a modeling analysis of the effects of the melted frost flowing downwards along the multi-circuit outdoor coil surface on the defrosting performance of the experimental ASHP unit will be presented.
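
    Defrosting efficiency in such studies is commonly defined as the ratio of the energy needed to warm and melt the accumulated frost to the total energy supplied during the defrost; the sketch below applies that definition with illustrative numbers, not measurements from the paper.

      L_FUSION = 334e3   # J/kg, latent heat of fusion of ice
      C_ICE = 2100.0     # J/(kg K), specific heat of ice

      def defrost_efficiency(m_frost_kg, t_frost_c, e_supplied_j):
          # Useful energy: warm the frost to 0 degC, then melt it.
          e_useful = m_frost_kg * (C_ICE * (0.0 - t_frost_c) + L_FUSION)
          return e_useful / e_supplied_j

      print(defrost_efficiency(0.5, -5.0, 450e3))   # ~0.38 for the assumed figures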