WorldWideScience

Sample records for source coding component

  1. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed for the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design; hand calculations based on the SHIELD results have been needed to cover the design changes. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components.

  2. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes whose cyclic component codes are realized as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  3. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
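
    To make the syndrome-plus-doping idea in this record concrete, here is a minimal sketch in Python. The tiny parity-check matrix and the brute-force coset search are illustrative choices of our own; the paper itself uses LDPC-style codes with sum-product decoding.

      # Toy Slepian-Wolf syndrome coder with doping bits (illustrative only).
      import itertools
      import numpy as np

      H = np.array([[1, 1, 0, 1, 0],      # toy 2x5 parity-check matrix (hypothetical)
                    [0, 1, 1, 0, 1]])

      def encode(x, doped_positions):
          syndrome = H @ x % 2            # compressed description of x
          doping = x[doped_positions]     # a few raw "doping" bits sent uncoded
          return syndrome, doping

      def decode(syndrome, doping, doped_positions, y):
          # Pick the coset member that matches syndrome and doping bits and
          # is closest (in Hamming distance) to the side information y.
          best, best_d = None, None
          for cand in itertools.product([0, 1], repeat=H.shape[1]):
              c = np.array(cand)
              if np.any(H @ c % 2 != syndrome) or np.any(c[doped_positions] != doping):
                  continue
              d = int(np.sum(c != y))
              if best is None or d < best_d:
                  best, best_d = c, d
          return best

      x = np.array([1, 0, 1, 1, 0])       # source block
      y = np.array([1, 0, 1, 0, 0])       # correlated side information
      s, dope = encode(x, [0])
      print(decode(s, dope, [0], y))      # recovers x when correlation is high

    When X and Y are highly correlated, the coset member nearest the side information is the source block itself, which is the mechanism syndrome-based Slepian-Wolf coding relies on.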

  4. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  5. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering to shift processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide ...

  6. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
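
    The encoding step that makes LDGM codes attractive is cheap to demonstrate. Below is a hedged sketch of systematic LDGM encoding with assumed toy dimensions and row weight; the decoder and the concatenated error-floor-reduction scheme from the record are not shown.

      # Systematic LDGM encoding: G = [I | P] with a sparse parity part P.
      import numpy as np

      rng = np.random.default_rng(0)
      k, m, row_weight = 8, 4, 2                 # assumed toy sizes
      P = np.zeros((k, m), dtype=int)
      for row in P:                              # each row carries only a few ones,
          row[rng.choice(m, size=row_weight, replace=False)] = 1   # hence "low density"

      def ldgm_encode(u):
          parity = u @ P % 2                     # cheap encoding thanks to sparsity
          return np.concatenate([u, parity])     # systematic codeword [u | parity]

      u = rng.integers(0, 2, size=k)
      print(ldgm_encode(u))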

  7. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes.
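
    The feedback-driven rate adaptation described in this record reduces to a simple control loop: send syndrome increments until the decoder reports success. In the sketch below, try_decode is a hypothetical stand-in for the BCH decoder; only the control flow is meant to be faithful.

      # Rate-adaptive decoding loop over a noiseless feedback channel (sketch).
      def rate_adaptive_decode(y, syndrome_chunks, try_decode):
          """try_decode(chunks, y) -> decoded block or None (assumed helper)."""
          sent = []
          for chunk in syndrome_chunks:
              sent.append(chunk)              # encoder transmits one more increment
              x_hat = try_decode(sent, y)     # decoder retries at the higher rate
              if x_hat is not None:           # feedback channel reports success
                  return x_hat, len(sent)
          return None, len(sent)              # exhausted: full rate was needed

      # Stand-in decoder that happens to succeed after three increments.
      demo = lambda sent, y: "block" if len(sent) >= 3 else None
      print(rate_adaptive_decode(None, ["s1", "s2", "s3", "s4"], demo))  # ('block', 3)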

  8. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low Density Parity Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental ...

  9. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant.

  10. Design codes for gas cooled reactor components

    International Nuclear Information System (INIS)

    1990-12-01

    High-temperature gas-cooled reactor (HTGR) plants have been under development for about 30 years and experimental and prototype plants have been operated. The main line of development has been electricity generation based on the steam cycle. In addition the potential for high primary coolant temperature has resulted in research and development programmes for advanced applications including the direct cycle gas turbine and process heat applications. In order to compare results of the design techniques of various countries for high temperature reactor components, the IAEA established a Co-ordinated Research Programme (CRP) on Design Codes for Gas-Cooled Reactor Components. The Federal Republic of Germany, Japan, Switzerland and the USSR participated in this Co-ordinated Research Programme. Within the frame of this CRP a benchmark problem was established for the design of the hot steam header of the steam generator of an HTGR for electricity generation. This report presents the results of that effort. The publication also contains 5 reports presented by the participants. A separate abstract was prepared for each of these reports. Refs, figs and tabs

  11. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code''. A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.
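
    The decoding principle described above (a code with a sharply peaked autocorrelation) can be demonstrated numerically. The toy below assumes a periodic (FFT-based) shadowgram model and a random binary code rather than the Fresnel zone pattern treated in the paper.

      # Coded-source imaging toy: encode by convolution, decode by correlation.
      import numpy as np

      rng = np.random.default_rng(1)
      code = rng.integers(0, 2, size=(16, 16))   # binary source modulation pattern
      obj = np.zeros((16, 16)); obj[8, 8] = 1.0  # point-like transmission object

      # Raw image: every open source point casts a shifted shadow of the object
      # (circular convolution stands in for the real shadowgram geometry).
      raw = np.real(np.fft.ifft2(np.fft.fft2(code) * np.fft.fft2(obj)))

      # Decoding: correlate the raw image with the code; the code's sharply
      # peaked autocorrelation concentrates the object back into focus.
      decoded = np.real(np.fft.ifft2(np.fft.fft2(raw) * np.conj(np.fft.fft2(code))))
      print(np.unravel_index(decoded.argmax(), decoded.shape))  # peak near (8, 8)

    With a Fresnel zone pattern, the same correlation step can be performed optically under coherent illumination, which is the holographic property the abstract highlights.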

  12. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  13. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  14. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.

  15. Fabrication of ion source components by electroforming

    International Nuclear Information System (INIS)

    Schechter, D.E.; Sluss, F.

    1983-01-01

    Several components of the Oak Ridge National Laboratory (ORNL)/Magnetic Fusion Test Facility (MFTF-B) ion source have been fabricated utilizing an electroforming process. A procedure has been developed for enclosing coolant passages in copper components by electrodepositing a thick (greater than or equal to 0.75 mm) layer of copper (electroforming) over the top of grooves machined into the copper component base. Details of the procedure to fabricate acceleration grids and other ion source components are presented.

  16. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.

  17. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers a complete and self-contained set of knowledge about dependent source separation, including the latest development in this field. The book gives an overview on blind source separation where three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  18. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem.

  19. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
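
    A worked miniature of syndrome-source-coding, using the (7,4) Hamming code as an illustrative choice (not necessarily a code studied in the paper): a sparse 7-bit source block, treated as an error pattern, is compressed to its 3-bit syndrome and recovered as the minimum-weight member of the corresponding coset.

      # Syndrome-source-coding toy with Hamming(7,4): 7 bits -> 3 bits.
      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix of Hamming(7,4)
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])

      def compress(x):
          return H @ x % 2                    # the syndrome is the compressed data

      def decompress(s):
          # Minimum-weight coset member: exact for source blocks of weight <= 1.
          best = None
          for v in range(2 ** 7):
              x = np.array([(v >> i) & 1 for i in range(7)])
              if np.array_equal(H @ x % 2, s) and (best is None or x.sum() < best.sum()):
                  best = x
          return best

      x = np.array([0, 0, 0, 0, 1, 0, 0])     # sparse source block
      print(decompress(compress(x)))          # lossless for weight <= 1 blocks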

  20. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    ... quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding and robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient ...

  1. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sampling code of the J Monte Carlo Transport (JMCT) code, and a source particle sampling code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in x-y-z geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validations on the PSC models of the Qinshan No. 1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors are performed. Numerical results show that the theoretical model and the codes are both correct.
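
    The CDF build-and-sample step described here is generic Monte Carlo machinery. The following is a hedged sketch of inverse-transform sampling over source cells; the cell powers are invented and the JMCT/JSNT interfaces are not reproduced.

      # Build a CDF from a power distribution and sample source cells from it.
      import numpy as np

      power = np.array([1.0, 3.0, 2.0, 4.0])        # relative power per source cell
      cdf = np.cumsum(power) / power.sum()          # cumulative distribution function

      def sample_cell(rng):
          return int(np.searchsorted(cdf, rng.random()))   # invert the CDF

      rng = np.random.default_rng(2)
      counts = np.bincount([sample_cell(rng) for _ in range(10000)], minlength=4)
      print(counts / 10000)                          # approaches power / power.sum()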

  2. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  3. Design Procedure of Graphite Components by ASME HTR Codes

    International Nuclear Information System (INIS)

    Kang, Ji-Ho; Jo, Chang Keun

    2016-01-01

    In this study, the ASME B and PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials were noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. The unique additional classification, SRC, is allotted to the graphite components, and full 3-D FEM or an equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. A fatigue damage rule has not yet been established.

  4. Design Procedure of Graphite Components by ASME HTR Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ji-Ho; Jo, Chang Keun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    In this study, the ASME B and PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials were noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. The unique additional classification, SRC, is allotted to the graphite components, and full 3-D FEM or an equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. A fatigue damage rule has not yet been established.

  5. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    [Figures in the original report show an example code sample and its corresponding abstract syntax tree, reproduced from the de-anonymizing programmers paper (Caliskan-Islam et al. 2015).] Just as a person can be identified via their handwriting or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even ...
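
    As a minimal stand-in for the pipeline this record describes, the sketch below classifies authors from character n-gram features with a random forest, instead of the abstract-syntax-tree features the report builds on; the snippets and author labels are hypothetical.

      # Toy code stylometry: character n-gram features + random forest.
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.feature_extraction.text import TfidfVectorizer

      samples = ["for i in range(10): print(i)",
                 "i = 0\nwhile i < 10:\n    print(i); i += 1"]
      authors = ["alice", "bob"]                    # hypothetical labels

      vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
      X = vec.fit_transform(samples)                # style-bearing n-gram features
      clf = RandomForestClassifier(random_state=0).fit(X, authors)
      print(clf.predict(vec.transform(["for j in range(5): print(j)"])))  # likely 'alice'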

  6. ASME code and ratcheting in piping components. Final technical report

    International Nuclear Information System (INIS)

    Hassan, T.; Matzen, V.C.

    1999-01-01

    The main objective of this research is to develop an analysis program which can accurately simulate ratcheting in piping components subjected to seismic or other cyclic loads. Ratcheting is defined as the accumulation of deformation in structures and materials with cycles. This phenomenon has been demonstrated to cause failure of piping components (known as ratcheting-fatigue failure) and is yet to be understood clearly. The design and analysis methods in the ASME Boiler and Pressure Vessel Code for ratcheting of piping components are not well accepted by the practicing engineering community. This research project attempts to understand the ratcheting-fatigue failure mechanisms and improve analysis methods for ratcheting predictions. In the first step, a state-of-the-art testing facility is developed for quasi-static cyclic and seismic testing of straight and elbow piping components. A systematic testing program to study ratcheting is developed. Some tests have already been performed and the rest will be completed by summer '99. Significant progress has been made in the area of constitutive modeling. A number of sophisticated constitutive models have been evaluated in terms of their simulations for a broad class of ratcheting responses. From the knowledge gained from this evaluation study, two improved models are developed. These models are demonstrated to have promise in simulating ratcheting responses in piping components. Hence, implementation of these improved models in the widely used finite element programs ANSYS and/or ABAQUS is in progress. Upon achieving improved finite element programs for simulation of ratcheting, the ASME Code provisions for ratcheting of piping components will be reviewed and more rational methods will be suggested. Also, simplified analysis methods will be developed for operability studies of piping components and systems. Some of the future work will be performed under the auspices of the Center for Nuclear Power Plant Structures

  7. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  8. Nuclear component design ontology building based on ASME codes

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    The adoption of ontology analysis in the study of concept knowledge acquisition and representation for the nuclear component design process based on computer-supported cooperative work (CSCW) makes it possible to share and reuse the numerous concept knowledge of multi-disciplinary domains. A practical ontology building method is accordingly proposed based on the Protégé knowledge model, in combination with both top-down and bottom-up approaches together with Formal Concept Analysis (FCA). FCA exhibits its advantages in the way it helps establish and improve the taxonomic hierarchy of concepts and resolve concept conflicts occurring when modeling multi-disciplinary domains. With Protégé-3.0 as the ontology building tool, a nuclear component design ontology based on ASME codes is developed by utilizing the ontology building method. The ontology serves as the basis for realizing concept knowledge sharing and reuse in nuclear component design. (authors)

  9. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  10. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  11. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  12. Decision criteria for software component sourcing: steps towards a framework

    NARCIS (Netherlands)

    Kusters, R.J.; Pouwelse, L.; Martin, H.; Trienekens, J.J.M.; Hammoudi, Sl.; Maciaszek, L.; Missikoff, M.M.; Camp, O.; Cordeiro, J.

    2016-01-01

    Software developing organizations nowadays have a wide choice when it comes to sourcing software components. This choice ranges from developing or adapting in-house developed components via buying closed source components to utilizing open source components. This study seeks to determine criteria

  13. Investigation of genes coding for inflammatory components in Parkinson's disease.

    Science.gov (United States)

    Håkansson, Anna; Westberg, Lars; Nilsson, Staffan; Buervenich, Silvia; Carmine, Andrea; Holmberg, Björn; Sydow, Olof; Olson, Lars; Johnels, Bo; Eriksson, Elias; Nissbrandt, Hans

    2005-05-01

    Several findings obtained recently indicate that inflammation may contribute to the pathogenesis in Parkinson's disease (PD). Genetic variants of genes coding for components involved in immune reactions in the brain might therefore influence the risk of developing PD or the age of disease onset. Five single nucleotide polymorphisms (SNPs) in the genes coding for interferon-gamma (IFN-gamma; T874A in intron 1), interferon-gamma receptor 2 (IFN-gamma R2; Gln64Arg), interleukin-10 (IL-10; G1082A in the promoter region), platelet-activating factor acetylhydrolase (PAF-AH; Val379Ala), and intercellular adhesion molecule 1 (ICAM-1; Lys469Glu) were genotyped, using pyrosequencing, in 265 patients with PD and 308 controls. None of the investigated SNPs was found to be associated with PD; however, the G1082A polymorphism in the IL-10 gene promoter was found to be related to the age of disease onset. Linear regression showed a significantly earlier onset with more A-alleles (P = 0.0095; after Bonferroni correction, P = 0.048), resulting in a 5-year delayed age of onset of the disease for individuals having two G-alleles compared with individuals having two A-alleles. The results indicate that the IL-10 G1082A SNP could possibly be related to the age of onset of PD. Copyright 2005 Movement Disorder Society.

  14. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  15. Visualizing Debugging Activity in Source Code Repositories

    OpenAIRE

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  16. Visualizing Debugging Activity in Source Code Repositories

    NARCIS (Netherlands)

    Voinea, Lucian; Telea, Alexandru

    2007-01-01

    We present the use of the CVSgrab visualization tool for understanding the debugging activity in the Mozilla project. We show how to display the distribution of different bug types over the project structure, locate project components which undergo heavy debugging activity, and get insight in the bug evolution in time.

  17. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  18. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  19. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  20. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
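
    For orientation, the textbook Blahut-Arimoto iteration for a plain rate-distortion function is short enough to show in full. The paper's algorithm extends this with action costs and decoder side information, which are omitted in this sketch.

      # Blahut-Arimoto iteration for R(D) of a binary source, Hamming distortion.
      import numpy as np

      p_x = np.array([0.5, 0.5])                    # source distribution
      D = np.array([[0.0, 1.0], [1.0, 0.0]])        # Hamming distortion matrix
      beta = 2.0                                    # Lagrange multiplier (slope of R(D))

      q = np.full(2, 0.5)                           # output marginal, initial guess
      for _ in range(200):
          w = q[None, :] * np.exp(-beta * D)        # unnormalized p(x_hat | x)
          w /= w.sum(axis=1, keepdims=True)         # optimal conditional given q
          q = p_x @ w                               # re-estimate output marginal

      rate = np.sum(p_x[:, None] * w * np.log2(w / q[None, :]))   # mutual information
      dist = np.sum(p_x[:, None] * w * D)                         # expected distortion
      print(rate, dist)                             # one point on the R(D) curve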

  1. Beamline standard component designs for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Barraza, J.; Brite, C.; Chang, J.; Sanchez, T.; Tcheskidov, V.; Kuzay, T.M.

    1994-01-01

    The Advanced Photon Source (APS) has initiated a design standardization and modularization activity for the APS synchrotron radiation beamline components. These standard components are included in a components library, a sub-components library, and an experimental station library. This paper briefly describes these standard components using both technical specifications and side view drawings.

  2. A Code Generator for Software Component Services in Smart Devices

    OpenAIRE

    Ahmad, Manzoor

    2010-01-01

    A component is built to be reused, and reusability has a significant impact on component generality and flexibility requirements. A component model plays a critical role in the reusability of software components and defines a set of standards for component implementation, evolution, composition, deployment, and standardization of the run-time environment for execution of components. In component based development (CBD), standardization of the runtime environment includes specification of component's int...

  3. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computational ... scale-invariant feature transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources.

  4. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps around 50 µm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
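
    Stripped of the source and mask physics, the reconstruction step in this record is a linear least-squares solve. The sketch below folds the forward model (mask, source distribution, geometry) into a single toy matrix A, which is purely an assumption for illustration.

      # Model-based least-squares reconstruction: minimize ||A x - b||_2.
      import numpy as np
      from scipy.sparse.linalg import lsqr

      rng = np.random.default_rng(3)
      A = rng.random((40, 25))                  # toy system matrix: object -> detector
      x_true = np.zeros(25); x_true[12] = 1.0   # point-like object
      b = A @ x_true + 0.01 * rng.standard_normal(40)   # noisy measurements

      x_hat = lsqr(A, b)[0]                     # iterative least-squares solve
      print(np.argmax(x_hat))                   # strongest voxel, expected near 12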

  5. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  6. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  7. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  8. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java's APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  9. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones ...

  10. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…
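
    A common baseline for this task, shown only as a point of reference rather than the algorithm the paper proposes, is k-gram fingerprinting with a Jaccard overlap score.

      # k-gram fingerprinting: renamed variables still leave shared fingerprints.
      def fingerprints(code, k=5):
          tokens = code.split()                 # crude normalization
          grams = [" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1)]
          return {hash(g) for g in grams}

      def similarity(a, b, k=5):
          fa, fb = fingerprints(a, k), fingerprints(b, k)
          return len(fa & fb) / max(1, len(fa | fb))   # Jaccard overlap in [0, 1]

      original = "int main() { int x = 0; for (int i = 0; i < 10; i++) x += i; return x; }"
      copied   = "int main() { int y = 0; for (int i = 0; i < 10; i++) y += i; return y; }"
      print(similarity(original, copied))       # substantial overlap despite renaming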

  11. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    [Report excerpt: fragments of an RPM .spec file for building a package from a source code repository, including %pre/%prep/%setup/%build sections with steps such as ./autogen.sh; ./configure --with-db=/apps/db --with-libpq=/apps/postgres; make; rm -rf $RPM_BUILD_ROOT; umask 0077; mkdir -p $RPM_BUILD_ROOT/usr/local/bin.]

  12. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    ... results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design ...

  13. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.

  14. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.

  15. Welding of iridium heat source capsule components

    International Nuclear Information System (INIS)

    Mustaleski, T.M.; Yearwood, J.C.; Burgan, C.E.; Green, L.A.

    1991-01-01

    Interplanetary spacecraft have long used radioisotope thermoelectric generators (RTGs) to produce power for instrumentation. These RTGs produce electrical energy from the heat generated through the radioactive decay of plutonium-238. The plutonium is present as a ceramic pellet of plutonium oxide. The pellet is encapsulated in a containment shell of iridium. Iridium is the material of choice for these capsules because of its compatibility with the plutonium dioxide. The high-energy beam welding (electron beam and laser) processes used in the fabrication of the capsules have not been published. These welding procedures were originally developed at the Mound Laboratories and have been adapted for use at the Oak Ridge Y-12 Plant. The work involves joining of thin material in small sizes to exacting tolerances. There are four different electron beam welds on each capsule, with one procedure being used in three locations. There is also a laser weld used to seal the edges of a sintered frit assembly. An additional electron beam weld is also performed to seal each of the iridium blanks in a stainless steel waster sheet prior to forming. In the transfer of these welding procedures from one facility to another, a number of modifications were necessary. These modifications are discussed in detail, as well as the inherent problems in making welds in material which is only 0.005 in. thick. In summary, the paper discusses the welding of thin components of iridium using the high energy beam processes. While the peculiarities of iridium are pertinent to the discussion, much of the information is of general interest to the users of these processes. This is especially true of applications involving thin materials and high-precision assemblies.

  16. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  17. NGNP Component Test Capability Design Code of Record

    Energy Technology Data Exchange (ETDEWEB)

    S.L. Austad; D.S. Ferguson; L.E. Guillen; C.W. McKnight; P.J. Petersen

    2009-09-01

    The Next Generation Nuclear Plant Project is conducting a trade study to select a preferred approach for establishing a capability whereby NGNP technology development testing—through large-scale, integrated tests—can be performed for critical HTGR structures, systems, and components (SSCs). The mission of this capability includes enabling the validation of interfaces, interactions, and performance for critical systems and components prior to installation in the NGNP prototype.

  18. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  19. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft. The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
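
    The kind of static count described can be approximated in a few lines; the sketch below (illustrative, not the study's tooling) tallies a handful of the unsafe calls across a C/C++ source tree with a regular expression, without the comment- and string-aware parsing a production analyzer would use:

        import re
        from collections import Counter
        from pathlib import Path

        UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "memcpy", "strcmp", "strlen")
        CALL = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

        def scan(root):
            """Count occurrences of known unsafe calls in a C/C++ source tree."""
            counts = Counter()
            for path in Path(root).rglob("*"):
                if not path.is_file() or path.suffix not in {".c", ".cc", ".cpp", ".h", ".hpp"}:
                    continue
                text = path.read_text(errors="ignore")
                counts.update(m.group(1) for m in CALL.finditer(text))
            return counts

        # e.g. scan("path/to/project").most_common() -> [("memcpy", 123), ("strlen", 98), ...]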

  20. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs
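
    The conservation-law checks described lend themselves to simple automation. The sketch below is a generic illustration of the idea (not part of the STCP effort; the data layout is assumed): it verifies that the mass summed over compartments plus the cumulative release stays constant over the time steps of a code's output:

        def mass_conserved(inventories, released, rtol=1e-6):
            """inventories: per-timestep dicts {compartment: mass};
            released: per-timestep cumulative mass released to the environment."""
            total0 = sum(inventories[0].values()) + released[0]
            return all(
                abs(sum(inv.values()) + rel - total0) <= rtol * abs(total0)
                for inv, rel in zip(inventories, released))

        steps = [{"core": 100.0, "containment": 0.0},
                 {"core": 60.0, "containment": 39.0}]
        print(mass_conserved(steps, released=[0.0, 1.0]))   # True: 100 = 60 + 39 + 1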

  1. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
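
    Tangent's generated code is not reproduced here, but the core idea of AD by source code transformation can be sketched with Python's ast module (requires Python 3.9+ for ast.unparse; the rules below are a toy subset): parse an expression, transform the tree with symbolic differentiation rules, and unparse new source:

        import ast

        def diff(node, var):
            """Derivative of an expression AST with respect to `var` (toy rules only)."""
            if isinstance(node, ast.Constant):
                return ast.Constant(0.0)
            if isinstance(node, ast.Name):
                return ast.Constant(1.0 if node.id == var else 0.0)
            if isinstance(node, ast.BinOp):
                dl, dr = diff(node.left, var), diff(node.right, var)
                if isinstance(node.op, (ast.Add, ast.Sub)):
                    return ast.BinOp(dl, node.op, dr)
                if isinstance(node.op, ast.Mult):          # product rule
                    return ast.BinOp(ast.BinOp(dl, ast.Mult(), node.right),
                                     ast.Add(),
                                     ast.BinOp(node.left, ast.Mult(), dr))
                if isinstance(node.op, ast.Pow) and isinstance(node.right, ast.Constant):
                    n = node.right.value                   # d/dx u**n = n * u**(n-1) * u'
                    inner = ast.BinOp(ast.Constant(float(n)), ast.Mult(),
                                      ast.BinOp(node.left, ast.Pow(), ast.Constant(n - 1)))
                    return ast.BinOp(inner, ast.Mult(), dl)
            raise NotImplementedError(ast.dump(node))

        def derivative_source(expr, var="x"):
            tree = ast.parse(expr, mode="eval")
            dbody = ast.fix_missing_locations(diff(tree.body, var))
            return ast.unparse(dbody)

        print(derivative_source("x**2 + 3*x"))   # roughly: 2.0 * x ** 1 * 1.0 + (0.0 * x + 3 * 1.0)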

  2. Structured automated code checking through structural components and systems engineering

    NARCIS (Netherlands)

    Coenders, J.L.; Rolvink, A.

    2014-01-01

    This paper presents a proposal to employ the design computing methodology proposed as StructuralComponents (Rolvink et al [6] and van de Weerd et al [7]) as a method to perform a digital verification process to fulfil the requirements related to structural design and engineering as part of a

  3. Status of design code work for metallic high temperature components

    International Nuclear Information System (INIS)

    Bieniussa, K.; Seehafer, H.J.; Over, H.H.; Hughes, P.

    1984-01-01

    The mechanical components of high temperature gas-cooled reactors (HTGRs) are exposed to temperatures up to about 1000 deg. C, in a more or less corrosive gas environment. Under these conditions metallic structural materials show time-dependent structural behavior; furthermore, changes in the structure of the material and loss of material at the surface can result. The structural material of the components is stressed by load-controlled quantities, for example pressure or dead weight, and/or deformation-controlled quantities, for example thermal expansion or temperature distribution, and thus it can suffer growing permanent strains and deformations and an exhaustion of the material (damage), both followed by failure. To avoid a failure of the components, the design requires consideration of the following structural failure modes: ductile rupture due to short-term loadings; creep rupture due to long-term loadings; creep-fatigue failure due to cyclic loadings; excessive strains due to incremental deformation or creep ratcheting; loss of function due to excessive deformations; loss of stability due to short-term loadings; loss of stability due to long-term loadings; environmentally caused material failure (excessive corrosion); fast fracture due to unstable crack growth

  4. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources, X and Y. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by the source X over an additive white Gaussian noise (AWGN) channel when the output of the other source Y is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the source X. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  5. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. Aiming at these disadvantages, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of the source signals. Based on this theoretical link, the source-signal separation and reconstruction problem is then changed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both waveform and amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only waveform information can be separated and reconstructed when the linear mixing matrix is column-orthogonal but not normalized; and an uncorrelated source signal cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal.
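
    The column-orthogonal, normalized case is easy to reproduce (a toy numpy sketch under the paper's assumptions, not the authors' code): two uncorrelated waveforms with distinct variances are mixed by a rotation matrix and recovered, up to sign and ordering, from the eigenvectors of the observation covariance:

        import numpy as np

        t = np.linspace(0, 1, 4000)
        s1 = np.sin(2 * np.pi * 5 * t)             # source 1: sine
        s2 = 2 * ((7 * t) % 1.0) - 1               # source 2: sawtooth, ~uncorrelated with s1
        S = np.vstack([s1, s2])

        th = 0.7                                   # column-orthogonal, normalized mixing matrix
        A = np.array([[np.cos(th), -np.sin(th)],
                      [np.sin(th),  np.cos(th)]])
        X = A @ S                                  # observed signals

        Xc = X - X.mean(axis=1, keepdims=True)
        _, V = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
        S_hat = V.T @ Xc                           # principal components = sources up to sign/order
        corr = np.corrcoef(np.vstack([S, S_hat]))[:2, 2:]
        print(np.round(np.abs(corr), 3))           # near-permutation-identity correlations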

  6. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  7. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed-form approximation to the distribution of errors in each step of the decoding of finite-length codes.

  8. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic health physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document; this document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  9. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials released from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but as it is written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-850 version of STCP contains five codes: MARCH3, TRAPMELT, TCCA, VANESA, and NAVA. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)

  10. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, based on the following methods: (1) the method of calculating f₁(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code are given, including alpha particles emitted from ²³⁹Pu in the alveolar lung tissues and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
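
    Method (3) can be sketched as follows (a schematic numpy example with an arbitrary single-event spectrum, not the MICRODOSE 1 source): the multi-event distribution has characteristic function exp(λ(φ₁ − 1)), where φ₁ is the characteristic function of the single-event distribution f₁(z) and λ is the mean number of events:

        import numpy as np

        z = np.linspace(0.0, 40.0, 4096)           # specific energy grid (arbitrary units)
        dz = z[1] - z[0]
        f1 = np.exp(-((z - 2.0) ** 2) / 0.5)       # toy single-event spectrum f1(z)
        p1 = f1 / f1.sum()                         # discretized to a probability vector

        lam = 3.0                                  # mean number of events (Poisson)
        phi1 = np.fft.rfft(p1)                     # characteristic function of one event
        phi = np.exp(lam * (phi1 - 1.0))           # compound Poisson: exp(lam * (phi1 - 1))
        p = np.fft.irfft(phi, n=z.size)            # multi-event distribution; the atom at
        f = p / dz                                 #  z = 0 (weight exp(-lam)) lands in bin 0
        print(p.sum(), (z * p).sum(), lam * (z * p1).sum())   # 1.0, and the two means agree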

  11. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  12. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment model approach, has been developed to calculate the near-field source term of a High-Level-Waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered-Barrier-System (EBS) into the host rock

  13. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive, alternative. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian.

  14. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion, or at the lowest possible distortion given a specified bit rate. Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel, given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...
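
    As a concrete instance of lossy coding of an AR source (an illustrative sketch only; the thesis's schemes are more elaborate and account for channel loss), the snippet below applies predictive, DPCM-style scalar quantization to an AR(1) process: at the same quantizer step the prediction residual has much smaller variance than the signal itself, so fewer quantizer levels, and hence fewer bits, suffice:

        import numpy as np

        rng = np.random.default_rng(0)
        a, n = 0.9, 20_000
        w = rng.normal(size=n)
        x = np.zeros(n)
        for i in range(1, n):                      # AR(1) source: x[i] = a*x[i-1] + w[i]
            x[i] = a * x[i - 1] + w[i]

        step = 0.5
        q = lambda v: step * np.round(v / step)    # uniform scalar quantizer

        x_hat = np.zeros(n)                        # closed-loop DPCM: predict from the
        for i in range(n):                         # decoder's own reconstruction
            pred = a * x_hat[i - 1] if i else 0.0
            x_hat[i] = pred + q(x[i] - pred)       # only the quantized residual is sent

        residual = x - a * np.r_[0.0, x_hat[:-1]]
        print(np.var(x), np.var(residual))         # ~5.3 vs ~1.0: far fewer levels needed
        print(np.mean((x - x_hat) ** 2))           # distortion ~ step**2 / 12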

  15. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
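
    The paired structural metric described, tokenising submissions and then searching pairs for long common substrings, can be illustrated in miniature (a toy for Python submissions; engines such as MOSS and JPlag use language-aware tokenisers and more robust matching):

        import io
        import tokenize
        from difflib import SequenceMatcher

        SKIP = {tokenize.NEWLINE, tokenize.NL, tokenize.INDENT, tokenize.DEDENT,
                tokenize.COMMENT, tokenize.ENDMARKER}

        def tokens(src):
            """Token-type stream; identifiers and literals collapse to one symbol
            each, so renaming variables does not change the stream."""
            return [tok.string if tok.type == tokenize.OP else tokenize.tok_name[tok.type]
                    for tok in tokenize.generate_tokens(io.StringIO(src).readline)
                    if tok.type not in SKIP]

        def longest_common_run(src_a, src_b):
            a, b = tokens(src_a), tokens(src_b)
            m = SequenceMatcher(None, a, b, autojunk=False)
            return m.find_longest_match(0, len(a), 0, len(b)).size

        original = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s\n"
        renamed  = "def sum_up(vals):\n    acc = 0\n    for v in vals:\n        acc += v\n    return acc\n"
        print(longest_common_run(original, renamed))   # long common run despite renaming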

  16. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without much effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
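
    The first step of such a pipeline, turning an AST into facts that rules can query, is language-agnostic. The sketch below reproduces it for Python sources with the ast module (an analogy to the paper's C++-to-Prolog pipeline, not its actual tooling), emitting calls(Caller, Callee)-style facts from which a call graph follows:

        import ast

        def call_facts(source):
            """Extract ('calls', caller, callee) facts from Python source."""
            facts = []
            for fn in ast.walk(ast.parse(source)):
                if not isinstance(fn, ast.FunctionDef):
                    continue
                for node in ast.walk(fn):
                    if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
                        facts.append(("calls", fn.name, node.func.id))
            return facts

        src = (
            "def acquire():\n    lock()\n"
            "def work():\n    acquire()\n    compute()\n    release()\n"
        )
        print(call_facts(src))
        # [('calls', 'acquire', 'lock'), ('calls', 'work', 'acquire'), ...]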

  17. Challenges of the Open Source Component Marketplace in the Industry

    Science.gov (United States)

    Ayala, Claudia; Hauge, Øyvind; Conradi, Reidar; Franch, Xavier; Li, Jingyue; Velle, Ketil Sandanger

    The reuse of Open Source Software (OSS) components available on the Internet is playing a major role in the development of Component Based Software Systems. Nevertheless, the special nature of the OSS marketplace has taken the “classical” concept of software reuse based on centralized repositories to a completely different arena based on massive reuse over the Internet. In this paper we provide an overview of the actual state of the OSS marketplace, and report preliminary findings about how companies interact with this marketplace to reuse OSS components. Such data was gathered from interviews in software companies in Spain and Norway. Based on these results we identify some challenges aimed at improving the industrial reuse of OSS components.

  18. HTGR nuclear heat source component design and experience

    International Nuclear Information System (INIS)

    Peinado, C.O.; Wunderlich, R.G.; Simon, W.A.

    1982-05-01

    The high-temperature gas-cooled reactor (HTGR) nuclear heat source components have been under design and development since the mid-1950's. Two power plants have been designed, constructed, and operated: the Peach Bottom Atomic Power Station and the Fort St. Vrain Nuclear Generating Station. Recently, development has focused on the primary system components for a 2240-MW(t) steam cycle HTGR capable of generating about 900 MW(e) electric power or alternately producing high-grade steam and cogenerating electric power. These components include the steam generators, core auxiliary heat exchangers, primary and auxiliary circulators, reactor internals, and thermal barrier system. A discussion of the design and operating experience of these components is included

  19. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on sources or how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component (NCA) framework and referred to the approach as the c-NCA approach. We also presented the c-NCA algorithm that uses signal-dependent semidefinite operators, which is a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence, obtained by applying the estimated operators on corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus, it can take advantage of solvers developed in optimization society for solving the BSS problem. As examples, we demonstrated electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  20. Development of the Sixty Watt Heat-Source hardware components

    International Nuclear Information System (INIS)

    McNeil, D.C.; Wyder, W.C.

    1995-01-01

    The Sixty Watt Heat Source is a nonvented heat source designed to provide 60 thermal watts of power. The unit incorporates a plutonium-238 fuel pellet encapsulated in a hot isostatically pressed General Purpose Heat Source (GPHS) iridium clad vent set. A molybdenum liner sleeve and support components isolate the fueled iridium clad from the T-111 strength member. This strength member serves as the pressure vessel and fulfills the impact and hydrostatic strength requirements. The shell is manufactured from Hastelloy S, which prevents the internal components from being oxidized. Conventional drawing operations were used to simplify processing and utilize existing equipment. The deep drawing requirements for the molybdenum, T-111, and Hastelloy S were developed from past heat source hardware fabrication experience. This resulted in multiple-step drawing processes with intermediate heat treatments between forming steps. The molybdenum processing included warm forming operations. This paper describes the fabrication of these components and the multiple-draw tooling developed to produce hardware to the desired specifications. copyright 1995 American Institute of Physics

  1. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  2. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can reconstruct efficiently the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources but it does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
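
    A minimal version of the spatial ICA step can be sketched with scikit-learn (synthetic data standing in for reconstructed CSD; the grid size and component count are illustrative): treating each spatial point as an observation makes the recovered components spatial maps, with their time courses in the mixing matrix:

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_points, n_times = 4 * 5 * 7, 300            # flattened grid x time samples
        map1, map2 = rng.laplace(size=(2, n_points))  # two non-Gaussian spatial sources
        t = np.arange(n_times)
        tc1, tc2 = np.sin(t / 10.0), np.cos(t / 23.0)

        csd = np.outer(tc1, map1) + np.outer(tc2, map2)   # time x space
        csd += 0.05 * rng.normal(size=csd.shape)          # measurement noise

        ica = FastICA(n_components=2, random_state=0)
        maps = ica.fit_transform(csd.T)               # spatial ICA: space as samples
        courses = ica.mixing_                         # time courses of each component
        print(maps.shape, courses.shape)              # (140, 2) (300, 2)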

  3. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  4. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  5. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  6. Development and application of computer codes for multidimensional thermalhydraulic analyses of nuclear reactor components

    International Nuclear Information System (INIS)

    Carver, M.B.

    1983-01-01

    Components of reactor systems and related equipment are identified in which multidimensional computational thermal hydraulics can be used to advantage to assess and improve design. Models of single- and two-phase flow are reviewed, and the governing equations for multidimensional analysis are discussed. Suitable computational algorithms are introduced, and sample results from the application of particular multidimensional computer codes are given

  7. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal's 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
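
    A simplified version of the link test can be written with the standard library alone (a sketch, not the authors' scripts; real checks need retries, redirect handling, and polite rate limiting):

        import re
        import urllib.request

        URL = re.compile(r"https?://[^\s\"')>\]]+")

        def check_links(text, timeout=10):
            """Extract URLs from text and probe each with an HTTP HEAD request."""
            status = {}
            for url in sorted(set(URL.findall(text))):
                req = urllib.request.Request(url, method="HEAD",
                                             headers={"User-Agent": "link-check/0.1"})
                try:
                    with urllib.request.urlopen(req, timeout=timeout) as resp:
                        status[url] = resp.status
                except Exception as exc:             # DNS failure, 404, timeout, ...
                    status[url] = repr(exc)
            return status

        # e.g. check_links(open("article.txt").read())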

  8. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  9. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
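
    The two measurements reported, volume-weighted coverage and the skew of the prescription distribution, reduce to a few lines given a set of DKB-mapped product codes and per-code prescription volumes (the function names and sample codes below are illustrative):

        from collections import Counter

        def coverage(volume, dkb_codes):
            """Share of prescription volume whose product code a DKB can map."""
            total = sum(volume.values())
            return sum(c for ndc, c in volume.items() if ndc in dkb_codes) / total

        def codes_for_share(volume, share):
            """How many of the most-prescribed codes account for `share` of volume."""
            total, running = sum(volume.values()), 0
            for rank, (_, c) in enumerate(volume.most_common(), start=1):
                running += c
                if running >= share * total:
                    return rank

        volume = Counter({"00093-7214": 500, "00378-1805": 300, "99999-0001": 5})
        dkb = {"00093-7214", "00378-1805"}
        print(coverage(volume, dkb))          # ~0.994
        print(codes_for_share(volume, 0.5))   # 1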

  10. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.

  11. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated with the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  12. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  13. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    International Nuclear Information System (INIS)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-01

    A component-scale thermal-hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, and containment. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes were examined, including the SMAC, semi-implicit ICE, SIMPLE, and Row schemes. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code utilizes unstructured meshes for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied to RELAP5 and COBRA-TF was therefore modified for application to unstructured meshes. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed strong coupling between the pressure and void-fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor

  14. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-15

    A component-scale thermal-hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, and containment. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes were examined, including the SMAC, semi-implicit ICE, SIMPLE, and Row schemes. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code utilizes unstructured meshes for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied to RELAP5 and COBRA-TF was therefore modified for application to unstructured meshes. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed strong coupling between the pressure and void-fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor.

  15. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…

  16. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  17. Dissolution And Analysis Of Yellowcake Components For Fingerprinting UOC Sources

    International Nuclear Information System (INIS)

    Hexel, Cole R.; Bostick, Debra A.; Kennedy, Angel K.; Begovich, John M.; Carter, Joel A.

    2012-01-01

    There are a number of chemical and physical parameters that might be used to help elucidate the ore body from which uranium ore concentrate (UOC) was derived. It is the variation in the concentration and isotopic composition of these components that can provide information as to the identity of the ore body from which the UOC was mined and the type of subsequent processing that has been undertaken. Oak Ridge National Laboratory (ORNL) in collaboration with Lawrence Livermore and Los Alamos National Laboratories is surveying ore characteristics of yellowcake samples from known geologic origin. The data sets are being incorporated into a national database to help in sourcing interdicted material, as well as aid in safeguards and nonproliferation activities. Geologic age and attributes from chemical processing are site-specific. Isotopic abundances of lead, neodymium, and strontium provide insight into the provenance of geologic location of ore material. Variations in lead isotopes are due to the radioactive decay of uranium in the ore. Likewise, neodymium isotopic abundances are skewed due to the radiogenic decay of samarium. Rubidium decay similarly alters the isotopic signature of strontium isotopic composition in ores. This paper will discuss the chemical processing of yellowcake performed at ORNL. Variations in lead, neodymium, and strontium isotopic abundances are being analyzed in UOC from two geologic sources. Chemical separation and instrumental protocols will be summarized. The data will be correlated with chemical signatures (such as elemental composition, uranium, carbon, and nitrogen isotopic content) to demonstrate the utility of principal component and cluster analyses to aid in the determination of UOC provenance.

  18. Comparison of US and European codes and regulations for the construction of LWR pressure components

    International Nuclear Information System (INIS)

    Maurer, H.A.

    1983-01-01

    The study was intended as a contribution to the stepwise harmonization of European regulations. The same safety-related principles are applied in Europe and in the US to assure the quality of all primary system components. Divergences exist primarily in the organization of quality assurance. US and European codes and regulations admit only approved materials for the fabrication of pressure components. The German and French requirements impose, however, more restrictive limits on trace elements which, during operation, may contribute to the embrittlement of the material. A further difference results from the considerably larger scope of materials examinations in European countries. A comparative list of the numbers of test specimens required under the different codes was prepared. Differences were also found for the hydrostatic test: in European countries the test pressures for primary system components vary from 1.1 to 2.0 times the design pressure, while in the US the test pressure of the components depends on the design pressure of the entire system, at 1.25 times the design pressure. (orig./HP)

  19. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first ...

  20. An explication of the Graphite Structural Design Code of core components for the High Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Iyoku, Tatsuo; Ishihara, Masahiro; Toyota, Junji; Shiozawa, Shusaku

    1991-05-01

    The integrity evaluation of the core graphite components for the High Temperature Engineering Test Reactor (HTTR) will be carried out based upon the Graphite Structural Design Code for core components. In applying this design code, it is necessary to clarify the basic concepts used to evaluate the integrity of the HTTR core components. Considering the detailed design of HTTR core graphite structures, such as the fuel graphite blocks, this report therefore explicates the design code in detail with respect to the concepts of stress and fatigue limits, the integrity evaluation method for oxidized graphite components, the thermal and irradiation stress analysis methods, etc. (author)

  1. A UML profile for code generation of component based distributed systems

    International Nuclear Information System (INIS)

    Chiozzi, G.; Karban, R.; Andolfato, L.; Tejeda, A.

    2012-01-01

    A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML (Unified Modelling Language) profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of wildly created stereotypes. The paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML meta-classes, testing the profile by creating modelling examples, and generating the code. This has allowed us to work on the modelling of the E-ELT (European Extremely Large Telescope) control system and instrumentation without knowing what infrastructure will finally be used

  2. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    Testing and calibration laboratories that comply with ISO/IEC 17025 demonstrate technical competence for the type of tests and calibrations SCALe undertakes [ISO/IEC 2005]. Successful conformance testing of a software system indicates that the SCALe analysis found no deviations from a CERT secure coding standard; conforming systems can be expected to be more secure than non-conforming systems, although no study has yet been performed to prove this. Assessment is performed in accordance with ISO/IEC 17000, which defines conformity assessment as a demonstration that specified requirements relating to a product, process, system, person, or body are fulfilled.

  3. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  4. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first section discusses the greater flexibility with regard to patenting and its relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  5. Structural integrity assessment of a pressure container component. Design and service code implementation. Case studies

    International Nuclear Information System (INIS)

    Sanzi, H.C.

    2006-01-01

    In the present work, the most important results for the local stresses in cracked pipes with an axial (outer) through-wall crack, produced during operation of a petrochemical plant, obtained using the finite element method, are presented. As requested, the component has been verified by 3D FE plastic analysis under the postulated failure loading, this method assuring a high degree of accuracy in the results. Design and service codes, such as ASME Section VIII Div. 2 and API 579, have been used in the analysis. (author) [es]

  6. 77 FR 6463 - Revisions to Labeling Requirements for Blood and Blood Components, Including Source Plasma...

    Science.gov (United States)

    2012-02-08

    ... Blood Components, Including Source Plasma; Correction AGENCY: Food and Drug Administration, HHS. ACTION..., Including Source Plasma,'' which provided incorrect publication information regarding a 60-day notice that...

  7. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.
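
    The problem the paper addresses can be made concrete with a toy model (a standard-library sketch; the key, packet sizes, and tagging scheme are assumptions, and the paper's three schemes address the problem differently): per-packet tags computed by the base station stop verifying once a hop re-packetizes the image to a new size:

        import hashlib
        import hmac

        KEY = b"network-key"                       # illustrative shared key

        def packets(image, size):
            """Split an image into fixed-size chunks, each with an HMAC tag."""
            chunks = [image[i:i + size] for i in range(0, len(image), size)]
            return [(c, hmac.new(KEY, c, hashlib.sha256).digest()) for c in chunks]

        image = bytes(range(256)) * 8              # stand-in for a code image
        pkts = packets(image, 64)                  # base station sends 64-byte packets
        print(all(hmac.compare_digest(t, hmac.new(KEY, c, hashlib.sha256).digest())
                  for c, t in pkts))               # True at the first hop

        # A next hop that re-chunks to 32 bytes finds none of the original tags apply.
        payload = b"".join(c for c, _ in pkts)
        old_tags = {t for _, t in pkts}
        rechunked = [payload[i:i + 32] for i in range(0, len(payload), 32)]
        print(any(hmac.new(KEY, c, hashlib.sha256).digest() in old_tags
                  for c in rechunked))             # False: tags do not survive re-packetization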

  8. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption. PMID:27409616
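
    The two records above state the problem but not the schemes' internals. As background, a common building block in secure code dissemination (used, for example, in Deluge-style protocols) is a hash chain over the packet stream, so that one digital signature over the first digest authenticates the entire image. The following sketch is illustrative only: the function names and the choice of SHA-256 are assumptions, and the construction breaks down in exactly the setting the paper addresses, since re-packetizing at an intermediate hop invalidates the per-packet digests.

```python
import hashlib

def build_hash_chain(packets):
    """Link packets back-to-front: each packet is extended with the digest
    of the (already extended) packet that follows it. Only the digest of
    the first extended packet then needs to be signed by the base station."""
    chained = []
    next_digest = b""  # the last packet carries no successor digest
    for payload in reversed(packets):
        extended = payload + next_digest
        chained.insert(0, extended)
        next_digest = hashlib.sha256(extended).digest()
    return chained, next_digest  # next_digest authenticates packet 0

def verify_stream(chained, signed_first_digest):
    """Check each packet against the digest carried by its predecessor,
    so a single signature authenticates the whole code image."""
    expected = signed_first_digest
    for extended in chained:
        if hashlib.sha256(extended).digest() != expected:
            return False
        expected = extended[-32:]  # successor digest appended by the sender
    return True

packets = [b"image-page-0", b"image-page-1", b"image-page-2"]
chained, first_digest = build_hash_chain(packets)
assert verify_stream(chained, first_digest)
```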

  9. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications. The open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, based on open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python, with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Thus, impressive search speed has been achieved on large data sets, since no external CPU-intensive languages are involved in the key steps of the searching. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
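
    The speed claim above rests on pushing searches down to the database engine as pure SQL. The snippet below is a minimal sketch of that idea for exact-structure search only; the table name, columns and use of sqlite3 are assumptions for illustration, since MyMolDB's actual schema and engine are not described in the record.

```python
import sqlite3

# Hypothetical schema: exact search reduces to an indexed string
# comparison on a pre-canonicalized SMILES column, so no per-row
# cheminformatics code runs inside the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE compounds (id INTEGER PRIMARY KEY, "
             "name TEXT, canonical_smiles TEXT)")
conn.execute("CREATE INDEX idx_smiles ON compounds (canonical_smiles)")
conn.executemany(
    "INSERT INTO compounds (name, canonical_smiles) VALUES (?, ?)",
    [("ethanol", "CCO"), ("benzene", "c1ccccc1")])

def exact_search(smiles):
    cur = conn.execute(
        "SELECT id, name FROM compounds WHERE canonical_smiles = ?",
        (smiles,))
    return cur.fetchall()

print(exact_search("CCO"))  # -> [(1, 'ethanol')]
```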

  10. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary “./configure” builds many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  11. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  12. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  13. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  14. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  15. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  16. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  17. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation of γ-rays in the source, media and detector; this concept underlies the semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) sources of γ-rays, and compared the results with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use the interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
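
    The conversion the record describes, from a measured point-source FE efficiency curve to a volume-source curve, is an efficiency transfer by the ratio of effective solid angles. A minimal sketch follows; every numerical value is invented for illustration, since the real ESA code obtains the effective solid angles by attenuation-weighted geometric integration over the source and detector.

```python
import numpy as np

# Assumed inputs (illustrative values only):
energies_keV     = np.array([200.0, 600.0, 1000.0, 1500.0])
eff_point        = np.array([1.2e-2, 5.1e-3, 3.4e-3, 2.5e-3])  # measured FE efficiency, point source
omega_eff_point  = np.array([0.110, 0.118, 0.121, 0.123])      # effective solid angle, point source
omega_eff_volume = np.array([0.082, 0.091, 0.095, 0.098])      # effective solid angle, volume source

# Efficiency transfer: eff_vol(E) = eff_point(E) * Omega_vol(E) / Omega_point(E)
eff_volume = eff_point * omega_eff_volume / omega_eff_point
for E, e in zip(energies_keV, eff_volume):
    print(f"{E:7.1f} keV : {e:.3e}")
```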

  18. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  19. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.)

  20. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  1. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provenly optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
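
    The binning mechanism is easy to see at toy scale. Below, the parity-check matrix of a (7,4) Hamming code acts as the syndrome former: the encoder transmits only the 3-bit syndrome of a 7-bit source block (compressing 7 bits to 3), and the decoder recovers the block from side information that differs in at most one position. This is only a sketch of the mechanism; the paper's contribution is constructing SF-ISF pairs for concatenated and turbo codes, which this toy does not attempt.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j, so a nonzero syndrome directly indexes the
# position of a single-bit error.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def syndrome_former(x):
    # SF: transmit only the 3-bit bin index (syndrome) of the block.
    return H @ x % 2

def decode_with_side_info(s_x, y):
    # Decoder: bin the side information y and use the syndrome
    # difference to estimate the low-weight error pattern e = x XOR y.
    s_e = (s_x + syndrome_former(y)) % 2
    x_hat = y.copy()
    pos = int("".join(map(str, s_e)), 2)  # 0 means x == y
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1  # correlated side information: one bit flipped
assert np.array_equal(decode_with_side_info(syndrome_former(x), y), x)
```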

  2. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
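
    The evaluation loop described above is simple to restate in code: enumerate word combinations of increasing size and issue each as a full-text query until the target code ranks first. The sketch below only generates the candidate queries; the engine call and rank check are omitted, and the sample diagnosis text is illustrative rather than quoted from ICD-10.

```python
from itertools import combinations

def candidate_queries(code_text):
    """Yield full-text queries built from word combinations, smallest
    combinations first, mirroring the search for the minimum number of
    words that makes the engine rank the desired code highest."""
    words = code_text.lower().split()
    for size in range(1, len(words) + 1):
        for combo in combinations(words, size):
            yield " AND ".join(combo)

for query in candidate_queries("predominantly allergic asthma"):
    print(query)  # "predominantly", ..., "predominantly AND allergic", ...
```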

  3. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  4. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  5. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms in the first seven years of sampling are presented and discussed. The application of lysimeter data to performance assessment source term models is presented. Initial results from the use of the data in two models are discussed.

  6. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport of the released radionuclides via the groundwater to the biosphere. The processes accounted for in quantifying release rates to a ground-water migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment.

  7. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  8. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, posts, etc., but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics, and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are produced by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.
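
    The record names the feature families but not the 19 individual metrics. The sketch below computes a handful of invented lexical and layout metrics of that general kind; the PSO-trained BP network that consumes such vectors is deliberately not reproduced.

```python
import re

def layout_and_lexical_metrics(source: str) -> list:
    """A few illustrative per-file metrics; the paper's actual 19
    dimensions are not listed in the abstract and will differ."""
    lines = source.splitlines() or [""]
    n = len(lines)
    return [
        sum(len(line) for line in lines) / n,                            # mean line length
        sum(1 for line in lines if not line.strip()) / n,                # blank-line ratio
        sum(1 for line in lines if line.lstrip().startswith("//")) / n,  # line-comment ratio
        sum(1 for line in lines if line.startswith("\t")) / n,           # tab-indentation ratio
        len(re.findall(r"\{\s*$", source, re.M)) / n,                    # brace-at-line-end ratio
    ]

java_snippet = "public class A {\n\tint x;\n\n\t// counter\n}\n"
print(layout_and_lexical_metrics(java_snippet))
```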

  9. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C

  10. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
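
    The dispersion model named in both ANEMOS records, a straight-line Gaussian plume with ground reflection, has a standard closed form. A minimal sketch, with illustrative numbers that are not ANEMOS defaults, and with the dispersion parameters supplied directly rather than derived from stability class and downwind distance as a full implementation would:

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Ground-reflected straight-line Gaussian plume concentration
    (Bq/m^3 for a release rate Q in Bq/s and wind speed u in m/s);
    sigma_y and sigma_z are evaluated at the receptor's downwind distance."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image-source term
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative only: 1 GBq/s from a 50 m stack, ground-level receptor
# on the plume centreline, 1 km downwind.
chi = gaussian_plume(Q=1.0e9, u=5.0, y=0.0, z=0.0, H=50.0,
                     sigma_y=80.0, sigma_z=40.0)
print(f"air concentration ~ {chi:.3e} Bq/m^3")
```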

  11. Pressure vessel design codes: A review of their applicability to HTGR components at temperatures above 800 deg C

    International Nuclear Information System (INIS)

    Hughes, P.T.; Over, H.H.; Bieniussa, K.

    1984-01-01

    The governments of the USA and the Federal Republic of Germany have approved cooperation between the two countries in an endeavour to establish a structural design code for gas reactor components intended to operate at temperatures exceeding 800 deg C. The basis of existing codes and their applicability to gas reactor component design are reviewed in this paper. This review has raised a number of important questions as to the direct applicability of the present codes. The status of US and FRG cooperative efforts to obtain answers to these questions is presented.

  12. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  13. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the neutron spectrum of all the neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  14. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  15. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  16. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M=2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  17. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  18. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  19. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will be helpful in promoting a safety culture as well as in acquiring public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is a part of the Computerized Advisory System for a Radiological Emergency (CARE) system in KINS, is used for prevention against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of the coupled interface system between the FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.

  20. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is given in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
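
    The optimization in this record, splitting a fixed bit budget between source and channel coding so that the expected distortion is minimized, can be illustrated with toy models: an exponential rate-distortion curve for the source and a decoding-failure probability that decays with channel redundancy. Both models and every constant below are invented for illustration and are not those of the paper.

```python
import math

def expected_distortion(r_source, r_channel, a=0.5):
    """Toy models: D(R) = 2**(-2R) for a unit-variance Gaussian source,
    and a decoding-failure probability exp(-a * r_channel). On failure
    the reconstruction is the mean, so the distortion is the variance."""
    p_fail = math.exp(-a * r_channel)
    d_ok = 2.0 ** (-2 * r_source)
    return p_fail * 1.0 + (1.0 - p_fail) * d_ok

def best_split(total_bits):
    # Exhaustive search over integer partitions of the bit budget.
    return min(((rs, total_bits - rs) for rs in range(total_bits + 1)),
               key=lambda split: expected_distortion(*split))

for budget in (4, 8, 16):
    rs, rc = best_split(budget)
    print(f"budget {budget:2d}: source {rs:2d}, channel {rc:2d}, "
          f"E[D] = {expected_distortion(rs, rc):.4f}")
```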

  1. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers to understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs....... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward- and reverse restructurings...... are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based of tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  2. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: Knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of workers and members of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during ¹⁸F production. The estimate of the source term and the corresponding radiation field was performed for the bombardment of an H₂¹⁸O target with protons of 75 μA current and 16.5 MeV energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of the physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  3. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  4. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult

  5. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
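
    The quantity both SFACTOR records describe follows the familiar MIRD-style sum over the decay radiations of the nuclide, S(T←S) = k · Σᵢ nᵢ Eᵢ φᵢ(T←S) / m_T. The sketch below evaluates that sum for a made-up two-radiation nuclide; the yields, energies, absorbed fractions, organ mass, and unit constant k are placeholders, not values from the report.

```python
def s_factor(radiations, target_mass_g, k=1.0):
    """Sum over decay radiations: radiations is a list of tuples
    (yield per decay, energy in MeV, absorbed fraction phi(T<-S)).
    k folds in the unit conversion (placeholder value here)."""
    return k * sum(n * E * phi for n, E, phi in radiations) / target_mass_g

radiations = [
    (0.90, 0.364, 0.30),  # gamma: largely escapes the source organ
    (0.89, 0.192, 1.00),  # beta (mean energy): absorbed locally
]
print(f"S = {s_factor(radiations, target_mass_g=20.0):.4e} (arbitrary units)")
```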

  6. Umami taste components and their sources in Asian foods.

    Science.gov (United States)

    Hajeb, P; Jinap, S

    2015-01-01

    Umami, the fifth basic taste, is the inimitable taste of Asian foods. Several traditional and locally prepared foods and condiments of Asia are rich in umami. In this part of the world, umami is found in fermented animal-based products such as fermented and dried seafood, and in plant-based products from beans and grains, dried and fresh mushrooms, and tea. In Southeast Asia, the most preferred seasonings containing umami are fish and seafood sauces, as well as soybean sauces. In the East Asian region, soybean sauces are the main source of umami substances in routine cooking. In Japan, the material used to obtain umami in dashi, the stock added to almost every Japanese soup and boiled dish, is konbu or dried bonito. This review introduces foods and seasonings containing naturally high amounts of umami substances, of both animal and plant sources, from different countries in Asia.

  7. Development of WEB Applications of The Component – Open Source

    Directory of Open Access Journals (Sweden)

    Arturo Sergio Medina Castillo

    2013-06-01

    Full Text Available Nowadays software development does not start from scratch; it already has a set of tools provided by frameworks, which enables faster application development, a relevant and indispensable factor for supporting continuous improvement processes that seek higher levels of competitiveness in this global society. In all respects the development of Web applications, whether open source or proprietary, is advancing rapidly, providing levels of communication, interoperability and access for internal and external customers that allow management to support different business processes.

  8. Heat source component development program, October 1977--February 1978

    International Nuclear Information System (INIS)

    1978-03-01

    The General Purpose Heat Source (GPHS) is being developed by Los Alamos Scientific Laboratory (LASL) for the Department of Energy (DOE) Division of Nuclear Research and Application (DNRA). The first mission scheduled for the GPHS is the NASA Out-of-Ecliptic Flight in January, 1983. During the current reporting period (October--December, 1977, January--February, 1978), activities in this task were conducted as follows: (1) documentation of results of the reentry thermal, ablation, and thermal stress analyses of the conceptual designs; (2) identification and completion of modifications to the thermal and ablation models used to determine the performance response of the heat source modules during reentry; (3) initiation of modifications to the thermal stress model used to determine the performance response of heat source modules during reentry; (4) completion and documentation of the surface chemistry experiments; (5) initiation and completion of activities in support of LASL to define test plans for the trial design phase of the GPHS development program; (6) participation in the GPHS design review meeting held at DOE/Germantown, Maryland, December 19--20, 1977; and (7) initiation of the thermal analysis of Trial Design 1.1

  9. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the point-wise four-energy-group neutron flux distributions in the MNSR versus the radius, angle and reactor axial directions. Good agreement is noticed between the measured and the calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  10. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations.

  11. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes into account aspects other than the raw similarity between programs. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) in the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
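
    The TF-IDF weighting step can be made concrete with Python's own parser standing in for a Java AST. The sketch below counts node types per submission and down-weights types occurring in every submission, which is the effect the paper wants for instructor-provided boilerplate; WASTK's exact weighting formula and its tree-kernel similarity are not reproduced here.

```python
import ast
import math
from collections import Counter

def node_type_counts(source):
    # AST node-type histogram for one submission (Python stands in
    # for Java purely for illustration).
    return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

def tfidf_weights(corpus_counts, doc_counts):
    """TF-IDF-style weights: node types present in every submission
    (e.g. instructor-supplied scaffolding) receive weight zero."""
    n_docs = len(corpus_counts)
    total = sum(doc_counts.values())
    weights = {}
    for node_type, count in doc_counts.items():
        df = sum(1 for c in corpus_counts if node_type in c)
        weights[node_type] = (count / total) * math.log(n_docs / df)
    return weights

corpus = [node_type_counts(s) for s in
          ["def f(x):\n    return x + 1", "print(sum([1, 2, 3]))"]]
print(tfidf_weights(corpus, corpus[0]))
```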

  12. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  13. RASCAL : a domain specific language for source code analysis and manipulationa

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  14. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available There are many manuals explaining the language specification among UML-related books. Only some of the books mentioned concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful while developing an information system.

  15. Preliminary application of the draft code case for alloy 617 for a high temperature component

    International Nuclear Information System (INIS)

    Lee, Hyeong Yeon; Kim, Yong Wan; Song, Kee Nam

    2008-01-01

    The ASME draft Code Case for Alloy 617 was developed in the late 1980s for the design of very-high-temperature gas-cooled reactors. The draft Code Case was patterned after ASME Code Section III Subsection NH and was intended to cover Ni-Cr-Co-Mo Alloy 617 up to 982 °C (1800 °F). But the draft Code Case is still incomplete, lacking necessary material properties and design data. In this study, a preliminary evaluation of the creep-fatigue damage for a high-temperature hot duct pipe structure has been carried out according to the draft Code Case. The evaluation procedures and results according to the draft Code Case for Alloy 617 material were compared with those of ASME Subsection NH and RCC-MR for Alloy 800H material. It was shown that many data, including material properties and fatigue and creep data, should be supplemented for the draft Code Case. However, when the creep-fatigue damage evaluation results according to the draft Code Case, ASME-NH and RCC-MR were compared on the basis of this preliminary evaluation, the Alloy 617 results from the draft Code Case tended to be more resistant to creep damage but less resistant to fatigue damage than those from ASME-NH and RCC-MR.

  16. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model the performance of wave energy converters (WECs) in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion in 6 degrees of freedom using the Cummins time-domain impulse response formulation. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be
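    For reference, the Cummins impulse-response formulation mentioned above can be written, for a single degree of freedom (a generic textbook form, not quoted from the WEC-Sim documentation), as:

```latex
(M + A_\infty)\,\ddot{x}(t) + \int_0^t K(t-\tau)\,\dot{x}(\tau)\,d\tau + C\,x(t)
  = F_{\mathrm{exc}}(t) + F_{\mathrm{PTO}}(t)
```

    where M is the body mass, A_inf the infinite-frequency added mass, K the radiation impulse-response kernel, C the hydrostatic stiffness, and F_exc and F_PTO the wave-excitation and power-take-off forces; WEC-Sim solves the 6-degree-of-freedom analogue of this equation.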

  17. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P₁ scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)

  18. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work investigates the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We used MCNPX simulations to demonstrate that surrounding the boron straw detectors with an HDPE coded moderator yields a source-detector orientation-specific response, enabling potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  19. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  20. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses arising from the use of borehole logging equipment containing sealed radioactive sources as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  1. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  2. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially because of the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges from 20% to 70%.
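    A minimal sketch of the hash-generation side, under stated assumptions: a log-spectrogram stands in for the perceptual time-frequency representation, and the LDPC syndrome step is replaced by one-bit quantization of the random projections, so only the compressive-sensing half of the scheme is shown. SciPy is assumed available; all parameters are illustrative.

```python
# Sketch: random projections of a time-frequency representation as a hash.
# The paper's LDPC syndrome coding is omitted; projections are simply
# quantized to one bit each (an illustrative stand-in).
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(42)
fs = 16000
audio = rng.standard_normal(fs * 2)        # stand-in for a 2-second audio frame

_, _, S = spectrogram(audio, fs=fs, nperseg=256)
feat = np.log1p(S).ravel()                 # perceptual-ish time-frequency feature

k = 400                                    # projections per frame (assumed)
P = rng.standard_normal((k, feat.size)) / np.sqrt(feat.size)
proj = P @ feat                            # compressive random projections
hash_bits = (proj > np.median(proj)).astype(np.uint8)
print(hash_bits[:16])                      # first bits of the compact signature
```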

  3. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
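    A minimal replicator-network sketch, assuming PyTorch: a multilayer perceptron with three hidden layers trained toward minimum mean squared reconstruction error, with the middle bottleneck playing the role of the natural coordinates. Layer sizes, the training loop, and the random stand-in source are illustrative assumptions.

```python
# Replicator-network sketch (three hidden layers, per the abstract) in PyTorch.
import torch
import torch.nn as nn

dim_in, dim_code = 16, 3  # source vector dimension, manifold dimension (assumed)

replicator = nn.Sequential(
    nn.Linear(dim_in, 32), nn.Tanh(),      # hidden layer 1
    nn.Linear(32, dim_code), nn.Tanh(),    # hidden layer 2: "natural coordinates"
    nn.Linear(dim_code, 32), nn.Tanh(),    # hidden layer 3
    nn.Linear(32, dim_in),                 # reconstruction of the source vector
)

opt = torch.optim.Adam(replicator.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for _ in range(200):                       # train toward minimum MSE
    x = torch.randn(64, dim_in)            # stand-in for source samples
    opt.zero_grad()
    loss = loss_fn(replicator(x), x)
    loss.backward()
    opt.step()
```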

  4. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO₂, ThO₂, MOX, etc.), enrichment plant operations (UF₆, PuF₄, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
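    For the spontaneous fission part, the Watt spectral form used by SOURCES-3A is easy to evaluate directly. The sketch below normalizes it numerically; the parameters a and b are illustrative textbook values, not entries from the code's 43-actinide library.

```python
# Sketch: evaluating a normalized Watt fission spectrum,
# chi(E) ~ exp(-E/a) * sinh(sqrt(b*E)), with E in MeV.
import numpy as np

def watt(E, a=0.988, b=2.249):
    """Unnormalized Watt spectrum; a, b are illustrative parameters."""
    return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

E = np.linspace(1e-3, 15.0, 2000)          # energy grid in MeV
dE = E[1] - E[0]
chi = watt(E)
chi /= chi.sum() * dE                      # normalize to unit integral

mean_energy = (E * chi).sum() * dE         # first moment of the spectrum
print(f"mean neutron energy ~ {mean_energy:.2f} MeV")
```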

  5. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P₁ scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  6. Development of an object oriented nodal code using the refined AFEN derived from the method of component decomposition

    International Nuclear Information System (INIS)

    Noh, J. M.; Yoo, J. W.; Joo, H. K.

    2004-01-01

    In this study, we invented a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method and developed an object oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions to reduce the large coupled linear system equation into several small single equations. This method requires no additional technique to accelerate the iteration process to solve the inter-nodal coupled equations, since the derived equations can automatically act as the coarse mesh re-balance equations. By utilizing the object oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, dynamic memory allocation, and operator overloading, we developed an object oriented nodal code that can facilitate the input/output and the dynamic control of the memories, and can make the maintenance easy. (authors)

  7. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence in finite distance sources imaging through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance showing two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable in two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool to investigate the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixel and sky direction

  8. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
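    A toy numerical sketch of the hybrid digital-analog idea, under the simplifying assumption that the digital bits arrive error-free above the outage CSNR: a scalar quantizer supplies the coding gain, and the quantization error is transmitted in analog form and estimated by LMMSE at the receiver. The power split alpha and quantizer step are assumptions.

```python
# HDA toy: digital quantizer + analog transmission of the quantization error.
import numpy as np

rng = np.random.default_rng(0)
n, snr_db, alpha = 10000, 12.0, 0.5        # samples, channel SNR, power split
x = rng.standard_normal(n)                 # memoryless Gaussian source (toy)

step = 0.5
q = step * np.round(x / step)              # digital part: uniform scalar quantizer
e = x - q                                  # error carried by the analog part

g = np.sqrt((1 - alpha) / np.var(e))       # scale error to the analog power budget
noise = rng.standard_normal(n) * np.sqrt(10 ** (-snr_db / 10))
y = g * e + noise                          # AWGN channel for the analog component

# LMMSE estimate of the error, then decoder combines digital + analog parts
e_hat = (g * np.var(e) / (g**2 * np.var(e) + np.var(noise))) * y
x_hat = q + e_hat
print("MSE:", np.mean((x - x_hat) ** 2))
```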

  9. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free license. The introduction briefly characterizes the tools available on the market which aid the design of electronic systems in VHDL. Particular attention was paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the plug-in are presented, such as the programming extension concept and the results of the activities of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  10. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider close systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), here presented, introduces some novelties. In fact, it can simulate with an analytical approach both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes both in steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, that are not easily analyzed with purely numerical approaches (like Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is here described in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.

  11. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies' incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  12. Simulation of the thermal-hydraulic behavior of a molten core within a structure, with the three-dimensional, three-component TOLBIAC code

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, B.; Moreau, G.M.; Pigny, S. [Centre d'Etudes Nucleaires de Grenoble (France)]

    1995-09-01

    The TOLBIAC code is devoted to the simulation of the behavior of a molten core within a structure (pressure vessel or core catcher), taking into account the relative position of the core components, the wall ablation and the crust formation. The code is briefly described: 3D model, physical properties and constitutive laws, wall ablation and crust model. Two results are presented: the simulation of the COPO experiment (natural convection with water in a 1/2-scale elliptic pressure vessel), and the simulation of the behavior of a corium in a PWR pressure vessel, with ablation and crust formation.

  13. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  14. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems easy to test. Several metrics have been proposed to identify the testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related with testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  15. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  16. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  17. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
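    A toy sketch of the 1:2 analog mapping idea (two channel uses per source sample): the sample is placed on an Archimedean spiral with two mirrored arms for the sign, transmitted over AWGN, and decoded by a nearest-point search. The parametrization and constants below are illustrative assumptions, not the maps used in the paper.

```python
# Shannon-Kotel'nikov style 1:2 analog mapping sketch over AWGN.
import numpy as np

rng = np.random.default_rng(5)
delta = 0.6                                    # spiral arm spacing (assumed)

def encode(x):
    """Map a scalar onto an Archimedean spiral (mirrored arm for the sign)."""
    theta = np.pi * np.abs(x) / delta          # stretch source onto spiral angle
    r = (delta / np.pi) * theta                # Archimedean: radius grows with angle
    return np.sign(x) * r * np.array([np.cos(theta), np.sin(theta)])

def decode(y, grid=np.linspace(-4, 4, 8001)):
    """Brute-force nearest-point (ML-style) decoding over a source grid."""
    pts = np.stack([encode(x) for x in grid])
    return grid[np.argmin(np.sum((pts - y) ** 2, axis=1))]

x = rng.standard_normal()
y = encode(x) + 0.05 * rng.standard_normal(2)  # two channel uses per sample
print(f"x = {x:+.3f}, decoded = {decode(y):+.3f}")
```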

  18. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications having a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  19. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories, and, together with other techniques, to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  20. An accuracy estimation on neutron penetration calculation through concrete shield with PALLAS codes using bunched component nuclides of concrete

    International Nuclear Information System (INIS)

    Sasamoto, Nobuo; Kotegawa, Hiroshi

    1984-11-01

    In order to improve the computational efficiency of the PALLAS code, the accuracy of the neutron penetration calculation through a concrete shield is estimated when the component nuclides of the concrete are bunched. The calculated fast neutron flux is observed to depend only weakly on how the nuclides are bunched. In contrast, the calculated thermal neutron fluxes depend strongly on the manner of bunching, mainly due to the fact that the iron cross section has an exceptionally large negative sensitivity to the thermal neutron flux. (author)

  1. R&D for Safety Codes and Standards: Materials and Components Compatibility

    Energy Technology Data Exchange (ETDEWEB)

    Somerday, Brian P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFleur, Chris [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Marchi, Chris San [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-08-01

    This project addresses the following technical barriers from the Safety, Codes and Standards section of the 2012 Fuel Cell Technologies Office Multi-Year Research, Development and Demonstration Plan (section 3.8): (A) Safety data and information: limited access and availability (F) Enabling national and international markets requires consistent RCS (G) Insufficient technical data to revise standards.

  2. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 main frames as well as on a MicroVAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability for that event is approximately 4.5 × 10⁻⁶. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  3. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 in the frame of the Shared Cost Action Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). This work made tools available which allow both code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal-hydraulic conditions. For the development of ESTER it was therefore important to investigate how to integrate thermal-hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal-hydraulic code system ATHLET and ESTER. Due to the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.)

  4. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  5. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model had been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements had also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within acceptable limits of uncertainty.

  6. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    Science.gov (United States)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

    Our knowledge about common and different origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are suitable to address this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method allows to distinguish between different polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types at the primary microseism and different source directions at the secondary microseism.
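    A simplified single-component sketch of the underlying beamforming step: delay-and-sum steering over a grid of backazimuths at fixed frequency and slowness, applied to a synthetic plane wave. The full 3-component, polarization-aware algorithm of the paper is not reproduced here; the array geometry and parameters are invented.

```python
# Frequency-domain delay-and-sum beamformer over backazimuth (single component).
import numpy as np

rng = np.random.default_rng(1)
coords = rng.uniform(-5e3, 5e3, size=(9, 2))       # station x,y positions in m
f, s = 0.15, 3.3e-4                                # 0.15 Hz; slowness ~1/(3 km/s)

# Synthetic plane wave arriving from backazimuth 60 degrees, plus noise
baz_true = np.deg2rad(60.0)
k_true = 2 * np.pi * f * s * np.array([np.sin(baz_true), np.cos(baz_true)])
d = np.exp(1j * coords @ k_true) \
    + 0.1 * (rng.standard_normal(9) + 1j * rng.standard_normal(9))

# Beam power over candidate backazimuths
baz_grid = np.deg2rad(np.arange(0, 360, 1.0))
power = []
for baz in baz_grid:
    k = 2 * np.pi * f * s * np.array([np.sin(baz), np.cos(baz)])
    steer = np.exp(1j * coords @ k)                # steering vector for this baz
    power.append(np.abs(np.vdot(steer, d)) ** 2)
print("estimated backazimuth:", np.rad2deg(baz_grid[int(np.argmax(power))]), "deg")
```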

  7. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
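    A compact sketch of the mixed-integer handling with Particle Swarm Optimization: transmission powers evolve continuously, while the discrete source/channel-coding rate choices are snapped to a menu when the objective is evaluated. The distortion function below is a made-up placeholder, not the paper's video-distortion model.

```python
# PSO over continuous powers and discretized rate choices (toy objective).
import numpy as np

rng = np.random.default_rng(7)
rates = np.array([0.25, 0.5, 0.75])        # available channel-coding rates (assumed)
n_nodes, n_particles, iters = 4, 30, 200

def distortion(p, r_idx):
    """Placeholder objective: decreasing in power, penalizing weak channel codes."""
    r = rates[r_idx]
    return np.sum(1.0 / (1.0 + p * (1.2 - r))) + 0.05 * np.sum(p)

def evaluate(x):
    p = np.clip(x[:n_nodes], 0.01, 1.0) * 5.0          # continuous powers (W)
    idx = np.round(x[n_nodes:] * (len(rates) - 1))     # snap to the discrete menu
    r_idx = np.clip(idx, 0, len(rates) - 1).astype(int)
    return distortion(p, r_idx)

dim = 2 * n_nodes                          # [powers | rate indices] per particle
pos = rng.uniform(0, 1, (n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_val = None, np.inf

for _ in range(iters):
    for i in range(n_particles):
        val = evaluate(pos[i])
        if val < pbest_val[i]:
            pbest[i], pbest_val[i] = pos[i].copy(), val
        if val < gbest_val:
            gbest, gbest_val = pos[i].copy(), val
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel

print("best objective:", gbest_val)
```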

   8. Application of the ASME Code Case N-47 to a typical thick-walled HTR component made of Incoloy 800

    International Nuclear Information System (INIS)

    Kemter, F.; Schmidt, A.

    Several components of the HTR plant are exposed to temperatures beyond 500 °C, i.e. within the high-temperature range. The service life of those components is limited not only by fatigue damage but mainly by creep damage and accumulated inelastic strain. These can be conservatively estimated according to the ASME Code (high-temperature part, Code Case N-47) by means of the results of elastic calculations, yet this simplified method of providing evidence often leads to calculated overloads, as in the present case of the live steam collector of the steam generator of an HTR. For providing the evidence that the actual loads of the component are within permissible limits, comprehensive inelastic analyses have to be referred to in such a case. The two-dimensional inelastic analysis which is reported here in detail shows that the creep and fatigue damage as well as the inelastic strains of the live steam collectors accumulated during the service time are below the permissible limits stated in the ASME Code, and failure of those components while used in the reactor can thus be excluded. (orig.)

  9. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected the cinema. The indexical relation and realistic effect with the photographed world, much praised by André Bazin and Roland Barthes, is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro's consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  10. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  11. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  12. Fundamental principles for a nuclear design and structural analysis code for HTR components operating at temperatures above 800 °C

    International Nuclear Information System (INIS)

    Nickel, H.; Schubert, F.

    1985-01-01

    With reference to the special characteristics of an HTR plant for the supply of nuclear process heat, the investigation of the fundamental principles forming the basis for a high-temperature nuclear structural design code is described. As examples, preliminary design values are proposed for the creep rupture and fatigue behaviour. The linear damage accumulation rule is, for practical reasons, proposed for the determination of service life, and the difficulties in using this rule are discussed. Finally, using the data obtained in structural analysis, the main areas of investigation which will lead to improvements in the utilization of the materials are discussed. Based on the current information, the working group "Design Code" believes that a service life of 70000 h can be achieved for the heat-exchanging components operating at above 800 °C. (orig.)

  13. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
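    A minimal sketch of the "first order release" option described above: the release rate is proportional to the remaining inventory, with the user-specified leach rate as the proportionality constant; radioactive decay is added for completeness. All numbers are illustrative, not RESRAD-OFFSITE defaults.

```python
# First-order release: dA/dt = -(lambda_leach + lambda_decay) * A,
# release rate = lambda_leach * A(t).
import numpy as np

lam_leach = 0.05                     # leach rate, 1/yr (user-specified in the code)
lam_decay = np.log(2) / 30.0         # decay constant for a 30-yr half-life (toy)
A0 = 1.0e6                           # initial inventory in the primary contamination (Bq)

t = np.linspace(0, 100, 101)                      # years
inventory = A0 * np.exp(-(lam_leach + lam_decay) * t)
release_rate = lam_leach * inventory              # Bq/yr leaving the source

print(f"inventory after 100 yr: {inventory[-1]:.3e} Bq")
```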

  14. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
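    As a concrete instance of the analogy (using the memoryless Gaussian source under squared-error distortion as an illustrative example; the paper treats general memoryless sources), the rate-distortion function and its derivative are

```latex
R(D) = \frac{1}{2}\log\frac{\sigma^2}{D}, \qquad 0 < D \le \sigma^2,
\qquad\Rightarrow\qquad
\frac{\partial R}{\partial D} = -\frac{1}{2D},
```

    so in the analogy the free-energy difference of the contracting chain tracks R(D), and the contracting force is proportional to the derivative -1/(2D).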

  15. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and an ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs have been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented.

  16. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program PRIMUS reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
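    The matrix-operator method mentioned above can be illustrated with a three-member decay chain, assuming NumPy and SciPy: the chain's decay constants populate a matrix A such that dN/dt = A N, and the ingrowth over any time step is obtained from the matrix exponential. The chain and its constants are invented, not taken from the PRIMUS library.

```python
# Matrix-operator decay-chain ingrowth via the matrix exponential.
import numpy as np
from scipy.linalg import expm

lam = np.array([1e-2, 5e-3, 1e-4])    # decay constants (1/day), illustrative
A = np.array([
    [-lam[0],     0.0,     0.0],      # parent decays away
    [ lam[0], -lam[1],     0.0],      # daughter grows in from the parent
    [    0.0,  lam[1], -lam[2]],      # granddaughter grows in from the daughter
])

N0 = np.array([1.0e6, 0.0, 0.0])      # initial atoms of each chain member
N_30d = expm(A * 30.0) @ N0           # chain inventories after 30 days
print(N_30d)
```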

  17. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  18. Design and optimization of components and processes for plasma sources in advanced material treatments

    OpenAIRE

    Rotundo, Fabio

    2012-01-01

    The research activities described in the present thesis have been oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high-definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities have in particular focused on modifications of the emissive insert with respect to the standard electrode configuration, whi...

  19. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
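
    A minimal numpy/scipy sketch of the matrix source coding idea: represent the dense operator in a 2-D orthonormal transform basis, discard small coefficients (the lossy coding step), and apply the resulting sparse matrix through fast transforms. The 2-D DCT here stands in for the paper's actual transform choices, and the toy space-varying kernel and 1% retention rate are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct

# Toy dense "space-varying convolution" matrix A: a Gaussian blur whose
# width varies slowly with position (an assumption for illustration).
n = 256
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * (0.05 + 0.1 * s[:, None]) ** 2))

# Code A in a 2-D orthonormal DCT basis and keep only the largest ~1%
# of coefficients -- the lossy "matrix source coding" step.
C = dct(dct(A, axis=0, norm="ortho"), axis=1, norm="ortho")
C[np.abs(C) < np.quantile(np.abs(C), 0.99)] = 0.0

def apply_coded(x):
    """y ~= A @ x computed as IDCT( C @ DCT(x) ) using the sparse C."""
    return idct(C @ dct(x, norm="ortho"), norm="ortho")

x = np.random.default_rng(0).random(n)
err = np.linalg.norm(apply_coded(x) - A @ x) / np.linalg.norm(A @ x)
print(f"relative error with ~1% of coefficients: {err:.3f}")
```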

  20. Developing a GIS for CO2 analysis using lightweight, open source components

    Science.gov (United States)

    Verma, R.; Goodale, C. E.; Hart, A. F.; Kulawik, S. S.; Law, E.; Osterman, G. B.; Braverman, A.; Nguyen, H. M.; Mattmann, C. A.; Crichton, D. J.; Eldering, A.; Castano, R.; Gunson, M. R.

    2012-12-01

    There are advantages to approaching the realm of geographic information systems (GIS) using lightweight, open source components in place of a more traditional web map service (WMS) solution. Rapid prototyping, schema-less data storage, the flexible interchange of components, and open source community support are just some of the benefits. In our effort to develop an application supporting the geospatial and temporal rendering of remote sensing carbon-dioxide (CO2) data for the CO2 Virtual Science Data Environment project, we have connected heterogeneous open source components together to form a GIS. Utilizing widely popular open source components including the schema-less database MongoDB, Leaflet interactive maps, the HighCharts JavaScript graphing library, and Python Bottle web-services, we have constructed a system for rapidly visualizing CO2 data with reduced up-front development costs. These components can be aggregated together, resulting in a configurable stack capable of replicating features provided by more standard GIS technologies. The approach we have taken is not meant to replace the more established GIS solutions, but to instead offer a rapid way to provide GIS features early in the development of an application and to offer a path towards utilizing more capable GIS technology in the future.
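
    A minimal sketch of how such a lightweight stack can serve data to a Leaflet front end, assuming hypothetical database, collection, and field names (the project's actual schema is not described in the record): a Bottle endpoint queries MongoDB by bounding box and returns GeoJSON.

```python
# pip install bottle pymongo -- a sketch, not the project's actual service
from bottle import Bottle, request, run
from pymongo import MongoClient

app = Bottle()
db = MongoClient()["co2"]          # hypothetical database name

@app.get("/co2")
def co2_in_bbox():
    """Return CO2 soundings inside a lat/lon bounding box as GeoJSON."""
    q = {"lat": {"$gte": float(request.query.south), "$lte": float(request.query.north)},
         "lon": {"$gte": float(request.query.west), "$lte": float(request.query.east)}}
    docs = db.soundings.find(q, {"_id": 0}).limit(1000)   # hypothetical collection
    return {"type": "FeatureCollection",
            "features": [{"type": "Feature",
                          "geometry": {"type": "Point",
                                       "coordinates": [d["lon"], d["lat"]]},
                          "properties": {"xco2": d.get("xco2")}} for d in docs]}

if __name__ == "__main__":
    run(app, host="localhost", port=8080)
```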

  1. Evaluation of the Inductive Coupling between Equivalent Emission Sources of Components

    Directory of Open Access Journals (Sweden)

    Moisés Ferber

    2012-01-01

    Full Text Available The electromagnetic interference between electronic systems or between their components influences the overall performance. It is thus important to model these interferences in order to optimize the position of the components of an electronic system. In this paper, a methodology to construct the equivalent model of magnetic field sources is proposed. It is based on the multipole expansion, and it represents the radiated emission of generic structures in a spherical reference frame. Experimental results for different kinds of sources are presented, illustrating the method.

  2. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources [fr]

  3. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    Science.gov (United States)

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.
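
    The core of such an analysis is easy to sketch with scikit-learn. The table below is random toy data (column names borrowed from the abstract; the groupings will not reproduce the study's results), and log-standardizing the concentrations is an assumed preprocessing choice.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Random toy concentrations: rows = water samples, columns = compounds.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.lognormal(size=(60, 6)),
                  columns=["cotinine", "DEET", "carbamazepine",
                           "atrazine", "metolachlor", "acetochlor"])

Z = StandardScaler().fit_transform(np.log(df))   # log-standardize
pca = PCA(n_components=2).fit(Z)

# Compounds that load strongly on the same PC are candidates for a
# common source (e.g., wastewater vs. agricultural runoff).
loadings = pd.DataFrame(pca.components_.T, index=df.columns,
                        columns=["PC1", "PC2"])
print(loadings.round(2))
print("explained variance:", pca.explained_variance_ratio_.round(2))
```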

  4. Quantifying undesired parallel components in Thévenin-equivalent acoustic source parameters

    DEFF Research Database (Denmark)

    Nørgaard, Kren Rahbek; Neely, Stephen T.; Rasetshwane, Daniel M.

    2018-01-01

    in the source parameters. Such parallel components can result from, e.g., a leak in the ear tip or improperly accounting for evanescent modes, and introduce errors into subsequent measurements of impedance and reflectance. This paper proposes a set of additional error metrics that are capable of detecting...

  5. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representations for deducing similarity. Most of them consider only the lexical token sequence extracted from the source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
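
    As an illustration of alignment-based similarity over a low-level instruction stream, the sketch below uses plain Smith-Waterman local alignment (a standard algorithm, not the adaptive variant evaluated in the paper) on toy CIL-like opcode sequences; the scoring weights are arbitrary.

```python
def local_alignment(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Toy CIL-like instruction streams (hypothetical snippets).
p1 = ["ldarg.0", "ldc.i4.1", "add", "ret"]
p2 = ["nop", "ldarg.0", "ldc.i4.1", "add", "ret"]
print(local_alignment(p1, p2))   # high score despite the inserted no-op
```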

  6. The source of the intermediate wavelength component of the Earth's magnetic field

    Science.gov (United States)

    Harrison, C. G. A.

    1985-01-01

    The intermediate wavelength component of the Earth's magnetic field has been well documented by observations made by MAGSAT. It has been shown that some significant fraction of this component is likely to be caused within the core of the Earth. Evidence for this comes from analysis of the intermediate wavelength component revealed by spherical harmonics between degrees 14 and 23, in which it is shown that it is unlikely that all of this signal is crustal. Firstly, there is no difference between average continental source strength and average oceanic source strength, which is unlikely to be the case if the anomalies reside within the crust, taking into account the very different nature and thickness of continental and oceanic crust. Secondly, there is almost no latitudinal variation in the source strength, which is puzzling if the sources are within the crust and have been formed by present or past magnetic fields with a factor of two difference in intensity between the equator and the poles. If however most of the sources for this field reside within the core, then these observations are not very surprising.
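
    For reference, the spherical harmonic degrees discussed above refer to the standard expansion of the geomagnetic scalar potential (truncation degree N, Gauss coefficients g_n^m and h_n^m, associated Legendre functions P_n^m, reference radius a); isolating degrees 14 to 23 selects the intermediate-wavelength part of the field:

```latex
V(r,\theta,\varphi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1}
  \sum_{m=0}^{n} \left(g_n^m \cos m\varphi + h_n^m \sin m\varphi\right)
  P_n^m(\cos\theta), \qquad \mathbf{B} = -\nabla V .
```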

  7. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons made between expectations for others and their own behaviors are interesting in their complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  8. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the source code of RIES showed a significant lack of security awareness among the programmers which - among other things - appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  9. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.
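
    The coset (syndrome) mechanism at the heart of such a DSC scheme can be sketched in a few lines. The Gaussian toy data, the quantization step, and the 2-bit coset size are assumptions, and the side information is simulated directly instead of being built by the paper's multilinear regression model.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10.0, 2.0, 1000)      # current-band pixels (toy data)
y = x + rng.normal(0.0, 0.3, 1000)   # decoder side information (simulated)

q = 1.0                              # scalar quantization step (assumed)
idx = np.round(x / q).astype(int)    # quantized source symbols
syndrome = idx % 4                   # transmit only the coset index (2 bits)

# Decoder: pick, within the received coset, the member closest to the side
# information -- correct whenever |idx - y/q| stays below half the coset
# spacing, which is what choosing the quantization step well guarantees.
k = np.round(y / q).astype(int)
base = k - ((k - syndrome) % 4)      # nearest coset member at or below k
rec = np.where(np.abs(base - y / q) <= np.abs(base + 4 - y / q), base, base + 4)
print("fraction decoded exactly:", np.mean(rec == idx))
```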

  10. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  11. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. The method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, for which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments of these systems show low precision and recall measures, and in some of them these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method which solves these problems.

  12. Investigating the association of cardiovascular effects with personal exposure to particle components and sources

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chang-fu, E-mail: changfu@ntu.edu.tw [Department of Public Health, National Taiwan University, Taipei 100, Taiwan (China); Institute of Environmental Health, National Taiwan University, Taipei 100, Taiwan (China); Institute of Occupational Medicine and Industrial Hygiene, Taipei 100, Taiwan (China); Li, Ya-Ru; Kuo, I-Chun [Institute of Environmental Health, National Taiwan University, Taipei 100, Taiwan (China); Hsu, Shih-Chieh [Research Center for Environmental Changes, Academia Sinica, Taipei 115, Taiwan (China); Lin, Lian-Yu; Su, Ta-Chen [Department of Internal Medicine, National Taiwan University Hospital, Taipei 100, Taiwan (China)

    2012-08-01

    Background: Few studies included information on components and sources when exploring the cardiovascular health effects from personal exposure to particulate matters (PM). We previously reported that exposure to PM between 1.0 and 2.5 μm (PM2.5-1) was associated with increased cardio-ankle vascular index (CAVI, an arterial stiffness index), while exposure to PM smaller than 0.25 μm (PM0.25) decreased the heart rate variability (HRV) indices. The purpose of this study was to investigate the association between PM elements and cardiovascular health effects and identify responsible sources. Methods: In a panel study of seventeen mail carriers, the subjects were followed for 5-6 days while delivering mail outdoors. Personal filter samples of PM2.5-1 and PM0.25 were analyzed for their elemental concentrations. The source-specific exposures were further estimated by using absolute principal factor analysis. We analyzed the component- and source-specific health effects on HRV indices and CAVI using mixed models. Results: Several elements in PM2.5-1 (e.g., cadmium and strontium) were associated with the CAVI. Subsequent analyses showed that an interquartile range increase in exposure to PM from regional sources was significantly associated with a 3.28% increase in CAVI (95% confidence interval (CI), 1.47%-5.13%). This significant effect remained (3.35%, CI: 1.62%-5.11%) after controlling for the ozone exposures. For exposures to PM0.25, manganese, calcium, nickel, and chromium were associated with the CAVI and/or the HRV indices. Conclusions: Our study suggests that PM2.5-1 and PM0.25 components may be associated with different cardiovascular effects. Health risks from exposure to PM from sources other than vehicle exhaust should not be underappreciated. - Highlights: • Increased arterial stiffness was related to the components in particles between 1.0 and 2.5 μm.

  13. Neutron Flux Distributions of the Pu-Be Source and Its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with the experimental measurements, constitutes a first local experience intended to establish confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, as measured by the NAA method and computed by the MCNP code, are compared.

  14. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed the 4 variations of ICA in this prediction task. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The success of the sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations.
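
    The three families of constraints map directly onto standard decompositions; below is a minimal scikit-learn sketch on random toy data (component counts and penalties are arbitrary, and this is not the paper's pipeline). The per-scan weights are what a downstream classifier would use.

```python
import numpy as np
from sklearn.decomposition import FastICA, NMF, DictionaryLearning

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 200)))    # toy scans: time points x voxels

ica = FastICA(n_components=5, random_state=0).fit_transform(X)    # independence
nmf = NMF(n_components=5, init="nndsvda", max_iter=500,
          random_state=0).fit_transform(X)                        # positivity
dl = DictionaryLearning(n_components=5, transform_algorithm="lasso_lars",
                        transform_alpha=1.0, random_state=0).fit(X)
codes = dl.transform(X)                                           # sparsity

# Sparser encodings have more zero-valued weights; compare the fraction.
print("zero-weight fraction (sparse coding):", float((codes == 0).mean()))
```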

  15. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected life of the reactor core) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations of the most important fission-product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for the fuel burnup calculation in the MNSR reactor since it is newer, has a larger isotope library, and is more accurate. (author)

  16. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    International Nuclear Information System (INIS)

    Keney, G.S.

    1981-08-01

    A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV
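
    For a single target nuclide and neutron group, the induced activity after an irradiation time t_i and a shutdown (decay) time t_d follows the textbook activation-decay form below (target atom count N, activation cross section σ, flux φ, decay constant λ); a code of this kind effectively sums such terms over its target nuclides and neutron groups:

```latex
A(t_d) = N\,\sigma\,\phi \left(1 - e^{-\lambda t_i}\right) e^{-\lambda t_d} .
```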

  17. Implementation of wall film condensation model to two-fluid model in component thermal hydraulic analysis code CUPID - 15237

    International Nuclear Information System (INIS)

    Lee, J.H.; Park, G.C.; Cho, H.K.

    2015-01-01

    In the containment of a nuclear reactor, wall condensation occurs when the containment cooling system and structures remove the released mass and energy, and this phenomenon is of great importance for ensuring containment integrity. If the phenomenon occurs in the presence of non-condensable gases, their accumulation near the condensate film leads to a significant reduction in heat transfer during the condensation. This study aims at simulating wall film condensation in the presence of non-condensable gas using CUPID, a computational multi-fluid dynamics code developed by the Korea Atomic Energy Research Institute (KAERI) for the analysis of transient two-phase flows in nuclear reactor components. In order to simulate wall film condensation in containment, the code requires a proper wall condensation model and a liquid film model applicable to the analysis of large-scale systems. In the present study, the liquid film model and the wall film condensation model were implemented in the two-fluid model of CUPID. For the condensation simulation, a wall-function approach with the heat and mass transfer analogy was applied in order to save computational time without considerable mesh refinement in the boundary layer. This paper presents the implemented wall film condensation model and then introduces the simulation result using CUPID with the model for a conceptual condensation problem in a large system. (authors)
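
    In general form, the heat and mass transfer analogy used in such wall functions replaces the Prandtl number by the Schmidt number in a single-phase heat transfer correlation, and the resulting mass transfer coefficient h_m drives the condensation mass flux. This is the generic textbook form, not necessarily the exact correlation implemented in CUPID:

```latex
\mathrm{Nu} = f(\mathrm{Re}, \mathrm{Pr}) \;\Longrightarrow\;
\mathrm{Sh} = f(\mathrm{Re}, \mathrm{Sc}), \qquad
\dot{m}''_{\mathrm{cond}} = h_m \left(\rho_{v,\mathrm{bulk}} - \rho_{v,\mathrm{wall}}\right).
```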

  18. Reduction of PM emissions from specific sources reflected on key components concentrations of ambient PM10

    Science.gov (United States)

    Minguillon, M. C.; Querol, X.; Monfort, E.; Alastuey, A.; Escrig, A.; Celades, I.; Miro, J. V.

    2009-04-01

    The relationship between specific particulate emission control and ambient levels of some PM10 components (Zn, As, Pb, Cs, Tl) was evaluated. To this end, the industrial area of Castellón (Eastern Spain) was selected, where around 40% of the EU glazed ceramic tiles and a high proportion of EU ceramic frits (middle product for the manufacture of ceramic glaze) are produced. The PM10 emissions from the ceramic processes were calculated over the period 2000 to 2007 taking into account the degree of implementation of corrective measures throughout the study period. Abatement systems (mainly bag filters) were implemented in the majority of the fusion kilns for frit manufacture in the area as a result of the application of the Directive 1996/61/CE, leading to a marked decrease in PM10 emissions. On the other hand, ambient PM10 sampling was carried out from April 2002 to July 2008 at three urban sites and one suburban site of the area and a complete chemical analysis was made for about 35 % of the collected samples, by means of different techniques (ICP-AES, ICP-MS, Ion Chromatography, selective electrode and elemental analyser). The series of chemical composition of PM10 allowed us to apply a source contribution model (Principal Component Analysis), followed by a multilinear regression analysis, so that PM10 sources were identified and their contribution to bulk ambient PM10 was quantified on a daily basis, as well as the contribution to bulk ambient concentrations of the identified key components (Zn, As, Pb, Cs, Tl). The contribution of the sources identified as the manufacture and use of ceramic glaze components, including the manufacture of ceramic frits, accounted for more than 65, 75, 58, 53, and 53% of ambient Zn, As, Pb, Cs and Tl levels, respectively (with the exception of Tl contribution at one of the sites). The important emission reductions of these sources during the study period had an impact on ambient key components levels, such that there was a high
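
    The PCA-plus-regression step can be sketched as follows. The element matrix is random toy data, and for simplicity plain PC scores are regressed against bulk PM10; the study's absolute principal component analysis additionally re-references the scores to a true-zero sample before regression.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.lognormal(size=(120, 8))                  # toy elemental concentrations
pm10 = X.sum(axis=1) + rng.normal(0, 0.5, 120)    # toy bulk PM10 mass

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Regress bulk PM10 on the factor scores: the coefficients convert each
# factor to mass units, and the intercept absorbs unapportioned mass.
G = np.column_stack([np.ones(len(pm10)), scores])
coef, *_ = np.linalg.lstsq(G, pm10, rcond=None)
contrib = G[:, 1:] * coef[1:]                     # per-sample source contributions
print("coefficients:", coef.round(2))
print("mean contributions:", contrib.mean(axis=0).round(2))
```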

  19. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for the automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.
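
    A toy illustration of the general idea (not the paper's metamodel): a declarative spec is turned into source code by a template, so the development pattern lives in the generator rather than in hand-written code. All names here are invented.

```python
# Minimal code-generation sketch: a spec dict drives a class template.
SPEC = {"name": "Customer", "fields": ["id", "name", "email"]}

TEMPLATE = """class {name}:
    def __init__(self, {args}):
{body}
"""

def generate(spec):
    args = ", ".join(spec["fields"])
    body = "\n".join(f"        self.{f} = {f}" for f in spec["fields"])
    return TEMPLATE.format(name=spec["name"], args=args, body=body)

source = generate(SPEC)
print(source)                # the generated artifact
exec(source)                 # define the class in the current namespace
print(Customer(1, "Ada", "ada@example.org").name)
```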

  20. Technetium-99m-Labeled Autologous Serum Albumin: A Personal-Exclusive Source of Serum Component

    OpenAIRE

    Wang, Yuh-Feng; Chen, Yi-Chun; Li, Dian-Kun; Chuang, Mei-Hua

    2011-01-01

    Technetium-99m human serum albumin (99mTc-HSA) is an important radiopharmaceutical required in nuclear medicine studies. However, the risk of transfusion-transmitted infection remains a major safety concern. Auto-preparation of the serum component acquired from the patient provides a “personal-exclusive” source for radiolabeling. This paper evaluates the practicality of on-site elution and the subsequent radiolabeling efficacy for serum albumin. Results showed that the autologous eluate contained more...

  1. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency with which new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and the Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open-source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and ongoing contact with the organizer, suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  2. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of the current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on inputs determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as a principal tool for an overall uncertainty analysis in source term quantifications, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance obtained from cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytical distributions, while in the third the distribution is unknown. The first case is given by symmetric analytical distributions; the second consists of two asymmetric distributions of which the skewness is nonzero.
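
    The LHS leg of such a procedure is straightforward to reproduce with SciPy; the sketch below uses three hypothetical input parameters with invented ranges (the actual MAAP3.0B inputs are not listed in the record). Each row of the resulting matrix defines one code run.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of three hypothetical uncertain inputs.
sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=100)                    # stratified samples in [0, 1)^3

lo = np.array([0.1, 300.0, 1e-4])            # invented lower bounds
hi = np.array([0.9, 600.0, 1e-2])            # invented upper bounds
X = qmc.scale(u, lo, hi)                     # input matrix, one code run per row

# The code's outputs (e.g., CsI release fractions) would then be regressed
# on X to obtain the SRC/SRRC importance measures mentioned above.
print(X.shape, X.min(axis=0).round(3), X.max(axis=0).round(3))
```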

  3. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    Directory of Open Access Journals (Sweden)

    Dong-Sup Lee

    2015-01-01

    Full Text Available Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied to myriad fields such as medical science, image processing, and numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method to extract more stable source signals having valid order includes an iterative and reordering process of the extracted mixing matrix to reconstruct finally converged source signals, referring to the magnitudes of correlation coefficients between the intermediately separated signals and the signals measured on or nearby sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem of a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.
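
    The correlation-based reordering and sign-fixing step can be sketched as follows; the two synthetic sources and the mixing matrix are toy assumptions, and the true sources stand in for the "signals measured on or nearby sources" that the paper uses as references.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
S = np.c_[np.sin(2 * np.pi * 7 * t),                 # toy source 1
          np.sign(np.sin(2 * np.pi * 3 * t))]        # toy source 2
X = S @ rng.normal(size=(2, 2)).T + 0.05 * rng.normal(size=(2000, 2))

est = FastICA(n_components=2, random_state=0).fit_transform(X)

# ICA returns components in arbitrary order and sign; fix both by
# correlating each estimate with the reference signals.
C = np.corrcoef(est.T, S.T)[:2, 2:]        # estimates x references
order = np.abs(C).argmax(axis=1)           # reference matched by each estimate
signs = np.sign(C[np.arange(2), order])
perm = np.argsort(order)
est = est[:, perm] * signs[perm]           # column j now tracks source j
```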

  4. Validation of the Open Source Code_Aster Software Used in the Modal Analysis of the Fluid-filled Cylindrical Shell

    Directory of Open Access Journals (Sweden)

    B. D. Kashfutdinov

    2017-01-01

    Code_Aster in the field of aerospace systems. Using the free software to analyze the aerospace system components allows enterprises to diminish dependency on the commercial software, reduce the cost of software usage, and adapt the open source software to the specific problems.

  5. Heat source component development program. Report for July--December 1978

    International Nuclear Information System (INIS)

    Foster, E.L. Jr.

    1979-01-01

    This is the seventh of a series of reports describing the results of several analytical and experimental programs being conducted at Battelle-Columbus Laboratories to develop components for advanced radioisotope heat source applications. The heat sources will for the most part be used in advanced static and dynamic power conversion systems. Battelle's support of LASL during the current reporting period has been to determine the operational and reentry response of selected heat source trial designs, and their thermal response to a space shuttle solid propellant fire environment. Thermal, ablation, and thermal stress analyses were conducted using two-dimensional modeling techniques previously employed for the analysis of the earlier trial design versions, and modified in part to improve the modeling accuracy. Further modifications were made to improve the modeling accuracy as described herein. Thermal, ablation, and thermal stress analyses were then conducted for the trial design selected by LASL/DOE for more detailed studies using three-dimensional modeling techniques

  6. Nuclear heat source component design considerations for HTGR process heat reactor plant concept

    International Nuclear Information System (INIS)

    McDonald, C.F.; Kapich, D.; King, J.H.; Venkatesh, M.C.

    1982-05-01

    The coupling of a high-temperature gas-cooled reactor (HTGR) and a chemical process facility has the potential for long-term synthetic fuel production (i.e., oil, gasoline, aviation fuel, hydrogen, etc) using coal as the carbon source. Studies are in progress to exploit the high-temperature capability of an advanced HTGR variant for nuclear process heat. The process heat plant discussed in this paper has a 1170-MW(t) reactor as the heat source and the concept is based on indirect reforming, i.e., the high-temperature nuclear thermal energy is transported [via an intermediate heat exchanger (IHX)] to the externally located process plant by a secondary helium transport loop. Emphasis is placed on design considerations for the major nuclear heat source (NHS) components, and discussions are presented for the reactor core, prestressed concrete reactor vessel (PCRV), rotating machinery, and heat exchangers

  7. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross‐sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross‐sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in  125I and  103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low‐energy sources such as  125I and  103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for  103Pd and 10 cm for  125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for  192Ir and less than 1.2% for  137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  8. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
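
    For context, the g_L(r) compared in these studies is the radial dose function of the standard TG-43 formalism, in which the dose rate around a source factorizes as below (air-kerma strength S_K, dose-rate constant Λ, line-source geometry function G_L, anisotropy function F, reference point r_0 = 1 cm, θ_0 = π/2):

```latex
\dot{D}(r,\theta) = S_K \,\Lambda\,
\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta).
```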

  9. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    Full Text Available The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural inter-unit analysis, which allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), as well as to support complex assembly systems of projects. Since the selected Clang Static Analyzer uses source code directly as input data, we need to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions, complex build systems, and complex project support, including support for multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even to such complicated projects. This system does not depend on the inter-procedural analysis method and allows an arbitrary change of its algorithm.

  10. Exploiting IoT Technologies and Open Source Components for Smart Seismic Network Instrumentation

    Science.gov (United States)

    Germenis, N. G.; Koulamas, C. A.; Foundas, P. N.

    2017-12-01

    The data collection infrastructure of any seismic network poses a number of requirements and trade-offs related to accuracy, reliability, power autonomy, and installation and operational costs. Given the right hardware design at the edge of this infrastructure, the embedded software running inside the instruments is the heart of the pre-processing and communication services implementation and their integration with the central storage and processing facilities of the seismic network. This work demonstrates the feasibility and benefits of exploiting software components from heterogeneous sources in order to realize a smart seismic data logger, achieving higher reliability, faster integration, and lower development and testing costs for critical functionality that is in turn responsible for the cost- and power-efficient operation of the device. The instrument's software builds on top of widely used open source components around the Linux kernel with real-time extensions, the core Debian Linux distribution, the earthworm and seiscomp tooling frameworks, as well as components from the Internet of Things (IoT) world, such as the CoAP and MQTT protocols for the signaling plane, besides the widely used de-facto standards of the application domain at the data plane, such as the SeedLink protocol. By using an innovative integration of features based on lower-level GPL components of the seiscomp suite with higher-level processing earthworm components, coupled with IoT protocol extensions to the latter, the instrument can implement smart functionality such as network-controlled, event-triggered data transmission in parallel with edge archiving and on-demand, short-term historical data retrieval.
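
    The event-triggered transmission mentioned above might look like the sketch below, which uses the real paho-mqtt client API but a hypothetical broker, topic layout, and payload format.

```python
# pip install paho-mqtt -- a sketch, not the instrument's actual firmware
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("broker.example.org", 1883)   # hypothetical broker
client.loop_start()

def on_trigger(station, samples):
    """Publish an event-triggered waveform snippet instead of streaming 24/7."""
    payload = json.dumps({"station": station, "t": time.time(), "data": samples})
    client.publish(f"seismic/{station}/event", payload, qos=1)

on_trigger("ST01", [0.01, 0.02, 0.50, -0.40])
```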

  11. Nuclear heat source component design considerations for HTGR process heat reactor plant concept

    International Nuclear Information System (INIS)

    McDonald, C.F.; Kapich, D.; King, J.H.; Venkatesh, M.C.

    1982-01-01

    Using alternate energy sources abundant in the U.S.A. to help curb foreign oil imports is vitally important from both national security and economic standpoints. Perhaps the most forwardlooking opportunity to realize national energy goals involves the integrated use of two energy sources that have an established technology base in the U.S.A., namely nuclear energy and coal. The coupling of a high-temperature gas-cooled reactor (HTGR) and a chemical process facility has the potential for long-term synthetic fuel production (i.e., oil, gasoline, aviation fuel, hydrogen, etc.) using coal as the carbon source. Studies are in progress to exploit the high-temperature capability of an advanced HTGR variant for nuclear process heat. The process heat plant discussed in this paper has a 1170-MW(t) reactor as the heat source and the concept is based on indirect reforming, i.e., the high-temperature nuclear thermal energy is transported (via an intermediate heat exchanger (IHX)) to the externally located process plant by a secondary helium transport loop. Emphasis is placed on design considerations for the major nuclear heat source (NHS) components, and discussions are presented for the reactor core, prestressed concrete reactor vessel (PCRV), rotating machinery, and heat exchangers

  12. Heat source component development program. Quarterly report for April--June 1977

    Energy Technology Data Exchange (ETDEWEB)

    Foster, E.L. Jr. (comp.)

    1977-07-01

    This is the third in a series of quarterly reports describing the results of several experimental programs being conducted at Battelle-Columbus to develop components for advanced radioisotope heat source applications. The heat sources will for the most part be used in advanced static and dynamic power conversion systems. The specific component development efforts which are described include: improved selective and nonselective vents for helium release from the fuel containment; an improved reentry member and an improved impact member, singly and combined. The unitized reentry-impact member (RIM) is under development to be used as a bifunctional ablator. The development of a unitized reentry-impact member (RIM) has been stopped and the efforts are being redirected to the evaluation of materials that could be used in the near term for the module housing of the General Purpose Heat Source (GPHS). This redirection will be particularly felt in the selection of (improved) materials for reentry analysis and in the experimental evaluation of materials in impact tests. Finally thermochemical supporting studies are reported.

  13. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
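
    The prime-sequence codes in question are easy to generate; the sketch below computes the incoherent (intensity-summed) autocorrelation of one codeword, whereas the paper's point is precisely that a highly coherent source forces complex chip amplitudes to be summed instead.

```python
import numpy as np

def prime_codeword(p, i):
    """Prime-sequence codeword i (0 <= i < p) over GF(p): length p*p, weight p."""
    w = np.zeros(p * p, dtype=int)
    for j in range(p):
        w[j * p + (i * j) % p] = 1      # one chip per group of p slots
    return w

p = 7
c = prime_codeword(p, 3)
auto = np.correlate(c, c, mode="full")  # incoherent autocorrelation
print("peak:", auto.max(), "max sidelobe:", int(np.sort(auto)[-2]))
```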

  15. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, programmers' readability of source code is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  16. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2

  17. Variability search in M 31 using principal component analysis and the Hubble Source Catalogue

    Science.gov (United States)

    Moretti, M. I.; Hatzidimitriou, D.; Karampelas, A.; Sokolovsky, K. V.; Bonanos, A. Z.; Gavras, P.; Yang, M.

    2018-06-01

    Principal component analysis (PCA) is extensively used in astronomy but has not yet been exhaustively exploited for variability searches. The aim of this work is to investigate the effectiveness of using PCA as a method to search for variable stars in large photometric data sets. We apply PCA to variability indices computed for light curves of 18 152 stars in three fields in M 31 extracted from the Hubble Source Catalogue. The projection of the data onto the principal components is used as a stellar variability detection and classification tool, capable of distinguishing between RR Lyrae stars, long-period variables (LPVs), and non-variables. This projection recovered more than 90 per cent of the known variables and revealed 38 previously unknown variable stars (about 30 per cent more), all LPVs except for one object of uncertain variability type. We conclude that this methodology can indeed successfully identify candidate variable stars.

  18. Source Term Characterization for Structural Components in 17 x 17 KOFA Spent Fuel Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Dong Keun; Kook, Dong Hak; Choi, Heui Joo; Choi, Jong Won [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-12-15

    Source terms of the metal waste comprising a spent fuel assembly are relatively important when the spent fuel is pyroprocessed, because cesium, strontium, and transuranics are no longer a concern in the aspect of source terms for permanent disposal. In this study, the characteristics of the radiation source terms for each structural component in a spent fuel assembly were analyzed by using ORIGEN-S, with the assumption that 10 metric tons of uranium is pyroprocessed. At first, the mass and volume of each structural component of the fuel assembly were calculated in detail. An activation cross-section library was generated by using the KENO-VI/ORIGEN-S module for the top-end piece and bottom-end piece, because those are located at the outer core with a different neutron spectrum compared to that of the inner core. As a result, the values of radioactivity, decay heat, and hazard index were revealed to be 1.40 x 10^15 becquerels, 236 watts, and 4.34 x 10^9 m^3-water, respectively, at 10 years after discharge. Those values correspond to 0.7 %, 1.1 %, and 0.1 %, respectively, compared to that of the spent fuel. The Inconel 718 grid plate was shown to be the most important component in all aspects of radioactivity, decay heat, and hazard index, although its mass occupies only 1 % of the total. It was also shown that if the Inconel 718 grid plate is managed separately, the radioactivity and hazard index of the metal waste could be decreased to 20-45 % and 30-45 %, respectively. As a whole, the decay heat of the metal waste was shown to be negligible in the aspect of disposal system design, while the radioactivity and hazard index are important.

  19. Source Term Characteristics Analysis for Structural Components in PWR spent fuel assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kook, Dong Hak; Choi, Heui Joo; Cho, Dong Keun [KAERI, Daejeon (Korea, Republic of)

    2010-12-15

    Source terms of the metal waste comprising a spent fuel assembly are relatively important when the spent fuel is pyroprocessed, because cesium, strontium, and transuranics are no longer a concern in the aspect of source terms for permanent disposal. In this study, the characteristics of the radiation source terms for each structural component in a spent fuel assembly were analyzed by using ORIGEN-S, with the assumption that 10 metric tons of uranium is pyroprocessed. At first, the mass and volume of each structural component of the fuel assembly were calculated in detail. An activation cross-section library was generated by using the KENO-VI/ORIGEN-S module for the top-end piece and bottom-end piece, because those are located at the outer core with a different neutron spectrum compared to that of the inner core. As a result, the values of radioactivity, decay heat, and hazard index were revealed to be 1.32 x 10^15 becquerels, 238 watts, and 4.32 x 10^9 m^3-water, respectively, at 10 years after discharge. Those values correspond to 0.6 %, 1.1 %, and 0.1 %, respectively, compared to that of the spent fuel. The Inconel 718 grid plate was shown to be the most important component in all aspects of radioactivity, decay heat, and hazard index, although its mass occupies only 1 % of the total. It was also shown that if the Inconel 718 grid plate is managed separately, the radioactivity and hazard index of the metal waste could be decreased to 25-50 % and 35-40 %, respectively. As a whole, the decay heat of the metal waste was shown to be negligible in the aspect of disposal system design, while the radioactivity and hazard index are important.

  20. Variation of atmospheric aerosol components and sources during smog episodes in Debrecen, Hungary

    International Nuclear Information System (INIS)

    Angyal, A.; Kertész, Zs.; Szoboszlai, Z.; Szikszai, Z.; Ferenczi, Z.; Furu, E.; Török, Zs.

    2013-01-01

    Full text: Atmospheric particulate matter (APM) pollution is one of the leading environmental problems in densely populated urban environments. In most cities around the world, high aerosol pollution levels occur regularly. Debrecen, an average middle-European city, is no exception. Every year there are several days when the aerosol pollution level exceeds the alarm threshold (100 μg/m{sup 3} for PM10 as a 24-hour average). When the PM10 pollution level remains above this limit for days, the authorities declare 'smog'. In this work we studied the variation of the elemental components and sources of PM10, PM2.5 and coarse PM, and their dependence on meteorological conditions, in Debrecen during two smog episodes that occurred in November 2011. Aerosol samples were collected with 2-hour time resolution with a PIXE International sequential streaker at an urban background site in downtown Debrecen. In order to obtain information on the size distribution of the aerosol elemental components, 9-stage cascade impactors were also employed during the sampling campaigns. The elemental composition (Z ≥ 13) was determined by Particle Induced X-ray Emission (PIXE) at the IBA Laboratory of Atomki. Concentrations of elemental carbon were measured with a smoke stain reflectometer. On this database, source apportionment was carried out using the positive matrix factorisation (PMF) method. Four factors were identified for both size fractions: soil dust, traffic, domestic heating, and oil combustion. The time pattern of the aerosol elemental components and PM sources exhibited a strong dependence on the mixing layer thickness. We showed that domestic heating had a major contribution to the aerosol pollution. (This work was carried out in the frame of the János Bolyai Research Scholarship of the Hungarian Academy of Sciences and the TÁMOP-4.2.2/B-10/1-2010-0024 project.) (author)

  1. Experience with the source evaluation board method of procuring technical components for the Fermilab Main Injector

    International Nuclear Information System (INIS)

    Harding, D.J.; Collins, J.P.; Kobliska, G.R.; Chester, N.S.; Pewitt, E.G.; Fowler, W.B.

    1993-01-01

    Fermilab has adopted the Source Evaluation Board (SEB) method for procuring certain major technical components of the Fermilab Main Injector. The SEB procedure is designed to ensure the efficient and effective expenditure of Government funds while optimizing the opportunity to attain project objectives. A qualitative trade-off is allowed between price and technical factors. The process involves a large amount of work and is justified only for a very limited number of procurements. Fermilab has gained experience with the SEB process in awarding subcontracts for major subassemblies of the Fermilab Main Injector dipoles.

  2. Chemical characteristics, deposition fluxes and source apportionment of precipitation components in the Jiaozhou Bay, North China

    Science.gov (United States)

    Xing, Jianwei; Song, Jinming; Yuan, Huamao; Li, Xuegang; Li, Ning; Duan, Liqin; Qu, Baoxiao; Wang, Qidong; Kang, Xuming

    2017-07-01

    To systematically illustrate the chemical characteristics, deposition fluxes and potential sources of the major components in precipitation, 49 rainwater and snow samples were collected in the Jiaozhou Bay from June 2015 to May 2016. We determined the pH, the electric conductivity (EC) and the concentrations of the main ions (Na+, K+, Ca2+, Mg2+, NH4+, SO42-, NO3-, Cl- and F-), and analyzed their source contributions and atmospheric transport. The results showed that the precipitation samples were severely acidified, with an annual volume-weighted mean (VWM) pH of 4.77. The frequency of acid precipitation (pH < 5.6) indicated a serious pollution level over the Jiaozhou Bay. Surprisingly, NH4+ (40.4%), higher than Ca2+ (29.3%), was the dominant cation, which differs from most areas of China. SO42- was the most abundant anion, accounting for 41.6% of the total anions. The wet deposition flux of sulfur (S) was 12.98 kg ha-1 yr-1. Rainfall, emission intensity and the long-range transport of natural and anthropogenic pollutants together control the concentrations and wet deposition fluxes of the chemical components in the precipitation. Non-sea-salt SO42- and NO3- were the primary acidic components, while NH4+ and non-sea-salt Ca2+ were the dominant neutralizing constituents. The comparatively lower rainwater concentration of Ca2+ in the Jiaozhou Bay than in other regions of Northern China is likely a cause of the strong acidity of the precipitation. Based on combined enrichment factor and correlation analysis, the integrated contributions of sea-salt, crustal and anthropogenic sources to the total ions in precipitation were estimated to be 28.7%, 14.5% and 56.8%, respectively. However, the marine-source fraction of SO42- may be underestimated, as the contribution from marine phytoplankton was neglected. Therefore, the precipitation components in the Jiaozhou Bay present complex chemical characteristics under the combined effects of natural
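
    The enrichment factor analysis mentioned above is conventionally formulated with Na+ as the sea-salt reference element; a standard textbook form (not taken from this paper) is:

      \[
        \mathrm{EF}_{\mathrm{sea}}(X) \;=\;
        \frac{\left( X/\mathrm{Na^{+}} \right)_{\mathrm{precipitation}}}
             {\left( X/\mathrm{Na^{+}} \right)_{\mathrm{seawater}}},
        \qquad
        \mathrm{SSF}(X) \;=\; \frac{1}{\mathrm{EF}_{\mathrm{sea}}(X)},
      \]

    where EF_sea(X) much greater than 1 indicates a dominant non-marine (crustal or anthropogenic) origin for ion X, and the sea-salt fraction SSF is capped at 1.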

  3. Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of Space Geodetic Time Series

    Science.gov (United States)

    Gualandi, Adriano; Serpelloni, Enrico; Elina Belardinelli, Maria; Bonafede, Maurizio; Pezzo, Giuseppe; Tolomei, Cristiano

    2015-04-01

    A critical point in the analysis of ground displacement time series, such as those measured by modern space geodetic techniques (primarily continuous GPS/GNSS and InSAR), is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while retaining most of the explained variance of the dataset. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies, since PCA does not perform well in solving the so-called Blind Source Separation (BSS) problem. Recovering and separating the different sources that generate the observed ground deformation is a fundamental task in order to give a physical meaning to the possible different sources. PCA fails in the BSS problem because it looks for a new Euclidean space where the projected data are uncorrelated. The uncorrelation condition is usually not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all fields where PCA is also applied. An ICA approach enables us to explain the displacement time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient deformation signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources.
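
    As a minimal illustration of the BSS point above (PCA decorrelates but does not separate, while ICA seeks independence), the following Python sketch mixes two synthetic "source" signals and recovers them with FastICA from scikit-learn. It is a toy example, not the vbICA method of the abstract; the signals and mixing matrix are invented.

      # Toy blind source separation, assuming scikit-learn is available.
      import numpy as np
      from sklearn.decomposition import PCA, FastICA

      rng = np.random.default_rng(0)
      t = np.linspace(0, 8, 2000)
      s1 = np.sin(2 * np.pi * t)                 # periodic (e.g., seasonal) source
      s2 = np.where(t > 4, 1.0, 0.0) * (t - 4)   # ramp-like transient source
      S = np.c_[s1, s2]
      A = np.array([[1.0, 0.5], [0.3, 1.0], [0.8, 0.2]])  # mixing to 3 "stations"
      X = S @ A.T + 0.01 * rng.standard_normal((t.size, 3))

      pca_comps = PCA(n_components=2).fit_transform(X)    # uncorrelated, still mixed
      ica_comps = FastICA(n_components=2, random_state=0).fit_transform(X)  # ~separated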

  4. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate the solutions of seventeen students working on twenty-five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  5. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
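
    To make the quenching discussion concrete, the sketch below locates a quench level by comparing a chemical timescale against the vertical mixing timescale tau_mix ~ H^2/K_zz. This is a schematic of the approximation being critiqued, with invented numbers; it is not VULCAN code.

      # Schematic quench-level estimate (illustrative assumptions throughout).
      import numpy as np

      def mixing_timescale(H_cm, Kzz_cm2_s):
          """Vertical mixing timescale from scale height and eddy diffusion."""
          return H_cm**2 / Kzz_cm2_s

      def quench_pressure(p_bar, tau_chem_s, H_cm, Kzz_cm2_s):
          """Pressure (bar) where tau_chem first exceeds tau_mix, scanning upward."""
          tau_mix = mixing_timescale(H_cm, Kzz_cm2_s)
          above = tau_chem_s >= tau_mix
          return p_bar[np.argmax(above)] if above.any() else None

      p = np.logspace(2, -6, 100)          # 100 bar -> 1 microbar
      tau_chem = 1e-2 * (100.0 / p)**2     # toy law: chemistry slows aloft
      print(quench_pressure(p, tau_chem, H_cm=5e7, Kzz_cm2_s=1e10))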

  6. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B. 4. The

  7. Judicious use of custom development in an open source component architecture

    Science.gov (United States)

    Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.

    2014-12-01

    Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
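
    Programmatic access of the kind described typically goes through the catalog's REST interface. The sketch below assumes the third-party requests package and a ScienceBase-style items endpoint; the URL, parameters, and response fields are assumptions for illustration, not a documented contract.

      # Hypothetical query against a ScienceBase-style catalog API.
      import requests

      def search_items(query, max_items=5):
          url = "https://www.sciencebase.gov/catalog/items"   # assumed endpoint
          params = {"q": query, "format": "json", "max": max_items}
          resp = requests.get(url, params=params, timeout=30)
          resp.raise_for_status()
          return [item.get("title") for item in resp.json().get("items", [])]

      print(search_items("water quality"))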

  8. Experimental Melting Study of Basalt-Peridotite Hybrid Source: Constraints on the Chemistry of the Recycled Component

    Science.gov (United States)

    Gao, S.; Takahashi, E.; Matsukage, K. N.; Suzuki, T.; Kimura, J. I.

    2015-12-01

    It is believed that the magma genesis of OIB is largely influenced by a recycled oceanic crust component in the mantle plume (e.g., Hauri et al., 1996; Takahashi & Nakajima, 2002; Sobolev et al., 2007). Mallik & Dasgupta (2012) reported that the wall-rock reaction in MORB-eclogite and peridotite layered experiments produced a spectrum of tholeiitic to alkalic melts. However, the appropriate eclogite source composition is still under dispute. In order to constrain the geochemistry of the recycled component as well as its melting process, we conducted a series of high-P, high-T experiments. Melting experiments (1~10 hrs) were performed at 2.9 GPa with a Boyd-England type piston-cylinder (1460~1540°C for dry experiments, 1400~1500°C for hydrous experiments) and at 5 GPa with a Kawai-type multi-anvil (1550~1650°C for dry experiments, 1350~1550°C for hydrous experiments), at the Magma Factory, Tokyo Tech. Spinel lherzolite KLB-1 (Takahashi 1986) was employed as the peridotite component. Two basalts were used as the recycled component: Fe-enriched Columbia River basalt (CRB72-180, Takahashi et al., 1998) and N-type MORB (NAM-7, Yasuda et al., 1994). In dry experiments below the peridotite dry solidus, melt compositions ranged from basaltic andesite to tholeiite. An Opx reaction band generated between the basalt and peridotite layers hindered chemical reaction. On the other hand, alkali basalt formed in hydrous run products because H2O promoted melting in both layers. Compared with melts formed in the N-MORB-peridotite runs, the layered experiments with CRB are enriched in FeO, TiO2, K2O and light REE at a given MgO. In other words, melts produced by the CRB-peridotite layered experiments are close to alkali basalts in OIB and tholeiite in Hawaii, while those from the layered experiments with N-MORB are poor in the above elements. Thus we propose that Fe-rich Archean or Proterozoic tholeiite (BVSP 1980) would be a possible candidate for the recycled component in the OIB source.

  9. MDEP Technical Report TR-CSWG-02. Technical Report on Lessons Learnt on Achieving Harmonisation of Codes and Standards for Pressure Boundary Components in Nuclear Power Plants

    International Nuclear Information System (INIS)

    2013-01-01

    This report was prepared by the Multinational Design Evaluation Program's (MDEP's) Codes and Standards Working Group (CSWG). The primary, long-term goal of MDEP's CSWG is to achieve international harmonisation of codes and standards for pressure-boundary components in nuclear power plants. The CSWG recognised early on that the first step to achieving harmonisation is to understand the extent of similarities and differences amongst the pressure-boundary codes and standards used in various countries. To assist the CSWG in its long-term goals, several standards developing organisations (SDOs) from various countries performed a comparison of their pressure-boundary codes and standards to identify the extent of similarities and differences in code requirements and the reasons for their differences. The results of the code-comparison project provided the CSWG with valuable insights in developing the subsequent actions to take with SDOs and the nuclear industry to pursue harmonisation of codes and standards. The results enabled the CSWG to understand from a global perspective how each country's pressure-boundary code or standard evolved into its current form and content. The CSWG recognised the important fact that each country's pressure-boundary code or standard is a comprehensive, living document that is continually being updated and improved to reflect changing technology and common industry practices unique to each country. The rules in the pressure-boundary codes and standards include comprehensive requirements for the design and construction of nuclear power plant components including design, materials selection, fabrication, examination, testing and overpressure protection. The rules also contain programmatic and administrative requirements such as quality assurance; conformity assessment (e.g., third-party inspection); qualification of welders, welding equipment and welding procedures; non-destructive examination (NDE) practices and

  10. A three-field model of transient 3D multiphase, three-component flow for the computer code IVA3. Pt. 1

    International Nuclear Information System (INIS)

    Kolev, N.I.

    1991-12-01

    This work contains a description of the physical and mathematical basis on which the IVA3 computer code relies. After reviewing the state of the art of 3D modeling for transient multiphase flows, the model assumptions and the modeling technique used in IVA3 are described. Starting from the principles of conservation of mass, momentum, and energy, the non-averaged conservation equations are derived for each of the velocity fields, which consist of different isothermal components. Thereafter, averaging is applied and the working form of the system of 21 partial differential equations is derived. Special attention is paid to the strict consistency of the modeling technique used in IVA3 with the second law of thermodynamics. The entropy concept used is derived starting from the unaveraged conservation equations with subsequent averaging. The source terms of the entropy production are carefully defined, and the final form of the averaged entropy equation is given ready for direct practical application. The idea of strong analytical thermodynamic coupling between the pressure field and changes in the other thermodynamic properties, used for the first time in 3D multi-fluid modeling, is presented in detail. After obtaining the working form of the conservation equations, the discretization procedure and the reduction to algebraic problems are presented. The mathematical solution method, together with some information about the architecture of IVA3, including the local momentum decoupling and accuracy control, is presented as well. (orig./GL) [de]
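
    For orientation, a representative averaged field mass-conservation equation of the kind referred to above can be written as follows (generic multi-fluid notation, not the exact IVA3 formulation):

      \[
        \frac{\partial}{\partial t}\left(\alpha_l \rho_l\right)
        + \nabla \cdot \left(\alpha_l \rho_l \mathbf{v}_l\right)
        = \sum_{m \neq l} \Gamma_{ml},
        \qquad \sum_{l} \alpha_l = 1,
      \]

    where $\alpha_l$, $\rho_l$ and $\mathbf{v}_l$ are the volume fraction, density and velocity of field $l$, and $\Gamma_{ml}$ is the mass transferred to field $l$ from field $m$ (e.g., by evaporation or condensation).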

  11. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, vapours and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  12. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables the import of coded data in a single text file into more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping the conventions for coding point names) are discussed as well. Possibilities for future development are suggested (e.g. generalization of point-name coding or more complex attribute table creation).

  13. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab
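
    The compartment structure lends itself to a simple illustration. The Python sketch below couples a constant container-failure rate to first-order waste-form leaching; it is a toy model with invented rate constants, not the BREACH/LEACH modules themselves.

      # Toy two-compartment source term: breach, then leach (illustrative only).
      import numpy as np

      def source_release_rate(t_yr, lam_breach=0.05, lam_leach=0.02, m0=1.0):
          """Release rate (inventory fraction per year) at times t_yr."""
          breached = 1.0 - np.exp(-lam_breach * t_yr)   # fraction of containers failed
          remaining = m0 * np.exp(-lam_leach * t_yr)    # unleached inventory fraction
          return lam_leach * breached * remaining

      t = np.linspace(0.0, 200.0, 201)
      rate = source_release_rate(t)
      print(f"peak release at ~{t[rate.argmax()]:.0f} yr")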

  14. A novel design for sap flux data acquisition in large research plots using open source components

    Science.gov (United States)

    Hawthorne, D. A.; Oishi, A. C.

    2017-12-01

    Sap flux sensors are a widely used tool for estimating in-situ, tree-level transpiration rates. These probes are installed in the stems of multiple trees within a study area and are typically left in place throughout the year. Sensors vary in their design and theory of operation, but all require electrical power for a heating element and produce at least one analog signal that must be digitized for storage. Two topologies are traditionally adopted to energize these sensors and gather their data. In the first, a single data logger and power source are used, with dedicated cables radiating out from the logger to supply power to each probe and retrieve the analog signals. In the second layout, a standalone data logger is located at each monitored tree, and batteries must be distributed throughout the plot to service these loggers. We present a hybrid solution based on industrial control systems that employs a central data logger and battery but co-locates the digitizing hardware with the sensors at each tree. Each hardware node is able to communicate and share power over wire links with neighboring nodes. The resulting network provides a fault-tolerant path between the logger and each sensor. The approach is optimized to limit disturbance of the study plot, protect signal integrity, and enhance system reliability. This open-source implementation is built on the Arduino micro-controller system and employs the RS485 and Modbus communications protocols. It is supported by laptop-based management software coded in Python. The system is designed to be readily fabricated and programmed by non-experts. It works with a variety of sap-flux measurement techniques, and it is able to interface with additional environmental sensors.
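
    A node network of this kind is typically polled with a few lines of Modbus client code. The sketch below assumes the third-party pymodbus library (3.x-style API; the interface differs between versions) and an invented register map and serial port; the actual system's register layout is not described in the abstract.

      # Hypothetical Modbus RTU poll of one sensor node over an RS485 adapter.
      from pymodbus.client import ModbusSerialClient

      client = ModbusSerialClient(port="/dev/ttyUSB0", baudrate=9600)
      if client.connect():
          # Assumed node at unit id 3; two holding registers with a raw reading.
          result = client.read_holding_registers(address=0, count=2, slave=3)
          if not result.isError():
              print("raw sap-flux reading:", result.registers)
          client.close()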

  15. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. In some cases, however, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts the satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  16. Development and tests of molybdenum armored copper components for MITICA ion source

    Science.gov (United States)

    Pavei, Mauro; Böswirth, Bernd; Greuner, Henri; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo

    2016-02-01

    In order to prevent detrimental erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS as well as the experimental results.

  17. Development and tests of molybdenum armored copper components for MITICA ion source

    International Nuclear Information System (INIS)

    Pavei, Mauro; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo; Böswirth, Bernd; Greuner, Henri

    2016-01-01

    In order to prevent detrimental erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS as well as the experimental results.

  18. Development and tests of molybdenum armored copper components for MITICA ion source

    Energy Technology Data Exchange (ETDEWEB)

    Pavei, Mauro, E-mail: mauro.pavei@igi.cnr.it; Marcuzzi, Diego; Rizzolo, Andrea; Valente, Matteo [Consorzio RFX, Corso Stati Uniti, 4, I-35127 Padova (Italy); Böswirth, Bernd; Greuner, Henri [Max-Planck-Institut für Plasmaphysik, Boltzmannstrasse 2, D-85748 Garching (Germany)

    2016-02-15

    In order to prevent detrimental erosion of components impinged by back-streaming positive D or H ions in the Megavolt ITER Injector and Concept Advancement (MITICA) beam source, a solution based on the explosion bonding technique has been identified for producing a 1 mm thick molybdenum armour layer on a copper substrate, compatible with ITER requirements. Prototypes have recently been manufactured and tested in the high heat flux test facility GLADIS (Garching Large Divertor Sample Test Facility) to check the capability of the molybdenum-copper interface to withstand several thermal shock cycles at high power density. This paper presents the numerical fluid-dynamic analyses of the prototypes simulating the test conditions in GLADIS as well as the experimental results.

  19. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values: the maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows the position of the detection system to be optimised for in-situ PGAA.
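
    For reference, diffusion-theory codes of this kind solve the neutron diffusion equation; in its steady-state one-group form (a generic textbook statement, not CITATION's multigroup implementation) it reads:

      \[
        -\,D\,\nabla^{2}\phi(\mathbf{r}) \;+\; \Sigma_{a}\,\phi(\mathbf{r}) \;=\; S(\mathbf{r}),
      \]

    with diffusion coefficient $D$, macroscopic absorption cross section $\Sigma_a$, scalar flux $\phi$, and the external (here Am-Be) source term $S$.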

  20. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both the results and the convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As is generally the case for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred that could potentially improve convergence further. (orig.)

  1. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy, using a more up-to-date Monte Carlo code and data (MCNP5) than previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions, together with a published {sup 125}I spectrum, were used to perform the dosimetric characterization of this source as per the updated AAPM TG-43 protocol. The simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than those from MCNP4c2 and the published PTRAN (Version 7.43) data, while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
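
    For context, the TG-43 parameters mentioned above (dose rate constant, radial dose function, 2D anisotropy function) enter the standard AAPM TG-43 dose-rate equation:

      \[
        \dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
        \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\, F(r,\theta),
      \]

    with air-kerma strength $S_K$, dose-rate constant $\Lambda$, line-source geometry function $G_L$, radial dose function $g_L$, 2D anisotropy function $F$, and reference point $(r_0,\theta_0) = (1\,\mathrm{cm},\,90^{\circ})$.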

  2. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE), with the overall concept aimed at automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis

  3. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity, usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in a system after a cyber attack, authorship disputes, or proof of authorship in court. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure that is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
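
    The following Python sketch illustrates the flavour of byte-level n-gram profiling with an intersection-style similarity. The profile size, n, and the code snippets are invented; the paper's exact profile and measure may differ.

      # Byte-level n-gram profiles with a simple shared-n-gram similarity.
      from collections import Counter

      def profile(source_bytes, n=3, top=1500):
          """Set of the `top` most frequent byte n-grams of the input."""
          grams = Counter(source_bytes[i:i + n]
                          for i in range(len(source_bytes) - n + 1))
          return {g for g, _ in grams.most_common(top)}

      # Inline stand-ins for real source files (hypothetical snippets):
      known    = profile(b"for (int i = 0; i < n; ++i) { sum += a[i]; }")
      disputed = profile(b"for (int j = 0; j < n; ++j) { total += b[j]; }")
      print("shared n-grams:", len(known & disputed))   # profile intersection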

  4. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
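
    A hierarchical XML representation of a classification fragment can be built with nothing more than the standard library, as sketched below; the element names and sample codes are illustrative, not the authors' schema.

      # Toy XML fragment of a hierarchical classification (illustrative schema).
      import xml.etree.ElementTree as ET

      chapter = ET.Element("chapter", code="X")                  # hypothetical chapter
      block = ET.SubElement(chapter, "block", code="J09-J18")
      cat = ET.SubElement(block, "category", code="J10")
      ET.SubElement(cat, "label", lang="en").text = "Influenza due to identified virus"
      ET.SubElement(cat, "subcategory", code="J10.0")

      print(ET.tostring(chapter, encoding="unicode"))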

  5. Suppression of lymphocyte proliferation by marijuana components is related to cell number and cell source

    International Nuclear Information System (INIS)

    Klein, T.; Pross, S.; Newton, C.; Friedman, H.

    1986-01-01

    Conflicting reports have appeared concerning the effect of marijuana components on immune responsiveness. The authors have observed that the effect of cannabinoids on lymphocyte proliferation varied with both the concentration of the drug and the mitogen used. They now report that, at a constant drug concentration, the cannabinoid effect varied from no effect to suppression depending upon the number of cells in culture and the organ source of the cells. Dispersed cell suspensions of mouse lymph node, spleen, and thymus were prepared and cultured at varying cell numbers with either delta-9-tetrahydrocannabinol or 11-hydroxy-delta-9-tetrahydrocannabinol and various mitogens. Lymphocyte proliferation was analyzed by {sup 3}H-thymidine incorporation. T-lymphocyte mitogen responses in cultures containing high cell numbers were unaffected by the cannabinoids, but as cell numbers were reduced a suppression of the response was observed. Furthermore, thymus cells were considerably more susceptible to cannabinoid suppression than cells from either lymph node or spleen. These results suggest that certain lymphocyte subpopulations are more sensitive to cannabinoid suppression and that, in addition to drug concentration, other variables such as cell number and cell source must be considered when analyzing cannabinoid effects.

  6. Performance Analysis for Bit Error Rate of DS-CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available Minimum energy (ME) coding combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple-access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing the RF link with on-off keying (OOK) modulation. The relations are presented and discussed for several levels of expected channel errors, in terms of the bit error rate and the SNR, for various numbers of users (receivers).
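
    The core ME-coding idea can be sketched in a few lines: under on-off keying a transmitted '1' costs energy, so the most frequent source symbols should receive the codewords of lowest Hamming weight. The symbol set below is invented.

      # Minimum-energy codebook: frequent symbols get low-weight codewords.
      from itertools import product

      def me_codebook(symbols_by_freq, length=4):
          """Map symbols (most frequent first) to codewords sorted by weight."""
          words = sorted(product("01", repeat=length), key=lambda w: w.count("1"))
          return {sym: "".join(w) for sym, w in zip(symbols_by_freq, words)}

      book = me_codebook(["idle", "low", "medium", "high"])
      print(book)   # the most frequent symbol gets the all-zero (zero-energy) word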

  7. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high-performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up under construction at CERN. This contrib...

  8. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  9. A High Performance Chemical Simulation Preprocessor and Source Code Generator, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are a critical component of aerospace research, Earth systems research, and energy research. These simulations enable a...

  10. Large-eddy simulation of a convective boundary layer generated by a highly heated source with the open-source code OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, a horizontal upward-facing round plate heated to high temperature, are examined using well-resolved large-eddy simulations. Verification is carried out through comparison with experiments: the predicted statistics, including the PDF distribution of temperature fluctuations, agree well with measurements, indicating that the present simulations can appropriately reproduce the turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary and streaky flow patterns appear near the edge, and these patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structure is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. The structure is the source of peculiar turbulence characteristics, which are quite difficult to predict with RANS-type turbulence models. The understanding of such structures obtained in the present study should help improve the turbulence models used in nuclear engineering. (author)

  11. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class, the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size that should be reliably on scale.
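
    A common way to select an on-scale initial step automatically is to compare the size of the solution with the size of its derivative, as in the sketch below. This is a generic heuristic in the spirit of the topic, not necessarily the algorithm proposed in the paper.

      # Heuristic initial step size for an ODE solver (illustrative constants).
      import numpy as np

      def initial_step(f, t0, y0, rtol=1e-6, atol=1e-9):
          """Guess h0 so that the first Euler step stays small relative to y0."""
          scale = atol + rtol * np.abs(y0)
          d0 = np.linalg.norm(y0 / scale)
          d1 = np.linalg.norm(f(t0, y0) / scale)
          return 1e-6 if d1 <= 1e-15 else 0.01 * d0 / d1

      # Example: y' = -50 y suggests a much smaller h0 than y' = -0.5 y would.
      print(initial_step(lambda t, y: -50.0 * y, 0.0, np.array([1.0])))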

  12. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…
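
    As a toy illustration of the beacon idea (not the paper's recognizer), the sketch below flags a classic three-statement swap, often cited as a beacon for sorting algorithms, in a fragment of Java source; the regex and example are invented.

      # Toy beacon detector: spot a temp-variable swap in Java-like source text.
      import re

      SWAP_BEACON = re.compile(
          r"(\w+)\s*=\s*([\w\[\]]+)\s*;"   # tmp = a[i];
          r"\s*\2\s*=\s*([\w\[\]]+)\s*;"   # a[i] = a[j];
          r"\s*\3\s*=\s*\1\s*;"            # a[j] = tmp;
      )

      java_fragment = "int tmp = a[i]; a[i] = a[j]; a[j] = tmp;"
      print(bool(SWAP_BEACON.search(java_fragment)))   # True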

  13. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  14. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.
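
    Typical usage follows the pattern sketched below, assuming the package's documented Python interface; parameter names should be checked against the current documentation, and all numerical values here are invented.

      # Sketch of a SPIDERMAN phase-curve call (values illustrative, not fitted).
      import numpy as np
      import spiderman as sp

      params = sp.ModelParams(brightness_model="zhang")  # physically motivated 2D map
      params.n_layers = 5                        # disc discretization
      params.t0, params.per = 0.0, 0.8135        # epoch, period [days]
      params.a_abs, params.a = 0.015, 4.9        # semi-major axis [AU], a/R_star
      params.inc, params.ecc, params.w = 85.0, 0.0, 90.0
      params.rp = 0.16                           # planet/star radius ratio
      params.p_u1 = params.p_u2 = 0.0            # planet limb darkening
      params.xi, params.T_n, params.delta_T = 0.3, 1000.0, 900.0  # "zhang" params
      params.T_s = 4500.0                        # stellar effective temperature [K]
      params.l1, params.l2 = 1.1e-6, 1.7e-6      # bandpass [m]

      t = np.linspace(0.0, params.per, 500)
      flux = params.lightcurve(t)                # phase curve with secondary eclipse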

  15. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
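
    The double-difference operation is easy to picture with a toy example: take the cross-delta between two correlated data sets, then an adjacent-delta along the result. The numbers below are invented; the patent covers the general method, not these values.

      # Double-difference pre-coding on two correlated toy sequences.
      import numpy as np

      band_a = np.array([100, 102, 105, 109, 114])   # e.g., spectral band 1
      band_b = np.array([98, 101, 103, 108, 112])    # correlated band 2

      cross_delta = band_b - band_a        # removes inter-band correlation
      double_diff = np.diff(cross_delta)   # adjacent-delta of the cross-delta

      print(cross_delta, double_diff)      # small values -> cheaper entropy coding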

  16. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  17. Separation of radiated sound field components from waves scattered by a source under non-anechoic conditions

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn

    2010-01-01

    to the source. Thus the radiated free-field component is estimated simultaneously with solving the inverse problem of reconstructing the sound field near the source. The method is particularly suited to cases in which the overall contribution of reflected sound in the measurement plane is significant....

  18. Design and synthesis of single-source molecular precursors to homogeneous multi-component oxide materials

    Science.gov (United States)

    Fujdala, Kyle Lee

    This dissertation describes the syntheses of single-source molecular precursors to multi-component oxide materials. These molecules possess a core metal or element with various combinations of -OSi(OtBu)3, -O2P(OtBu)2, and -OB[OSi(OtBu)3]2 ligands. Such molecules decompose under mild thermolytic conditions to give models for oxide-supported metal species and multi-component oxides. Significantly, the first complexes to contain three or more heteroelements suitable for use in the TMP method have been synthesized. Compounds for use as single-source molecular precursors have been synthesized containing Al, B, Cr, Hf, Mo, V, W, and Zr, and their thermal transformations have been examined. Heterogeneous catalytic reactions have been examined for selected materials. Also, cothermolyses of molecular precursors and additional molecules (i.e., metal alkoxides) have been utilized to provide materials with several components for potential use as catalysts or catalyst supports. Reactions of one and two equivs of HOSi(OtBu)3 with Cr(OtBu)4 afforded the first Cr(IV) alkoxysiloxy complexes (tBuO)3CrOSi(OtBu)3 and (tBuO)2Cr[OSi(OtBu)3]2, respectively. The high-yielding, convenient synthesis of (tBuO)3CrOSi(OtBu)3 makes this complex a useful single-source molecular precursor, via the TMP method, to Cr/Si/O materials. The thermal transformations of (tBuO)3CrOSi(OtBu)3 and (tBuO)2Cr[OSi(OtBu)3]2 to chromia-silica materials occur at low temperatures (≤180°C), giving isobutene as the major carbon-containing product. The material generated from the solid-state conversion of (tBuO)3CrOSi(OtBu)3 (CrOSss) has an unexpectedly high surface area of 315 m2 g-1 that is slightly reduced to 275 m2 g-1 after calcination at 500°C in O2. The xerogel obtained by the thermolysis of an n-octane solution of (tBuO)3CrOSi(OtBu)3 (CrOSixg) has a surface area of 315 m2 g-1 that is reduced to 205 m2 g-1 upon calcination at 500°C. Powder X-ray diffraction (PXRD) analysis revealed that Cr2O3 is

  19. Pesticides and biocides in a karst catchment: Identification of contaminant sources and related flow components

    Science.gov (United States)

    Wagner, Thomas; Bollmann, Ulla E.; Bester, Kai; Birk, Steffen

    2013-04-01

    Karst aquifers are widely used as drinking water resources. However, their high vulnerability to chemical and bacterial contamination due to the heterogeneity in aquifer properties (highly conductive solution conduits embedded in the less conductive fissured rock) is difficult to assess and thus poses major challenges to the management of karst water resources. Contamination of karst springs by organic micro-pollutants has been observed in recent studies. Within this study, the water from different springs draining one karst aquifer, as well as the main sinking stream replenishing it, was analysed before, during and after a storm water event in order to examine the occurrence of different pesticides and biocides. Contaminants of both urban and agricultural origin were detected in the water at concentrations in the low ng/L range (tebuconazole, carbendazim, diuron, isoproturon, terbutryn, atrazine, and dichlorobenzamide (BAM), a metabolite of dichlobenil). While some compounds could be followed from the sinking stream to the springs (e.g. dichlorobenzamide), others seem to have a source in the autogenic recharge from the karst plateau (tebuconazole: a wood preservative used in buildings). These compounds appear to be related to fast flow components with residence times on the order of days, which are known from a number of tracer tests with fluorescent dyes. However, the occurrence of the pesticide atrazine (banned since 1995 in Austria) in the springs, while no current input into the karst occurs, shows that some compounds have long residence times in the karst aquifer. These differences in residence times can hardly be attributed to differences in the physico-chemical properties of the compounds and must thus be due to the presence of slow and fast flow components. This is in agreement with the duality of karst aquifers arising from highly conductive networks of solution conduits embedded in less conductive fissured carbonate rocks.

  20. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  1. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use can also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)

  2. SUPER-FMIT, an accelerator-based neutron source for fusion components irradiation testing

    International Nuclear Information System (INIS)

    Burke, R.J.; Holmes, J.J.; Johnson, D.L.; Mann, F.M.; Miles, R.R.

    1984-01-01

    The SUPER-FMIT facility is proposed as an advanced accelerator-based neutron source for high flux irradiation testing of large-sized fusion reactor components. The facility would require only small extensions to existing accelerator and target technology originally developed for the Fusion Materials Irradiation Test (FMIT) facility. There, neutrons would be produced by a 0.1 ampere beam of 35 MeV deuterons incident upon a liquid lithium target. The volume available for high flux (> 10¹⁴ n/cm²·s) testing in SUPER-FMIT would be 14 liters, about a factor of 30 larger than in the FMIT facility. This is because the effective beam current of 35 MeV deuterons on target can be increased by a factor of ten, to 1.0 ampere or more. Such a large increase can be accomplished by acceleration of multiple beams of molecular deuterium ions (D₂⁺) to 70 MeV in a common accelerator structure. The availability of multiple beams and large total current allows great variety in the testing that can be done. For example, fluxes greater than 10¹⁶ n/cm²·s, multiple simultaneous experiments, and great flexibility in tailoring of spatial distributions of flux and spectra can be achieved

  3. Evaluation of crack-like flaw in Japanese fitness-for-service code for nuclear power plant components

    International Nuclear Information System (INIS)

    Kashima, Koichi

    2003-01-01

    The establishment of a fitness-for-service code for the evaluation of flaws detected in nuclear plant components is a topic of broad interest in Japan. Such a code is a maintenance rule intended to keep operating components at a constant level of safety, and it forms a pair with the design code. The Codes for Nuclear Power Generation Facilities - Rules on Fitness-for-Service for Nuclear Power Plants were issued in May 2002 by the Japan Society of Mechanical Engineers (JSME) and amended in October 2002 by the addition of an inspection code. On this basis, the Japanese government is proceeding with the establishment of a national fitness-for-service code founded on the private-sector rule. This paper introduces the present status of, and remaining tasks in, crack-like flaw evaluation, taking as an example the JSME private-sector rule, which comprises the three elements of inspection, evaluation, and repair/replacement. Flaw evaluation consists of 1) a first evaluation step and 2) a second evaluation step. The first step determines the flaw size by modeling its shape. If the flaw is smaller than the evaluation criterion, the component can be used without restriction; if it is larger, it must be evaluated in the second step. If the flaw size estimated for the end of the evaluation period is smaller than the allowable value, the component can be used within that period; if it is larger, the component must be repaired or replaced. Modeling, evaluation criteria, fracture evaluation, safety standards and future problems are described. (S.Y.)

  4. Experience with copper oxide production in antiproton source components at Fermi National Accelerator Laboratory

    International Nuclear Information System (INIS)

    Ader, Christine R.; Harms, Elvin R. Jr; Morgan, James P.

    2000-01-01

    The Antiproton (Pbar) Source at Fermi National Accelerator Laboratory is a facility comprising a target station, two rings called the Debuncher and Accumulator, and the transport lines between those rings and the remainder of the particle accelerator complex. Water is by far the most common medium for carrying excess heat away from components, primarily electromagnets, in this facility. The largest of the water systems found in Pbar is the 95 degree Fahrenheit Low Conductivity Water (LCW) system. LCW is water which has had free ions removed, increasing its resistance to electrical current. This water circuit is used to cool magnets, power supplies, and stochastic cooling components and typically has a resistivity of 11-18 megohm·cm. For more than ten years the Antiproton rings were plagued with overheating magnets due to plugged water-cooling channels. Various repairs have been tried over the years with no permanent success. Throughout all of this time, water samples have indicated copper oxide, CuO, as the source of the contamination. Matters came to a head in early 1997 following a major underground LCW leak between the Central Utilities Building and the Antiproton Rings enclosures. Over a span of several weeks following system turn-on, some twenty magnets overheated, leading to unreliable Pbar source operation. Although it was known that oxygen in the system reacts with the copper tubing to form CuO, work to remedy this problem was not undertaken until this time period. Leaks, large quantities of make-up water, infrequent filter replacement, and thermal cycling also result in an increase in the corrosion product release rate. A three-pronged approach has been implemented to minimize the amount of copper oxide available to plug the magnets: (1) installation of an oxygen removal system capable of achieving dissolved oxygen concentrations in the parts per billion (ppb) range; (2) regular closed-loop filter/flushing of the copper headers and magnets and stainless

  5. Components and context: exploring sources of reading difficulties for language minority learners and native English speakers in urban schools.

    Science.gov (United States)

    Kieffer, Michael J; Vukovic, Rose K

    2012-01-01

    Drawing on the cognitive and ecological domains within the componential model of reading, this longitudinal study explores heterogeneity in the sources of reading difficulties for language minority learners and native English speakers in urban schools. Students (N = 150) were followed from first through third grade and assessed annually on standardized English language and reading measures. Structural equation modeling was used to investigate the relative contributions of code-related and linguistic comprehension skills in first and second grade to third grade reading comprehension. Linguistic comprehension and the interaction between linguistic comprehension and code-related skills each explained substantial variation in reading comprehension. Among students with low reading comprehension, more than 80% demonstrated weaknesses in linguistic comprehension alone, whereas approximately 15% demonstrated weaknesses in both linguistic comprehension and code-related skills. Results were remarkably similar for the language minority learners and native English speakers, suggesting the importance of their shared socioeconomic backgrounds and schooling contexts.

  6. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Full Text Available Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for treatment of malignancies in the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be used in different configurations inside the applicator to obtain different dose distributions. In this study, first the dose distribution at different distances from the source was obtained around a single pellet inside the applicator in a water phantom using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance and an angle of 90 degrees. According to the results, the doses decreased towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, such discrepancies reached 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reached 30% for the angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, and this causes significant errors in treatment planning, which are not negligible, especially for a large number of sources inside the applicator.
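
    As an illustration of the normalization described above, the anisotropy at each point is simply the dose divided by the dose at the same distance on the transverse plane (90 degrees). A minimal Python sketch of this step, with placeholder dose values standing in for the MCNP4C tally output:

      import numpy as np

      # Illustrative dose table D(r, theta); values are placeholders,
      # not actual MCNP4C results.
      radii_cm = np.array([1.0, 3.0, 5.0, 7.0])
      angles_deg = np.array([5.0, 45.0, 90.0, 165.0])
      dose = np.random.default_rng(0).uniform(0.8, 1.2, (radii_cm.size, angles_deg.size))

      # Divide each dose by the dose at the same distance and 90 degrees.
      i90 = int(np.argmin(np.abs(angles_deg - 90.0)))
      anisotropy = dose / dose[:, [i90]]

      for i, r in enumerate(radii_cm):
          for j, a in enumerate(angles_deg):
              print(f"r = {r:.0f} cm, theta = {a:5.1f} deg: F = {anisotropy[i, j]:.3f}")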

  7. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  8. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations have been done for the most common natural radionuclides in soil, i.e. the 238U and 232Th series and 40K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. Results obtained here were compared with those of other authors
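
    The weighting step prescribed by the ICRP is a simple sum over organ equivalent doses. A minimal Python sketch using the ICRP-60 tissue weighting factors that underlie ICRP Publication 74 (the organ doses here are placeholders, not the MCNP-4B phantom results):

      # Effective dose E = sum over tissues of w_T * H_T.
      W_T = {"gonads": 0.20, "lung": 0.12, "stomach": 0.12, "colon": 0.12,
             "red marrow": 0.12, "bladder": 0.05, "breast": 0.05, "liver": 0.05,
             "oesophagus": 0.05, "thyroid": 0.05, "skin": 0.01,
             "bone surface": 0.01, "remainder": 0.05}

      H_T = {organ: 1.0 for organ in W_T}  # equivalent doses in Sv (placeholders)

      E = sum(W_T[t] * H_T[t] for t in W_T)
      print(f"Effective dose: {E:.3f} Sv")  # equals 1.000 when all H_T = 1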

  9. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models shown below: a) Booth model: a new model for FP release from fuel; b) modified model for FP transfer from fuel to bubbles or sodium coolant; c) modified model for bubble dynamics in the coolant. Computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  10. TITANIUM ISOTOPE SOURCE RELATIONS AND THE EXTENT OF MIXING IN THE PROTO-SOLAR NEBULA EXAMINED BY INDEPENDENT COMPONENT ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Steele, Robert C. J.; Boehnke, Patrick [Department of Earth, Planetary, and Space Sciences, University of California, Los Angeles, CA 90095 (United States)

    2015-04-01

    The Ti isotope variations observed in hibonites represent some of the largest isotope anomalies observed in the solar system. Titanium isotope compositions have previously been reported for a wide variety of early solar system materials, including calcium-aluminum-rich inclusions (CAIs) and CM hibonite grains, some of the earliest materials to form in the solar system, and bulk meteorites, which formed later. These data have the potential to allow mixing of material to be traced between many different regions of the early solar system. We have used independent component analysis to examine the mixing end-members required to produce the compositions observed in the different data sets. The independent component analysis yields results identical to a linear regression for the bulk meteorites. The components identified for hibonite suggest that most of the grains are consistent with binary mixing from one of three highly anomalous nucleosynthetic sources. Comparison of these end-members shows that the source which dominates the variation of compositions in the meteorite parent body forming regions was not present in the region in which the hibonites formed. This suggests that the source which dominates variation in Ti isotope anomalies between the bulk meteorites was not present when the hibonite grains were forming. One explanation is that the bulk meteorite source may not be a primary nucleosynthetic source but was created by mixing two or more of the hibonite sources. Alternatively, the hibonite sources may have been diluted during subsequent nebula processing and are not dominant solar system signatures.
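
    For readers unfamiliar with the technique, independent component analysis decomposes a set of mixed compositions into statistically independent end-members. A minimal Python sketch on synthetic Ti isotope anomalies (the end-member patterns and mixing fractions below are invented for illustration, not taken from the study):

      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(1)

      # Synthetic epsilon-unit anomalies in 46Ti, 47Ti, 48Ti, 49Ti, 50Ti,
      # built as mixtures of two hypothetical nucleosynthetic end-members.
      end_members = np.array([[ 2.0, 1.0, 0.0, 3.0,  9.0],
                              [-1.0, 0.5, 0.0, 1.0, -4.0]])
      mixing = rng.uniform(0.0, 1.0, (60, 2))
      data = mixing @ end_members + rng.normal(0.0, 0.05, (60, 5))

      ica = FastICA(n_components=2, random_state=0)
      scores = ica.fit_transform(data)   # sample scores on each component
      print(ica.mixing_.T)               # recovered isotope pattern of each component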

  11. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far beyond the level of performance expected on the new and future massively-parallel super computers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules for binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project and its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  12. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects like a crown splash, a high-speed jet, formation of secondary droplets or thin liquid films, etc. The outcome depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces and other factors that a set of dimensionless criteria can account for. In the present research, we considered the droplet and the pool to consist of the same viscous incompressible liquid. We took surface tension into account but neglected gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations. We review the possibility of using these codes for simulation of processes in free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of liquid properties in terms of the Reynolds number and Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot corresponding mode maps. The ratio of liquid density to that of the surrounding gas induces several changes in the mode maps; increasing this density ratio suppresses the crown splash.
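
    The dimensionless criteria mentioned above are straightforward to evaluate. A short Python example for an illustrative water droplet in air (the values are placeholders, not the parameters used in the study):

      # Dimensionless groups governing the impact regime.
      rho   = 1000.0    # liquid density, kg/m^3
      mu    = 1.0e-3    # dynamic viscosity, Pa*s
      sigma = 0.072     # surface tension, N/m
      g     = 9.81      # gravitational acceleration, m/s^2
      D     = 2.0e-3    # droplet diameter, m
      V     = 3.0       # impact velocity, m/s

      Re = rho * V * D / mu        # inertia vs. viscosity
      We = rho * V**2 * D / sigma  # inertia vs. surface tension
      Fr = V**2 / (g * D)          # inertia vs. gravity; large Fr justifies
                                   # neglecting gravity, as done above
      print(f"Re = {Re:.0f}, We = {We:.0f}, Fr = {Fr:.0f}")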

  13. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
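
    The quantity being tabulated follows the familiar MIRD-style sum over radiation types: the mean energy emitted per decay, weighted by the fraction absorbed in the target and divided by the target mass. A schematic Python sketch with placeholder decay data (not values from the report):

      # S(X <- Y) = sum_i Delta_i * phi_i(X <- Y) / m_X, summed over
      # radiation types i (alpha, electron, photon, spontaneous fission).
      MEV_TO_J = 1.602e-13

      emissions = [        # (mean energy per decay [MeV], absorbed fraction)
          (0.50, 1.00),    # non-penetrating radiation, source = target
          (0.35, 0.30),    # photon component, partially absorbed
      ]
      target_mass_kg = 0.31   # placeholder organ mass

      S = sum(delta * phi for delta, phi in emissions) * MEV_TO_J / target_mass_kg
      print(f"S = {S:.3e} Gy per decay")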

  14. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead...

  15. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13

  16. A new six-component super soliton hierarchy and its self-consistent sources and conservation laws

    International Nuclear Information System (INIS)

    Wei Han-yu; Xia Tie-cheng

    2016-01-01

    A new six-component super soliton hierarchy is obtained based on matrix Lie super algebras. Super trace identity is used to furnish the super Hamiltonian structures for the resulting nonlinear super integrable hierarchy. After that, the self-consistent sources of the new six-component super soliton hierarchy are presented. Furthermore, we establish the infinitely many conservation laws for the integrable super soliton hierarchy. (paper)

  17. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The open-source MIS OpenMRS developer tools and software API are reviewed. The results of code refactoring of the dialog subsystem of the CDSS platform, which is implemented as a module for the open-source MIS OpenMRS, are presented. The information model of the database of the CDSS dialog subsystem was updated in accordance with MIS OpenMRS requirements. The Model-View-Controller (MVC) based approach to the CDSS dialog subsystem architecture was re-implemented in the Java programming language using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form for the CDSS dialog subsystem integration is developed as an extension. The administrative module of the CDSS platform was recreated. The data exchange formats and methods for interaction of the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service were re-implemented with the help of AJAX technology via the jQuery library

  18. Development of GPU Based Parallel Computing Module for Solving Pressure Equation in the CUPID Component Thermo-Fluid Analysis Code

    International Nuclear Information System (INIS)

    Lee, Jin Pyo; Joo, Han Gyu

    2010-01-01

    In the thermo-fluid analysis code named CUPID, a linear system of pressure equations must be solved in each iteration step. The time spent repeatedly solving this linear system can be quite significant, because large sparse matrices of rank greater than 50,000 are involved and the diagonal dominance of the system hardly holds. Therefore, parallelization of the linear system solver is essential to reduce the computing time. Meanwhile, Graphics Processing Units (GPUs) have been developed as highly parallel, multi-core processors to meet the global demand for high quality 3D graphics. If a suitable interface is provided, parallelization using GPUs becomes available to engineering computing. NVIDIA provides a Software Development Kit (SDK) named CUDA (Compute Unified Device Architecture) to code developers so that they can manage GPUs for parallelization using the C language. In this research, we implement parallel routines for the linear system solver using CUDA, and examine the performance of the parallelization. In the next section, we describe the method of CUDA parallelization for the CUPID code, and then the performance of the CUDA parallelization is discussed
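
    The bottleneck being parallelized is a repeated sparse linear solve. A CPU reference sketch of that step in Python, using a Poisson-like stand-in for the CUPID pressure matrix (the work described above offloads this solve to the GPU with CUDA; this sketch only illustrates the problem being solved):

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Assemble a sparse 5-point Laplacian as a stand-in pressure system.
      n = 100                                  # 100 x 100 grid -> 10,000 unknowns
      T = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n))
      S = sp.diags([-1.0, -1.0], [-1, 1], shape=(n, n))
      A = (sp.kron(sp.identity(n), T) + sp.kron(S, sp.identity(n))).tocsr()
      b = np.ones(A.shape[0])

      # Bi-CGSTAB copes better than plain CG when diagonal dominance
      # and symmetry are not guaranteed, as noted above.
      x, info = spla.bicgstab(A, b)
      print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged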

  19. XUV synchrotron optical components for the Advanced Light Source: Summary of the requirements and the developmental program

    International Nuclear Information System (INIS)

    McKinney, W.; Irick, S.; Lunt, D.

    1992-07-01

    We give a brief summary of the requirements for water cooled optical components for the Advanced Light Source (ALS), a third generation synchrotron radiation source under construction at Lawrence Berkeley Laboratory (LBL). Materials choices, surface figure and smoothness specifications, and metrology systems for measuring the plated metal surfaces are discussed. Results from a finished water cooled copper alloy mirror will be used to demonstrate the state of the art in optical metrology with the Takacs Long Trace Profiler (LTP II)

  20. Fluoride in the Serra Geral Aquifer System: Source Evaluation Using Stable Isotopes and Principal Component Analysis

    OpenAIRE

    Nanni, Arthur Schmidt; Roisenberg, Ari; de Hollanda, Maria Helena Bezerra Maia; Marimon, Maria Paula Casagrande; Viero, Antonio Pedro; Scheibe, Luiz Fernando

    2013-01-01

    Groundwater with anomalous fluoride content and water mixture patterns were studied in the fractured Serra Geral Aquifer System, a basaltic to rhyolitic geological unit, using a principal component analysis interpretation of groundwater chemical data from 309 deep wells distributed in the Rio Grande do Sul State, Southern Brazil. A four-component model that explains 81% of the total variance in the Principal Component Analysis is suggested. Six hydrochemical groups were identified. δ18O and δ...
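
    A minimal Python sketch of the kind of analysis described, on synthetic stand-in data (309 wells by 8 hydrochemical variables; the real variable list and values differ):

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(2)

      # Rows = wells, columns = hydrochemical variables (e.g. F-, Na+, Ca2+, ...).
      X = rng.lognormal(mean=0.0, sigma=0.5, size=(309, 8))

      pca = PCA(n_components=4)                   # four-component model, as above
      scores = pca.fit_transform(StandardScaler().fit_transform(X))

      print(pca.explained_variance_ratio_.sum())  # ~0.81 for the real data set
      print(pca.components_[0])                   # loading pattern of component 1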

  1. An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of bidimensional calculations, with (r,z) discrete ordinates transformed into source data, and tridimensional Monte Carlo calculations. An alternative technique for this procedure is presented. This alternative proved effective when applied to a sample problem. (F.E.) [pt
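
    Independently of the coupling scheme, sampling source positions uniformly within a cylindrical volume is a small, self-contained step. A Python sketch (the square root on the radial coordinate is what keeps the density uniform; sampling r directly would cluster points near the axis):

      import numpy as np

      def sample_cylinder(n, radius, height, seed=None):
          """Uniform source positions inside a cylinder of given radius and height."""
          rng = np.random.default_rng(seed)
          r = radius * np.sqrt(rng.uniform(size=n))
          theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
          z = rng.uniform(0.0, height, size=n)
          return np.column_stack((r * np.cos(theta), r * np.sin(theta), z))

      pts = sample_cylinder(100000, radius=10.0, height=50.0)
      print(pts.mean(axis=0))   # x and y means ~0; z mean ~height/2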

  2. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    Science.gov (United States)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.

  3. Experience in the application of the IAEA QA code and guides to the manufacture of nuclear reactor components

    International Nuclear Information System (INIS)

    Dutta, N.G.; Mankame, M.A.; Kulkarni, P.G.; Vijayaraghavan, R.; Balaramamoorthy, K.

    1985-01-01

    India has made considerable progress in the indigenous manufacture of 'quality' nuclear reactor components. All activities associated with the development of atomic energy, from the mining of strategic minerals to the design, construction, and operation of nuclear power plants, including supporting research and development efforts, are mainly carried out by the Department of Atomic Energy (DAE). Through the sustained efforts of DAE, the major industries, in both the public and private sectors, supplying nuclear components have now adopted the practice of systematic quality assurance (QA). Stringent QA steps are mandatory for achieving the desired quality in the manufactured nuclear components. Control blades for BWRs are now indigenously manufactured by the Atomic Fuels Division (AFD) of Bhabha Atomic Research Centre (BARC), a constituent unit of DAE. For the Project Dhruva, a 100 MW(th) nuclear reactor constructed at BARC, Trombay, Bombay, an independent cell was formed to carry out quality audits on the manufactured components. The components were designed, fabricated, inspected and tested to the desired quality level. The QA activities were enforced from the procurement of raw materials to the audit of the completed component, monitoring the manufacturer's continued compliance with the design. The major components of Dhruva, viz. calandria, end-shield, coolant channels, heat exchangers, etc., were covered under these quality audit activities. The paper highlights the QA programme implemented in the manufacture of control blades for BWRs, illustrated with a typical example, the end-shield for Dhruva. The authors consider that the recommendations and guidelines provided in the documents 50-SG-QA3, 50-SG-QA8, 50-SG-QA10, etc., were useful in providing a formal and systematic framework under which various quality assurance functions have been carried out

  4. Assessment of the neutron component in a neutron-gamma field of a californium-252 source

    International Nuclear Information System (INIS)

    Tetteh, G.K.

    1978-12-01

    Experiments have been performed to determine the percentages of the different components in the radiation field of californium-252, which now has some clinical applications. Using Rossi chambers in conjunction with absorption investigations involving lead and aluminium thimbles, it is observed that the dose rates due to the different components are: neutrons 54%; gammas 30%; betas 16%

  5. The RCC-MR design code for LMFBR components. A useful basis for fusion reactor design tools development

    International Nuclear Information System (INIS)

    Acker, D.; Chevereau, G.

    1985-11-01

    LMFBR and fusion reactors exhibit common features with regard to structural materials (stainless steels), temperature service level (550-600 °C), and loading types. Thus, the design and construction rules used in France for LMFBRs, that is to say the RCC-MR Code, can constitute a good basis for fusion reactor design. Some original aspects of the RCC-MR design rules are described, relating to insignificant creep, the ratchetting effect, fatigue and creep damage limits, creep damage evaluation, fatigue damage evaluation, and buckling. The main originality of the RCC-MR is to propose comprehensive simplified rules based on elastic calculations and extended from the classical cold temperature range to the elevated temperature domain

  6. The RCC-MR design code for LMFBR components. A useful basis for fusion reactor design tools development

    International Nuclear Information System (INIS)

    Acker, D.; Chevereau, G.

    1986-01-01

    LMFBR and fusion reactors exhibit common features with regard to structural materials, temperature service level, and loading types. Thus, the design and construction rules used in France for LMFBRs, that is to say the RCC-MR Code, can constitute a good basis for fusion reactor design. Some original aspects of the RCC-MR design rules are described, relating to insignificant creep, the ratchetting effect, fatigue and creep damage limits, creep damage evaluation, fatigue damage evaluation, and buckling. The main originality of the RCC-MR is to propose comprehensive simplified rules based on elastic calculations and extended from the classical cold temperature range to the elevated temperature domain. (author)

  7. Basic design of the HANARO cold neutron source using MCNP code

    International Nuclear Information System (INIS)

    Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil

    2005-01-01

    The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range, below about 5 meV, using liquid hydrogen at around 21.6 K as the moderator. The primary goal of the CNS design is to maximize the cold neutron flux at wavelengths of around 2-12 Å and to minimize the nuclear heat load. In this paper, the basic design of the HANARO CNS is described
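
    The energy-wavelength conversion behind these figures is the de Broglie relation for neutrons, lambda [Å] ≈ 9.045 / sqrt(E [meV]). A short Python check (the constant is standard; the cutoff and band are those quoted above):

      import math

      def wavelength_A(E_meV):
          return 9.045 / math.sqrt(E_meV)

      def energy_meV(lambda_A):
          return (9.045 / lambda_A) ** 2

      print(f"{wavelength_A(5.0):.2f} A")                            # ~4.0 A at the 5 meV cutoff
      print(f"{energy_meV(2.0):.1f} to {energy_meV(12.0):.2f} meV")  # the 2-12 A band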

  8. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    Science.gov (United States)

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is known to be an illness which brings marked disability among elderly individuals. At times, patients living with dementia also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option prior to the consideration of adjuvant pharmacological options. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the utilization of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date, there has been a paucity of M-health innovations in this area. In addition, most current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open source repositories in bioinformatics studies. The authors hope to explain how they tapped into and made use of an open source code repository in the development of a personalized M-health reminiscence therapy innovation for patients living with dementia. The availability of open source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive features or features that can be personalized. Such repositories enable the rapid and cost-effective development of applications. Moreover, developers are also able to innovate further, as less time is spent in the iterative process.

  9. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  10. Validation of activity determination codes and nuclide vectors by using results from processing of retired components and operational waste

    International Nuclear Information System (INIS)

    Lundgren, Klas; Larsson, Arne

    2012-01-01

    Decommissioning studies for nuclear power reactors are performed in order to assess decommissioning costs and waste volumes, as well as to provide data for the licensing and construction of the LILW repositories. An important part of this work is to estimate the amount of radioactivity in the different types of decommissioning waste. Studsvik ALARA Engineering has performed such assessments for LWRs and other nuclear facilities in Sweden. These assessments depend to a large extent on calculations, senior experience and sampling at the facilities. The precision of the calculations has been found to be relatively high close to the reactor core; for natural reasons, the precision declines with distance. Even where the activity values are lower, the content of hard-to-measure nuclides can cause problems in the long-term safety demonstration of LLW repositories. At the same time, Studsvik is processing significant volumes of metallic and combustible waste from power stations in operation and in the decommissioning phase, as well as from other nuclear facilities such as research and waste treatment facilities. By combining the unique knowledge in radioactivity inventory assessment with the large data bank that the waste processing represents, the activity determination codes can be validated and the waste processing analysis supported with additional data. The intention of this presentation is to highlight how the European nuclear industry could jointly use the waste processing data for validation of activity determination codes. (authors)

  11. A principal-component and least-squares method for allocating polycyclic aromatic hydrocarbons in sediment to multiple sources

    International Nuclear Information System (INIS)

    Burns, W.A.; Mankiewicz, P.J.; Bence, A.E.; Page, D.S.; Parker, K.R.

    1997-01-01

    A method was developed to allocate polycyclic aromatic hydrocarbons (PAHs) in sediment samples to the PAH sources from which they came. The method uses principal-component analysis to identify possible sources and a least-squares model to find the source mix that gives the best fit of 36 PAH analytes in each sample. The method identified 18 possible PAH sources in a large set of field data collected in Prince William Sound, Alaska, USA, after the 1989 Exxon Valdez oil spill, including diesel oil, diesel soot, spilled crude oil in various weathering states, natural background, creosote, and combustion products from human activities and forest fires. Spill oil was generally found to be a small increment of the natural background in subtidal sediments, whereas combustion products were often the predominant sources for subtidal PAHs near sites of past or present human activity. The method appears to be applicable to other situations, including other spills
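
    A minimal Python sketch of the allocation step, fitting a sample's 36-analyte profile as a non-negative mixture of candidate source fingerprints (all profiles below are synthetic placeholders, not the Prince William Sound data):

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(3)

      # Columns of A: normalized PAH fingerprints of candidate sources;
      # rows: the 36 analytes.
      n_analytes, n_sources = 36, 5
      A = rng.uniform(size=(n_analytes, n_sources))
      A /= A.sum(axis=0)                   # each source profile sums to 1

      true_mix = np.array([0.6, 0.3, 0.1, 0.0, 0.0])
      sample = A @ true_mix + rng.normal(0.0, 0.005, n_analytes)

      # Non-negative least squares keeps every source contribution physical.
      mix, residual = nnls(A, sample)
      print(np.round(mix / mix.sum(), 3))  # recovered source fractions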

  12. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution method applied in different areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should have the same size and histological features as the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open-source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries and pericardium. The proposed method makes it feasible to obtain refined meshes in an acceptable time and with the quality required for simulations using the FEM

  13. 77 FR 7 - Revisions to Labeling Requirements for Blood and Blood Components, Including Source Plasma

    Science.gov (United States)

    2012-01-03

    ... uniform container label for blood and blood components and recommended labels that incorporated barcode... Protein Fraction (part 640, subpart I), and Immune Globulin (part 640, subpart J)). The comment noted that...

  14. Total annoyance from an industrial noise source with a main spectral component combined with a background noise.

    Science.gov (United States)

    Alayrac, M; Marquis-Favre, C; Viollon, S

    2011-07-01

    When living close to an industrial plant, people are exposed to a combination of industrial noise sources and a background noise composed of all the other noise sources in the environment. As a first step, noise annoyance indicators in laboratory conditions are proposed for a single exposure to an industrial noise source. The second step detailed in this paper involves determining total annoyance indicators in laboratory conditions for ambient noises composed of an industrial noise source and a background noise. Two types of steady and permanent industrial noise sources are studied: low frequency noises with a main spectral component at 100 Hz, and noises with a main spectral component in middle frequencies. Five background noises are assessed so as to take into account different sound environments which can usually be heard by people living around an industrial plant. One main conclusion of this study is that two different analyses are necessary to determine total annoyance indicators for this type of ambient noise, depending on the industrial noise source composing it. Therefore, two total annoyance indicators adapted to the ambient noises studied are proposed. © 2011 Acoustical Society of America

  15. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partne...

  16. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  17. Heat source component development program. Report for period March 1978--June 1978

    International Nuclear Information System (INIS)

    1978-07-01

    The General Purpose Heat Source (GPHS) is a radioisotope heat source being developed by LASL. The first intended application for the GPHS is the Solar Polar mission scheduled for 1983. Battelle's support of LASL during the current reporting period is reported. The specific efforts include: (1) analysis of trial designs with emphasis on comparison of performances of trial designs 1 and 2 and their modifications; and (2) helium vent development with emphasis on fabrication and qualification testing of platinum and iridium nonselective vents

  18. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time

  19. Development and verification test of integral reactor major components - Development of MCP impeller design, performance prediction code and experimental verification

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Myung Kyoon; Oh, Woo Hyoung; Song, Jae Wook [Korea Advanced Institute of Science and Technology, Taejon (Korea)

    1999-03-01

    The present study is aimed at developing a computational code for the design and performance prediction of an axial-flow pump. The proposed performance prediction method, a streamline curvature method, is tested against a model axial-flow pump. The preliminary design is made by using the ideal velocity triangles at inlet and exit, and the three-dimensional blade shape is calculated by employing the free vortex design method. The detailed blading design is then carried out using an experimental database of double circular arc cambered hydrofoils. To computationally determine the design incidence, deviation, blade camber, solidity and stagger angle, a number of correlation equations are developed from the experimental database and a theoretical formula for the lift coefficient is adopted. A total of 8 equations are solved iteratively using an under-relaxation factor. An experimental measurement is conducted under a non-cavitating condition to obtain the off-design performance curve, and a cavitation test is also carried out by reducing the suction pressure. The experimental results compare very satisfactorily with the predictions of the streamline curvature method. 28 refs., 26 figs., 11 tabs. (Author)
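
    The free vortex design mentioned above fixes the swirl distribution c_u * r = const, from which the blade angles follow via the velocity triangles. A Python sketch with illustrative numbers (not the pump of the study):

      import math

      omega = 2.0 * math.pi * 1450.0 / 60.0   # shaft speed, rad/s (1450 rpm)
      c_m = 4.0                               # axial (meridional) velocity, m/s
      K = 6.0 * 0.10                          # free-vortex constant c_u * r at exit

      for r in (0.07, 0.10, 0.13):            # hub, mid and tip radii, m
          U = omega * r                       # blade speed
          c_u = K / r                         # exit swirl from c_u * r = const
          beta1 = math.degrees(math.atan2(c_m, U))        # inlet flow angle, no pre-swirl
          beta2 = math.degrees(math.atan2(c_m, U - c_u))  # exit relative flow angle
          print(f"r = {r:.2f} m: U = {U:5.1f} m/s, "
                f"beta1 = {beta1:4.1f} deg, beta2 = {beta2:4.1f} deg")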

  20. Identifying effective components of alcohol abuse prevention programs: effects of fear appeals, message style, and source expertise.

    Science.gov (United States)

    Stainback, R D; Rogers, R W

    1983-04-01

    Despite the importance of alcohol abuse prevention programs, the effectiveness of many components of these programs has not been demonstrated empirically. An experiment tested the efficacy of three components of many prevention programs: fear appeals, one- versus two-sided message style, and the expertise of the source. The persuasive impact of this information was examined on 113 ninth-grade students' intentions to abstain from drinking alcohol while they are teenagers. The results reveal that fear appeals are successful in strengthening students' intentions to refrain from drinking. Implications are discussed for implementing these principles and for designing future investigations of alcohol abuse prevention programs.

  1. A new six-component super soliton hierarchy and its self-consistent sources and conservation laws

    Science.gov (United States)

    Han-yu, Wei; Tie-cheng, Xia

    2016-01-01

    A new six-component super soliton hierarchy is obtained based on matrix Lie super algebras. Super trace identity is used to furnish the super Hamiltonian structures for the resulting nonlinear super integrable hierarchy. After that, the self-consistent sources of the new six-component super soliton hierarchy are presented. Furthermore, we establish the infinitely many conservation laws for the integrable super soliton hierarchy. Project supported by the National Natural Science Foundation of China (Grant Nos. 11547175, 11271008 and 61072147), the First-class Discipline of University in Shanghai, China, and the Science and Technology Department of Henan Province, China (Grant No. 152300410230).

  2. Characterization of a Gene Coding for the Complement System Component FB from Loxosceles laeta Spider Venom Glands.

    Science.gov (United States)

    Myamoto, Daniela Tiemi; Pidde-Queiroz, Giselle; Gonçalves-de-Andrade, Rute Maria; Pedroso, Aurélio; van den Berg, Carmen W; Tambourgi, Denise V

    2016-01-01

    The human complement system is composed of more than 30 proteins, and many of these have conserved domains that allow tracing the phylogenetic evolution. The complement system seems to have been initiated with the appearance of C3 and factor B (FB), the only components found in some protostomes and cnidarians, suggesting that the alternative pathway is the most ancient. Here, we present the characterization of an arachnid homologue of the human complement component FB from the spider Loxosceles laeta. This homologue, named Lox-FB, was identified from a total RNA L. laeta spider venom gland library and was amplified using RACE-PCR techniques and specific primers. Analysis of the deduced amino acid sequence and the domain structure showed significant similarity to the vertebrate and invertebrate FB/C2 family proteins. Lox-FB has a classical domain organization composed of a complement control protein domain (CCP), a von Willebrand Factor domain (vWFA), and a serine protease domain (SP). The amino acids involved in the Mg2+ metal ion-dependent adhesion site (MIDAS) found in the vWFA domain of the vertebrate C2/FB proteins are well conserved; however, the classic catalytic triad present in the serine protease domain is not conserved in Lox-FB. Similarity and phylogenetic analyses indicated that Lox-FB shares a major identity (43%) and has a close evolutionary relationship with the third isoform of the FB-like protein (FB-3) from the jumping spider Hasarius adansoni, belonging to the family Salticidae.

  3. Characterization of a Gene Coding for the Complement System Component FB from Loxosceles laeta Spider Venom Glands.

    Directory of Open Access Journals (Sweden)

    Daniela Tiemi Myamoto

    Full Text Available The human complement system is composed of more than 30 proteins, and many of these have conserved domains that allow tracing the phylogenetic evolution. The complement system seems to have been initiated with the appearance of C3 and factor B (FB), the only components found in some protostomes and cnidarians, suggesting that the alternative pathway is the most ancient. Here, we present the characterization of an arachnid homologue of the human complement component FB from the spider Loxosceles laeta. This homologue, named Lox-FB, was identified from a total RNA L. laeta spider venom gland library and was amplified using RACE-PCR techniques and specific primers. Analysis of the deduced amino acid sequence and the domain structure showed significant similarity to the vertebrate and invertebrate FB/C2 family proteins. Lox-FB has a classical domain organization composed of a complement control protein domain (CCP), a von Willebrand Factor domain (vWFA), and a serine protease domain (SP). The amino acids involved in the Mg2+ metal ion-dependent adhesion site (MIDAS) found in the vWFA domain of the vertebrate C2/FB proteins are well conserved; however, the classic catalytic triad present in the serine protease domain is not conserved in Lox-FB. Similarity and phylogenetic analyses indicated that Lox-FB shares a major identity (43%) and has a close evolutionary relationship with the third isoform of the FB-like protein (FB-3) from the jumping spider Hasarius adansoni, belonging to the family Salticidae.

  4. Advanced sources and optical components for the McStas neutron scattering instrument simulation package

    DEFF Research Database (Denmark)

    Farhi, E.; Monzat, C.; Arnerin, R.

    2014-01-01

    -up, including lenses and prisms. A new library for McStas adds the ability to describe any geometrical arrangement as a set of polygons. This feature has been implemented in most sample scattering components such as Single_crystal, Incoherent, Isotropic_Sqw (liquids/amorphous/powder), PowderN as well...

  5. Castellated tiles as the beam-facing components for the diagnostic calorimeter of the negative ion source SPIDER

    Energy Technology Data Exchange (ETDEWEB)

    Peruzzo, S., E-mail: simone.peruzzo@igi.cnr.it; Cervaro, V.; Dalla Palma, M.; Delogu, R.; Fasolo, D.; Franchin, L.; Pasqualotto, R.; Rizzolo, A.; Tollin, M.; Serianni, G. [Consorzio RFX, Corso Stati Uniti 4, 35127 Padova (Italy); De Muri, M. [Consorzio RFX, Corso Stati Uniti 4, 35127 Padova (Italy); INFN-LNL, v.le dell’Università 2, I-35020 Legnaro, PD (Italy); Pimazzoni, A. [Consorzio RFX, Corso Stati Uniti 4, 35127 Padova (Italy); Università degli Studi di Padova, Via 8 Febbraio 2, I-35122 Padova (Italy); Zampieri, L. [Università degli Studi di Padova, Via 8 Febbraio 2, I-35122 Padova (Italy)

    2016-02-15

This paper presents the results of numerical simulations and experimental tests carried out to assess the feasibility and suitability of graphite castellated tiles as beam-facing components in the diagnostic calorimeter of the negative ion source SPIDER (Source for Production of Ions of Deuterium Extracted from Radio frequency plasma). The results indicate that this concept could be a reliable, although lower-performing, alternative to the present design based on carbon fiber composite tiles, as it provides thermal measurements on the required spatial scale.

  6. MOCARS: a Monte Carlo code for determining distribution and simulation limits and ranking system components by importance

    International Nuclear Information System (INIS)

    Matthews, S.D.; Poloski, J.P.

    1978-08-01

MOCARS is a computer program designed for use on the Idaho National Engineering Laboratory (INEL) CDC CYBER 76-173 computer system that uses Monte Carlo techniques to determine the distribution and simulation limits for a function. In use, the MOCARS program randomly samples data from any of 12 different user-specified probability distributions and, using the sample data, evaluates either a user-specified function or a cut-set system unavailability. After data ordering, the values at various quantiles and the associated confidence bounds are calculated for output. If the cut-set unavailability function is evaluated, MOCARS can determine the importance ranking for components in the unavailability calculation. Frequency and cumulative distribution histograms from the sample data are also available for output on microfilm. 39 figures, 4 tables
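
    The sample-evaluate-rank loop that MOCARS implements is easy to illustrate. The following Python sketch is not MOCARS itself: the two-cut-set system, the lognormal unavailability distributions, and the correlation-based importance measure are all illustrative assumptions standing in for the code's actual inputs and ranking method.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 100_000  # Monte Carlo trials

    # Hypothetical component unavailabilities sampled from lognormal
    # distributions (one of several distribution choices such codes offer).
    q_pump  = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=N)
    q_valve = rng.lognormal(mean=np.log(5e-4), sigma=0.7, size=N)
    q_power = rng.lognormal(mean=np.log(1e-4), sigma=0.3, size=N)

    # Toy system unavailability from two minimal cut sets, {pump, valve}
    # and {power}, using the rare-event approximation (sum of cut sets).
    Q_sys = q_pump * q_valve + q_power

    # Distribution and simulation limits: simple percentile estimates.
    for p in (0.05, 0.50, 0.95):
        print(f"{p:>5.0%} quantile: {np.quantile(Q_sys, p):.3e}")

    # Importance ranking via a correlation-based sensitivity measure:
    # components whose sampled unavailability correlates most strongly
    # with the system unavailability matter most.
    for name, q in [("pump", q_pump), ("valve", q_valve), ("power", q_power)]:
        r = np.corrcoef(q, Q_sys)[0, 1]
        print(f"importance({name}) ~ {r:.3f}")
    ```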

  7. A three-field model of transient 3D multiphase, three-component flow for the computer code IV A3. Pt. 2

    International Nuclear Information System (INIS)

    Kolev, N.I.

    1991-12-01

The second part of the IVA3 code description contains the constitutive models used for the interfacial transport phenomena and the code validation results. First, 20 flow patterns are defined and the transition criteria are discussed. The dynamic fragmentation and coalescence models used in IVA3 are documented. After the description of the models for predicting the flow patterns and flow structure sizes, the models for the interfacial mechanical interaction are described. Finally, the models for interfacial heat and mass transfer are given, with emphasis on the time averaging of the heat and mass source terms. The code validation passes through several stages, from simple tests on well-known benchmarks through simulation of one-, two-, and three-phase flows in simple and complicated geometries. The gradual increase in complexity and the successful comparison of the predictions with experimental data are the main characteristics of the verification procedure. It is demonstrated by several examples that IVA3 is a powerful tool for three-fluid modelling of complicated three-phase flows in complex geometry with strong thermal and mechanical interaction between the velocity fields. (orig.) [de]

  8. Synoptic sampling and principal components analysis to identify sources of water and metals to an acid mine drainage stream

    OpenAIRE

    Byrne, Patrick; Runkel, Robert L.; Walton-Day, Katherine

    2017-01-01

Combining the synoptic mass balance approach with principal components analysis (PCA) can be an effective method for discretising the chemistry of inflows and source areas in watersheds where contamination is diffuse in nature and/or complicated by groundwater interactions. This paper presents a field-scale study in which synoptic sampling and PCA are employed in a mineralized watershed (Lion Creek, Colorado, USA) under low flow conditions to (i) quantify the impacts of mining activity on stream water quality...

  9. Determination of neutron radiation source on components in the decy 13 cyclotron tank

    International Nuclear Information System (INIS)

    Sunardi; Silakhuddin

    2015-01-01

In order to design the shielding of the Decy 13 cyclotron system, a study to identify the potential neutron radiation sources at the cyclotron components in the vacuum tank has been carried out. The method used is to identify the kinds of component materials, to analyze the significant neutron-producing nuclear reactions, and to determine the radial distribution of the formation probability of each reaction. The results of the identification show that the neutron-producing nuclear reactions are Cu-65(p,n)Zn-65, Cu-63(p,n)Zn-63 and Fe-56(p,n)Co-56. The peaks of the distribution curves of the formation probability of these reactions are located in the area between 37 cm and 39 cm. (author)

  10. Compilation of references, data sources and analysis methods for LMFBR primary piping system components

    International Nuclear Information System (INIS)

    Reich, M.; Esztergar, E.P.; Ellison, E.G.; Erdogan, F.; Gray, T.G.F.; Wells, C.W.

    1977-03-01

A survey and review program for the application of fracture mechanics methods in elevated temperature design and safety analysis was initiated in December 1976. This is the first of a series of reports, the aim of which is to provide a critical review of the theories of fracture and the application of fracture mechanics methods to life prediction, reliability, and safety analysis of piping components in nuclear plants undergoing sub-creep and elevated temperature service conditions.

  11. Micro-scale anaerobic digestion of point source components of organic fraction of municipal solid waste

    International Nuclear Information System (INIS)

    Chanakya, H.N.; Sharma, Isha; Ramachandra, T.V.

    2009-01-01

The fermentation characteristics of six specific types of the organic fraction of municipal solid waste (OFMSW) were examined, with an emphasis on properties that are needed when designing plug-flow type anaerobic bioreactors. More specifically, the decomposition patterns of a vegetable (cabbage), fruits (banana and citrus peels), fresh leaf litter of bamboo and teak leaves, and paper (newsprint) waste streams as feedstocks were studied. Individual OFMSW components were placed into nylon mesh bags and subjected to various fermentation periods (solids retention time, SRT) within the inlet of a functioning plug-flow biogas fermentor. These were removed at periodic intervals, and their composition was analyzed to monitor decomposition rates and changes in chemical composition. Components like cabbage waste, banana peels, and orange peels fermented rapidly both in a plug-flow biogas reactor (PFBR) and under a biological methane potential (BMP) assay, while other OFMSW components (leaf litter from bamboo and teak leaves and newsprint) fermented slowly, with poor process stability and moderate biodegradation. For fruit and vegetable wastes (FVW), a rapid and efficient removal of pectins is the main cause of the rapid disintegration of these feedstocks, which left behind very little compost-forming residue (2-5%). Teak and bamboo leaves and newsprint decomposed only to 25-50% in 30 d. These results confirm the potential for volatile fatty acid accumulation in a PFBR's inlet and suggest a modification of the inlet zone or of the operation of a PFBR with the above feedstocks.

  12. Assessment of gamma irradiation heating and damage in miniature neutron source reactor vessel using computational methods and SRIM - TRIM code

    International Nuclear Information System (INIS)

    Appiah-Ofori, F. F.

    2014-07-01

The effects of gamma radiation heating and irradiation damage in the reactor vessel of Ghana Research Reactor 1 (GHARR-1), a Miniature Neutron Source Reactor (MNSR), were assessed using an implicit control-volume finite-difference numerical computation and validated by the SRIM-TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat that is absorbed completely by the interior surface of the MNSR vessel, affecting its performance through induced displacement damage. This displacement damage results from the creation of lattice defects, which impair the vessel through the formation of point-defect clusters such as vacancies and interstitials; these can develop into dislocation loops and networks, voids and bubbles, causing changes through the thickness of the vessel. The microscopic defects produced in the vessel by gamma-radiation damage are referred to as radiation damage, while the resulting modifications of the macroscopic properties of the vessel are known as radiation effects. These radiation damage effects are of major concern for materials used in nuclear energy production. In this study, the overall objective was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel with a well-developed mathematical model and analytical and numerical solutions simulating the radiation damage in the vessel. The SRIM-TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control-volume finite-difference method was used to determine the temperature profile within the vessel due to gamma-radiation heating. The methodology adopted in assessing gamma-radiation heating in the vessel involved the development of the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, solved both analytically and by the implicit control-volume finite-difference method, to determine the maximum temperature and

  13. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳ 0.01 cm⁻¹ resolution in the opacity function (≲ 10³ points per wavenumber bin) may result in errors of ≳ 1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates well the exact radiative transfer solution in the limit of pure absorption. We construct “null-hypothesis” models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).
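
    The diffusivity-factor result quoted above can be checked in a few lines. The Python sketch below compares the approximation exp(-D·tau) against the exact flux transmission of isotropic radiation through a purely absorbing slab, 2·E3(tau); the optical-depth grid and the two D values (the common 1.66 and the value 2 tested by HELIOS) are illustrative choices, not taken from the paper.

    ```python
    import numpy as np
    from scipy.special import expn  # exponential integral E_n(x)

    tau = np.array([0.01, 0.1, 0.5, 1.0, 2.0])  # sample optical depths

    # Exact flux transmission of isotropic radiation through a purely
    # absorbing plane-parallel slab: T(tau) = 2 * E3(tau).
    T_exact = 2.0 * expn(3, tau)

    # Diffusivity-factor approximation: T ~ exp(-D * tau).
    for D in (1.66, 2.0):
        T_approx = np.exp(-D * tau)
        err = 100 * (T_approx - T_exact) / T_exact
        print(f"D = {D}: relative error (%) =", np.round(err, 1))
    ```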

  14. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    International Nuclear Information System (INIS)

    O'Connor, Evan; Ott, Christian D

    2010-01-01

We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M☉ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  15. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    Energy Technology Data Exchange (ETDEWEB)

O'Connor, Evan; Ott, Christian D, E-mail: evanoc@tapir.caltech.ed, E-mail: cott@tapir.caltech.ed [TAPIR, Mail Code 350-17, California Institute of Technology, Pasadena, CA 91125 (United States)

    2010-06-07

We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M☉ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  16. Primordial condensation of meteorite components - experimental evidence of the state of the source medium

    International Nuclear Information System (INIS)

    Arrhenius, G.; McCrumb, J.L.; Friedman, N.

    1979-01-01

Mineral grains and grain aggregates in meteorites carry potential information on the conditions in the environment where they formed. To avoid model-dependent interpretations it is necessary to develop experimental criteria that uniquely reflect the environmental parameters of interest. These parameters include the various temperatures of the source medium and the temperature of the grains during growth, all of which are observed to be highly differentiated in the space medium in accordance with the radiation laws. (orig./WL)

  17. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    Science.gov (United States)

    Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.

    2011-12-01

A major goal of event-related potential studies is to use source localization techniques to identify the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted to estimate the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on the N170 source localization. Results show that the often used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.

  18. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    International Nuclear Information System (INIS)

    Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H; Petroni, A; Urquina, H; Manes, F; Ibáñez, A

    2011-01-01

A major goal of event-related potential studies is to use source localization techniques to identify the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted to estimate the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on the N170 source localization. Results show that the often used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.

  19. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    Energy Technology Data Exchange (ETDEWEB)

    Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H [Laboratory of Industrial Electronics, Control and Instrumentation (LEICI), National University of La Plata (Argentina); Petroni, A [Integrative Neuroscience Laboratory, Physics Department, University of Buenos Aires, Buenos Aires (Argentina); Urquina, H; Manes, F; Ibanez, A [Institute of Cognitive Neurology (INECO) and Institute of Neuroscience, Favaloro University, Buenos Aires (Argentina)

    2011-12-23

A major goal of event-related potential studies is to use source localization techniques to identify the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted to estimate the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on the N170 source localization. Results show that the often used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, a plausible marker of the disease.

  20. Documentation of Source Code.

    Science.gov (United States)

    1988-05-12

    the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen

  1. Detection and Characterization of Ground Displacement Sources from Variational Bayesian Independent Component Analysis of GPS Time Series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2014-12-01

A critical point in the analysis of ground displacement time series is the development of data-driven methods that allow one to discern and characterize the different sources that generate the observed displacements. A widely used multivariate statistical technique is Principal Component Analysis (PCA), which reduces the dimensionality of the data space while keeping most of the variance of the dataset explained. It reproduces the original data using a limited number of Principal Components, but it also shows some deficiencies. Indeed, PCA does not perform well in finding the solution to the so-called Blind Source Separation (BSS) problem, i.e. in recovering and separating the original sources that generated the observed data. This is mainly due to the assumptions on which PCA relies: it looks for a new Euclidean space where the projected data are uncorrelated. Usually, the uncorrelatedness condition is not strong enough, and it has been proven that the BSS problem can be tackled by requiring the components to be independent. Independent Component Analysis (ICA) is, in fact, another popular technique adopted to approach this problem, and it can be used in all those fields where PCA is also applied. An ICA approach enables us to explain the time series while imposing fewer constraints on the model, and to reveal anomalies in the data such as transient signals. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we use a variational Bayesian ICA (vbICA) method, which models the probability density function (pdf) of each source signal using a mix of Gaussian distributions. This technique allows for more flexibility in the description of the pdf of the sources, giving a more reliable estimate of them. Here we present the application of the vbICA technique to GPS position time series. First, we use vbICA on synthetic data that simulate a seismic cycle
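
    The PCA-versus-ICA distinction drawn in this abstract can be demonstrated on synthetic data. The Python sketch below uses scikit-learn's FastICA as a stand-in for the variational Bayesian ICA described here (vbICA is not available in scikit-learn), and the two "seismic cycle" source signals, the six-station mixing geometry, and the noise level are all invented for illustration.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 8, 2000)

    # Two hypothetical displacement sources: a seasonal sinusoid and a
    # transient (tanh-shaped slow-slip-like event).
    s1 = np.sin(2 * np.pi * t)
    s2 = np.tanh(4 * (t - 4.0))
    S = np.c_[s1, s2]

    # Mix the sources onto 6 synthetic "GPS stations" and add noise.
    A = rng.normal(size=(2, 6))
    X = S @ A + 0.05 * rng.normal(size=(len(t), 6))

    # PCA only decorrelates; ICA pursues statistical independence and
    # typically recovers the transient much more cleanly.
    pcs = PCA(n_components=2).fit_transform(X)
    ics = FastICA(n_components=2, random_state=0).fit_transform(X)

    for name, Z in (("PCA", pcs), ("ICA", ics)):
        # Best absolute correlation of any recovered component with s2.
        c = max(abs(np.corrcoef(Z[:, k], s2)[0, 1]) for k in range(2))
        print(f"{name}: best |corr| with transient source = {c:.3f}")
    ```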

  2. Three-Level AC-DC-AC Z-Source Converter Using Reduced Passive Component Count

    DEFF Research Database (Denmark)

    Loh, Poh Chiang; Gao, Feng; Tan, Pee-Chin

    2009-01-01

This paper presents a three-level ac-dc-ac Z-source converter with output voltage buck-boost capability. The converter is implemented by connecting a low-cost front-end diode rectifier to a neutral-point-clamped inverter through a single X-shaped LC impedance network. The inverter is controlled to switch with a three-level output voltage, where the middle neutral potential is uniquely tapped from the star-point of a wye-connected capacitive filter placed before the front-end diode rectifier for input current filtering. Through careful control, the resulting converter can produce the correct volt...

  3. A method to optimize the shield compact and lightweight combining the structure with components together by genetic algorithm and MCNP code.

    Science.gov (United States)

    Cai, Yao; Hu, Huasi; Pan, Ziheng; Hu, Guang; Zhang, Tao

    2018-05-17

To make the shield for neutrons and gamma rays compact and lightweight, an optimization method treating the shield structure and component materials together was established, employing genetic algorithms and the MCNP code. As a typical case, the fission energy spectrum of 235U, which mixes neutrons and gamma rays, was adopted in this study. Six types of materials were presented and optimized by the method. Spherical geometry was adopted in the optimization after checking the geometry effect. Simulations were made to verify the reliability of the optimization method and the efficiency of the optimized materials. To compare the materials visually and conveniently, the volume and weight needed to build a shield are employed. The results showed that the composite multilayer material has the best performance. Copyright © 2018 Elsevier Ltd. All rights reserved.
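
    To see how a genetic algorithm couples to a transport calculation in this kind of loop, consider the minimal Python sketch below. It replaces MCNP with a crude exponential-attenuation surrogate, and the material coefficients, the fitness (areal weight plus a penalty for missing an attenuation target), and the GA operators are all invented for illustration; none of this is the authors' actual setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical layers: (name, density g/cm^3, neutron & gamma
    # attenuation coefficients in 1/cm) -- surrogate numbers standing
    # in for an MCNP transport calculation.
    mats = [("poly", 0.95, 0.60, 0.05),
            ("lead", 11.3, 0.10, 0.70),
            ("steel", 7.9, 0.25, 0.45)]

    TARGET = 1e-4  # required transmission for both radiations

    def fitness(x):
        """x = layer thicknesses (cm); minimize areal weight, penalize
        any failure to reach the required attenuation."""
        t_n = np.exp(-sum(m[2] * xi for m, xi in zip(mats, x)))
        t_g = np.exp(-sum(m[3] * xi for m, xi in zip(mats, x)))
        weight = sum(m[1] * xi for m, xi in zip(mats, x))  # g/cm^2
        penalty = 1e4 * (max(0, t_n - TARGET) + max(0, t_g - TARGET))
        return weight + penalty

    # Plain generational GA: elitism, blend crossover, Gaussian mutation.
    pop = rng.uniform(0, 40, size=(60, len(mats)))
    for gen in range(200):
        f = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(f)[:10]]
        children = []
        while len(children) < len(pop) - len(elite):
            p1, p2 = elite[rng.integers(10)], elite[rng.integers(10)]
            w = rng.uniform(size=len(mats))
            child = w * p1 + (1 - w) * p2 + rng.normal(0, 0.5, len(mats))
            children.append(np.clip(child, 0, 60))
        pop = np.vstack([elite, children])

    best = min(pop, key=fitness)
    print("thicknesses (cm):", np.round(best, 2),
          " weight ~", round(sum(m[1] * x for m, x in zip(mats, best)), 1),
          "g/cm^2")
    ```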

  4. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal sources of photons with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between the AFEs for objects described by mesh models and for objects described using the solid volumes of GEANT4. Provided that the shape and the volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required for the dosimetric calculations, especially for energies below 100 keV

  5. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
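
    The simplest concrete instance of the scalar coding described here is the textbook butterfly network over GF(2), sketched below in Python; vector network coding generalizes the single XOR'd bit to length-L vectors combined through L x L matrices. This is a standard illustration of network coding, not the algorithm of this record.

    ```python
    # Classic butterfly network: source S multicasts bits a and b to sinks
    # T1 and T2 over unit-capacity edges; the bottleneck node forwards
    # a XOR b instead of choosing one bit, so both sinks can decode.
    a, b = 1, 0  # the two source bits

    middle = a ^ b  # coded bit sent through the shared bottleneck edge

    # T1 receives a directly plus the coded bit; T2 receives b plus the
    # coded bit. Each recovers the missing bit by XORing again.
    t1 = (a, middle)
    t2 = (b, middle)

    assert (t1[0], t1[0] ^ t1[1]) == (a, b)  # T1 recovers both bits
    assert (t2[1] ^ t2[0], t2[0]) == (a, b)  # T2 recovers both bits
    print("both sinks decode (a, b) =", (a, b))
    ```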

  6. Low Frequency Turbulence as the Source of High Frequency Waves in Multi-Component Space Plasmas

    Science.gov (United States)

    Khazanov, George V.; Krivorutsky, Emmanuel N.; Uritsky, Vadim M.

    2011-01-01

Space plasmas support a wide variety of waves, and wave-particle interactions as well as wave-wave interactions are of crucial importance to magnetospheric and ionospheric plasma behavior. High frequency wave turbulence generation by low frequency (LF) turbulence is restricted by two interconnected requirements: the turbulence should be strong enough and/or the coherent wave trains should have the appropriate length. These requirements are strongly relaxed in multi-component plasmas, owing to the large drift velocity of the heavy ions in the field of the LF wave. The excitation of lower hybrid waves (LHWs), in particular, is a widely discussed mechanism of interaction between plasma species in space and is one of the unresolved questions of magnetospheric multi-ion plasmas. It is demonstrated that large-amplitude Alfven waves, in particular those associated with LF turbulence, may generate LHWs in the auroral zone and ring current region, and in some cases (particularly in the inner magnetosphere) this serves as the Alfven wave saturation mechanism. We also argue that the described scenario can play a vital role in various parts of the outer magnetosphere featuring strong LF turbulence accompanied by LHW activity. Using data from the THEMIS spacecraft, we validate the conditions for such cross-scale coupling in the near-Earth "flow-braking" magnetotail region during the passage of sharp injection/dipolarization fronts, as well as in the turbulent outflow region of the midtail reconnection site.

  7. Genesis Solar Wind Science Canister Components Curated as Potential Solar Wind Collectors and Reference Contamination Sources

    Science.gov (United States)

    Allton, J. H.; Gonzalez, C. P.; Allums, K. K.

    2016-01-01

The Genesis mission collected solar wind for 27 months at the Earth-Sun L1 point on both passive and active collectors carried inside a Science Canister, which was cleaned and assembled in an ISO Class 4 cleanroom prior to launch. The primary passive collectors, 271 individual hexagons and 30 half-hexagons of semiconductor materials, are described in. Since the hard landing reduced the 301 passive collectors to many thousands of smaller fragments, characterization and posting in the online catalog remains a work in progress, with about 19% of the total area characterized to date. Other passive collectors, surfaces of opportunity, have been added to the online catalog. For species needing to be concentrated for precise measurement (e.g. oxygen and nitrogen isotopes), an energy-independent parabolic ion mirror focused ions onto a 6.2 cm diameter target. The target materials, as recovered after landing, are described in. The online catalog of these solar wind collectors, a work in progress, can be found at: http://curator.jsc.nasa.gov/gencatalog/index.cfm This paper describes the next step, the cataloging of pieces of the Science Canister, which were either surfaces exposed to the solar wind or component materials adjacent to solar wind collectors that may have contributed contamination.

  8. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    Science.gov (United States)

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  9. The puzzle of the 1996 Bárdarbunga, Iceland, earthquake: no volumetric component in the source mechanism

    Science.gov (United States)

    Tkalcic, Hrvoje; Dreger, Douglas S.; Foulger, Gillian R.; Julian, Bruce R.

    2009-01-01

A volcanic earthquake with Mw 5.6 occurred beneath the Bárdarbunga caldera in Iceland on 29 September 1996. This earthquake is one of a decade-long sequence of events at Bárdarbunga with non-double-couple mechanisms in the Global Centroid Moment Tensor catalog. Fortunately, it was recorded well by the regional-scale Iceland Hotspot Project seismic experiment. We investigated the event with a complete moment tensor inversion method using regional long-period seismic waveforms and a composite structural model. The moment tensor inversion using data from stations of the Iceland Hotspot Project yields a non-double-couple solution with a 67% vertically oriented compensated linear vector dipole component, a 32% double-couple component, and a statistically insignificant (2%) volumetric (isotropic) contraction. This indicates the absence of a net volumetric component, which is puzzling in the case of a large volcanic earthquake that apparently is not explained by shear slip on a planar fault. A possible volcanic mechanism that can produce an earthquake without a volumetric component involves two offset sources with similar but opposite volume changes. We show that although such a model cannot be ruled out, the circumstances under which it could happen are rare.
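
    The ISO/DC/CLVD percentages quoted above come from a standard eigenvalue decomposition of the moment tensor, which is short enough to sketch. The Python below follows a common Jost & Herrmann-style convention; the input tensor is an invented example chosen to give a CLVD-dominated, volume-free mechanism, not the published Bárdarbunga solution.

    ```python
    import numpy as np

    def decompose(M):
        """Split a symmetric moment tensor into isotropic (ISO),
        double-couple (DC) and compensated linear vector dipole (CLVD)
        parts, reporting percentages (Jost & Herrmann-style convention)."""
        iso = np.trace(M) / 3.0
        dev = M - iso * np.eye(3)                 # deviatoric part
        ev = np.linalg.eigvalsh(dev)
        ev = ev[np.argsort(np.abs(ev))]           # sort by absolute size
        eps = -ev[0] / abs(ev[2]) if ev[2] != 0 else 0.0
        m0_dev = abs(ev[2])                       # deviatoric moment scale
        p_iso = 100 * abs(iso) / (abs(iso) + m0_dev)
        p_clvd = (100 - p_iso) * 2 * abs(eps)
        p_dc = (100 - p_iso) * (1 - 2 * abs(eps))
        return p_iso, p_dc, p_clvd

    # Hypothetical tensor: a vertical CLVD plus a small DC, zero trace
    # (illustrative numbers only).
    M = np.diag([-1.0, -1.0, 2.0]) + 0.3 * np.array([[0, 1, 0],
                                                     [1, 0, 0],
                                                     [0, 0, 0]])
    p_iso, p_dc, p_clvd = decompose(M)
    print(f"ISO {p_iso:.0f}%  DC {p_dc:.0f}%  CLVD {p_clvd:.0f}%")
    ```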

  10. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

The moderator system of CANDU, a prototype of PHWR (pressurized heavy-water reactor), has been modeled in multidimension for computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested on modeled hydrothermal systems of heavy-water reactors. The commercial codes COMSOL Multiphysics and ANSYS-CFX, together with OpenFOAM, an open-source code, are introduced for various simplified and practical problems. All the implemented computational codes are tested on a benchmark problem, the STERN laboratory experiment, with precise modeling of the tubes, and compared with each other, with the measured data, and with a porous model based on the experimental correlation of pressure drop. The effect of the turbulence model is also discussed for these low Reynolds number flows. As a result, the codes are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  11. Explosive bonding and its application in the Advanced Photon Source front-end and beamline components design

    International Nuclear Information System (INIS)

    Shu, D.; Li, Y.; Ryding, D.; Kuzay, T.M.

    1994-01-01

    Explosive bonding is a bonding method in which the controlled energy of a detonating explosive is used to create a metallurgical bonding between two or more similar or dissimilar materials. Since 1991, a number of explosive-bonding joints have been designed for high-thermal-load ultrahigh-vacuum (UHV) compatible components in the Advanced Photon Source. A series of standardized explosive bonded joint units has also been designed and tested, such as: oxygen-free copper (OFHC) to stainless-steel vacuum joints for slits and shutters, GlidCop to stainless-steel vacuum joints for fixed masks, and GlidCop to OFHC thermal and mechanical joints for shutter face-plates, etc. The design and test results for the explosive bonding units to be used in the Advanced Photon Source front ends and beamlines will be discussed in this paper

  12. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  13. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer as the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
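
    The velocity-stress finite-difference family of schemes that OpenSWPC implements can be illustrated in one dimension in a few lines. The Python sketch below is a generic 1-D SH staggered-grid update with a made-up grid, source wavelet, and homogeneous medium; it shows the class of method, not OpenSWPC's actual (Fortran, 3-D, viscoelastic) implementation.

    ```python
    import numpy as np

    # Minimal 1-D SH velocity-stress staggered-grid finite differences.
    nx, nt = 400, 800
    dx, dt = 100.0, 0.008                  # grid step (m), time step (s)
    rho = np.full(nx, 2500.0)              # density (kg/m^3)
    mu = np.full(nx, 2500.0 * 2000.0**2)   # rigidity for Vs = 2 km/s
    v = np.zeros(nx)                       # particle velocity
    s = np.zeros(nx)                       # shear stress

    src = nx // 2
    for it in range(nt):
        # Ricker-like source time function injected as a body force.
        t0 = (it * dt - 0.5) / 0.1
        v[src] += dt * (1 - 2 * t0**2) * np.exp(-t0**2)
        # Leapfrog: velocity from stress gradient, then stress from
        # velocity gradient (CFL = Vs*dt/dx = 0.16, stable).
        v[1:-1] += dt / rho[1:-1] * (s[1:-1] - s[:-2]) / dx
        s[:-1] += dt * mu[:-1] * (v[1:] - v[:-1]) / dx

    print("peak |v| away from source:", np.abs(v[: nx // 4]).max())
    ```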

  14. Synoptic sampling and principal components analysis to identify sources of water and metals to an acid mine drainage stream.

    Science.gov (United States)

    Byrne, Patrick; Runkel, Robert L; Walton-Day, Katherine

    2017-07-01

Combining the synoptic mass balance approach with principal components analysis (PCA) can be an effective method for discretising the chemistry of inflows and source areas in watersheds where contamination is diffuse in nature and/or complicated by groundwater interactions. This paper presents a field-scale study in which synoptic sampling and PCA are employed in a mineralized watershed (Lion Creek, Colorado, USA) under low flow conditions to (i) quantify the impacts of mining activity on stream water quality; (ii) quantify the spatial pattern of constituent loading; and (iii) identify inflow sources most responsible for observed changes in stream chemistry and constituent loading. Several of the constituents investigated (Al, Cd, Cu, Fe, Mn, Zn) fail to meet chronic aquatic life standards along most of the study reach. The spatial pattern of constituent loading suggests four primary sources of contamination under low flow conditions. Three of these sources are associated with acidic (pH < ...) inflows; analysis of inflow (metal and major ion) chemistry using PCA suggests a hydraulic connection between many of the left bank inflows and mine water in the Minnesota Mine shaft located to the north-east of the river channel. In addition, water chemistry data during a rainfall-runoff event suggest the spatial pattern of constituent loading may be modified during rainfall due to dissolution of efflorescent salts or erosion of streamside tailings. These data point to the complexity of contaminant mobilisation processes and constituent loading in mining-affected watersheds, but the combined synoptic sampling and PCA approach enables a conceptual model of contaminant dynamics to be developed to inform remediation.
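
    The synoptic mass balance calculation itself is simple arithmetic: paired streamflow and concentration profiles are converted to instantaneous loads, and segment-to-segment load increases flag source reaches. The Python below uses invented distance, discharge, and zinc concentration values purely to show the computation; it is not the Lion Creek dataset.

    ```python
    import numpy as np

    # Hypothetical synoptic profile: tracer-derived streamflow Q and
    # dissolved Zn concentration C at successive downstream sites.
    dist = np.array([0, 150, 300, 450, 600])        # m downstream
    Q = np.array([10.0, 12.0, 15.0, 15.5, 18.0])    # streamflow (L/s)
    C = np.array([50.0, 120.0, 300.0, 290.0, 260.0])  # Zn (ug/L)

    load = Q * C / 1e6       # instantaneous load (g/s)
    dload = np.diff(load)    # load change per stream segment

    for i, dL in enumerate(dload):
        tag = "source reach" if dL > 0 else "attenuation/dilution"
        print(f"{dist[i]:>4.0f}-{dist[i+1]:<4.0f} m: "
              f"dLoad = {dL*1e3:+.2f} mg/s  ({tag})")
    ```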

  15. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1995-01-01

This paper describes calculations performed to validate the modified version of the MCNP code, MCNP-DSP, which is used for: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency analysis parameters for the MCNP-DSP code

  16. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

In order to better optimize fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover this level of detail as accurately as possible within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was originally presented by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology are emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, in which simulations in transport theory were compared with simulations in diffusion theory followed by pin power reconstruction. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. The results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
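
    The core idea of pin power reconstruction is multiplicative de-homogenization: a smooth intra-nodal flux shape from the homogenized core solution is modulated by pin-level form factors saved from the lattice transport calculation. The Python toy below illustrates that product on a 17x17 lattice; the quadratic shape, the form-factor values, and the guide-tube pattern are invented, and this is in no way the DRAGON5/DONJON5 implementation.

    ```python
    import numpy as np

    # Toy de-homogenization on a 17x17 PWR-style lattice.
    npin = 17
    xy = (np.arange(npin) + 0.5) / npin        # pin centres in [0, 1]

    # Hypothetical smooth intra-nodal shape (e.g. from a polynomial
    # expansion of the diffusion solution) and lattice form factors
    # (pins next to water-filled guide tubes run slightly hotter).
    shape = 1.0 + 0.15 * (xy - 0.5) - 0.3 * (xy - 0.5) ** 2
    form = np.ones(npin)
    form[::4] = 1.08

    # Reconstructed pin power = smooth shape x heterogeneous form factor,
    # normalized to a unit assembly-average power.
    pin_power = np.outer(shape, shape) * np.outer(form, form)
    pin_power *= npin**2 / pin_power.sum()

    print("pin peaking factor (max/mean):",
          round(float(pin_power.max()), 3))
    ```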

  17. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

For the study, the behavior mechanism of corrosion products in the primary system of the Kori no. 1 plant was analyzed, and the amount of activated corrosion products in the primary system was assessed based on domestic plant data, with the CRUDTRAN code used for the prediction. It is expected that the study will be useful in predicting the radiation exposure of workers performing maintenance and repairs in high radiation areas and in selecting decontamination and decommissioning processes for the primary system. It is also expected that the results will serve as baseline data for estimating the volume of radioactive waste when decommissioning a nuclear plant, an important criterion in setting the waste classification levels used to compute waste quantities. The prediction of the radioactive nuclide inventory in the primary system performed in this study is thus expected to provide baseline data for waste volume estimation and classification in future decommissioning projects, to help reduce the radiation exposure of workers in charge of system maintenance and repair in high radiation zones, and to guide the selection of decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct the source term assessment for other NPP types, such as CANDU and OPR-1000, in addition to Westinghouse-type nuclear plants.

  18. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

Regarding the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, two distinct but interrelated subject areas were identified: the prevention of accidents involving radiation sources and the prevention of theft or any other unauthorized use of radioactive materials. What analysis reveals is that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus, as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources, emerged into the arena of international relations. (N.C.)

  19. Design and construction of a 500 KW CW, 400 MHz klystron to be used as RF power source for LHC/RF component tests

    CERN Document Server

    Frischholz, Hans; Pearson, C

    1998-01-01

    A 500 kW cw klystron operating at 400 MHz was developed and constructed jointly by CERN and SLAC for use as a high-power source at CERN for testing LHC/RF components such as circulators, RF absorbers and superconducting cavities with their input couplers. The design is a modification of the 353 MHz SLAC PEP-I klystron. More than 80% of the original PEP-I tube parts could thus be incorporated in the LHC test klystron which resulted in lower engineering costs as well as reduced development and construction time. The physical length between cathode plane and upper pole plate was kept unchanged so that a PEP-I tube focusing solenoid, available at CERN, could be re-used. With the aid of the klystron simulation codes JPNDISK and CONDOR, the design of the LHC tube was accomplished, which resulted in a tube with noticeably higher efficiency than its predecessor, the PEP-I klystron. The integrated cavities were redesigned using SUPERFISH and the output coupling circuit, which also required redesigning, was done with t...

  20. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desired output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis of some design constraints and operation parameters indicates that 1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercially available, this new open-source code demonstrates a development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  1. Use of system code to estimate equilibrium tritium inventory in fusion DT machines, such as ARIES-AT and components testing facilities

    International Nuclear Information System (INIS)

    Wong, C.P.C.; Merrill, B.

    2014-01-01

Highlights: • With the use of a system code, the tritium burn-up fraction (f_burn) can be determined. • The initial tritium inventory for steady-state DT machines can be estimated. • f_burn of ARIES-AT, CFETR and FNSF-AT are in the range of 1–2.8%. • The respective total tritium inventories are 7.6 kg, 6.1 kg, and 5.2 kg. - Abstract: ITER is under construction and will begin operation in 2020. This is the first 500 MW fusion-class DT device, and since it is not going to breed tritium, it will consume most of the limited supply of tritium resources in the world. Yet, in parallel, DT fusion nuclear component testing machines will be needed to provide technical data for the design of DEMO. It becomes necessary to estimate the tritium burn-up fraction, the corresponding initial tritium inventory, and the doubling time of these machines for the planning of future supply and utilization of tritium. With the use of a system code, the tritium burn-up fraction and initial tritium inventory for steady-state DT machines can be estimated. Estimated tritium burn-up fractions of FNSF-AT, CFETR-R and ARIES-AT are in the range of 1–2.8%. The corresponding total equilibrium tritium inventories of the plasma flow and tritium processing system, with the DCLL blanket option, are 7.6 kg, 6.1 kg, and 5.2 kg for ARIES-AT, CFETR-R and FNSF-AT, respectively
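
    The throughput arithmetic behind these inventory figures is easy to reproduce at the back-of-envelope level. The Python below uses only the numbers quoted in the abstract (500 MW fusion power, f_burn of 1-2.8%) and standard physical constants; it is a consistency check, not the system code used in the paper.

    ```python
    # Tritium consumption and fueling for a 500 MW (fusion) DT machine.
    E_DT = 17.6e6 * 1.602e-19   # J released per D-T reaction
    M_T = 3.016 * 1.661e-27     # kg per triton

    P_fus = 500e6               # fusion power (W)
    burn_rate = P_fus / E_DT    # tritons consumed per second
    burn_kg_day = burn_rate * M_T * 86400

    # Fueling rate = burn rate / burn-up fraction: most injected tritium
    # passes through unburned and must be reprocessed.
    for f_burn in (0.01, 0.028):
        fueling = burn_kg_day / f_burn
        print(f"f_burn = {f_burn:.1%}: burn {burn_kg_day:.3f} kg/day, "
              f"fueling {fueling:.1f} kg/day")
    ```

    At ~0.08 kg of tritium burned per day, a burn-up fraction of a few percent implies several kilograms per day circulating through the fueling and processing loop, which is why the equilibrium inventories quoted above sit in the 5-8 kg range.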

  2. Use of system code to estimate equilibrium tritium inventory in fusion DT machines, such as ARIES-AT and components testing facilities

    Energy Technology Data Exchange (ETDEWEB)

    Wong, C.P.C., E-mail: wongc@fusion.gat.com [General Atomics, San Diego, CA (United States); Merrill, B. [Idaho National Laboratory, Idaho Falls, ID (United States)

    2014-10-15

Highlights: • With the use of a system code, the tritium burn-up fraction (f_burn) can be determined. • The initial tritium inventory for steady-state DT machines can be estimated. • f_burn of ARIES-AT, CFETR and FNSF-AT are in the range of 1–2.8%. • The respective total tritium inventories are 7.6 kg, 6.1 kg, and 5.2 kg. - Abstract: ITER is under construction and will begin operation in 2020. This is the first 500 MW fusion-class DT device, and since it is not going to breed tritium, it will consume most of the limited supply of tritium resources in the world. Yet, in parallel, DT fusion nuclear component testing machines will be needed to provide technical data for the design of DEMO. It becomes necessary to estimate the tritium burn-up fraction, the corresponding initial tritium inventory, and the doubling time of these machines for the planning of future supply and utilization of tritium. With the use of a system code, the tritium burn-up fraction and initial tritium inventory for steady-state DT machines can be estimated. Estimated tritium burn-up fractions of FNSF-AT, CFETR-R and ARIES-AT are in the range of 1–2.8%. The corresponding total equilibrium tritium inventories of the plasma flow and tritium processing system, with the DCLL blanket option, are 7.6 kg, 6.1 kg, and 5.2 kg for ARIES-AT, CFETR-R and FNSF-AT, respectively.

  3. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  4. Synoptic sampling and principal components analysis to identify sources of water and metals to an acid mine drainage stream

    Science.gov (United States)

    Byrne, Patrick; Runkel, Robert L.; Walton-Day, Katie

    2017-01-01

Combining the synoptic mass balance approach with principal components analysis (PCA) can be an effective method for discretising the chemistry of inflows and source areas in watersheds where contamination is diffuse in nature and/or complicated by groundwater interactions. This paper presents a field-scale study in which synoptic sampling and PCA are employed in a mineralized watershed (Lion Creek, Colorado, USA) under low flow conditions to (i) quantify the impacts of mining activity on stream water quality; (ii) quantify the spatial pattern of constituent loading; and (iii) identify inflow sources most responsible for observed changes in stream chemistry and constituent loading. Several of the constituents investigated (Al, Cd, Cu, Fe, Mn, Zn) fail to meet chronic aquatic life standards along most of the study reach. The spatial pattern of constituent loading suggests four primary sources of contamination under low flow conditions. Three of these sources are associated with acidic (pH < ...) inflows; analysis of inflow (metal and major ion) chemistry using PCA suggests a hydraulic connection between many of the left bank inflows and mine water in the Minnesota Mine shaft located to the north-east of the river channel. In addition, water chemistry data during a rainfall-runoff event suggest the spatial pattern of constituent loading may be modified during rainfall due to dissolution of efflorescent salts or erosion of streamside tailings. These data point to the complexity of contaminant mobilisation processes and constituent loading in mining-affected watersheds, but the combined synoptic sampling and PCA approach enables a conceptual model of contaminant dynamics to be developed to inform remediation.

  5. Evaluation of the methodology for dose calculation in microdosimetry with electrons sources using the MCNP5 Code

    International Nuclear Information System (INIS)

    Cintra, Felipe Belonsi de

    2010-01-01

    This study made a comparison between some of the major transport codes that employ the Monte Carlo stochastic approach in dosimetric calculations in nuclear medicine. We analyzed in detail the various physical and numerical models used by the MCNP5 code in relation to codes like EGS and Penelope. Its potential and limitations for solving microdosimetry problems were highlighted. The condensed-history methodology used by MCNP resulted in lower values for the energy deposition calculation. This showed a known feature of condensed histories: they underestimate both the number of collisions along the trajectory of the electron and the number of secondary particles created. The use of transport codes like MCNP and Penelope at micrometer scales received special attention in this work. Class I and class II codes were studied and their main resources were exploited in order to transport electrons, which have particular importance in dosimetry. It is expected that the evaluation of the available methodologies mentioned here contributes to a better understanding of the behavior of these codes, especially for this class of problems, common in microdosimetry. (author)

  6. Source apportionment of ambient non-methane hydrocarbons in Hong Kong: application of a principal component analysis/absolute principal component scores (PCA/APCS) receptor model.

    Science.gov (United States)

    Guo, H; Wang, T; Louie, P K K

    2004-06-01

    Receptor-oriented source apportionment models are often used to identify sources of ambient air pollutants and to estimate source contributions to air pollutant concentrations. In this study, a PCA/APCS model was applied to the data on non-methane hydrocarbons (NMHCs) measured from January to December 2001 at two sampling sites: Tsuen Wan (TW) and Central & Western (CW) Toxic Air Pollutants Monitoring Stations in Hong Kong. This multivariate method enables the identification of major air pollution sources along with the quantitative apportionment of each source to pollutant species. The PCA analysis identified four major pollution sources at the TW site and five major sources at the CW site. The extracted pollution sources included vehicular internal engine combustion with unburned fuel emissions, use of solvents (particularly paints), liquefied petroleum gas (LPG) or natural gas leakage, and industrial, commercial and domestic sources such as solvents, decoration, fuel combustion, chemical factories and power plants. The results of the APCS receptor model indicated that 39% and 48% of the total NMHC mass concentrations measured at CW and TW, respectively, originated from vehicle emissions. 32% and 36.4% of the total NMHCs were emitted from the use of solvents and 11% and 19.4% were apportioned to LPG or natural gas leakage, respectively. 5.2% and 9% of the total NMHC mass concentrations were attributed to other industrial, commercial and domestic sources, respectively. It was also found that vehicle emissions and LPG or natural gas leakage were the main sources of C(3)-C(5) alkanes and C(3)-C(5) alkenes, while aromatics were predominantly released from paints. Comparison of source contributions to ambient NMHCs at the two sites indicated that the contribution of LPG or natural gas at the CW site was almost twice that at the TW site. High correlation coefficients (R(2) > 0.8) between the measured and predicted values suggested that the PCA/APCS model was applicable for estimation
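
    For readers unfamiliar with the APCS step, the following minimal Python sketch (synthetic data and scikit-learn, not the study's code) shows the usual workflow: z-score the species, extract principal components, rescale the scores by subtracting those of an artificial zero-concentration sample, and regress total mass on the absolute scores to apportion contributions.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        # X: samples x species matrix of NMHC concentrations (synthetic stand-in)
        rng = np.random.default_rng(0)
        true_sources = rng.gamma(2.0, 1.0, size=(500, 3))     # 3 hidden sources
        profiles = rng.dirichlet(np.ones(10), size=3)         # species profiles
        X = true_sources @ profiles + rng.normal(0, 0.02, (500, 10))

        # 1) z-score the species, 2) extract principal components
        mu, sd = X.mean(0), X.std(0)
        Z = (X - mu) / sd
        pca = PCA(n_components=3).fit(Z)
        scores = pca.transform(Z)

        # 3) APCS: subtract the factor scores of an artificial "zero" sample
        z0 = (np.zeros_like(mu) - mu) / sd
        apcs = scores - pca.transform(z0[None, :])

        # 4) regress total mass on APCS; slope x mean APCS = source contribution
        total = X.sum(axis=1)
        reg = LinearRegression().fit(apcs, total)
        contrib = reg.coef_ * apcs.mean(axis=0)
        share = contrib / total.mean()
        print("estimated fractional source contributions:", np.round(share, 2))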

  7. Characteristic compression strength of a brickwork masonry starting from the strength of its components. Experimental verification of analytical equations of European codes

    Directory of Open Access Journals (Sweden)

    Rolando, A.

    2006-09-01

    Full Text Available In this paper the compression strength of clay brickwork masonry bound with cement mortar is analyzed. The target is to obtain the characteristic compression strength of unreinforced brickwork masonry. This research tests the validity of the analytical equations in the European codes by comparing the experimental strength with that obtained analytically from the strength of the masonry components (clay brick and cement mortar).

  8. Functional anthology of intrinsic disorder. 2. Cellular components, domains, technical terms, developmental processes, and coding sequence diversities correlated with long disordered regions.

    Science.gov (United States)

    Vucetic, Slobodan; Xie, Hongbo; Iakoucheva, Lilia M; Oldfield, Christopher J; Dunker, A Keith; Obradovic, Zoran; Uversky, Vladimir N

    2007-05-01

    Biologically active proteins without stable ordered structure (i.e., intrinsically disordered proteins) are attracting increased attention. Functional repertoires of ordered and disordered proteins are very different, and the ability to differentiate whether a given function is associated with intrinsic disorder or with a well-folded protein is crucial for modern protein science. However, there is a large gap between the number of proteins experimentally confirmed to be disordered and their actual number in nature. As a result, studies of functional properties of confirmed disordered proteins, while helpful in revealing the functional diversity of protein disorder, provide only a limited view. To overcome this problem, a bioinformatics approach for comprehensive study of functional roles of protein disorder was proposed in the first paper of this series (Xie, H.; Vucetic, S.; Iakoucheva, L. M.; Oldfield, C. J.; Dunker, A. K.; Obradovic, Z.; Uversky, V. N. Functional anthology of intrinsic disorder. 1. Biological processes and functions of proteins with long disordered regions. J. Proteome Res. 2007, 5, 1882-1898). Applying this novel approach to Swiss-Prot sequences and functional keywords, we found over 238 and 302 keywords to be strongly positively or negatively correlated, respectively, with long intrinsically disordered regions. This paper describes approximately 90 Swiss-Prot keywords attributed to the cellular components, domains, technical terms, developmental processes, and coding sequence diversities possessing strong positive and negative correlation with long disordered regions.

  9. Functional Anthology of Intrinsic Disorder. II. Cellular Components, Domains, Technical Terms, Developmental Processes and Coding Sequence Diversities Correlated with Long Disordered Regions

    Science.gov (United States)

    Vucetic, Slobodan; Xie, Hongbo; Iakoucheva, Lilia M.; Oldfield, Christopher J.; Dunker, A. Keith; Obradovic, Zoran; Uversky, Vladimir N.

    2008-01-01

    Biologically active proteins without stable ordered structure (i.e., intrinsically disordered proteins) are attracting increased attention. Functional repertoires of ordered and disordered proteins are very different, and the ability to differentiate whether a given function is associated with intrinsic disorder or with a well-folded protein is crucial for modern protein science. However, there is a large gap between the number of proteins experimentally confirmed to be disordered and their actual number in nature. As a result, studies of functional properties of confirmed disordered proteins, while helpful in revealing the functional diversity of protein disorder, provide only a limited view. To overcome this problem, a bioinformatics approach for comprehensive study of functional roles of protein disorder was proposed in the first paper of this series (Xie H., Vucetic S., Iakoucheva L.M., Oldfield C.J., Dunker A.K., Obradovic Z., Uversky V.N. (2006) Functional anthology of intrinsic disorder. I. Biological processes and functions of proteins with long disordered regions. J. Proteome Res.). Applying this novel approach to Swiss-Prot sequences and functional keywords, we found over 238 and 302 keywords to be strongly positively or negatively correlated, respectively, with long intrinsically disordered regions. This paper describes ~90 Swiss-Prot keywords attributed to the cellular components, domains, technical terms, developmental processes and coding sequence diversities possessing strong positive and negative correlation with long disordered regions. PMID:17391015

  10. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis.

    Science.gov (United States)

    Delorme, Arnaud; Makeig, Scott

    2004-03-15

    We have developed a toolbox and graphic user interface, EEGLAB, running under the cross-platform MATLAB environment (The Mathworks, Inc.) for processing collections of single-trial and/or averaged EEG data of any number of channels. Available functions include EEG data, channel and event information importing, data visualization (scrolling, scalp map and dipole model plotting, plus multi-trial ERP-image plots), preprocessing (including artifact rejection, filtering, epoch selection, and averaging), independent component analysis (ICA) and time/frequency decompositions including channel and component cross-coherence supported by bootstrap statistical methods based on data resampling. EEGLAB functions are organized into three layers. Top-layer functions allow users to interact with the data through the graphic interface without needing to use MATLAB syntax. Menu options allow users to tune the behavior of EEGLAB to available memory. Middle-layer functions allow users to customize data processing using command history and interactive 'pop' functions. Experienced MATLAB users can use EEGLAB data structures and stand-alone signal processing functions to write custom and/or batch analysis scripts. Extensive function help and tutorial information are included. A 'plug-in' facility allows easy incorporation of new EEG modules into the main menu. EEGLAB is freely available (http://www.sccn.ucsd.edu/eeglab/) under the GNU public license for noncommercial use and open source development, together with sample data, user tutorial and extensive documentation.
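
    EEGLAB itself is MATLAB software; as a rough Python analogue of its ICA step, the sketch below unmixes synthetic multichannel signals with scikit-learn's FastICA. The sources, mixing matrix, and channel count are invented for illustration only.

        import numpy as np
        from sklearn.decomposition import FastICA

        # Synthetic "EEG": three latent sources mixed into five channels
        rng = np.random.default_rng(1)
        t = np.linspace(0, 8, 2000)
        S = np.c_[np.sin(7 * t),                  # oscillatory source
                  np.sign(np.sin(3 * t)),         # square-wave artifact
                  rng.laplace(size=t.size)]       # heavy-tailed noise source
        A = rng.normal(size=(5, 3))               # mixing (channel) matrix
        X = S @ A.T                                # samples x channels

        # Recover independent components (up to order and scale)
        ica = FastICA(n_components=3, random_state=0, max_iter=1000)
        S_hat = ica.fit_transform(X)               # estimated source time courses
        A_hat = ica.mixing_                        # estimated mixing matrix
        print(S_hat.shape, A_hat.shape)            # (2000, 3) (5, 3)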

  11. Use of Principal Components Analysis and Kriging to Predict Groundwater-Sourced Rural Drinking Water Quality in Saskatchewan.

    Science.gov (United States)

    McLeod, Lianne; Bharadwaj, Lalita; Epp, Tasha; Waldner, Cheryl L

    2017-09-15

    Groundwater drinking water supply surveillance data were accessed to summarize water quality delivered as public and private water supplies in southern Saskatchewan as part of an exposure assessment for epidemiologic analyses of associations between water quality and type 2 diabetes or cardiovascular disease. Arsenic in drinking water has been linked to a variety of chronic diseases and previous studies have identified multiple wells with arsenic above the drinking water standard of 0.01 mg/L; therefore, arsenic concentrations were of specific interest. Principal components analysis was applied to obtain principal component (PC) scores to summarize mixtures of correlated parameters identified as health standards and those identified as aesthetic objectives in the Saskatchewan Drinking Water Quality Standards and Objectives. Ordinary, universal, and empirical Bayesian kriging were used to interpolate arsenic concentrations and PC scores in southern Saskatchewan, and the results were compared. Empirical Bayesian kriging performed best across all analyses, based on having the greatest number of variables for which the root mean square error was lowest. While all of the kriging methods appeared to underestimate high values of arsenic and PC scores, empirical Bayesian kriging was chosen to summarize large-scale geographic trends in groundwater-sourced drinking water quality and assess exposure to mixtures of trace metals and ions.
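
    As an illustration of the ordinary-kriging baseline against which empirical Bayesian kriging was compared, the following hedged Python sketch interpolates synthetic arsenic values with the PyKrige package (empirical Bayesian kriging itself is not provided by PyKrige; the coordinates and concentrations here are invented).

        import numpy as np
        from pykrige.ok import OrdinaryKriging  # pip install pykrige

        # Well locations (x, y) and measured arsenic concentrations (mg/L), synthetic
        rng = np.random.default_rng(42)
        x, y = rng.uniform(0, 100, 60), rng.uniform(0, 100, 60)
        z = 0.005 + 0.004 * np.sin(x / 15.0) + rng.normal(0, 0.001, 60)

        # Fit an ordinary kriging model with a spherical variogram
        ok = OrdinaryKriging(x, y, z, variogram_model="spherical")

        # Interpolate onto a regular grid; ss holds the kriging variance
        gridx = np.linspace(0, 100, 50)
        gridy = np.linspace(0, 100, 50)
        zhat, ss = ok.execute("grid", gridx, gridy)
        print(zhat.shape, float(ss.max()))   # (50, 50) and max prediction variance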

  12. Design and test of component circuits of an integrated quantum voltage noise source for Johnson noise thermometry

    International Nuclear Information System (INIS)

    Yamada, Takahiro; Maezawa, Masaaki; Urano, Chiharu

    2015-01-01

    Highlights: • We demonstrated RSFQ digital components of a new quantum voltage noise source. • A pseudo-random number generator and variable pulse number multiplier are designed. • Fabrication process is based on four Nb wiring layers and Nb/AlOx/Nb junctions. • The circuits successfully operated with wide dc bias current margins, 80–120%. - Abstract: We present design and testing of a pseudo-random number generator (PRNG) and a variable pulse number multiplier (VPNM), which are digital circuit subsystems in an integrated quantum voltage noise source for Johnson noise thermometry. Well-defined, calculable pseudo-random patterns of single flux quantum pulses are synthesized with the PRNG and multiplied digitally with the VPNM. The circuit implementation in rapid single flux quantum technology required practical circuit scales and bias currents: 279 junctions and 33 mA for the PRNG, and 1677 junctions and 218 mA for the VPNM. We confirmed the circuit operation with sufficiently wide margins, 80–120%, with respect to the designed bias currents.
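
    A calculable pseudo-random pulse pattern of the kind such a PRNG synthesizes is conventionally produced with a maximal-length linear feedback shift register; the Python sketch below shows the idea in software (the register width and taps are illustrative, not the circuit's actual design).

        def lfsr_pulses(taps=(16, 14, 13, 11), seed=0xACE1, n=32):
            """Fibonacci LFSR over 16 bits; yields one pulse bit per clock.

            taps encodes the feedback polynomial x^16 + x^14 + x^13 + x^11 + 1,
            a maximal-length choice (period 2**16 - 1)."""
            state = seed
            for _ in range(n):
                bit = 0
                for t in taps:
                    bit ^= (state >> (t - 1)) & 1
                yield state & 1                  # emit / suppress a pulse this clock
                state = (state >> 1) | (bit << 15)

        pattern = list(lfsr_pulses())
        print("".join(map(str, pattern)))        # deterministic, calculable pattern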

  13. Design and test of component circuits of an integrated quantum voltage noise source for Johnson noise thermometry

    Energy Technology Data Exchange (ETDEWEB)

    Yamada, Takahiro, E-mail: yamada-takahiro@aist.go.jp [Nanoelectronics Research Institute, National Institute of Advanced Industrial Science and Technology, Central 2, Umezono 1-1-1, Tsukuba, Ibaraki 305-8568 (Japan); Maezawa, Masaaki [Nanoelectronics Research Institute, National Institute of Advanced Industrial Science and Technology, Central 2, Umezono 1-1-1, Tsukuba, Ibaraki 305-8568 (Japan); Urano, Chiharu [National Metrology Institute of Japan, National Institute of Advanced Industrial Science and Technology, Central 3, Umezono 1-1-1, Tsukuba, Ibaraki 305-8563 (Japan)

    2015-11-15

    Highlights: • We demonstrated RSFQ digital components of a new quantum voltage noise source. • A pseudo-random number generator and variable pulse number multiplier are designed. • Fabrication process is based on four Nb wiring layers and Nb/AlOx/Nb junctions. • The circuits successfully operated with wide dc bias current margins, 80–120%. - Abstract: We present design and testing of a pseudo-random number generator (PRNG) and a variable pulse number multiplier (VPNM), which are digital circuit subsystems in an integrated quantum voltage noise source for Johnson noise thermometry. Well-defined, calculable pseudo-random patterns of single flux quantum pulses are synthesized with the PRNG and multiplied digitally with the VPNM. The circuit implementation in rapid single flux quantum technology required practical circuit scales and bias currents: 279 junctions and 33 mA for the PRNG, and 1677 junctions and 218 mA for the VPNM. We confirmed the circuit operation with sufficiently wide margins, 80–120%, with respect to the designed bias currents.

  14. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring networks and, particularly, the rapid geodynamics, which clearly exposes some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using an historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent, fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps
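
    The core frequency-magnitude bookkeeping behind such recurrence estimates can be illustrated with the Gutenberg-Richter relation; the Python sketch below (the a and b values are invented, and FiSH does considerably more) converts annual rates into mean recurrence times and a Poisson exceedance probability.

        import numpy as np

        # Gutenberg-Richter: log10 N(>=M) = a - b*M, with N in events per year
        a, b = 3.2, 1.1          # illustrative values for a volcano-tectonic zone

        def annual_rate(m):
            return 10.0 ** (a - b * m)

        def mean_recurrence_years(m):
            return 1.0 / annual_rate(m)

        for m_char in (4.0, 4.5, 5.0):
            print(f"M>={m_char}: rate {annual_rate(m_char):.4f}/yr, "
                  f"mean recurrence {mean_recurrence_years(m_char):.0f} yr")

        # Poisson probability of at least one M>=5.0 event in an exposure window
        T = 50.0                 # years
        p = 1.0 - np.exp(-annual_rate(5.0) * T)
        print(f"P(at least one M>=5.0 in {T:.0f} yr) = {p:.2f}")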

  15. The Los Alamos accelerator code group

    Energy Technology Data Exchange (ETDEWEB)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET.

  16. The Los Alamos accelerator code group

    International Nuclear Information System (INIS)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-01-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET

  17. A new time-space accounting scheme to predict stream water residence time and hydrograph source components at the watershed scale

    Science.gov (United States)

    Takahiro Sayama; Jeffrey J. McDonnell

    2009-01-01

    Hydrograph source components and stream water residence time are fundamental behavioral descriptors of watersheds but, as yet, are poorly represented in most rainfall-runoff models. We present a new time-space accounting scheme (T-SAS) to simulate the pre-event and event water fractions, mean residence time, and spatial source of streamflow at the watershed scale. We...
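
    The pre-event/event partitioning that T-SAS simulates is classically grounded in a two-component tracer mass balance; a minimal Python sketch with invented tracer values:

        import numpy as np

        # Two-component mixing: C_stream = f_pre*C_pre + (1 - f_pre)*C_event
        c_pre, c_event = -8.0, -12.0   # e.g. d18O of groundwater vs. rainfall (permil)

        def pre_event_fraction(c_stream):
            return (c_stream - c_event) / (c_pre - c_event)

        # Stream samples through a storm hydrograph
        c_stream = np.array([-8.2, -9.0, -10.1, -9.4, -8.6])
        f = pre_event_fraction(c_stream)
        print(np.round(f, 2))   # fraction of "old" (pre-event) water in each sample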

  18. Matlab Source Code for Species Transport through Nafion Membranes in Direct Ethanol, Direct Methanol, and Direct Glucose Fuel Cells

    OpenAIRE

    JH, Summerfield; MW, Manley

    2016-01-01

    A simple simulation of chemical species movement is presented. The species traverse a Nafion membrane in a fuel cell. Three cells are examined: direct methanol, direct ethanol, and direct glucose. The species are tracked using excess proton concentration, electric field strength, and voltage. The Matlab computer code is provided.

  19. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on the study of cold neutron sources (CNS) in research reactors was to create a complete set of tools for designing CNS efficiently. The work raises the problem of running, for parametric studies, accurate simulations of experimental devices inside the reactor reflector. On the one hand, deterministic codes have reasonable computation times but introduce problems for the geometrical description. On the other hand, Monte Carlo codes make it possible to compute on a precise geometry, but need computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time allows parametric studies to be led using a Monte Carlo code. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas for research in the condensed matter field makes it possible to obtain fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides and the samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  20. Identification of genes for small non-coding RNAs that belong to the regulon of the two-component regulatory system CiaRH in Streptococcus

    Directory of Open Access Journals (Sweden)

    Hakenbeck Regine

    2010-11-01

    Full Text Available Abstract Background Post-transcriptional regulation by small RNAs (sRNAs) in bacteria is now recognized as a widespread regulatory mechanism modulating a variety of physiological responses including virulence. In Streptococcus pneumoniae, an important human pathogen, the first sRNAs to be described were found in the regulon of the CiaRH two-component regulatory system. Five of these sRNAs were detected and designated csRNAs for cia-dependent small RNAs. CiaRH pleiotropically affects β-lactam resistance, autolysis, virulence, and competence development by yet to be defined molecular mechanisms. Since CiaRH is highly conserved among streptococci, it is of interest to determine if csRNAs are also included in the CiaRH regulon in this group of organisms consisting of commensal as well as pathogenic species. Knowledge on the participation of csRNAs in CiaRH-dependent regulatory events will be the key to defining the physiological role of this important control system. Results Genes for csRNAs were predicted in streptococcal genomes and database entries other than S. pneumoniae by searching for CiaR-activated promoters located in intergenic regions that are followed by a transcriptional terminator. 61 different candidate genes were obtained specifying csRNAs ranging in size from 51 to 202 nt. Comparing these genes among each other revealed 40 different csRNA types. All streptococcal genomes harbored csRNA genes, their numbers varying between two and six. To validate these predictions, S. mitis, S. oralis, and S. sanguinis were subjected to csRNA-specific northern blot analysis. In addition, a csRNA gene from S. thermophilus plasmid pST0 introduced into S. pneumoniae was also tested. Each of the csRNAs was detected on these blots and showed the anticipated sizes. Thus, the method applied here is able to predict csRNAs with high precision. Conclusions The results of this study strongly suggest that genes for small non-coding RNAs, csRNAs, are part of

  1. Smart time-pulse coding photoconverters as basic components 2D-array logic devices for advanced neural networks and optical computers

    Science.gov (United States)

    Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Lazarev, Alexander A.; Michalnichenko, Nikolay N.

    2004-04-01

    The article deals with a conception of building arithmetic-logic devices (ALD) with a 2D structure and optical 2D-array inputs-outputs as advanced high-productivity parallel basic operational training modules for realization of basic operations of continuous, neuro-fuzzy, multilevel, threshold and other logics and vector-matrix, vector-tensor procedures in neural networks, which consists in the use of a time-pulse coding (TPC) architecture and 2D-array smart optoelectronic pulse-width (or pulse-phase) modulators (PWM or PPM) for transformation of input pictures. The input grayscale image is transformed into a group of corresponding short optical pulses or time positions of an optical two-level signal swing. We consider optoelectronic implementations of universal (quasi-universal) picture elements of two-valued ALD, multi-valued ALD, analog-to-digital converters and multilevel threshold discriminators, and we show that 2D-array time-pulse photoconverters are the base elements for these devices. We show simulation results for the time-pulse photoconverters as base components. The devices considered have the following technical parameters: input optical signal power of 200 nW–200 μW (for a photodiode responsivity of 0.5 A/W), conversion time from tens of microseconds to a millisecond, supply voltage of 1.5–15 V, power consumption from tens of microwatts to a milliwatt, and conversion nonlinearity of less than 1%. One cell consists of 2-3 photodiodes and about ten CMOS transistors. This simplicity of the cells allows their integration in arrays of 32x32, 64x64 elements and more.
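
    The heart of the scheme, mapping each pixel's gray level to a pulse width within a conversion frame, can be sketched in a few lines of Python (the frame time and bit depth below are illustrative, not the paper's parameters):

        import numpy as np

        def pwm_encode(image, t_frame=100e-6, levels=256):
            """Map gray levels to pulse widths within one conversion frame."""
            return (image.astype(float) / (levels - 1)) * t_frame

        def pwm_decode(widths, t_frame=100e-6, levels=256):
            return np.rint(widths / t_frame * (levels - 1)).astype(int)

        img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
        widths = pwm_encode(img)            # seconds of "light on" per pixel
        assert np.array_equal(pwm_decode(widths), img)
        print(widths)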

  2. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground-based terrestrial power plants. Such plants would make a significant contribution toward a drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric power AC generators. However, the advantages of transmitting power in high-voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternative heat source, the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid 1960s, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the thorium → U233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also to provide thermal energy for hydrogen (H2) production during off-peak periods. This approach will both supply electric power using environmentally clean nuclear heat, which does not generate greenhouse gases, and provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. This is

  3. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    Science.gov (United States)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  4. A Mode Propagation Database Suitable for Code Validation Utilizing the NASA Glenn Advanced Noise Control Fan and Artificial Sources

    Science.gov (United States)

    Sutliff, Daniel L.

    2014-01-01

    The NASA Glenn Research Center's Advanced Noise Control Fan (ANCF) was developed in the early 1990s to provide a convenient test bed to measure and understand fan-generated acoustics, duct propagation, and radiation to the farfield. A series of tests was performed, primarily for code validation and tool validation. Rotating Rake mode measurements were acquired for parametric sets of: (i) mode blockage, (ii) liner insertion loss, (iii) short ducts, and (iv) mode reflection.

  5. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    Science.gov (United States)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

    Quality of Service (QoS) guarantees in real-time communication for multimedia applications are critically important. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes the Moving Picture Experts Group (MPEG) standards, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable-rate packet sequences to be used in our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality, such as a bound on end-to-end jitter. We have shown that our JSCC scheme is better than two other popular techniques by simulation and real video experiments in a TCP/IP environment.
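
    The substream classification step can be illustrated with a toy Python sketch that routes packets to substreams by frame importance; the QoS numbers and class layout below are invented, not the paper's parameters.

        from dataclasses import dataclass

        @dataclass
        class Packet:
            seq: int
            frame_type: str   # "I", "P" or "B"
            size: int

        # Hypothetical substream classes, each with its own QoS targets
        QOS = {
            "I": {"substream": 0, "loss_bound": 1e-4, "jitter_ms": 10},
            "P": {"substream": 1, "loss_bound": 1e-3, "jitter_ms": 20},
            "B": {"substream": 2, "loss_bound": 1e-2, "jitter_ms": 40},
        }

        def classify(packets):
            """Partition a packetized MPEG stream into substreams by importance."""
            streams = {0: [], 1: [], 2: []}
            for p in packets:
                streams[QOS[p.frame_type]["substream"]].append(p)
            return streams

        gop = [Packet(i, t, 1000) for i, t in enumerate("IBBPBBPBB")]
        for sid, pkts in classify(gop).items():
            print(sid, [p.frame_type for p in pkts])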

  6. Group Independent Component Analysis (gICA) and Current Source Density (CSD) in the study of EEG in ADHD adults.

    Science.gov (United States)

    Ponomarev, Valery A; Mueller, Andreas; Candrian, Gian; Grin-Yatsenko, Vera A; Kropotov, Juri D

    2014-01-01

    To investigate the performance of the spectral analysis of resting EEG, Current Source Density (CSD) and group independent components (gIC) in diagnosing ADHD adults. Power spectra of resting EEG, CSD and gIC (19 channels, linked-ears reference, eyes open/closed) from 96 ADHD and 376 healthy adults were compared between eyes-open and eyes-closed conditions, and between groups of subjects. The pattern of differences in gIC and CSD spectral power between conditions was approximately similar, whereas for EEG it was more widely spatially distributed. The effect size (Cohen's d) of differences in gIC and CSD spectral power between groups of subjects was considerably greater than in the case of EEG. A significant condition-dependent reduction of gIC and CSD spectral power was found in ADHD patients. Reduced power in a wide frequency range in the fronto-central areas is a common phenomenon regardless of whether the eyes were open or closed. Spectral power of local EEG activity isolated by gICA or CSD in the fronto-central areas may be a suitable marker for discrimination of ADHD and healthy adults. Spectral analysis of gIC and CSD provides better sensitivity for discriminating ADHD and healthy adults. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  7. X-ray spectroscopy of warm and hot electron components in the CAPRICE source plasma at EIS testbench at GSI

    Energy Technology Data Exchange (ETDEWEB)

    Mascali, D., E-mail: davidmascali@lns.infn.it; Celona, L.; Castro, G.; Torrisi, G.; Neri, L.; Gammino, S.; Ciavola, G. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud, – Via S. Sofia 62, 95123 Catania (Italy); Maimone, F.; Maeder, J.; Tinschert, K.; Spaedtke, K. P.; Rossbach, J.; Lang, R. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstrasse 1, 64291 Darmstadt (Germany); Romano, F. P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud, – Via S. Sofia 62, 95123 Catania (Italy); IBAM, CNR, Via Biblioteca 4, 95124 Catania (Italy); Musumarra, A.; Altana, C.; Caliri, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud, – Via S. Sofia 62, 95123 Catania (Italy); Dipartimento di Fisica e Astronomia, Università degli Studi di Catania, via S. Sofia 64, 95123 Catania (Italy)

    2014-02-15

    An experimental campaign aiming to detect X radiation emitted by the plasma of the CAPRICE source – operating at GSI, Darmstadt – has been carried out. Two different detectors (a SDD – Silicon Drift Detector and a HpGe – hyper-pure Germanium detector) have been used to characterize the warm (2–30 keV) and hot (30–500 keV) electrons in the plasma, collecting the emission intensity and the energy spectra for different pumping wave frequencies and then correlating them with the CSD of the extracted beam measured by means of a bending magnet. A plasma emissivity model has been used to extract the plasma density along the cone of sight of the SDD and HpGe detectors, which have been placed beyond specific collimators developed on purpose. Results show that the tuning of the pumping frequency considerably modifies the plasma density especially in the warm electron population domain, which is the component responsible for ionization processes: a strong variation of the plasma density near axis region has been detected. Potential correlations with the charge state distribution in the plasma are explored.

  8. X-ray spectroscopy of warm and hot electron components in the CAPRICE source plasma at EIS testbench at GSI.

    Science.gov (United States)

    Mascali, D; Celona, L; Maimone, F; Maeder, J; Castro, G; Romano, F P; Musumarra, A; Altana, C; Caliri, C; Torrisi, G; Neri, L; Gammino, S; Tinschert, K; Spaedtke, K P; Rossbach, J; Lang, R; Ciavola, G

    2014-02-01

    An experimental campaign aiming to detect X radiation emitted by the plasma of the CAPRICE source - operating at GSI, Darmstadt - has been carried out. Two different detectors (a SDD - Silicon Drift Detector and a HpGe - hyper-pure Germanium detector) have been used to characterize the warm (2-30 keV) and hot (30-500 keV) electrons in the plasma, collecting the emission intensity and the energy spectra for different pumping wave frequencies and then correlating them with the CSD of the extracted beam measured by means of a bending magnet. A plasma emissivity model has been used to extract the plasma density along the cone of sight of the SDD and HpGe detectors, which have been placed beyond specific collimators developed on purpose. Results show that the tuning of the pumping frequency considerably modifies the plasma density especially in the warm electron population domain, which is the component responsible for ionization processes: a strong variation of the plasma density near axis region has been detected. Potential correlations with the charge state distribution in the plasma are explored.

  9. IFMIF [International Fusion Materials Irradiation Facility], an accelerator-based neutron source for fusion components irradiation testing: Materials testing capabilities

    International Nuclear Information System (INIS)

    Mann, F.M.

    1988-08-01

    The International Fusion Materials Irradiation Facility (IFMIF) is proposed as an advanced accelerator-based neutron source for high-flux irradiation testing of large-sized fusion reactor components. The facility would require only small extensions to existing accelerator and target technology originally developed for the Fusion Materials Irradiation Test (FMIT) facility. At the extended facility, neutrons would be produced by a 0.1-A beam of 35-MeV deuterons incident upon a liquid lithium target. The volume available for high-flux (>10^15 n/cm^2-s) testing in IFMIF would be over a liter, a factor of about three larger than in the FMIT facility. This is because the effective beam current of 35-MeV deuterons on target can be increased by a factor of ten to 1 A or more. Such an increase can be accomplished by funneling beams of deuterium ions from the radio-frequency quadrupole into a linear accelerator and by taking advantage of recent developments in accelerator technology. Multiple beams and a large total current allow great variety in available testing. For example, multiple simultaneous experiments and great flexibility in tailoring spatial distributions of flux and spectra can be achieved. 5 refs., 2 figs., 1 tab

  10. Investigation of some possible changes in Am-Be neutron source configuration in order to increase the thermal neutron flux using Monte Carlo code

    Science.gov (United States)

    Basiri, H.; Tavakoli-Anbaran, H.

    2018-01-01

    The Am-Be neutron source is based on the (α, n) reaction and generates neutrons in the energy range of 0-11 MeV. Since thermal neutrons are widely used in different fields, in this work we investigate how to improve the source configuration in order to increase the thermal flux. The suggested changes include a spherical moderator instead of the common cylindrical geometry, a reflector layer, and an appropriate selection of materials in order to achieve the maximum thermal flux. All calculations were done using the MCNP Monte Carlo code. Our final results indicated that a spherical paraffin moderator and a layer of beryllium as a reflector can efficiently increase the thermal neutron flux of the Am-Be source.

  11. An improvement of estimation method of source term to the environment for interfacing system LOCA for typical PWR using MELCOR code

    Energy Technology Data Exchange (ETDEWEB)

    Han, Seok Jung; Kim, Tae Woon; Ahn, Kwang Il [Risk and Environmental Safety Research Division, Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2017-06-15

    Interfacing-system loss-of-coolant-accident (ISLOCA) has been identified as the most hazardous accident scenario in typical PWR plants. The present study, as an effort to improve knowledge of the source term to the environment during ISLOCA, focuses on an improvement of the estimation method. The improvement was performed to take into account the effects of the broken pipeline and of the auxiliary building structures relevant to ISLOCA. The source term to the environment was estimated for the OPR-1000 plants with the MELCOR code, version 1.8.6. The key features of the source term showed that a massive amount of fission products departed from the beginning of core degradation to the vessel breach. The release amount of fission products may be affected by the broken pipeline and the auxiliary building structure associated with the release pathway.

  12. The Effects of Different Nitrogen Sources on Yield and Yield Components of Sweet Corn (Zea mays L. saccharata

    Directory of Open Access Journals (Sweden)

    ali mojab ghasroddashti

    2017-09-01

    Full Text Available Introduction Sweet corn (Zea mays L. saccharata) is one of the tropical cereals of the Gramineae family, cultivated for its ears. Nitrogen is one of the most important nutrients and a key factor in achieving desirable yield. Fertilizers play a major role in crop productivity. However, excessive use of fertilizers has been found to have a negative impact on yield and the environment. Introducing new management methods based on nitrogen and water use efficiency has shown improvements in the quality and quantity of crop production in association with the health of the community. In fact, soil organic matter content should be maintained at an appropriate level to improve fertility. Using municipal solid waste compost and poultry manure is an appropriate solution. They can increase soil organic matter, modify physicochemical properties and improve crop production. Moreover, they can solve problems caused by the accumulation of municipal solid waste and poultry manure. Material and Methods In order to investigate the impact of different sources of nitrogen on yield and yield components of sweet corn, a field experiment was conducted as a randomized complete block design with three replications in Marvdasht in 2013. Treatments included different sources of fertilizer: 200 kg.ha-1 net nitrogen (T1), 300 kg.ha-1 net nitrogen (T2), 8 ton.ha-1 poultry manure (T3), 24 ton.ha-1 municipal solid waste compost (T4), 150 kg net nitrogen + 2 ton municipal solid waste compost (T5), 100 kg net nitrogen + 4 ton poultry manure (T6), 150 kg net nitrogen + 6 ton municipal solid waste compost (T7), 100 kg.ha-1 net nitrogen + 12 ton municipal solid waste compost (T8), and a fertilizer-free control (T9). At crop maturity, two square meters from the middle of each plot were harvested to measure yield and yield components. Statistical analysis was performed using the SAS statistical software. Least significant difference (LSD) test at the five

  13. A study on the application of CRUDTRAN code in primary systems of domestic pressurized heavy-water reactors for prediction of radiation source term

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Cho, Hoon Jo; Jung, Min Young; Lee, Sang Heon [Dept. of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2017-04-15

    The importance of developing a source-term assessment technology has been emphasized owing to the decommissioning of Kori nuclear power plant (NPP) Unit 1 and the increase of deteriorated NPPs. We analyzed the behavioral mechanism of corrosion products in the primary system of a pressurized heavy-water reactor-type NPP. In addition, to check the possibility of applying the CRUDTRAN code to a Canadian Deuterium Uranium Reactor (CANDU)-type NPP, its applicability was assessed using collected domestic on-site data. With the assessment results, it was possible to predict trends according to operating cycles. Values estimated using the code were similar to the measured values. The results of this study are expected to be used to manage the radiation exposures of operators in high-radiation areas and to predict decommissioning processes in the primary system.

  14. Calculations of fuel burn-up and radionuclide inventory in the syrian miniature neutron source reactor using the WIMSD4 code

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    Calculations of the fuel burn-up and radionuclide inventory in the Miniature Neutron Source Reactor after 10 years (the expected life of the reactor core) of reactor operating time are presented in this paper. The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations and radioactivities of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are calculated using the WIMSD4 code as well
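
    The depletion bookkeeping such a lattice code performs can be caricatured with a one-group, constant-flux model; in the Python sketch below the cross sections, flux, and inventories are illustrative only, and Pu-239 absorption and the decay delay of the capture chain are neglected.

        import numpy as np

        # One-group cross sections (barns -> cm^2) and a constant flux; illustrative
        barn = 1e-24
        sig_a_u235, sig_c_u238 = 680 * barn, 2.7 * barn
        phi = 1e13                     # n / cm^2 / s (assumed, not MNSR data)

        def deplete(n_u235, n_u238, years, steps=2000):
            dt = years * 3.156e7 / steps
            n_pu239 = 0.0
            for _ in range(steps):
                dn235 = -sig_a_u235 * phi * n_u235 * dt   # U-235 burn-out
                dn_pu = sig_c_u238 * phi * n_u238 * dt    # U-238 capture -> Pu-239
                n_u235 += dn235
                n_u238 -= dn_pu
                n_pu239 += dn_pu
            return n_u235, n_u238, n_pu239

        n235, n238, npu = deplete(1.0e21, 5.0e21, years=10)
        print(f"U-235 burnt: {(1.0e21 - n235)/1.0e21:.2%}, "
              f"Pu-239 built up: {npu:.3e} atoms/cm^3")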

  15. European inter-comparison of Monte Carlo code users for the uncertainty calculation of the kerma in air beside a caesium-137 source

    Energy Technology Data Exchange (ETDEWEB)

    De Carlan, L.; Bordy, J.M.; Gouriou, J. [CEA Saclay, LIST, Laboratoire National Henri Becquerel, Laboratoire de Metrologie de la Dose 91 - Gif-sur-Yvette (France)

    2010-07-01

    Within the frame of the CONRAD European project (Coordination Network for Radiation Dosimetry), and more precisely within a work group paying attention to uncertainty assessment in computational dosimetry and aiming at comparing different approaches, the authors report the simulation of an irradiator containing a caesium-137 source to calculate the kerma in air as well as its uncertainty due to different parameters. They present the problem geometry, recall the studied issues (kerma uncertainty, influence of the source capsule, influence of the collimator, influence of the air volume surrounding the source), indicate the codes which have been used (MCNP, Fluka, Penelope, etc.), and discuss the obtained results for the first issue

  16. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    International Nuclear Information System (INIS)

    Liu, T; Lin, H; Xu, X; Su, L; Shi, C; Tang, X; Bednarz, B

    2016-01-01

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore the software micro-optimization methods. Methods: The patient-specific source of Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA’s database, partial geometry information of the jaw and MLC as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, and was focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double-precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam, while on 3 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  17. TU-AB-BRC-10: Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison of GPU and MIC Computing Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Liu, T; Lin, H; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Su, L [John Hopkins University, Baltimore, MD (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States); Bednarz, B [University of Wisconsin, Madison, WI (United States)

    2016-06-15

    Purpose: (1) To perform phase space (PS) based source modeling for Tomotherapy and Varian TrueBeam 6 MV Linacs, (2) to examine the accuracy and performance of the ARCHER Monte Carlo code on a heterogeneous computing platform with Many Integrated Core coprocessors (MIC, aka Xeon Phi) and GPUs, and (3) to explore the software micro-optimization methods. Methods: The patient-specific source of Tomotherapy and Varian TrueBeam Linacs was modeled using the PS approach. For the helical Tomotherapy case, the PS data were calculated in our previous study (Su et al. 2014 41(7) Medical Physics). For the single-view Varian TrueBeam case, we analytically derived them from the raw patient-independent PS data in IAEA’s database, partial geometry information of the jaw and MLC as well as the fluence map. The phantom was generated from DICOM images. The Monte Carlo simulation was performed by ARCHER-MIC and GPU codes, which were benchmarked against a modified parallel DPM code. Software micro-optimization was systematically conducted, and was focused on SIMD vectorization of tight for-loops and data prefetch, with the ultimate goal of increasing 512-bit register utilization and reducing memory access latency. Results: Dose calculation was performed for two clinical cases, a Tomotherapy-based prostate cancer treatment and a TrueBeam-based left breast treatment. ARCHER was verified against the DPM code. The statistical uncertainty of the dose to the PTV was less than 1%. Using double-precision, the total wall time of the multithreaded CPU code on a X5650 CPU was 339 seconds for the Tomotherapy case and 131 seconds for the TrueBeam, while on 3 5110P MICs it was reduced to 79 and 59 seconds, respectively. The single-precision GPU code on a K40 GPU took 45 seconds for the Tomotherapy dose calculation. Conclusion: We have extended ARCHER, the MIC- and GPU-based Monte Carlo dose engine, to Tomotherapy and TrueBeam dose calculations.

  18. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. For this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc…) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted at maximizing computational efficiency; the code is designed to run on shared-memory multi-core workstations and on distributed-memory clusters of shared-memory nodes (supercomputers), with parallelization based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: the documentation has also been carefully taken into account; it is built upon comprehensive comments placed directly in the source files (no external documentation files needed), which are parsed by the doxygen free software to produce high-quality HTML and LaTeX documentation pages; the distributed versioning system referred to as git

  19. Dosimetric comparison between the microSelectron HDR 192Ir v2 source and the BEBIG 60Co source for HDR brachytherapy using the EGSnrc Monte Carlo transport code

    International Nuclear Information System (INIS)

    Anwarul Islam, M.; Akramuzzaman, M.M.; Zakaria, G.A.

    2012-01-01

    Manufacturing of miniaturized high-activity 192Ir sources has become a market preference in modern brachytherapy. The smaller dimensions of the sources allow smaller-diameter applicators and are also suitable for interstitial implants. Presently, miniaturized 60Co HDR sources have been made available with dimensions identical to those of 192Ir sources. 60Co sources have the advantage of a longer half-life compared with 192Ir. High-dose-rate brachytherapy sources with a longer half-life are, from an economic point of view, a pragmatic solution for developing countries. This study aims to compare the TG-43U1 dosimetric parameters for the new BEBIG 60Co HDR and new microSelectron 192Ir HDR sources. Dosimetric parameters are calculated using the EGSnrc-based Monte Carlo simulation code in accordance with the AAPM TG-43 formalism for the microSelectron HDR 192Ir v2 and new BEBIG 60Co HDR sources. Air-kerma strengths per unit source activity, calculated in dry air, are 9.698x10^-8 ± 0.55% U Bq^-1 and 3.039x10^-7 ± 0.41% U Bq^-1 for the above-mentioned two sources, respectively. The calculated dose-rate constants per unit air-kerma strength in water are 1.116 ± 0.12% cGy h^-1 U^-1 and 1.097 ± 0.12% cGy h^-1 U^-1, respectively, for the two sources. The values of the radial dose function for distances up to 1 cm and beyond 22 cm are higher for the BEBIG 60Co HDR source than for the other source. The anisotropy values increase sharply toward the longitudinal sides of the BEBIG 60Co source, and the rise is comparatively sharper than for the other source. Tissue dependence of the absorbed dose has been investigated with a vacuum phantom for breast, compact bone, blood, lung, thyroid, soft tissue, testis, and muscle. No significant variation is noted at 5 cm radial distance when comparing the two sources, except for lung tissue. The true dose rates are calculated considering photon as well as electron transport using
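
    For orientation, the TG-43 point-source dose-rate equation these parameters feed into can be evaluated as in the Python sketch below; every numerical value there is a placeholder for illustration, not consensus data for either source.

        import numpy as np

        # TG-43 point-source approximation:
        #   Ddot(r) = S_K * Lambda * (r0 / r)**2 * g(r) * phi_an(r)
        # All values below are placeholders, not consensus data for a real source.
        S_K = 40800.0            # air-kerma strength, U (roughly a 10 Ci Ir-192 source)
        LAMBDA = 1.11            # dose-rate constant, cGy / (h U)
        r0 = 1.0                 # reference distance, cm

        g_r = lambda r: np.interp(r, [0.5, 1.0, 2.0, 3.0, 5.0],
                                     [1.00, 1.00, 0.99, 0.98, 0.94])  # radial dose fn
        phi_an = lambda r: 0.98                                       # 1D anisotropy

        def dose_rate(r_cm):
            return S_K * LAMBDA * (r0 / r_cm) ** 2 * g_r(r_cm) * phi_an(r_cm)

        for r in (1.0, 2.0, 5.0):
            print(f"r = {r:.0f} cm: {dose_rate(r):8.1f} cGy/h")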

  20. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at stand-off distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in^3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in^3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented as well as
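
    The coded-aperture half of such a system can be illustrated in one dimension: the detector records the scene circularly convolved with the mask, and correlation with a balanced decoding array concentrates each source back into a peak. The Python sketch below uses an invented random mask, not the TMI's actual pattern.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 101
        mask = rng.integers(0, 2, n)           # open/closed aperture elements
        decode = 2 * mask - 1                   # balanced decoder (+1 open, -1 closed)

        # Point-like gamma scene with two sources plus background noise
        scene = np.zeros(n)
        scene[30], scene[70] = 5.0, 2.0

        # Detector records the scene circularly convolved with the mask (+ noise)
        detector = np.real(np.fft.ifft(np.fft.fft(scene) * np.fft.fft(mask)))
        detector += rng.normal(0, 0.05, n)

        # Correlation with the decoding array restores a peak at each source
        recon = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(decode))))
        print("brightest reconstructed bins:", np.argsort(recon)[-2:])  # ~[70, 30]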

  1. Effect of Different Sources of Nitrogen and Organic Fertilizers on Yield and Yield Components of Ajowan (Trachyspermum ammi L.)

    Directory of Open Access Journals (Sweden)

    zahra saydi

    2017-09-01

    Full Text Available Introduction Ajowan (Trachyspermum ammi L.) is an annual medicinal plant of the family Apiaceae which can reach 30-100 cm in height, and its growth depends strongly on the availability of mineral nutrients in the soil. However, it has been shown that the use of chemical fertilizers to promote the growth of Ajowan can have negative impacts on the environment and ecological systems. Sustainable agriculture is the best approach to overcome such problems and to prevent the excess accumulation of chemical fertilizers in the soil. Application of bio-fertilizers as an alternative to chemical fertilizers is a sustainable approach that has been raised in the new era of agriculture. Therefore, this study was conducted to investigate the application of various biological fertilizers, such as vermicompost, Alkazot Plus and humic acid, in combination with nitrogen fertilizers, on the growth behaviour, yield and yield components of Ajowan under Ahvaz growing conditions. Materials and methods This research was conducted at the Agricultural Research Station of Shahid Chamran University in 2014-2015 to determine the effects of different sources of nitrogen and organic fertilizers on the yield and yield components of Ajowan, based on a two-way randomized complete block design with three replications. The first factor was the application of four different nitrogen sources: urea (U), sulfur-coated urea (SCU), 50% sulfur-coated urea (1/2 SCU) + Alkazot Plus biological fertilizer, and a control (no nitrogen source). Organic fertilizers were applied at four levels as the second factor: humic acid, vermicompost, 50% vermicompost + humic acid, and a control (no organic fertilizer). After soil preparation, approximately 4 kg ha^-1 of seed was planted in rows 30 cm apart. Plant height, number of sub-branches, number of umbels per plant, number of seeds per umbel, 1000-seed weight, biological

  2. Chemical Components, Variation, and Source Identification of PM1 during the Heavy Air Pollution Episodes in Beijing in December 2016

    Science.gov (United States)

    Zhang, Yangmei; Wang, Yaqiang; Zhang, Xiaoye; Shen, Xiaojing; Sun, Junying; Wu, Lingyan; Zhang, Zhouxiang; Che, Haochi

    2018-02-01

    Air pollution is a current global concern. The heavy air pollution episodes (HPEs) in Beijing in December 2016 severely influenced visibility and public health. This study surveys the chemical compositions, sources, and formation processes of the HPEs. An Aerodyne quadrupole aerosol mass spectrometer (Q-AMS) was used to measure in situ the non-refractory PM1 (NR-PM1) mass concentration and the size distributions of the main chemical components, including organics, sulfate, nitrate, ammonium, and chloride, during 15-23 December 2016. The NR-PM1 mass concentration was found to increase from 6 to 188 μg m^-3 within 5 days. During the most seriously polluted episode, the PM1 mass concentration was about 2.6 times that of the first pollution stage and about 40 times that of the clean days. The formation rates of PM2.5 in the five pollution stages were 26, 22, 22, 32, and 67 μg m^-3 h^-1, respectively. Organics and nitrate made up the largest proportion during the polluted episodes, whereas organics and sulfate dominated the submicron aerosol during the clean days. The size distribution of organics is always broader than those of the other species, especially in the clean episodes. The peak sizes of the species of interest grew gradually during the different HPEs. Aqueous reactions might be important in forming sulfate and chloride, while nitrate was formed via oxidation and condensation processes. PMF (positive matrix factorization) analysis of the AMS mass spectra was employed to separate the organics into different subtypes. Two types of secondary organic aerosol with different degrees of oxidation constituted 43% of the total organics. By contrast, primary organics from cooking, coal combustion, and traffic emissions comprised 57% of the organic aerosols during the HPEs.
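
    The PMF step mentioned above factors the time series of mass spectra into factor profiles and contributions. As a rough stand-in (real AMS studies typically use the EPA PMF or multilinear-engine implementations, which also weight by measurement uncertainty), plain non-negative matrix factorization reproduces the core idea; the matrix sizes and factor number below are arbitrary.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(1)
        X = np.abs(rng.normal(size=(200, 120)))   # hypothetical AMS spectra: time steps x m/z channels

        model = NMF(n_components=4, init='nndsvda', max_iter=500, random_state=0)
        G = model.fit_transform(X)                # factor contributions per time step
        F = model.components_                     # factor profiles (mass spectra)
        print(G.shape, F.shape)                   # (200, 4) (4, 120)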

  3. Linking soil moisture balance and source-responsive models to estimate diffuse and preferential components of groundwater recharge

    Science.gov (United States)

    Cuthbert, M.O.; Mackay, R.; Nimmo, J.R.

    2012-01-01

    Results are presented of a detailed study into the vadose zone and shallow water table hydrodynamics of a field site in Shropshire, UK. A conceptual model is developed and tested using a range of numerical models, including a modified soil moisture balance model (SMBM) for estimating groundwater recharge in the presence of both diffuse and preferential flow components. Tensiometry reveals that the loamy sand topsoil wets up via macropore flow and subsequent redistribution of moisture into the soil matrix. Recharge does not occur until near-positive pressures are achieved at the top of the sandy glaciofluvial outwash material that underlies the topsoil, about 1 m above the water table. Once this occurs, very rapid water table rises follow. This threshold behaviour is attributed to the vertical discontinuity in the macropore system due to seasonal ploughing of the topsoil, and a lower permeability plough/iron pan restricting matrix flow between the topsoil and the lower outwash deposits. Although the wetting process in the topsoil is complex, a SMBM is shown to be effective in predicting the initiation of preferential flow from the base of the topsoil into the lower outwash horizon. The rapidity of the response at the water table and a water table rise during the summer period while flow gradients in the unsaturated profile were upward suggest that preferential flow is also occurring within the outwash deposits below the topsoil. A variation of the source-responsive model proposed by Nimmo (2010) is shown to reproduce the observed water table dynamics well in the lower outwash horizon when linked to a SMBM that quantifies the potential recharge from the topsoil. The results reveal new insights into preferential flow processes in cultivated soils and provide a useful and practical approach to accounting for preferential flow in studies of groundwater recharge estimation.
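
    A minimal sketch of the daily bookkeeping behind a soil moisture balance model of the kind linked here, with a crude threshold-triggered bypass term standing in for the source-responsive preferential flow component; all parameter values are invented for illustration.

        def smbm_step(smd, rain, pet, smd_max=90.0, bypass_frac=0.3, rain_thresh=10.0):
            """One daily step: returns (new soil moisture deficit, diffuse recharge,
            preferential flow). Units mm; parameter values are illustrative only."""
            bypass = bypass_frac * rain if rain > rain_thresh else 0.0  # macropore bypass
            smd = smd + pet - (rain - bypass)   # deficit grows with ET, shrinks with infiltration
            diffuse = 0.0
            if smd < 0.0:                       # soil at field capacity: excess drains downward
                diffuse, smd = -smd, 0.0
            return min(smd, smd_max), diffuse, bypass

        smd, recharge = 40.0, 0.0
        for rain, pet in [(0.0, 2.0), (25.0, 1.0), (60.0, 0.5)]:   # toy daily series
            smd, d, p = smbm_step(smd, rain, pet)
            recharge += d + p
        print(smd, recharge)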

  4. Linking soil moisture balance and source-responsive models to estimate diffuse and preferential components of groundwater recharge

    Directory of Open Access Journals (Sweden)

    M. O. Cuthbert

    2013-03-01

    Full Text Available Results are presented of a detailed study into the vadose zone and shallow water table hydrodynamics of a field site in Shropshire, UK. A conceptual model is presented and tested using a range of numerical models, including a modified soil moisture balance model (SMBM) for estimating groundwater recharge in the presence of both diffuse and preferential flow components. Tensiometry reveals that the loamy sand topsoil wets up via preferential flow and subsequent redistribution of moisture into the soil matrix. Recharge does not occur until near-positive pressures are achieved at the top of the sandy glaciofluvial outwash material that underlies the topsoil, about 1 m above the water table. Once this occurs, very rapid water table rises follow. This threshold behaviour is attributed to the vertical discontinuity in preferential flow pathways due to seasonal ploughing of the topsoil and to a lower permeability plough/iron pan restricting matrix flow between the topsoil and the lower outwash deposits. Although the wetting process in the topsoil is complex, a SMBM is shown to be effective in predicting the initiation of preferential flow from the base of the topsoil into the lower outwash horizon. The rapidity of the response at the water table and a water table rise during the summer period while flow gradients in the unsaturated profile were upward suggest that preferential flow is also occurring within the outwash deposits below the topsoil. A variation of the source-responsive model proposed by Nimmo (2010) is shown to reproduce the observed water table dynamics well in the lower outwash horizon when linked to a SMBM that quantifies the potential recharge from the topsoil. The results reveal new insights into preferential flow processes in cultivated soils and provide a useful and practical approach to accounting for preferential flow in studies of groundwater recharge estimation.

  5. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    International Nuclear Information System (INIS)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I.

    2006-01-01

    Orphan sources, activated materials and materials contaminated with natural or artificial radionuclides have been detected in scrap metal destined for recycling. The melting of a source during the process could have economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 t of contaminated steel in one piece is a major problem, so it is of great importance to develop a methodology for predicting the activity distribution inside a volume of steel. In previous work we were able to distinguish the cases where the source is disseminated over the entire cylinder from those where it is concentrated in different volumes. The main goal now is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR^2 + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder), trapped inside two types of matrix, concrete and stainless steel. The methodology has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)
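
    The calibration function r = aR^2 + bR + c described above can be obtained from simulated (ratio, radius) pairs by an ordinary least-squares quadratic fit; the numbers below are invented for illustration.

        import numpy as np

        # hypothetical (spectral ratio R, source radius r in cm) calibration pairs
        R = np.array([0.62, 0.71, 0.78, 0.84, 0.88])
        r = np.array([0.0, 5.0, 10.0, 15.0, 20.0])

        a, b, c = np.polyfit(R, r, 2)            # least-squares fit of r = a*R**2 + b*R + c
        r_est = np.polyval([a, b, c], 0.80)      # radius inferred from a measured ratio
        print(a, b, c, r_est)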

  6. A contribution to the analysis of the activity distribution of a radioactive source trapped inside a cylindrical volume, using the M.C.N.P.X. code

    Energy Technology Data Exchange (ETDEWEB)

    Portugal, L.; Oliveira, C.; Trindade, R.; Paiva, I. [Instituto Tecnologico e Nuclear, Dpto. Proteccao Radiologica e Seguranca Nuclear, Sacavem (Portugal)

    2006-07-01

    Orphan sources, activated materials and materials contaminated with natural or artificial radionuclides have been detected in scrap metal destined for recycling. The melting of a source during the process could have economic, environmental and social impacts. From the point of view of radioactive waste management, a scenario of 100 t of contaminated steel in one piece is a major problem, so it is of great importance to develop a methodology for predicting the activity distribution inside a volume of steel. In previous work we were able to distinguish the cases where the source is disseminated over the entire cylinder from those where it is concentrated in different volumes. The main goal now is to distinguish between different radii of spherical source geometries trapped inside the cylinder. For this, a methodology was proposed based on the ratio of the counts in two regions of the gamma spectrum, obtained with a sodium iodide detector, using the M.C.N.P.X. Monte Carlo simulation code. These calculated ratios allow us to determine a function r = aR^2 + bR + c, where R is the ratio between the counts of the two regions of the gamma spectrum and r is the radius of the source. For simulation purposes six 60Co sources were used (a point source, four spheres of 5 cm, 10 cm, 15 cm and 20 cm radius, and the overall contaminated cylinder), trapped inside two types of matrix, concrete and stainless steel. The methodology has been shown to predict and distinguish accurately the distribution of a source inside a material, roughly independently of the matrix and density considered. (authors)

  7. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Science.gov (United States)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a TomoTherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  8. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Directory of Open Access Journals (Sweden)

    Lin Hui

    2017-01-01

    Full Text Available Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling, examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and Nvidia GPUs using ARCHER, and explore potential optimization methods. Phase-space-based source modelling has been implemented. Good agreement was found in a TomoTherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate and breast plans took about 173 s and 73 s, respectively, with 1% statistical error.

  9. Parallelization of the AliRoot event reconstruction by performing a semi-automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in performance gains if memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labour-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  10. Optical CDMA components requirements

    Science.gov (United States)

    Chan, James K.

    1998-08-01

    Optical CDMA is a multiple-access technology complementary to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC, or supports a large number of high-speed users in a LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by the inherent CDMA coding). However, optical CDMA technology is less mature than WDMA, and its component requirements also differ from those of WDMA. We have demonstrated a video transport/switching system over a distance of 40 km using discrete optical components in our laboratory, and we are currently pursuing a PIC implementation. In this paper, we describe the optical CDMA concept and features, the demonstration system, and the requirements of some critical optical components such as the broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and the fixed/programmable mask.

  11. A study of the physics of sub-critical multiplying systems driven by sources and the utilization of deterministic codes in calculations of these systems

    International Nuclear Information System (INIS)

    Antunes, Alberi

    2008-01-01

    This work presents the physics of subcritical multiplying systems driven by external sources (ADS). It presents some static and kinetic reactor-physics parameters that are important in the evaluation and definition of such systems when the reactor is subcritical, with the objective of demonstrating that these parameters differ from those of a critical reactor and, moreover, that the values observed differ between calculation models. Two calculation methodologies are presented in this dissertation, that of Gandini and Salvatores and that of Dulla, and some parameters are calculated with each. The ANISN deterministic transport code is used to compare these parameters, which are evaluated for a subcritical configuration of the IPEN/MB-01 reactor driven by an external source. Conclusions about the calculations performed are presented at the end of the work. (author)

  12. Calculation of gamma ray dose buildup factors in water for isotropic point, plane monodirectional and line sources using the MCNP code

    International Nuclear Information System (INIS)

    Atak, H.; Celikten, O. S.; Tombakoglu, M.

    2009-01-01

    Gamma-ray dose buildup factors in water for isotropic point, plane monodirectional and infinite/finite line sources were calculated using the MCNP code. The buildup factors were determined for gamma-ray energies of 1, 2, 3 and 4 MeV and for shield thicknesses of 1, 2, 4 and 7 mean free paths. The calculated buildup factors were then fitted to the Taylor and Berger forms. For the line sources, a buildup factor table was also constructed using the Sievert function and the Taylor-form constants derived in this study, for comparison with the Monte Carlo results. All buildup factors were compared with tabulated data from the literature. To reduce the statistical errors on the buildup factors, the 'forced collision' option was used in the MCNP calculations.
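
    The Taylor and Berger forms referred to above are simple parametric shapes fitted to the Monte Carlo buildup factors. The sketch below fits both to invented sample points with scipy, purely to illustrate the fitting step; the data and starting values are not from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def taylor(mu_r, A, a1, a2):
            # Taylor form: B = A exp(-a1 mu r) + (1 - A) exp(-a2 mu r)
            return A * np.exp(-a1 * mu_r) + (1.0 - A) * np.exp(-a2 * mu_r)

        def berger(mu_r, a, b):
            # Berger form: B = 1 + a mu r exp(b mu r)
            return 1.0 + a * mu_r * np.exp(b * mu_r)

        mu_r = np.array([1.0, 2.0, 4.0, 7.0])     # shield thickness in mean free paths
        B = np.array([2.1, 3.6, 7.2, 14.9])       # hypothetical Monte Carlo buildup factors

        pT, _ = curve_fit(taylor, mu_r, B, p0=(10.0, -0.1, 0.1), maxfev=10000)
        pB, _ = curve_fit(berger, mu_r, B, p0=(1.0, 0.1))
        print(pT, pB)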

  13. Source convergence diagnostics using Boltzmann entropy criterion: application to different OECD/NEA criticality benchmarks with the 3-D Monte Carlo code Tripoli-4

    International Nuclear Information System (INIS)

    Dumonteil, E.; Le Peillet, A.; Lee, Y. K.; Petit, O.; Jouanne, C.; Mazzolo, A.

    2006-01-01

    The measurement of the stationarity of Monte Carlo fission source distributions in k_eff calculations plays a central role in the ability to discriminate between fake and true convergence (in the case of a high dominance ratio or of loosely coupled systems). Recent theoretical developments have been made in the study of source convergence diagnostics using Shannon entropy. We first recall those results, and then generalize them using the expression of Boltzmann entropy, highlighting the gain in terms of the variety of physical problems that can be treated. Finally, we present the results of several OECD/NEA benchmarks using the Tripoli-4 Monte Carlo code enhanced with this new criterion. (authors)
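
    As a hedged illustration of the entropy diagnostic discussed above: binning the fission source sites each generation and tracking the Shannon entropy of the bin probabilities gives a scalar whose stabilization signals source convergence. The toy source below is invented, and the Boltzmann-entropy generalization of the paper is not reproduced here.

        import numpy as np

        def shannon_entropy(counts):
            """Shannon entropy of a binned fission-source distribution."""
            p = counts / counts.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(0)
        H = []
        for generation in range(50):
            sites = rng.normal(loc=20.0, scale=5.0, size=1000)   # toy source sites
            counts, _ = np.histogram(sites, bins=32, range=(0.0, 40.0))
            H.append(shannon_entropy(counts))
        # convergence is diagnosed when H reaches a plateau over generations
        print(H[-5:])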

  14. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing that allows a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for extended inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data; it is therefore possible to process complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts the satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  15. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behaviour of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flow-rate conditions, constant or time-varying; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behaviour, is a one-dimensional, multi-channel code, and is complemented by FLID, a one-channel, two-dimensional code. (authors)

  16. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code in a way that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionality and efficiency of the code, but to study and interpret the program level of code in order to trace the use of various technological methods, such as third-party libraries and platforms' interfaces. These are important for understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials relating to the production of the artwork, I would like to explore the materiality of code that goes beyond technical

  17. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation from the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the resources of the Federal and State health agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  18. One-pot four-component synthesis of 2-aryl-3,3-dihaloacrylonitriles using potassium hexacyanoferrate(II) as environmentally benign cyanide source

    International Nuclear Information System (INIS)

    Zhao, Zhouxing; Li, Zheng

    2011-01-01

    An efficient route to the one-pot four-component reaction of aroyl chlorides, potassium hexacyanoferrate(II), triphenylphosphine and carbon tetrahalides to synthesize 2-aryl-3,3-dichloroacrylonitriles and 2-aryl-3,3-dibromoacrylonitriles is described. This protocol has the advantages of a non-toxic cyanide source, high yields and a simple work-up procedure. (author)

  19. The IPEM code of practice for determination of the reference air kerma rate for HDR 192Ir brachytherapy sources based on the NPL air kerma standard

    International Nuclear Information System (INIS)

    Bidmead, A M; Sander, T; Nutbrown, R F; Locks, S M; Lee, C D; Aird, E G A; Flynn, A

    2010-01-01

    This paper contains the recommendations of the high dose rate (HDR) brachytherapy working party of the UK Institute of Physics and Engineering in Medicine (IPEM). The recommendations consist of a Code of Practice (COP) for the UK for measuring the reference air kerma rate (RAKR) of HDR 192Ir brachytherapy sources. In 2004, the National Physical Laboratory (NPL) commissioned a primary standard for the realization of the RAKR of HDR 192Ir brachytherapy sources. It is therefore now possible to calibrate ionization chambers with an 192Ir source directly traceable to an air kerma standard (Sander and Nutbrown 2006 NPL Report DQL-RD 004 (Teddington: NPL) http://publications.npl.co.uk). In order to use the source specification in terms of either the RAKR, K_R (ICRU 1985 ICRU Report No 38 (Washington, DC: ICRU); ICRU 1997 ICRU Report No 58 (Bethesda, MD: ICRU)), or the air kerma strength, S_K (Nath et al 1995 Med. Phys. 22 209-34), it has been necessary to develop algorithms that can calculate the dose at any point around brachytherapy sources within the patient tissues. The AAPM TG-43 protocol (Nath et al 1995 Med. Phys. 22 209-34) and the 2004 update TG-43U1 (Rivard et al 2004 Med. Phys. 31 633-74) have been developed more fully than any other protocol and are widely used in commercial treatment planning systems. Since the TG-43 formalism uses the quantity air kerma strength, whereas this COP uses the RAKR, a unit conversion from RAKR to air kerma strength is included in the appendix to this COP. It is recommended that the measured RAKR, determined with a calibrated well chamber traceable to the NPL 192Ir primary standard, is used in the treatment planning system. The measurement uncertainty in the source calibration based on the system described in this COP is considerably reduced compared with other methods based on interpolation techniques.
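
    Since this COP specifies sources by RAKR while TG-43 uses air kerma strength, the unit conversion mentioned above is, numerically, a multiplication by the square of the 1 m reference distance (1 U = 1 uGy m^2 h^-1). A minimal sketch, kept explicit for clarity rather than taken from the COP itself:

        def air_kerma_strength(rakr_uGy_per_h_at_1m):
            """Convert RAKR (uGy/h at 1 m) to air kerma strength S_K in U.
            With d_ref = 1 m and 1 U = 1 uGy m^2 h^-1 the two values are
            numerically equal; the reference distance is kept explicit."""
            d_ref_m = 1.0
            return rakr_uGy_per_h_at_1m * d_ref_m**2

        S_K = air_kerma_strength(40820.0)   # hypothetical HDR 192Ir source
        print(S_K)                          # U (= uGy m^2 / h)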

  20. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf); the ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of the releases, the rates of energy release, and the elevation of the releases. The codes were developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. They also provide the capability to explore phenomena, and their uncertainty, which are not currently modelled by the mechanistic codes. The uncertainty distributions of the input parameters may be used by an XSOR code to estimate the uncertainty of the source terms.

  1. Source apportionment of soil heavy metals using robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR) receptor model.

    Science.gov (United States)

    Qu, Mingkai; Wang, Yan; Huang, Biao; Zhao, Yongcun

    2018-06-01

    Traditional source apportionment models, such as absolute principal component scores-multiple linear regression (APCS-MLR), are usually susceptible to outliers, which may be widely present in regional geochemical datasets. Furthermore, such models are built on variable space rather than geographical space and thus cannot effectively capture the local spatial characteristics of each source's contributions. To overcome these limitations, a new receptor model, robust absolute principal component scores-robust geographically weighted regression (RAPCS-RGWR), was proposed based on the traditional APCS-MLR model. The new method was then applied to the source apportionment of soil metal elements in a region of Wuhan City, China, as a case study. Evaluations revealed that: (i) the RAPCS-RGWR model performed better than the APCS-MLR model in identifying the major sources of soil metal elements, and (ii) the source contributions estimated by the RAPCS-RGWR model were closer to the true soil metal concentrations than those estimated by the APCS-MLR model. The proposed RAPCS-RGWR model is thus shown to be a more effective source apportionment method than the non-robust, global APCS-MLR model in dealing with regional geochemical datasets.
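
    For reference, the classical (non-robust, global) APCS-MLR recipe that RAPCS-RGWR generalizes can be sketched with scikit-learn as below. The data are synthetic, and the robust and geographically weighted refinements of the paper are deliberately omitted.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        X = np.abs(rng.normal(1.0, 0.3, size=(100, 8)))   # samples x metal concentrations

        mean, std = X.mean(axis=0), X.std(axis=0)
        Z = (X - mean) / std                              # standardized concentrations
        pca = PCA(n_components=3).fit(Z)
        scores = Z @ pca.components_.T                    # principal component scores

        z0 = (0.0 - mean) / std                           # artificial zero-concentration sample
        apcs = scores - z0 @ pca.components_.T            # absolute principal component scores

        reg = LinearRegression().fit(apcs, X[:, 0])       # MLR for one target element
        contributions = reg.coef_ * apcs.mean(axis=0)     # mean source contributions
        print(reg.intercept_, contributions)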

  2. Characterization of secondary organic aerosol from photo-oxidation of gasoline exhaust and specific sources of major components.

    Science.gov (United States)

    Ma, Pengkun; Zhang, Peng; Shu, Jinian; Yang, Bo; Zhang, Haixu

    2018-01-01

    To further explore the composition and distribution of the components of secondary organic aerosol (SOA) from the photo-oxidation of light aromatic precursors (toluene, m-xylene, and 1,3,5-trimethylbenzene (1,3,5-TMB)) and of idling gasoline exhaust, a vacuum ultraviolet photoionization mass spectrometer (VUV-PIMS) was employed. Thanks to the soft ionization used in VUV-PIMS, peaks of the molecular ions of the SOA components were clearly observed in the mass spectra with minimal molecular fragmentation. Experiments comparing the exhaust-SOA with the light-aromatic-mixture-SOA showed that the distributions of almost all the predominant cluster ions in the exhaust-SOA were similar to those in the mixture-SOA. Based on characterization experiments on SOA formed from the individual light aromatic precursors, the exhaust-SOA components with molecular weights of 98 and 110 amu resulted from the photo-oxidation of toluene and m-xylene; the components with a molecular weight of 124 amu derived mainly from m-xylene; and the components with molecular weights of 100, 112, 128, 138, and 156 amu derived mainly from 1,3,5-TMB. These results suggest that C7-C9 light aromatic hydrocarbons are significant SOA precursors and that major SOA components originate from gasoline exhaust. Additionally, some new light-aromatic-hydrocarbon SOA components were observed for the first time using VUV-PIMS, and the corresponding reaction mechanisms are proposed to enrich the knowledge base of the formation mechanisms of light-aromatic-hydrocarbon SOA compounds.

  3. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of the MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP code in the dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters, such as the dose rate constant, radial dose function, and anisotropy function, of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137, were calculated in a water phantom. The results obtained with three versions of the Monte Carlo code (MCNP4C, MCNPX, MCNP5) were compared for low- and high-energy brachytherapy sources. The cross-section library of the MCNP4C code was then changed to ENDF/B-VI release 8, which is used in the MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code were compared with those of the other codes. Results: The results of these investigations indicate that for high-energy sources the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However, for low-energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between the g(r) values calculated using MCNP4C and MCNP5 at a distance of 6 cm were found to be about 17% and 28% for I-125 and Pd-103, respectively. The results obtained with MCNP4C-revised and MCNPX were similar; however, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6 cm. Conclusion: The results indicate that using the MCNP4C code for the dosimetry of low-energy brachytherapy sources can cause large errors in the results. It is therefore recommended not to use this code for low-energy sources unless its cross-section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX lies in their cross-section libraries.

  4. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed for different output service schemes.
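
    A minimal discrete-time sketch of the model class studied here: N independent Bernoulli sources feed a single-server multiplexer queue that serves one cell per slot. The source count, per-slot arrival probability and horizon are arbitrary choices for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        N, p, slots = 8, 0.1, 100_000      # sources, per-slot cell probability, horizon

        q, qlen = 0, []
        for _ in range(slots):
            q += rng.binomial(N, p)        # cells arriving from the N Bernoulli sources
            q = max(q - 1, 0)              # output line serves one cell per slot
            qlen.append(q)

        print(np.mean(qlen))               # mean queue length (offered load = N*p = 0.8)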

  5. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks; it does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (subroutine) structure to allow independent modelling of each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code, as long as the input to and output from the module remain unchanged.

  6. Development of Level-2 PSA Technology: A Development of the Database of the Parametric Source Term for Kori Unit 1 Using the MAAP4 Code

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Chang Soon; Mun, Ju Hyun; Yun, Jeong Ick; Cho, Young Hoo; Kim, Chong Uk [Seoul National University, Seoul (Korea, Republic of)

    1997-07-15

    To quantify the severe accident source term with the parametric model method, the uncertainty of the parameters should be analyzed. Generally, to analyze the uncertainties, the cumulative distribution functions (CDFs) of the parameters are derived. This report introduces a method for deriving the CDFs of the basic parameters FCOR, FVES and FDCH. The source term calculation tool is MAAP version 4.0. The MAAP code contains model parameters that account for uncertain physical and/or chemical phenomena; in general, these parameters have not a point value but a range. Considering this, the input values of the model parameters influencing each basic parameter are sampled using Latin hypercube sampling (LHS), and the calculation results are presented in cumulative distribution form. As a case study, the CDFs of FCOR, FVES and FDCH of KORI unit 1 are derived, for target scenarios whose initiating events are a large LOCA, a small LOCA and a transient, respectively. The distributions obtained in this study are consistent with those of NUREG-1150 and are shown to be adequate for assessing the uncertainties in the severe accident source term of KORI Unit 1. 15 refs., 27 tabs., 4 figs. (author)
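
    The parameter-uncertainty step described above (sample the model parameters with LHS, run the code, and present the results as a CDF) can be sketched as follows. The parameter ranges and the algebraic stand-in for a MAAP4 run are invented for illustration.

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=3, seed=0)
        u = sampler.random(n=200)                     # LHS design in the unit cube
        lo, hi = np.array([0.1, 0.5, 1.0]), np.array([1.0, 2.0, 5.0])
        x = qmc.scale(u, lo, hi)                      # hypothetical model-parameter ranges

        y = x[:, 0] * x[:, 1] / x[:, 2]               # stand-in for a MAAP4 response (e.g. FCOR)
        ys = np.sort(y)
        cdf = np.arange(1, ys.size + 1) / ys.size     # empirical CDF of the output parameter
        print(ys[len(ys) // 2], cdf[len(cdf) // 2])   # median and its CDF value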

  7. Computer code determination of tolerable accel current and voltage limits during startup of an 80 kV MFTF sustaining neutral beam source

    International Nuclear Information System (INIS)

    Mayhall, D.J.; Eckard, R.D.

    1979-01-01

    We have used a Lawrence Livermore Laboratory (LLL) version of the WOLF ion source extractor design computer code to determine tolerable accel current and voltage limits during startup of a prototype 80 kV Mirror Fusion Test Facility (MFTF) sustaining neutral beam source. Arc current limits are also estimated. The source extractor has gaps of 0.236, 0.721, and 0.155 cm. The effective ion mass is 2.77 AMU. The measured optimum accel current density is 0.266 A/cm^2. The gradient grid electrode runs at 5/6 V_a (accel voltage). The suppressor electrode voltage is zero for V_a < 3 kV and -3 kV for V_a >= 3 kV. The accel current density for optimum beam divergence is obtained for 1 kV <= V_a <= 80 kV, as are the beam divergence and emittance

  8. CONTAIN code calculations of the effects on the source term of CsI to I2 conversion due to severe hydrogen burns

    International Nuclear Information System (INIS)

    Valdez, G.D.; Williams, D.C.

    1986-01-01

    In experiments conducted at Sandia National Laboratories, large amounts of elemental iodine were produced when CsI-Al2O3 aerosol was exposed to hydrogen/air combustion. To evaluate some of the implications of this iodide conversion (observed to occur with up to 75% efficiency) for the severe accident source term, computational simulations of representative accident sequences were conducted with the CONTAIN code. The following conclusions can be drawn from this preliminary source term assessment: (1) if the containment sprays are inoperative during the accident, or are failed by the hydrogen burn, the late-time source term is almost tripled when the iodide is converted to I2; (2) with the sprays active, the amount released without conversion of the CsI aerosol is 63% higher than in the case where conversion occurs; (3) for the case where CsI is converted to I2, continued operation of the sprays reduces the release by a factor of 40 relative to the case in which the sprays fail at the time of the hydrogen burn; when there is no conversion, the corresponding reduction factor for continued spray operation is about 9.

  9. Source apportionment of fine particles and its chemical components over the Yangtze River Delta, China during a heavy haze pollution episode

    Science.gov (United States)

    Li, L.; An, J. Y.; Zhou, M.; Yan, R. S.; Huang, C.; Lu, Q.; Lin, L.; Wang, Y. J.; Tao, S. K.; Qiao, L. P.; Zhu, S. H.; Chen, C. H.

    2015-12-01

    An extremely high PM2.5 pollution episode occurred over eastern China in January 2013. In this paper, the particulate matter source apportionment technology (PSAT) method coupled within the Comprehensive air quality model with extensions (CAMx) is applied to study the source contributions to PM2.5 and its major components at six receptors (urban Shanghai, Chongming, Dianshan Lake, urban Suzhou, Hangzhou and Zhoushan) in the Yangtze River Delta (YRD) region. Contributions from 4 source areas (Shanghai, South Jiangsu, North Zhejiang and the Super-region) and 9 emission sectors (power plants, industrial boilers and kilns, industrial processing, mobile sources, residential sources, volatile emissions, dust, agriculture and biogenic emissions) to PM2.5 and its major components (sulfate, nitrate, ammonium, organic carbon and elemental carbon) at the six receptors are quantified. Results show that the accumulation of local pollution was the largest contributor during this air pollution episode in urban Shanghai (55%) and Suzhou (46%), followed by long-range transport (37% contribution to Shanghai and 44% to Suzhou). Super-regional emissions play an important role in PM2.5 formation at the Hangzhou (48%) and Zhoushan (68%) sites. Among the emission sectors contributing to the high pollution episode, the major source categories are industrial processing (with contributions ranging between 12.7 and 38.7% at the different receptors), combustion sources (21.7-37.3%), mobile sources (7.5-17.7%) and fugitive dust (8.4-27.3%). The agricultural contribution is also very significant at the Zhoushan site (24.5%). In terms of the PM2.5 major components, industrial boilers and kilns are the major contributors to sulfate and nitrate; volatile emission sources and agriculture are the major contributors to ammonia; and transport is the largest contributor to elemental carbon. Industrial processing, volatile emissions and mobile sources are the most significant

  10. The DC field components of horizontal and vertical electric dipole sources immersed in three-layered stratified media

    Directory of Open Access Journals (Sweden)

    D. Llanwyn Jones

    Full Text Available Formulas for computing the Cartesian components of the static (DC) fields of horizontal electric dipoles (HEDs) and vertical electric dipoles (VEDs) located in the central zone of a three-layer horizontally stratified medium are derived and presented in a summary form suitable for immediate computation. Formulas are given for the electric and magnetic field components in the upper and central regions. In the general case the computation involves the summation of a convergent infinite series. For the particular case of an infinitely thick central region (corresponding to the two-layer problem), the analysis produces relatively simple closed-form equations for the field components which are suitable for hand calculation. Specimen calculations for dipoles in seawater are included, and the derived results are compared with computations made using an AC model.

  11. The DC field components of horizontal and vertical electric dipole sources immersed in three-layered stratified media

    Directory of Open Access Journals (Sweden)

    D. Llanwyn Jones

    1997-04-01

    Full Text Available Formulas for computing the Cartesian components of the static (DC) fields of horizontal electric dipoles (HEDs) and vertical electric dipoles (VEDs) located in the central zone of a three-layer horizontally stratified medium are derived and presented in a summary form suitable for immediate computation. Formulas are given for the electric and magnetic field components in the upper and central regions. In the general case the computation involves the summation of a convergent infinite series. For the particular case of an infinitely thick central region (corresponding to the two-layer problem), the analysis produces relatively simple closed-form equations for the field components which are suitable for hand calculation. Specimen calculations for dipoles in seawater are included, and the derived results are compared with computations made using an AC model.

  12. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm^3 and 3.6 g/cm^3 are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, comes in handy for the shield design optimization of the peripheral shields. QAD-CGGP requires a buildup factor database, which currently contains only ordinary concrete of density 2.3 g/cm^3. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV, with thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data to Capo's formula, compatible with QAD-CGGP, has been attempted

  13. Sex-differences of face coding: evidence from larger right hemispheric M170 in men and dipole source modelling.

    Directory of Open Access Journals (Sweden)

    Hannes O Tiedt

    Full Text Available The processing of faces relies on a specialized neural system comprising bilateral cortical structures with a dominance of the right hemisphere. However, due to inconsistencies in earlier findings as well as more recent results, such functional lateralization has become a topic of discussion. In particular, studies employing behavioural tasks and electrophysiological methods indicate a dominance of the right hemisphere during face perception only in men, whereas women exhibit symmetric, bilateral face processing. The aim of this study was to further investigate such sex differences in the hemispheric processing of personally familiar and opposite-sex faces using whole-head magnetoencephalography (MEG). We found a right-lateralized M170 component in occipito-temporal sensor clusters in men, as opposed to a bilateral response in women. The same pattern was obtained when performing dipole localization and determining dipole strength in the M170 time window. These results suggest an asymmetric involvement of face-responsive neural structures in men and allow this asymmetry to be ascribed to the fusiform gyrus. This specifies findings from previous investigations employing event-related potentials (ERPs) and LORETA reconstruction methods, which yielded rather extended bilateral activations showing left asymmetry in women and right lateralization in men. We discuss our finding of an asymmetric fusiform activation pattern in men in terms of holistic face processing during face evaluation, and in terms of sex differences with regard to visual strategies in general and interest in opposite-sex faces in particular. Taken together, the pattern of hemispheric specialization observed here yields new insights into sex differences in face perception and raises further questions about interactions between biological sex, psychological gender and influences that might be stimulus-driven or task-dependent.

  14. Calculations of the thermal and fast neutron fluxes in the Syrian miniature neutron source reactor using the MCNP-4C code.

    Science.gov (United States)

    Khattab, K; Sulieman, I

    2009-04-01

    The MCNP-4C code, based on the probabilistic approach, was used to model the 3D configuration of the core of the Syrian miniature neutron source reactor (MNSR). The continuous energy neutron cross sections from the ENDF/B-VI library were used to calculate the thermal and fast neutron fluxes in the inner and outer irradiation sites of MNSR. The thermal fluxes in the MNSR inner irradiation sites were also measured experimentally by the multiple foil activation method ((197)Au (n, gamma) (198)Au and (59)Co (n, gamma) (60)Co). The foils were irradiated simultaneously in each of the five MNSR inner irradiation sites to measure the thermal neutron flux and the epithermal index in each site. The calculated and measured results agree well.

  15. TOGA: A TOUGH code for modeling three-phase, multi-component, and non-isothermal processes involved in CO2-based Enhanced Oil Recovery

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Lehua [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Oldenburg, Curtis M. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States)

    2016-10-10

    CMG. The code has also been validated against a CO2-EOR experimental core flood involving flow of three phases and 12 components. Results of simulations of a hypothetical 3D CO2-EOR problem involving three phases and multiple components are presented to demonstrate the field-scale capabilities of the new code. This user guide provides instructions for use and sample problems for verification and demonstration.

  16. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Science.gov (United States)

    Phillips-Smith, Catherine; Jeong, Cheol-Heon; Healy, Robert M.; Dabek-Zlotorzynska, Ewa; Celo, Valbona; Brook, Jeffrey R.; Evans, Greg

    2017-08-01

    The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring program, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010-November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified through both methodologies, and both methodologies identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow, water, and biota samples collected

  17. Components and Context: Exploring Sources of Reading Difficulties for Language Minority Learners and Native English Speakers in Urban Schools

    Science.gov (United States)

    Kieffer, Michael J.; Vukovic, Rose K.

    2012-01-01

    Drawing on the cognitive and ecological domains within the componential model of reading, this longitudinal study explores heterogeneity in the sources of reading difficulties for language minority learners and native English speakers in urban schools. Students (N = 150) were followed from first through third grade and assessed annually on…

  18. Source apportionment of size-segregated atmospheric particles based on the major water-soluble components in Lecce (Italy)

    International Nuclear Information System (INIS)

    Contini, D.; Cesari, D.; Genga, A.; Siciliano, M.; Ielpo, P.; Guascito, M.R.; Conte, M.

    2014-01-01

    Atmospheric aerosols have potential effects on human health, on the radiation balance, on climate, and on visibility. The understanding of these effects requires detailed knowledge of aerosol composition and size distributions and of how the different sources contribute to particles of different sizes. In this work, aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI). Measurements were taken between February and October 2011 at an urban background site near Lecce (Apulia region, southeast of Italy). Samples were analysed to evaluate the concentrations of water-soluble ions (SO4^2-, NO3^-, NH4^+, Cl^-, Na^+, K^+, Mg^2+ and Ca^2+) and of water-soluble organic and inorganic carbon. The aerosols were characterised by two modes: an accumulation mode having a mass median diameter (MMD) of 0.35 ± 0.02 μm, representing 51 ± 4% of the aerosols, and a coarse mode (MMD = 4.5 ± 0.4 μm), representing 49 ± 4% of the aerosols. The data were used to estimate the losses in the impactor by comparison with a low-volume sampler. The average loss in the MOUDI-collected aerosol was 19 ± 2%, and the largest loss was observed for NO3^- (35 ± 10%). Significant losses were observed for Ca^2+ (16 ± 5%), SO4^2- (19 ± 5%) and K^+ (10 ± 4%), whereas the losses for Na^+ and Mg^2+ were negligible. Size-segregated source apportionment was performed using Positive Matrix Factorization (PMF), which was applied separately to the coarse (size interval 1-18 μm) and accumulation (size interval 0.056-1 μm) modes. The PMF model was able to reasonably reconstruct the concentration in each size range. The uncertainties in the source apportionment due to impactor losses were evaluated. In the accumulation mode, it was not possible to distinguish the traffic contribution from other combustion sources. In the coarse mode, it was not possible to efficiently separate nitrate from the contribution of crustal/resuspension origin

  19. Source apportionment of size-segregated atmospheric particles based on the major water-soluble components in Lecce (Italy)

    Energy Technology Data Exchange (ETDEWEB)

    Contini, D., E-mail: d.contini@isac.cnr.it [Istituto di Scienze dell' Atmosfera e del Clima, ISAC-CNR, Lecce (Italy); Cesari, D. [Istituto di Scienze dell' Atmosfera e del Clima, ISAC-CNR, Lecce (Italy); Genga, A.; Siciliano, M. [Dipartimento di Scienze e Tecnologie Biologiche e Ambientali, Università del Salento, Lecce (Italy); Ielpo, P. [Istituto di Scienze dell' Atmosfera e del Clima, ISAC-CNR, Lecce (Italy); Istituto di Ricerca Sulle Acque, IRSA-CNR, Bari (Italy); Guascito, M.R. [Dipartimento di Scienze e Tecnologie Biologiche e Ambientali, Università del Salento, Lecce (Italy); Conte, M. [Istituto di Scienze dell' Atmosfera e del Clima, ISAC-CNR, Lecce (Italy)

    2014-02-01

    Atmospheric aerosols have potential effects on human health, on the radiation balance, on climate, and on visibility. The understanding of these effects requires detailed knowledge of aerosol composition and size distributions and of how the different sources contribute to particles of different sizes. In this work, aerosol samples were collected using a 10-stage Micro-Orifice Uniform Deposit Impactor (MOUDI). Measurements were taken between February and October 2011 at an urban background site near Lecce (Apulia region, southeast of Italy). Samples were analysed to evaluate the concentrations of water-soluble ions (SO4^2-, NO3^-, NH4^+, Cl^-, Na^+, K^+, Mg^2+ and Ca^2+) and of water-soluble organic and inorganic carbon. The aerosols were characterised by two modes: an accumulation mode having a mass median diameter (MMD) of 0.35 ± 0.02 μm, representing 51 ± 4% of the aerosols, and a coarse mode (MMD = 4.5 ± 0.4 μm), representing 49 ± 4% of the aerosols. The data were used to estimate the losses in the impactor by comparison with a low-volume sampler. The average loss in the MOUDI-collected aerosol was 19 ± 2%, and the largest loss was observed for NO3^- (35 ± 10%). Significant losses were observed for Ca^2+ (16 ± 5%), SO4^2- (19 ± 5%) and K^+ (10 ± 4%), whereas the losses for Na^+ and Mg^2+ were negligible. Size-segregated source apportionment was performed using Positive Matrix Factorization (PMF), which was applied separately to the coarse (size interval 1-18 μm) and accumulation (size interval 0.056-1 μm) modes. The PMF model was able to reasonably reconstruct the concentration in each size range. The uncertainties in the source apportionment due to impactor losses were evaluated. In the accumulation mode, it was not possible to distinguish the traffic contribution from other combustion sources. In the coarse mode, it was not possible to

  20. Automated phase picker and source location algorithm for local distances using a single three component seismic station

    International Nuclear Information System (INIS)

    Saari, J.

    1989-12-01

    The paper describes procedures for the automatic location of local events using single-site, three-component (3c) seismogram records. Epicentral distance is determined from the time difference between P- and S-onsets. For onset time estimates a special phase picker algorithm is introduced: onset detection is accomplished by comparing a short-term average with a long-term average after multiplication of the north, east and vertical components of the recording. For epicentral distances up to 100 km, errors seldom exceed 5 km. The slowness vector, essentially the azimuth, is estimated independently using the Christoffersson et al. (1988) 'polarization' technique, although a priori knowledge of the P-onset time gives the best results. Differences between 'true' and observed azimuths are generally less than 12°. Practical examples demonstrate the viability of the procedures for automated 3c seismogram analysis. The results obtained compare favourably with those achieved by a miniarray of three stations. (orig.)
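    The picker described above rests on a short-term-average/long-term-average (STA/LTA) trigger. A minimal single-trace sketch follows; the paper's picker multiplies the three components before detection, and its tuned window lengths and threshold are not given here, so the parameters below are illustrative.

```python
# Minimal STA/LTA onset picker on a single trace; window lengths and the
# trigger threshold are illustrative, not the paper's tuned values.
import numpy as np

def sta_lta_pick(trace, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
    """Return the index of the first sample where STA/LTA exceeds threshold."""
    e = trace ** 2                                   # instantaneous energy
    nsta, nlta = int(sta_win * fs), int(lta_win * fs)
    sta = np.convolve(e, np.ones(nsta) / nsta, mode="same")
    lta = np.convolve(e, np.ones(nlta) / nlta, mode="same") + 1e-12
    ratio = sta / lta
    ratio[:nlta] = 0.0                               # ignore the LTA warm-up
    above = np.nonzero(ratio > threshold)[0]
    return above[0] if above.size else None

fs = 100.0                                           # synthetic demo trace
trace = np.concatenate([np.random.default_rng(7).normal(0, 1, 2000),
                        np.random.default_rng(8).normal(0, 5, 500)])
print("onset sample:", sta_lta_pick(trace, fs))
```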

  1. Sources of particulate matter components in the Athabasca oil sands region: investigation through a comparison of trace element measurement methodologies

    Directory of Open Access Journals (Sweden)

    C. Phillips-Smith

    2017-08-01

    Full Text Available The province of Alberta, Canada, is home to three oil sands regions which, combined, contain the third largest deposit of oil in the world. Of these, the Athabasca oil sands region is the largest. As part of Environment and Climate Change Canada's program in support of the Joint Canada-Alberta Implementation Plan for Oil Sands Monitoring, concentrations of trace elements in PM2.5 (particulate matter smaller than 2.5 µm in diameter) were measured through two campaigns that involved different methodologies: a long-term filter campaign and a short-term intensive campaign. In the long-term campaign, 24 h filter samples were collected once every 6 days over a 2-year period (December 2010–November 2012) at three air monitoring stations in the regional municipality of Wood Buffalo. For the intensive campaign (August 2013), hourly measurements were made with an online instrument at one air monitoring station; daily filter samples were also collected. The hourly and 24 h filter data were analyzed individually using positive matrix factorization. Seven emission sources of PM2.5 trace elements were thereby identified: two types of upgrader emissions, soil, haul road dust, biomass burning, and two sources of mixed origin. The upgrader emissions, soil, and haul road dust sources were identified by both methodologies; each methodology also identified a mixed source, but these exhibited more differences than similarities. The second upgrader emissions and biomass burning sources were only resolved by the hourly and filter methodologies, respectively. The similarity of the receptor modeling results from the two methodologies provided reassurance as to the identity of the sources. Overall, much of the PM2.5-related trace elements were found to be anthropogenic, or at least to be aerosolized through anthropogenic activities. These emissions may in part explain the previously reported higher levels of trace elements in snow.

  2. The MyHealthService approach for chronic disease management based on free open source software and low cost components.

    Science.gov (United States)

    Vognild, Lars K; Burkow, Tatjana M; Luque, Luis Fernandez

    2009-01-01

    In this paper we present an approach to building personal health services, supporting follow-up, physical exercise, health education, and psychosocial support for the chronically ill, based on free open source software and low-cost computers, mobile devices, and consumer health and fitness devices. We argue that this will lower the cost of such systems, which is important given the increasing number of people with chronic diseases and limited healthcare budgets.

  3. The Use of Principal Component Analysis for Source Identification of PM2.5 from Selected Urban and Regional Background Sites in Poland

    Science.gov (United States)

    Błaszczak, Barbara

    2018-01-01

    The paper reports the results of measurements of water-soluble ions and carbonaceous matter content in fine particulate matter (PM2.5), as well as the contributions of major sources to PM2.5. Daily PM2.5 samples were collected during the heating and non-heating seasons of the year 2013 at three different locations in Poland: Szczecin (urban background), Trzebinia (urban background) and Złoty Potok (regional background). The concentrations of PM2.5 and its related components exhibited clear spatiotemporal variability, with higher levels during the heating period. The share of total carbon (TC) in PM2.5 exceeded 40% and was primarily determined by fluctuations in the share of OC. Sulfates (SO₄²⁻), nitrates (NO₃⁻) and ammonium (NH₄⁺) dominated the ionic composition of PM2.5 and together accounted for 34% (Szczecin), 30% (Trzebinia) and 18% (Złoty Potok) of PM2.5 mass. Source apportionment analysis, performed with the PCA-MLRA model (Principal Component Analysis - Multilinear Regression Analysis), revealed that secondary aerosol, whose presence is related to the oxidation of gaseous precursors emitted from fuel combustion and biomass burning, had the largest contribution to observed PM2.5 concentrations. In addition, a contribution of traffic sources together with road dust resuspension was observed. The share of natural sources (sea spray, crustal dust) was generally lower.
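    As a rough illustration of the PCA-MLRA chain named above (principal component scores regressed against total PM2.5 mass), here is a simplified sketch on synthetic data. A full APCS-MLRA implementation rescales the scores to absolute values via a zero-concentration reference sample, which is omitted here; all data and the factor count are made up.

```python
# Simplified PCA-MLRA sketch: regress PM2.5 mass on principal component
# scores of the standardized species matrix. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.random((100, 8))                          # daily species concentrations
pm25 = X.sum(axis=1) + rng.normal(0, 0.1, 100)    # measured PM2.5 mass

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
mlr = LinearRegression().fit(scores, pm25)        # mass vs. factor scores

print("R^2 of mass reconstruction:", mlr.score(scores, pm25))
print("factor regression coefficients:", mlr.coef_)
```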

  4. The Use of Principal Component Analysis for Source Identification of PM2.5 from Selected Urban and Regional Background Sites in Poland

    Directory of Open Access Journals (Sweden)

    Błaszczak Barbara

    2018-01-01

    Full Text Available The paper reports the results of measurements of water-soluble ions and carbonaceous matter content in fine particulate matter (PM2.5), as well as the contributions of major sources to PM2.5. Daily PM2.5 samples were collected during the heating and non-heating seasons of the year 2013 at three different locations in Poland: Szczecin (urban background), Trzebinia (urban background) and Złoty Potok (regional background). The concentrations of PM2.5 and its related components exhibited clear spatiotemporal variability, with higher levels during the heating period. The share of total carbon (TC) in PM2.5 exceeded 40% and was primarily determined by fluctuations in the share of OC. Sulfates (SO₄²⁻), nitrates (NO₃⁻) and ammonium (NH₄⁺) dominated the ionic composition of PM2.5 and together accounted for ~34% (Szczecin), ~30% (Trzebinia) and ~18% (Złoty Potok) of PM2.5 mass. Source apportionment analysis, performed with the PCA-MLRA model (Principal Component Analysis – Multilinear Regression Analysis), revealed that secondary aerosol, whose presence is related to the oxidation of gaseous precursors emitted from fuel combustion and biomass burning, had the largest contribution to observed PM2.5 concentrations. In addition, a contribution of traffic sources together with road dust resuspension was observed. The share of natural sources (sea spray, crustal dust) was generally lower.

  5. Spatiotemporal analysis of single-trial EEG of emotional pictures based on independent component analysis and source location

    Science.gov (United States)

    Liu, Jiangang; Tian, Jie

    2007-03-01

    The present study combined the Independent Component Analysis (ICA) and low-resolution brain electromagnetic tomography (LORETA) algorithms to identify the spatial distribution and time course of single-trial EEG record differences between neural responses to emotional stimuli vs. the neutral. Single-trial multichannel (129-sensor) EEG records were collected from 21 healthy, right-handed subjects viewing emotional (pleasant/unpleasant) and neutral pictures selected from the International Affective Picture System (IAPS). For each subject, the single-trial EEG records of each emotional picture were concatenated with the neutral, and a three-step analysis was applied to each of them in the same way. First, ICA was performed to decompose each concatenated single-trial EEG record into temporally independent and spatially fixed components, namely independent components (ICs). The ICs associated with artifacts were isolated. Second, a clustering analysis classified, across subjects, the temporally and spatially similar ICs into the same clusters, in which a nonparametric permutation test for Global Field Power (GFP) of IC projection scalp maps identified significantly different temporal segments of each emotional condition vs. neutral. Third, the brain regions accounting for those significant segments were localized spatially with LORETA analysis. In each cluster, a voxel-by-voxel randomization test identified significantly different brain regions between each emotional condition vs. the neutral. Compared to the neutral, both emotional picture types elicited activation in the visual, temporal, ventromedial and dorsomedial prefrontal cortex and anterior cingulate gyrus. In addition, the pleasant pictures activated the left middle prefrontal cortex and the posterior precuneus, while the unpleasant pictures activated the right orbitofrontal cortex, posterior cingulate gyrus and somatosensory region. Our results were well consistent with other functional imaging studies.

  6. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been demonstrated in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of
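    For orientation, the baseline these dynamic procedures build on is the standard Smagorinsky model, nu_t = (C_s Δ)² |S̄| with |S̄| = sqrt(2 S̄ᵢⱼS̄ᵢⱼ); the scale-independent and scale-dependent procedures then estimate C_s (and its variation across filter scales) from test-filtered fields instead of fixing it. A minimal uniform-grid sketch of the base model, with an illustrative constant C_s:

```python
# Standard Smagorinsky eddy viscosity on a uniform grid. The dynamic and
# scale-dependent procedures replace the constant cs with a filtered estimate.
import numpy as np

def smagorinsky_nu_t(u, v, w, dx, cs=0.16):
    """u, v, w: 3D velocity arrays on a uniform grid of spacing dx."""
    dudx, dudy, dudz = np.gradient(u, dx)
    dvdx, dvdy, dvdz = np.gradient(v, dx)
    dwdx, dwdy, dwdz = np.gradient(w, dx)
    s12 = 0.5 * (dudy + dvdx)                 # off-diagonal strain components
    s13 = 0.5 * (dudz + dwdx)
    s23 = 0.5 * (dvdz + dwdy)
    s_mag = np.sqrt(2.0 * (dudx**2 + dvdy**2 + dwdz**2
                           + 2.0 * (s12**2 + s13**2 + s23**2)))
    return (cs * dx) ** 2 * s_mag

x = np.linspace(0, 2 * np.pi, 16)             # Taylor-Green-like demo field
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u, v, w = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y), np.zeros_like(X)
print("mean nu_t:", smagorinsky_nu_t(u, v, w, x[1] - x[0]).mean())
```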

  7. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    International Nuclear Information System (INIS)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F.; Piro, Anthony L.

    2015-01-01

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all-or-nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.
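    The broadband light curves mentioned above follow from the blackbody assumption: given a photospheric radius and temperature, the band luminosity is the Planck function integrated over the band. A sketch of that step in cgs units; the band edges, radius, and temperature below are illustrative, not SNEC outputs.

```python
# Blackbody band luminosity: integrate the Planck function over a wavelength
# band for a photosphere of radius R and temperature T (all values in cgs).
import numpy as np
from scipy.integrate import quad

h, c, k_B = 6.626e-27, 2.998e10, 1.381e-16   # Planck, light speed, Boltzmann

def planck_lambda(lam, T):
    """Planck spectral radiance B_lambda [erg / s / cm^2 / cm / sr]."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k_B * T))

def band_luminosity(R, T, lam1, lam2):
    """Luminosity emitted between wavelengths lam1 and lam2 (cm)."""
    integral, _ = quad(planck_lambda, lam1, lam2, args=(T,))
    return 4 * np.pi**2 * R**2 * integral     # pi*B flux over a sphere 4*pi*R^2

R, T = 3.5e13, 6000.0                          # ~500 R_sun photosphere, 6000 K
print("500-600 nm luminosity [erg/s]:",
      band_luminosity(R, T, 5.0e-5, 6.0e-5))
```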

  8. LIGHT CURVES OF CORE-COLLAPSE SUPERNOVAE WITH SUBSTANTIAL MASS LOSS USING THE NEW OPEN-SOURCE SUPERNOVA EXPLOSION CODE (SNEC)

    Energy Technology Data Exchange (ETDEWEB)

    Morozova, Viktoriya; Renzo, Mathieu; Ott, Christian D.; Clausen, Drew; Couch, Sean M.; Ellis, Justin; Roberts, Luke F. [TAPIR, Walter Burke Institute for Theoretical Physics, MC 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Piro, Anthony L., E-mail: morozvs@tapir.caltech.edu [Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 (United States)

    2015-11-20

    We present the SuperNova Explosion Code (SNEC), an open-source Lagrangian code for the hydrodynamics and equilibrium-diffusion radiation transport in the expanding envelopes of supernovae. Given a model of a progenitor star, an explosion energy, and an amount and distribution of radioactive nickel, SNEC generates the bolometric light curve, as well as the light curves in different broad bands assuming blackbody emission. As a first application of SNEC, we consider the explosions of a grid of 15 M⊙ (at zero-age main sequence, ZAMS) stars whose hydrogen envelopes are stripped to different extents and at different points in their evolution. The resulting light curves exhibit plateaus with durations of ∼20–100 days if ≳1.5–2 M⊙ of hydrogen-rich material is left and no plateau if less hydrogen-rich material is left. If these shorter plateau lengths are not seen for SNe IIP in nature, it suggests that, at least for ZAMS masses ≲20 M⊙, hydrogen mass loss occurs as an all-or-nothing process. This perhaps points to the important role binary interactions play in generating the observed mass-stripped supernovae (i.e., Type Ib/c events). These light curves are also unlike what is typically seen for SNe IIL, arguing that simply varying the amount of mass loss cannot explain these events. The most stripped models begin to show double-peaked light curves similar to what is often seen for SNe IIb, confirming previous work that these supernovae can come from progenitors that have a small amount of hydrogen and a radius of ∼500 R⊙.

  9. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  10. Implementation of constitutive equations for creep damage mechanics into the ABAQUS finite element code - some practical cases in high temperature component design and life assessment

    International Nuclear Information System (INIS)

    Segle, P.; Samuelson, L.Aa.; Andersson, Peder; Moberg, F.

    1996-01-01

    Constitutive equations for creep damage mechanics are implemented into the finite element program ABAQUS using a user-supplied subroutine, UMAT. A modified Kachanov-Rabotnov constitutive equation which accounts for inhomogeneity in creep damage is used. With the user-defined material, a number of benchmark tests are analyzed for verification. In the cases where analytical solutions exist, the numerical results agree very well. In other cases, the creep damage evolution response appears to be realistic in comparison with laboratory creep tests. The appropriateness of using the creep damage mechanics concept in the design and life assessment of high temperature components is demonstrated. 18 refs
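    As a sketch of the kind of model involved, the classical Kachanov-Rabotnov equations couple a Norton-type creep rate with a damage evolution law; the paper's modified form adds an inhomogeneity treatment not reproduced here. A crude explicit-Euler integration with illustrative constants (a real implementation would use adaptive substepping near rupture):

```python
# Explicit time integration of the classical Kachanov-Rabotnov equations:
#   d(eps)/dt  = A * (sigma / (1 - omega))**n        (creep strain rate)
#   d(omega)/dt = B * sigma**chi / (1 - omega)**phi  (damage evolution)
# Material constants are illustrative, not the paper's calibrated values.
A, n = 1.0e-13, 5.0               # Norton creep constants
B, chi, phi = 1.0e-12, 4.0, 6.0   # damage constants
sigma = 100.0                     # applied stress [MPa], held constant
dt, t, eps, omega = 1.0, 0.0, 0.0, 0.0   # time step [h], state variables

while omega < 0.99:               # integrate until near rupture (omega -> 1)
    eps += dt * A * (sigma / (1.0 - omega)) ** n
    omega += dt * B * sigma ** chi / (1.0 - omega) ** phi
    t += dt

print(f"rupture after ~{t:.0f} h, accumulated creep strain {eps:.3f}")
```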

  11. Studying the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Zaidman, A.; Van Rompaey, B.; Van Deursen, A.; Demeyer, S.

    2010-01-01

    Many software production processes advocate rigorous development testing alongside functional code writing, which implies that both test code and production code should co-evolve. To gain insight into the nature of this co-evolution, this paper proposes three views (realized by a tool called TeMo).

  12. Source Code Analysis Laboratory (SCALe)

    Science.gov (United States)

    2012-04-01

    products (including services) and processes. The agency has also published ISO/IEC 17025:2005, General Requirements for the Competence of Testing... SCALe undertakes. Testing and calibration laboratories that comply with ISO/IEC 17025 also operate in accordance with ISO 9001. • NIST National... assessed by the accreditation body against all of the requirements of ISO/IEC 17025:2005, General requirements for the competence of testing and

  13. Improved separability of dipole sources by tripolar versus conventional disk electrodes: a modeling study using independent component analysis.

    Science.gov (United States)

    Cao, H; Besio, W; Jones, S; Medvedev, A

    2009-01-01

    Tripolar electrodes have been shown to have less mutual information and higher spatial resolution than disc electrodes. In this work, a four-layer anisotropic concentric spherical head computer model was programmed. Four configurations of time-varying dipole signals were then used to generate the scalp surface signals that would be obtained with tripolar and disc electrodes, and four important EEG artifacts were tested: eye blinking, cheek movements, jaw movements, and talking. Finally, a fast fixed-point algorithm was used for signal independent component analysis (ICA). The results show that signals from tripolar electrodes yielded better ICA separation results than signals from disc electrodes for EEG signals with these four types of artifacts.
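    The fast fixed-point algorithm referenced above is commonly available as FastICA; a minimal sketch of unmixing synthetic electrode mixtures follows. The sources and mixing matrix are made up for illustration, not the paper's head-model signals.

```python
# Fast fixed-point ICA: unmix synthetic "dipole" sources from linear
# electrode mixtures.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 2000)
sources = np.c_[np.sin(2 * np.pi * 7 * t),           # dipole-like signal
                np.sign(np.sin(2 * np.pi * 3 * t)),  # blink-like artifact
                rng.laplace(size=t.size)]            # noise source
A = rng.random((3, 3))                               # electrode mixing matrix
observed = sources @ A.T                             # what the electrodes see

ica = FastICA(n_components=3, random_state=0)
recovered = ica.fit_transform(observed)              # estimated sources
print("estimated mixing matrix shape:", ica.mixing_.shape)
```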

  14. Source heterogeneity for the major components of ~3.7 Ga banded iron formations (Isua Greenstone Belt, Western Greenland)

    DEFF Research Database (Denmark)

    Frei, Robert; Polat, Ali

    2006-01-01

    of the protocrustal landmass (eNd(3.7 Ga) + 1.6). The validity of two different and periodically interacting water masses (an essentially two-component mixing system) in the deposition of alternating iron- and silica-rich layers is also reflected by systematic trends in germanium (Ge)/silicon (Si) ratios. These suggest that significant amounts of silica were derived from unexposed and/or destroyed mafic Hadean landmass, unlike iron, which probably originated from oceanic crust following hydrothermal alteration by deep percolating seawater. Ge/Si distributional patterns in the early Archean Isua BIF are similar... of the depositional environment and to the understanding of depositional mechanisms of these earliest chemical sediments. Rare earth element (REE)-yttrium (Y) patterns of the individual mesobands show features of modern seawater with diagnostic cerium (Ce/Ce*), praseodymium (Pr/Pr*) and Y/holmium (Ho) anomalies. Very...

  15. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  16. Thermodynamic study of the MWG system/components and measurement of the oxygen partial pressure in the heat source capsule

    International Nuclear Information System (INIS)

    David, D.J.

    1980-01-01

    A thermodynamic study of the Milliwatt Generator heat source capsule was performed to determine the effects of the oxide fuel on container materials at elevated temperatures in order to evaluate the factors affecting embrittlement of T-111 alloy. The study indicates that relatively slow oxidation of the T-111 of the capsule occurs during pretreatment. Yttrium added to the ²³⁸PuO₂ fuel charge is functioning in its designed role as an oxygen getter and is stabilizing at an O/Pu ratio of 1.75. The free energy of formation of hafnium oxide has been measured and found to be -70632 cal/mole; this suggests that the ability of hafnium to strongly function as an oxygen getter may be largely determined by the kinetics, and the free energy may play a lesser role.

  17. Design and test of component circuits of an integrated quantum voltage noise source for Johnson noise thermometry

    Science.gov (United States)

    Yamada, Takahiro; Maezawa, Masaaki; Urano, Chiharu

    2015-11-01

    We present the design and testing of a pseudo-random number generator (PRNG) and a variable pulse number multiplier (VPNM), which are digital circuit subsystems in an integrated quantum voltage noise source for Johnson noise thermometry. Well-defined, calculable pseudo-random patterns of single flux quantum pulses are synthesized with the PRNG and multiplied digitally with the VPNM. The circuit implementation in rapid single flux quantum technology required practical circuit scales and bias currents: 279 junctions and 33 mA for the PRNG, and 1677 junctions and 218 mA for the VPNM. We confirmed the circuit operation with sufficiently wide margins, 80-120%, with respect to the designed bias currents.
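    The record does not give the generator's internals; as general background, a calculable pseudo-random bit pattern of this kind is conventionally produced by a maximal-length linear feedback shift register. A generic software sketch (the 7-bit register and tap positions are an assumption for illustration, not the on-chip design):

```python
# Generic Fibonacci LFSR producing a calculable pseudo-random bit pattern.
# Taps at bit positions 7 and 6 of a 7-bit register give a maximal-length
# sequence of period 127 (an illustrative choice, not the paper's circuit).
def lfsr_bits(taps=(7, 6), seed=0b1111111, nbits=7):
    """Yield the output bit stream of a Fibonacci LFSR (taps are 1-indexed)."""
    state = seed
    while True:
        fb = 0
        for t in taps:                        # XOR of the tapped bit positions
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
        yield fb                              # the freshly inserted bit

gen = lfsr_bits()
pattern = [next(gen) for _ in range(127)]     # one full maximal-length period
print(sum(pattern), "ones in", len(pattern), "bits")   # 64 ones, 63 zeros
```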

  18. Fast Computation of Pulse Height Spectra Using SGRD Code

    Directory of Open Access Journals (Sweden)

    Humbert Philippe

    2017-01-01

    Full Text Available SGRD (Spectroscopy, Gamma rays, Rapid, Deterministic) is a code used for fast calculation of the gamma ray spectrum produced by a spherical shielded source and measured by a detector. The photon source lines originate from the radioactive decay of the unstable isotopes. The emission rate and spectrum of these primary sources are calculated using the DARWIN code. The leakage spectrum is separated into two parts: the uncollided component is transported by ray-tracing, and the scattered component is calculated using a multigroup discrete ordinates method. The pulse height spectrum is then simulated by folding the leakage spectrum with the detector response functions, which are pre-calculated using the MCNP5 code for each considered detector type. An application to the simulation of the gamma spectrum produced by a natural uranium ball coated with plexiglass and measured using a NaI detector is presented.
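    In discrete multigroup form, the folding step described above is just the detector response matrix applied to the leakage spectrum. A toy sketch follows; the response matrix here is a Gaussian-broadened diagonal and the leakage spectrum is synthetic, standing in for MCNP5-derived response functions and a DARWIN source.

```python
# Folding a leakage spectrum with a (toy) detector response matrix.
import numpy as np

n_groups = 64
energies = np.linspace(0.05, 3.0, n_groups)          # MeV group centers
leakage = np.exp(-energies) + 0.5 * (np.abs(energies - 0.662) < 0.05)

# Toy response: each incident group deposits a Gaussian-smeared full-energy
# peak, with resolution worsening as sqrt(E).
sigma = 0.03 * np.sqrt(energies)
R = np.exp(-0.5 * ((energies[:, None] - energies[None, :]) / sigma[None, :])**2)
R /= R.sum(axis=0, keepdims=True)                    # normalize each column

pulse_height = R @ leakage                           # folded spectrum
print("pulse height spectrum bins:", pulse_height.shape)
```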

  19. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    International Nuclear Information System (INIS)

    Hiatt, Jessica R.; Davis, Stephen D.; Rivard, Mark J.

    2015-01-01

    ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions
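    For context, the TG-43 quantities discussed in this record (radial dose function, 2D anisotropy function) combine in the standard AAPM two-dimensional dose-rate formalism, quoted here as general background rather than taken from the paper itself:

```latex
% AAPM TG-43 two-dimensional dose-rate formalism (general background):
\dot{D}(r,\theta) = S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta)
% S_K: air-kerma strength; Lambda: dose-rate constant; G_L: line-source
% geometry function; g_L: radial dose function; F: 2D anisotropy function;
% reference point (r_0, theta_0) = (1 cm, 90 degrees).
```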

  20. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model

    Energy Technology Data Exchange (ETDEWEB)

    Hiatt, Jessica R. [Department of Radiation Oncology, Rhode Island Hospital, The Warren Alpert Medical School of Brown University, Providence, Rhode Island 02903 (United States); Davis, Stephen D. [Department of Medical Physics, McGill University Health Centre, Montreal, Quebec H3G 1A4 (Canada); Rivard, Mark J., E-mail: mark.j.rivard@gmail.com [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States)

    2015-06-15

    function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  1. A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model.

    Science.gov (United States)

    Hiatt, Jessica R; Davis, Stephen D; Rivard, Mark J

    2015-06-01

    and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and for large distances approached 1.014. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but exceeded 5% for θ < 40° at close distances to the sheath and exceeded 15% for θ > 140°, even at large distances. Photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was pronounced at the lowest energies. A decrease in photon fluence with increase in polar angle was also observed and was attributed to the silver epoxy component. Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria as recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.

  2. Erythrocytes and cell line-based assays to evaluate the cytoprotective activity of antioxidant components obtained from natural sources.

    Science.gov (United States)

    Botta, Albert; Martínez, Verónica; Mitjans, Montserrat; Balboa, Elena; Conde, Enma; Vinardell, M Pilar

    2014-02-01

    Oxidative stress can damage cellular components including DNA, proteins or lipids, and may cause several skin diseases. To protect from this damage and to address consumers' interest in natural products, compounds obtained from algal and plant extracts are being proposed as antioxidants to be incorporated into formulations. Thus, the development of reliable, quick and economical in vitro methods to study the cytoactivity of these products is a meaningful requirement. A combination of erythrocyte and cell line-based assays was performed on two extracts from Sargassum muticum, one from Ulva lactuca, and one from Castanea sativa. Antioxidant properties were assessed in erythrocytes by the TBARS and AAPH assays, and cytotoxicity and antioxidant cytoprotection were assessed in HaCaT and 3T3 cells by the MTT assay. The extracts showed no antioxidant activity in the TBARS assay, whereas their antioxidant capacity in the AAPH assay was demonstrated. In the cytotoxicity assays, the extracts showed low toxicity, with IC50 values higher than 200 μg/mL. The C. sativa extract showed the most favourable antioxidant properties in the antioxidant cytoprotection assays, while the S. muticum and U. lactuca extracts showed slight antioxidant activity. This battery of methods was useful to characterise the biological antioxidant properties of these natural extracts. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Coupled geochemical and solute transport code development

    International Nuclear Information System (INIS)

    Morrey, J.R.; Hostetler, C.J.

    1985-01-01

    A number of coupled geochemical hydrologic codes have been reported in the literature. Some of these codes have directly coupled the source-sink term to the solute transport equation. The current consensus seems to be that directly coupling hydrologic transport and chemical models through a series of interdependent differential equations is not feasible for multicomponent problems with complex geochemical processes (e.g., precipitation/dissolution reactions). A two-step process appears to be the required method of coupling codes for problems where a large suite of chemical reactions must be monitored. The two-step structure requires that the source-sink term in the transport equation be supplied by a geochemical code rather than by an analytical expression. We have developed a one-dimensional two-step coupled model designed to calculate relatively complex geochemical equilibria (CTM1D). Our geochemical module implements a Newton-Raphson algorithm to solve heterogeneous geochemical equilibria involving up to 40 chemical components and 400 aqueous species. The geochemical module was designed to be efficient and compact. A revised version of the MINTEQ code is used as the parent geochemical code
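    The two-step structure described above amounts to operator splitting: each time step alternates a transport solve with a cell-by-cell geochemical equilibration. A minimal 1D sketch with a placeholder chemistry routine (the real module solves mass-action equations by Newton-Raphson; grid, velocity, and step sizes are illustrative):

```python
# Operator-splitting sketch of two-step coupling on a 1D grid: advect each
# component, then hand every cell to a chemistry solver.
import numpy as np

def advect(c, velocity, dx, dt):
    """First-order upwind advection of one component (velocity > 0)."""
    c_new = c.copy()
    c_new[1:] -= velocity * dt / dx * (c[1:] - c[:-1])
    return c_new

def equilibrate(cell_concentrations):
    """Placeholder chemistry step; a real module solves mass-action systems."""
    return np.clip(cell_concentrations, 0.0, None)

ncells, ncomp = 100, 3
conc = np.zeros((ncomp, ncells))
conc[:, 0] = 1.0                                     # inlet pulse
for step in range(200):                              # time marching
    conc = np.array([advect(c, 1.0e-5, 0.1, 100.0) for c in conc])
    for j in range(ncells):                          # chemistry, cell by cell
        conc[:, j] = equilibrate(conc[:, j])
print("outlet concentrations:", conc[:, -1])
```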

  4. Study of the source-detector system geometry using the MCNP-X code in the flowrate measurement with radioactive tracers

    Energy Technology Data Exchange (ETDEWEB)

    Avilan Puertas, Eddie, E-mail: epuertas@nuclear.ufrj.br [Universidad Central de Venezuela (UCV), Facultad de Ingenieria, Departamento de Fisica Aplicada, Caracas (Venezuela, Bolivarian Republic of); Braz, Delson, E-mail: delson@lin.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Brandao, Luis E.; Salgado, Cesar M., E-mail: brandao@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    The use of radioactive tracers for flow rate measurement applies to a great variety of situations; however, the accuracy of the technique is highly dependent on an adequate choice of the experimental measurement conditions. To measure the flow rate of fluids in partially filled ducts, it is necessary to measure the fluid flow velocity and the fluid height. The flow velocity can be measured with the cross-correlation function, and the fluid level with a fluid level meter system. One of the error factors when measuring flow rate is the correct setting of the source-detector pair of the fluid level meter system. The goal of the present work is to establish, by means of MCNP-X code simulations, the experimental parameters to measure the fluid level. The experimental tests will be performed in a flow rate system consisting of a 10 mm diameter acrylic tube, with water and oil as fluids. The radioactive tracer to be used is ⁸²Br, and for the detection two 1″ NaI(Tl) scintillator detectors will be employed, shielded with collimators of 0.5 cm and 1 cm circular aperture diameter. (author)
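    The cross-correlation velocity measurement mentioned above can be sketched directly: the lag that maximizes the cross-correlation of the two detector signals is the tracer transit time, and velocity is detector spacing divided by that lag. Signals, noise level, and spacing below are synthetic stand-ins for the NaI(Tl) data.

```python
# Transit-time velocity estimate from two detector signals via the lag that
# maximizes their cross-correlation.
import numpy as np

fs, spacing = 100.0, 0.50          # sampling rate [Hz], detector spacing [m]
t = np.arange(0, 30, 1 / fs)
pulse = np.exp(-0.5 * ((t - 10) / 0.8) ** 2)       # tracer cloud at detector 1
delay_s = 2.5                                       # true transit time [s]
sig1 = pulse + 0.05 * np.random.default_rng(3).normal(size=t.size)
sig2 = (np.interp(t - delay_s, t, pulse)            # delayed copy at detector 2
        + 0.05 * np.random.default_rng(4).normal(size=t.size))

xcorr = np.correlate(sig2 - sig2.mean(), sig1 - sig1.mean(), mode="full")
lag = (np.argmax(xcorr) - (t.size - 1)) / fs        # lag in seconds
print(f"transit time {lag:.2f} s -> velocity {spacing / lag:.3f} m/s")
```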

  5. Design of Smart Ion-Selective Electrode Arrays Based on Source Separation through Nonlinear Independent Component Analysis

    Directory of Open Access Journals (Sweden)

    Duarte L.T.

    2014-03-01

    Full Text Available The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that have never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopted a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and thus can be used to perform off-line analysis. The validity of our approach is supported by experiments where the mixing model parameters were extracted from actual data.

  6. Neutron and photon measurements through concrete from a 15 GeV electron beam on a target-comparison with models and calculations. [Intermediate energy source term, Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Jenkins, T M [Stanford Linear Accelerator Center, CA (USA)

    1979-02-15

    Measurements of neutron and photon dose equivalents from a 15 GeV electron beam striking an iron target inside a scale model of a PEP IR hall are described, and compared with analytic-empirical calculations and with the Monte Carlo code, MORSE. The MORSE code is able to predict both absolute neutron and photon dose equivalents for geometries where the shield is relatively thin, but fails as the shield thickness is increased. An intermediate energy source term is postulated for analytic-empirical neutron shielding calculations to go along with the giant resonance and high energy terms, and a new source term due to neutron capture is postulated for analytic-empirical photon shielding calculations. The source strengths for each energy source term, and each type, are given from analysis of the measurements.

  7. Calculus of the Power Spectral Density of Ultra Wide Band Pulse Position Modulation Signals Coded with Totally Flipped Code

    Directory of Open Access Journals (Sweden)

    DURNEA, T. N.

    2009-02-01

    Full Text Available UWB-PPM systems were noted to have a power spectral density (p.s.d.) consisting of a continuous portion and a line spectrum, which is composed of energy components placed at discrete frequencies. These components are the major source of interference to narrowband systems operating in the same frequency interval and prevent harmless coexistence of UWB-PPM and narrowband systems. A new code, denoted as the Totally Flipped Code (TFC), is applied to them in order to eliminate these discrete spectral components. The coded signal carries the information in the pulse position and has its amplitude coded to generate a continuous p.s.d. We have designed the code and calculated the power spectral density of the coded signals. The power spectrum has no discrete components and its envelope is largely flat inside the bandwidth, with a maximum at its center and a null at DC. These characteristics make this code well suited for implementation in UWB systems based on PPM-type modulation, as it assures a continuous spectrum and preserves PPM modulation performance.

  8. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.

  9. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of a coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case where the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.

  10. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    Energy Technology Data Exchange (ETDEWEB)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Grimm, Simon L. [University of Bern, Space Research and Planetary Sciences, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Mordasini, Christoph; Oreshenko, Maria; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Bonnefoy, Mickaël [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Ehrenreich, David, E-mail: baptiste.lavie@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Observatoire de l’Université de Genève, 51 chemin des Maillettes, 1290, Sauverny (Switzerland)

    2017-09-01

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).

  11. Simulation of equivalent dose due to accidental electron beam loss in Indus-1 and Indus-2 synchrotron radiation sources using FLUKA code

    International Nuclear Information System (INIS)

    Sahani, P.K.; Dev, Vipin; Singh, Gurnam; Haridas, G.; Thakkar, K.K.; Sarkar, P.K.; Sharma, D.N.

    2008-01-01

    Indus-1 and Indus-2 are two synchrotron radiation sources at the Raja Ramanna Centre for Advanced Technology (RRCAT), India. The stored electron energies in Indus-1 and Indus-2 are 450 MeV and 2.5 GeV, respectively. During operation of the storage ring, accidental electron beam loss may occur in addition to normal beam losses. The bremsstrahlung radiation produced by these beam losses creates a major radiation hazard in such high energy electron accelerators. FLUKA, the Monte Carlo radiation transport code, was used to simulate the accidental beam loss. The simulation was carried out to estimate the equivalent dose likely to be received by a trapped person close to the storage ring. Depth dose profiles in a water phantom for 450 MeV and 2.5 GeV electron beams were generated, from which the percentage energy absorbed in a 30 cm water phantom (analogous to the human body) was calculated. The simulation showed that the percentage energy deposition in the phantom is about 19% for 450 MeV electrons and 4.3% for 2.5 GeV electrons. The dose build-up factors in the 30 cm water phantom for 450 MeV and 2.5 GeV electron beams are 1.85 and 2.94, respectively. Based on the depth dose profiles, dose equivalent indices of 0.026 Sv and 1.08 Sv are likely to be received by a trapped person near the storage ring in Indus-1 and Indus-2, respectively. (author)

  12. Twenty-fifth water reactor safety information meeting: Proceedings. Volume 1: Plenary sessions; Pressure vessel research; BWR strainer blockage and other generic safety issues; Environmentally assisted degradation of LWR components; Update on severe accident code improvements and applications

    International Nuclear Information System (INIS)

    Monteleone, S.

    1998-03-01

    This three-volume report contains papers presented at the conference. The papers are printed in the order of their presentation in each session and describe progress and results of programs in nuclear safety research conducted in this country and abroad. Foreign participation in the meeting included papers presented by researchers from France, Japan, Norway, and Russia. The titles of the papers and the names of the authors have been updated and may differ from those that appeared in the final program of the meeting. This volume contains the following information: (1) plenary sessions; (2) pressure vessel research; (3) BWR strainer blockage and other generic safety issues; (4) environmentally assisted degradation of LWR components; and (5) update on severe accident code improvements and applications. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database

  13. Twenty-fifth water reactor safety information meeting: Proceedings. Volume 1: Plenary sessions; Pressure vessel research; BWR strainer blockage and other generic safety issues; Environmentally assisted degradation of LWR components; Update on severe accident code improvements and applications

    Energy Technology Data Exchange (ETDEWEB)

    Monteleone, S. [comp.] [Brookhaven National Lab., Upton, NY (United States)

    1998-03-01

    This three-volume report contains papers presented at the conference. The papers are printed in the order of their presentation in each session and describe progress and results of programs in nuclear safety research conducted in this country and abroad. Foreign participation in the meeting included papers presented by researchers from France, Japan, Norway, and Russia. The titles of the papers and the names of the authors have been updated and may differ from those that appeared in the final program of the meeting. This volume contains the following information: (1) plenary sessions; (2) pressure vessel research; (3) BWR strainer blockage and other generic safety issues; (4) environmentally assisted degradation of LWR components; and (5) update on severe accident code improvements and applications. Selected papers have been indexed separately for inclusion in the Energy Science and Technology Database.

  14. Statistically based uncertainty analysis for ranking of component importance in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor

    International Nuclear Information System (INIS)

    Wilson, G.E.

    1992-01-01

    The Analytic Hierarchy Process (AHP) has been used to help determine the importance of components and phenomena in thermal-hydraulic safety analyses of nuclear reactors. The AHP results are based, in part, on expert opinion. Therefore, it is prudent to evaluate the uncertainty of the AHP ranks of importance. Prior applications have addressed uncertainty with experimental data comparisons and bounding sensitivity calculations. These methods work well when a sufficient experimental database exists to justify the comparisons. However, in the case of limited or no experimental data, the size of the uncertainty is normally made conservatively large. Accordingly, the author has taken another approach, that of performing a statistically based uncertainty analysis. The new work is based on prior evaluations of the importance of components and phenomena in the thermal-hydraulic safety analysis of the Advanced Neutron Source Reactor (ANSR), a new facility now in the design phase. The uncertainty during large-break loss-of-coolant and decay heat removal scenarios is estimated by assigning a probability distribution function (pdf) to the potential error in the initial expert estimates of pair-wise importance between the components. Using a Monte Carlo sampling technique, the error pdfs are propagated through the AHP software solutions to determine a pdf of uncertainty in the system-wide importance of each component. To enhance the generality of the results, a study of one other problem having a different number of elements is reported, as are the effects of a larger assumed pdf error in the expert ranks. Validation of the Monte Carlo sample size and repeatability are also documented
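    The Monte Carlo propagation described above can be sketched directly: perturb each expert pairwise comparison with a random error, recompute the AHP principal-eigenvector weights, and collect the resulting distribution. The 4x4 base matrix and the log-normal error size below are illustrative assumptions, not the ANSR study's inputs.

```python
# Monte Carlo propagation of expert-judgment error through AHP weights.
import numpy as np

rng = np.random.default_rng(5)
base = np.array([[1.0, 3.0, 5.0, 7.0],
                 [1/3, 1.0, 3.0, 5.0],
                 [1/5, 1/3, 1.0, 3.0],
                 [1/7, 1/5, 1/3, 1.0]])      # expert pairwise importances

def ahp_weights(m):
    """Normalized principal eigenvector of a pairwise comparison matrix."""
    vals, vecs = np.linalg.eig(m)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

samples = []
for _ in range(2000):
    m = np.ones_like(base)
    for i in range(4):
        for j in range(i + 1, 4):
            m[i, j] = base[i, j] * rng.lognormal(0.0, 0.2)
            m[j, i] = 1.0 / m[i, j]           # keep reciprocal symmetry
    samples.append(ahp_weights(m))

samples = np.array(samples)
print("mean weights:", samples.mean(axis=0))
print("95% intervals:", np.percentile(samples, [2.5, 97.5], axis=0))
```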

  15. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    to a stream of equally-likely symbols so as to recover the original stream in the event of errors. ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x, written as ...
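    Since the record above survives only as fragments, here is the standard construction the article discusses: Huffman coding builds an optimal prefix code by repeatedly merging the two least probable subtrees. The probability table is an illustrative example, not taken from the article.

```python
# Standard Huffman code construction over a symbol-probability table, using
# a heap to repeatedly merge the two least probable subtrees.
import heapq

def huffman_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> bit string."""
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(sorted(probs.items()))]
    heapq.heapify(heap)
    counter = len(heap)                       # tiebreaker keeps tuples orderable
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)       # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# e.g. {'a': '0', 'b': '10', 'c': '110', 'd': '111'} (0/1 labels may swap)
```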

  16. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  17. Calculations of fuel burn up and radionuclide inventories in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION codes

    International Nuclear Information System (INIS)

    Khattab, K.

    2005-01-01

    The WIMSD4 code is used to generate the fuel group constants and the infinite multiplication factor as a function of the reactor operating time for 10, 20, and 30 kW operating power levels. The uranium burn up rate and burn up percentage, the amounts of the plutonium isotopes, the concentrations and radioactivities of the fission products and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core are calculated using the WIMSD4 code as well. The CITATION code is used to calculate the changes in the effective multiplication factor of the reactor. (author)

  18. Efficient three-dimensional resist profile-driven source mask optimization optical proximity correction based on Abbe-principal component analysis and Sylvester equation

    Science.gov (United States)

    Lin, Pei-Chun; Yu, Chun-Chang; Chen, Charlie Chung-Ping

    2015-01-01

    As one of the critical stages of a very-large-scale integration fabrication process, postexposure bake (PEB) plays a crucial role in determining the final three-dimensional (3-D) resist profiles and lessening standing wave effects. However, full 3-D chemically amplified resist simulation is not widely adopted during postlayout optimization due to the long run-time and huge memory usage. An efficient simulation method, based on the Sylvester equation and the Abbe-principal component analysis method, is proposed to simulate the PEB while considering standing wave effects and resolution enhancement techniques such as source mask optimization and subresolution assist features. Simulation results show that our algorithm is 20× faster than the conventional Gaussian convolution method.
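    The Sylvester equation named above, AX + XB = Q, has a direct dense solver in SciPy; the toy check below stands in for the paper's PEB diffusion step. The matrices are random and made diagonally dominant only so the system is well conditioned.

```python
# Direct solution of the Sylvester equation A X + X B = Q with SciPy.
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(6)
A = rng.random((50, 50)) + 50 * np.eye(50)   # diagonally dominant for stability
B = rng.random((50, 50)) + 50 * np.eye(50)
Q = rng.random((50, 50))

X = solve_sylvester(A, B, Q)
print("residual norm:", np.linalg.norm(A @ X + X @ B - Q))
```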

  19. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.

  20. Involvement of the anterior cingulate cortex in time-based prospective memory task monitoring: An EEG analysis of brain sources using Independent Component and Measure Projection Analysis.

    Directory of Open Access Journals (Sweden)

    Gabriela Cruz

    Full Text Available Time-based prospective memory (PM), remembering to do something at a particular moment in the future, is considered to depend upon self-initiated strategic monitoring, involving a retrieval mode (sustained maintenance of the intention) plus target checking (intermittent time checks). The present experiment was designed to explore what brain regions and brain activity are associated with these components of strategic monitoring in time-based PM tasks. 24 participants were asked to reset a clock every four minutes while performing a foreground ongoing word categorisation task. EEG activity was recorded and data were decomposed into source-resolved activity using Independent Component Analysis. Common brain regions across participants, associated with retrieval mode and target checking, were found using Measure Projection Analysis. Participants decreased their performance on the ongoing task when it was performed concurrently with the time-based PM task, reflecting an active retrieval mode that relied on withdrawal of limited resources from the ongoing task. Brain activity, with its source in or near the anterior cingulate cortex (ACC), showed changes associated with an active retrieval mode, including greater negative ERP deflections, decreased theta synchronization, and increased alpha suppression for events locked to the ongoing task while maintaining a time-based intention. Activity in the ACC was also associated with time checks and found consistently across participants; however, we did not find an association with time perception processing per se. The involvement of the ACC in both aspects of time-based PM monitoring may be related to different functions that have been attributed to it: strategic control of attention during the retrieval mode (distributing attentional resources between the ongoing task and the time-based task) and anticipatory/decision-making processing associated with clock checks.

  1. Component-Based Java Legacy Code Refactoring

    Directory of Open Access Journals (Sweden)

    Hugo Arboleda

    2013-01-01

    Full Text Available Component-Based Software Engineering (CBSE) aims to improve software modularization and the insertion of architectural concerns. Refactoring legacy Java code with CBSE in mind first requires evaluating the compliance of the legacy code with the principles of component-based programming. In this article we present a portfolio of rules for evaluating compliance with the Communication Integrity property in legacy Java code; this property is one of the major strengths of the CBSE approach. We propose these rules to identify component types and thereby provide a measure of how component-oriented an application's construction is. With the aim of helping developers and maintainers of legacy code when refactoring their applications becomes necessary, our work leads us to define a set of refactoring actions. In this article we also present test results, comparisons, and analyses of the outputs obtained after refactoring several Java applications. (Translated from the original Spanish.)

  2. Assessment of water quality in the Elbe River at flood water conditions based on cluster analysis, principal components analysis, and source apportionment

    Energy Technology Data Exchange (ETDEWEB)

    Baborowski, Martina [Department of River Ecology, UFZ-Helmholtz Centre for Environmental Research, Magdeburg (Germany); Simeonov, Vasil [Faculty of Chemistry, University of Sofia, Sofia (Bulgaria); Einax, Juergen W. [Institute of Inorganic and Analytical Chemistry, Friedrich Schiller University of Jena, Jena (Germany)

    2012-04-15

    An assessment of water quality measurements during a spring flood in the Elbe River is presented. Daily samples were taken at a site in the middle Elbe that is part of the network of the International Commission for the Protection of the Elbe River (IKSE/MKOL). Cluster analysis (CA), principal components analysis (PCA), and source apportionment (APCS apportioning) were used to assess the flood-dependent matter transport. As a result, three main components could be extracted as important to the matter transport in the Elbe River basin during flood events: (i) re-suspended contaminated sediments, which led to temporarily increased concentrations of suspended matter and of most of the investigated heavy metals; (ii) water-discharge-related concentrations of pedogenic dissolved organic matter (DOM), as well as dilution of uranium and chloride concentrations, parameters with a stable pollution background in the river basin; and (iii) abandoned mines, i.e., their dewatering systems, with particular influence on nickel, manganese, and zinc concentrations. (Copyright 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
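    A minimal Python sketch of the PCA step (illustrative shapes and names; the APCS receptor-modeling stage is omitted): standardize the daily water-quality matrix, extract a few components, and read off how much of the transport-related variance each explains.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(1)
        X = rng.standard_normal((60, 12))     # 60 daily samples x 12 parameters

        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=3).fit(Z)
        print(pca.explained_variance_ratio_)  # variance share per component
        loadings = pca.components_            # parameter loadings hint at sources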

  3. Assessment of water quality in the Elbe River at flood water conditions based on cluster analysis, principal components analysis, and source apportionment

    International Nuclear Information System (INIS)

    Baborowski, Martina; Simeonov, Vasil; Einax, Juergen W.

    2012-01-01

    An assessment of water quality measurements during a spring flood in the Elbe River is presented. Daily samples were taken at a site in the middle Elbe that is part of the network of the International Commission for the Protection of the Elbe River (IKSE/MKOL). Cluster analysis (CA), principal components analysis (PCA), and source apportionment (APCS apportioning) were used to assess the flood-dependent matter transport. As a result, three main components could be extracted as important to the matter transport in the Elbe River basin during flood events: (i) re-suspended contaminated sediments, which led to temporarily increased concentrations of suspended matter and of most of the investigated heavy metals; (ii) water-discharge-related concentrations of pedogenic dissolved organic matter (DOM), as well as dilution of uranium and chloride concentrations, parameters with a stable pollution background in the river basin; and (iii) abandoned mines, i.e., their dewatering systems, with particular influence on nickel, manganese, and zinc concentrations. (Copyright 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)

  4. Cross-platform SCA component using C++ Builder and Kylix

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Timossi, Chris; McDonald, James L.

    2003-01-01

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed. EPICS client programs with GUIs become portable at the C++ source-code level on both Windows and Linux by using Borland C++ Builder 6 and Kylix 3 on these platforms, respectively.

  5. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low-field-size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high-field-size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high-field-size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
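    The low-field operations in objective (ii) reduce, in the binary case, to XORs. The Python sketch below (synthetic packets; the high-field expansion and its mapping are not modeled) shows GF(2) encoding and the cheap recoding an intermediate node performs.

        import numpy as np

        rng = np.random.default_rng(2)
        K = 4                                    # source packets
        src = rng.integers(0, 2, (K, 16))        # K packets of 16 bits each

        coeffs = rng.integers(0, 2, (K + 1, K))  # binary coding vectors
        coded = coeffs @ src % 2                 # GF(2) combinations (XORs)

        recoded = (coded[0] + coded[1]) % 2      # intermediate-node recoding
        print(recoded)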

  6. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, C A; Moran, M J

    2007-08-21

    The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long-lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular, and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS
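    The recovery recipe described above can be paraphrased in a few lines of Python (synthetic arrays; the decay factor, PSF, and regularization strength are stand-ins, not NIS values): subtract a decayed copy of the primary image from the downscattered recording, then deconvolve in frequency space with a regularization term that also acts as a low-pass filter.

        import numpy as np

        rng = np.random.default_rng(3)
        primary = rng.random((64, 64))
        downscattered = 0.3 * primary + 0.1 * rng.random((64, 64))

        decay_factor = 0.3                         # assumed scintillator decay weight
        difference = downscattered - decay_factor * primary

        psf = np.zeros((64, 64)); psf[31:34, 31:34] = 1 / 9   # toy aperture PSF
        H = np.fft.fft2(np.fft.ifftshift(psf))
        lam = 1e-2                                 # regularization strength
        D = np.fft.fft2(difference)
        source = np.fft.ifft2(D * np.conj(H) / (np.abs(H) ** 2 + lam)).real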

  7. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 1: Model components for sources parameterization

    Directory of Open Access Journals (Sweden)

    R. Azzaro

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring and, particularly, the rapid geodynamics that clearly demonstrates some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults, and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency–magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude–size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool – FiSH (Pace et al., 2016) – that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be
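    To make the fault-based quantities concrete, the Python sketch below uses a generic magnitude–size scaling form and standard moment balancing (the coefficients are placeholders, not the relationship derived in this paper): a characteristic magnitude follows from rupture area, and the mean recurrence time from dividing the characteristic seismic moment by the fault's moment accumulation rate.

        import math

        def char_magnitude(area_km2, a=4.0, b=1.0):
            # generic M = a + b*log10(A) scaling form; a, b are assumptions
            return a + b * math.log10(area_km2)

        def mean_recurrence_years(mag, area_km2, slip_rate_mm_yr, mu=3e10):
            m0 = 10 ** (1.5 * mag + 9.05)      # seismic moment (N*m), Hanks-Kanamori
            moment_rate = mu * (area_km2 * 1e6) * (slip_rate_mm_yr * 1e-3)
            return m0 / moment_rate

        m = char_magnitude(60.0)
        print(m, mean_recurrence_years(m, 60.0, 2.0))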

  8. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...

  9. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  10. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
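    The INVERT step amounts to stabilized (damped) least squares. A minimal numpy sketch with a synthetic kernel (shapes and the damping value are illustrative, not taken from the code itself):

        import numpy as np

        rng = np.random.default_rng(4)
        G = rng.standard_normal((40, 25))   # linearized gravity kernel
        d = rng.standard_normal(40)         # observed Bouguer anomalies

        lam = 0.5                           # stabilization (damping) parameter
        m = np.linalg.solve(G.T @ G + lam * np.eye(25), G.T @ d)
        print(m[:5])                        # recovered density-interface model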

  11. Software testing and source code for the calculation of clearance values. Final report; Erprobung von Software und Quellcode zur Berechnung von Freigabewerten. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Meyering, Henrich

    2016-11-15

    The GRS research project was aimed at testing the appropriateness of the software package 'residual radioactivity' (RESRAD) for the calculation of clearance values according to German and European regulations. Comparative evaluations were performed with RESRAD-OFFSITE, the code SiWa-PRO DSS used by GRS, and the GRS program code ARTM. It is recommended to use RESRAD-OFFSITE for comparative calculations. The dose-relevant air-path dispersion of radionuclides should not be modeled using RESRAD-OFFSITE; the use of ARTM is recommended instead. The sensitivity analysis integrated into RESRAD-OFFSITE allows fast identification of crucial parameters.

  12. GAMMA-CLOUD: a computer code for calculating gamma-exposure due to a radioactive cloud released from a point source

    Energy Technology Data Exchange (ETDEWEB)

    Sugimoto, O [Chugoku Electric Power Co. Inc., Hiroshima (Japan); Sawaguchi, Y; Kaneko, M

    1979-03-01

    A computer code, designated GAMMA-CLOUD, has been developed by specialists of electric power companies to meet requests from the companies to have a unified means of calculating annual external doses from routine releases of radioactive gaseous effluents from nuclear power plants, based on the Japan Atomic Energy Commission's guides for environmental dose evaluation. GAMMA-CLOUD is written in FORTRAN and its required capacity is less than 100 kilobytes. The average gamma-exposure at an observation point can be calculated within a few minutes with precision comparable to other existing codes.
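    GAMMA-CLOUD's actual algorithm is not reproduced in the abstract; as a generic point of reference, point-source dispersion codes of this kind typically build on a Gaussian plume concentration such as the sketch below (all parameter values illustrative; sigma_y and sigma_z would come from stability-class correlations).

        import math

        def plume_concentration(Q, u, sigma_y, sigma_z, y, z, H):
            # Q: release rate (Bq/s), u: wind speed (m/s), H: release height (m)
            term_y = math.exp(-y**2 / (2 * sigma_y**2))
            term_z = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                      + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
            return Q / (2 * math.pi * u * sigma_y * sigma_z) * term_y * term_z

        print(plume_concentration(1e9, 3.0, 35.0, 18.0, 0.0, 1.0, 40.0))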

  13. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
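    The entropy bound at the heart of Shannon's Noiseless Coding Theorem is easy to compute; the short Python sketch below contrasts the entropy of a small source with the 2 bits/symbol a fixed-length code would spend on its four symbols.

        import math

        def entropy(probs):
            # H(X) = -sum p * log2(p); the average-length lower bound
            return -sum(p * math.log2(p) for p in probs if p > 0)

        source = [0.5, 0.25, 0.125, 0.125]
        print(entropy(source))   # 1.75 bits/symbol vs 2 for a fixed-length code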

  14. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  15. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire.
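    The lumped-parameter formulation can be illustrated with a linearized flow network (synthetic conductances and sources; FIRAC's actual branch models are nonlinear and far richer): rooms become nodes, ducts and filters become branches, and nodal pressures follow from a flow balance at each node.

        import numpy as np

        # 3 interior nodes; g[i][j] is the branch conductance between nodes
        g = np.array([[0.0, 2.0, 0.0],
                      [2.0, 0.0, 1.5],
                      [0.0, 1.5, 0.0]])
        g_boundary = np.array([1.0, 0.0, 0.8])  # connections to ambient (p = 0)
        source = np.array([0.5, 0.0, -0.2])     # injected flow, e.g. fire buoyancy

        A = np.diag(g.sum(axis=1) + g_boundary) - g   # nodal flow balance
        p = np.linalg.solve(A, source)                # nodal pressures
        flows = g * (p[:, None] - p[None, :])         # branch flows
        print(p)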

  16. Comparing the co-evolution of production and test code in open source and industrial developer test processes through repository mining

    NARCIS (Netherlands)

    Van Rompaey, B.; Zaidman, A.E.; Van Deursen, A.; Demeyer, S.

    2008-01-01

    This paper represents an extension to our previous work: Mining software repositories to study coevolution of production & test code. Proceedings of the International Conference on Software Testing, Verification, and Validation (ICST), IEEE Computer Society, 2008; doi:10.1109/ICST.2008.47

  17. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time.

  18. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  19. Earthquakes Sources Parameter Estimation of 20080917 and 20081114 Near Semangko Fault, Sumatra Using Three Components of Local Waveform Recorded by IA Network Station

    Directory of Open Access Journals (Sweden)

    Madlazim

    2012-04-01

    The 17/09/2008 22:04:80 UTC and 14/11/2008 00:27:31.70 earthquakes near the Semangko fault were analyzed to identify the fault planes. The two events were relocated to gain physical insight into the hypocenter uncertainty. The data used to determine the source parameters of both earthquakes were three components of local waveforms recorded by Geofon broadband IA network stations: MDSI, LWLI, BLSI, and RBSI for the event of 17/09/2008, and MDSI, LWLI, BLSI, and KSI for the event of 14/11/2008. The distance from the epicenter to every station was less than 5°. The moment tensor solutions of the two events were analyzed simultaneously with determination of the centroid positions. The simultaneous analysis, covering the hypocenter positions, centroid positions, and nodal planes of the two events, indicated the Semangko fault planes. Considering that the Semangko fault zone is an area of high seismicity, the identification of the seismic fault is important for seismic hazard investigation in the region.

  20. UV spectral fingerprinting and analysis of variance-principal component analysis: a useful tool for characterizing sources of variance in plant materials.

    Science.gov (United States)

    Luthria, Devanand L; Mukhopadhyay, Sudarsan; Robbins, Rebecca J; Finley, John W; Banuelos, Gary S; Harnly, James M

    2008-07-23

    UV spectral fingerprints, in combination with analysis of variance-principal components analysis (ANOVA-PCA), can differentiate between cultivars and growing conditions (or treatments) and can be used to identify sources of variance. Broccoli samples, composed of two cultivars, were grown under seven different conditions or treatments (four levels of Se-enriched irrigation waters, organic farming, and conventional farming with 100 and 80% irrigation based on crop evaporation and transpiration rate). Freeze-dried powdered samples were extracted with methanol-water (60:40, v/v) and analyzed with no prior separation. Spectral fingerprints were acquired for the UV region (220-380 nm) using a 50-fold dilution of the extract. ANOVA-PCA was used to construct subset matrices that permitted easy verification of the hypothesis that cultivar and treatment contributed to a difference in the chemical expression of the broccoli. The sums of the squares of the same matrices were used to show that cultivar, treatment, and analytical repeatability contributed 30.5, 68.3, and 1.2% of the variance, respectively.
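    The variance apportionment quoted above comes from the ANOVA half of ANOVA-PCA. A minimal Python sketch (synthetic data; the subsequent PCA of each effect matrix is omitted): split the centered spectral matrix into factor-mean matrices and compare their sums of squares.

        import numpy as np

        rng = np.random.default_rng(5)
        n, p = 84, 161                      # samples x wavelengths (illustrative)
        cultivar = rng.integers(0, 2, n)
        treatment = rng.integers(0, 7, n)
        X = rng.standard_normal((n, p))
        Xc = X - X.mean(axis=0)             # remove the grand mean

        def factor_matrix(labels, data):
            # replace each row by the mean of its factor level
            out = np.zeros_like(data)
            for lvl in np.unique(labels):
                out[labels == lvl] = data[labels == lvl].mean(axis=0)
            return out

        Xcul = factor_matrix(cultivar, Xc)
        Xtrt = factor_matrix(treatment, Xc - Xcul)
        Xres = Xc - Xcul - Xtrt
        ss = [np.sum(M**2) for M in (Xcul, Xtrt, Xres)]
        print([100 * s / sum(ss) for s in ss])   # % variance per factor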