WorldWideScience

Sample records for source-code instrumentation component

  1. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

We present a method for the correction of beam divergence in the imaging of sources at a finite distance through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance and show two different approaches to removing such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable to two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally, we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool for investigating the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixels and sky directions.
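As a rough illustration of the correlation decoding that underlies coded-mask imaging, the sketch below builds a random 1D mask, simulates the shadowgram of a point source, and recovers the source position by balanced cross-correlation. The mask pattern, sizes, and shift are invented for the example and are not taken from the SuperAGILE setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D coded mask: a random open/closed (1/0) pattern.
mask = rng.integers(0, 2, size=128)

# Shadowgram of a distant point source: a shifted copy of the mask pattern.
shift = 10
shadow = np.roll(mask, shift).astype(float)

# Balanced correlation decoding: rescale the mask to zero mean so a
# uniform background would cancel, then correlate at every trial shift.
p = mask.mean()
decoding = np.where(mask == 1, 1.0, -p / (1 - p))
sky = np.array([shadow @ np.roll(decoding, s) for s in range(mask.size)])

print(int(np.argmax(sky)))  # the reconstructed peak recovers the source shift
```

The finite-distance correction discussed in the abstract would additionally rescale the mask pattern before correlating, to account for the divergent (rather than parallel) beam geometry.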

  2. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 and the OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design. Hand calculations are therefore needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components.

  3. Exploiting IoT Technologies and Open Source Components for Smart Seismic Network Instrumentation

    Science.gov (United States)

    Germenis, N. G.; Koulamas, C. A.; Foundas, P. N.

    2017-12-01

The data collection infrastructure of any seismic network poses a number of requirements and trade-offs related to accuracy, reliability, power autonomy, and installation and operational costs. Given the right hardware design at the edge of this infrastructure, the embedded software running inside the instruments is the heart of the pre-processing and communication services and of their integration with the central storage and processing facilities of the seismic network. This work demonstrates the feasibility and benefits of exploiting software components from heterogeneous sources in order to realize a smart seismic data logger, achieving higher reliability, faster integration, and lower development and testing costs for critical functionality that is in turn responsible for the cost- and power-efficient operation of the device. The instrument's software builds on top of widely used open source components around the Linux kernel with real-time extensions, the core Debian Linux distribution, and the earthworm and seiscomp tooling frameworks, as well as components from the Internet of Things (IoT) world, such as the CoAP and MQTT protocols for the signaling plane, besides the widely used de facto standards of the application domain at the data plane, such as the SeedLink protocol. By using an innovative integration of features based on lower-level GPL components of the seiscomp suite with higher-level processing earthworm components, coupled with IoT protocol extensions to the latter, the instrument can implement smart functionality such as network-controlled, event-triggered data transmission in parallel with edge archiving and on-demand, short-term historical data retrieval.

  4. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  5. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM) codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
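The systematic LDGM idea can be sketched in a few lines: the message is transmitted in the clear, and each parity bit is the XOR of a small, random subset of message bits given by a sparse generator matrix. All sizes and the sparsity level below are invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 16, 8   # message and parity lengths (illustrative)

# Sparse, low-density generator for the parity part (density ~0.2),
# so each parity bit depends on only a few message bits.
G = (rng.random((k, m)) < 0.2).astype(int)

x = rng.integers(0, 2, size=k)            # source/message bits
parity = x @ G % 2                        # each parity bit is a short XOR
codeword = np.concatenate([x, parity])    # systematic codeword [x | parity]
```

The low density of G is what keeps encoding cheap and makes message-passing decoding tractable; the concatenation scheme in the paper addresses the error floor that plain LDGM codes exhibit.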

  6. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than LDPCA (Low-Density Parity-Check Accumulate) codes.

  7. Electrical, instrumentation, and control codes and standards

    International Nuclear Information System (INIS)

    Kranning, A.N.

    1978-01-01

During recent years numerous documents in the form of codes and standards have been developed and published to provide design, fabrication, and construction rules and criteria applicable to instrumentation, control, and power distribution facilities for nuclear power plants. The contents of this LTR were prepared by NUS Corporation under Subcontract K5108 and provide a consolidated index and listing of the documents selected for their application to the procurement of materials and the design of modifications and new construction at the LOFT facility. These codes and standards should be applied, together with the National Electrical Code, the ID Engineering Standards, and the LOFT Specifications, to all LOFT instrument and electrical design activities.

  8. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  9. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  10. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  11. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey terminates by performance illustrations with real image and video decoding systems.

  12. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
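The syndrome-source-coding idea can be sketched with the (7,4) Hamming code: a sparse 7-bit source block is compressed to its 3-bit syndrome, and the decoder recovers the minimum-weight block consistent with that syndrome. The parity-check matrix is the standard one; the source block and the brute-force decoder are illustrative only.

```python
import numpy as np
from itertools import product

# Parity-check matrix H of the (7,4) Hamming code.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

# A sparse binary source block, treated as an "error pattern".
source = np.array([0, 0, 0, 0, 1, 0, 0])

# Compression: transmit the 3-bit syndrome instead of the 7-bit block.
syndrome = H @ source % 2

# Decompression: recover the minimum-weight block with that syndrome
# (brute force over all 2^7 candidates, fine at this toy scale).
best = min((np.array(c) for c in product([0, 1], repeat=7)
            if np.array_equal(H @ np.array(c) % 2, syndrome)),
           key=lambda v: v.sum())
```

Because distinct weight-0 and weight-1 patterns have distinct syndromes under the Hamming code, the reconstruction is exact whenever the source block contains at most one nonzero bit, matching the low-entropy regime the abstract describes.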

  13. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m k ,t k ), where m k is a message generated by the source and t k is a time instant

  14. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between source symbols and the side information.

  15. A UML profile for code generation of component based distributed systems

    International Nuclear Information System (INIS)

    Chiozzi, G.; Karban, R.; Andolfato, L.; Tejeda, A.

    2012-01-01

A consistent and unambiguous implementation of code generation (model-to-text transformation) from UML must rely on a well-defined UML (Unified Modelling Language) profile, customizing UML for a particular application domain. Such a profile must have a solid foundation in a formally correct ontology, formalizing the concepts and their relations in the specific domain, in order to avoid a maze of wildly created stereotypes. The paper describes a generic profile for the code generation of component-based distributed systems for control applications, the process used to distill the ontology and define the profile, and the strategy followed to implement the code generator. The main steps, which take place iteratively, include: defining the terms and relations with an ontology, mapping the ontology to the appropriate UML meta-classes, testing the profile by creating modelling examples, and generating the code. This has allowed us to work on the modelling of the E-ELT (European Extremely Large Telescope) control system and instrumentation without knowing what infrastructure will finally be used.

  16. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  17. Regulatory instrument review: Management of aging of LWR [light water reactor] major safety-related components

    International Nuclear Information System (INIS)

    Werry, E.V.

    1990-10-01

This report comprises Volume 1 of a review of US nuclear plant regulatory instruments to determine the amount and kind of information they contain on managing the aging of safety-related components in US nuclear power plants. The review was conducted for the US Nuclear Regulatory Commission (NRC) by the Pacific Northwest Laboratory (PNL) under the NRC Nuclear Plant Aging Research (NPAR) Program. Eight selected regulatory instruments, e.g., NRC Regulatory Guides and the Code of Federal Regulations, were reviewed for safety-related information on five selected components: reactor pressure vessels, steam generators, primary piping, pressurizers, and emergency diesel generators. Volume 2 will be concluded in FY 1991 and will also cover selected major safety-related components, e.g., pumps, valves, and cables. The focus of the review was on 26 NPAR-defined safety-related aging issues, including examination, inspection, and maintenance and repair; excessive/harsh testing; and irradiation embrittlement. The major conclusion of the review is that safety-related regulatory instruments do provide implicit guidance for aging management, but include little explicit guidance. The major recommendation is that the instruments be revised or augmented to explicitly address the management of aging.

  18. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a

  19. Development of source range measurement instrument in Xi'an pulsed reactor

    CERN Document Server

    Wang Li

    2002-01-01

The source range measurement instrument of the Xi'an pulsed reactor is the key equipment for low-level measurement in the source range. It is also an important component of the out-of-pile neutron-flux level observation system. The authors carried out research and improvements based on similar devices used in nuclear reactors to enhance the instrument's sensitivity, measuring range, noise immunity, operational reliability, and maintainability, which are its main performance indexes. The design ideas, configuration, working principle, performance indexes, technical features, and operational results are introduced briefly.

  20. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

The conventional approach to transmission imaging is to use a rotating-anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary-anode sources, although inherently less brilliant, are more compatible with the use of large-area anodes, and so they can be made more powerful than rotating-anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large-area sources, and this permits the recovery of high-resolution images in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a ''code.'' A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image can be displayed with the aid of coherent illumination.
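The "sharply peaked autocorrelation" property can be checked numerically for a random binary code pattern of the kind the abstract mentions (sizes and seed below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
code = rng.integers(0, 2, size=256).astype(float)

# Circular autocorrelation of the zero-mean code pattern.
c = code - code.mean()
acf = np.array([c @ np.roll(c, s) for s in range(code.size)])

# The zero-lag peak dominates every sidelobe, which is what makes
# correlation decoding of the raw image possible.
print(acf[0] > 2 * np.abs(acf[1:]).max())
```

Only the autocorrelation property is demonstrated here; decoding of the Fresnel zone pattern code proceeds optically via the holographic analogy, as the abstract notes.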

  1. New developments in the McStas neutron instrument simulation package

    International Nuclear Information System (INIS)

    Willendrup, P K; Knudsen, E B; Klinkby, E; Nielsen, T; Farhi, E; Filges, U; Lefmann, K

    2014-01-01

    The McStas neutron ray-tracing software package is a versatile tool for building accurate simulators of neutron scattering instruments at reactors, short- and long-pulsed spallation sources such as the European Spallation Source. McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  2. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sample code of the J Monte Carlo Transport (JMCT) code, and a source particle sample code is developed to sample source particle directions, types, coordinates, energies, and weights from the CDFs. A source generation code is also developed to transform three-dimensional (3D) power distributions in x-y-z geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation is performed on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors. Numerical results show that the theoretical model and the codes are both correct.
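The CDF-based sampling step described above can be sketched with inverse-CDF sampling of a discrete distribution. The energy bins and weights below are invented for the example; the actual codes sample directions, types, coordinates, energies, and weights.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented discrete source-energy distribution (unnormalized weights).
energies = np.array([0.5, 1.0, 2.0, 5.0])   # bin values
weights = np.array([4.0, 3.0, 2.0, 1.0])    # relative emission rates

# Source generation step: build the CDF once.
cdf = np.cumsum(weights) / weights.sum()

# Source sampling step: invert the CDF with uniform random numbers.
u = rng.random(100_000)
samples = energies[np.searchsorted(cdf, u)]
```

With enough samples, the empirical frequency of each bin approaches its normalized weight (0.4, 0.3, 0.2, 0.1 here), which is the property a Monte Carlo transport code relies on.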

  3. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, shifting processing steps conventionally performed at the video encoder side to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  4. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 um. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
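The magnification trick can be illustrated with simple point-projection geometry. The distances below are invented for the sketch and are not the HFIR CG1D layout; the point is only that a large mask-to-detector distance relative to the mask-to-object distance refers the detector's resolution cap to a much finer scale at the object.

```python
detector_res_um = 50.0                  # stated detector resolution cap
d_mask_obj, d_obj_det = 0.05, 5.0       # illustrative distances in metres

# Projection magnification from the object plane to the detector plane.
M = (d_mask_obj + d_obj_det) / d_mask_obj

# Detector resolution referred back to the object plane.
effective_res_um = detector_res_um / M
```

Under these assumed distances the magnification is about 100x, so the 50 um detector limit corresponds to sub-micrometre sampling at the object plane.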

  5. The qualification of electrical components and instrumentations relevant to safety

    CERN Document Server

    Zambardi, F

    1989-01-01

Systems and components relevant to the safety of nuclear power plants must maintain their functional integrity in order to assure accident prevention and mitigation. Redundancy is utilized against random failures; nevertheless, care must be taken to avoid common failures in redundant components. The main sources of degradation and common-cause failures are aging effects and the changes in environmental conditions that occur during the plant life and during postulated accidents. These causes of degradation are expected to be especially significant for instrumentation and electrical equipment, which can have a primary role in safety systems. Qualification is the methodology by which component safety requirements can be met against the above-mentioned causes of degradation. In this report the connection between possible plant conditions and the resulting degradation effects on components is preliminarily addressed. A general characterization of the qualification is then presented. Basis, methods and ...

  6. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    Science.gov (United States)

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes, derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.

  7. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibilit...

  8. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs

  9. Effect of Color-Coded Notation on Music Achievement of Elementary Instrumental Students.

    Science.gov (United States)

    Rogers, George L.

    1991-01-01

    Presents results of a study of color-coded notation to teach music reading to instrumental students. Finds no clear evidence that color-coded notation enhances achievement on performing by memory, sight-reading, or note naming. Suggests that some students depended on the color-coding and were unable to read uncolored notation well. (DK)

  10. TREFF: Reflectometer and instrument component test beamline at MLZ

    Directory of Open Access Journals (Sweden)

    Peter Link

    2017-11-01

Full Text Available TREFF is a high-resolution polarized neutron reflectometer and instrument component test beamline; its highly modular design provides a flexible beam line for a wide range of applications.

  11. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  12. Building an open-source robotic stereotaxic instrument.

    Science.gov (United States)

    Coffey, Kevin R; Barker, David J; Ma, Sisi; West, Mark O

    2013-10-29

This protocol includes the designs and software necessary to upgrade an existing stereotaxic instrument to a robotic (CNC) stereotaxic instrument for around $1,000 (excluding a drill), using industry standard stepper motors and CNC controlling software. Each axis has variable speed control and may be operated simultaneously or independently. The robot's flexibility and open coding system (g-code) make it capable of performing custom tasks that are not supported by commercial systems. Its applications include, but are not limited to, drilling holes, sharp edge craniotomies, skull thinning, and lowering electrodes or cannulae. In order to expedite the writing of g-code for simple surgeries, we have developed custom scripts that allow individuals to design a surgery with no knowledge of programming. However, for users to get the most out of the motorized stereotax, it would be beneficial to be knowledgeable in mathematical programming and g-code (simple programming for CNC machining). The recommended drill speed is greater than 40,000 rpm. The stepper motor resolution is 1.8°/step, geared to 0.346°/step. A standard stereotax has a resolution of 2.88 μm/step. The maximum recommended cutting speed is 500 μm/sec. The maximum recommended jogging speed is 3,500 μm/sec. The maximum recommended drill bit size is HP 2.
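
The kind of surgery-scripting the authors describe can be pictured with a minimal g-code generator like the sketch below (the coordinates, retract height, and helper name are hypothetical; the only figure taken from the abstract is the 500 μm/s cutting-speed limit):

```python
def drill_gcode(holes_um, depth_um, cut_speed_um_s=500):
    """Emit g-code that drills one hole at each (x, y) skull coordinate.

    Coordinates are given in micrometres; the output uses millimetres,
    the usual g-code unit, and the feed rate F is expressed in mm/min.
    """
    feed_mm_min = cut_speed_um_s * 60 / 1000  # 500 um/s -> 30 mm/min
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning"]
    for x, y in holes_um:
        lines.append(f"G0 X{x / 1000:.3f} Y{y / 1000:.3f} Z1.000")  # rapid move above the site
        lines.append(f"G1 Z{-depth_um / 1000:.3f} F{feed_mm_min:.1f}")  # plunge at cutting feed
        lines.append("G0 Z1.000")  # retract before the next hole
    return "\n".join(lines)
```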

  13. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding while enabling robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  14. An Inexpensive, Open-Source USB Arduino Data Acquisition Device for Chemical Instrumentation.

    Science.gov (United States)

    Grinias, James P; Whitfield, Jason T; Guetschow, Erik D; Kennedy, Robert T

    2016-07-12

    Many research and teaching labs rely on USB data acquisition devices to collect voltage signals from instrumentation. However, these devices can be cost-prohibitive (especially when large numbers are needed for teaching labs) and require software to be developed for operation. In this article, we describe the development and use of an open-source USB data acquisition device (with 16-bit acquisition resolution) built using simple electronic components and an Arduino Uno that costs under $50. Additionally, open-source software written in Python is included so that data can be acquired using nearly any PC or Mac computer with a simple USB connection. Use of the device was demonstrated for a sophomore-level analytical experiment using GC and a CE-UV separation on an instrument used for research purposes.
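
For a 16-bit acquisition device like the one described, converting raw counts from the serial stream into volts is a one-liner; the "millis,counts" line format below is an assumption for illustration, not the published firmware protocol:

```python
def adc_to_volts(counts, vref=5.0, bits=16):
    """Convert a raw unipolar ADC reading to volts."""
    return counts * vref / ((1 << bits) - 1)

def parse_sample_line(line):
    """Parse one hypothetical 'millis,counts' record from the USB serial stream."""
    t_ms, counts = line.strip().split(",")
    return int(t_ms), adc_to_volts(int(counts))
```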

  15. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
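
The core idea of authenticating via a coarsely quantized image projection can be sketched without the Slepian-Wolf machinery (the projection count, quantizer, and tolerance below are invented; a real system would transmit the Slepian-Wolf syndrome of these values rather than the values themselves):

```python
import random

def projection_signature(pixels, n_proj=8, levels=16, seed=42):
    """Coarsely quantized pseudo-random projections of a flat 0-255 pixel list."""
    rng = random.Random(seed)  # the shared seed plays the role of the secret key
    sig = []
    for _ in range(n_proj):
        w = [rng.uniform(-1.0, 1.0) for _ in pixels]
        p = sum(wi * xi for wi, xi in zip(w, pixels)) / len(pixels)
        sig.append(int((p + 255.0) * levels // (2 * 255.0)))  # map [-255, 255] onto bins
    return sig

def authenticate(pixels, sig, tol=1):
    """Accept the image if every projection lands within tol bins of the signature."""
    return all(abs(a - b) <= tol for a, b in zip(projection_signature(pixels), sig))
```

The tolerance is what gives robustness against legitimate variations (mild contrast or brightness changes move a projection only slightly) while large tampering pushes projections into distant bins.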

  16. The qualification of electrical components and instrumentations relevant to safety

    International Nuclear Information System (INIS)

    Zambardi, F.

    1989-03-01

Systems and components relevant to safety of nuclear power plants must maintain their functional integrity in order to assure accident prevention and mitigation. Redundancy is used as a defence against random failures; nevertheless, care must be taken to avoid common failures in redundant components. The main sources of degradation and common-cause failures are aging effects and the changes in environmental conditions that occur during the plant life and during postulated accidents. These causes of degradation are expected to be especially significant for instrumentation and electrical equipment, which can have a primary role in safety systems. Qualification is the methodology by which component safety requirements can be met in spite of the above-mentioned causes of degradation. In this report, the connection between possible plant conditions and the resulting degradation effects on components is addressed first. A general characterization of qualification is then presented. Bases, methods and peculiar aspects are discussed, and qualification by testing is given special attention. Technical and organizational aspects related to a plant qualification program are also covered. The report ends with a look at the most significant research and development activities. (author)

  17. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  18. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  19. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  20. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.

  1. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite using channel codes, some residual errors always remain, whose effect will get magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable length codes for joint source-channel decoding. In this paper, we improve the performance of residual redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.
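
The role of the outer CRC in pruning the decoder's candidate list can be sketched as follows (using a trailing CRC-32 rather than the paper's nonbinary CRC, purely as a simplification):

```python
import zlib

def crc_list_decode(candidates):
    """Return the most likely candidate whose trailing CRC-32 verifies.

    candidates: byte strings of the form payload + 4-byte big-endian CRC-32,
    ordered from most to least likely by the inner list decoder.
    """
    for cand in candidates:
        payload, tail = cand[:-4], cand[-4:]
        if zlib.crc32(payload).to_bytes(4, "big") == tail:
            return payload
    return None  # every candidate fails the check: declare a decoding failure
```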

  2. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different size, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
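
As a flavour of what such a metric measures, here is a minimal proxy: the fraction of dependency edges that stay inside one module. This is far simpler than the metric developed at Carleton, which the article only outlines, but it captures the same intuition that a modular code base has few cross-module dependencies:

```python
def internal_edge_ratio(deps, module_of):
    """Fraction of dependency edges internal to a single module.

    deps: iterable of (source_file, target_file) dependency edges.
    module_of: dict mapping each file to the module it belongs to.
    A value near 1.0 suggests a well-modularized code base.
    """
    edges = list(deps)
    if not edges:
        return 0.0
    internal = sum(1 for a, b in edges if module_of[a] == module_of[b])
    return internal / len(edges)
```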

  3. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, at the lowest possible distortion given a specified bit rate....... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources which are sources that can...... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...

  4. The test beamline of the European Spallation Source - Instrumentation development and wavelength frame multiplication

    DEFF Research Database (Denmark)

    Woracek, R.; Hofmann, T.; Bulat, M.

    2016-01-01

    which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor...... wavelength band between 1.6 A and 10 A by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components....... This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects....

  5. Radioactive check sources for alpha and beta sensitive radiological instrumentation

    International Nuclear Information System (INIS)

    Barnett, J.M.; Kane, J.E. II.

    1994-06-01

    Since 1991, the Westinghouse Hanford Company has examined the construction and use of alpha and beta radioactive check sources for calibrating instruments and for performing response checks of instruments used for operational and environmental radiation detection. The purpose of using a radioactive check source is to characterize the response of a radiation monitoring instrument in the presence of radioactivity. To accurately calibrate the instrument and check its response, the check source used must emulate as closely as possible the actual physical and isotopic conditions being monitored. The isotope employed and the physical methods used to fabricate the check source (among other factors) determine instrument response. Although information from applicable national and international standards, journal articles, books, and government documents was considered, empirical data collected is most valuable when considering the type of source to use for a particular application. This paper presents source construction methods, use considerations, and standard recommendations. The results of a Hanford Site evaluation of several types of alpha and beta sources are also given

  6. Design Procedure of Graphite Components by ASME HTR Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Ji-Ho; Jo, Chang Keun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

In this study, the ASME B and PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials were noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high-temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. In this study, the ASME B and PV Code, Subsection HH, Subpart A, Graphite Materials was reviewed and its special features were noted. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. A unique additional classification, SRC, is assigned to graphite components, and full 3-D FEM or an equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. A fatigue damage rule has not been established yet.

  7. Design Procedure of Graphite Components by ASME HTR Codes

    International Nuclear Information System (INIS)

    Kang, Ji-Ho; Jo, Chang Keun

    2016-01-01

In this study, the ASME B and PV Code, Subsection HH, Subpart A, design procedure for graphite components of HTRs was reviewed and the differences from metallic materials were noted. The Korean VHTR has a prismatic core which is made of multiple graphite blocks, reflectors, and core supports. One of the design issues is the assessment of the structural integrity of the graphite components, because graphite is brittle and behaves quite differently from metals in a high-temperature environment. The American Society of Mechanical Engineers (ASME) issued the latest edition of the code for high temperature reactors (HTRs) in 2015. In this study, the ASME B and PV Code, Subsection HH, Subpart A, Graphite Materials was reviewed and its special features were noted. Due to the brittleness of graphite, damage-tolerant design procedures different from those for conventional metals were adopted, based on semi-probabilistic approaches. A unique additional classification, SRC, is assigned to graphite components, and full 3-D FEM or an equivalent stress analysis method is required. In specific conditions, oxidation and viscoelasticity analyses of the material are required. A fatigue damage rule has not been established yet

  8. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  9. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function...... for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding...... strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function....
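
For reference, the classical Blahut-Arimoto iteration for the ordinary rate-distortion function, the starting point that the paper extends to action-dependent side information, looks like the sketch below (binary example; the slope parameter beta trades rate against distortion):

```python
import math

def blahut_arimoto_rd(p_x, dist, beta, n_iter=200):
    """Blahut-Arimoto iteration for the classical rate-distortion function.

    p_x: source distribution p(x_i); dist[i][j]: distortion d(x_i, xhat_j);
    beta: Lagrange slope trading rate against distortion.
    Returns (rate_bits, expected_distortion) at that slope.
    """
    n, m = len(p_x), len(dist[0])
    q = [1.0 / m] * m  # reproduction marginal q(xhat), initialised uniform
    cond = [[1.0 / m] * m for _ in range(n)]
    for _ in range(n_iter):
        # update q(xhat|x) proportional to q(xhat) * exp(-beta * d(x, xhat))
        for i in range(n):
            w = [q[j] * math.exp(-beta * dist[i][j]) for j in range(m)]
            s = sum(w)
            cond[i] = [wj / s for wj in w]
        # update the reproduction marginal from the new conditional
        q = [sum(p_x[i] * cond[i][j] for i in range(n)) for j in range(m)]
    rate = sum(p_x[i] * cond[i][j] * math.log2(cond[i][j] / q[j])
               for i in range(n) for j in range(m) if cond[i][j] > 0)
    dist_avg = sum(p_x[i] * cond[i][j] * dist[i][j]
                   for i in range(n) for j in range(m))
    return rate, dist_avg
```

Sweeping beta from 0 upward traces the rate-distortion curve; for the uniform binary source with Hamming distortion, a large beta drives the distortion toward 0 at a rate of 1 bit, while beta = 0 gives zero rate at distortion 1/2.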

  10. The Complexity integrated-Instruments components media of IPA at Elementary School

    Directory of Open Access Journals (Sweden)

    Angreni Siska

    2018-01-01

Full Text Available This research aims at describing the complexity of the Integrated Instrument Components (CII) media used in science learning at elementary schools in District Siulak Mukai and in District Siulak. The research applied a descriptive method that included survey forms; the instruments used were observation sheets. The results showed that the natural-science CII media at elementary schools in District Siulak were more complex than those at elementary schools in District Siulak Mukai.

  11. Optimization of virtual source parameters in neutron scattering instrumentation

    International Nuclear Information System (INIS)

    Habicht, K; Skoulatos, M

    2012-01-01

    We report on phase-space optimizations for neutron scattering instruments employing horizontal focussing crystal optics. Defining a figure of merit for a generic virtual source configuration we identify a set of optimum instrumental parameters. In order to assess the quality of the instrumental configuration we combine an evolutionary optimization algorithm with the analytical Popovici description using multidimensional Gaussian distributions. The optimum phase-space element which needs to be delivered to the virtual source by preceding neutron optics may be obtained using the same algorithm which is of general interest in instrument design.

  12. Backscattering at a pulsed neutron source, the MUSICAL instrument

    International Nuclear Information System (INIS)

    Alefeld, B.

    1995-01-01

    In the first part the principles of the neutron backscattering method are described and some simple considerations about the energy resolution and the intensity are presented. A prototype of a backscattering instrument, the first Juelich instrument, is explained in some detail and a representative measurement is shown which was performed on the backscattering instrument IN10 at the ILL in Grenoble. In the second part a backscattering instrument designed for a pulsed neutron source is proposed. It is shown that a rather simple modification, which consists in the replacement of the Doppler drive of the conventional backscattering instrument by a multi silicon monochromator crystal (MUSICAL) leads to a very effective instrument, benefitting from the peak flux of the pulsed source. ((orig.))

  13. Pulsed neutron source and instruments at neutron facility

    Energy Technology Data Exchange (ETDEWEB)

    Teshigawara, Makoto; Aizawa, Kazuya; Suzuki, Jun-ichi; Morii, Yukio; Watanabe, Noboru [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-11-01

We report the results of design studies on the optimal target shape, target-moderator coupling, optimal layout of moderators, and neutron instruments for a next-generation pulsed spallation source in JAERI. The source utilizes a projected high-intensity proton accelerator (linac: 1.5 GeV, ~8 MW in total beam power; compressor ring: ~5 MW). We discuss the target neutronics, moderators and their layout. The source is designed to have at least 30 beam lines equipped with more than 40 instruments, which are selected tentatively on the basis of present knowledge. (author)

  14. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games

    Science.gov (United States)

    Alber, Julia M.; Watson, Anna M.; Barnett, Tracey E.; Mercado, Rebeccah

    2015-01-01

Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development. PMID:26167842

  15. Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.

    Science.gov (United States)

    Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M

    2015-07-01

    Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.

  16. New sources and instrumentation for neutrons in biology

    DEFF Research Database (Denmark)

    Teixeira, S. C. M.; Zaccai, G.; Ankner, J.

    2008-01-01

    Neutron radiation offers significant advantages for the study of biological molecular structure and dynamics. A broad and significant effort towards instrumental and methodological development to facilitate biology experiments at neutron sources worldwide is reviewed.......Neutron radiation offers significant advantages for the study of biological molecular structure and dynamics. A broad and significant effort towards instrumental and methodological development to facilitate biology experiments at neutron sources worldwide is reviewed....

  17. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converters performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at the Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of WEC-Sim code, and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  18. Neutron scattering instruments for the Spallation Neutron Source (SNS)

    International Nuclear Information System (INIS)

    Crawford, R.K.; Fornek, T.; Herwig, K.W.

    1998-01-01

    The Spallation Neutron Source (SNS) is a 1 MW pulsed spallation source for neutron scattering planned for construction at Oak Ridge National Laboratory. This facility is being designed as a 5-laboratory collaboration project. This paper addresses the proposed facility layout, the process for selection and construction of neutron scattering instruments at the SNS, the initial planning done on the basis of a reference set of ten instruments, and the plans for research and development (R and D) to support construction of the first ten instruments and to establish the infrastructure to support later development and construction of additional instruments

  19. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.

  20. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult
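
    The per-component sum SFACTOR tabulates can be sketched with the standard MIRD-style S-factor formula: each radiation type contributes its yield times energy, weighted by a quality factor and the fraction of energy absorbed in the target, divided by the target mass. The decay data and organ values below are made-up illustrations, not entries from the SFACTOR tables, and a single absorbed fraction is used for all components to keep the sketch short.

```python
# Sketch of a MIRD-style S-factor (rem per uCi-day), summed over radiation
# components as SFACTOR does. All numeric inputs below are hypothetical.

def s_factor(emissions, absorbed_fraction, target_mass_g):
    """Average dose equivalent to the target organ per uCi-day of residence.

    emissions: list of (yield_per_decay, energy_MeV, quality_factor),
               one entry per radiation component (alpha, electron, gamma...).
    absorbed_fraction: fraction of emitted energy absorbed in the target.
    """
    total = 0.0
    for n_i, e_mev, qf in emissions:
        # 2.13 g*rad/(uCi*h*MeV) converts activity*energy to dose rate;
        # the factor 24 converts per-hour to per-day; QF converts rad to rem.
        delta_i = 2.13 * n_i * e_mev
        total += qf * delta_i * absorbed_fraction / target_mass_g * 24.0
    return total

# Hypothetical alpha emitter: one 5.3 MeV alpha per decay (QF = 20),
# fully absorbed in a 310 g organ (self-irradiation).
print(round(s_factor([(1.0, 5.3, 20.0)], 1.0, 310.0), 2))
```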

  1. The test beamline of the European Spallation SourceInstrumentation development and wavelength frame multiplication

    International Nuclear Information System (INIS)

    Woracek, R.; Hofmann, T.; Bulat, M.; Sales, M.; Habicht, K.; Andersen, K.; Strobl, M.

    2016-01-01

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.
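
    The wavelength band and resolution quoted above follow from the basic time-of-flight relation λ = (h/m_n)·t/L. The constant h/m_n ≈ 3956 m·Å/s and the flight-path and pulse-width numbers below are assumptions for illustration, not TBL parameters from the article.

```python
# Sketch: neutron wavelength and resolution from time-of-flight, the relation
# underlying chopper-based instruments like the TBL. Numbers are assumed.

H_OVER_MN = 3956.034  # m * angstrom / s (Planck constant over neutron mass)

def wavelength_angstrom(tof_s: float, flight_path_m: float) -> float:
    """Wavelength (angstrom) of a neutron covering flight_path_m in tof_s."""
    return H_OVER_MN * tof_s / flight_path_m

def resolution(pulse_width_s: float, tof_s: float) -> float:
    """Relative wavelength resolution d(lambda)/lambda ~ dt/t for a
    chopper-defined burst of duration pulse_width_s."""
    return pulse_width_s / tof_s

# A 5 angstrom neutron over an assumed 50 m flight path arrives after ~63 ms;
# an assumed 300 us chopper burst then gives roughly half-percent resolution.
t = 5.0 * 50.0 / H_OVER_MN
print(round(t * 1e3, 1), "ms")
print(round(100 * resolution(300e-6, t), 2), "%")
```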

  2. The test beamline of the European Spallation SourceInstrumentation development and wavelength frame multiplication

    Energy Technology Data Exchange (ETDEWEB)

    Woracek, R., E-mail: robin.woracek@esss.se [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Hofmann, T.; Bulat, M. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Sales, M. [Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark); Habicht, K. [Helmholtz-Zentrum Berlin für Materialien und Energie, Hahn-Meitner Platz 1, 14109 Berlin (Germany); Andersen, K. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Strobl, M. [European Spallation Source ESS ERIC, P.O. Box 176, SE-22100 Lund (Sweden); Technical University of Denmark, Fysikvej, 2800 Kgs. Lyngby (Denmark)

    2016-12-11

    The European Spallation Source (ESS), scheduled to start operation in 2020, is aiming to deliver the most intense neutron beams for experimental research of any facility worldwide. Its long pulse time structure implies significant differences for instrumentation compared to other spallation sources which, in contrast, are all providing short neutron pulses. In order to enable the development of methods and technology adapted to this novel type of source well in advance of the first instruments being constructed at ESS, a test beamline (TBL) was designed and built at the BER II research reactor at Helmholtz-Zentrum Berlin (HZB). Operating the TBL shall provide valuable experience in order to allow for a smooth start of operations at ESS. The beamline is capable of mimicking the ESS pulse structure by a double chopper system and provides variable wavelength resolution as low as 0.5% over a wide wavelength band between 1.6 Å and 10 Å by a dedicated wavelength frame multiplication (WFM) chopper system. WFM is proposed for several ESS instruments to allow for flexible time-of-flight resolution. Hence, ESS will benefit from the TBL which offers unique possibilities for testing methods and components. This article describes the main capabilities of the instrument, its performance as experimentally verified during the commissioning, and its relevance to currently starting ESS instrumentation projects.

  3. Blind source separation dependent component analysis

    CERN Document Server

    Xiang, Yong; Yang, Zuyuan

    2015-01-01

    This book provides readers with a complete and self-contained treatment of dependent source separation, including the latest developments in the field. The book gives an overview of blind source separation, in which three promising blind separation techniques that can tackle mutually correlated sources are presented. The book further focuses on the non-negativity based methods, the time-frequency analysis based methods, and the pre-coding based methods, respectively.

  4. Fast-neutron, coded-aperture imager

    International Nuclear Information System (INIS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-01-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
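
    The deconvolution step described above can be sketched in one dimension: with a mask built from a perfect difference set, cross-correlating the detector counts with the mask pattern yields the source plus a flat baseline that subtracts out exactly. The 7-element mask from the difference set {1, 2, 4} below is a toy stand-in for the instrument's 12×12 pseudorandom aperture.

```python
# Toy 1-D cyclic coded-aperture encode/decode. The mask comes from the
# perfect difference set {1,2,4} mod 7 (v=7, k=3, lambda=1), so its cyclic
# autocorrelation is k at lag 0 and lambda elsewhere, making decoding exact.

V, K, LAM = 7, 3, 1
mask = [1 if i in (1, 2, 4) else 0 for i in range(V)]

def project(source):
    """Detector counts: cyclic convolution of the source with the mask."""
    return [sum(source[j] * mask[(i - j) % V] for j in range(V))
            for i in range(V)]

def decode(counts):
    """Cross-correlate with the mask, then remove the flat lambda baseline."""
    corr = [sum(counts[i] * mask[(i - j) % V] for i in range(V))
            for j in range(V)]
    total = sum(counts) // K                  # total source counts
    return [(c - LAM * total) // (K - LAM) for c in corr]

source = [0, 0, 5, 0, 0, 2, 0]               # two point sources
print(decode(project(source)))               # recovers the source exactly
```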

  5. Fast-neutron, coded-aperture imager

    Energy Technology Data Exchange (ETDEWEB)

    Woolf, Richard S., E-mail: richard.woolf@nrl.navy.mil; Phlips, Bernard F., E-mail: bernard.phlips@nrl.navy.mil; Hutcheson, Anthony L., E-mail: anthony.hutcheson@nrl.navy.mil; Wulf, Eric A., E-mail: eric.wulf@nrl.navy.mil

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  6. 4th generation light source instrumentation

    International Nuclear Information System (INIS)

    Lumpkin, A.

    1998-01-01

    This working group on 4th Generation Light Source (4GLS) Instrumentation was a follow-up to the opening discussion on Challenges in Beam Profiling. It was held in parallel with the Feedback Systems session. We filled the SSRL Conference Room with about 25 participants. The session opened with an introduction by Lumpkin. The target beam parameter values for a few-angstrom, self-amplified spontaneous emission (SASE) experiment and for a diffraction-limited soft x-ray storage ring source were addressed. Instrument resolution would of course need to be 2-3 times better than the value measured, if possible. The nominal targeted performance parameters are emittance (1-2π mm mrad), bunch length (100 fs), peak current (1-5 kA), beam size (10 μm), beam divergence (1 μrad), energy spread (2×10⁻⁴), and beam energy (tens of GeV). These are mostly the SASE values, and the parameters for a diffraction-limited soft x-ray source would be somewhat relaxed. Beam stability and alignment specifications in the sub-micron domain for either device are anticipated

  7. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  8. RCC-E a Design Code for I and C and Electrical Systems

    International Nuclear Information System (INIS)

    Haure, J.M.

    2015-01-01

    The paper deals with the stakes and strengths of the RCC-E code, which applies to electrical and instrumentation and control (I and C) systems and components performing safety class functions. The document interlaces specifications between owners, safety authorities, designers and suppliers with IAEA safety guides and IEC standards. The code is periodically updated and published by the French Society for Design and Construction Rules for Nuclear Island Components (AFCEN). The code is compliant with third generation PWR nuclear islands and is intended to be adapted to national regulations as needed through a companion document. The feedback experience from Fukushima and the licensing of the UK EPR in the framework of the Generic Design Assessment are lessons learnt that should be considered in upgrading the code. The code gathers a set of requirements and relevant good practices from several PWR design and construction projects related to electrical and I and C systems and components, and electrical engineering documents dealing with systems, equipment and layout designs. A comprehensive statement including some recent developments will be provided about: - Offsite and onsite source requirements, including sources dealing with the total loss of offsite and main onsite sources. - Highlights of a relevant protection level against high-frequency disturbances emitted by lightning strokes. - Interface data used by any supplier or designer, such as site data, room temperatures, equipment maximum design temperature, alternating current and direct current electrical network voltage and frequency variation ranges, and environmental conditions decoupling data. - The Environmental Qualification process, including normal, mild (earthquake resistant), harsh and severe accident ambient conditions. A tailored approach based on families, defined as combinations of mission time, duration and abnormal conditions (pressure, temperature, radiation), enables better coping with Environmental Qualifications

  9. Low-cost coding of directivity information for the recording of musical instruments

    Science.gov (United States)

    Braasch, Jonas; Martens, William L.; Woszczyk, Wieslaw

    2004-05-01

    Most musical instruments radiate sound according to characteristic spatial directivity patterns. These patterns are usually not only strongly frequency dependent, but also time-variant functions of various parameters of the instrument, such as pitch and the playing technique applied (e.g., plucking versus bowing of string instruments). To capture the directivity information when recording an instrument, Warusfel and Misdariis (2001) proposed to record an instrument using four channels, one for the monopole and the others for three orthogonal dipole parts. In the new recording setup presented here, it is proposed to store one channel at a high sampling frequency, along with directivity information that is updated only every few milliseconds. Taking the binaural sluggishness of the human auditory system into account in this way provides a low-cost coding scheme for subsequent reproduction of time-variant directivity patterns.

  10. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    htmlabstractThis deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  11. Simulation for developing new pulse neutron spectrometers I. Creation of new McStas components of moderators of JSNS

    CERN Document Server

    Tamura, I; Arai, M; Harada, M; Maekawa, F; Shibata, K; Soyama, K

    2003-01-01

    Moderator components for the McStas code have been created for the design of JSNS instruments. Three cryogenic moderators are adopted in JSNS: one is a coupled H₂ moderator for high-intensity experiments, and the other two are decoupled H₂ moderators, poisoned or unpoisoned, for high-resolution experiments. Since the characteristics of the neutron beams generated from the moderators influence the performance of pulsed neutron spectrometers, it is important to perform the Monte Carlo simulation with precisely written neutron source components. The neutron spectrum and time structure were calculated using the NMTC/JAERI97 and MCNP4a codes. The simulation parameters, which describe the pulse shape over the entire spectrum as a function of time, were optimized. In this paper, the creation of neutron source components for port No. 16, viewing the coupled H₂ moderator, and for port No. 11, viewing the decoupled H₂ moderator of JSNS, is reported.

  12. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)
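
    The mechanism described above can be sketched for a memoryless binary source: encoding shrinks [0,1) to the subinterval whose forward orbit under a piecewise linear map reproduces the message, and decoding iterates the map forward, reading one symbol per branch visited. Exact rational arithmetic stands in here for the paper's finite-precision register implementation, and the symbol probability is an assumed example.

```python
# Sketch of source coding with a piecewise linear chaotic map: the scheme is
# equivalent to arithmetic coding, with decoding done by forward iteration.
from fractions import Fraction

P0 = Fraction(3, 4)                      # assumed P(symbol 0); branch widths

def encode(bits):
    """Return a point inside the interval shared by all messages == bits."""
    lo, hi = Fraction(0), Fraction(1)
    for b in bits:
        mid = lo + (hi - lo) * P0
        lo, hi = (lo, mid) if b == 0 else (mid, hi)
    return (lo + hi) / 2

def decode(x, n):
    """Iterate the piecewise linear map forward, emitting one bit per step."""
    bits = []
    for _ in range(n):
        if x < P0:                       # left branch, slope 1/P0
            bits.append(0)
            x = x / P0
        else:                            # right branch, slope 1/(1 - P0)
            bits.append(1)
            x = (x - P0) / (1 - P0)
    return bits

msg = [0, 1, 0, 0, 1, 0, 0, 0]
print(decode(encode(msg), len(msg)) == msg)
```

Compression arises because probable messages occupy wide intervals, whose identifying points need fewer bits to specify; with infinite precision the rate approaches the source entropy, as the abstract states.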

  13. Runtime Detection of C-Style Errors in UPC Code

    Energy Technology Data Exchange (ETDEWEB)

    Pirkelbauer, P; Liao, C; Panas, T; Quinlan, D

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, the ROSE Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.
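
    The division of labor described above (instrumented code reporting events to a runtime monitor holding shadow state) can be sketched conceptually: allocations and accesses are funneled through a monitor that keeps track of block bounds and flags C-style errors such as an off-by-one index. This is a Python illustration of the idea only, not ROSE-CIRM's actual API or implementation, which instruments real UPC/C code.

```python
# Conceptual sketch of a CIRM-style runtime monitor: shadow state records
# allocations, and every (instrumented) access is checked against it.

class RuntimeMonitor:
    def __init__(self):
        self.shadow = {}                      # block id -> allocated length

    def on_alloc(self, block_id, length):
        self.shadow[block_id] = length

    def on_free(self, block_id):
        del self.shadow[block_id]

    def on_access(self, block_id, index):
        if block_id not in self.shadow:
            raise RuntimeError(f"use of unallocated block {block_id}")
        if not 0 <= index < self.shadow[block_id]:
            raise RuntimeError(
                f"out-of-bounds access: {block_id}[{index}], "
                f"length {self.shadow[block_id]}")

monitor = RuntimeMonitor()
monitor.on_alloc("buf", 8)
monitor.on_access("buf", 7)                   # in bounds: no complaint
try:
    monitor.on_access("buf", 8)               # off-by-one: flagged
except RuntimeError as err:
    print(err)
```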

  14. Fabrication of ion source components by electroforming

    International Nuclear Information System (INIS)

    Schechter, D.E.; Sluss, F.

    1983-01-01

    Several components of the Oak Ridge National Laboratory (ORNL)/Magnetic Fusion Test Facility (MFTF-B) ion source have been fabricated utilizing an electroforming process. A procedure has been developed for enclosing coolant passages in copper components by electrodepositing a thick (greater than or equal to 0.75-mm) layer of copper (electroforming) over the top of grooves machined into the copper component base. Details of the procedure to fabricate acceleration grids and other ion source components are presented

  15. Instrumentation at pulsed neutron sources

    International Nuclear Information System (INIS)

    Carpenter, J.M.; Lander, G.H.; Windsor, C.G.

    1984-01-01

    Scientific investigations involving the use of neutron beams have been centered at reactor sources for the last 35 years. Recently, there has been considerable interest in using the neutrons produced by accelerator driven (pulsed) sources. Such installations are in operation in England, Japan, and the United States. In this article a brief survey is given of how the neutron beams are produced and how they can be optimized for neutron scattering experiments. A detailed description is then given of the various types of instruments that have been, or are planned, at pulsed sources. Numerous examples of the scientific results that are emerging are given. An attempt is made throughout the article to compare the scientific opportunities at pulsed sources with the proven performance of reactor installations, and some familiarity with the latter and the general field of neutron scattering is assumed. New areas are being opened up by pulsed sources, particularly with the intense epithermal neutron beams, which promise to be several orders of magnitude more intense than can be obtained from a thermal reactor

  16. Fast-neutron, coded-aperture imager

    Science.gov (United States)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring of the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fastneutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  17. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the decoded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.

  18. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
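
    The syndrome-former idea behind this binning scheme can be sketched with a small linear code. Using a (7,4) Hamming code instead of the paper's turbo codes: the encoder transmits only the 3-bit syndrome of the 7-bit source word, and the decoder combines it with correlated side information (here assumed to differ in at most one bit) to recover the source exactly.

```python
# Sketch of asymmetric Slepian-Wolf coding via syndromes, with a (7,4)
# Hamming code standing in for the turbo codes of the paper.

# Parity-check matrix H: column j is the binary expansion of j + 1.
H = [[(j + 1) >> b & 1 for j in range(7)] for b in range(3)]

def syndrome(x):
    """Syndrome former: the 3-bit vector s = H x over GF(2)."""
    return [sum(h * xi for h, xi in zip(row, x)) % 2 for row in H]

def sw_decode(side_info, s):
    """H(y xor e) = Hy xor He, so Hy xor s names the flipped position."""
    diff = [a ^ b for a, b in zip(syndrome(side_info), s)]
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]   # 0 means y already equals x
    x_hat = list(side_info)
    if pos:
        x_hat[pos - 1] ^= 1
    return x_hat

x = [1, 0, 1, 1, 0, 0, 1]            # source word (7 bits)
y = [1, 0, 1, 0, 0, 0, 1]            # side info: one bit flipped
print(sw_decode(y, syndrome(x)))     # recovers x from 3 bits plus y
```

Compression here is 3/7 bit per source bit instead of 1, which is the binning gain; the turbo-code construction in the paper pushes this toward the Slepian-Wolf limit.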

  19. Instrumentation database specific to Trillo I NPP

    International Nuclear Information System (INIS)

    Pereira Pagan, M.B.; Saenz de Tejada, P.; Fernandez Alvarez, A.; Haya, J.

    1997-01-01

    The analysis of data on electronic instrumentation components in the Trillo I PSA has involved an extra effort, basically due to the particular characteristics of these equipment items. This analysis has different aspects depending on the type of information used: components whose data have been obtained from generic information sources (with or without Bayesian processing); components whose data have been obtained from specific German studies (TÜV); components whose data have been based directly on the historical experience of Trillo I NPP; and components whose data have been based on miscellaneous generic and specific sources. This information can also be classified into: micro components, formed by a single module or card; and macro components, formed by a set of instrumentation elements. It can be further subdivided according to the operating conditions of the components: equipment whose operation depends on the functions they perform in a particular system (e.g. reactor protection system instrumentation channels), and equipment whose operation is not associated with particular conditions (e.g. modules for motor-operated equipment). (Author)

  20. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories.

  1. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of radioactive sources covered by the code is provided which includes activities corresponding to thresholds of categories

  2. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B.

  3. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  4. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  5. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon Deliverable 3.1 where infrastructure and a domain analysis have been

  6. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  7. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
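    The coverage measure used in this study, the share of total prescription volume whose product codes appear in a DKB mapping table, can be sketched as follows; the record and code values below are invented for illustration, not taken from the paper's data:

```python
from collections import Counter

def dkb_coverage(prescription_codes, dkb_codes):
    """Volume-weighted coverage: the fraction of prescription records
    whose product code appears in a drug knowledge base mapping table."""
    volume = Counter(prescription_codes)
    covered = sum(n for code, n in volume.items() if code in dkb_codes)
    return covered / sum(volume.values())

# Hypothetical sample: four records over three codes; the DKB maps two
# of the codes, and "LOCAL-001" is an invented (unmapped) local code.
records = ["00071-0155", "00071-0155", "00093-7146", "LOCAL-001"]
dkb = {"00071-0155", "00093-7146"}
coverage = dkb_coverage(records, dkb)  # 3 of 4 records are covered
```

    Counting by prescription volume rather than by distinct code matters here because, as the abstract notes, the distribution of prescribed products is highly skewed.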

  8. Determination of the elemental distribution in cigarette components and smoke by instrumental neutron activation analysis

    International Nuclear Information System (INIS)

    Wu, D.; Landsberger, S.; Larson, S.M.

    1997-01-01

    Cigarette smoking is a major source of particles released in indoor environments. A comprehensive study of the elemental distribution in cigarettes and cigarette smoke has been completed. Specifically, concentrations of thirty elements have been determined for the components of 15 types of cigarettes. Components include tobacco, ash, butts, filters, and cigarette paper. In addition, particulate matter from mainstream smoke (MS) and sidestream smoke (SS) was analyzed. The technique of elemental determination used in the study is instrumental neutron activation analysis. The results show that certain heavy metals, such as As, Cd, K, Sb and Zn, are released into the MS and SS. These metals may then be part of the health risk of exposure to smoke. Other elements are retained, for the most part, in cigarette ash and butts. The elemental distribution among the cigarette components and smoke changes for different smoking conditions. (author)

  9. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality, and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is in the form of a probability distribution instead of a deterministic state. © 1997 American Institute of Physics.
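    The bit-partitioning strategy can be illustrated with a toy search over splits of a fixed bit budget; the exponential distortion and channel-error expressions below are invented stand-ins for the paper's rate-scalable subband codec and channel model:

```python
import math

def expected_distortion(source_bits, channel_bits, d_max=1.0, k=0.5):
    """Toy model: source distortion falls as 2^(-2*Rs); residual channel
    error probability falls as exp(-k*Rc). On a channel error, assume
    worst-case distortion d_max; otherwise the source distortion remains."""
    p_err = math.exp(-k * channel_bits)
    d_src = 2.0 ** (-2 * source_bits)
    return p_err * d_max + (1 - p_err) * d_src

def best_allocation(total_bits):
    """Exhaustively partition the budget between source and channel bits."""
    return min(((total_bits - rc, rc) for rc in range(total_bits + 1)),
               key=lambda alloc: expected_distortion(*alloc))

rs, rc = best_allocation(16)  # optimal (source, channel) split for 16 bits
```

    Exhaustive search is feasible here because the budget is tiny; the paper's codec would use its operational rate-distortion curves instead of a closed-form model.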

  10. Development of instruments and components for SANS and PNS

    Energy Technology Data Exchange (ETDEWEB)

    Park, Kook Nam; Lee, Chang Hee; Lee, C. H. and others

    2000-11-01

    The base floor of the SANS instrument was constructed from 27 steel plates with a horizontal flatness of ±0.5 mm, so that the two-dimensional position-sensitive neutron detector (2D-PSD), which is operated in a vacuum chamber, can move smoothly. Regarding the equipment, we designed and installed the inner shielding of the PNS, which will be used for research on the magnetic structure and critical phenomena of materials. In the development of experimental devices, we designed and manufactured a beam-stop exchange unit and a detector carriage for the 2D-PSD. The detector carriage controls the distance between sample and detector, and the beam-stop exchange unit protects the detector from exposure to the direct neutron beam. In particular, many experimental devices and instruments, such as the high-resolution collimator, components for the low-temperature facility, and a multi-purpose vacuum chamber, were made domestically, which is meaningful for the domestic availability and standardization of neutron instrument components.

  11. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  12. TRIPOLI-4: Monte Carlo transport code functionalities and applications; TRIPOLI-4: code de transport Monte Carlo fonctionnalites et applications

    Energy Technology Data Exchange (ETDEWEB)

    Both, J P; Lee, Y K; Mazzolo, A; Peneliau, Y; Petit, O; Roesslinger, B [CEA Saclay, Dir. de l' Energie Nucleaire (DEN), Service d' Etudes de Reacteurs et de Modelisation Avancee, 91 - Gif sur Yvette (France)

    2003-07-01

    Tripoli-4 is a three-dimensional calculation code using the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. This code is used in four application fields: protection studies, criticality studies, core studies and instrumentation studies. Geometry, cross sections, description of sources, principle. (N.C.)

  13. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones...

  14. Monte Carlo simulation of a coded-aperture thermal neutron camera

    International Nuclear Information System (INIS)

    Dioszegi, I.; Salwen, C.; Forman, L.

    2011-01-01

    We employed the MCNPX Monte Carlo code to simulate image formation in a coded-aperture thermal-neutron camera. The camera, developed at Brookhaven National Laboratory (BNL), consists of a 20 x 17 cm² active area ³He-filled position-sensitive wire chamber in a cadmium enclosure box. The front of the box is a coded-aperture cadmium mask (at present with three different resolutions). We tested the detector experimentally with various arrangements of moderated point-neutron sources. The purpose of using the Monte Carlo modeling was to develop an easily modifiable model of the device to predict the detector's behavior using different mask patterns, and also to generate images of extended-area sources or large numbers (up to ten) of them, that is important for nonproliferation and arms-control verification, but difficult to achieve experimentally. In the model, we utilized the advanced geometry capabilities of the MCNPX code to simulate the coded aperture mask. Furthermore, the code simulated the production of thermal neutrons from fission sources surrounded by a thermalizer. With this code we also determined the thermal-neutron shadow cast by the cadmium mask; the calculations encompassed fast- and epithermal-neutrons penetrating into the detector through the mask. Since the process of signal production in ³He-filled position-sensitive wire chambers is well known, we omitted this part from our modeling. Simplified efficiency values were used for the three (thermal, epithermal, and fast) neutron-energy regions. Electronic noise and the room's background were included as a uniform irradiation component. We processed the experimental- and simulated-images using identical LabVIEW virtual instruments. (author)
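    The coded-aperture imaging principle behind such cameras can be illustrated with a minimal 1D sketch; the random mask and point source below are invented, not the BNL camera's actual pattern. The detector records the mask's shadowgram, and circularly correlating it with a balanced decoding array recovers the source position:

```python
import numpy as np

rng = np.random.default_rng(0)
mask = rng.integers(0, 2, 64)              # 1 = open element, 0 = opaque
decoder = np.where(mask == 1, 1.0, -1.0)   # balanced decoding weights

sky = np.zeros(64)
sky[10] = 100.0                            # a single point source

# Shadowgram: each detector offset s sees the sky through the shifted mask.
shadow = np.array([np.dot(np.roll(mask, -s), sky) for s in range(64)])
# Reconstruction: circular correlation of the shadowgram with the decoder.
image = np.array([np.dot(np.roll(decoder, -t), shadow) for t in range(64)])
```

    The peak of `image` falls at the true source position; real instruments use mask families (e.g., uniformly redundant arrays) whose cyclic autocorrelation sidelobes are exactly flat, rather than a plain random pattern.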

  15. Marine and Hydrokinetic Energy Metocean Data-use, Sources, and Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Sirnivas, Senu [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2018-01-02

    Marine and Hydrokinetic Energy Metocean Data-use, Sources, and Instrumentation presentation from Water Power Technologies Office Peer Review, FY14-FY16. This project aims to accelerate deployment of marine and hydrokinetic (MHK) technology by establishing: 1) relevant existing and evolving standards and guidelines, 2) meteorological and oceanic (metocean) data use, 3) data sources, and 4) instrumentation guidance for siting, design, and operation of MHK devices along the U.S. coastline.

  16. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by one source over an additive white Gaussian noise (AWGN) channel when the output of the other source is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the source. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of Shannon and Slepian-Wolf theorems.

  17. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  18. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model in which only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum, as well as the neutron spectrum of all the neutrons emitted from the target, show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  19. Decision criteria for software component sourcing: steps towards a framework

    NARCIS (Netherlands)

    Kusters, R.J.; Pouwelse, L.; Martin, H.; Trienekens, J.J.M.; Hammoudi, Sl.; Maciaszek, L.; Missikoff, M.M.; Camp, O.; Cordeiro, J.

    2016-01-01

    Software developing organizations nowadays have a wide choice when it comes to sourcing software components. This choice ranges from developing or adapting in-house developed components via buying closed source components to utilizing open source components. This study seeks to determine criteria

  20. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    CERN Document Server

    Lu Cun Heng

    1999-01-01

    It is described how the calculation formula and survey results are obtained, on the basis of the stacking principle of gamma rays and the features of a hexagonal surface source, when a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and how they are obtained, in light of the exchange principle of gamma rays, when a single point source replaces the unlimited surface source to calibrate the aerial survey instrument in the air. Meanwhile, through theoretical analysis, the receiving rate of the crystal bottom and side surfaces is calculated for the gamma rays received by the aerial survey instrument. A mathematical expression for the decay of gamma rays with height, following the Jinge function regularity, is obtained. According to this regularity, the coefficient of gamma-ray absorption by air and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measured values of the bottom-surface receiving cou...
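    The last step described here, deriving an air absorption coefficient from ground and airborne count rates, can be sketched under a simple exponential-attenuation assumption; the abstract's actual height dependence is more elaborate, and the numbers below are invented:

```python
import math

def attenuation_coefficient(count_ground, count_air, height_m):
    """Leading-order estimate of the air attenuation coefficient mu from
    the model N(h) = N(0) * exp(-mu * h), given a ground count rate and a
    count rate measured at altitude height_m (invented sample values below)."""
    return math.log(count_ground / count_air) / height_m

# Hypothetical measurements: 1200 cps on the ground, 800 cps at 50 m.
mu = attenuation_coefficient(1200.0, 800.0, 50.0)  # per metre
```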

  1. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
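    The least-squares syndrome step described here can be sketched on a toy frame expansion; the random matrices below stand in for the paper's actual OFB, and the error position is assumed known rather than found by hypothesis testing:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy frame expansion: 6 transmitted coefficients carry a 4-sample signal,
# so the left null space of F yields 2 parity (syndrome) dimensions.
F = rng.standard_normal((6, 4))
U, _, _ = np.linalg.svd(F)
P = U[:, 4:].T                      # parity-check matrix: P @ F ~ 0

x = rng.standard_normal(4)          # message coefficients
e = np.zeros(6)
e[2] = 3.0                          # impulse error introduced on the channel
received = F @ x + e

# The syndrome depends only on the error, since P annihilates F @ x.
syndrome = P @ received

# Under the hypothesis that the error sits at position 2, estimate its
# amplitude by solving P[:, [2]] * amp = syndrome in the least-squares sense.
amp, *_ = np.linalg.lstsq(P[:, [2]], syndrome, rcond=None)
```

    In the paper's scheme the position itself would be chosen by maximizing a likelihood over candidate impulse-position hypotheses; here it is fixed to keep the sketch short.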

  2. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  3. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full-energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and much work has progressed in various ways. Moens et al. established the concept of the effective solid angle by considering the attenuation of γ-rays in the source, the media and the detector; this concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over several years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined using a standard point source into one for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) γ-ray sources, and compared them with the efficiency curves obtained in this study. The 200∼1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check the effect of linear attenuation only. We will use interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
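    The solid-angle idea can be illustrated with a simple Monte Carlo estimate of the geometric efficiency of a disc detector seen from a point source; this sketches only the geometry factor, not the ESA code's semi-empirical attenuation treatment, and all dimensions are invented:

```python
import math
import random

def hit_fraction(src, det_z=0.0, det_r=2.0, n=200_000, seed=7):
    """Fraction of isotropically emitted rays from point `src` that cross a
    disc of radius det_r in the plane z = det_z, i.e. an estimate of the
    geometric efficiency (effective solid angle / 4*pi)."""
    rng = random.Random(seed)
    hits = 0
    x0, y0, z0 = src
    for _ in range(n):
        # Isotropic direction: uniform cos(theta), uniform phi.
        cz = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - cz * cz)
        cx, cy = s * math.cos(phi), s * math.sin(phi)
        if cz == 0.0:
            continue                      # ray parallel to the detector plane
        t = (det_z - z0) / cz
        if t <= 0.0:
            continue                      # ray moves away from the plane
        x, y = x0 + t * cx, y0 + t * cy
        if x * x + y * y <= det_r * det_r:
            hits += 1
    return hits / n

# On-axis point source at distance d: compare against the closed form
# omega / 4pi = (1 - d / sqrt(d^2 + R^2)) / 2.
d, R = 5.0, 2.0
exact = 0.5 * (1.0 - d / math.sqrt(d * d + R * R))
mc = hit_fraction((0.0, 0.0, d), det_r=R)
```

    Averaging `hit_fraction` over sample points drawn inside a source volume would give the volume-source analogue that the ESA code handles semi-empirically.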

  4. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    Science.gov (United States)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  5. Comparison of SANS instruments at reactors and pulsed sources

    International Nuclear Information System (INIS)

    Thiyagarajan, P.; Epperson, J.E.; Crawford, R.K.; Carpenter, J.M.; Hjelm, R.P. Jr.

    1992-01-01

    Small angle neutron scattering is a general purpose technique to study long range fluctuations and hence has been applied in almost every field of science for material characterization. SANS instruments can be built at steady state reactors and at the pulsed neutron sources where time-of-flight (TOF) techniques are used. The steady state instruments usually give data over small q ranges and in order to cover a large q range these instruments have to be reconfigured several times and SANS measurements have to be made. These instruments have provided better resolution and higher data rates within their restricted q ranges until now, but the TOF instruments are now developing to comparable performance. The TOF-SANS instruments, by using a wide band of wavelengths, can cover a wide dynamic q range in a single measurement. This is a big advantage for studying systems that are changing and those which cannot be exactly reproduced. This paper compares the design concepts and performances of these two types of instruments

  6. Under sodium reliability tests on core components and in-core instrumentation

    International Nuclear Information System (INIS)

    Ruppert, E.; Stehle, H.; Vinzens, K.

    1977-01-01

    A sodium test facility for fast breeder core components (AKB), built by INTERATOM at Bensberg, has been operating since 1971 to test fuel dummies and blanket elements as well as absorber elements under simulated normal and extreme reactor conditions. Individual full-scale fuel or blanket elements and arrays of seven elements, modelling a section of the SNR-300 reactor core, have been tested under a wide range of sodium mass flows and isothermal test conditions up to 925 K, as well as under cyclically changed temperature transients. In addition to endurance testing of the core components, special sodium and high-temperature instrumentation is provided to investigate the thermohydraulic and vibrational behaviour of the test objects. During all test periods the main subassembly characteristics could be reproduced and the reliability of the instrumentation could be proven. (orig.) [de]

  7. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
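A hedged sketch of the kind of lexical fingerprinting such detectors build on: normalize the source so that identifier renaming has no effect, then compare k-gram sets. The actual algorithms in Sherlock, JPlag and Moss are considerably more sophisticated (Moss, for instance, uses winnowing over fingerprints); this only illustrates why renaming variables is not enough to evade detection.

```python
import re

def normalize(src):
    # crude lexical normalization: drop comments and whitespace, and map
    # every identifier-like token (keywords included) to the same symbol
    src = re.sub(r'//.*|#.*', '', src)
    src = re.sub(r'[A-Za-z_]\w*', 'ID', src)
    return re.sub(r'\s+', '', src)

def kgrams(text, k=5):
    return {text[i:i + k] for i in range(len(text) - k + 1)}

def similarity(a, b, k=5):
    """Jaccard similarity of k-gram sets of the normalized sources."""
    ga, gb = kgrams(normalize(a), k), kgrams(normalize(b), k)
    return len(ga & gb) / len(ga | gb)

original = "total = 0\nfor x in values:\n    total += x\n"
renamed  = "s = 0\nfor item in data:\n    s += item\n"
print(similarity(original, renamed))   # → 1.0 — renaming alone changes nothing
```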

  8. TRIPOLI-4: Monte Carlo transport code functionalities and applications

    International Nuclear Information System (INIS)

    Both, J.P.; Lee, Y.K.; Mazzolo, A.; Peneliau, Y.; Petit, O.; Roesslinger, B.

    2003-01-01

    Tripoli-4 is a three-dimensional code that uses the Monte Carlo method to simulate the transport of neutrons, photons, electrons and positrons. The code is used in four application fields: radiation protection studies, criticality studies, core studies and instrumentation studies. The geometry, cross sections, description of sources and operating principles are outlined. (N.C.)

  9. Nuclear component design ontology building based on ASME codes

    International Nuclear Information System (INIS)

    Bao Shiyi; Zhou Yu; He Shuyan

    2005-01-01

    The adoption of ontology analysis in the study of concept knowledge acquisition and representation for the nuclear component design process, based on computer-supported cooperative work (CSCW), makes it possible to share and reuse the extensive concept knowledge of multiple disciplinary domains. A practical ontology building method is accordingly proposed, based on the Protege knowledge model in combination with both top-down and bottom-up approaches together with Formal Concept Analysis (FCA). FCA exhibits its advantages in the way it helps establish and improve the taxonomic hierarchy of concepts and resolve concept conflicts that occur in modelling multi-disciplinary domains. With Protege-3.0 as the ontology building tool, a nuclear component design ontology based on ASME codes is developed using this method. The ontology serves as the basis for concept knowledge sharing and reuse in nuclear component design. (authors)
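The FCA step can be illustrated with a toy formal context: a formal concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent, and the concepts ordered by extent inclusion form the taxonomic hierarchy FCA builds. The objects and attributes below are invented for illustration and are not taken from the ASME-based ontology.

```python
from itertools import combinations

# Tiny formal context: design objects vs. attributes (illustrative only)
objects = {
    "pressure_vessel": {"ASME_III", "pressure_boundary"},
    "support_skirt":   {"ASME_III", "structural"},
    "pump_casing":     {"ASME_III", "pressure_boundary", "rotating"},
}
attributes = set().union(*objects.values())

def common_attrs(objs):
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

def common_objs(attrs):
    return {o for o, a in objects.items() if attrs <= a}

# Brute-force closure: every concept arises as (extent, intent) with
# extent = objects of (attributes of some object subset)
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(sorted(objects), r):
        intent = common_attrs(set(objs))
        extent = common_objs(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

for extent, intent in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(extent), "->", sorted(intent))
```

Brute force is exponential and only viable for toy contexts; real FCA tools use dedicated algorithms.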

  10. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  11. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost

  12. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  13. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
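The coset-code idea can be made concrete with the classic toy case of a Hamming(7,4) syndrome encoder: the encoder compresses a 7-bit word to its 3-bit syndrome, and a decoder holding correlated side information recovers the word exactly. The at-most-one-bit-difference model below is a stand-in for the inter-band correlation of a hyperspectral cube, not the paper's actual codes.

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])  # column i is the binary code of i+1

def encode(x):
    """Transmit only the 3-bit syndrome of the 7-bit source word."""
    return H @ x % 2

def decode(syndrome, y):
    """Recover x from its syndrome plus side information y (<= 1 bit off)."""
    s = (H @ y + syndrome) % 2        # syndrome of the error pattern x ^ y
    x_hat = y.copy()
    if s.any():
        pos = int(''.join(map(str, s)), 2) - 1   # error position, from H's columns
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1               # side information: one bit flipped
print(decode(encode(x), y))           # recovers x from 3 transmitted bits
```

The rate saving (3 bits instead of 7) is exactly the asymmetric Slepian-Wolf gain for this correlation model; stronger codes play the same role for realistic inter-band statistics.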

  14. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers to understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs....... Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward- and reverse restructurings...... are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...

  15. ASME code and ratcheting in piping components. Final technical report

    International Nuclear Information System (INIS)

    Hassan, T.; Matzen, V.C.

    1999-01-01

    The main objective of this research is to develop an analysis program which can accurately simulate ratcheting in piping components subjected to seismic or other cyclic loads. Ratcheting is defined as the accumulation of deformation in structures and materials with cycles. This phenomenon has been demonstrated to cause failure of piping components (known as ratcheting-fatigue failure) and is yet to be understood clearly. The design and analysis methods in the ASME Boiler and Pressure Vessel Code for ratcheting of piping components are not well accepted by the practicing engineering community. This research project attempts to understand the ratcheting-fatigue failure mechanisms and improve analysis methods for ratcheting predictions. In the first step a state-of-the-art testing facility is developed for quasi-static cyclic and seismic testing of straight and elbow piping components. A systematic testing program to study ratcheting is developed. Some tests have already been performed and the rest will be completed by summer '99. Significant progress has been made in the area of constitutive modeling. A number of sophisticated constitutive models have been evaluated in terms of their simulations for a broad class of ratcheting responses. From the knowledge gained from this evaluation study two improved models are developed. These models are demonstrated to have promise in simulating ratcheting responses in piping components. Hence, implementation of these improved models in widely used finite element programs, ANSYS and/or ABAQUS, is in progress. Upon achieving improved finite element programs for simulation of ratcheting, the ASME Code provisions for ratcheting of piping components will be reviewed and more rational methods will be suggested. Also, simplified analysis methods will be developed for operability studies of piping components and systems. Some of the future work will be performed under the auspices of the Center for Nuclear Power Plant Structures
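The ratcheting mechanism itself can be sketched with a uniaxial Armstrong-Frederick toy integration: under stress-controlled cycling with nonzero mean stress, the recall term makes the plastic modulus softer in tension than in compression, so plastic strain accumulates every cycle. The parameters below are illustrative, not calibrated to any piping material, and this is a textbook baseline rather than the improved models the authors developed.

```python
# Uniaxial Armstrong-Frederick explicit integration (stress-driven)
SY, C, GAMMA = 200.0, 2.0e4, 100.0   # yield stress, hardening modulus, recall term

def run_cycles(s_min=-100.0, s_max=350.0, n_cycles=4, dsig=0.05):
    sigma = alpha = ep = 0.0         # stress, backstress, plastic strain
    peaks = []                       # plastic strain at each tension peak
    targets = [s_max if i % 2 == 0 else s_min for i in range(2 * n_cycles)]
    for target in targets:
        step = dsig if target > sigma else -dsig
        for _ in range(int(round(abs(target - sigma) / dsig))):
            sigma += step
            s = 1.0 if sigma - alpha > SY else (-1.0 if sigma - alpha < -SY else 0.0)
            if s:                    # plastic step: consistency |sigma - alpha| = SY
                dep = abs(step) / (C - s * GAMMA * alpha)
                ep += s * dep
                alpha += (C * s - GAMMA * alpha) * dep
        if target == s_max:
            peaks.append(ep)
    return peaks

peaks = run_cycles()
print([round(p, 4) for p in peaks])  # plastic strain grows every cycle
```

Because dα = (C - γα)dεp in tension but dα = -(C + γα)|dεp| in compression, each full cycle leaves a net positive plastic strain increment: exactly the ratcheting the tests measure.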

  16. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati...... transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate saving of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources....

  17. Design codes for gas cooled reactor components

    International Nuclear Information System (INIS)

    1990-12-01

    High-temperature gas-cooled reactor (HTGR) plants have been under development for about 30 years and experimental and prototype plants have been operated. The main line of development has been electricity generation based on the steam cycle. In addition the potential for high primary coolant temperature has resulted in research and development programmes for advanced applications including the direct cycle gas turbine and process heat applications. In order to compare results of the design techniques of various countries for high temperature reactor components, the IAEA established a Co-ordinated Research Programme (CRP) on Design Codes for Gas-Cooled Reactor Components. The Federal Republic of Germany, Japan, Switzerland and the USSR participated in this Co-ordinated Research Programme. Within the frame of this CRP a benchmark problem was established for the design of the hot steam header of the steam generator of an HTGR for electricity generation. This report presents the results of that effort. The publication also contains 5 reports presented by the participants. A separate abstract was prepared for each of these reports. Refs, figs and tabs

  18. Calculus of the Power Spectral Density of Ultra Wide Band Pulse Position Modulation Signals Coded with Totally Flipped Code

    Directory of Open Access Journals (Sweden)

    DURNEA, T. N.

    2009-02-01

    Full Text Available UWB-PPM systems are known to have a power spectral density (p.s.d.) consisting of a continuous portion and a line spectrum composed of energy components at discrete frequencies. These components are the major source of interference to narrowband systems operating in the same frequency interval and prevent harmless coexistence of UWB-PPM and narrowband systems. A new code, denoted Totally Flipped Code (TFC), is applied to eliminate these discrete spectral components. The coded signal carries the information in the pulse position and has its amplitude coded to generate a continuous p.s.d. We have designed the code and calculated the power spectral density of the coded signals. The power spectrum has no discrete components and its envelope is largely flat within the bandwidth, with a maximum at its center and a null at DC. These characteristics make the code well suited for implementation in UWB systems based on PPM-type modulation, as it ensures a continuous spectrum while preserving PPM modulation performance.
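The effect the paper targets can be reproduced numerically: averaging the spectra of many PPM pulse trains makes the frame-rate lines visible, while randomizing pulse polarity, used here as a generic stand-in for the amplitude coding TFC performs (the actual TFC construction is in the paper), suppresses them because the expected pulse amplitude becomes zero.

```python
import numpy as np

rng = np.random.default_rng(0)
NF, NFRAMES, NTRIALS = 32, 64, 300      # samples/frame, frames, realizations
N = NF * NFRAMES

def avg_spectrum(coded):
    """Average |FFT|^2 of PPM trains of ideal impulses, one per frame."""
    acc = np.zeros(N)
    for _ in range(NTRIALS):
        x = np.zeros(N)
        offs = rng.integers(0, 8, NFRAMES)              # PPM offset 0..7
        amps = rng.choice([-1.0, 1.0], NFRAMES) if coded else np.ones(NFRAMES)
        x[np.arange(NFRAMES) * NF + offs] = amps
        acc += np.abs(np.fft.fft(x)) ** 2
    return acc / NTRIALS

line_bin = N // NF                      # first harmonic of the frame rate
ratios = {}
for coded in (False, True):
    S = avg_spectrum(coded)
    ratios[coded] = S[line_bin] / S[line_bin - 1]   # line bin vs. neighbor
print(round(ratios[False], 1), round(ratios[True], 2))
```

Uncoded, the frame-rate bin towers over its neighbor (a spectral line); with random polarity the ratio drops to about one (continuous spectrum only).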

  19. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  20. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  1. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent work on dynamic packet size control in WSNs enhances the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to that hop's link quality. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption.
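A minimal sketch of hash-chain source authentication, the baseline such schemes build on: each packet carries the hash of the next, so only the first hash needs an authenticated channel (a digital signature, omitted here). Note this sketch assumes a fixed packet size; the paper's contribution is precisely handling per-hop size changes, which this baseline cannot.

```python
import hashlib

def build_chain(image, packet_size):
    """Split the code image and chain packets back-to-front with SHA-256."""
    chunks = [image[i:i + packet_size] for i in range(0, len(image), packet_size)]
    packets, next_hash = [], b''
    for chunk in reversed(chunks):
        packets.append(chunk + next_hash)                # payload + hash of next
        next_hash = hashlib.sha256(packets[-1]).digest()
    packets.reverse()
    return next_hash, packets          # next_hash is the root to be signed

def verify_and_extract(root, packets, packet_size):
    expected, image = root, b''
    for pkt in packets:
        if hashlib.sha256(pkt).digest() != expected:
            raise ValueError("packet failed authentication")
        image += pkt[:packet_size]
        expected = pkt[packet_size:]   # hash pinned for the next packet
    return image

image = bytes(range(256)) * 4          # 1 KiB dummy code image
root, packets = build_chain(image, 64)
print(verify_and_extract(root, packets, 64) == image)  # → True
```

Each received packet is verified immediately against the hash carried by its predecessor, so a node never buffers or forwards unauthenticated data.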

  2. An explication of the Graphite Structural Design Code of core components for the High Temperature Engineering Test Reactor

    International Nuclear Information System (INIS)

    Iyoku, Tatsuo; Ishihara, Masahiro; Toyota, Junji; Shiozawa, Shusaku

    1991-05-01

    The integrity evaluation of the core graphite components for the High Temperature Engineering Test Reactor (HTTR) will be carried out based upon the Graphite Structural Design Code for core components. In the application of this design code, it is necessary to make clear the basic concept to evaluate the integrity of core components of HTTR. Therefore, considering the detailed design of core graphite structures such as fuel graphite blocks, etc. of HTTR, this report explicates the design code in detail about the concepts of stress and fatigue limits, integrity evaluation method of oxidized graphite components and thermal irradiation stress analysis method etc. (author)

  3. Limit of detection in the presence of instrumental and non-instrumental errors: study of the possible sources of error and application to the analysis of 41 elements at trace levels by inductively coupled plasma-mass spectrometry technique

    International Nuclear Information System (INIS)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Tapparo, Andrea; Pastore, Paolo

    2015-01-01

    In this paper the detection limit was estimated for signals affected by two error contributions, namely instrumental errors and operational (non-instrumental) errors. The detection limit was obtained theoretically following the hypothesis-testing schema implemented with the calibration-curve methodology. The experimental calibration design was based on J standards measured I times, with non-instrumental errors affecting each standard systematically but randomly among the J levels. A two-component variance regression was performed to determine the calibration curve and to define the detection limit under these conditions. The detection limit values obtained from the calibration of 41 elements at trace levels by ICP-MS were larger than those obtained from a one-component variance regression. The role of reagent impurities in the instrumental errors was ascertained and taken into account. Environmental pollution was studied as a source of non-instrumental errors; its role was evaluated by Principal Component Analysis (PCA) applied to a series of nine calibrations performed over fourteen months. The influence of the seasonality of the environmental pollution on the detection limit was evidenced for many elements usually present in urban air particulate. The results clearly indicated the need for the two-component variance regression approach when calibrating all the elements usually present in the environment at significant concentration levels. - Highlights: • Limit of detection was obtained considering a two-variance-component regression. • Calibration data may be affected by instrumental and operational-condition errors. • The calibration model was applied to determine 41 elements at trace level by ICP-MS. • Non-instrumental errors were evidenced by PCA analysis
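The two-component idea can be sketched numerically: simulated standards share a per-standard operational error on top of per-measurement instrumental noise, and the blank-signal standard deviation entering the detection limit combines both components. The 3.29 factor is the usual α = β = 0.05 value; the simulated data and the simple moment estimator below are illustrative, not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
slope, intercept = 50.0, 5.0
sd_instr, sd_oper = 2.0, 6.0               # within- / between-standard errors
levels = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # J standards
I = 5                                                # replicates per standard

y = (intercept + slope * levels[:, None]
     + rng.normal(0, sd_oper, (len(levels), 1))      # hits a whole standard
     + rng.normal(0, sd_instr, (len(levels), I)))    # hits each measurement

means = y.mean(axis=1)
b1, b0 = np.polyfit(levels, means, 1)                # calibration on standard means
s_within = np.sqrt(((y - means[:, None]) ** 2).sum() / (len(levels) * (I - 1)))
resid_var = ((means - (b0 + b1 * levels)) ** 2).sum() / (len(levels) - 2)
s_between2 = max(resid_var - s_within**2 / I, 0.0)   # between-standard component
s0 = np.sqrt(s_between2 + s_within**2)   # total SD of a single blank signal
lod = 3.29 * s0 / b1                     # detection limit, concentration domain
print(round(lod, 2))
```

Ignoring the between-standard component (using s_within alone) would understate s0 and hence the detection limit, which is the paper's central warning.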

  4. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  5. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  6. Enabling instrumentation and technology for 21st century light sources

    Energy Technology Data Exchange (ETDEWEB)

    Byrd, J.M.; Shea, T.J.; Denes, P.; Siddons, P.; Attwood, D.; Kaertner, F.; Moog, L.; Li, Y.; Sakdinawat, A.; Schlueter, R.

    2010-06-01

    We present the summary from the Accelerator Instrumentation and Technology working group, one of the five working groups that participated in the BES-sponsored Workshop on Accelerator Physics of Future Light Sources held in Gaithersburg, MD September 15-17, 2009. We describe progress and potential in three areas: attosecond instrumentation, photon detectors for user experiments, and insertion devices.

  7. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source to source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...
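The shape of such a transformation can be shown with a deliberately naive toy pass: real source-to-source compilers perform whole-program dataflow and dependence analysis, whereas the regex "dependence test" below is only a placeholder that rejects loops indexing an array at shifted positions of the loop variable.

```python
import re

def parallelize(source):
    """Annotate FORTRAN 77 DO loops with OpenMP when a naive check passes."""
    lines = source.splitlines()
    out = []
    for i, line in enumerate(lines):
        m = re.match(r'\s*do\s+\d*\s*([a-z]\w*)\s*=', line, re.IGNORECASE)
        if m:
            var = m.group(1).lower()
            flat = '\n'.join(lines[i + 1:]).lower().replace(' ', '')
            # placeholder dependence test: no a(i-1)/a(i+1)-style accesses
            if f'({var}-' not in flat and f'({var}+' not in flat:
                out.append('!$OMP PARALLEL DO')
        out.append(line)
    return '\n'.join(out)

loop = """      do 10 i = 1, n
         c(i) = a(i) + b(i)
10    continue"""
print(parallelize(loop))
```

A loop such as `a(i) = a(i-1) + b(i)` is (correctly) left untouched by this check; everything harder, reductions, aliasing, interprocedural effects, is what the actual compiler machinery exists for.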

  8. Small-angle scattering instruments on a 1 MW long pulse spallation source

    Energy Technology Data Exchange (ETDEWEB)

    Olah, G.A. [Los Alamos National Lab., Chemical Science and Technology Div., Biosciences and Biotechnology Group, Los Alamos, NM (United States); Hjelm, R.P. [Los Alamos National Lab., Neutron Scattering Center, Los Alamos, NM (United States); Seeger, P.A.

    1995-11-01

    We have designed and optimized two small-angle neutron scattering instruments for installation at a 1 MW long pulse spallation source. The first of these instruments measures a Q-domain from 0.002 to 0.44 A⁻¹, and the second from 0.00069 to 0.17 A⁻¹. Design characteristics were determined and optimization was done using a Monte Carlo instrument simulation package under development at Los Alamos. A performance comparison was made between these instruments and D11 at the ILL by evaluating the scattered intensity and rms resolution of the instrument response function at different Q values, for the various instrument configurations needed to span a Q-range of 0.0007-0.44 A⁻¹. We concluded that the first of these instruments outperforms D11 in both intensity and resolution over most of the Q-domain and that the second is comparable to D11. Comparisons were also made of the performance of the optimized long pulse instruments with different reflectors and with a short pulse source, from which we concluded that there is an optimal moderator-reflector combination, and that a short pulse does not substantially improve the instrument performance. (author) 7 figs., 2 tabs., 9 refs.

  9. Small-angle scattering instruments on a 1 MW long pulse spallation source

    International Nuclear Information System (INIS)

    Olah, G.A.; Hjelm, R.P.; Seeger, P.A.

    1995-01-01

    We have designed and optimized two small-angle neutron scattering instruments for installation at a 1 MW long pulse spallation source. The first of these instruments measures a Q-domain from 0.002 to 0.44 A⁻¹, and the second from 0.00069 to 0.17 A⁻¹. Design characteristics were determined and optimization was done using a Monte Carlo instrument simulation package under development at Los Alamos. A performance comparison was made between these instruments and D11 at the ILL by evaluating the scattered intensity and rms resolution of the instrument response function at different Q values, for the various instrument configurations needed to span a Q-range of 0.0007-0.44 A⁻¹. We concluded that the first of these instruments outperforms D11 in both intensity and resolution over most of the Q-domain and that the second is comparable to D11. Comparisons were also made of the performance of the optimized long pulse instruments with different reflectors and with a short pulse source, from which we concluded that there is an optimal moderator-reflector combination, and that a short pulse does not substantially improve the instrument performance. (author) 7 figs., 2 tabs., 9 refs

  10. Calibrate the aerial surveying instrument by the limited surface source and the single point source that replace the unlimited surface source

    International Nuclear Information System (INIS)

    Lu Cunheng

    1999-01-01

    Calculation formulae and survey results are derived from the superposition principle of gamma rays and the geometry of a hexagonal surface source for the case where a limited surface source replaces the unlimited surface source to calibrate the aerial survey instrument on the ground, and from the reciprocity principle of gamma rays for the case where a single point source replaces the unlimited surface source to calibrate the instrument in the air. Theoretical analysis also yields the receiving rates of the bottom and side surfaces of the detector crystal. A mathematical expression for the decay of gamma rays with height, following the Jinge function, is obtained. From this regularity, the air absorption coefficient for gamma rays and the detection efficiency coefficient of the crystal are calculated from the ground and airborne measurements of the bottom-surface count rate (derived from the total count rate of the bottom and side surfaces). Finally, the measured values show that modelling the variation of the total gamma-ray exposure rate received by the bottom and side surfaces with this regularity at a given altitude is feasible
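Assuming the height decay is modelled as a simple exponential, an approximation here, since the record refers to the Jinge function, two exposure-rate readings of the same source, one on the ground and one at altitude, determine the effective air attenuation coefficient:

```python
import math

# E(h) = E0 * exp(-mu * h): two readings pin down mu
def attenuation_coefficient(e_ground, e_air, height_m):
    return math.log(e_ground / e_air) / height_m

def exposure_at(e_ground, mu, h):
    return e_ground * math.exp(-mu * h)

mu = attenuation_coefficient(120.0, 45.0, 100.0)   # illustrative readings
print(round(mu, 5), round(exposure_at(120.0, mu, 50.0), 1))  # → 0.00981 73.5
```

With mu fixed, the exposure rate at any intermediate altitude can then be interpolated, which is the practical use the abstract describes.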

  11. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    apps/usr --with-libpq=/apps/postgres make rm -rf $RPM_BUILD_ROOT umask 0077 mkdir -p $RPM_BUILD_ROOT/usr/local/bin mkdir -p $RPM_BUILD_ROOT...from a source code repository. %pre %prep %setup %build ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres make

  12. Bar code instrumentation for nuclear safeguards

    International Nuclear Information System (INIS)

    Bieber, A.M. Jr.

    1984-01-01

    This paper presents a brief overview of the basic principles of bar codes and the equipment used to make and to read bar code labels, and a summary of some of the more important factors that need to be considered in integrating bar codes into an information system
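As a small example of the check-digit arithmetic that bar-code symbologies use, here is the UPC-A checksum (one common symbology; the paper surveys bar-code principles more generally):

```python
def upc_check_digit(digits11):
    """Check digit for an 11-digit UPC-A body: odd positions weighted by 3."""
    odd = sum(int(d) for d in digits11[0::2])    # positions 1, 3, ..., 11
    even = sum(int(d) for d in digits11[1::2])   # positions 2, 4, ..., 10
    return (10 - (3 * odd + even) % 10) % 10

def is_valid_upc(digits12):
    """A full 12-digit UPC-A is valid when its last digit matches."""
    return upc_check_digit(digits12[:11]) == int(digits12[11])

print(upc_check_digit("03600029145"))  # → 2
```

A reader rejects any scan whose computed check digit disagrees with the printed one, which is what makes single-digit misreads detectable.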

  13. GCS component development cycle

    Science.gov (United States)

    Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti

    2012-09-01

    The GTC1 is an optical-infrared 10-meter segmented mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13/07/2007 and it has been in the operation phase since then. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA8 and is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP9) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML10) any of the GCS subsystems, an initial description of the component's interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework, called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled and deployed to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve for new developers and the development error rate, allows systematic use of design patterns and software reuse, speeds up delivery of the software product, improves design consistency and design quality, and eliminates refactoring that would otherwise be required later in the code's life.
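The specification-to-skeleton step can be sketched as follows; the dict-based specification, the class and method names, and the DeviceComponent base are invented stand-ins for GCS's actual formats and framework:

```python
# Toy code generator: expand a component interface specification into a
# class skeleton extending a (hypothetical) framework base class.
SPEC = {
    "name": "M1Controller",
    "base": "DeviceComponent",      # assumed framework base class
    "attributes": ["position", "temperature"],
    "commands": ["park", "track"],
}

TEMPLATE = '''class {name}({base}):
{attrs}
{cmds}'''

def generate(spec):
    attrs = '\n'.join(
        f'    def get_{a}(self):\n        raise NotImplementedError'
        for a in spec["attributes"])
    cmds = '\n'.join(
        f'    def {c}(self):\n        raise NotImplementedError'
        for c in spec["commands"])
    return TEMPLATE.format(name=spec["name"], base=spec["base"],
                           attrs=attrs, cmds=cmds)

print(generate(SPEC))
```

The developer then fills in only the method bodies; the skeleton, base-class wiring, and naming conventions come from the generator, which is where the consistency gains described above originate.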

  14. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, results of a validation study, the recent model advancement status of the code, and results of check-out calculations under reactor conditions. (author)

  15. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design...

  16. SALSA-A new instrument for strain imaging in engineering materials and components

    International Nuclear Information System (INIS)

    Pirling, Thilo; Bruno, Giovanni; Withers, Philip J.

    2006-01-01

    Residual stresses are very hard to predict and if undetected can lead to premature failure or unexpected behaviour of engineering materials or components. This paper describes the operation of a new residual strain-mapping instrument, Strain Analyser for Large and Small scale engineering Applications (SALSA), recently commissioned at the public user facility, the Institut Laue-Langevin in Grenoble, France. A unique feature of this neutron diffraction instrument is the sample manipulator, which is the first of its kind, allowing precise scanning of large and heavy (<500 kg) samples along any trajectory involving translations, tilts and rotations. Other notable features of the instrument are also described

  17. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    International Nuclear Information System (INIS)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-01

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAC, semi-implicit ICE, SIMPLE, Row Scheme and so on. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code uses an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied in RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed the strong coupling between the pressure and void fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor

  18. Nuclear Reactor Component Code CUPID-I: Numerical Scheme and Preliminary Assessment Results

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Hyoung Kyu; Jeong, Jae Jun; Park, Ik Kyu; Kim, Jong Tae; Yoon, Han Young

    2007-12-15

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAC, semi-implicit ICE, SIMPLE, Row Scheme and so on. Among them, the ICE scheme for the three-field model is presented in this report. The CUPID code uses an unstructured mesh for the simulation of the complicated geometries of nuclear reactor components. The conventional ICE scheme that was applied in RELAP5 and COBRA-TF was therefore modified for application to the unstructured mesh. Preliminary calculations with the unstructured semi-implicit ICE scheme have been conducted to verify the numerical method from a qualitative point of view. The preliminary calculation results showed that the present numerical scheme is robust and efficient for the prediction of phase changes and flow transitions due to boiling and flashing. These calculation results also showed the strong coupling between the pressure and void fraction changes. Thus, it is believed that the semi-implicit ICE scheme can be utilized for transient two-phase flows in a component of a nuclear reactor.

  19. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package on the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented on the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA accident in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  20. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code, the evaluation starts by extracting the words that make up its text and continues by building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.
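
    The minimum-words procedure described above can be sketched as follows. The four-entry catalogue and the relative-score rule are invented stand-ins for the real ICD-10 list and the engines evaluated in the study; the sketch only illustrates growing word combinations until the desired code ranks first.

```python
from itertools import combinations

# Hypothetical mini-catalogue; real ICD-10 descriptions are much longer.
CATALOGUE = {
    "A10": "acute pain",
    "A20": "severe pain",
    "A30": "chronic condition",
    "A40": "acute severe condition",
}

def score(query_words, text):
    """Relative score: fraction of the entry's words matched by the query."""
    words = text.split()
    return sum(w in words for w in query_words) / len(words)

def minimum_words(code):
    """Fewest description words whose query ranks `code` first."""
    words = CATALOGUE[code].split()
    for k in range(1, len(words) + 1):
        for query in combinations(words, k):
            ranked = max(CATALOGUE, key=lambda c: score(query, CATALOGUE[c]))
            if ranked == code:
                return k
    return None
```

    In this toy catalogue, "acute severe condition" needs two words: each of its words alone scores higher against a shorter entry, which is exactly the effect the study measures on real engines.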

  1. Dissolution And Analysis Of Yellowcake Components For Fingerprinting UOC Sources

    International Nuclear Information System (INIS)

    Hexel, Cole R.; Bostick, Debra A.; Kennedy, Angel K.; Begovich, John M.; Carter, Joel A.

    2012-01-01

    There are a number of chemical and physical parameters that might be used to help elucidate the ore body from which uranium ore concentrate (UOC) was derived. It is the variation in the concentration and isotopic composition of these components that can provide information as to the identity of the ore body from which the UOC was mined and the type of subsequent processing that has been undertaken. Oak Ridge National Laboratory (ORNL), in collaboration with Lawrence Livermore and Los Alamos National Laboratories, is surveying ore characteristics of yellowcake samples of known geologic origin. The data sets are being incorporated into a national database to help in sourcing interdicted material, as well as to aid in safeguards and nonproliferation activities. Geologic age and attributes from chemical processing are site-specific. Isotopic abundances of lead, neodymium, and strontium provide insight into the geologic provenance of ore material. Variations in lead isotopes are due to the radioactive decay of uranium in the ore. Likewise, neodymium isotopic abundances are skewed due to the radiogenic decay of samarium. Rubidium decay similarly alters the strontium isotopic composition in ores. This paper will discuss the chemical processing of yellowcake performed at ORNL. Variations in lead, neodymium, and strontium isotopic abundances are being analyzed in UOC from two geologic sources. Chemical separation and instrumental protocols will be summarized. The data will be correlated with chemical signatures (such as elemental composition, uranium, carbon, and nitrogen isotopic content) to demonstrate the utility of principal component and cluster analyses to aid in the determination of UOC provenance.

  2. Supervision of electrical and instrumentation systems and components at nuclear facilities

    International Nuclear Information System (INIS)

    1986-01-01

    The general guidelines for the supervision of nuclear facilities carried out by the Finnish Centre for Radiation and Nuclear Safety (STUK) are set forth in the guide YVL 1.1. This guide shows in more detail how STUK supervises the electrical and instrumentation systems and components of nuclear facilities

  3. Ageing studies on materials, components and process instruments used in nuclear power plants

    International Nuclear Information System (INIS)

    Bora, J.S.

    1997-04-01

    This report is a compilation of test results of thermal and radiation ageing tests carried out in the laboratory over a period of 25 years on diverse engineering materials, components and instruments used in nuclear power plants. Test items covered are different types of electrical cables, elastomers, surface coatings, electrical and electronic components, and process instruments. Effects of thermal and radiation ageing on performance parameters are shown in tabular form. Apart from determining the characteristics, capabilities and limitations of the test items, ageing research has helped in pin-pointing sub-standard and critical parts, and necessary corrective action has been taken. This report is expected to be quite useful to manufacturers, users and researchers for reference and guidance. (author)

  4. The Atomic, Molecular and Optical Science instrument at the Linac Coherent Light Source

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Ken R. [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Department of Applied Physics, Stanford University, 348 Via Pueblo, Stanford, CA 94305 (United States); Bucher, Maximilian; Bozek, John D.; Carron, Sebastian; Castagna, Jean-Charles [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Coffee, Ryan [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Pulse Institute, Stanford University and SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Curiel, G. Ivan; Holmes, Michael; Krzywinski, Jacek; Messerschmidt, Marc; Minitti, Michael; Mitra, Ankush; Moeller, Stefan; Noonan, Peter; Osipov, Timur; Schorb, Sebastian; Swiggers, Michele; Wallace, Alexander; Yin, Jing [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Bostedt, Christoph, E-mail: bostedt@slac.stanford.edu [Linac Coherent Light Source, SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States); Pulse Institute, Stanford University and SLAC National Accelerator Laboratory, 2575 Sand Hill Road, Menlo Park, CA 94025 (United States)

    2015-04-17

    A description of the Atomic, Molecular and Optical Sciences (AMO) instrument at the Linac Coherent Light Source is presented. Recent scientific highlights illustrate the imaging, time-resolved spectroscopy and high-power density capabilities of the AMO instrument. The Atomic, Molecular and Optical Science (AMO) instrument at the Linac Coherent Light Source (LCLS) provides a tight soft X-ray focus into one of three experimental endstations. The flexible instrument design is optimized for studying a wide variety of phenomena requiring peak intensity. There is a suite of spectrometers and two photon area detectors available. An optional mirror-based split-and-delay unit can be used for X-ray pump–probe experiments.

  5. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them focus only on the lexical token sequence extracted from source code. In our view, a low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself. It considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages, with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient than the standard lexical-token approach.
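
    The core similarity measure can be illustrated with a plain Smith-Waterman local alignment over instruction token sequences; the paper's Adaptive Local Alignment adds adaptive scoring on top of this idea, which the sketch below does not reproduce. The CIL snippets are invented examples.

```python
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two token sequences."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

# Low-level (CIL-like) instruction tokens; the "plagiarised" version only
# inserts a no-op, which barely changes the local alignment score.
original = ["ldarg.0", "ldarg.1", "add", "ret"]
plagiarised = ["nop", "ldarg.0", "ldarg.1", "add", "ret"]
print(local_alignment_score(original, plagiarised))
```

    Because the alignment is local, inserted filler instructions and reordered surrounding code do not destroy the matching region, which is why alignment-based measures resist simple plagiarism attacks better than exact token matching.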

  6. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Because WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop, where the packet size can vary according to the link quality of that hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication in environments where the packet size changes at each hop, with smaller energy consumption. PMID:27409616
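
    One way to make authentication independent of the radio packet size is to bind it to fixed-size image blocks instead of packets, chained by hashes so that only the first hash needs pre-authentication (e.g., a base-station signature). This is an illustrative sketch of that general idea, not one of the paper's three schemes; the block size and structure are assumptions.

```python
import hashlib

BLOCK = 64  # bytes per image block, fixed regardless of radio packet size

def build_hash_chain(image):
    """Return (root_hash, chained_blocks); each block carries the next hash."""
    blocks = [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]
    chained, next_hash = [], b""
    for block in reversed(blocks):
        chained.insert(0, block + next_hash)
        next_hash = hashlib.sha256(chained[0]).digest()
    return next_hash, chained

def verify_stream(root_hash, chained):
    """Verify blocks in arrival order; only root_hash needs to be pre-trusted."""
    expected = root_hash
    for item in chained:
        if hashlib.sha256(item).digest() != expected:
            return False
        expected = item[BLOCK:]  # hash of the following block (empty at the end)
    return True
```

    Intermediate hops may repacketize the byte stream however link quality dictates; as long as receivers reassemble the fixed-size blocks, verification still succeeds, which is the property packet-bound tokens lack.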

  7. Beamline standard component designs for the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu, D.; Barraza, J.; Brite, C.; Chang, J.; Sanchez, T.; Tcheskidov, V.; Kuzay, T.M.

    1994-01-01

    The Advanced Photon Source (APS) has initiated a design standardization and modularization activity for the APS synchrotron radiation beamline components. These standard components are included in a components library, a sub-components library and an experimental station library. This paper briefly describes these standard components using both technical specifications and side-view drawings

  8. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
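
    The optimization machinery named above can be sketched with a minimal continuous Particle Swarm Optimization loop minimizing a toy "distortion" over transmission powers. The paper's actual problem is mixed-integer (discrete source and channel coding rates) and quality-driven, which this simplified sketch does not capture; the objective, bounds and PSO constants are all assumptions.

```python
import random

def pso(objective, dim, n_particles=20, iters=200, lo=0.0, hi=1.0):
    """Minimal PSO: inertia 0.7, cognitive/social weights 1.5, box-clamped."""
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

# Toy objective: distortion falls as the power vector approaches a target.
target = [0.3, 0.6, 0.2]
distortion = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
best = pso(distortion, dim=3)
```

    Handling the discrete rate variables, as in the paper, would additionally round or index those dimensions into the allowed rate sets at each evaluation.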

  9. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissue; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from ²³⁹Pu in alveolar lung tissue and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs
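
    Steps (3) and (4) rest on a standard identity: for a Poisson number of events with mean λ, the multi-event specific-energy distribution is f = IFFT(exp(λ·(FFT(f₁) − 1))). A numerical sketch, with an arbitrary toy single-event spectrum f₁ standing in for the physically computed one:

```python
import numpy as np

def compound_poisson(f1, lam):
    """Multi-event density from single-event density f1 (sums to 1) on a
    uniform z-grid, via the compound Poisson relation in Fourier space."""
    phi = np.fft.fft(f1)
    f = np.fft.ifft(np.exp(lam * (phi - 1.0))).real
    return np.clip(f, 0.0, None)  # remove tiny negative FFT ripple

z = np.arange(256)
f1 = np.exp(-((z - 20.0) ** 2) / 50.0)  # toy single-event spectrum
f1 /= f1.sum()
f = compound_poisson(f1, lam=3.0)       # mean of 3 events per site
```

    The result keeps unit total probability, carries a zero-event spike of weight e^(−λ) at z = 0, and has mean ≈ λ times the single-event mean, which are quick sanity checks on the transform-based construction.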

  10. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft. The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
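
    The counting step of such a study can be sketched with a regex scan for call sites of the known-unsafe functions. Real analyses parse the code properly (and exclude comments and strings); this is only an approximation, and the sample snippet is invented.

```python
import re

# A subset of the unsafe C library calls named in the study.
UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "memcpy", "strlen", "strcmp")
CALL = re.compile(r"\b(%s)\s*\(" % "|".join(UNSAFE))

def count_unsafe(source_text):
    """Count occurrences of each unsafe call in a C/C++ source string."""
    counts = {name: 0 for name in UNSAFE}
    for match in CALL.finditer(source_text):
        counts[match.group(1)] += 1
    return counts

sample = 'strcpy(dst, src); if (strlen(src) > 8) memcpy(dst, src, 8);'
print(count_unsafe(sample))
```

    Aggregating such counts per file and per release is enough to reproduce the kind of historical trend lines the study reports.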

  11. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical significance of the blind source separation model and of independent component analysis is not very clear, and the solution is not unique. To address these disadvantages, a new linear, instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. The assumption of this new model is that the source signals are statistically uncorrelated rather than independent, which differs from the traditional blind source separation model. A one-to-one relationship between the linear, instantaneous mixing matrix of the new model and the linear compounding matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of source signals. Based on this theoretical link, the problem of source signal separation and reconstruction is then changed into PCA of the observed signals. The theoretical derivation and numerical simulation results show that, despite Gaussian measurement noise, both the waveform and the amplitude information of an uncorrelated source signal can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform information can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal but not normalized; and an uncorrelated source signal cannot be separated and reconstructed by PCA when the mixing matrix is not column-orthogonal or not linear.
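
    The favourable case above (column-orthogonal, normalized mixing matrix) can be checked numerically: PCA on the observations recovers the uncorrelated sources up to sign and ordering. The sine/square source pair below is an assumption for illustration.

```python
import numpy as np

# Two uncorrelated, zero-mean sources with distinct variances.
t = np.linspace(0, 1, 2000, endpoint=False)
s = np.vstack([np.sin(2 * np.pi * 5 * t),            # variance 0.5
               np.sign(np.sin(2 * np.pi * 3 * t))])  # variance ~1.0

theta = 0.6  # rotation matrix: column-orthogonal and normalized
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
x = A @ s                                            # observed mixtures

# PCA of the observations: eigen-decomposition of the sample covariance.
x_c = x - x.mean(axis=1, keepdims=True)
cov = x_c @ x_c.T / x_c.shape[1]
_, vecs = np.linalg.eigh(cov)                        # ascending eigenvalues
recovered = vecs.T @ x_c                             # principal components
```

    Because the source variances differ, the eigenvectors are unique (up to sign) and coincide with the mixing matrix columns, so each principal component reproduces one source's waveform and amplitude; with equal variances the principal subspace would be recovered but not the individual sources.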

  12. New sources and instrumentation for neutron science

    International Nuclear Information System (INIS)

    Gil, Alina

    2011-01-01

    Neutron-scattering research has a lot to do with our everyday lives. Things like medicine, food, electronics, cars and airplanes have all been improved by neutron-scattering research. Neutron research also helps scientists improve materials used in a multitude of different products, such as high-temperature superconductors, powerful lightweight magnets, stronger and lighter plastic products, etc. Neutron scattering is one of the most effective ways to obtain information on both the structure and the dynamics of condensed matter. Most of the world's neutron sources were built decades ago, and although the uses and demand for neutrons have increased throughout the years, few new sources have been built. The newly constructed accelerator-based spallation neutron source will provide the most intense pulsed neutron beams in the world for scientific research and industrial development. This paper describes what neutrons are and which unique properties make them useful for science, how a spallation source is designed to produce neutron beams, and the experimental instruments that will use those beams. Finally, it describes how past neutron research has affected our everyday lives and what we might expect from the most exciting future applications.

  13. New sources and instrumentation for neutron science

    Energy Technology Data Exchange (ETDEWEB)

    Gil, Alina, E-mail: a.gil@ajd.czest.pl [Faculty of Mathematical and Natural Sciences, JD University, Al. Armii Krajowej 13/15, 42-200 Czestochowa (Poland)

    2011-04-01

    Neutron-scattering research has a lot to do with our everyday lives. Things like medicine, food, electronics, cars and airplanes have all been improved by neutron-scattering research. Neutron research also helps scientists improve materials used in a multitude of different products, such as high-temperature superconductors, powerful lightweight magnets, stronger and lighter plastic products, etc. Neutron scattering is one of the most effective ways to obtain information on both the structure and the dynamics of condensed matter. Most of the world's neutron sources were built decades ago, and although the uses and demand for neutrons have increased throughout the years, few new sources have been built. The newly constructed accelerator-based spallation neutron source will provide the most intense pulsed neutron beams in the world for scientific research and industrial development. This paper describes what neutrons are and which unique properties make them useful for science, how a spallation source is designed to produce neutron beams, and the experimental instruments that will use those beams. Finally, it describes how past neutron research has affected our everyday lives and what we might expect from the most exciting future applications.

  14. PCCE-A Predictive Code for Calorimetric Estimates in actively cooled components affected by pulsed power loads

    International Nuclear Information System (INIS)

    Agostinetti, P.; Palma, M. Dalla; Fantini, F.; Fellin, F.; Pasqualotto, R.

    2011-01-01

    The analytical interpretative models for calorimetric measurements currently available in the literature can consider closed systems in steady-state and transient conditions, or open systems but only in steady-state conditions. The PCCE code (Predictive Code for Calorimetric Estimations), presented here, introduces some novelties. It can simulate, with an analytical approach, both the heated component and the cooling circuit, evaluating the heat fluxes due to conductive and convective processes in both steady-state and transient conditions. The main goal of this code is to model heating and cooling processes in actively cooled components of fusion experiments affected by high pulsed power loads, which are not easily analyzed with purely numerical approaches (like the Finite Element Method or Computational Fluid Dynamics). A dedicated mathematical formulation, based on concentrated parameters, has been developed and is described here in detail. After a comparison and benchmark with the ANSYS commercial code, the PCCE code is applied to predict the calorimetric parameters in simple scenarios of the SPIDER experiment.
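
    A concentrated-parameter model of the kind PCCE treats analytically can be sketched in its simplest form: one heated node with heat capacity C, cooled by a convective conductance hA to coolant at T_cool, under a pulsed power load. The numbers below are illustrative, not SPIDER or PCCE parameters, and the explicit Euler integration stands in for PCCE's analytical solution.

```python
C = 4.0e3      # J/K   heat capacity of the component node
hA = 2.0e2     # W/K   convective conductance to the coolant
T_COOL = 30.0  # degC  coolant temperature
DT = 0.01      # s     integration step

def simulate(power_pulse_w, pulse_s, total_s):
    """Explicit Euler integration of C*dT/dt = P(t) - hA*(T - T_cool)."""
    temps, temp = [], T_COOL
    for k in range(int(total_s / DT)):
        p = power_pulse_w if k * DT < pulse_s else 0.0
        temp += DT * (p - hA * (temp - T_COOL)) / C
        temps.append(temp)
    return temps

# 10 kW pulse for 5 s, then 55 s of cooldown (time constant C/hA = 20 s).
history = simulate(power_pulse_w=1.0e4, pulse_s=5.0, total_s=60.0)
```

    With these numbers the node peaks near 41 °C at the end of the pulse (the 5 s pulse is short against the 20 s time constant, so it never approaches the 80 °C steady state) and relaxes back toward the coolant temperature afterwards.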

  15. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of the workers and individuals of the public. The main objective of this work is to study the source term of radiation of the GE-PET trace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during ¹⁸F production. The estimate of the source term and the corresponding radiation field was performed for the bombardment of an H₂¹⁸O target with protons of 75 μA current and 16.5 MeV energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare Company). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross section data and the use of the physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  16. Fast Computation of Pulse Height Spectra Using SGRD Code

    Directory of Open Access Journals (Sweden)

    Humbert Philippe

    2017-01-01

    Full Text Available SGRD (Spectroscopy, Gamma rays, Rapid, Deterministic) is a code used for fast calculation of the gamma-ray spectrum produced by a spherical shielded source and measured by a detector. The photon source lines originate from the radioactive decay of unstable isotopes. The emission rate and spectrum of these primary sources are calculated using the DARWIN code. The leakage spectrum is separated into two parts: the uncollided component is transported by ray-tracing, and the scattered component is calculated using a multigroup discrete ordinates method. The pulse height spectrum is then simulated by folding the leakage spectrum with the detector response functions, which are pre-calculated using the MCNP5 code for each considered detector type. An application to the simulation of the gamma spectrum produced by a natural uranium ball coated with plexiglass and measured using a NaI detector is presented.
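
    The folding step is, numerically, a matrix-vector product: column g of the response matrix is the detector's pulse-height response to unit flux in energy group g, and the pulse height spectrum is that matrix applied to the leakage spectrum. The Gaussian-photopeak response below is a toy stand-in for the MCNP5-precomputed NaI responses.

```python
import numpy as np

n_groups = 64
energies = np.linspace(0.1, 3.0, n_groups)   # MeV group centres (toy grid)

def response_matrix(resolution=0.08):
    """Toy response: each group smeared into a Gaussian photopeak only
    (no Compton continuum); each column sums to 1 to conserve counts."""
    r = np.zeros((n_groups, n_groups))
    for g, e in enumerate(energies):
        sigma = resolution * e
        peak = np.exp(-0.5 * ((energies - e) / sigma) ** 2)
        r[:, g] = peak / peak.sum()
    return r

leakage = np.zeros(n_groups)
leakage[40] = 1.0e3                          # a single uncollided line

pulse_height = response_matrix() @ leakage   # folded pulse height spectrum
```

    A realistic response would add the Compton continuum and escape peaks per column, but the folding operation itself is unchanged.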

  17. Calibrating instrument of plane sources of alpha and beta

    International Nuclear Information System (INIS)

    Liu Hongquan

    1988-12-01

    The instrument is a standard instrument for measuring the emissivity of plane alpha and beta sources under 2π geometry in radionuclide metrology. It is composed of a box-type detector and a truck-type NIM (made in China), forming an integral unit. Its detector comprises a multiwire proportional counter with an electrostatic screen at zero potential and a unique anticoincidence multiwire proportional counter in a lead chamber. The characteristics of the instrument are as follows: low background (α ≤ 0.006 counts/min/cm², β ≤ 0.03 counts/min/cm²), low working voltage, low noise, high detection efficiency (>99%), large sensitive area (150 x 100 mm), short dead time, very few accidental anticoincidences, and good high-voltage plateau and discrimination properties. It fulfils the requirements of a standard, with a wide range (50 counts/min to 10⁶ counts/min) and high precision (±5 to 6% for 50 to 220 counts/min, ≤ ±0.6% for 200 counts/min to 10⁶ counts/min); in addition, it solves the instability problem that usually occurs in similar equipment when measuring α sources with low surface conductivity

  18. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  19. Instrument for analysis of electric motors based on slip-poles component

    Science.gov (United States)

    Haynes, Howard D.; Ayers, Curtis W.; Casada, Donald A.

    1996-01-01

    A new instrument for monitoring the condition and speed of an operating electric motor from a remote location. The slip-poles component is derived from a motor current signal. The magnitude of the slip-poles component provides the basis for a motor condition monitor, while the frequency of the slip-poles component provides the basis for a motor speed monitor. The result is a simple-to-understand motor health monitor in an easy-to-use package. Straightforward indications of motor speed, motor running current, motor condition (e.g., rotor bar condition) and synthesized motor sound (audible indication of motor condition) are provided. With the device, a relatively untrained worker can diagnose electric motors in the field without requiring the presence of a trained engineer or technician.
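
    The abstract does not give the signal-processing details; the following is a generic, hypothetical sketch of extracting a slip-related spectral component from a simulated motor-current signal with an FFT (all frequencies and amplitudes are invented for illustration):

    ```python
    import numpy as np

    # Simulated motor current: a 60 Hz supply tone plus a small 55 Hz
    # sideband standing in for a slip-related component.
    fs = 1000.0                      # sampling rate, Hz
    t = np.arange(0, 2.0, 1.0 / fs)  # 2 s of data
    current = np.sin(2 * np.pi * 60 * t) + 0.02 * np.sin(2 * np.pi * 55 * t)

    spectrum = np.abs(np.fft.rfft(current)) / len(t)
    freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

    # Search a window below the supply frequency for the sideband peak:
    # its frequency would track motor speed, its magnitude motor condition.
    window = (freqs > 50) & (freqs < 58)
    peak_idx = np.argmax(spectrum[window])
    peak_freq = freqs[window][peak_idx]
    ```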

  20. On safety classification of instrumentation and control systems and their components

    International Nuclear Information System (INIS)

    Yastrebenetskij, M.A.; Rozen, Yu.V.

    2004-01-01

    Safety classification of instrumentation and control systems (I and C) and their components (hardware, software, software-hardware complexes) is described: - evaluation of classification principles and criteria in Ukrainian standards and rules; comparison between Ukrainian and international principles and criteria; possibility and ways of coordination of Ukrainian and international standards related to (I and C) safety classification

  1. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
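
    The straight-line Gaussian plume model underlying ANEMOS can be sketched as follows. This is not ANEMOS code: the linear dispersion coefficients below stand in for the stability-class curves a real assessment code would use, and deposition, plume rise, and daughter in-growth are omitted.

    ```python
    import math

    def plume_concentration(q, u, x, y, z, h, a=0.08, b=0.06):
        """Concentration from a straight-line Gaussian plume.
        q: release rate, u: wind speed, (x, y, z): receptor position
        (x downwind, y crosswind, z height), h: effective release height.
        sigma_y and sigma_z grow linearly with x here -- a crude stand-in
        for empirical stability-class dispersion curves."""
        sigma_y = a * x
        sigma_z = b * x
        lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
        vertical = (math.exp(-(z - h) ** 2 / (2 * sigma_z ** 2)) +
                    math.exp(-(z + h) ** 2 / (2 * sigma_z ** 2)))  # ground reflection
        return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

    # Concentration falls off with crosswind distance y:
    c0 = plume_concentration(q=1.0, u=5.0, x=1000.0, y=0.0, z=0.0, h=50.0)
    c1 = plume_concentration(q=1.0, u=5.0, x=1000.0, y=200.0, z=0.0, h=50.0)
    ```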

  3. 7-GeV Advanced Photon Source Instrumentation Initiative conceptual design report

    International Nuclear Information System (INIS)

    1992-12-01

    In this APS Instrumentation Initiative, 2.5-m-long and 5-m-long insertion-device x-ray sources will be built on 9 straight sections of the APS storage ring, and an additional 9 bending-magnet sources will also be put in use. The front ends for these 18 x-ray sources will be built to contain and safeguard access to these bright x-ray beams. In addition, funds will be provided to build state-of-the-art insertion-device beamlines to meet scientific and technological research demands well into the next century. This new initiative will also include four user laboratory modules and a special laboratory designed to meet the x-ray imaging research needs of the users. The Conceptual Design Report (CDR) for the APS Instrumentation Initiative describes the scope of all the above technical and conventional construction and provides a detailed cost and schedule for these activities. According to these plans, this new initiative begins in FY 1994 and ends in FY 1998. The document also describes the preconstruction R&D plans for the Instrumentation Initiative activities and provides the cost estimates for the required R&D

  4. Design type testing for digital instrumentation and control systems

    International Nuclear Information System (INIS)

    Bastl, W.; Mohns, G.

    1997-01-01

    The design type qualification of digital safety instrumentation and control is outlined. Experience shows that the concepts discussed, derived from codes, guidelines and standards, achieve useful results. It has likewise become clear that the systematics of design type qualification of hardware components is also applicable to software components. Design type qualification of the software, performed here for the first time, went unexpectedly smoothly. The hardware design type qualification showed that the hardware, as the substrate of functionality and reliability, demands full attention compared to conventional systems. A further insight is that design qualification of digital instrumentation and control systems must include plant-independent system tests: digital instrumentation and control systems simply work very differently from conventional control systems, so this testing modality is indispensable. (Orig./CB) [de]

  5. Validation of the Open Source Code_Aster Software Used in the Modal Analysis of the Fluid-filled Cylindrical Shell

    Directory of Open Access Journals (Sweden)

    B D. Kashfutdinov

    2017-01-01

    The paper validates the open source Code_Aster software in the field of aerospace systems. Using free software to analyze aerospace system components allows enterprises to diminish dependency on commercial software, reduce the cost of software usage, and adapt the open source software to their specific problems.

  6. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it does not satisfy basic conservation laws, rather than in showing that the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  7. Pressure vessel design codes: A review of their applicability to HTGR components at temperatures above 800 deg C

    International Nuclear Information System (INIS)

    Hughes, P.T.; Over, H.H.; Bieniussa, K.

    1984-01-01

    The governments of the USA and the Federal Republic of Germany have approved cooperation between the two countries in an endeavour to establish a structural design code for gas reactor components intended to operate at temperatures exceeding 800 deg C. The basis of existing codes and their applicability to gas reactor component design are reviewed in this paper. This review has raised a number of important questions as to the direct applicability of the present codes. The status of US and FRG cooperative efforts to obtain answers to these questions is presented

  8. Application of radioactive sources in analytical instruments for planetary exploration

    International Nuclear Information System (INIS)

    Economou, T.E.

    2008-01-01

    Full text: In the past 50 years or so, many types of radioactive sources have been used in space exploration. 238Pu is often used in space missions in Radioactive Heater Units (RHU) and Radioisotope Thermoelectric Generators (RTG) for heat and power generation, respectively. In the 1960s, 242Cm alpha sources were used for the first time in space applications, on three Surveyor spacecraft, to obtain the chemical composition of the lunar surface with an instrument based on Rutherford backscattering of the alpha particles from nuclei in the analyzed sample. 242Cm is an emitter of 6.1 MeV alpha particles. Its half-life, 163 days, is short enough to allow sources to be prepared with the necessary high intensity per unit area (up to 470 mCi and FWHM of about 1.5% in the lunar instruments), which results in a narrow energy distribution, yet long enough that the sources have adequate lifetimes for short-duration missions. 242Cm is readily prepared in curie quantities by irradiation of 241Am with neutrons in nuclear reactors, followed by chemical separation of the curium from the americium and fission products. For long-duration missions, for example missions to Mars, comets, and asteroids, the isotope 244Cm (T1/2 = 18.1 y, Eα = 5.8 MeV) is a better source because of its much longer half-life. Both of these isotopes are also excellent x-ray excitation sources and have been used for that purpose on several planetary missions. For the light elements the excitation is caused mainly by the alpha particles, while for the heavier elements (> Ca) the excitation is mainly due to the x-rays from the Pu L-lines (Ex = 14-18 keV). 244Cm has been used in several variations of the Alpha Proton X-ray Spectrometer (APXS): PHOBOS 1 and 2, Pathfinder, the Russian Mars-96 mission, the Mars Exploration Rovers (MER) and Rosetta. Other sources used in x-ray fluorescence instruments in space are 55Fe and 109Cd (Viking 1 and 2, Beagle 2), and 57Co is used in Moessbauer

  9. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
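
    The Watt spectrum form used for spontaneous fission in codes of this kind can be illustrated with a short numeric sketch. The parameters a and b below are illustrative placeholders, not the evaluated values a code like SOURCES-3A ships with; the mean energy of the Watt form is (3/2)a + a²b/4, which the numeric integration recovers.

    ```python
    import math

    def watt(e, a=1.0, b=2.0):
        """Unnormalized Watt fission spectrum chi(E) = exp(-E/a)*sinh(sqrt(b*E)).
        a (MeV) and b (1/MeV) are illustrative, not evaluated, parameters."""
        return math.exp(-e / a) * math.sinh(math.sqrt(b * e))

    # Normalize on a simple energy grid (MeV) and compute the mean energy.
    de = 0.01
    grid = [i * de for i in range(1, 2000)]  # 0.01 .. 19.99 MeV
    norm = sum(watt(e) * de for e in grid)
    mean_e = sum(e * watt(e) * de for e in grid) / norm
    # For a = 1, b = 2 the analytic mean is 3a/2 + a^2 b/4 = 2.0 MeV.
    ```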

  10. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme that exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.

  11. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model of the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the pointwise four-energy-group neutron flux distributions in the MNSR versus the radial, angular and axial directions. Good agreement is found between the measured and calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  12. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the "OSS business model". When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Conversely, in this paper we investigate empirically what the companies' incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  13. Development of the Sixty Watt Heat-Source hardware components

    International Nuclear Information System (INIS)

    McNeil, D.C.; Wyder, W.C.

    1995-01-01

    The Sixty Watt Heat Source is a nonvented heat source designed to provide 60 thermal watts of power. The unit incorporates a plutonium-238 fuel pellet encapsulated in a hot isostatically pressed General Purpose Heat Source (GPHS) iridium clad vent set. A molybdenum liner sleeve and support components isolate the fueled iridium clad from the T-111 strength member. This strength member serves as the pressure vessel and fulfills the impact and hydrostatic strength requirements. The shell is manufactured from Hastelloy S, which prevents the internal components from being oxidized. Conventional drawing operations were used to simplify processing and utilize existing equipment. The deep drawing requirements for the molybdenum, T-111, and Hastelloy S were developed from past heat source hardware fabrication experience. This resulted in multiple-step drawing processes with intermediate heat treatments between forming steps. The molybdenum processing included warm forming operations. This paper describes the fabrication of these components and the multiple draw tooling developed to produce hardware to the desired specifications. copyright 1995 American Institute of Physics

  14. Open-Source Low-Cost Wireless Potentiometric Instrument for pH Determination Experiments

    Science.gov (United States)

    Jin, Hao; Qin, Yiheng; Pan, Si; Alam, Arif U.; Dong, Shurong; Ghosh, Raja; Deen, M. Jamal

    2018-01-01

    pH determination is an essential experiment in many chemistry laboratories. It requires a potentiometric instrument with extremely low input bias current to accurately measure the voltage between a pH sensing electrode and a reference electrode. In this technology report, we propose an open-source potentiometric instrument for pH determination…

  15. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts etc., but also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics, structure and syntax metrics, totally 19 dimensions. These metrics are then input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, with acceptable overhead.
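
    The paper's 19-dimensional feature set is not reproduced in the abstract; the following sketch computes a few stand-in lexical and layout metrics of the same general kind from a Java source string. The metric names and choices are illustrative, not the paper's.

    ```python
    def style_metrics(source: str) -> dict:
        """A handful of simple lexical/layout metrics of the kind an
        authorship-attribution classifier might consume."""
        lines = source.splitlines()
        nonblank = [ln for ln in lines if ln.strip()]
        tokens = source.split()
        return {
            "avg_line_length": sum(len(ln) for ln in nonblank) / max(len(nonblank), 1),
            "blank_line_ratio": 1 - len(nonblank) / max(len(lines), 1),
            "tab_indent_ratio": sum(ln.startswith("\t") for ln in nonblank) / max(len(nonblank), 1),
            "avg_token_length": sum(len(t) for t in tokens) / max(len(tokens), 1),
            "brace_on_own_line": sum(ln.strip() == "{" for ln in lines) / max(len(nonblank), 1),
        }

    java_snippet = "public class A {\n    void f() {\n        int x = 1;\n    }\n}\n"
    m = style_metrics(java_snippet)
    ```

    In the paper's pipeline, vectors like this (with structure and syntax metrics added) would be fed to the PSO-trained BP network for classification.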

  16. Instrumentation

    International Nuclear Information System (INIS)

    Umminger, K.

    2008-01-01

    A proper measurement of the relevant single and two-phase flow parameters is the basis for the understanding of many complex thermal-hydraulic processes. Reliable instrumentation is therefore necessary for the interaction between analysis and experiment, especially in the field of nuclear safety research, where postulated accident scenarios have to be simulated in experimental facilities and predicted by complex computer code systems. The so-called conventional instrumentation for the measurement of, e.g., pressures, temperatures, pressure differences and single-phase flow velocities is still a solid basis for the investigation and interpretation of many phenomena and especially for the understanding of the overall system behavior. Measurement data from such instrumentation still serves in many cases as a database for thermal-hydraulic system codes. However, some special instrumentation, such as online concentration measurement for boric acid in the water phase or for non-condensables in a steam atmosphere, as well as flow visualization techniques, was further developed and successfully applied during recent years. Concerning the modeling needs for advanced thermal-hydraulic codes, significant advances have been accomplished in the last few years in local instrumentation technology for two-phase flow through the application of new sensor techniques, optical or beam methods and electronic technology. This paper gives insight into the current state of instrumentation technology for safety-related thermal-hydraulic experiments. Advantages and limitations of some measurement processes and systems are indicated, as well as trends and possibilities for further development. Aspects of instrumentation in operating reactors are also mentioned.

  17. The Los Alamos accelerator code group

    International Nuclear Information System (INIS)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-01-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes and the dissemination of information on the INTERNET

  18. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitable… …between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included…

  19. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic Health Physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas and stimulate readers to study radiological protection issues in greater depth. (author)

  20. RECENT BEAM MEASUREMENTS AND NEW INSTRUMENTATION AT THE ADVANCED LIGHT SOURCE

    International Nuclear Information System (INIS)

    Sannibale, Fernando; Baptiste, Kenneth; Barry, Walter; Chin, Michael; Filippetto, Daniele; Jaegerhofer, Lukas; Julian, James; Kwiatkowski, Slawomir; Low, Raymond; Plate, David; Portmann, Gregory; Robin, David; Scarvie, Tomas; Stupakov, Gennady; Weber, Jonah; Zolotorev, Max

    2008-01-01

    The Advanced Light Source (ALS) in Berkeley was the first soft x-ray third-generation light source ever built, and since 1993 it has been in continuous and successful operation serving a large community of users in the VUV and soft x-ray range. Over these years the storage ring underwent several important upgrades that kept this veteran facility's performance at the forefront. The ALS beam diagnostics and instrumentation have followed a similar path of innovation and upgrade, and nowadays include most of the modern, latest-generation devices and technologies that are commercially available and used in the recently constructed third-generation light sources. In this paper we will not focus on such widely known systems; instead we concentrate on describing some measurement techniques, instrumentation and diagnostic systems specifically developed at the ALS and used during the last few years

  1. Instrument control software development process for the multi-star AO system ARGOS

    Science.gov (United States)

    Kulas, M.; Barl, L.; Borelli, J. L.; Gässler, W.; Rabien, S.

    2012-09-01

    The ARGOS project (Advanced Rayleigh guided Ground layer adaptive Optics System) will upgrade the Large Binocular Telescope (LBT) with an AO System consisting of six Rayleigh laser guide stars. This adaptive optics system integrates several control loops and many different components like lasers, calibration swing arms and slope computers that are dispersed throughout the telescope. The purpose of the instrument control software (ICS) is running this AO system and providing convenient client interfaces to the instruments and the control loops. The challenges for the ARGOS ICS are the development of a distributed and safety-critical software system with no defects in a short time, the creation of huge and complex software programs with a maintainable code base, the delivery of software components with the desired functionality and the support of geographically distributed project partners. To tackle these difficult tasks, the ARGOS software engineers reuse existing software like the novel middleware from LINC-NIRVANA, an instrument for the LBT, provide many tests at different functional levels like unit tests and regression tests, agree about code and architecture style and deliver software incrementally while closely collaborating with the project partners. Many ARGOS ICS components are already successfully in use in the laboratories for testing ARGOS control loops.

  2. A dual-energy medical instrument for measurement of x-ray source voltage and dose rate

    Science.gov (United States)

    Ryzhikov, V. D.; Naydenov, S. V.; Volkov, V. G.; Opolonin, O. D.; Makhota, S.; Pochet, T.; Smith, C. F.

    2016-03-01

    An original dual-energy detector and medical instrument have been developed to measure the output voltages and dose rates of X-ray sources. Theoretical and experimental studies were carried out to characterize the parameters of a new scintillator-photodiode sandwich-detector based on specially-prepared zinc selenide crystals, in which the low-energy detector (LED) works both as the detector of the low-energy radiation and as an absorption filter allowing the high-energy fraction of the radiation to pass through to the high-energy detector (HED). The use of the LED as a low-energy filter in combination with a separate HED opens broad possibilities for such sandwich structures. In particular, it becomes possible to analyze and process the sum, difference and ratio of the signals coming from these detectors, ensuring a broad (up to 10^6) measurement range of X-ray intensity from the source and a leveling of the energy dependence. We have chosen an optimum design of the detector and the geometry of the component LED and HED parts that allow energy-dependence leveling to within specified limits. The deviation in energy dependence of the detector does not exceed about 5% in the energy range from 30 to 120 keV. The developed detector and instrument allow contactless measurement of the anode voltage of an X-ray emitter from 40 to 140 kV with an error no greater than 3%. The dose rate measurement range is from 1 to 200 R/min. The instrument has passed clinical testing and was recommended for use in medical institutions for X-ray diagnostics.
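
    The abstract indicates that the anode voltage is inferred from combinations of the LED and HED signals. One plausible sketch, with entirely invented calibration numbers, interpolates voltage from the HED/LED signal ratio; a real instrument would use measured calibration data.

    ```python
    # Hypothetical calibration table: HED/LED signal ratio vs anode voltage (kV).
    # The harder the spectrum (higher kV), the larger the fraction of radiation
    # reaching the HED, so the ratio grows monotonically with voltage.
    cal_kv =    [40.0, 60.0, 80.0, 100.0, 120.0, 140.0]
    cal_ratio = [0.10, 0.25, 0.45, 0.70, 1.00, 1.35]

    def estimate_kv(ratio):
        """Piecewise-linear interpolation of tube voltage from HED/LED ratio,
        clamped to the calibrated range."""
        if ratio <= cal_ratio[0]:
            return cal_kv[0]
        for (k0, r0), (k1, r1) in zip(zip(cal_kv, cal_ratio),
                                      zip(cal_kv[1:], cal_ratio[1:])):
            if ratio <= r1:
                return k0 + (k1 - k0) * (ratio - r0) / (r1 - r0)
        return cal_kv[-1]

    kv = estimate_kv(0.575)  # midway between 0.45 and 0.70 -> 90 kV
    ```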

  3. Comparison of US and European codes and regulations for the construction of LWR pressure components

    International Nuclear Information System (INIS)

    Maurer, H.A.

    1983-01-01

    The study was intended as a contribution to a stepwise harmonization of European regulations. The same safety-related principles are applied in Europe and in the US to assure the quality of all primary system components; divergences exist primarily in the organisation of quality assurance. US and European codes and regulations admit only approved materials for the fabrication of pressure components. The German and French requirements impose, however, more restrictive limits on trace elements which, during operation, may contribute to the embrittlement of the material. A further difference results from the considerably larger scope of materials examinations in European countries. A comparative list of the numbers of test specimens required under the different codes was prepared. Differences were also found for the hydrostatic test: in European countries the test pressure for primary system components varies from 1.1 to 2.0 times the design pressure, while in the US the test pressure, 1.25 times the design pressure, is based on the design pressure of the entire system. (orig./HP)

  4. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  5. Evaluation Framework for Search Instruments

    International Nuclear Information System (INIS)

    Warren, Glen A.; Smith, Leon E.; Cooper, Matt W.; Kaye, William R.

    2005-01-01

    A framework for quantitatively evaluating current and proposed gamma-ray search instrument designs has been developed. The framework is designed to generate a large library of "virtual neighborhoods" that can be used to test and evaluate nearly any gamma-ray sensor type. Calculating nuisance-source emissions and combining various sources to create a large number of random virtual scenes places a significant computational burden on the development of the framework. To reduce this burden, a number of radiation transport simplifications have been made which maintain the essential physics ingredients for the quantitative assessment of search instruments while significantly reducing computational times. The various components of the framework, from the simulation and benchmarking of nuisance-source emissions to the computational engine for generating the gigabytes of simulated search scenes, are discussed

  6. MyMolDB: a micromolecular database solution with open source and free components.

    Science.gov (United States)

    Xia, Bing; Tai, Zheng-Fu; Gu, Yu-Cheng; Li, Bang-Jing; Ding, Li-Sheng; Zhou, Yan

    2011-10-01

    Managing chemical structures is one of the important daily tasks in small laboratories. Few solutions are available on the internet, and most of them are closed-source applications; the open-source applications typically have limited capability and only basic cheminformatics functionality. In this article, we describe an open-source solution for managing chemicals in research groups, built from open-source and free components. It has a user-friendly interface with functions for chemical handling and intensive searching. MyMolDB is a micromolecular database solution that supports exact, substructure, similarity, and combined searching. The solution is mainly implemented in the scripting language Python with a web-based interface for compound management and searching. Almost all the searches are in essence done with pure SQL on the database, exploiting the high performance of the database engine. Impressive searching speed has thus been achieved on large data sets, because no external CPU-consuming languages are involved in the key procedure of the search. MyMolDB is open-source software and can be modified and/or redistributed under the GNU General Public License version 3 published by the Free Software Foundation (Free Software Foundation Inc. The GNU General Public License, Version 3, 2007. Available at: http://www.gnu.org/licenses/gpl.html). The software itself can be found at http://code.google.com/p/mymoldb/. Copyright © 2011 Wiley Periodicals, Inc.
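
    As an illustration of how a similarity search can run "with pure SQL on the database", the hedged sketch below stores an integer bit-vector fingerprint per molecule in SQLite and registers a Tanimoto function with the engine. The molecule names and fingerprints are invented, and MyMolDB's actual schema is surely different; the point is only that the ranking runs inside the SQL engine.

```python
# Sketch: fingerprint similarity search pushed into SQLite via a
# user-defined function (names and fingerprints are illustrative).
import sqlite3

def tanimoto(fp1, fp2):
    """Tanimoto coefficient of two bit-vector fingerprints stored as ints."""
    a = bin(fp1).count("1")
    b = bin(fp2).count("1")
    c = bin(fp1 & fp2).count("1")
    return c / (a + b - c) if (a + b - c) else 0.0

db = sqlite3.connect(":memory:")
db.create_function("tanimoto", 2, tanimoto)
db.execute("CREATE TABLE mols (name TEXT, fp INTEGER)")
db.executemany("INSERT INTO mols VALUES (?, ?)",
               [("benzene", 0b101100), ("toluene", 0b101110),
                ("hexane", 0b010001)])

query_fp = 0b101100
rows = db.execute(
    "SELECT name, tanimoto(fp, ?) AS sim FROM mols "
    "WHERE tanimoto(fp, ?) >= 0.6 ORDER BY sim DESC",
    (query_fp, query_fp)).fetchall()
print(rows)  # benzene (1.0) and toluene (0.75) pass the 0.6 cutoff
```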

  7. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  8. The Los Alamos accelerator code group

    Energy Technology Data Exchange (ETDEWEB)

    Krawczyk, F.L.; Billen, J.H.; Ryne, R.D.; Takeda, Harunori; Young, L.M.

    1995-05-01

    The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below the authors describe the LAACG's activities in high performance computing, maintenance and enhancement of POISSON/SUPERFISH and related codes, and the dissemination of information on the INTERNET.

  9. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  10. EPR-technical codes - a common basis for the EPR

    International Nuclear Information System (INIS)

    Zaiss, W.; Appell, B.

    1997-01-01

    The design and construction of nuclear power plants implies a full set of codes and standards to define the construction rules of components and equipment. Such rules exist and are currently implemented in France and Germany (mainly the RCCs and the KTA safety standards, respectively). In the frame of the EPR project, the common objective requires a substantial industrial work programme between engineers from both countries to elaborate a common set of codes and regulations. These new industrial rules are called the ETCs (EPR Technical Codes). In the hierarchy, the ETCs are on the level of the basic safety rules (RFS) and design and construction rules (RCC) in France, and belong with the RSK guidelines and KTA safety standards in Germany. A set of six ETCs will be elaborated to cover: safety and process, mechanical components, electrical equipment, instrumentation and control, civil works, and fire protection. (orig.)

  11. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free license. The introduction briefly characterizes the tools available on the market for aiding the design of electronic systems in VHDL, with particular attention to plug-ins for the Eclipse environment and the Emacs program. The detailed properties of the written plug-in are then presented, such as the programming extension concept and the results of the activities of the formatter, re-factorizer, code hider, and other new additions to the VEditor program.

  12. Fire-accident analysis code (FIRAC) verification

    International Nuclear Information System (INIS)

    Nichols, B.D.; Gregory, W.S.; Fenton, D.L.; Smith, P.R.

    1986-01-01

    The FIRAC computer code predicts fire-induced transients in nuclear fuel cycle facility ventilation systems. FIRAC calculates simultaneously the gas-dynamic, material transport, and heat transport transients that occur in any arbitrarily connected network system subjected to a fire. The network system may include ventilation components such as filters, dampers, ducts, and blowers. These components are connected to rooms and corridors to complete the network for moving air through the facility. An experimental ventilation system has been constructed to verify FIRAC and other accident analysis codes. The design emphasizes network system characteristics and includes multiple chambers, ducts, blowers, dampers, and filters. A larger industrial heater and a commercial dust feeder are used to inject thermal energy and aerosol mass. The facility is instrumented to measure volumetric flow rate, temperature, pressure, and aerosol concentration throughout the system. Aerosol release rates and mass accumulation on filters also are measured. We have performed a series of experiments in which a known rate of thermal energy is injected into the system. We then simulated this experiment with the FIRAC code. This paper compares and discusses the gas-dynamic and heat transport data obtained from the ventilation system experiments with those predicted by the FIRAC code. The numerically predicted data generally are within 10% of the experimental data

  13. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    Science.gov (United States)

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). The resultant spatiotemporal information about the current dynamics can then be decomposed into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of the reconstructed CSD. The components obtained through decomposition of the CSD are better defined and allow easier physiological interpretation than the results of a similar analysis of the corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus, and show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.

  14. Remote and Virtual Instrumentation Platform for Distance Learning

    Directory of Open Access Journals (Sweden)

    Tom Eppes

    2010-08-01

    Full Text Available This article presents distance learning using the National Instruments ELVIS II and shows how Multisim can be combined with ELVIS II for distance learning. National Instruments' ELVIS II is a new version that can easily be used for e-learning. It features 12 of the instruments commonly used in engineering and science laboratories, including an oscilloscope, a function generator, a variable power supply, and an isolated digital multimeter, in a low-cost and easy-to-use platform, and offers complete integration with Multisim software for SPICE simulation, which simplifies the teaching of circuit design. As NI ELVIS II is based on LabVIEW, designers can easily customize the 12 instruments or create their own using the provided instrument source code.

  15. Code of practice for in-core instrumentation for neutron fluence rate (flux) measurements in power reactors

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    This standard applies to in-core (on-line) neutron detectors and instrumentation which is designed for safety, information or control purposes. It also applies to components in so far as these components are contained within the primary envelope of the reactor. The detector types usually used are dc ionization chambers and self-powered neutron detectors

  16. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel, for computer science education. Different from other plagiarism detection methods, WASTK takes some aspects other than the similarity between programs into account. WASTK firstly transfers the source code of a program to an abstract syntax tree and then gets the similarity by calculating the tree kernel of two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency in the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods like Sim and JPlag.
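
    The TF-IDF weighting idea can be sketched in a few lines. The toy below is a simplification, not WASTK itself: it replaces the tree kernel with a bag of parent-child node-type pairs, weights each pair by TF-IDF over a small corpus (so boilerplate shared by everyone counts for little), and compares two programs by weighted cosine similarity.

```python
# Simplified sketch of TF-IDF-weighted AST similarity (not the WASTK
# tree kernel): bag of parent-child node-type pairs + IDF weights.
import ast
import math
from collections import Counter

def node_bigrams(source):
    """Count parent-child node-type pairs of the abstract syntax tree."""
    tree = ast.parse(source)
    pairs = Counter()
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            pairs[(type(parent).__name__, type(child).__name__)] += 1
    return pairs

def tfidf_similarity(src_a, src_b, corpus):
    """Weighted cosine similarity between two programs' AST bigram bags."""
    bags = [node_bigrams(s) for s in corpus]
    def idf(term):
        df = sum(1 for bag in bags if term in bag)
        return math.log((1 + len(bags)) / (1 + df)) + 1.0
    a, b = node_bigrams(src_a), node_bigrams(src_b)
    dot = sum(a[t] * b[t] * idf(t) ** 2 for t in set(a) | set(b))
    na = math.sqrt(sum((a[t] * idf(t)) ** 2 for t in a))
    nb = math.sqrt(sum((b[t] * idf(t)) ** 2 for t in b))
    return dot / (na * nb) if na and nb else 0.0

corpus = ["def f(x):\n    return x + 1\n",
          "def g(y):\n    return y * 2\n",
          "print('hello')\n"]
print(tfidf_similarity(corpus[0], corpus[1], corpus))  # high: same skeleton
print(tfidf_similarity(corpus[0], corpus[2], corpus))  # low: different shape
```

Note that renaming variables does not change the score at all here, which is exactly the property that makes AST-based comparison harder to fool than text diffing.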

  17. Regulatory instrument review: Aging management of LWR cables, containment and basemat, reactor coolant pumps, and motor-operated valves

    International Nuclear Information System (INIS)

    Werry, E.V.; Somasundaram, S.

    1995-09-01

    The results of Stage 2 of the Regulatory Instrument Review are presented in this volume. Selected regulatory instruments, such as the Code of Federal Regulations (CFR), US Nuclear Regulatory Commission (NRC) Regulatory Guides, and ASME Codes, were investigated to determine the extent to which these regulations apply aging management to selected safety-related components in nuclear power plants. The Regulatory Instrument Review was funded by the NRC under the Nuclear Plant Aging Research (NPAR) program. Stage 2 of the review focused on four safety-related structures and components: cables, containment and basemat, reactor coolant pumps, and motor-operated valves. The review suggests that the primary emphasis of the regulatory instruments was on the design, construction, start-up, and operation of a nuclear power plant, and that aging issues were primarily addressed after an aging-related problem was recognized. This Stage 2 review confirms the results of the prior review (see Regulatory Instrument Review: Management of Aging of LWR Major Safety-Related Components, NUREG/CR-5490). The observations indicate that the regulations generally address management of age-related degradation indirectly; specific age-related degradation phenomena frequently are dealt with in bulletins and notices or through generic issues, letters, etc. The major recommendation of this report, therefore, is that the regulatory instruments should more directly and explicitly address the aging phenomenon and the management of the age-related degradation process

  18. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is getting more necessary in the course of program design in college education. However, the trick of plagiarizing plus a little modification exists among some students' home works. It's not easy for teachers to judge if there's plagiarizing in source code or not. Traditional detection algorithms cannot fit this…

  19. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  20. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  1. HTGR nuclear heat source component design and experience

    International Nuclear Information System (INIS)

    Peinado, C.O.; Wunderlich, R.G.; Simon, W.A.

    1982-05-01

    The high-temperature gas-cooled reactor (HTGR) nuclear heat source components have been under design and development since the mid-1950s. Two power plants have been designed, constructed, and operated: the Peach Bottom Atomic Power Station and the Fort St. Vrain Nuclear Generating Station. Recently, development has focused on the primary system components for a 2240-MW(t) steam cycle HTGR capable of generating about 900 MW(e) electric power or alternately producing high-grade steam and cogenerating electric power. These components include the steam generators, core auxiliary heat exchangers, primary and auxiliary circulators, reactor internals, and thermal barrier system. A discussion of the design and operating experience of these components is included

  2. Coupled geochemical and solute transport code development

    International Nuclear Information System (INIS)

    Morrey, J.R.; Hostetler, C.J.

    1985-01-01

    A number of coupled geochemical hydrologic codes have been reported in the literature. Some of these codes have directly coupled the source-sink term to the solute transport equation. The current consensus seems to be that directly coupling hydrologic transport and chemical models through a series of interdependent differential equations is not feasible for multicomponent problems with complex geochemical processes (e.g., precipitation/dissolution reactions). A two-step process appears to be the required method of coupling codes for problems where a large suite of chemical reactions must be monitored. Two-step structure requires that the source-sink term in the transport equation is supplied by a geochemical code rather than by an analytical expression. We have developed a one-dimensional two-step coupled model designed to calculate relatively complex geochemical equilibria (CTM1D). Our geochemical module implements a Newton-Raphson algorithm to solve heterogeneous geochemical equilibria, involving up to 40 chemical components and 400 aqueous species. The geochemical module was designed to be efficient and compact. A revised version of the MINTEQ Code is used as a parent geochemical code
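
    The two-step structure can be sketched as follows. This is an illustration, not CTM1D: the chemistry is reduced to a single invented dimerization equilibrium 2A ⇌ A₂ with constant K. Each time step performs an explicit upwind transport step on the totals, then a Newton-Raphson equilibrium solve in every cell, which is the role played by the geochemical module.

```python
# Minimal sketch of two-step (operator-split) transport/chemistry coupling:
# transport moves totals, then chemistry re-equilibrates each cell.

K = 10.0  # assumed equilibrium constant for the toy reaction 2A <=> A2

def equilibrate(total):
    """Newton-Raphson for the free concentration a: a + 2*K*a**2 = total."""
    a = total or 0.0
    for _ in range(50):
        f = a + 2.0 * K * a * a - total
        fp = 1.0 + 4.0 * K * a          # df/da, always >= 1 so no blow-up
        a -= f / fp
    return a

def step(totals, velocity_cfl=0.5):
    """One upwind-advection step followed by a chemistry step on a 1-D grid."""
    moved = [totals[0]] + [             # fixed-concentration inlet boundary
        totals[i] - velocity_cfl * (totals[i] - totals[i - 1])
        for i in range(1, len(totals))
    ]
    free = [equilibrate(t) for t in moved]  # chemistry supplies speciation
    return moved, free

totals = [1.0, 0.0, 0.0, 0.0]
for _ in range(3):
    totals, free = step(totals)
print(totals, free)
```

In a real code the chemistry step would solve the full heterogeneous equilibrium (the abstract mentions up to 40 components and 400 aqueous species) and would feed a source-sink term back into the transport equation rather than a simple speciation.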

  3. Neutron dosimetry at SLAC: Neutron sources and instrumentation

    International Nuclear Information System (INIS)

    Liu, J.C.; Jenkins, T.M.; McCall, R.C.; Ipe, N.E.

    1991-10-01

    This report summarizes in detail the dosimetric characteristics of the five radioisotopic neutron sources (²³⁸PuBe, ²⁵²Cf, ²³⁸PuB, ²³⁸PuF₄, and ²³⁸PuLi) and the neutron instrumentation (moderated BF₃ detector, Anderson-Braun (AB) detector, AB remmeter, Victoreen 488 Neutron Survey Meter, Beam Shut-Off Ionization Chamber, ¹²C plastic scintillator detector, moderated indium foil detector, and moderated and bare TLDs) that are commonly used for neutron dosimetry at the Stanford Linear Accelerator Center (SLAC). 36 refs., 19 figs.

  4. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce the power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of the sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among the sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with massive numbers of sensors, towards the realization of the Internet of Sensing Things (IoST).
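
    The core decoding idea, recovering a compressed member sample with the clusterhead's uncompressed sample as side information, can be sketched with a simple binning scheme. The parameters below are invented and D-DSC itself is more elaborate, but the sketch shows why transmitting only a coset index suffices when neighbouring samples are strongly correlated.

```python
# Sketch of side-information decoding (binning): a member sends only the
# k low-order bits of its sample; the sink picks the coset member closest
# to the clusterhead's uncompressed reading.

K = 3  # bits actually transmitted by a cluster member (assumed)

def compress(sample):
    """Coset index: keep only the K low-order bits."""
    return sample % (1 << K)

def decode(coset_index, side_info):
    """Pick the coset member nearest the correlated side information."""
    step = 1 << K
    base = (side_info // step) * step + coset_index
    candidates = (base - step, base, base + step)
    return min(candidates, key=lambda v: abs(v - side_info))

clusterhead = 200                      # uncompressed side information
member = 203                           # correlated neighbour (|diff| < 2**K / 2)
recovered = decode(compress(member), clusterhead)
print(recovered)  # 203, recovered from only 3 transmitted bits
```

Decoding is exact as long as the member's sample differs from the side information by less than half a coset spacing, which is precisely the correlation assumption such schemes rely on.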

  5. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  6. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    Following the relevant technical standards (e.g., IEC 880), it is necessary to verify each step in the development process of safety-critical software. This also holds for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost and effort, a tool should be used which is developed independently from the development of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  7. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    International Nuclear Information System (INIS)

    Keney, G.S.

    1981-08-01

    A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV
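
    The activation-then-decay bookkeeping such a code performs can be illustrated with a minimal sketch. All production rates and half-lives below are invented, and ACDOS1's actual model additionally sums over up to 50 neutron energy groups and folds in dose-conversion geometry; this only shows the saturation buildup and post-shutdown decay per nuclide.

```python
# Back-of-envelope sketch of induced-activity bookkeeping: saturation
# buildup during irradiation, exponential decay after shutdown, summed
# over target nuclides. All numbers are invented.
import math

LN2 = math.log(2.0)

def activity(production_rate, half_life, t_irr, t_cool):
    """Induced activity (decays/s) after irradiation and cooling times (s)."""
    lam = LN2 / half_life
    return production_rate * (1.0 - math.exp(-lam * t_irr)) * math.exp(-lam * t_cool)

# hypothetical activation products in a beamline component:
nuclides = [  # (production rate 1/s, half-life s)
    (1.0e8, 900.0),    # short-lived: saturates fast, dies fast
    (5.0e6, 2.6e6),    # ~30-day half-life: dominates the later dose rate
]

for t_cool in (0.0, 3600.0, 86400.0 * 7):
    total = sum(activity(r, hl, t_irr=3600.0 * 24, t_cool=t_cool)
                for r, hl in nuclides)
    print(f"{t_cool:>9.0f} s after shutdown: {total:.3e} Bq")
```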

  8. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross section files have led to errors in ¹²⁵I and ¹⁰³Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as ¹²⁵I and ¹⁰³Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for ¹⁰³Pd and 10 cm for ¹²⁵I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for ¹⁹²Ir and less than 1.2% for ¹³⁷Cs between the three codes.

  9. Polarization diversity scheme on spectral polarization coding optical code-division multiple-access network

    Science.gov (United States)

    Yen, Chih-Ta; Huang, Jen-Fa; Chang, Yao-Tang; Chen, Bo-Hau

    2010-12-01

    We present an experiment demonstrating the spectral-polarization coding optical code-division multiple-access system introduced with a nonideal state of polarization (SOP) matching conditions. In the proposed system, the encoding and double balanced-detection processes are implemented using a polarization-diversity scheme. Because of the quasiorthogonality of Hadamard codes combining with array waveguide grating routers and a polarization beam splitter, the proposed codec pair can encode-decode multiple code words of Hadamard code while retaining the ability for multiple-access interference cancellation. The experimental results demonstrate that when the system is maintained with an orthogonal SOP for each user, an effective reduction in the phase-induced intensity noise is obtained. The analytical SNR values are found to overstate the experimental results by around 2 dB when the received effective power is large. This is mainly limited by insertion losses of components and a nonflattened optical light source. Furthermore, the matching conditions can be improved by decreasing nonideal influences.
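
    The multiple-access interference cancellation afforded by the quasiorthogonality of Hadamard codes can be checked numerically. In the idealized sketch below (ignoring the optics, polarization, and noise entirely), each user's bit is spread over spectral chips by a unipolar Hadamard row, and balanced detection, i.e. correlation with the code minus correlation with its complement, recovers each bit while the other users cancel exactly.

```python
# Numeric check of MAI cancellation with unipolar Hadamard codes and
# balanced (double) detection; an idealized model of the scheme described.

def hadamard(n):
    """Sylvester construction: n x n (+-1) Hadamard matrix, n a power of 2."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

N = 8
rows = hadamard(N)
# unipolar {0,1} codes; drop the all-ones row, which carries no contrast:
codes = [[(1 + x) // 2 for x in row] for row in rows[1:]]

bits = [1, 0, 1, 1, 0, 1, 0]          # one data bit per user
# received spectrum: superposition of every active user's code
received = [sum(b * c[j] for b, c in zip(bits, codes)) for j in range(N)]

def balanced_detect(code):
    """Correlate with the code and its complement, then difference."""
    on = sum(r * c for r, c in zip(received, code))
    off = sum(r * (1 - c) for r, c in zip(received, code))
    return 1 if on - off > 0 else 0

print([balanced_detect(c) for c in codes])  # recovers `bits` exactly
```

Algebraically, the difference equals the correlation of the received spectrum with the bipolar row, which is N/2 for the intended user and zero for every other user; this is the interference cancellation the abstract refers to.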

  10. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
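
    The flavor of differentiation by source transformation can be shown with a toy differentiator, far simpler than Tangent, which handles whole functions, control flow, and NumPy. The sketch parses an expression in x, applies the sum and product rules directly on the AST, and emits new Python source for the derivative (requires Python 3.9+ for ast.unparse).

```python
# Toy source-code-transformation AD: derivative of a Python expression
# in x, built by rewriting the AST and unparsing back to source.
import ast

def d(node):
    """Return an AST for the derivative of `node` with respect to x."""
    if isinstance(node, ast.Constant):
        return ast.Constant(0)
    if isinstance(node, ast.Name):            # assume the only variable is x
        return ast.Constant(1)
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
        return ast.BinOp(d(node.left), ast.Add(), d(node.right))
    if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
        u, v = node.left, node.right          # product rule: u'*v + u*v'
        return ast.BinOp(ast.BinOp(d(u), ast.Mult(), v), ast.Add(),
                         ast.BinOp(u, ast.Mult(), d(v)))
    raise NotImplementedError(ast.dump(node))

def derivative_source(expr):
    """Source text of d(expr)/dx, for expressions using +, * and x."""
    tree = ast.parse(expr, mode="eval")
    return ast.unparse(ast.fix_missing_locations(d(tree.body)))

src = derivative_source("x * x + 3 * x")
print(src)                     # an unsimplified sum-and-product expression
print(eval(src, {"x": 5.0}))   # 13.0 == d/dx (x^2 + 3x) at x = 5
```

The output is deliberately unsimplified; real SCT tools like Tangent also apply simplification and generate full differentiated function definitions rather than bare expressions.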

  11. Separation of musical instruments based on amplitude and frequency comodulation

    Science.gov (United States)

    Jacobson, Barry D.; Cauwenberghs, Gert; Quatieri, Thomas F.

    2002-05-01

    In previous work, amplitude comodulation was investigated as a basis for monaural source separation. Amplitude comodulation refers to similarities in amplitude envelopes of individual spectral components emitted by particular types of sources. In many types of musical instruments, amplitudes of all resonant modes rise/fall, and start/stop together during the course of normal playing. We found that under certain well-defined conditions, a mixture of constant frequency, amplitude comodulated sources can unambiguously be decomposed into its constituents on the basis of these similarities. In this work, system performance was improved by relaxing the constant frequency requirement. String instruments, for example, which are normally played with vibrato, are both amplitude and frequency comodulated sources, and could not be properly tracked under the constant frequency assumption upon which our original algorithm was based. Frequency comodulation refers to similarities in frequency variations of individual harmonics emitted by these types of sources. The analytical difficulty is in defining a representation of the source which properly tracks frequency varying components. A simple, fixed filter bank can only track an individual spectral component for the duration in which it is within the passband of one of the filters. Alternatives are therefore explored which are amenable to real-time implementation.
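
    The amplitude-comodulation cue can be sketched as a grouping problem. This is an illustration, not the paper's algorithm: spectral components whose amplitude envelopes are strongly correlated over time are assigned to the same source, here by a greedy pass with a Pearson-correlation threshold.

```python
# Sketch of grouping spectral components by amplitude comodulation:
# components whose envelopes rise and fall together join one group.
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_components(envelopes, threshold=0.9):
    """Greedy grouping: a component joins the first group it comodulates with."""
    groups = []
    for idx, env in enumerate(envelopes):
        for group in groups:
            if pearson(env, envelopes[group[0]]) >= threshold:
                group.append(idx)
                break
        else:
            groups.append([idx])
    return groups

t = [i / 100.0 for i in range(200)]
rising = [ti for ti in t]                     # source A: swelling note
falling = [2.0 - ti for ti in t]              # source B: decaying note
envelopes = [rising, [2 * v for v in rising], falling, [0.5 * v for v in falling]]
print(group_components(envelopes))  # [[0, 1], [2, 3]]
```

Extending this to frequency comodulation is exactly the tracking problem the abstract raises: the envelopes must first be extracted by a representation that can follow components whose frequencies drift, which a fixed filter bank cannot.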

  12. Swift Burst Alert Telescope (BAT) Instrument Response

    International Nuclear Information System (INIS)

    Parsons, A.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Hullinger, D.; Krimm, H.; Markwardt, C.; Tueller, J.; Fenimore, E.; Palmer, D.; Sato, G.; Takahashi, T.; Nakazawa, K.; Okada, Y.; Takahashi, H.; Suzuki, M.; Tashiro, M.

    2004-01-01

    The Burst Alert Telescope (BAT), a large coded aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts, and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to an increasingly greater accuracy. This paper describes the spectral models and the ground calibration experiments used to determine the BAT response to an accuracy suitable for gamma-ray burst studies

  13. Neutron dosimetry at SLAC: Neutron sources and instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Liu, J.C.; Jenkins, T.M.; McCall, R.C.; Ipe, N.E.

    1991-10-01

    This report summarizes in detail the dosimetric characteristics of the five radioisotopic-type neutron sources ({sup 238}PuBe, {sup 252}Cf, {sup 238}PuB, {sup 238}PuF{sub 4}, and {sup 238}PuLi) and the neutron instrumentation (moderated BF{sub 3} detector, Anderson-Braun (AB) detector, AB remmeter, Victoreen 488 Neutron Survey Meter, Beam Shut-Off Ionization Chamber, {sup 12}C plastic scintillator detector, moderated indium foil detector, and moderated and bare TLDs) that are commonly used for neutron dosimetry at the Stanford Linear Accelerator Center (SLAC). 36 refs., 19 figs.

  14. Design of the power sources for portable nuclear instruments

    International Nuclear Information System (INIS)

    Chen Wei; Fang Fang; Cui Yan; Cui Junliang; Zhou Wei

    2007-01-01

    Powering portable equipment is a perennial concern. New battery types and battery-management techniques have brought great convenience to daily life, and rechargeable batteries are now widely used in portable equipment, but access to charging power can be limited in special situations. This paper discusses how to combine rechargeable batteries with conventional alkaline batteries to power portable nuclear instruments. (authors)

  15. Advanced Technologies For Heterodyne Radio Astronomy Instrumentation - Part1 By A. Pavolotsky, And Advanced Technologies For Heterodyne Radio Astronomy Instrumentation - Part2 By V. Desmaris

    Science.gov (United States)

    Pavolotsky, Alexey

    2018-01-01

    Modern and future heterodyne radio astronomy instrumentation critically depends on the availability of advanced fabrication technologies and components. In Part 1 of the poster, we present the thin-film fabrication process for SIS mixer receivers, utilizing either AlOx or AlN barrier superconducting tunnel junctions, developed and supported by GARD. A summary of the process design rules is presented. It is well known that the performance of waveguide mixer components depends critically on the accuracy of their geometrical dimensions. At GARD, all critical mechanical parts are 3D-mapped with sub-um accuracy. Further progress in heterodyne instrumentation requires new efficient and compact sources of LO signal. We present an SIS-based frequency multiplier, which could become a new option for an LO source. Future radio astronomy THz receivers will need waveguide components whose tiny dimensions make fabrication by traditional mechanical machining infeasible. We present an alternative micromachining technique for fabricating waveguide components for bands up to 5 THz and possibly beyond.

  16. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission however is optimal at all CSNRs, if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
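
The hybrid idea, quantizing digitally and spending the analog channel use on the quantization error, can be sketched numerically (illustrative Python; the step size, noise level, and power normalization are invented, and no channel outage or fading is modeled): at high CSNR the analog residual refines the reconstruction far below the quantizer's error floor.

```python
import random

random.seed(7)
DELTA = 0.5                                # quantizer step of the digital layer

def quantize(x):
    return DELTA * round(x / DELTA)

def hda_transmit(x, noise_std):
    """Digital layer carries Q(x) reliably; analog layer carries the
    scaled quantization residual through an AWGN channel use."""
    q = quantize(x)
    residual = x - q                       # bounded in [-DELTA/2, DELTA/2]
    gain = 1.0 / (DELTA / 2)               # scale residual to the power budget
    rx_analog = gain * residual + random.gauss(0.0, noise_std)
    return q + rx_analog / gain            # receiver: un-scale, add back

samples = [random.gauss(0.0, 1.0) for _ in range(20000)]
noise_std = 0.05                           # a high-CSNR analog channel

mse_digital = sum((x - quantize(x)) ** 2 for x in samples) / len(samples)
mse_hybrid = sum((x - hda_transmit(x, noise_std)) ** 2
                 for x in samples) / len(samples)
print(mse_hybrid < mse_digital)            # → True
```

The digital-only system is stuck at roughly DELTA**2 / 12 regardless of CSNR; the hybrid's error keeps shrinking as the channel improves, which is the graceful behavior the abstract attributes to the analog part.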

  17. The TALL-3D facility design and commissioning tests for validation of coupled STH and CFD codes

    Energy Technology Data Exchange (ETDEWEB)

    Grishchenko, Dmitry, E-mail: dmitry@safety.sci.kth.se; Jeltsov, Marti, E-mail: marti@safety.sci.kth.se; Kööp, Kaspar, E-mail: kaspar@safety.sci.kth.se; Karbojian, Aram, E-mail: karbojan@kth.se; Villanueva, Walter, E-mail: walter@safety.sci.kth.se; Kudinov, Pavel, E-mail: pavel@safety.sci.kth.se

    2015-08-15

    Highlights: • Design of a heavy liquid thermal-hydraulic loop for CFD/STH code validation. • Description of the loop instrumentation and assessment of measurement error. • Experimental data from forced to natural circulation transient. - Abstract: Application of coupled CFD (Computational Fluid Dynamics) and STH (System Thermal Hydraulics) codes is a prerequisite for computationally affordable and sufficiently accurate prediction of thermal-hydraulics of complex systems. Coupled STH and CFD codes require validation for understanding and quantification of the sources of uncertainties in the code prediction. TALL-3D is a liquid Lead Bismuth Eutectic (LBE) loop developed according to the requirements for the experimental data for validation of coupled STH and CFD codes. The goals of the facility design are to provide (i) mutual feedback between natural circulation in the loop and complex 3D mixing and stratification phenomena in the pool-type test section, (ii) a possibility to validate standalone STH and CFD codes for each subsection of the facility, and (iii) sufficient experimental data to separate the process of input model calibration from code validation. Description of the facility design and its main components, the approach to estimation of experimental uncertainty, and the calibration of model input parameters that are not directly measured in the experiment are discussed in the paper. First experimental data from the forced to natural circulation transient are also provided in the paper.

  18. Analysis and simulation of a small-angle neutron scattering instrument on a 1 MW long pulse spallation source

    International Nuclear Information System (INIS)

    Olah, G.A.; Hjelm, R.P.; Lujan, M. Jr.

    1996-01-01

    We studied the design and performance of a small-angle neutron scattering (SANS) instrument for a proposed 1 MW, 60 Hz long pulsed spallation source at the Los Alamos Neutron Science Center (LANSCE). An analysis of the effects of source characteristics and chopper performance combined with instrument simulations using the LANSCE Monte Carlo instrument simulations package shows that the T0 chopper should be no more than 5 m from the source with the frame overlap and frame definition choppers at 5.6 and greater than 7 m, respectively. The study showed that an optimal pulse structure has an exponential decaying tail with τ ∼ 750 μs. The Monte Carlo simulations were used to optimize the LPSS SANS, showing that an optimal length is 18 m. The simulations show that an instrument with variable length is best to match the needs of a given measurement. The performance of the optimized LPSS instrument was found to be comparable with present world standard instruments

  19. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies in some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of photons, were kept identical, thus eliminating possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and the other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  20. Validation of comprehensive space radiation transport code

    International Nuclear Information System (INIS)

    Shinn, J.L.; Simonsen, L.C.; Cucinotta, F.A.

    1998-01-01

    The HZETRN code has been developed over the past decade to evaluate the local radiation fields within sensitive materials on spacecraft in the space environment. Most of the more important nuclear and atomic processes are now modeled and evaluation within a complex spacecraft geometry with differing material components, including transition effects across boundaries of dissimilar materials, are included. The atomic/nuclear database and transport procedures have received limited validation in laboratory testing with high energy ion beams. The codes have been applied in design of the SAGE-III instrument resulting in material changes to control injurious neutron production, in the study of the Space Shuttle single event upsets, and in validation with space measurements (particle telescopes, tissue equivalent proportional counters, CR-39) on Shuttle and Mir. The present paper reviews the code development and presents recent results in laboratory and space flight validation

  1. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without much effort. A rule-based approach with Prolog allows the verification goals to be characterized in a concise and declarative way. In this paper, we describe our approach to verifying the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree), which are then further tested against verification goals. The different program-flow branches due to control structures are derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study in which we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain-specific language (DSL) in Prolog to express the verification goals.
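
A rough Python stand-in for the enumerate-paths-then-verify idea (the AST shape, call names, and semaphore rule are all invented for illustration; the paper's actual analysis runs as Prolog rules over facts derived from the C++ AST): branching control structures yield multiple execution sequences, each of which must satisfy the verification goal.

```python
# AST nodes as (kind, *children): a toy stand-in for the facts the
# paper loads into Prolog from the C++ abstract syntax tree.
prog = ("seq",
        ("call", "sem.lock"),
        ("if", ("call", "write_block"),    # then-branch
               ("call", "log_error")),     # else-branch
        ("call", "sem.unlock"))

def paths(node):
    """Enumerate every execution sequence, branching on 'if' nodes the
    way Prolog backtracking would."""
    kind = node[0]
    if kind == "call":
        return [[node[1]]]
    if kind == "seq":
        result = [[]]
        for child in node[1:]:
            result = [p + q for p in result for q in paths(child)]
        return result
    if kind == "if":
        return paths(node[1]) + paths(node[2])
    raise ValueError(kind)

def semaphore_ok(path):
    """Verification goal: lock/unlock strictly alternate and the
    semaphore is released on exit."""
    held = False
    for call in path:
        if call == "sem.lock":
            if held:
                return False
            held = True
        elif call == "sem.unlock":
            if not held:
                return False
            held = False
    return not held

print(all(semaphore_ok(p) for p in paths(prog)))   # → True
```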

  2. Health physics instrument manual

    International Nuclear Information System (INIS)

    Gupton, E.D.

    1978-08-01

    The purpose of this manual is to provide apprentice health physics surveyors and other operating groups not directly concerned with radiation detection instruments a working knowledge of the radiation detection and measuring instruments in use at the Laboratory. The characteristics and applications of the instruments are given. Portable instruments, stationary instruments, personnel monitoring instruments, sample counters, and miscellaneous instruments are described. Also, information sheets on calibration sources, procedures, and devices are included. Gamma sources, beta sources, alpha sources, neutron sources, special sources, a gamma calibration device for badge dosimeters, and a calibration device for ionization chambers are described

  3. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.
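
A minimal sketch of a 1:2 Shannon-Kotel'nikov-style spiral map (illustrative Python; a single spiral arm for nonnegative samples only, with brute-force nearest-point decoding and an invented tightness parameter, rather than the paper's bi-spiral construction): stretching the source interval along a long curve in channel space is what buys the noise reduction.

```python
import math

ALPHA = 0.15   # spiral tightness: smaller = longer curve = finer resolution

def encode(x):
    """1:2 bandwidth expansion: map a nonnegative sample onto an
    Archimedean spiral (radius grows linearly with x)."""
    theta = x / ALPHA
    return (ALPHA * theta * math.cos(theta), ALPHA * theta * math.sin(theta))

def decode(y, x_max=3.0, grid=3000):
    """ML-style decoding by brute-force nearest point on the curve."""
    best, best_d = 0.0, float("inf")
    for k in range(grid + 1):
        x = x_max * k / grid
        cx, cy = encode(x)
        d = (y[0] - cx) ** 2 + (y[1] - cy) ** 2
        if d < best_d:
            best, best_d = x, d
    return best

x = 1.234
noisy = tuple(c + n for c, n in zip(encode(x), (0.01, -0.02)))
print(abs(decode(noisy) - x) < 0.05)   # → True
```

The curve's stretch factor (about x / ALPHA here) shrinks the along-curve noise when mapped back to the source, while the spacing between spiral arms bounds how much channel noise can be tolerated before a decoding jump to the wrong arm; that trade-off is what the power and parameter optimization in such schemes balances.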

  5. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The source term code package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes that assists in calculating the release of radioactive materials from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems, but since it is written in FORTRAN 77 it can also be run on other systems such as IBM, Burroughs, and Elebra. The Elebra MX-8500 version of STCP contains five codes: MARCH3, TRAPMELT, TCCA, VANESSA, and NAVA. The example presented in this report considers a small-break LOCA in a PWR-type reactor. (M.I.)

  6. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency that new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations, and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and The Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  7. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses arising from the use of borehole logging equipment containing sealed radioactive sources as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging to investigate the physical properties of the geological sequence, any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  8. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  9. Neutron Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by neutron activation analysis (NAA) of 197Au foils. The neutron fluence rate distribution versus energy is also calculated using the MCNP-4B code based on the ENDF/B-V library. This theoretical simulation, together with the experimental work, is a first such exercise for the Iranian group and establishes confidence in the code for further research. In the theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The fast and thermal neutron fluence rates obtained by the NAA measurements and by the MCNP calculations are compared.

  10. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  11. Imperative-program transformation by instrumented-interpreter specialization

    DEFF Research Database (Denmark)

    Debois, Søren

    2008-01-01

    We describe how to implement strength reduction, loop-invariant code motion and loop quasi-invariant code motion by specializing instrumented interpreters. To curb code duplication intrinsic to such specialization, we introduce a new program transformation, rewinding, which uses Moore automata…
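
Loop-invariant and quasi-invariant code motion, the transformations named in the abstract, can be sketched directly on a toy intermediate representation (illustrative Python; the statement format and variable names are invented, and this plain rewrite is not the paper's interpreter-specialization technique): a statement whose operands are never written in the loop can be hoisted, and repeating the pass catches statements that become invariant only after an earlier hoist.

```python
# Toy three-address loop body: (target, op, operand, operand).
# 'a', 'b', 'c' are defined outside the loop; 'i' is the loop counter.
loop = [
    ("t1", "mul", "a", "b"),    # invariant: a and b are never written here
    ("t2", "add", "t1", "c"),   # quasi-invariant: invariant once t1 is hoisted
    ("i",  "add", "i", "t2"),   # genuinely loop-varying
]

def hoist(body):
    """Move (quasi-)invariant statements into a preheader.

    Repeated passes catch statements that become invariant only after an
    earlier hoist (quasi-invariance). Assumes each variable is assigned
    at most once in the body and that the loop executes at least once.
    """
    preheader, body = [], list(body)
    changed = True
    while changed:
        changed = False
        written = {target for target, *_ in body}
        for stmt in list(body):
            target, _op, *operands = stmt
            if all(v not in written for v in operands):
                preheader.append(stmt)
                body.remove(stmt)
                written.discard(target)
                changed = True
    return preheader, body

pre, residual = hoist(loop)
print([s[0] for s in pre], [s[0] for s in residual])   # → ['t1', 't2'] ['i']
```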

  12. Indigenous instrumentation for mass spectrometry: Part II - development of plasma source mass spectrometers. PD-5-3

    International Nuclear Information System (INIS)

    Nataraju, V.

    2007-01-01

    The growing demands from analytical community, for a precise isotope ratio and ultra trace concentration measurements, has lead to significant improvement in mass spectrometer instrumentation development with respect to sensitivity, detection limits, precision and accuracy. Among the many analytical techniques available, plasma source mass spectrometers like Inductively Coupled Plasma Mass Spectrometry (ICPMS), multi collector (MC) ICPMS and Glow Discharge Mass Spectrometry (GDMS), have matured into reliable tools for the above applications. Where as ICPMS is by far the most successful method for aqueous solutions, GDMS is being applied for bulk and impurity analysis of conducting as well non-conducting solids. VPID, BARC has been developing mass spectrometers for different inorganic applications of DAE users. Over the years expertise has been developed in all the aspects of mass spectrometry instrumentation. Part 1 of this indigenous instrumentation on mass spectrometry gives details of magnetic sector instruments with either EI or TI source for isotopic ratio analysis. The present paper is a continuation of that on plasma source and quadrupole mass spectrometers. This paper covers i) ICP-QMS, ii) MC-ICPMS, iii) GDMS and iv) QMS

  13. Neutron scattering instrumentation for biology at spallation neutron sources

    Energy Technology Data Exchange (ETDEWEB)

    Pynn, R. [Los Alamos National Laboratory, NM (United States)

    1994-12-31

    Conventional wisdom holds that since biological entities are large, they must be studied with cold neutrons, a domain in which reactor sources of neutrons are often supposed to be pre-eminent. In fact, the current generation of pulsed spallation neutron sources, such as LANSCE at Los Alamos and ISIS in the United Kingdom, has demonstrated a capability for small angle scattering (SANS) - a typical cold- neutron application - that was not anticipated five years ago. Although no one has yet built a Laue diffractometer at a pulsed spallation source, calculations show that such an instrument would provide an exceptional capability for protein crystallography at one of the existing high-power spoliation sources. Even more exciting is the prospect of installing such spectrometers either at a next-generation, short-pulse spallation source or at a long-pulse spallation source. A recent Los Alamos study has shown that a one-megawatt, short-pulse source, which is an order of magnitude more powerful than LANSCE, could be built with today`s technology. In Europe, a preconceptual design study for a five-megawatt source is under way. Although such short-pulse sources are likely to be the wave of the future, they may not be necessary for some applications - such as Laue diffraction - which can be performed very well at a long-pulse spoliation source. Recently, it has been argued by Mezei that a facility that combines a short-pulse spallation source similar to LANSCE, with a one-megawatt, long-pulse spallation source would provide a cost-effective solution to the global shortage of neutrons for research. The basis for this assertion as well as the performance of some existing neutron spectrometers at short-pulse sources will be examined in this presentation.

  14. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Full Text Available There are many manuals explaining language specification among UML-related books. Only some of books mentioned concentrate on practical aspects of using the UML language in effective way using CASE tools and RUP. The current paper presents transitions from system requirements specification to structural source code, useful while developing an information system.

  15. Developing a GIS for CO2 analysis using lightweight, open source components

    Science.gov (United States)

    Verma, R.; Goodale, C. E.; Hart, A. F.; Kulawik, S. S.; Law, E.; Osterman, G. B.; Braverman, A.; Nguyen, H. M.; Mattmann, C. A.; Crichton, D. J.; Eldering, A.; Castano, R.; Gunson, M. R.

    2012-12-01

    There are advantages to approaching the realm of geographic information systems (GIS) using lightweight, open source components in place of a more traditional web map service (WMS) solution. Rapid prototyping, schema-less data storage, the flexible interchange of components, and open source community support are just some of the benefits. In our effort to develop an application supporting the geospatial and temporal rendering of remote sensing carbon-dioxide (CO2) data for the CO2 Virtual Science Data Environment project, we have connected heterogeneous open source components together to form a GIS. Utilizing widely popular open source components including the schema-less database MongoDB, Leaflet interactive maps, the HighCharts JavaScript graphing library, and Python Bottle web-services, we have constructed a system for rapidly visualizing CO2 data with reduced up-front development costs. These components can be aggregated together, resulting in a configurable stack capable of replicating features provided by more standard GIS technologies. The approach we have taken is not meant to replace the more established GIS solutions, but to instead offer a rapid way to provide GIS features early in the development of an application and to offer a path towards utilizing more capable GIS technology in the future.
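
The rapid-prototyping benefit of schema-less storage can be sketched without any of the actual stack (illustrative Python; plain dicts stand in for MongoDB documents, and all field names and values are invented): records may carry extra fields without any schema migration, and a bounding-box query of the kind a web-service endpoint would run is a simple filter.

```python
from datetime import date

# Schema-less records as a stand-in for MongoDB documents: each sounding
# carries whatever fields the instrument team provided.
soundings = [
    {"lat": 34.2, "lon": -118.2, "co2_ppm": 398.1, "day": date(2012, 6, 1)},
    {"lat": 35.7, "lon": 139.7,  "co2_ppm": 401.4, "day": date(2012, 6, 2)},
    {"lat": 33.9, "lon": -118.4, "co2_ppm": 397.6, "day": date(2012, 7, 9),
     "quality_flag": 1},               # extra field: no schema change needed
]

def query(bbox, start, end):
    """Bounding-box plus date-range filter over the soundings."""
    s, w, n, e = bbox
    return [r for r in soundings
            if s <= r["lat"] <= n and w <= r["lon"] <= e
            and start <= r["day"] <= end]

hits = query(bbox=(30.0, -125.0, 40.0, -110.0),
             start=date(2012, 6, 1), end=date(2012, 12, 31))
print(len(hits), [r["co2_ppm"] for r in hits])   # → 2 [398.1, 397.6]
```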

  16. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.
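
The coherent-summation point can be reproduced with a line of arithmetic (illustrative Python; unit chip amplitudes assumed): when the source coherence time exceeds the detector integration time, overlapping chips add as complex field amplitudes, so the detected intensity swings between constructive and destructive extremes, whereas incoherent chips simply add in intensity.

```python
import cmath
import math

amps = [1.0, 1.0]    # two chip pulses superimposing within one chip time

def coherent(phases):
    """Coherence time >> integration time: field amplitudes add."""
    return abs(sum(a * cmath.exp(1j * p) for a, p in zip(amps, phases))) ** 2

def incoherent():
    """Coherence time << chip time: intensities add, phases average out."""
    return sum(a ** 2 for a in amps)

in_phase = coherent([0.0, 0.0])            # constructive: intensity 4.0
anti_phase = coherent([0.0, math.pi])      # destructive: intensity ~0.0
print(in_phase, anti_phase, incoherent())  # incoherent case is always 2.0
```

The spread between 0 and 4 in the coherent case is the source-coherence impairment the abstract describes; code families that avoid chip superposition (sparse, nonperiodic codes) suppress it.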

  18. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node, have improved performance and scalability, enhanced accuracy and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  19. Pilot studies of management of ageing of nuclear power plant instrumentation and control components

    International Nuclear Information System (INIS)

    Burnay, S.G.; Simola, K.; Kossilov, A.; Pachner, J.

    1993-01-01

    This paper describes pilot studies which have been implemented to study the aging behavior of safety-related components of nuclear power plants. In 1989 the IAEA initiated work on pilot studies related to the aging of such components. Four components were identified for study: the primary nozzle of a reactor vessel; a motor-operated isolating valve; the concrete containment building; and instrumentation and control cables within the containment. The study began with phase 1 efforts directed toward understanding the aging process and methods for monitoring and minimizing the effects of aging. Phase 2 efforts are directed toward aging studies, documentation of the ideas put forward, and research to answer questions identified in phase 1. This paper describes progress made on two of these components, namely the motor-operated isolation valves and the in-containment I&C cables

  20. SM-1 negative ion source

    International Nuclear Information System (INIS)

    Huang Zhenjun; Wang Jianzhen

    1987-01-01

    The working principle and characteristics of the SM-1 negative ion source are introduced. The instrument contains a device to remove O3: it maintains a high density of negative ions, generated by corona discharge at negative high voltage, while removing the O3 component, which is harmful to the human body. At a distance of 50 cm from the panel of the instrument, the negative ion density is higher than 2.5 x 10^6 ions/cm^3 while the O3 concentration is less than 1 ppb. The instrument disperses negative ions without the help of an electric fan, so it works noiselessly. It is widely used in national defence, industry, agriculture, forestry, stock raising and sideline production, and in places with a low density of negative ions or a high concentration of O3. Besides, the instrument may also be used to treat diseases, prevent rot, arrest bacteria, purify air and so on

  1. A global catalogue of large SO2 sources and emissions derived from the Ozone Monitoring Instrument

    Directory of Open Access Journals (Sweden)

    V. E. Fioletov

    2016-09-01

    Full Text Available Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr^-1 to more than 4000 kt yr^-1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005–2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30 % of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80 % over the 2005–2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr^-1 and not detected by OMI.

  2. A Global Catalogue of Large SO2 Sources and Emissions Derived from the Ozone Monitoring Instrument

    Science.gov (United States)

    Fioletov, Vitali E.; McLinden, Chris A.; Krotkov, Nickolay; Li, Can; Joiner, Joanna; Theys, Nicolas; Carn, Simon; Moran, Mike D.

    2016-01-01

    Sulfur dioxide (SO2) measurements from the Ozone Monitoring Instrument (OMI) satellite sensor processed with the new principal component analysis (PCA) algorithm were used to detect large point emission sources or clusters of sources. A total of 491 continuously emitting point sources releasing from about 30 kt yr^-1 to more than 4000 kt yr^-1 of SO2 have been identified and grouped by country and by primary source origin: volcanoes (76 sources); power plants (297); smelters (53); and sources related to the oil and gas industry (65). The sources were identified using different methods, including through OMI measurements themselves applied to a new emission detection algorithm, and their evolution during the 2005–2014 period was traced by estimating annual emissions from each source. For volcanic sources, the study focused on continuous degassing, and emissions from explosive eruptions were excluded. Emissions from degassing volcanic sources were measured, many for the first time, and collectively they account for about 30% of total SO2 emissions estimated from OMI measurements, but that fraction has increased in recent years given that cumulative global emissions from power plants and smelters are declining while emissions from the oil and gas industry remained nearly constant. Anthropogenic emissions from the USA declined by 80% over the 2005–2014 period, as did emissions from western and central Europe, whereas emissions from India nearly doubled, and emissions from other large SO2-emitting regions (South Africa, Russia, Mexico, and the Middle East) remained fairly constant. In total, OMI-based estimates account for about a half of total reported anthropogenic SO2 emissions; the remaining half is likely related to sources emitting less than 30 kt yr^-1 and not detected by OMI.

  3. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. Seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the seal wire. The statistical treatment of coded data fed into the seal via the adapter box guarantees an extremely high degree of access reliability. The seal can store the data of two unauthorized openings together with the time and duration of each opening. 2. The adapter box can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme is located in the computing center of the supervisory authority and permits the date and time of an opening to be determined by decoding the seal memory data. (orig./WB) [de

  4. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations, whereas cross-validation is a suitable alternative. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis, where all sources are assumed non-Gaussian.

  5. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how the sources are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the well-known ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators, a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators on the corresponding sources. The c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by the c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  6. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff in some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities, which resulted in identification of a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  7. Application of thin-film breakdown counters for characterization of neutron field of the VESUVIO instrument at the ISIS spallation source

    Science.gov (United States)

    Smirnov, A. N.; Pietropaolo, A.; Prokofiev, A. V.; Rodionova, E. E.; Frost, C. D.; Ansell, S.; Schooneveld, E. M.; Gorini, G.

    2012-09-01

    The high-energy neutron field of the VESUVIO instrument at the ISIS facility has been characterized using the technique of thin-film breakdown counters (TFBC). The technique utilizes neutron-induced fission reactions of natU and 209Bi with detection of fission fragments by TFBCs. Experimentally determined count rates of the fragments are ≈50% higher than those calculated using spectral neutron flux simulated with the MCNPX code. This work is a part of the project to develop ChipIr, a new dedicated facility for the accelerated testing of electronic components and systems for neutron-induced single event effects in the new Target Station 2 at ISIS. The TFBC technique has shown to be applicable for on-line monitoring of the neutron flux in the neutron energy range 1-800 MeV at the position of the device under test (DUT).

  8. Application of thin-film breakdown counters for characterization of neutron field of the VESUVIO instrument at the ISIS spallation source

    International Nuclear Information System (INIS)

    Smirnov, A.N.; Pietropaolo, A.; Prokofiev, A.V.; Rodionova, E.E.; Frost, C.D.; Ansell, S.; Schooneveld, E.M.; Gorini, G.

    2012-01-01

    The high-energy neutron field of the VESUVIO instrument at the ISIS facility has been characterized using the technique of thin-film breakdown counters (TFBC). The technique utilizes neutron-induced fission reactions of natU and 209Bi with detection of fission fragments by TFBCs. Experimentally determined count rates of the fragments are ≈50% higher than those calculated using spectral neutron flux simulated with the MCNPX code. This work is a part of the project to develop ChipIr, a new dedicated facility for the accelerated testing of electronic components and systems for neutron-induced single event effects in the new Target Station 2 at ISIS. The TFBC technique has shown to be applicable for on-line monitoring of the neutron flux in the neutron energy range 1–800 MeV at the position of the device under test (DUT).

  9. Development and application of computer codes for multidimensional thermalhydraulic analyses of nuclear reactor components

    International Nuclear Information System (INIS)

    Carver, M.B.

    1983-01-01

    Components of reactor systems and related equipment are identified in which multidimensional computational thermal hydraulics can be used to advantage to assess and improve design. Models of single- and two-phase flow are reviewed, and the governing equations for multidimensional analysis are discussed. Suitable computational algorithms are introduced, and sample results from the application of particular multidimensional computer codes are given

  10. The qualification of electrical components and instrumentations relevant to safety; La qualificazione dei componenti elettrici e di strumentazione rilevanti per la sicurezza

    Energy Technology Data Exchange (ETDEWEB)

    Zambardi, F [ENEA - Direzione Sicurezza Nucleare e Protezione Sanitaria, Divisione Sistemi Elettrici e Strumentazione, Rome (Italy)

    1989-03-15

    Systems and components relevant to the safety of nuclear power plants must maintain their functional integrity in order to assure accident prevention and mitigation. Redundancy is utilized against random failures; nevertheless, care must be taken to avoid common failures in redundant components. The main sources of degradation and common cause failures are aging effects and the changes in environmental conditions which occur during the plant life and during postulated accidents. These causes of degradation are expected to be especially significant for instrumentation and electrical equipment, which can have a primary role in safety systems. Qualification is the methodology by which component safety requirements can be met against the above mentioned causes of degradation. In this report the connection between the possible plant conditions and the resulting degradation effects on components is preliminarily addressed. A general characterization of qualification is then presented. Basis, methods and peculiar aspects are discussed, and qualification by testing is given special attention. Technical and organizational aspects related to a plant qualification program are also addressed. The report ends with a look at the most significant research and development activities. (author)

  11. The OpenPMU Platform for Open Source Phasor Measurements

    OpenAIRE

    Laverty, David M.; Best, Robert J.; Brogan, Paul; Al-Khatib, Iyad; Vanfretti, Luigi; Morrow, D John

    2013-01-01

    OpenPMU is an open platform for the development of phasor measurement unit (PMU) technology. A need has been identified for an open-source alternative to commercial PMU devices tailored to the needs of the university researcher and for enabling the development of new synchrophasor instruments from this foundation. OpenPMU achieves this through open-source hardware design specifications and software source code, allowing duplicates of the OpenPMU to be fabricated under open-source licenses. Th...

  12. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples, the techniques used in stylometry can identify the author of a piece of code (Caliskan-Islam et al. 2015).
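A minimal sketch of the stylometry idea (the snippets and the cosine similarity over abstract-syntax-tree node counts are illustrative assumptions; real systems such as the one cited use much richer feature sets and trained classifiers):

```python
import ast
from collections import Counter

def ast_features(source: str) -> Counter:
    """Count AST node types as a crude stylometric fingerprint of the code."""
    return Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))

def similarity(a: Counter, b: Counter) -> float:
    # cosine similarity between two sparse count vectors
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b))

# hypothetical training samples: a comprehension-style author and a loop-style author
alice = "def f(xs):\n    return [x * x for x in xs]"
bob = "def h(xs):\n    out = []\n    for x in xs:\n        out.append(x * x)\n    return out"

# an unseen comprehension-style sample is attributed to the closer stylistic profile
unknown = "def k(zs):\n    return [z - 2 for z in zs]"
print(similarity(ast_features(unknown), ast_features(alice)))
print(similarity(ast_features(unknown), ast_features(bob)))
```

Even this crude node-count profile already ranks the comprehension-style author closer to the unseen comprehension-style sample.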

  13. Evaluation of the Inductive Coupling between Equivalent Emission Sources of Components

    Directory of Open Access Journals (Sweden)

    Moisés Ferber

    2012-01-01

    Full Text Available The electromagnetic interference between electronic systems or between their components influences the overall performance. It is thus important to model these interferences in order to optimize the position of the components of an electronic system. In this paper, a methodology to construct the equivalent model of magnetic field sources is proposed. It is based on the multipole expansion, and it represents the radiated emission of generic structures in a spherical reference frame. Experimental results for different kinds of sources are presented, illustrating our method.

  14. Decision n. 2010-DC-0175 made on the 4. of February 2010 by the Nuclear Safety Authority and defining technical modalities and periodicities of controls as prescribed in the R. 4452-12 and R. 4452-13 articles of the Labour Code as well as in the R. 1333-7 et R. 1333-95 articles of the Public Health Code

    International Nuclear Information System (INIS)

    2010-01-01

    This document defines the modalities of technical controls of sources and equipment emitting ionizing radiation, the modalities of technical controls of the surroundings, the modalities of the control of the efficiency of radioprotection organization and technical equipment (management of sealed and non-sealed radioactive sources, elimination of effluents and wastes associated with these sources), and the modalities of control of measurement instruments and of protection devices, all in application of the Labour Code and of the Public Health Code. Requirements defined by both of these codes are recalled in the appendix. They may depend on the radioactive source type or on the equipment (electric equipment generating X-rays, particle accelerators, sealed sources or equipment containing such sources, non-sealed sources)

  15. Development of an object oriented nodal code using the refined AFEN derived from the method of component decomposition

    International Nuclear Information System (INIS)

    Noh, J. M.; Yoo, J. W.; Joo, H. K.

    2004-01-01

    In this study, we invented a method of component decomposition to derive the systematic inter-nodal coupled equations of the refined AFEN method and developed an object oriented nodal code to solve the derived coupled equations. The method of component decomposition decomposes the intra-nodal flux expansion of a nodal method into even and odd components in three dimensions to reduce the large coupled linear system of equations into several small single equations. This method requires no additional technique to accelerate the iteration process for solving the inter-nodal coupled equations, since the derived equations can automatically act as the coarse-mesh rebalance equations. By utilizing object oriented programming concepts such as abstraction, encapsulation, inheritance and polymorphism, dynamic memory allocation, and operator overloading, we developed an object oriented nodal code that facilitates input/output and dynamic control of memory, and makes maintenance easy. (authors)
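The core idea of component decomposition — splitting an expansion into even and odd parts so that symmetric and antisymmetric couplings separate — can be sketched in one dimension (the refined AFEN specifics are beyond the abstract, so this shows only the generic even/odd split):

```python
import numpy as np

def even_part(f, x):
    # even component: f_e(x) = (f(x) + f(-x)) / 2
    return 0.5 * (f(x) + f(-x))

def odd_part(f, x):
    # odd component: f_o(x) = (f(x) - f(-x)) / 2
    return 0.5 * (f(x) - f(-x))

# stand-in for an intra-nodal flux expansion on [-1, 1]
f = lambda x: 1.0 + 2.0 * x + 3.0 * x ** 2 + 4.0 * x ** 3
x = np.linspace(-1.0, 1.0, 11)

# the two components reproduce the original and isolate the even/odd monomials
assert np.allclose(even_part(f, x) + odd_part(f, x), f(x))
assert np.allclose(even_part(f, x), 1.0 + 3.0 * x ** 2)
assert np.allclose(odd_part(f, x), 2.0 * x + 4.0 * x ** 3)
```

Applied in each of the three directions, such a split decouples terms of opposite parity, which is how one large coupled system reduces to several smaller single equations.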

  16. Neutronic evolution of SENA reactor during the first and second cycles. Comparison between the experimental power distributions obtained from the in-core instrumentation evaluation code CIRCE and the theoretical power values computed with the two-dimensional diffusion-evolution code EVOE

    International Nuclear Information System (INIS)

    Andrieux, Chantal

    1976-03-01

    The neutronic evolution of the SENA reactor during the first and second cycles is presented. The experimental power distributions, obtained from the in-core instrumentation evaluation code CIRCE, are compared with the theoretical powers calculated with the two-dimensional diffusion-evolution code EVOE. The CIRCE code allows the study of the evolution of the principal parameters of the core and the comparison of measured and theoretical results. This study is therefore of great interest for the knowledge of the neutronic evolution of the core, as well as for the validation and refinement of theoretical estimation methods. The core calculation methods and the data required for the evaluation of the measurements are presented after a brief description of the SENA core and its in-core instrumentation. The principle of the in-core instrumentation evaluation code CIRCE, and the calculation of the experimental power distributions and nuclear core parameters, are then described. The results of the evaluation are discussed, with a comparison of the theoretical and experimental results. Taking account of the approximations used, these results for the first and second cycles at SENA are satisfactory, the deviations between theoretical and experimental power distributions being lower than 3% at the middle of the core and 9% at the periphery [fr

  17. The source of the intermediate wavelength component of the Earth's magnetic field

    Science.gov (United States)

    Harrison, C. G. A.

    1985-01-01

    The intermediate wavelength component of the Earth's magnetic field has been well documented by observations made by MAGSAT. It has been shown that some significant fraction of this component is likely to be caused within the core of the Earth. Evidence for this comes from analysis of the intermediate wavelength component revealed by spherical harmonics between degrees 14 and 23, in which it is shown that it is unlikely that all of this signal is crustal. Firstly, there is no difference between average continental source strength and average oceanic source strength, which is unlikely to be the case if the anomalies reside within the crust, taking into account the very different nature and thickness of continental and oceanic crust. Secondly, there is almost no latitudinal variation in the source strength, which is puzzling if the sources are within the crust and have been formed by present or past magnetic fields with a factor of two difference in intensity between the equator and the poles. If however most of the sources for this field reside within the core, then these observations are not very surprising.

  18. New developments in the McStas neutron instrument simulation package

    DEFF Research Database (Denmark)

    Willendrup, Peter Kjær; Bergbäck Knudsen, Erik; Klinkby, Esben Bryndt

    2014-01-01

    McStas is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project at its current state and gives an overview of the main new developments in McStas 2.0 (December 2012) and McStas 2.1 (expected fall 2013), including many new components, component parameter uniformisation, partial loss of backward compatibility, updated source brilliance descriptions, developments toward new tools and user interfaces, web interfaces and a new method for estimating beam losses and background from neutron optics.

  19. Towards an International Code for administrative cooperation in tax matter and international tax governance

    Directory of Open Access Journals (Sweden)

    Eva Andrés Aucejo

    2017-12-01

    Full Text Available There is no "Global Code" encoding the duty of cooperation between tax authorities worldwide within the global tax system. This article addresses this issue by proposing a global Code of administrative cooperation in tax matters covering both types of tax relations: between States, and between States, taxpayers, and intermediary agents. It follows a wide concept of tax governance. The findings of this research highlight several practical applications for future practice. The article first analyses the state of the question, starting with the legal sources (international and European sources of hard law and soft law) and reviewing their differences from the Code proposed here. It also examines some important agents who issue relevant norms on international administrative tax cooperation and the role these agents play today (sometimes international organizations, but also States such as the United States, whose Congress enacted the Foreign Account Tax Compliance Act, FATCA). Overlaps and gaps between the different regulations are underlined. Finally, the consequences of the lack of such a "General Code" for the functioning of good international governance are described. Hence, the need to create an international cooperation Code on tax matters and international fiscal governance is concluded. Such a Code could be proposed by an international organization such as the World Bank or the International Monetary Fund, or by any other international or European organization. The instrument could be documented as a multilateral instrument (soft law), to be signed by the States so that it becomes an international legal source (hard law). Giving this Code the form of an articulated text could be very useful for the international community on the path towards international tax governance.

  20. Open-Source 3-D Platform for Low-Cost Scientific Instrument Ecosystem.

    Science.gov (United States)

    Zhang, C; Wijnen, B; Pearce, J M

    2016-08-01

    The combination of open-source software and hardware provides technically feasible methods to create low-cost, highly customized scientific research equipment. Open-source 3-D printers have proven useful for fabricating scientific tools. Here the capabilities of an open-source 3-D printer are expanded to become a highly flexible scientific platform. An automated low-cost 3-D motion control platform is presented that has the capacity to perform scientific applications, including (1) 3-D printing of scientific hardware; (2) laboratory auto-stirring, measuring, and probing; (3) automated fluid handling; and (4) shaking and mixing. The open-source 3-D platform not only facilitates routine research while radically reducing the cost, but also inspires the creation of a diverse array of custom instruments that can be shared and replicated digitally throughout the world to drive down the cost of research and education further. © 2016 Society for Laboratory Automation and Screening.

  1. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
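A minimal sketch of the double-difference idea for two correlated data sets (the patent's exact delta definitions and ordering may differ; simple first-order differences are assumed here):

```python
import numpy as np

def cross_delta(a, b):
    # element-wise difference between two correlated sets (e.g., adjacent spectral bands)
    return b - a

def adjacent_delta(x):
    # difference between neighbouring members; the first member is kept as a reference
    return np.concatenate(([x[0]], np.diff(x)))

def double_difference(a, b):
    # adjacent-delta calculation performed on the cross-delta data set
    return adjacent_delta(cross_delta(a, b))

def recover_second_set(a, dd):
    # inverse post-decoding: undo the adjacent-delta (cumulative sum), then the cross-delta
    return a + np.cumsum(dd)

rng = np.random.default_rng(0)
a = rng.integers(0, 256, 16)        # first original data set (M = 16 members)
b = a + rng.integers(-3, 4, 16)     # second set, strongly correlated with the first
dd = double_difference(a, b)        # small residuals, cheap to entropy-code

assert np.array_equal(recover_second_set(a, dd), b)   # lossless round trip
```

Because the double-difference residuals concentrate near zero, a subsequent distortionless entropy coder spends far fewer bits on them than on either original set.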

  2. Investigation of genes coding for inflammatory components in Parkinson's disease.

    Science.gov (United States)

    Håkansson, Anna; Westberg, Lars; Nilsson, Staffan; Buervenich, Silvia; Carmine, Andrea; Holmberg, Björn; Sydow, Olof; Olson, Lars; Johnels, Bo; Eriksson, Elias; Nissbrandt, Hans

    2005-05-01

    Several findings obtained recently indicate that inflammation may contribute to the pathogenesis in Parkinson's disease (PD). Genetic variants of genes coding for components involved in immune reactions in the brain might therefore influence the risk of developing PD or the age of disease onset. Five single nucleotide polymorphisms (SNPs) in the genes coding for interferon-gamma (IFN-gamma; T874A in intron 1), interferon-gamma receptor 2 (IFN-gamma R2; Gln64Arg), interleukin-10 (IL-10; G1082A in the promoter region), platelet-activating factor acetylhydrolase (PAF-AH; Val379Ala), and intercellular adhesion molecule 1 (ICAM-1; Lys469Glu) were genotyped, using pyrosequencing, in 265 patients with PD and 308 controls. None of the investigated SNPs was found to be associated with PD; however, the G1082A polymorphism in the IL-10 gene promoter was found to be related to the age of disease onset. Linear regression showed a significantly earlier onset with more A-alleles (P = 0.0095; after Bonferroni correction, P = 0.048), resulting in a 5-year delayed age of onset of the disease for individuals having two G-alleles compared with individuals having two A-alleles. The results indicate that the IL-10 G1082A SNP could possibly be related to the age of onset of PD. Copyright 2005 Movement Disorder Society.

  3. Detection of instrument or component failures in a nuclear plant by Luenberger observers

    International Nuclear Information System (INIS)

    Wilburn, N.P.; Colley, R.W.; Alexandro, F.J.; Clark, R.N.

    1985-01-01

    A diagnostic system, which will distinguish between instrument failures (flowmeters, etc.) and component failures (valves, filters, etc.) that show the same symptoms, has been developed for nuclear plants using Luenberger observers. Luenberger observers are online, computer-based modules constructed following the technology of Clark [3]. A seventh-order model of an FFTF subsystem was constructed using the Advanced Continuous Simulation Language (ACSL) and was used to show through simulation that Luenberger observers can be applied to nuclear systems.
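    The detection principle behind a Luenberger observer is to run a model of the plant in parallel with the real system and monitor the innovation (measurement minus estimate) as a failure residual. A minimal first-order sketch, not the seventh-order FFTF model of the record; the plant, gain, and fault values are illustrative:

```python
def simulate_observer(a, b, L, u_seq, sensor_bias_at=None):
    # Discrete-time Luenberger observer for a first-order plant
    #   x[k+1] = a*x[k] + b*u[k],  y[k] = x[k] (+ sensor fault bias).
    # The innovation r = y - x_hat serves as the failure-detection residual:
    # it stays at zero for a healthy sensor and jumps when the fault appears.
    x, x_hat = 0.0, 0.0
    residuals = []
    for k, u in enumerate(u_seq):
        bias = 1.0 if sensor_bias_at is not None and k >= sensor_bias_at else 0.0
        y = x + bias                       # measurement (possibly faulted)
        r = y - x_hat                      # residual
        residuals.append(r)
        x_hat = a * x_hat + b * u + L * r  # observer update with gain L
        x = a * x + b * u                  # true plant update
    return residuals
```

    A threshold test on the residual sequence then flags the instrument failure while a parallel observer bank, driven by different sensor subsets, can separate instrument from component faults.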

  4. Proceedings of the workshop on neutron instrumentation for a long-pulse spallation source

    International Nuclear Information System (INIS)

    Alonso, J.; Schroeder, L.; Pynn, R.

    1995-01-01

    This workshop was carried out under the auspices of the Lawrence Berkeley National Laboratory Pulsed Spallation Source activity and its Pulsed Spallation Source Committee (PSSC). One of our activities has been the sponsorship of workshops related to neutron production by pulsed sources. At the Crystal City PSSC meeting a decision was made to hold a workshop on the instrumentation opportunities at a long-pulse spallation source (LPSS). The enclosed material represents the results of deliberations of the three working groups into which the participants were divided, covering elastic scattering, inelastic scattering and fundamental physics, as well as contributions from individual participants. We hope that the material in this report will be useful to the neutron scattering community as it develops a road-map for future neutron sources. The workshop was held at LBNL in mid-April with about sixty very dedicated participants from the US and abroad. This report presents the charge for the workshop: Based on the benchmark source parameters provided by Gary Russell, determine how a suite of spectrometers in each of the three working groups' areas of expertise would perform at an LPSS, and compare this performance with that of similar spectrometers at a continuous source or a short-pulse source. Identify and discuss modifications to these spectrometers that would enhance their performance at an LPSS. Identify any uncertainties in the analysis of spectrometer performance that require further research. Describe what R&D is needed to resolve these issues. Discuss how the performance of instruments would be affected by changes in source parameters such as repetition rate, proton pulse length, and the characteristic time of pulse tails. Identify beneficial changes that could become goals for target/moderator designers. Identify novel methods that might be applied at an LPSS. Selected papers are indexed separately for inclusion in the Energy Science and Technology

  5. Exploration of the Challenges of Neutron Optics and Instrumentation at Long Pulsed Spallation Sources

    DEFF Research Database (Denmark)

    Klenø, Kaspar Hewitt

    In this thesis I have explored the challenges of long guides and instrumentation for the long pulsed European Spallation Source. I have derived the theory needed for quantifying the performance of a guide using brilliance transfer. With this tool it is easier to objectively compare how well diffe...... the simulations and optimisations of one particular instrument, the Compact SANS, on which I have worked on the design of the guide, collimation, and chopper systems....

  6. Advanced sources and optical components for the McStas neutron scattering instrument simulation package

    DEFF Research Database (Denmark)

    Farhi, E.; Monzat, C.; Arnerin, R.

    2014-01-01

    -up, including lenses and prisms. A new library for McStas adds the ability to describe any geometrical arrangement as a set of polygons. This feature has been implemented in most sample scattering components such as Single_crystal, Incoherent, Isotropic_Sqw (liquids/amorphous/powder), PowderN as well...

  7. MDEP Technical Report TR-CSWG-02. Technical Report on Lessons Learnt on Achieving Harmonisation of Codes and Standards for Pressure Boundary Components in Nuclear Power Plants

    International Nuclear Information System (INIS)

    2013-01-01

    This report was prepared by the Multinational Design Evaluation Program's (MDEP's) Codes and Standards Working Group (CSWG). The primary, long-term goal of MDEP's CSWG is to achieve international harmonisation of codes and standards for pressure-boundary components in nuclear power plants. The CSWG recognised early on that the first step to achieving harmonisation is to understand the extent of similarities and differences amongst the pressure-boundary codes and standards used in various countries. To assist the CSWG in its long-term goals, several standards developing organisations (SDOs) from various countries performed a comparison of their pressure-boundary codes and standards to identify the extent of similarities and differences in code requirements and the reasons for their differences. The results of the code-comparison project provided the CSWG with valuable insights in developing the subsequent actions to take with SDOs and the nuclear industry to pursue harmonisation of codes and standards. The results enabled the CSWG to understand from a global perspective how each country's pressure-boundary code or standard evolved into its current form and content. The CSWG recognised the important fact that each country's pressure-boundary code or standard is a comprehensive, living document that is continually being updated and improved to reflect changing technology and common industry practices unique to each country. The rules in the pressure-boundary codes and standards include comprehensive requirements for the design and construction of nuclear power plant components including design, materials selection, fabrication, examination, testing and overpressure protection. The rules also contain programmatic and administrative requirements such as quality assurance; conformity assessment (e.g., third-party inspection); qualification of welders, welding equipment and welding procedures; non-destructive examination (NDE) practices and

  8. Application of thin-film breakdown counters for characterization of neutron field of the VESUVIO instrument at the ISIS spallation source

    Energy Technology Data Exchange (ETDEWEB)

    Smirnov, A.N. [V.G. Khlopin Radium Institute, St. Petersburg (Russian Federation); Pietropaolo, A., E-mail: antonino.pietropaolo@roma2.infn.it [CNISM UdR Tor Vergata, and Centro NAST Roma, Italy Scientifica 1 I-00133 Roma Italy (Italy); Prokofiev, A.V. [The Svedberg Laboratory, Uppsala University, Uppsala (Sweden); Rodionova, E.E. [V.G. Khlopin Radium Institute, St. Petersburg (Russian Federation); Frost, C.D.; Ansell, S.; Schooneveld, E.M. [ISIS Facility, Rutherford Appleton Laboratory, Chilton (United Kingdom); Gorini, G. [Dipartimento di Fisica ' G. Occhialini,' Universita degli Studi di Milano-Bicocca, Milano (Italy)

    2012-09-21

    The high-energy neutron field of the VESUVIO instrument at the ISIS facility has been characterized using the technique of thin-film breakdown counters (TFBC). The technique utilizes neutron-induced fission reactions of (nat)U and (209)Bi with detection of fission fragments by TFBCs. Experimentally determined count rates of the fragments are approximately 50% higher than those calculated using the spectral neutron flux simulated with the MCNPX code. This work is part of the project to develop ChipIr, a new dedicated facility for the accelerated testing of electronic components and systems for neutron-induced single event effects in the new Target Station 2 at ISIS. The TFBC technique has been shown to be applicable for on-line monitoring of the neutron flux in the neutron energy range 1-800 MeV at the position of the device under test (DUT).

  9. Radon in buildings: instrumentation of an experimental house

    International Nuclear Information System (INIS)

    Ameon, R.; Diez, O.; Dupuis, M.; Merle-Szeremeta, A.

    2004-01-01

    IRSN decided to develop a code called RADON 2 for conducting simple and methodical studies of indoor radon concentrations. Since a validity check must be performed of the phenomenological model on which the code is based, an experimental program was initiated in 2002, within which a house in Brittany, located on a well-characterized uranium-bearing geological formation, was fitted with special instruments. After characterizing the soil underlying the house, the instrumentation implemented on site continuously monitors a number of parameters to characterize: the radon source term in the building (exhalation rate of (222)Rn at the ground/building interface and at the soil surface, radon concentration in the soil and in outdoor air); radon penetration by advection (differential pressure in the house basement); the driving mechanisms for natural ventilation in the house (weather conditions, indoor temperature and relative humidity); and radon distribution throughout the house by air flow and radon diffusion (indoor radon concentration at each floor of the house). Using the experimental data acquired over the past two years, the phenomena governing radon penetration into the house (wind and stack effect) and radon extraction (fresh air supply rate) have been characterized to lay the groundwork for validating the newly developed code

  10. The Coherent X-ray Imaging (CXI) Instrument at the Linac Coherent Light Source (LCLS)

    International Nuclear Information System (INIS)

    Boutet, Sebastien

    2011-01-01

    The Linac Coherent Light Source (LCLS) became the first operational hard X-ray free-electron laser in 2009. It will operate as a user facility capable of delivering unique research opportunities in multiple fields of science. The LCLS and the LCLS Ultrafast Science Instruments (LUSI) construction projects are developing instruments designed to make full use of the capabilities afforded by the LCLS beam. One such instrument is being designed to utilize the LCLS coherent beam to image any sub-micron object at high resolution. This instrument is called the Coherent X-ray Imaging (CXI) instrument. It will provide a flexible optical system capable of tailoring key beam parameters for the users. A suite of shot-to-shot diagnostics will also be provided to characterize the beam on every pulse. The provided instrumentation will include multi-purpose sample environments, sample delivery and a custom detector capable of collecting 2D data at 120 Hz. In this article, the LCLS is briefly introduced along with the technique of Coherent X-ray Diffractive Imaging (CXDI). A few examples of scientific opportunities using the CXI instrument are described. Finally, the conceptual layout of the instrument is presented along with a description of the key requirements for the overall system and specific devices.

  11. TOF powder diffractometer on a reactor source

    International Nuclear Information System (INIS)

    Bleif, H.J.; Wechsler, D.; Mezei, F.

    1999-01-01

    Complete text of publication follows. The performance of time-of-flight (TOF) methods on Long Pulse Spallation Sources can be studied at a reactor source. For this purpose a prototype TOF monochromator instrument will be installed at the KFKI reactor in Budapest. The initial setup will be a powder diffractometer with a resolution of δd/d down to 2 x 10^-3 at a wavelength of 1 Å. The instrument uses choppers to produce neutron pulses down to 10 μs FWHM. The optimal neutron source for a chopper instrument is a Long Pulse Spallation Source, but simulations have shown that even on a continuous source this instrument outperforms a conventional crystal monochromator powder diffractometer at high resolution. The main components of the TOF instrument are one double chopper defining the time resolution and two single choppers to select the wavelength range and to prevent frame overlap. For inelastic experiments a further chopper can be added in front of the sample. The neutron guide has a super-mirror coating and a curvature of 3500 m. The total flight path is 20 m and there are 24 single detectors in backscattering geometry. (author)
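    The quoted numbers are mutually consistent: for a TOF diffractometer δd/d ≈ δt/t, and a 10 μs pulse over the 20 m flight path at 1 Å indeed gives about 2 x 10^-3. A quick check using only the de Broglie relation (neutron speed v = h/(m·λ)):

```python
H = 6.62607015e-34        # Planck constant, J*s
M_N = 1.67492749804e-27   # neutron mass, kg

def tof_resolution(pulse_fwhm_s, flight_path_m, wavelength_angstrom):
    # Neutron speed from the de Broglie relation: v = h / (m * lambda).
    v = H / (M_N * wavelength_angstrom * 1e-10)   # ~3956 m/s at 1 angstrom
    t = flight_path_m / v                         # time of flight
    return pulse_fwhm_s / t                       # delta-t/t ~ delta-d/d
```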

  12. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena, and their uncertainty, which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms
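    The ingredients of a source term listed above map naturally onto a small record type. A hypothetical sketch only; the field names are illustrative and are not XSOR's actual parameters:

```python
from dataclasses import dataclass

@dataclass
class SourceTerm:
    # One atmospheric release event, as characterized by a parametric
    # source-term code: what is released, when, for how long, and where.
    release_fractions: dict       # radionuclide group -> fraction released
    start_time_h: float           # release start after accident initiation
    duration_h: float             # release duration
    energy_release_rate_w: float  # thermal energy release rate
    release_elevation_m: float    # release height above grade
```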

  13. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data
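    The bounded-parameter transformation described above can be realised with a pair of smooth clamps. This is a sketch of one standard construction with the stated property (MARE2DEM's exact transform may differ): away from the bounds the transformed parameter tracks the unconstrained one almost exactly, which limits non-linear smoothing effects:

```python
import math

def softplus(x):
    # Numerically stable log(1 + e^x).
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def bounded(u, a, b, c=50.0):
    # Map an unconstrained parameter u smoothly into (a, b).
    # For a << u << b the output is nearly equal to u itself; outside,
    # it saturates smoothly at the bounds. Larger c sharpens the clamps.
    return a + (softplus(c * (u - a)) - softplus(c * (u - b))) / c
```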

  14. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a

  15. Flux Gain for Next-Generation Neutron-Scattering Instruments Resulting From Improved Supermirror Performance

    International Nuclear Information System (INIS)

    Rehm, C.

    2001-01-01

    Next-generation spallation neutron source facilities will offer instruments with unprecedented capabilities through simultaneous enhancement of source power and usage of advanced optical components. The Spallation Neutron Source (SNS), already under construction at Oak Ridge National Laboratory and scheduled to be completed by 2006, will provide more than an order of magnitude greater effective source flux than current state-of-the-art facilities, including the most advanced research reactors. An additional order of magnitude gain is expected through the use of new optical devices and instrumentation concepts. Many instrument designs require supermirror (SM) neutron guides with very high critical angles for total reflection. In this contribution, we discuss how the performance of modern neutron scattering instruments depends on the efficiency of these supermirrors. We outline ideas for enhancing the performance of the SM coatings, particularly for improving the reflectivity at the position of the critical wave vector transfer. A simulation program has been developed which allows different approaches for SM designs to be studied. Possible instrument performance gains are calculated for the example of the SNS reflectometer
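    The payoff of a higher-m supermirror can be estimated with the usual rule of thumb: natural Ni totally reflects up to about 0.099° per Å of wavelength, an m-coating multiplies that critical angle by m, and the divergence a guide transports grows roughly linearly with m in each transverse direction, so the phase-space (flux) gain scales roughly as m². A back-of-envelope sketch, not an SNS design calculation:

```python
THETA_C_NI_DEG_PER_A = 0.099  # natural-Ni critical angle per angstrom (rule of thumb)

def critical_angle_deg(m, wavelength_angstrom):
    # An m-supermirror reflects up to m times the Ni critical angle.
    return m * THETA_C_NI_DEG_PER_A * wavelength_angstrom

def divergence_gain(m_new, m_old):
    # Accepted divergence ~ m per transverse direction => flux gain ~ m^2.
    return (m_new / m_old) ** 2
```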

  16. Code cases for implementing risk-based inservice testing in the ASME OM code

    International Nuclear Information System (INIS)

    Rowley, C.W.

    1996-01-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices

  17. Code cases for implementing risk-based inservice testing in the ASME OM code

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.

  18. Monte Carlo simulation of the spear reflectometer at LANSCE

    International Nuclear Information System (INIS)

    Smith, G.S.

    1995-01-01

    The Monte Carlo instrument simulation code, MCLIB, contains elements to represent several components found in neutron spectrometers, including slits, choppers, detectors, sources and various samples. Using these elements to represent the components of a neutron scattering instrument, one can simulate, for example, an inelastic spectrometer, a small-angle scattering machine, or a reflectometer. In order to benchmark the code, we chose to compare simulated data from the MCLIB code with an actual experiment performed on the SPEAR reflectometer at LANSCE. This was done by first fitting an actual SPEAR data set to obtain the model scattering-length-density profile, β(z), for the sample and the substrate. These parameters were then used as input values for the sample scattering function. A simplified model of SPEAR was chosen which contained all of the essential components of the instrument. A code containing the MCLIB subroutines was then written to simulate this simplified instrument. The resulting data were then fitted and compared to the actual data set in terms of statistics, resolution and accuracy
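    The element-chain idea behind such codes (source, slits, sample, detector as composable components that a neutron passes through in turn) can be illustrated with a toy two-slit trace. A minimal sketch, not MCLIB's actual interface:

```python
import random

def slit_transmission(w1, w2, gap, theta_max, n=100_000, seed=1):
    # Trace neutrons from a uniformly filled first slit (width w1) through a
    # second slit (width w2) a distance `gap` downstream, using the
    # small-angle approximation x2 = x1 + gap * theta. Returns the
    # Monte Carlo estimate of the transmitted fraction.
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        x = rng.uniform(-w1 / 2, w1 / 2)          # position at slit 1
        th = rng.uniform(-theta_max, theta_max)   # divergence, radians
        if abs(x + gap * th) <= w2 / 2:
            passed += 1
    return passed / n
```

    Chaining more such elements (choppers gating on time of flight, a sample scattering kernel, a detector binning arrivals) reproduces the structure of a full instrument simulation.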

  19. Challenges of the Open Source Component Marketplace in the Industry

    Science.gov (United States)

    Ayala, Claudia; Hauge, Øyvind; Conradi, Reidar; Franch, Xavier; Li, Jingyue; Velle, Ketil Sandanger

    The reuse of Open Source Software components available on the Internet is playing a major role in the development of Component Based Software Systems. Nevertheless, the special nature of the OSS marketplace has taken the “classical” concept of software reuse based on centralized repositories to a completely different arena based on massive reuse over the Internet. In this paper we provide an overview of the actual state of the OSS marketplace, and report preliminary findings about how companies interact with this marketplace to reuse OSS components. Such data was gathered from interviews in software companies in Spain and Norway. Based on these results we identify some challenges that must be addressed to improve the industrial reuse of OSS components.

  20. Absolute measurement of LDR brachytherapy source emitted power: Instrument design and initial measurements.

    Science.gov (United States)

    Malin, Martha J; Palmer, Benjamin R; DeWerd, Larry A

    2016-02-01

    Energy-based source strength metrics may find use with model-based dose calculation algorithms, but no instruments exist that can measure the energy emitted from low-dose rate (LDR) sources. This work developed a calorimetric technique for measuring the power emitted from encapsulated low-dose rate, photon-emitting brachytherapy sources. This quantity is called emitted power (EP). The measurement methodology, instrument design and performance, and EP measurements made with the calorimeter are presented in this work. A calorimeter operating with a liquid helium thermal sink was developed to measure EP from LDR brachytherapy sources. The calorimeter employed an electrical substitution technique to determine the power emitted from the source. The calorimeter's performance and thermal system were characterized. EP measurements were made using four (125)I sources with air-kerma strengths ranging from 2.3 to 5.6 U and corresponding EPs of 0.39-0.79 μW, respectively. Three Best Medical 2301 sources and one Oncura 6711 source were measured. EP was also computed by converting measured air-kerma strengths to EPs through Monte Carlo-derived conversion factors. The measured EP and derived EPs were compared to determine the accuracy of the calorimeter measurement technique. The calorimeter had a noise floor of 1-3 nW and a repeatability of 30-60 nW. The calorimeter was stable to within 5 nW over a 12 h measurement window. All measured values agreed with derived EPs to within 10%, with three of the four sources agreeing to within 4%. Calorimeter measurements had uncertainties ranging from 2.6% to 4.5% at the k = 1 level. The values of the derived EPs had uncertainties ranging from 2.9% to 3.6% at the k = 1 level. A calorimeter capable of measuring the EP from LDR sources has been developed and validated for (125)I sources with EPs between 0.43 and 0.79 μW.

  1. Beam Instrumentation for the Spallation Neutron Source Ring

    International Nuclear Information System (INIS)

    Witkover, R. L.; Cameron, P. R.; Shea, T. J.; Connolly, R. C.; Kesselman, M.

    1999-01-01

    The Spallation Neutron Source (SNS) will be constructed by a multi-laboratory collaboration with BNL responsible for the transfer lines and ring. The 1 MW beam power necessitates careful monitoring to minimize uncontrolled loss. This high beam power will influence the design of the monitors in the high energy beam transport line (HEBT) from linac to ring, in the ring, and in the ring-to-target transfer line (RTBT). The ring instrumentation must cover a 3-decade range of beam intensity during accumulation. Beam loss monitoring will be especially critical since uncontrolled beam loss must be kept below 10^-4. A Beam-In-Gap (BIG) monitor is being designed to assure out-of-bucket beam will not be lost in the ring

  2. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code COMPASS based on compartment model approach is developed to calculate the near-field source term of the High-Level-Waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. Comparing the release rates of four typical nuclides with and without the Richard's barrier, it is shown that the Richard's barrier significantly decreases the peak release rates from the Engineered-Barrier-System (EBS) into the host rock
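    The compartment-model approach can be illustrated with two first-order compartments in series (waste form → EBS → host rock), where each transfer rate is proportional to the upstream inventory. A deliberately simplified sketch, not COMPASS's actual equations:

```python
def compartment_release(inventory, k_waste_to_ebs, k_ebs_to_rock, dt, t_end):
    # Two first-order compartments in series, integrated with explicit Euler.
    # Returns the release rate into the host rock at each time step; the
    # peak of this series is the quantity a diversion barrier should reduce.
    waste, ebs = inventory, 0.0
    rates = []
    t = 0.0
    while t < t_end:
        to_ebs = k_waste_to_ebs * waste * dt    # waste form -> EBS
        to_rock = k_ebs_to_rock * ebs * dt      # EBS -> host rock
        waste -= to_ebs
        ebs += to_ebs - to_rock
        rates.append(to_rock / dt)
        t += dt
    return rates
```

    Lowering the first transfer coefficient, as a capillary barrier effectively does by diverting infiltrating water, lowers and delays the peak release rate into the host rock.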

  3. Application of the ASME-code-case N 47 to a typical thickwalled HTR-component made of Incoloy 800

    International Nuclear Information System (INIS)

    Kemter, F.; Schmidt, A.

    Several components of the HTR plant are exposed to temperatures beyond 500 °C, i.e. within the high-temperature range. The service life of those components is limited not only by fatigue damage but mainly by creep damage and accumulated inelastic strain. These can be conservatively estimated according to the ASME Code (high-temperature part, Code Case N47) from the results of elastic calculations, yet this simplified method often yields calculated overloads, as in the present case of the live steam collector of the steam generator of an HTR. To demonstrate that the actual loads on the component remain within permissible limits, comprehensive inelastic analyses must then be performed. The two-dimensional inelastic analysis reported here in detail shows that the creep and fatigue damage as well as the inelastic strains accumulated in the live steam collector during its service life are below the permissible limits stated in the ASME Code, so that failure of this component during reactor operation can be excluded. (orig.)

  4. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)
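    In one angular dimension the spherical-harmonics representation of the source reduces to a Legendre series, S(μ) ≈ Σ_l (2l+1)/2 · s_l · P_l(μ), with s_l the angular moments. A minimal sketch of reconstructing the angular shape from its moments (illustrative only, not TORT-TD's implementation):

```python
def legendre(l, mu):
    # Legendre polynomial P_l(mu) via the Bonnet recurrence.
    p0, p1 = 1.0, mu
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * mu * p1 - n * p0) / (n + 1)
    return p1

def expand_source(moments, mu):
    # Reconstruct S(mu) from its Legendre moments s_l, the same series
    # form used for the angular fluxes in a discrete-ordinates code.
    return sum((2 * l + 1) / 2.0 * s * legendre(l, mu)
               for l, s in enumerate(moments))
```

    For example, an isotropic source has only the s_0 moment, while a linearly anisotropic source S(μ) = 1 + μ has moments s_0 = 2 and s_1 = 2/3.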

  5. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. 
It has been demonstrated that

  6. Patient-specific targeting guides compared with traditional instrumentation for glenoid component placement in shoulder arthroplasty: a multi-surgeon study in 70 arthritic cadaver specimens.

    Science.gov (United States)

    Throckmorton, Thomas W; Gulotta, Lawrence V; Bonnarens, Frank O; Wright, Stephen A; Hartzell, Jeffrey L; Rozzi, William B; Hurst, Jason M; Frostick, Simon P; Sperling, John W

    2015-06-01

    The purpose of this study was to compare the accuracy of patient-specific guides for total shoulder arthroplasty (TSA) with traditional instrumentation in arthritic cadaver shoulders. We hypothesized that the patient-specific guides would place components more accurately than standard instrumentation. Seventy cadaver shoulders with radiographically confirmed arthritis were randomized in equal groups to 5 surgeons of varying experience levels who were not involved in development of the patient-specific guidance system. Specimens were then randomized to patient-specific guides based on computed tomography scanning, standard instrumentation, and anatomic TSA or reverse TSA. Variances in version or inclination of more than 10° and more than 4 mm in starting point were considered indications of significant component malposition. TSA glenoid components placed with patient-specific guides averaged 5° of deviation from the intended position in version and 3° in inclination; those with standard instrumentation averaged 8° of deviation in version and 7° in inclination. These differences were significant for version (P = .04) and inclination (P = .01). Multivariate analysis of variance to compare the overall accuracy for the entire cohort (TSA and reverse TSA) revealed patient-specific guides to be significantly more accurate (P = .01) for the combined vectors of version and inclination. Patient-specific guides also had fewer instances of significant component malposition than standard instrumentation did. Patient-specific targeting guides were more accurate than traditional instrumentation and had fewer instances of component malposition for glenoid component placement in this multi-surgeon cadaver study of arthritic shoulders. Long-term clinical studies are needed to determine whether these improvements produce improved functional outcomes. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.

  7. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs
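ORIGEN2 integrates coupled production-and-decay equations over its nuclide library. As a hedged illustration of the underlying mathematics only (the decay constants below are made up, not actual library data), the analytic Bateman solution for a two-member decay chain can be sketched in a few lines:

```python
import math

def decay_chain(n1_0, lam1, lam2, t):
    """Analytic Bateman solution for a two-member chain A -> B -> (stable).

    n1_0: initial atoms of parent A; lam1, lam2: decay constants (1/s).
    Returns (N_A(t), N_B(t)), assuming no daughter atoms at t = 0.
    """
    n1 = n1_0 * math.exp(-lam1 * t)
    n2 = n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    return n1, n2

# Illustrative (hypothetical) decay constants, not evaluated nuclear data
lam_a, lam_b = 1e-3, 5e-4
n_a, n_b = decay_chain(1e6, lam_a, lam_b, 1000.0)
```

A code like ORIGEN2 solves the same kind of system for over 1,600 nuclides simultaneously, with irradiation source terms added; the uncertainty sources listed in the abstract enter through the constants and cross sections feeding these equations.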

  8. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  9. Cross platform SCA component using C++ builder and KYLIX

    International Nuclear Information System (INIS)

    Nishimura, Hiroshi; Timossi, Chiris; McDonald, James L.

    2003-01-01

    A cross-platform component for EPICS Simple Channel Access (SCA) has been developed. EPICS client programs with GUI become portable at their C++ source-code level both on Windows and Linux by using Borland C++ Builder 6 and Kylix 3 on these platforms respectively

  10. Use of Annotations for Component and Framework Interoperability

    Science.gov (United States)

    David, O.; Lloyd, W.; Carlson, J.; Leavesley, G. H.; Geter, F.

    2009-12-01

    The popular programming languages Java and C# provide annotations, a form of meta-data construct. Software frameworks for web integration, web services, database access, and unit testing now take advantage of annotations to reduce the complexity of APIs and the quantity of integration code between the application and framework infrastructure. Adopting annotation features in frameworks has been observed to lead to cleaner and leaner application code. The USDA Object Modeling System (OMS) version 3.0 fully embraces the annotation approach and additionally defines a meta-data standard for components and models. In version 3.0, framework/model integration previously accomplished using API calls is now achieved using descriptive annotations. This enables the framework to provide additional functionality non-invasively, such as implicit multithreading and auto-documenting capabilities, while achieving a significant reduction in the size of the model source code. Using a non-invasive methodology leads to models and modeling components with only minimal dependencies on the modeling framework. Since models and modeling components are not directly bound to the framework by the use of specific APIs and/or data types, they can more easily be reused both within the framework and outside of it. To study the effectiveness of an annotation-based framework approach compared with other modeling frameworks, a framework-invasiveness study was conducted to evaluate the effects of framework design on model code quality. A monthly water balance model was implemented across several modeling frameworks and several software metrics were collected. The metrics selected were measures of non-invasive design methods for modeling frameworks from a software engineering perspective. It appears that the use of annotations positively impacts several software quality measures. In a next step, the PRMS model was implemented in OMS 3.0 and is currently being implemented for water supply forecasting in the
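The annotation idea can be illustrated outside Java or C#. The sketch below uses Python decorators as the analogous meta-data construct; the `role` marker and `find_entry_point` helper are hypothetical stand-ins for framework-side discovery, not the OMS 3.0 API:

```python
def role(name):
    """Attach declarative metadata to a component method, in the spirit of
    framework annotations (this marker is hypothetical, not the OMS API)."""
    def wrap(fn):
        fn._role = name
        return fn
    return wrap

class WaterBalance:
    # The model declares its entry point via metadata, not via a framework API.
    @role("execute")
    def run_step(self, precip, evap):
        return precip - evap

def find_entry_point(component):
    """Framework side: discover the annotated method by its metadata."""
    for attr in dir(component):
        fn = getattr(component, attr)
        if callable(fn) and getattr(fn, "_role", None) == "execute":
            return fn
    return None

step = find_entry_point(WaterBalance())
result = step(10.0, 4.0)  # the framework invokes the discovered method
```

The model class carries no framework import or base class, which is the non-invasiveness property the invasiveness study measured.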

  11. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
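The core technique described above — tokenising submissions and then searching a pair for long common substrings — can be sketched as follows (a simplified illustration, not the MOSS or JPlag algorithm):

```python
import re

def tokenize(src):
    """Crude tokeniser: identifiers collapse to 'ID' and numbers to 'NUM',
    so renamed variables still match (a common normalisation step)."""
    tokens = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", src):
        if re.fullmatch(r"[A-Za-z_]\w*", tok):
            tokens.append("ID")
        elif tok.isdigit():
            tokens.append("NUM")
        else:
            tokens.append(tok)
    return tokens

def longest_common_run(a, b):
    """Length of the longest common substring of two token sequences,
    via the standard dynamic-programming recurrence."""
    best = 0
    prev = [0] * (len(b) + 1)
    for x in a:
        cur = [0] * (len(b) + 1)
        for j, y in enumerate(b, 1):
            if x == y:
                cur[j] = prev[j - 1] + 1
                best = max(best, cur[j])
        prev = cur
    return best

# Two snippets that differ only in identifier names match completely.
s1 = "total = price + tax"
s2 = "sum = cost + vat"
match = longest_common_run(tokenize(s1), tokenize(s2))
```

A paired structural metric would then normalise `match` by the submission lengths to rank pairs for manual inspection.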

  12. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy - Part 1: Model components for sources parameterization

    Science.gov (United States)

    Azzaro, Raffaele; Barberi, Graziella; D'Amico, Salvatore; Pace, Bruno; Peruzza, Laura; Tuvè, Tiziana

    2017-11-01

    The volcanic region of Mt. Etna (Sicily, Italy) represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring and, particularly, the rapid geodynamics that clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA), the first results and maps of which are presented in a companion paper, Peruzza et al. (2017). The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades). The analysis of the frequency-magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using a historical approach and a purely geologic method. We derive a magnitude-size scaling relationship specifically for this volcanic area, which has been implemented in a recently developed software tool - FiSH (Pace et al., 2016) - that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be implemented in PSHA maps
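To illustrate the kind of recurrence-time estimate such a frequency-magnitude analysis yields, the sketch below derives the mean recurrence time of events above a threshold magnitude from a Gutenberg-Richter relation; the a and b values are illustrative placeholders, not the values derived for Mt. Etna:

```python
def mean_recurrence_time(a, b, m):
    """Mean recurrence time (years) of events with magnitude >= m, from a
    Gutenberg-Richter relation log10 N(>=m) = a - b*m, with N in events/year.
    The a and b values passed in are illustrative, not the paper's results."""
    n_per_year = 10 ** (a - b * m)
    return 1.0 / n_per_year

# Hypothetical productivity (a) and slope (b) for a single fault system
t_m4 = mean_recurrence_time(a=2.0, b=1.1, m=4.0)
t_m5 = mean_recurrence_time(a=2.0, b=1.1, m=5.0)
```

Under a Poisson assumption this is the full story; the time-dependent modeling discussed above replaces the constant rate 1/T with one conditioned on the elapsed time since the last characteristic event.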

  13. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have also been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors concluded that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are observed to be in general agreement with the experimental data, within the acceptable limits of uncertainty

  14. Instrument calls and real-time code for laboratory automation

    International Nuclear Information System (INIS)

    Taber, L.; Ames, H.S.; Yamauchi, R.K.; Barton, G.W. Jr.

    1978-01-01

    These programs are the result of a joint Lawrence Livermore Laboratory and Environmental Protection Agency project to automate water quality laboratories. They form the interface between the analytical instruments and the BASIC language programs for data reduction and analysis. They operate on Data General NOVA 840's at Cincinnati and Chicago and on a Data General ECLIPSE C330 at Livermore. The operating system consists of unmodified RDOS, Data General's disk operating system, and Data General's multiuser BASIC modified to provide the instrument CALLs and other functions described. Instruments automated at various laboratories include Technicon AutoAnalyzers, atomic absorption spectrophotometers, total organic carbon analyzers, an emission spectrometer, an electronic balance, sample changers, and an optical spectrophotometer. Other instruments may be automated using these same CALLs, or new CALLs may be written as described

  15. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time

  16. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media based on the finite difference method at local-to-regional scales. This code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer for the absorbing boundary condition. A hybrid-style programming model using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
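As a toy illustration of the finite difference approach on which such codes are built (this is not OpenSWPC code, and the grid, time step, and initial pulse are invented), a second-order scheme for the 1-D scalar wave equation looks like this:

```python
import math

def simulate_1d_wave(nx=200, nt=300, c=1.0, dx=1.0, dt=0.5):
    """Toy leapfrog scheme for u_tt = c^2 u_xx with fixed ends and an
    initial Gaussian pulse. Illustrative only -- real codes like OpenSWPC
    solve the 3-D viscoelastic system with attenuation and PML boundaries."""
    r2 = (c * dt / dx) ** 2  # squared Courant number; must be <= 1 for stability
    u_prev = [math.exp(-0.01 * (i - nx // 2) ** 2) for i in range(nx)]
    u = list(u_prev)  # zero initial velocity: first two time levels equal
    for _ in range(nt):
        u_next = [0.0] * nx  # fixed (zero-displacement) boundaries
        for i in range(1, nx - 1):
            u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next
    return u

field = simulate_1d_wave()
```

The initial pulse splits into two counter-propagating waves; with the Courant number at 0.5 the scheme stays stable and the amplitude remains bounded.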

  17. Use of the ORACLE DBMS in determining the response of complex scientific instrumentation

    International Nuclear Information System (INIS)

    Auerbach, J.M.; DeMartini, B.J.; McCauley, E.W.

    1984-01-01

    In the Laser Fusion Program at Lawrence Livermore National Laboratory, a single laser fusion experiment lasts only a billionth of a second but in this time high speed instrumentation collects data that when digitized will create a data bank of several megabytes. This first level of data must be processed in several stages to put it in a form useful for interpretation of the experiments. One stage involves unfolding the source characteristics from the data and response of the instrument. This involves calculating the response of the instrument from the characteristics of each of its components. It is in this calculation that the ORACLE DBMS has become an invaluable tool for manipulation and archiving of the component data

  18. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
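Two of the three release options can be sketched directly from their descriptions; the function names and numbers below are illustrative, not the RESRAD-OFFSITE implementation, and radioactive decay during release is ignored here:

```python
import math

def first_order_release(inventory0, leach_rate, t):
    """'First order release' option: the release rate is proportional to the
    remaining inventory, with the leach rate as the proportionality constant.
    Returns (remaining inventory, cumulative release) at time t."""
    remaining = inventory0 * math.exp(-leach_rate * t)
    return remaining, inventory0 - remaining

def uniform_release(inventory0, duration, t):
    """'Uniform release' option: a constant fraction of the initially
    contaminated material is released per unit time over the stated duration."""
    released = inventory0 * min(t, duration) / duration
    return inventory0 - released, released

# Illustrative numbers only -- units and values are hypothetical
rem_f, rel_f = first_order_release(100.0, 0.1, 5.0)
rem_u, rel_u = uniform_release(100.0, 20.0, 5.0)
```

The equilibrium desorption option is not shown, since it partitions the radionuclide between solid and aqueous phases via the distribution coefficient rather than prescribing a rate.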

  19. Field instrumentation for hydrofracturing stress measurements

    International Nuclear Information System (INIS)

    Bjarnason, Bjarni; Torikka, Arne.

    1989-08-01

    A recently developed system for rock stress measurements by the hydraulic fracturing method is documented in detail. The new equipment is intended for measurements in vertical or near-vertical boreholes, down to a maximum depth of 1000 m. The minimum borehole diameter required is 56 mm. Downhole instrumentation comprises a straddle packer assembly for borehole fracturing, equipment for determination of fracture orientations and a pressure transducer. The downhole tools are operated by means of a multihose system, combining high-pressure hydraulic tubing, signal cable and carrying wire in one hose unit. The surface components of the equipment include a system for generation and control of water pressures up to approximately 75 MPa, a hydraulically operated drum for the multihose and a data acquisition system. All surface instrumentation is permanently mounted on a truck, which also serves as the power source for the instrumentation. In addition to the description of the instrumentation, the theoretical fundamentals and the testing procedures associated with the hydraulic fracturing method are briefly outlined

  20. Problems in instrumentation for S-odorant emissions

    International Nuclear Information System (INIS)

    Hall, H.J.

    1974-01-01

    Instrumentation to measure sulfur-containing odorants in stack emissions is much more difficult than in the ambient atmosphere, and must be matched to the specific source: key components aside from H2S are methyl mercaptan in paper mills, COS/CS2 in a refinery Claus plant, and SO2/SO3 in a combustion stack. Satisfactory operation for a period of six months was not achieved by any instrument in this service, in a lab and field evaluation of eight instruments of three types commercially available as of 1971. The effects of interferent gases H2O, SO2, CO2/CO and particulates, which are diluted in ambient samples, are greatly aggravated in stack gases, where the ratio of odorant to interferent may be 1:1000 or less, due to the very great sensitivity of human receptors to S-odorants. The most serious problem proved to be the analysis for odorless carbonyl sulfide, which is commonly formed where S compounds are oxidized in a reducing atmosphere. This COS has been undetected or mistaken for odorous H2S in most analyses. A field instrument for the general case would provide exactly simultaneous readings at five-minute intervals or less for the five components H2S, SO2, COS, CSH, and total S, or their equivalent. This may be simplified to four components or fewer only when the composition of the sample gas is positively known

  1. Adaptable recursive binary entropy coding technique

    Science.gov (United States)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
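The per-bit probability-of-zero estimate described above sets the best achievable compressed size regardless of which engine (arithmetic or recursive interleaved) does the coding. A hedged sketch, using a simple count-based estimator rather than the authors' method, computes that ideal code length:

```python
import math

def ideal_code_length(bits):
    """Ideal compressed size in bits when each input bit is coded against an
    adaptive probability-of-zero estimate (Laplace-style count estimator).
    Any good entropy coder approaches this sum of -log2(p) terms."""
    zeros = ones = 1  # Laplace smoothing: start both counts at 1
    total = 0.0
    for b in bits:
        p_zero = zeros / (zeros + ones)
        p = p_zero if b == 0 else 1.0 - p_zero
        total += -math.log2(p)  # information content of this bit
        if b == 0:
            zeros += 1
        else:
            ones += 1
    return total

# A heavily skewed 100-bit sequence compresses far below 100 bits
size = ideal_code_length([0] * 90 + [1] * 10)
```

Because the estimate adapts to previously seen bits, skewed sequences cost far less than one output bit per input bit, which is exactly the adaptability benefit the abstract claims.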

  2. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, using simple algebraic methods. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and could support long spans with high data rates.

  3. Thermal neutron self-shielding correction factors for large sample instrumental neutron activation analysis using the MCNP code

    International Nuclear Information System (INIS)

    Tzika, F.; Stamatelatos, I.E.

    2004-01-01

    Thermal neutron self-shielding within large samples was studied using the Monte Carlo neutron transport code MCNP. The code enabled a three-dimensional modeling of the actual source and geometry configuration including reactor core, graphite pile and sample. Neutron flux self-shielding correction factors derived for a set of materials of interest for large sample neutron activation analysis are presented and evaluated. Simulations were experimentally verified by measurements performed using activation foils. The results of this study can be applied in order to determine neutron self-shielding factors of unknown samples from the thermal neutron fluxes measured at the surface of the sample
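The self-shielding factor expresses how much the average flux inside the sample is depressed relative to the unperturbed flux. As a hedged one-group illustration only (a textbook slab approximation, not the full 3-D MCNP transport calculation described above):

```python
import math

def slab_self_shielding(sigma_t, thickness):
    """Approximate flux self-shielding factor for an absorbing slab in a
    one-sided beam: f = (1 - exp(-x)) / x, with x = Sigma_t * d.
    Illustrative one-group formula; MCNP models the actual reactor,
    graphite pile, and sample geometry to get this factor properly."""
    x = sigma_t * thickness
    if x == 0:
        return 1.0  # transparent sample: no flux depression
    return (1.0 - math.exp(-x)) / x

f_thin = slab_self_shielding(0.1, 0.1)   # nearly transparent: f close to 1
f_thick = slab_self_shielding(2.0, 5.0)  # strong absorber: f much less than 1
```

The measured activity of a large sample would then be divided by such a factor to recover the activity a thin, unperturbed sample would have shown.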

  4. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources

  5. Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder

    Science.gov (United States)

    Staats, Matt

    2009-01-01

    We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.

  6. Bioanalytical and instrumental analysis of thyroid hormone disrupting compounds in water sources along the Yangtze River

    International Nuclear Information System (INIS)

    Shi Wei; Wang Xiaoyi; Hu Guanjiu; Hao Yingqun; Zhang Xiaowei; Liu Hongling; Wei Si; Wang Xinru; Yu Hongxia

    2011-01-01

    Thyroid hormone (TH) agonist and antagonist activities of water sources along the Yangtze River in China were surveyed by a green monkey kidney fibroblast (CV-1) cell-based TH reporter gene assay. Instrumental analysis was conducted to identify the responsible thyroid-active compounds. Instrumentally derived L-3,5,3'-triiodothyronine (T3) equivalents (T3-EQs) and thyroid receptor (TR) antagonist activity equivalents referring to dibutyl phthalate (DBP-EQs) were calculated from the concentrations of individual congeners. The reporter gene assay demonstrated that three out of eleven water sources contained TR agonist activity equivalents (TR-EQs), ranging from 286 to 293 ng T3/L. Anti-thyroid hormone activities were found in all water sources, with the TR antagonist activity equivalents referring to DBP (Ant-TR-EQs) ranging from 51.5 to 555.3 μg/L. Comparisons of the equivalents from instrumental and biological assays suggested that high concentrations of DBP and di-2-ethylhexyl phthalate (DEHP) were responsible for the observed TR antagonist activities at some locations along the Yangtze River. - Research highlights: → First, we report the instrumentally derived L-3,5,3'-triiodothyronine (T3) equivalents (T3-EQs) and thyroid receptor (TR) antagonist activity equivalents referring to DBP (DBP-EQs) for the first time. → Second, high concentrations of DBP and DEHP might be responsible for the observed TR antagonist activities at some locations. → Finally, we found that thyroid receptor (TR) antagonist activities were very common in the Yangtze River; more attention should be paid to the TR antagonist activities and the responsible compounds. - In vitro bioassay responses observed in Yangtze River source water extracts showed strong TR antagonist activities, and DBP and DEHP were responsible.

  7. Bioanalytical and instrumental analysis of thyroid hormone disrupting compounds in water sources along the Yangtze River

    Energy Technology Data Exchange (ETDEWEB)

    Shi Wei [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China); Wang Xiaoyi [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China); Jiangsu Academy of Environmental Science, Nanjing 210036 (China); Hu Guanjiu; Hao Yingqun [State Environmental Protection Key Laboratory of Monitoring and Analysis for Organic Pollutants in Surface Water, Jiangsu Provincial Environmental Monitoring Center, Nanjing 210036 (China); Zhang Xiaowei [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China); Liu Hongling, E-mail: hlliu@nju.edu.c [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China); Wei Si [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China); Wang Xinru [Key Laboratory of Reproductive Medicine and Institute of Toxicology, Nanjing Medical University, Nanjing 210029 (China); Yu Hongxia, E-mail: hongxiayu@nju.edu.c [State Key Laboratory of Pollution Control and Resource Reuse, School of the Environment, Nanjing University, Nanjing 210093 (China)

    2011-02-15

    Thyroid hormone (TH) agonist and antagonist activities of water sources along the Yangtze River in China were surveyed by a green monkey kidney fibroblast (CV-1) cell-based TH reporter gene assay. Instrumental analysis was conducted to identify the responsible thyroid-active compounds. Instrumentally derived L-3,5,3'-triiodothyronine (T3) equivalents (T3-EQs) and thyroid receptor (TR) antagonist activity equivalents referring to dibutyl phthalate (DBP-EQs) were calculated from the concentrations of individual congeners. The reporter gene assay demonstrated that three out of eleven water sources contained TR agonist activity equivalents (TR-EQs), ranging from 286 to 293 ng T3/L. Anti-thyroid hormone activities were found in all water sources, with the TR antagonist activity equivalents referring to DBP (Ant-TR-EQs) ranging from 51.5 to 555.3 μg/L. Comparisons of the equivalents from instrumental and biological assays suggested that high concentrations of DBP and di-2-ethylhexyl phthalate (DEHP) were responsible for the observed TR antagonist activities at some locations along the Yangtze River. - Research highlights: First, we report the instrumentally derived L-3,5,3'-triiodothyronine (T3) equivalents (T3-EQs) and thyroid receptor (TR) antagonist activity equivalents referring to DBP (DBP-EQs) for the first time. Second, high concentrations of DBP and DEHP might be responsible for the observed TR antagonist activities at some locations. Finally, we found that thyroid receptor (TR) antagonist activities were very common in the Yangtze River; more attention should be paid to the TR antagonist activities and the responsible compounds. - In vitro bioassay responses observed in Yangtze River source water extracts showed strong TR antagonist activities, and DBP and DEHP were responsible.

  8. LOFT reactor vessel 290° downcomer stalk instrument penetration flange stress analysis

    Energy Technology Data Exchange (ETDEWEB)

    Finicle, D.P.

    1978-06-06

    The LOFT Reactor Vessel 290° Downcomer Stalk Instrument Penetration Flange Stress Analysis has been completed using normal operational and blowdown loading. A linear elastic analysis was completed using simplified hand-analysis techniques. The analysis was in accordance with the 1977 ASME Boiler and Pressure Vessel Code, Section III, for a Class 1 component. Loading included internal pressure, bolt preload, and thermal gradients due to normal operation and blowdown.

  9. The reflection component in NS LMXBs

    Directory of Open Access Journals (Sweden)

    D’Aí A.

    2014-01-01

    Full Text Available Thanks to the good spectral resolution and large effective area of the EPIC/PN instrument on board XMM-Newton, we have at hand a large number of observations of accreting low-mass X-ray binaries, allowing for the first time a comprehensive view of the characteristics of the reflection component in different accretion regimes and a probe of the effects of a magnetosphere on its formation. We focus here on a comparative analysis of the reflection component from a series of spectroscopic studies of selected sources: 4U 1705-44, observed both in the soft and in the hard state; the accreting millisecond pulsars SAX J1808.4-3658 and IGR J17511-3057; and the intermittent pulsar HETE J1900.1-2455. Although the sources can present very similar accretion rates and continuum shapes, the reflection parameters do not generally turn out the same; moreover, the effect of a magnetosphere on the formation of the reflection component remains elusive.

  10. The collection and analysis of transient test data using the mobile instrumentation data acquisition system (MIDAS)

    International Nuclear Information System (INIS)

    Uncapher, W.L.; Arviso, M.

    1995-01-01

    Packages designed to transport radioactive materials are required to survive exposure to environments defined in the Code of Federal Regulations. Cask designers can investigate package designs through structural and thermal testing of full-scale packages, components, or representative models. The acquisition of reliable response data from instrumentation measurement devices is an essential part of this testing activity. Sandia National Laboratories, under the sponsorship of the US Department of Energy (DOE), has developed the Mobile Instrumentation Data Acquisition System (MIDAS), dedicated to the collection and processing of structural and thermal data from regulatory tests

  11. Experimental studies and modelling of cation interactions with solid materials: application to the MIMICC project. (Multidimensional Instrumented Module for Investigations on chemistry-transport Coupled Codes)

    International Nuclear Information System (INIS)

    Hardin, Emmanuelle

    1999-01-01

    The study of cation interactions with solid materials is useful for defining the chemistry-interaction component of the MIMICC project (Multidimensional Instrumented Module for Investigations on chemistry-transport Coupled Codes). This project will validate chemistry-transport coupled codes. Databases have to be supplied on cesium and ytterbium interactions with solid materials in suspension. The solid materials are: a strong cation-exchange resin, a natural sand containing small amounts of impurities, and a zirconium phosphate. The cation-exchange resin is used to check that surface complexation theory can be applied to a pure cation exchanger. The sand is a natural material, and its isotherms are interpreted using data from pure oxide-cation systems, such as pure silica-cation data. The zirconium phosphate salt is of interest because of the increasing complexity of the processes involved (dissolution, sorption and co-precipitation). These data make it possible to approach natural systems, constituted by several complex solids that can interact with each other, and can also be used for chemistry-transport coupled codes. Potentiometric titrations, sorption isotherms, sorption kinetics, and cation surface-saturation curves were measured in order to obtain the parameters relevant to cation sorption at the solid surface for each solid-electrolyte-cation system. The influence of parameters such as ionic strength, pH, and electrolyte was estimated. All experimental curves were fitted with the FITEQL code, based on surface complexation theory with the constant-capacitance model, in order to give a mechanistic interpretation of the ion retention phenomenon at the solid surface. The speciation curves of all systems were also plotted with FITEQL. Systems of increasing complexity were studied: dissolution, sorption and coprecipitation coexist in the cation-salt systems. 
Then the data obtained on each single solid, considered

  12. Data acquisition system for the neutron scattering instruments at the intense pulsed neutron source

    International Nuclear Information System (INIS)

    Crawford, R.K.; Daly, R.T.; Haumann, J.R.; Hitterman, R.L.; Morgan, C.B.; Ostrowski, G.E.; Worlton, T.G.

    1981-01-01

    The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which is now coming on line for basic research in neutron scattering and neutron radiation damage. This paper describes the data-acquisition system which will handle data acquisition and instrument control for the time-of-flight neutron-scattering instruments at IPNS. This discussion covers the scientific and operational requirements for this system, and the system architecture that was chosen to satisfy these requirements. It also provides an overview of the current system implementation including brief descriptions of the hardware and software which have been developed

  13. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During software development, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.

  14. Heat source component development program. Quarterly report for April--June 1977

    Energy Technology Data Exchange (ETDEWEB)

    Foster, E.L. Jr. (comp.)

    1977-07-01

    This is the third in a series of quarterly reports describing the results of several experimental programs being conducted at Battelle-Columbus to develop components for advanced radioisotope heat source applications. The heat sources will for the most part be used in advanced static and dynamic power conversion systems. The specific component development efforts described include improved selective and nonselective vents for helium release from the fuel containment, and an improved reentry member and an improved impact member, singly and combined. The unitized reentry-impact member (RIM) was under development for use as a bifunctional ablator; its development has been stopped and the efforts are being redirected to the evaluation of materials that could be used in the near term for the module housing of the General Purpose Heat Source (GPHS). This redirection will be felt particularly in the selection of (improved) materials for reentry analysis and in the experimental evaluation of materials in impact tests. Finally, supporting thermochemical studies are reported.

  15. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected the cinema. The indexical relation to, and realistic effect of, the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro's consideration of "how a non-indexical realism might be possible" (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  16. Experience with copper oxide production in antiproton source components at Fermi National Accelerator Laboratory

    International Nuclear Information System (INIS)

    Ader, Christine R.; Harms, Elvin R. Jr; Morgan, James P.

    2000-01-01

    The Antiproton (Pbar) Source at Fermi National Accelerator Laboratory is a facility comprising a target station, two rings called the Debuncher and Accumulator, and the transport lines between those rings and the remainder of the particle accelerator complex. Water is by far the most common medium for carrying excess heat away from components, primarily electromagnets, in this facility. The largest of the water systems in Pbar is the 95-degree-Fahrenheit Low Conductivity Water (LCW) system. LCW is water which has had free ions removed, increasing its resistance to electrical current. This water circuit is used to cool magnets, power supplies, and stochastic cooling components and typically has a resistivity of 11-18 megohm-cm. For more than ten years the Antiproton rings were plagued with overheating magnets due to plugged water-cooling channels. Various repairs were tried over the years with no permanent success. Throughout this time, water samples indicated copper oxide, CuO, as the source of the contamination. Matters came to a head in early 1997 following a major underground LCW leak between the Central Utilities Building and the Antiproton Rings enclosures. Over a span of several weeks following system turn-on, some twenty magnets overheated, leading to unreliable Pbar source operation. Although it was known that oxygen in the system reacts with the copper tubing to form CuO, work to remedy this problem was not undertaken until this time period. Leaks, large quantities of make-up water, infrequent filter replacement, and thermal cycling also increase the corrosion-product release rate. 
A three-pronged approach has been implemented to minimize the amount of copper oxide available to plug the magnets: (1) installation of an oxygen removal system capable of achieving dissolved oxygen concentrations in the parts per billion (ppb) range; (2) regular closed-loop filter/flushing of the copper headers and magnets and stainless

  17. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded-aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and the ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented
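    The decoding principle behind such instruments — correlating the detector counts with a pattern derived from the mask — can be sketched for a one-dimensional coded mask. This is an illustrative toy only (the array size, random mask, and source position are invented, not the ATREX design):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64                                  # number of mask / sky elements
mask = rng.integers(0, 2, n)            # random open(1)/closed(0) mask pattern
sky = np.zeros(n)
sky[17] = 100.0                         # a single point source

# Detector counts: each detector offset sees the sky through a shifted mask
detector = np.array([sky @ np.roll(mask, -s) for s in range(n)])

# Balanced decoding array: +1 for open elements, a negative weight for closed
# ones, so that a flat sky correlates to zero
open_frac = mask.mean()
decoder = np.where(mask == 1, 1.0, -open_frac / (1.0 - open_frac))

# Reconstructed sky: cross-correlate the counts with the decoding array
recon = np.array([detector @ np.roll(decoder, -t) for t in range(n)])

print(int(np.argmax(recon)))            # peaks at the source position
```

    At the true source position the mask and decoder align, so the correlation peaks there; elsewhere the balanced weights average out for a random mask.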

  18. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
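    A minimal sketch of the byte-level n-gram idea follows. The profile size, n-gram length, and code snippets are invented for illustration; the paper's actual simplified profiles and similarity measure may differ in detail:

```python
from collections import Counter

def profile(source: str, n: int = 3, size: int = 50) -> set:
    """Simplified profile: the `size` most frequent byte-level n-grams."""
    data = source.encode("utf-8")
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return {g for g, _ in grams.most_common(size)}

def similarity(p1: set, p2: set) -> int:
    """Similarity as the number of shared n-grams between two profiles."""
    return len(p1 & p2)

# Two 'authors' with different coding styles, and an unknown sample
author_a = 'for (int i = 0; i < n; i++) { sum += a[i]; }'
author_b = 'i = 0\nwhile i < n:\n    total = total + a[i]\n    i += 1\n'
unknown  = 'for (int j = 0; j < m; j++) { sum += b[j]; }'

pa, pb, pu = profile(author_a), profile(author_b), profile(unknown)
print(similarity(pa, pu), similarity(pb, pu))  # the C-style author scores higher
```

    Because profiles are built from raw bytes rather than parsed tokens, the same code works unchanged for C++, Java, or any other language, which is the language-independence the abstract claims.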

  19. Codes of conduct: An extra suave instrument of EU governance?

    DEFF Research Database (Denmark)

    Borras, Susana

    able to coordinate actors successfully (effectiveness)? and secondly, under what conditions are codes of conduct able to generate democratically legitimate political processes? The paper examines carefully a recent case study, the “Code of Conduct for the Recruitment of Researchers” (CCRR). The code...... establishes a specific set of voluntary norms and principles that shall guide the recruiting process of researchers by European research organizations (universities, public research organizations and firms) in the 33 countries of the single market minded initiative of the European Research Area. A series...

  20. Integrated computer codes for nuclear power plant severe accident analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jordanov, I; Khristov, Y [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika

    1996-12-31

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs.

  1. Integrated computer codes for nuclear power plant severe accident analysis

    International Nuclear Information System (INIS)

    Jordanov, I.; Khristov, Y.

    1995-01-01

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs

  2. Instrument for observing transient cosmic gamma-ray sources for the ISEE-C Heliocentric spacecraft

    International Nuclear Information System (INIS)

    Evans, W.D.; Aiello, W.P.; Klebesadel, R.W.

    1977-12-01

    Satellite instrumentation that would serve as one element of a three-satellite network to provide precise directional information for the recently discovered cosmic gamma-ray bursts is described. The proposed network would be capable of determining source locations with uncertainties of less than one arc minute, sufficient for a meaningful optical and radio search. The association of the gamma bursts with a known type of astrophysical object provides the most direct method for establishing source distances and thus defining the overall energetics of the emission process

  3. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry; a more effective accident prevention system helps to promote a safety culture as well as to gain public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for protection against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface between FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors

  4. Compact blackbody calibration sources for in-flight calibration of spaceborne infrared instruments

    Science.gov (United States)

    Scheiding, S.; Driescher, H.; Walter, I.; Hanbuch, K.; Paul, M.; Hartmann, M.; Scheiding, M.

    2017-11-01

    High-emissivity blackbodies are mandatory as calibration sources in infrared radiometers. Besides the requirements of high spectral emissivity and low reflectance, constraints regarding energy consumption, installation space and mass must be considered during instrument design. Cavity radiators provide an outstanding spectral emissivity at the price of the installation space and mass of the calibration source. Surface radiation sources are mainly limited by the spectral emissivity of the functional coating and the homogeneity of the temperature distribution. The effective emissivity of a "black" surface can be optimized by structuring the substrate so as to enlarge the ratio of the surface area to its projection. Based on the experience with the Mercury Radiometer and Thermal Infrared Spectrometer (MERTIS) calibration source MBB3, the effect of surface structuring on the effective emissivity is described analytically and compared to the experimental performance. Different geometries are analyzed and the production methods are discussed. The high-emissivity temperature calibration source achieves emissivity values of 0.99 for wavelengths from 5 μm to 10 μm and larger than 0.95 for the spectral range from 10 μm to 40 μm.

  5. Optical CDMA components requirements

    Science.gov (United States)

    Chan, James K.

    1998-08-01

    Optical CDMA is a complementary multiple access technology to WDMA. Optical CDMA potentially provides a large number of virtual optical channels for IXC, LEC and CLEC or supports a large number of high-speed users in LAN. In a network, it provides asynchronous, multi-rate, multi-user communication with network scalability, re-configurability (bandwidth on demand), and network security (provided by inherent CDMA coding). However, optical CDMA technology is less mature in comparison to WDMA. The components requirements are also different from WDMA. We have demonstrated a video transport/switching system over a distance of 40 Km using discrete optical components in our laboratory. We are currently pursuing PIC implementation. In this paper, we will describe the optical CDMA concept/features, the demonstration system, and the requirements of some critical optical components such as broadband optical source, broadband optical amplifier, spectral spreading/de-spreading, and fixed/programmable mask.

  6. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen , Muhammad Rabee; Du Bousquet , Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems easy to test. Several metrics have been proposed to identify the testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related with testability. This article is a critical survey of the source-code based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  7. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  8. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data-manifold source model and the training/configuration of replicator neural networks are discussed.
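    The replicator idea — a bottleneck network trained to reproduce its input, whose middle layer furnishes low-dimensional coordinates — can be illustrated with a deliberately stripped-down example: a linear autoencoder with a single hidden unit (the paper's replicator networks use three hidden layers and a quantizing middle layer) on toy data lying near a 1-D manifold:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy vector source in R^3 whose data lie near a line: intrinsic dimension 1
t = rng.uniform(-1, 1, (200, 1))
X = t @ np.array([[2.0, -1.0, 0.5]]) + 0.01 * rng.normal(size=(200, 3))

# 3 -> 1 -> 3 linear autoencoder trained by gradient descent on the
# mean squared reconstruction error
W_enc = rng.normal(scale=0.1, size=(3, 1))
W_dec = rng.normal(scale=0.1, size=(1, 3))
lr = 0.05

def mse():
    return float(np.mean(((X @ W_enc) @ W_dec - X) ** 2))

err_before = mse()
for _ in range(500):
    H = X @ W_enc                       # 1-D coordinate per sample
    R = H @ W_dec                       # reconstruction
    G = 2 * (R - X) / len(X)            # gradient of the loss w.r.t. R
    W_dec -= lr * H.T @ G
    W_enc -= lr * X.T @ (G @ W_dec.T)
err_after = mse()

print(err_after < err_before)           # reconstruction error shrinks
```

    As training drives the reconstruction error toward its minimum, the bottleneck value `H` becomes a usable coordinate along the data manifold, which is the mechanism the abstract describes in the nonlinear, multi-layer case.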

  9. Scheme for radiography/tomography with a low-brilliance neutron source at the CPHS

    International Nuclear Information System (INIS)

    Huang Zhifeng; Xiao Yongshun; Zhang Ran; Li Yuanji; Han Xiaoxue; Shao Beibei; Wang Xuewu; Wei Jie; Loong, C.-K.

    2011-01-01

    A cold neutron radiography/tomography instrument was designed and will soon undergo construction at the Compact Pulsed Hadron Source (CPHS) of Tsinghua University, China. In this paper, we report the physical design of the instrument and propose a scheme to implement several techniques at a later phase to enhance the utilization of the flux on larger samples. These include coded-aperture and grating-based imaging, prompt gamma-ray analysis and 3D emission CT.

  10. Identifying sources of emerging organic contaminants in a mixed use watershed using principal components analysis.

    Science.gov (United States)

    Karpuzcu, M Ekrem; Fairbairn, David; Arnold, William A; Barber, Brian L; Kaufenberg, Elizabeth; Koskinen, William C; Novak, Paige J; Rice, Pamela J; Swackhamer, Deborah L

    2014-01-01

    Principal components analysis (PCA) was used to identify sources of emerging organic contaminants in the Zumbro River watershed in Southeastern Minnesota. Two main principal components (PCs) were identified, which together explained more than 50% of the variance in the data. Principal Component 1 (PC1) was attributed to urban wastewater-derived sources, including municipal wastewater and residential septic tank effluents, while Principal Component 2 (PC2) was attributed to agricultural sources. The variances of the concentrations of cotinine, DEET and the prescription drugs carbamazepine, erythromycin and sulfamethoxazole were best explained by PC1, while the variances of the concentrations of the agricultural pesticides atrazine, metolachlor and acetochlor were best explained by PC2. Mixed use compounds carbaryl, iprodione and daidzein did not specifically group with either PC1 or PC2. Furthermore, despite the fact that caffeine and acetaminophen have been historically associated with human use, they could not be attributed to a single dominant land use category (e.g., urban/residential or agricultural). Contributions from septic systems did not clarify the source for these two compounds, suggesting that additional sources, such as runoff from biosolid-amended soils, may exist. Based on these results, PCA may be a useful way to broadly categorize the sources of new and previously uncharacterized emerging contaminants or may help to clarify transport pathways in a given area. Acetaminophen and caffeine were not ideal markers for urban/residential contamination sources in the study area and may need to be reconsidered as such in other areas as well.
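    The workflow the study describes — standardize the concentration matrix, extract principal components, and read source categories off the loadings — can be sketched on synthetic data with two built-in source factors (all numbers below are invented, not the Zumbro River measurements):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical concentration matrix: 30 samples x 6 analytes. The first three
# analytes co-vary (an "urban wastewater" factor) and the last three co-vary
# (an "agricultural" factor), mimicking two distinct contaminant sources.
urban = rng.uniform(0, 1, (30, 1))
agric = rng.uniform(0, 1, (30, 1))
X = np.hstack([urban @ np.ones((1, 3)), agric @ np.ones((1, 3))])
X += 0.05 * rng.normal(size=X.shape)    # measurement noise

# PCA: standardize, then eigendecompose the correlation matrix
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr = Z.T @ Z / len(Z)
evals, evecs = np.linalg.eigh(corr)     # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

explained = evals[:2].sum() / evals.sum()
loadings = evecs[:, :2]                 # analyte loadings on PC1 and PC2

print(round(float(explained), 2))       # two PCs capture most of the variance
```

    With two independent source factors, two principal components dominate the variance, which parallels the >50% explained by PC1 and PC2 in the study; each analyte's loadings then indicate which source best explains its variance.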

  11. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P₁ scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  12. Converter of a continuous code into the Gray code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Gray code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer's differential nonlinearity to ±0.7% over 98% of the measured range. The conversion of the continuous code corresponding to the input signal amplitude into the Gray code exploits the regular cycling of ones and zeroes in each bit of the Gray code as the pulse count of the continuous code changes continuously. The converter is built from 155-series elements; the continuous-code pulse rate at the converter input is 25 MHz
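    The regularity the converter exploits follows from the bitwise relation gray = n XOR (n >> 1): each Gray bit cycles at half the rate of the binary bit below it, so consecutive code words differ in exactly one bit. A software sketch of that relation (the original device is 155-series hardware logic):

```python
def binary_to_gray(n: int) -> int:
    """Each Gray bit is the XOR of adjacent binary bits."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the encoding: each binary bit is the XOR of all higher Gray bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

codes = [binary_to_gray(i) for i in range(16)]

# Consecutive Gray codes differ in exactly one bit position, so a single
# miscounted pulse perturbs the digitized value by at most one step
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
print(codes[:4])  # [0, 1, 3, 2]
```

    The single-bit-change property is what suppresses the large transient errors a plain binary counter would produce at code boundaries, and hence the differential nonlinearity.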

  13. MULTI-COMPONENT ANALYSIS OF POSITION-VELOCITY CUBES OF THE HH 34 JET

    International Nuclear Information System (INIS)

    Rodríguez-González, A.; Esquivel, A.; Raga, A. C.; Cantó, J.; Curiel, S.; Riera, A.; Beck, T. L.

    2012-01-01

    We present an analysis of Hα spectra of the HH 34 jet with two-dimensional spectral resolution. We carry out multi-Gaussian fits to the spatially resolved line profiles and derive maps of the intensity, radial velocity, and velocity width of each of the components. We find that close to the outflow source we have three components: a high (negative) radial velocity component with a well-collimated, jet-like morphology; an intermediate velocity component with a broader morphology; and a positive radial velocity component with a non-collimated morphology and large linewidth. We suggest that this positive velocity component is associated with jet emission scattered in stationary dust present in the circumstellar environment. Farther away from the outflow source, we find only two components (a high, negative radial velocity component, which has a narrower spatial distribution than an intermediate velocity component). The fitting procedure was carried out with the new AGA-V1 code, which is available online and is described in detail in this paper.

  14. Radioisotope instruments

    CERN Document Server

    Cameron, J F; Silverleaf, D J

    1971-01-01

    International Series of Monographs in Nuclear Energy, Volume 107: Radioisotope Instruments, Part 1 focuses on the design and applications of instruments based on the radiation released by radioactive substances. The book first offers information on the physical basis of radioisotope instruments; technical and economic advantages of radioisotope instruments; and radiation hazard. The manuscript then discusses commercial radioisotope instruments, including radiation sources and detectors, computing and control units, and measuring heads. The text describes the applications of radioisotop

  15. Nanopositioning techniques development for synchrotron radiation instrumentation applications at the Advanced Photon Source

    International Nuclear Information System (INIS)

    Shu Deming

    2010-01-01

    At modern synchrotron radiation sources and beamlines, high-precision positioning techniques present a significant opportunity to support state-of-the-art synchrotron radiation research. Meanwhile, the required instrument positioning performance and capabilities, such as resolution, dynamic range, repeatability, speed, and multiple axes synchronization are exceeding the limit of commercial availability. This paper presents the current nanopositioning techniques developed for the Argonne Center for Nanoscale Materials (CNM)/Advanced Photon Source (APS) hard x-ray nanoprobe and high-resolution x-ray monochromators and analyzers for the APS X-ray Operations and Research (XOR) beamlines. Future nanopositioning techniques to be developed for the APS renewal project will also be discussed.

  16. Evolutionary programming for neutron instrument optimisation

    Energy Technology Data Exchange (ETDEWEB)

    Bentley, Phillip M. [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany)]. E-mail: phillip.bentley@hmi.de; Pappas, Catherine [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Habicht, Klaus [Hahn-Meitner Institut, Glienicker Strasse 100, D-14109 Berlin (Germany); Lelievre-Berna, Eddy [Institut Laue-Langevin, 6 rue Jules Horowitz, BP 156, 38042 Grenoble Cedex 9 (France)

    2006-11-15

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McStas and Vitess) are extensively used to define and optimise novel instrument concepts. Neutron spectrometers, however, involve a large number of parameters, and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McStas, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding locally optimal configurations.
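    A canonical GA of the kind described reduces to repeated selection, crossover and mutation over a parameter vector. The sketch below substitutes a toy analytic figure of merit for the expensive Monte-Carlo (McStas/Vitess) fitness evaluation; the parameter names, ranges, and optimum are invented:

```python
import random

random.seed(3)

# Toy figure of merit for two hypothetical instrument parameters, with its
# optimum at (0.3, -0.7). In practice each evaluation would be a full
# Monte-Carlo ray-tracing run of the virtual instrument.
def fitness(ind):
    x, y = ind
    return -((x - 0.3) ** 2 + (y + 0.7) ** 2)

def mutate(ind, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in ind]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

pop = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(40)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # truncation selection with elitism
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(30)]

best = max(pop, key=fitness)
print(round(fitness(best), 3))           # close to the optimum value of 0
```

    Because the population samples the whole parameter range before selection narrows it, the search can escape the locally optimal configurations that gradient-style tuning of a spectrometer tends to get stuck in.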

  17. Evolutionary programming for neutron instrument optimisation

    International Nuclear Information System (INIS)

    Bentley, Phillip M.; Pappas, Catherine; Habicht, Klaus; Lelievre-Berna, Eddy

    2006-01-01

    Virtual instruments based on Monte-Carlo techniques are now an integral part of novel instrumentation development, and the existing codes (McSTAS and Vitess) are extensively used to define and optimise novel instrumental concepts. Neutron spectrometers, however, involve a large number of parameters and their optimisation is often a complex and tedious procedure. Artificial intelligence algorithms are proving increasingly useful in such situations. Here, we present an automatic, reliable and scalable numerical optimisation concept based on the canonical genetic algorithm (GA). The algorithm was used to optimise the 3D magnetic field profile of the NSE spectrometer SPAN, at the HMI. We discuss the potential of the GA, which, combined with the existing Monte-Carlo codes (Vitess, McSTAS, etc.), leads to a very powerful tool for automated global optimisation of a general neutron scattering instrument, avoiding local optimum configurations.

  18. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between plasma, source walls, and beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...
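    The PIC half of the PIC-MCC method advances macro-particles in the electromagnetic fields; the standard particle-push step in such codes is the Boris scheme, sketched below for a single particle. This is a generic textbook illustration, not code from ONIX.

    ```python
    import math

    def cross(a, b):
        """Cross product of two 3-vectors given as lists."""
        return [a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0]]

    def boris_push(v, E, B, q_m, dt):
        """One Boris step: half electric kick, magnetic rotation, half kick."""
        v_minus = [vi + 0.5 * q_m * dt * Ei for vi, Ei in zip(v, E)]
        t = [0.5 * q_m * dt * Bi for Bi in B]
        t2 = sum(ti * ti for ti in t)
        s = [2.0 * ti / (1.0 + t2) for ti in t]
        v_prime = [vm + c for vm, c in zip(v_minus, cross(v_minus, t))]
        v_plus = [vm + c for vm, c in zip(v_minus, cross(v_prime, s))]
        return [vp + 0.5 * q_m * dt * Ei for vp, Ei in zip(v_plus, E)]

    # In a pure magnetic field the Boris rotation conserves particle speed,
    # which is the property that makes the scheme stable over long runs.
    v = [1.0, 0.0, 0.0]
    for _ in range(1000):
        v = boris_push(v, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0],
                       q_m=1.0, dt=0.1)
    speed = math.sqrt(sum(vi * vi for vi in v))
    ```

    A production PIC-MCC code wraps this push in a loop that also deposits charge on a grid, solves the Poisson equation for E, and applies Monte Carlo collision events between pushes.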

  19. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte-Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water material in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5 derived radial dose and 2D anisotropy functions results were generally closer to the measured data (within {+-}4%) than MCNP4c and the published data for PTRAN code (Version 7.43), while the opposite was seen for dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
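    For background, the updated AAPM TG-43 formalism that the abstract refers to expresses the dose rate in water around a seed as follows (standard TG-43 notation, reproduced here as general background rather than taken from the paper itself):

    ```latex
    \dot{D}(r,\theta) = S_K \,\Lambda\,
      \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
    \qquad r_0 = 1\,\mathrm{cm},\quad \theta_0 = \pi/2,
    ```

    where \(S_K\) is the air-kerma strength, \(\Lambda\) the dose rate constant, \(G_L\) the line-source geometry function, \(g_L(r)\) the radial dose function, and \(F(r,\theta)\) the 2D anisotropy function; the Monte Carlo simulations in the study supply \(\Lambda\), \(g_L(r)\), and \(F(r,\theta)\).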

  20. Investigating the association of cardiovascular effects with personal exposure to particle components and sources

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Chang-fu, E-mail: changfu@ntu.edu.tw [Department of Public Health, National Taiwan University, Taipei 100, Taiwan (China); Institute of Environmental Health, National Taiwan University, Taipei 100, Taiwan (China); Institute of Occupational Medicine and Industrial Hygiene, Taipei 100, Taiwan (China); Li, Ya-Ru; Kuo, I-Chun [Institute of Environmental Health, National Taiwan University, Taipei 100, Taiwan (China); Hsu, Shih-Chieh [Research Center for Environmental Changes, Academia Sinica, Taipei 115, Taiwan (China); Lin, Lian-Yu; Su, Ta-Chen [Department of Internal Medicine, National Taiwan University Hospital, Taipei 100, Taiwan (China)

    2012-08-01

    Background: Few studies included information on components and sources when exploring the cardiovascular health effects from personal exposure to particulate matters (PM). We previously reported that exposure to PM between 1.0 and 2.5 {mu}m (PM{sub 2.5-1}) was associated with increased cardio-ankle vascular index (CAVI, an arterial stiffness index), while exposure to PM smaller than 0.25 {mu}m (PM{sub 0.25}) decreased the heart rate variability (HRV) indices. The purpose of this study was to investigate the association between PM elements and cardiovascular health effects and identify responsible sources. Methods: In a panel study of seventeen mail carriers, the subjects were followed for 5-6 days while delivering mail outdoors. Personal filter samples of PM{sub 2.5-1} and PM{sub 0.25} were analyzed for their elemental concentrations. The source-specific exposures were further estimated by using absolute principal factor analysis. We analyzed the component- and source-specific health effects on HRV indices and CAVI using mixed models. Results: Several elements in PM{sub 2.5-1} (e.g., cadmium and strontium) were associated with the CAVI. Subsequent analyses showed that an interquartile range increase in exposure to PM from regional sources was significantly associated with a 3.28% increase in CAVI (95% confidence interval (CI), 1.47%-5.13%). This significant effect remained (3.35%, CI: 1.62%-5.11%) after controlling for the ozone exposures. For exposures to PM{sub 0.25}, manganese, calcium, nickel, and chromium were associated with the CAVI and/or the HRV indices. Conclusions: Our study suggests that PM{sub 2.5-1} and PM{sub 0.25} components may be associated with different cardiovascular effects. Health risks from exposure to PM from sources other than vehicle exhaust should not be underappreciated. - Highlights: ► Increased arterial stiffness was related to the components in particles between 1.0 and 2.5 {mu}m.

  1. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    Full Text Available The intent of this paper is to study the performance of spectral-amplitude coding optical code-division multiple-access (OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. This code can be constructed in an algebraic way based on Euclidean vectors for any positive integer. One of the important properties of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase-induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, taking into account the effects of intensity, shot and thermal noise sources, are demonstrated. The impact of the fiber distance on bit error rate (BER) is reported using a commercial optical systems simulator, virtual photonic instrument, VPI™. The VC code is compared mathematically with reported codes which use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in the SAC-OCDMA system is reported. By comparing the theoretical and simulation results taken from VPI™, we have demonstrated that, for a high number of users, even if the data rate is higher, the effective power source is adequate when the VC code is used. Also it is found that as the channel spacing width goes from very narrow to wider, the BER decreases; the best performance occurs at a spacing bandwidth between 0.8 and 1 nm. We have shown that the SAC system utilizing the VC code significantly improves the performance compared with the reported codes.
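    The defining property claimed for the VC code, a maximum in-phase cross-correlation of one, can be checked mechanically. The codewords below are hypothetical weight-2 examples chosen only to illustrate the check; they are not the actual VC construction from the paper.

    ```python
    from itertools import combinations

    def cross_correlation(x, y):
        """In-phase cross-correlation of two binary (0/1) codewords."""
        return sum(a & b for a, b in zip(x, y))

    # Hypothetical unipolar codewords with the lambda_max <= 1 property:
    # no two codewords share more than one chip position.
    codes = [
        (1, 1, 0, 0, 0, 0),
        (0, 1, 1, 0, 0, 0),
        (1, 0, 0, 1, 0, 0),
    ]

    max_cc = max(cross_correlation(x, y) for x, y in combinations(codes, 2))
    ```

    Keeping the maximum cross-correlation at one bounds how much of any other user's power leaks into a matched decoder, which is why MUI and phase-induced intensity noise shrink with such codes.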

  2. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  3. Supervision of the vibration of rotating components

    International Nuclear Information System (INIS)

    1982-06-01

    The aim of the investigation was to argue for the systematization and uniformity of surveillance and to form a source of information for the makers of instruments, suppliers of engines, consultants and others. Two essential topics are treated, namely rotor dynamics and measuring methods for vibration control. An inventory of damages and problems of rotating machinery is presented. Recommendations concerning various supervision programs of reactor safety, the importance of components, risk of missiles and erroneous operations are given, along with instructions on how to obtain suitable instruments. Experience from nuclear power plants is said to be essential. Experimental activity at the Ringhals and/or Forsmark power plants is proposed. (G.B.)

  4. Nanosurveyor 2: A Compact Instrument for Nano-Tomography at the Advanced Light Source

    Science.gov (United States)

    Celestre, Richard; Nowrouzi, Kasra; Shapiro, David A.; Denes, Peter; Joseph, John M.; Schmid, Andreas; Padmore, Howard A.

    2017-06-01

    The Advanced Light Source has developed a compact tomographic microscope based on soft x-ray ptychography for the study of nanoscale materials [1,2]. The microscope utilizes the sample manipulator mechanism from a commercial TEM coupled with laser interferometric feedback for zone plate positioning and a fast frame rate charge-coupled device detector for soft x-ray diffraction measurements. The microscope has achieved point to point (25 nm steps) scan rates of greater than 120 Hz with a positioning accuracy of better than 1 nm RMS. The instrument will enable the use of commercially available sample holders compatible with FEI transmission electron microscopes thus also allowing in-situ measurement of samples using both soft x-rays and electrons. This instrument is a refinement of a currently commissioned instrument called The Nanosurveyor, which has demonstrated resolution of better than 10 nm in two dimensions using 750 eV x-rays. Once moved to the new Coherent Scattering and Microscopy beamline it will enable spectromicroscopy and tomography of nano-materials with wavelength limited spatial resolution.

  5. Synchrotron light sources and free-electron lasers accelerator physics, instrumentation and science applications

    CERN Document Server

    Khan, Shaukat; Schneider, Jochen; Hastings, Jerome

    2016-01-01

    Hardly any other discovery of the nineteenth century had such an impact on science and technology as Wilhelm Conrad Röntgen's seminal find of the X-rays. X-ray tubes soon made their way as excellent instruments for numerous applications in medicine, biology, materials science and testing, chemistry and public security. Developing new radiation sources with higher brilliance and a much extended spectral range resulted in stunning developments like the electron synchrotron and electron storage ring and the free-electron laser. This handbook highlights these developments in fifty chapters. The reader is given not only an inside view of exciting science areas but also of design concepts for the most advanced light sources. The theory of synchrotron radiation and of the free-electron laser, design examples and the technology basis are presented. The handbook presents advanced concepts like seeding and harmonic generation, the booming field of Terahertz radiation sources and upcoming brilliant light sources dri...

  6. Ranking of risk significant components for the Davis-Besse Component Cooling Water System

    International Nuclear Information System (INIS)

    Seniuk, P.J.

    1994-01-01

    Utilities that run nuclear power plants are responsible for testing the pumps and valves, as specified by the American Society of Mechanical Engineers (ASME), that are required for safe shutdown, mitigating the consequences of an accident, and maintaining the plant in a safe condition. These inservice components are tested according to ASME Codes, either the earlier requirements of the ASME Boiler and Pressure Vessel Code, Section XI, or the more recent requirements of the ASME Operation and Maintenance Code, Section IST. These codes dictate test techniques and frequencies regardless of the component failure rate or significance of failure consequences. A probabilistic risk assessment or probabilistic safety assessment may be used to evaluate the component importance for inservice test (IST) risk ranking, which is a combination of failure rate and failure consequences. Resources for component testing during the normal quarterly verification test or postmaintenance test are expensive. Normal quarterly testing may cause component unavailability. Outage testing may increase outage cost with no real benefit. This paper identifies the importance ranking of risk significant components in the Davis-Besse component cooling water system. Identifying the ranking of these risk significant IST components adds technical insight for developing the appropriate test technique and test frequency.

  7. Instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Decreton, M

    2001-04-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor.

  8. Instrumentation

    International Nuclear Information System (INIS)

    Decreton, M.

    2001-01-01

    SCK-CEN's research and development programme on instrumentation involves the assessment and the development of sensitive measurement systems used within a radiation environment. Particular emphasis is on the assessment of optical fibre components and their adaptability to radiation environments. The evaluation of ageing processes of instrumentation in fission plants, the development of specific data evaluation strategies to compensate for ageing induced degradation of sensors and cable performance form part of these activities. In 2000, particular emphasis was on in-core reactor instrumentation applied to fusion, accelerator driven and water-cooled fission reactors. This involved the development of high performance instrumentation for irradiation experiments in the BR2 reactor in support of new instrumentation needs for MYRRHA, and for diagnostic systems for the ITER reactor

  9. Instrument Control (iC) - An Open-Source Software to Automate Test Equipment.

    Science.gov (United States)

    Pernstich, K P

    2012-01-01

    It has become common practice to automate data acquisition from programmable instrumentation, and a range of different software solutions fulfill this task. Many routine measurements require sequential processing of certain tasks, for instance to adjust the temperature of a sample stage, take a measurement, and repeat that cycle for other temperatures. This paper introduces an open-source Java program that processes a series of text-based commands that define the measurement sequence. These commands are in an intuitive format which provides great flexibility and allows quick and easy adaptation to various measurement needs. For each of these commands, the iC-framework calls a corresponding Java method that addresses the specified instrument to perform the desired task. The functionality of iC can be extended with minimal programming effort in Java or Python, and new measurement equipment can be addressed by defining new commands in a text file without any programming.

  10. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  11. A code for the calculation of self-absorption fractions of photons

    International Nuclear Information System (INIS)

    Jaegers, P.; Landsberger, S.

    1988-01-01

    Neutron activation analysis (NAA) is now a well-established technique used by many researchers and commercial companies. It is often wrongly assumed that these NAA methods are matrix independent over a wide variety of samples. Accuracy at the level of a few percent is often difficult to achieve, since components such as timing, pulse pile-up, high dead-time corrections, sample positioning, and chemical separations may severely compromise the results. One area that has received little attention is the calculation of the effect of self-absorption of gamma-rays (including low-energy ones) in samples, particularly those with major components of high-Z values. The analysis of trace components in lead samples is an obvious example, but other high-Z matrices such as various permutations and combinations of zinc, tin, lead, copper, silver, antimony, etc.; ore concentrates; and meteorites are also affected. The authors have, however, developed a simple but effective, user-friendly, personal-computer-compatible code which can calculate the amount of energy signal that is lost due to the presence of any amount of one or more Z components. The program is based on Dixon's 1951 paper for the calculation of self-absorption corrections for linear, cylindrical, and spherical sources. To determine the self-absorption fraction of a photon in a source, the FORTRAN computer code SELFABS was written.
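    For the simplest of the geometries mentioned, a uniform line source attenuated toward one end with coefficient mu, the escape fraction has a closed form that a code like SELFABS would evaluate. The sketch below checks the closed form against direct numerical integration; it is an illustrative one-dimensional case only, not the full Dixon treatment of cylindrical and spherical sources.

    ```python
    import math

    def escape_fraction(mu, L):
        """Mean escape probability for photons emitted uniformly along a
        line source of length L and attenuated toward one end:
        (1/L) * integral_0^L exp(-mu*x) dx = (1 - exp(-mu*L)) / (mu*L)."""
        return (1.0 - math.exp(-mu * L)) / (mu * L)

    def escape_fraction_numeric(mu, L, n=100_000):
        # Midpoint-rule evaluation of the same average-transmission integral.
        h = L / n
        return sum(math.exp(-mu * (i + 0.5) * h) for i in range(n)) * h / L

    f = escape_fraction(0.5, 2.0)   # self-absorbed fraction is then 1 - f
    ```

    The measured peak counts would be multiplied by 1/f to correct for the photons lost inside the sample, and the correction grows quickly with both mu (high-Z matrices) and source size.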

  12. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...
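    The role played by coding coefficients in scalar network coding is easiest to see on the classic butterfly network over GF(2), where the bottleneck edge carries a linear combination of both source bits. This is the standard textbook example, not code from the paper.

    ```python
    def butterfly(a, b):
        """Scalar network coding over GF(2) on the butterfly network."""
        coded = a ^ b              # bottleneck node forwards a XOR b
        sink1 = (a, coded ^ a)     # sees a directly, recovers b = (a^b)^a
        sink2 = (coded ^ b, b)     # sees b directly, recovers a = (a^b)^b
        return sink1, sink2
    ```

    Both sinks receive both bits per channel use, which routing alone cannot achieve on this topology. In vector network coding, the scalar coefficient 1 on each edge is replaced by an L x L matrix acting on length-L packet vectors.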

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009), to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms' interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  14. 77 FR 6463 - Revisions to Labeling Requirements for Blood and Blood Components, Including Source Plasma...

    Science.gov (United States)

    2012-02-08

    ... Blood Components, Including Source Plasma; Correction AGENCY: Food and Drug Administration, HHS. ACTION..., Including Source Plasma,'' which provided incorrect publication information regarding a 60-day notice that...

  15. Quality Assurance for Space Instruments Built with COTS

    DEFF Research Database (Denmark)

    Guldager, Peter Buch; Thuesen, Gøsta Guldbæk; Jørgensen, John Leif

    2005-01-01

    Instruments for space can be built with COTS. However, no radiation data are available for COTS, so the only way to ensure that the components can survive the space environment is to irradiate each component. Samples from each Lot have to be irradiated, because the manufacturing process can be changed at any time and have major consequences for the component's ability to survive the space environment. A safe way to protect components which are not Latch-Up immune is to protect them with a Latch-Up protection circuit. A strict control has to be established when procuring COTS components, testing and manufacturing the instrument before the instrument is qualified for space. By having a strict control with instruments built with COTS, it is possible to manufacture a reliable instrument as with Rad-Hard components.

  16. Analysis of Iterated Hard Decision Decoding of Product Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2007-01-01

    Products of Reed-Solomon codes are important in applications because they offer a combination of large blocks, low decoding complexity, and good performance. A recent result on random graphs can be used to show that with high probability a large number of errors can be corrected by iterating minimum distance decoding. We present an analysis related to density evolution which gives the exact asymptotic value of the decoding threshold and also provides a closed-form approximation to the distribution of errors in each step of the decoding of finite-length codes.
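    The iterated hard-decision decoding analysed above can be mimicked with a toy model in which each Reed-Solomon row or column decoder corrects any line containing at most t symbol errors (bounded-distance decoding). The matrix size and t below are illustrative, not parameters from the paper.

    ```python
    def iterate_decode(err, t):
        """Iteratively clear rows, then columns, whose error count is <= t.
        `err` is a 0/1 matrix marking symbol errors; returns the residue."""
        err = [row[:] for row in err]
        changed = True
        while changed:
            changed = False
            for axis in (0, 1):
                rows = err if axis == 0 else [list(c) for c in zip(*err)]
                for r in rows:
                    if 0 < sum(r) <= t:
                        r[:] = [0] * len(r)   # component decoder succeeds
                        changed = True
                if axis == 1:
                    err = [list(c) for c in zip(*rows)]
        return err

    # Three isolated errors are cleared with t = 1 ...
    ok = iterate_decode([[1 if (i, j) in {(0, 0), (2, 3), (4, 1)} else 0
                          for j in range(5)] for i in range(5)], t=1)
    # ... but a 2x2 error block is a stopping set: every affected row and
    # column holds 2 > t errors, so no component decoder can make progress.
    stuck = iterate_decode([[1 if i < 2 and j < 2 else 0
                             for j in range(5)] for i in range(5)], t=1)
    ```

    The density-evolution-style analysis in the paper characterises exactly how many random errors such stopping sets start to appear at, i.e. the decoding threshold.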

  17. Seismic instrumentation for nuclear power plants

    International Nuclear Information System (INIS)

    Senne Junior, M.

    1983-01-01

    A seismic instrumentation system, used in nuclear power plants to monitor the design parameters of the systems, structures and components needed to ensure plant safety against the action of earthquakes, is described. The instrumentation described is based on the nuclear standards in force. The minimum number of sensors and other components used, as well as their general localization, is indicated. The operation of the instrumentation system as a whole and the handling of the recovered data are dealt with accordingly. The various devices used are not covered in detail, except for the accelerometer, which is the basic component of the seismic instrumentation. (Author) [pt

  18. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling or even supra-configuration modelling for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on HULLAC precise energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files even on personal computers in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  19. Simulation of a suite of generic long-pulse neutron instruments to optimize the time structure of the European Spallation Source.

    Science.gov (United States)

    Lefmann, Kim; Klenø, Kaspar H; Birk, Jonas Okkels; Hansen, Britt R; Holm, Sonja L; Knudsen, Erik; Lieutenant, Klaus; von Moos, Lars; Sales, Morten; Willendrup, Peter K; Andersen, Ken H

    2013-05-01

    We here describe the result of simulations of 15 generic neutron instruments for the long-pulsed European Spallation Source. All instruments have been simulated for 20 different settings of the source time structure, corresponding to pulse lengths between 1 ms and 2 ms; and repetition frequencies between 10 Hz and 25 Hz. The relative change in performance with time structure is given for each instrument, and an unweighted average is calculated. The performance of the instrument suite is proportional to (a) the peak flux and (b) the duty cycle to a power of approximately 0.3. This information is an important input to determining the best accelerator parameters. In addition, we find that in our simple guide systems, most neutrons reaching the sample originate from the central 3-5 cm of the moderator. This result can be used as an input in later optimization of the moderator design. We discuss the relevance and validity of defining a single figure-of-merit for a full facility and compare with evaluations of the individual instrument classes.
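    The scaling law reported, suite performance proportional to peak flux times duty cycle to a power of roughly 0.3, can be turned into a relative figure-of-merit for comparing source time structures. The assumption that peak flux scales inversely with duty cycle at fixed time-average source power is added here for illustration and is not stated in the abstract.

    ```python
    def relative_fom(pulse_ms, freq_hz, avg_power=1.0, alpha=0.3):
        """FoM ~ peak_flux * duty_cycle**alpha with alpha ~ 0.3 (from the
        simulations); peak flux modelled as avg_power / duty_cycle
        (fixed time-average power -- an assumption for this sketch)."""
        duty = pulse_ms * 1e-3 * freq_hz
        return (avg_power / duty) * duty ** alpha

    # Under this assumption FoM ~ duty**(alpha - 1), so shorter pulses at
    # lower repetition rate (smaller duty cycle) give a higher relative FoM.
    r_short = relative_fom(pulse_ms=1.0, freq_hz=10.0)   # duty cycle 1%
    r_long = relative_fom(pulse_ms=2.0, freq_hz=25.0)    # duty cycle 5%
    ```

    Such a one-line model is only a summary of the simulated instrument suite, but it is exactly the kind of input the abstract says feeds into choosing the accelerator parameters.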

  20. Simulation of a suite of generic long-pulse neutron instruments to optimize the time structure of the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Lefmann, Kim; Kleno, Kaspar H.; Holm, Sonja L.; Sales, Morten [Nanoscience and eScience Centers, Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O (Denmark); Danish Workpackage for the ESS Design Update Phase, Universitetsparken 5, 2100 Copenhagen O (Denmark); Birk, Jonas Okkels [Nanoscience and eScience Centers, Niels Bohr Institute, University of Copenhagen, Universitetsparken 5, 2100 Copenhagen O (Denmark); Danish Workpackage for the ESS Design Update Phase, Universitetsparken 5, 2100 Copenhagen O (Denmark); Laboratory for Quantum Magnetism, Ecole Polytecnique Federale de Lausanne (EPFL), 1015 Lausanne (Switzerland); Hansen, Britt R.; Knudsen, Erik; Willendrup, Peter K. [Institute of Physics, Technical University of Denmark, 2800 Lyngby (Denmark); Danish Workpackage for the ESS Design Update Phase, 2800 Lyngby (Denmark); Lieutenant, Klaus [Institute for Energy Technology, Instituttveien 18, 2007 Kjeller (Norway); Helmholtz Center for Energy and Materials, Hahn-Meitner Platz, 14109 Berlin (Germany); German Work Package for the ESS Design Update, Hahn-Meitner Platz, 14109 Berlin (Germany); Moos, Lars von [Department of Energy Conversion and Storage, Technical University of Denmark, 4000 Roskilde (Denmark); Danish Workpackage for the ESS Design Update Phase, 2800 Lyngby (Denmark); Institute for Energy Conversion, Technical University of Denmark, 4000 Roskilde (Denmark); Andersen, Ken H. [European Spallation Source ESS AB, 22100 Lund (Sweden)

    2013-05-15

    We here describe the result of simulations of 15 generic neutron instruments for the long-pulsed European Spallation Source. All instruments have been simulated for 20 different settings of the source time structure, corresponding to pulse lengths between 1 ms and 2 ms; and repetition frequencies between 10 Hz and 25 Hz. The relative change in performance with time structure is given for each instrument, and an unweighted average is calculated. The performance of the instrument suite is proportional to (a) the peak flux and (b) the duty cycle to a power of approximately 0.3. This information is an important input to determining the best accelerator parameters. In addition, we find that in our simple guide systems, most neutrons reaching the sample originate from the central 3-5 cm of the moderator. This result can be used as an input in later optimization of the moderator design. We discuss the relevance and validity of defining a single figure-of-merit for a full facility and compare with evaluations of the individual instrument classes.

  1. Simulation of a suite of generic long-pulse neutron instruments to optimize the time structure of the European Spallation Source

    International Nuclear Information System (INIS)

    Lefmann, Kim; Klenø, Kaspar H.; Holm, Sonja L.; Sales, Morten; Birk, Jonas Okkels; Hansen, Britt R.; Knudsen, Erik; Willendrup, Peter K.; Lieutenant, Klaus; Moos, Lars von; Andersen, Ken H.

    2013-01-01

    Here we describe the results of simulations of 15 generic neutron instruments for the long-pulsed European Spallation Source. All instruments have been simulated for 20 different settings of the source time structure, corresponding to pulse lengths between 1 ms and 2 ms and repetition frequencies between 10 Hz and 25 Hz. The relative change in performance with time structure is given for each instrument, and an unweighted average is calculated. The performance of the instrument suite is proportional to (a) the peak flux and (b) the duty cycle to a power of approximately 0.3. This information is an important input for determining the best accelerator parameters. In addition, we find that in our simple guide systems, most neutrons reaching the sample originate from the central 3–5 cm of the moderator. This result can be used as an input in later optimization of the moderator design. We discuss the relevance and validity of defining a single figure-of-merit for a full facility and compare with evaluations of the individual instrument classes.

  2. Empirical validation of the triple-code model of numerical processing for complex math operations using functional MRI and group Independent Component Analysis of the mental addition and subtraction of fractions.

    Science.gov (United States)

    Schmithorst, Vincent J; Brown, Rhonda Douglas

    2004-07-01

    The suitability of a previously hypothesized triple-code model of numerical processing, involving analog magnitude, auditory verbal, and visual Arabic codes of representation, was investigated for the complex mathematical task of the mental addition and subtraction of fractions. Functional magnetic resonance imaging (fMRI) data from 15 normal adult subjects were processed using exploratory group Independent Component Analysis (ICA). Separate task-related components were found with activation in bilateral inferior parietal, left perisylvian, and ventral occipitotemporal areas. These results support the hypothesized triple-code model corresponding to the activated regions found in the individual components and indicate that the triple-code model may be a suitable framework for analyzing the neuropsychological bases of the performance of complex mathematical tasks. Copyright 2004 Elsevier Inc.

  3. MWIR hyperspectral imaging with the MIDAS instrument

    Science.gov (United States)

    Honniball, Casey I.; Wright, Rob; Lucey, Paul G.

    2017-02-01

    Hyperspectral imaging (HSI) in the Mid-Wave InfraRed (MWIR, 3-5 microns) can provide information on a variety of science applications, from determining the chemical composition of lava lakes on Jupiter's moon Io to investigating the amount of carbon liberated into the Earth's atmosphere during a wildfire. The limited signal available in the MWIR presents technical challenges to achieving high signal-to-noise ratios, and therefore it is typically necessary to cryogenically cool MWIR instruments. With recent improvements in microbolometer technology and emerging interferometric techniques, we have shown that uncooled microbolometers coupled with a Sagnac interferometer can achieve high signal-to-noise ratios for long-wave infrared HSI. To explore whether this technique can be applied to the MWIR, this project, with funding from NASA, has built the Miniaturized Infrared Detector of Atmospheric Species (MIDAS). Standard characterization tests are used to compare MIDAS against a cryogenically cooled photon detector to evaluate the MIDAS instrument's ability to quantify gas concentrations. Atmospheric radiative transfer codes are in development to explore the limitations of MIDAS and identify the range of science objectives at which MIDAS will most likely excel. We will simulate science applications with gas cells filled with varying gas concentrations and varying source temperatures to verify our results from lab characterization and our atmospheric modeling code.

  4. Radioactive concrete sources at IRD/CNEN, Brazil, for calibration of uranium exploration and environmental field instruments

    International Nuclear Information System (INIS)

    Barreto, P.M.C.; Campos, C.A.; Malheiros, T.M.M.; Løvborg, L.

    1988-01-01

    A radiometric calibration system consisting of eight radioactive concrete sources was constructed at the Institute of Radiation Protection and Dosimetry (IRD) of the Brazilian Nuclear Energy Commission (CNEN). These sources, simulating rock outcrops, are available to geophysicists interested in uranium exploration and to scientists working with natural radioactivity in environmental research. The sources are cylindrical, 3 m in diameter and 0.5 m thick, each weighing approximately 7.5 tonnes. They are arranged in a circle with, at its centre, a 4 m diameter water pond for cosmic-ray and instrument-noise corrections. Uranium, thorium and potassium ores were added to the concrete under conditions chosen to achieve perfect homogenization. One hundred and four samples were collected and analysed by eight laboratories. In addition, in-situ radiometric grade determinations were performed with calibrated instruments, resulting in a total of 2,100 determinations of U, Th and K, from which the reference values were assigned to each source. With this system it is possible to calculate sensitivity constants and stripping ratios for portable gamma-ray spectrometers. It also provides an excellent means for the calibration of radiation detectors used in environmental monitoring, in which humidity, temperature and an omnidirectional gamma flux similar to the natural environment are simulated. (author) [pt
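
A brief sketch of the stripping-ratio correction that such calibration pads make possible. The window order (Th, U, K), the stripping constants alpha, beta, gamma and the sensitivity values below are invented placeholders for illustration, not IRD/CNEN calibration values.

```python
# Hedged sketch of three-window gamma-spectrometer stripping (hypothetical numbers).
def strip_counts(n_th, n_u, n_k, alpha, beta, gamma):
    """Remove scattered contributions from the lower-energy windows."""
    th = n_th                         # Th window: taken as interference-free here
    u = n_u - alpha * th              # U window: subtract Th scatter
    k = n_k - beta * th - gamma * u   # K window: subtract Th and U scatter
    return th, u, k

th, u, k = strip_counts(120.0, 95.0, 300.0, alpha=0.4, beta=0.8, gamma=0.9)
# Grades then follow from window sensitivities (counts per unit grade), e.g.:
grades = (th / 2.0, u / 1.5, k / 10.0)  # hypothetical sensitivity constants
```

The pads provide exactly the known-grade sources needed to solve for such constants in the first place.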

  5. Structural integrity assessment of a pressure container component. Design and service code implementation. Case studies

    International Nuclear Information System (INIS)

    Sanzi, H.C.

    2006-01-01

    In the present work, the most important results for the local stresses in pipes with an axial through-wall crack (outer), produced during operation of a petrochemical plant, obtained using the finite element method, are presented. As requested, the component has been verified by a 3D FE plastic analysis under the postulated failure loading, a method that assures a high degree of accuracy in the results. Design and service codes, such as ASME Section VIII Div. 2 and API 579, have been used in the analysis. (author) [es

  6. A modular simulation code applied to pressurized water nuclear power plants

    International Nuclear Information System (INIS)

    Agnoux, D.

    1992-01-01

    Analysis of the overall operation of an installation requires taking into account all couplings between the various components and integrating all the automatic actions initiated by control and instrumentation. The tool used for this analysis must be a high-performance simulation model, flexible enough to be quickly adapted to varying configurations. In order to study the behaviour of PWR nuclear power stations during normal or incident operating transients, EDF-SEPTEN has developed the ERABLE code (Etudes Reacteurs a Base LEGO), based on the LEGO software package. (author)

  7. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, of both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
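
A short sketch of the girth-4 condition this record is concerned with: in a Tanner graph, a length-4 cycle exists if and only if two rows of the parity-check matrix H share at least two columns. The function and the toy matrices below are illustrative, not the paper's jointly designed codes.

```python
# Hedged sketch: detect length-4 cycles in a binary parity-check matrix,
# given as a list of 0/1 rows (hypothetical toy matrices).
from itertools import combinations

def has_girth4_cycle(H):
    rows = [set(j for j, bit in enumerate(row) if bit) for row in H]
    return any(len(r1 & r2) >= 2 for r1, r2 in combinations(rows, 2))

H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0]]        # two rows share columns 0 and 1 -> 4-cycle
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]       # no pair of rows shares two columns
```

A joint design of the source and relay codes, as described above, amounts to making this check fail on the equivalent combined parity-check matrix.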

  8. Code of Conduct for wind-power projects - Feasibility study; Code of Conduct fuer windkraftprojekte. Machbarkeitsstudie - Schlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Strub, P. [Pierre Strub, freischaffender Berater, Binningen (Switzerland); Ziegler, Ch. [Inter Act, Basel (Switzerland)

    2009-02-15

    This final report presents the results of a feasibility study concerning the development of a Code of Conduct for wind-power projects, the aim being to strengthen public acceptance of wind power. The necessity of new, voluntary market instruments is discussed. Development in this area is judged to be urgent, and the authors consider the feasibility of defining a code of conduct to be proven. According to the authors, the code of conduct can be of use at various levels, but primarily in project development. Further free-enterprise instruments are also suggested that should help support socially compatible and successful market development. It is noted that the predominant portion of those questioned are prepared to co-operate in further work on the subject.

  9. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer of Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  10. Correspondence between audio and visual deep models for musical instrument detection in video recordings

    OpenAIRE

    Slizovskaia, Olga; Gómez, Emilia; Haro, Gloria

    2017-01-01

    This work aims at investigating cross-modal connections between audio and video sources in the task of musical instrument recognition. We also address in this work the understanding of the representations learned by convolutional neural networks (CNNs) and we study feature correspondence between audio and visual components of a multimodal CNN architecture. For each instrument category, we select the most activated neurons and investigate existing cross-correlations between neurons from the ...

  11. Code of practice and design principles, for installed radiological protection systems

    International Nuclear Information System (INIS)

    Powell, R.G.

    1980-09-01

    The main points on which a guide for designers and installers of radiological protection instrumentation (RPI) should be based have been examined by a small group of instrumentation engineers. The purpose of this document is to present a comprehensive and detailed review of these points. It is intended to give an overall coverage and serve as a reference document for specific points; it should also be of value to newcomers to the RPI field. The code presents a standard of good practice and takes the form of recommendations only. The contents cover: the requirement for RPI; design, availability and reliability, information displays, human factors, power supplies, manufacture, quality assurance, testing, cost, installation, operation, maintenance and documentation. Appendices include: Radon and thoron decay series, air sampling, reliability of component combinations and redundancy. (28 references). (author)
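
The appendix on the reliability of component combinations and redundancy mentioned in this record rests on standard availability arithmetic, sketched below. The function and the numeric values are illustrative assumptions, not figures from the code of practice.

```python
# Hedged sketch: availability of m-out-of-n identical, independent channels,
# each with availability p (binomial model; numbers are illustrative only).
from math import comb

def redundant_availability(p, n, m):
    """Probability that at least m of n independent channels are working."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(m, n + 1))

# 1-out-of-2 redundancy improves on a single channel with p = 0.95:
one_of_two = redundant_availability(0.95, 2, 1)   # 1 - 0.05**2 = 0.9975
```

This is why duplicated channels are a common recommendation for safety-related instrumentation.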

  12. Creating and purifying an observation instrument using the generalizability theory

    Directory of Open Access Journals (Sweden)

    Elena Rodríguez-Naveiras

    2013-12-01

    Full Text Available Quality control of data is one of the most relevant aspects of observational research. Generalizability Theory (GT) provides a method of analysis that allows us to isolate the various sources of measurement error and, at the same time, to determine the extent to which various facets can be varied and to analyze their effect on the generalizability coefficient. The work presented here comprises two studies aimed at creating and refining an observation instrument, the Observation Protocol for Teaching Functions (Protocolo de Funciones Docentes, PROFUNDO, v1 and v2), for behavioral assessment carried out by instructors in a social-affective out-of-school program. Reliability and homogeneity studies were carried out once the instrument had been created and refined. The reliability study was done through the GT method, taking both codes (C) and agents (A) as differentiation facets and generalizing across observers, using a crossed multi-faceted design (A × O × C). In the homogeneity study, generalization was done across codes using the same design as in the reliability study.

  13. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.
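
A toy illustration of the equivalent-circuit idea in this record, in its simplest form: the RF structure reduced to a parallel RLC circuit driven by a beam current. The component values, drive amplitude and explicit-Euler time stepping are invented for illustration and are much cruder than what a PIC code would use.

```python
# Hedged sketch: parallel RLC "cavity" driven by a beam current I_b(t),
# integrating  C dV/dt = I_b - V/R - I_L  and  L dI_L/dt = V  with Euler steps.
import math

def drive_cavity(R, L, C, beam_current, dt, steps):
    V, I_L, history = 0.0, 0.0, []
    for n in range(steps):
        I_b = beam_current(n * dt)
        V += dt * (I_b - V / R - I_L) / C
        I_L += dt * V / L
        history.append(V)
    return history

w0 = 2 * math.pi * 1e9                       # hypothetical 1 GHz resonance
C = 1e-12
L = 1.0 / (w0 ** 2 * C)                      # choose L to hit the resonance
V = drive_cavity(1e4, L, C, lambda t: 1e-3 * math.sin(w0 * t),
                 dt=1e-12, steps=2000)
```

In an actual PIC simulation the drive term would come from the beam-field energy transfer in the drift space rather than a prescribed sinusoid.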

  14. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  15. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross-field devices (magnetrons, cross-field amplifiers, etc.) and pencil-beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  16. Data from the Mars Science Laboratory CheMin XRD/XRF Instrument

    Science.gov (United States)

    Vaniman, David; Blake, David; Bristow, Tom; DesMarais, David; Achilles, Cherie; Anderson, Robert; Crisp, Joy; Morookian, John Michael; Spanovich, Nicole; Vasavada, Ashwin

    2013-01-01

    The CheMin instrument on the Mars Science Laboratory (MSL) rover Curiosity uses a Co tube source and a CCD detector to acquire mineralogy from diffracted primary X-rays and chemical information from fluoresced X-rays. CheMin has been operating at the MSL Gale Crater field site since August 5, 2012 and has provided the first X-ray diffraction (XRD) analyses in situ on a body beyond Earth. Data from the first sample collected, the Rocknest eolian soil, identify a basaltic mineral suite, predominantly plagioclase (approx.An50), forsteritic olivine (approx.Fo58), augite and pigeonite, consistent with expectation that detrital grains on Mars would reflect widespread basaltic sources. Minor phases (each XRD. This amorphous component is attested to by a broad rise in background centered at approx.27deg 2(theta) (Co K(alpha)) and may include volcanic glass, impact glass, and poorly crystalline phases including iron oxyhydroxides; a rise at lower 2(theta) may indicate allophane or hisingerite. Constraints from phase chemistry of the crystalline components, compared with a Rocknest bulk composition from the APXS instrument on Curiosity, indicate that in sum the amorphous or poorly crystalline components are relatively Si, Al, Mg-poor and enriched in Ti, Cr, Fe, K, P, S, and Cl. All of the identified crystalline phases are volatile-free; H2O, SO2 and CO2 volatile releases from a split of this sample analyzed by the SAM instrument on Curiosity are associated with the amorphous or poorly ordered materials. The Rocknest eolian soil may be a mixture of local detritus, mostly crystalline, with a regional or global set of dominantly amorphous or poorly ordered components. The Rocknest sample was targeted by MSL for "first time analysis" to demonstrate that a loose deposit could be scooped, sieved to <150 microns, and delivered to instruments in the body of the rover. A drilled sample of sediment in outcrop is anticipated. 
At the time of writing this abstract, promising outcrops are

  17. Benchmarking shielding simulations for an accelerator-driven spallation neutron source

    Directory of Open Access Journals (Sweden)

    Nataliia Cherkashyna

    2015-08-01

    Full Text Available The shielding at an accelerator-driven spallation neutron facility plays a critical role in the performance of the neutron scattering instruments, the overall safety, and the total cost of the facility. Accurate simulation of shielding components is thus key for the design of upcoming facilities, such as the European Spallation Source (ESS), currently under construction in Lund, Sweden. In this paper, we present a comparative study between the measured and the simulated neutron background at the Swiss Spallation Neutron Source (SINQ) at the Paul Scherrer Institute (PSI), Villigen, Switzerland. The measurements were carried out at several positions along the SINQ monolith wall with the neutron dosimeter WENDI-2, which has a well-characterized response up to 5 GeV. The simulations were performed using the Monte-Carlo radiation transport code geant4, and include the complete transport from the proton beam to the measurement locations in a single calculation. Agreement between measurements and simulations is within about a factor of 2 for the points where the measured radiation dose is above the background level, which is a satisfactory result for simulations spanning many energy regimes, different physics processes and transport through several meters of shielding materials. The neutrons contributing to the radiation field emanating from the monolith were confirmed to originate from neutrons with energies above 1 MeV in the target region. The current work validates geant4 as being well suited for deep-shielding calculations at accelerator-based spallation sources. We also extrapolate what the simulated flux levels might imply for short (several tens of meters) instruments at ESS.
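
A minimal sketch of the measurement-to-simulation comparison described in this record: flag positions where the simulated dose rate agrees with the measurement to within a factor of 2. The position names and dose values below are made up for illustration; they are not SINQ data.

```python
# Hedged sketch with hypothetical dose rates (Sv/h) at two wall positions.
def within_factor(measured, simulated, factor=2.0):
    """Map each position to True if sim/meas lies within [1/factor, factor]."""
    return {pos: 1.0 / factor <= simulated[pos] / measured[pos] <= factor
            for pos in measured}

measured = {"wall_A": 4.0e-6, "wall_B": 9.0e-7}   # invented values
simulated = {"wall_A": 6.5e-6, "wall_B": 3.0e-6}
flags = within_factor(measured, simulated)
```

Factor-of-N agreement is the natural metric here because deep-shielding attenuation spans many orders of magnitude.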

  18. Code of a Tokamak Fusion Energy Facility ITER

    International Nuclear Information System (INIS)

    Yasuhide Asada; Kenzo Miya; Kazuhiko Hada; Eisuke Tada

    2002-01-01

    The technical structural code for ITER (International Thermonuclear Experimental Reactor) and, more generically, for D-T burning fusion power facilities (hereafter, the Fusion Code) should be innovative because the safety features and mechanical components of such facilities differ markedly from those of nuclear fission reactors, and because several new fabrication and examination technologies must be introduced. The introduction of newly developed technologies, such as inspection-free automatic welding, into the Fusion Code is rationalized by a pilot application of a new code concept, the "system-based code for integrity". This concept integrates the technical elements necessary for the construction, operation and maintenance of mechanical components of fusion power facilities into a single system, so as to optimize the total margin of these components. Unique and innovative items of the Fusion Code typically include: - use of non-metals; - cryogenic applications; - new design margins on allowable stresses, and other new design rules; - use of inspection-free automatic welding, and other newly developed fabrication technologies; - a graded quality assurance approach covering radiological safety-system components as well as non-safety-system components; - consideration of replacement components. (authors)

  19. Neutronics of the IFMIF neutron source: development and analysis

    International Nuclear Information System (INIS)

    Wilson, P.P.H.

    1999-01-01

    The accurate analysis of this system required the development of a code system and methodology capable of modelling the various physical processes. A generic code system for the neutronics analysis of neutron sources has been created by loosely integrating existing components with new developments: the data processing code NJOY, the Monte Carlo neutron transport code MCNP, and the activation code ALARA were supplemented by a damage data processing program, damChar, and integrated with a number of flexible and extensible modules for the Perl scripting language. Specific advances were required to apply this code system to IFMIF. Based on the ENDF-6 data format requirements of this system, new data evaluations have been implemented for neutron transport and activation. Extensive analysis of the Li(d, xn) reaction has led to a new MCNP source function module, McDeLi, based on physical reaction models and capable of accurate and flexible modelling of the IFMIF neutron source term. In-depth analyses of the neutron flux spectra and spatial distribution throughout the high flux test region permitted a basic validation of the tools and data. The understanding of the features of the neutron flux provided a foundation for the analyses of the other neutron responses. (orig./DGE) [de

  20. Bayesian component separation: The Planck experience

    Science.gov (United States)

    Wehus, Ingunn Kathrine; Eriksen, Hans Kristian

    2018-05-01

    Bayesian component separation techniques have played a central role in the data reduction process of Planck. The most important strength of this approach is its global nature, in which a parametric and physical model is fitted to the data. Such physical modeling allows the user to constrain very general data models, and jointly probe cosmological, astrophysical and instrumental parameters. This approach also supports statistically robust goodness-of-fit tests in terms of data-minus-model residual maps, which are essential for identifying residual systematic effects in the data. The main challenges are high code complexity and computational cost. Whether or not these costs are justified for a given experiment depends on its final uncertainty budget. We therefore predict that the importance of Bayesian component separation techniques is likely to increase with time for intensity mapping experiments, similar to what has happened in the CMB field, as observational techniques mature, and their overall sensitivity improves.

  1. Development and application of an educational 3D X-ray CT instrument

    International Nuclear Information System (INIS)

    Arakawa, Etsuo; Iwami, Ryutaro; Motohisa, Yasuko; Kamezawa, Chika; Kamogawa, Masashi; Voegeli, Wolfgang

    2016-01-01

    A three-dimensional (3D) X-ray computed tomography (CT) instrument for radiation education was developed. The structure of the instrument is such that the main parts, i.e. the X-ray source, specimen rotation stage, and two-dimensional detector, can be easily observed. An experiment using a green pepper fruit as a specimen was performed. CT images and the intermediate steps for obtaining them, i.e. radiographs, sinograms after the Radon transform, and the real and imaginary parts of the Fourier components in reciprocal space during the inverse Radon transform, are shown. We propose that these images will help students to visually understand the principle and mechanism of X-ray CT instruments. (author)

  2. FRESCO, a simplified code for cost analysis of fusion power plants

    International Nuclear Information System (INIS)

    Bustreo, C.; Casini, G.; Zollino, G.; Bolzonella, T.; Piovan, R.

    2013-01-01

    Highlights: • FRESCO is a code for rapid evaluation of the cost of electricity of a fusion power plant. • Parameters of the basic machine and unitary costs of components derived from ITER. • Power production components and plant power balance are extrapolated from PPCS. • A special effort is made in the investigation of the pulsed operation scenarios. • Technical and economical FRESCO results are compared with those of two PPCS models. -- Abstract: FRESCO (Fusion REactor Simplified COsts) is a code based on simplified models of physics, engineering and economical aspects of a TOKAMAK-like pulsed or steady-state fusion power plant. The experience coming from various aspects of ITER design, including selection of materials and operating scenarios, is exploited as much as possible. Energy production and plant power balance, including the recirculation requirements, are derived from two models of the PPCS European study, the helium cooled lithium/lead blanket model reactor (model AB) and the helium cooled ceramic one (model B). A detailed study of the availability of the power plant due, among others, to the replacement of plasma facing components, is also included in the code. The economics of the fusion power plant is evaluated through the levelized cost approach. Costs of the basic components are scaled from the corresponding values of the ITER project, the ARIES studies and SCAN model. The costs of plant auxiliaries, including those of the magnetic and electric systems, tritium plants, instrumentation, buildings and thermal energy storage if any, are recovered from ITER values and from those of other power plants. Finally, the PPCS models AB and B are simulated and the main results are reported in this paper
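
A brief sketch of the levelized-cost approach mentioned in this record. The capital-recovery formula is the standard one; every numeric input below is a placeholder, not a FRESCO or PPCS value.

```python
# Hedged sketch of a levelized cost of electricity (currency units per MWh),
# with availability entering through the annual energy produced.
def levelized_cost(capital, annual_om, discount_rate, lifetime_years,
                   net_power_mw, availability):
    r, n = discount_rate, lifetime_years
    crf = r * (1 + r) ** n / ((1 + r) ** n - 1)   # capital recovery factor
    annual_cost = capital * crf + annual_om
    annual_mwh = net_power_mw * 8760 * availability
    return annual_cost / annual_mwh

coe = levelized_cost(capital=5e9, annual_om=1e8, discount_rate=0.07,
                     lifetime_years=40, net_power_mw=1500, availability=0.75)
```

The availability term is why the record's detailed treatment of plasma-facing-component replacement feeds directly into the cost of electricity.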

  3. FIRAC: a computer code to predict fire-accident effects in nuclear facilities

    International Nuclear Information System (INIS)

    Bolstad, J.W.; Krause, F.R.; Tang, P.K.; Andrae, R.W.; Martin, R.A.; Gregory, W.S.

    1983-01-01

    FIRAC is a medium-sized computer code designed to predict fire-induced flows, temperatures, and material transport within the ventilating systems and other airflow pathways in nuclear-related facilities. The code is designed to analyze the behavior of interconnected networks of rooms and typical ventilation system components. This code is one in a family of computer codes that is designed to provide improved methods of safety analysis for the nuclear industry. The structure of this code closely follows that of the previously developed TVENT and EVENT codes. Because a lumped-parameter formulation is used, this code is particularly suitable for calculating the effects of fires in the far field (that is, in regions removed from the fire compartment), where the fire may be represented parametrically. However, a fire compartment model to simulate conditions in the enclosure is included. This model provides transport source terms to the ventilation system that can affect its operation and in turn affect the fire

  4. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  5. IB: A Monte Carlo simulation tool for neutron scattering instrument design under PVM and MPI

    International Nuclear Information System (INIS)

    Zhao Jinkui

    2011-01-01

    Design of modern neutron scattering instruments relies heavily on Monte Carlo simulation tools for optimization. IB is one such tool written in C++ and implemented under Parallel Virtual Machine and the Message Passing Interface. The program was initially written for the design and optimization of the EQ-SANS instrument at the Spallation Neutron Source. One of its features is the ability to group simple instrument components into more complex ones at the user input level, e.g. grouping neutron mirrors into neutron guides and curved benders. The simulation engine manages the grouped components such that neutrons entering a group are properly operated upon by all components, multiple times if needed, before exiting the group. Thus, only a few basic optical modules are needed at the programming level. For simulations that require higher computer speeds, the program can be compiled and run in parallel modes using either the PVM or the MPI architectures.

  6. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and to the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  7. THYDE-P2 code: RCS (reactor-coolant system) analysis code

    International Nuclear Information System (INIS)

    Asahi, Yoshiro; Hirano, Masashi; Sato, Kazuo

    1986-12-01

THYDE-P2, characterized by its new thermal-hydraulic network model, is applicable to the analysis of RCS behavior in response to various disturbances, including an LB (large break)-LOCA (loss-of-coolant accident). In LB-LOCA analysis, THYDE-P2 is capable of calculating through from accident initiation to complete reflooding of the core without an artificial change in the methods and models. The first half of the report describes the methods and models used in the THYDE-P2 code, i.e., (1) the thermal-hydraulic network model, (2) the various RCS component models, (3) the heat sources in fuel, (4) the heat transfer correlations, (5) the mechanical behavior of clad and fuel, and (6) the steady state adjustment. The second half of the report is the user's manual for the THYDE-P2 code (version SV04L08A), containing the following items: (1) the program control, (2) the input requirements, (3) the execution of a THYDE-P2 job, (4) the output specifications, and (5) a sample problem to demonstrate, among other things, the capability of the thermal-hydraulic network model. (author)

  8. New GOES satellite synchronized time code generation

    Science.gov (United States)

    Fossler, D. E.; Olson, R. K.

    1984-01-01

    The TRAK Systems' GOES Satellite Synchronized Time Code Generator is described. TRAK Systems has developed this timing instrument to supply improved accuracy over most existing GOES receiver clocks. A classical time code generator is integrated with a GOES receiver.

  9. Novel mass spectrometric instrument for gaseous and particulate characterization and monitoring

    International Nuclear Information System (INIS)

    Coggiola, M.J.

    1993-04-01

Under contract DE-AC21-92MC29116, SRI International will develop a unique new instrument that will be capable of providing real-time (<1 minute), quantitative, chemical characterization of gaseous and particulate pollutants generated from DOE waste cleanup activities. The instrument will be capable of detecting and identifying volatile organic compounds, polynuclear aromatic hydrocarbons, heavy metals, and transuranic species released during waste cleanup activities. The instrument will be unique in its ability to detect and quantify in real-time these diverse pollutants in both vapor and particulate form. The instrument to be developed under this program will consist of several major components: (1) an isokinetic sampler capable of operating over a wide range of temperatures (up to 500 K) and flow rates; (2) a high pressure to low pressure transition and sampling region that efficiently separates particles from vapor-phase components for separate, parallel analyses; (3) two small mass spectrometers, one optimized for organic analysis using a unique field ionization source and one optimized for particulate characterization using thermal pyrolysis and electron-impact ionization (EI); and (4) a powerful personal computer for control and data acquisition

  10. On-line monitoring and inservice inspection in codes

    International Nuclear Information System (INIS)

    Bartonicek, J.; Zaiss, W.; Bath, H.R.

    1999-01-01

The relevant regulatory codes determine the ISI tasks and the time intervals for recurrent component testing for evaluation of operation-induced damage or ageing, in order to ensure component integrity on the basis of the latest available quality data. In-service quality monitoring is carried out through on-line monitoring and recurrent testing. The requirements defined by the engineering codes elaborated by various institutions are comparable, with the KTA nuclear engineering and safety codes being the most complete provisions for quality evaluation and assurance after different, defined service periods. German conventional codes for assuring component integrity provide exclusively for recurrent inspection regimes (mainly pressure tests and optical testing). The requirements defined in the KTA codes, however, have always demanded more specific inspections relying on recurrent testing as well as on-line monitoring. Foreign codes for ensuring component integrity concentrate on NDE tasks at regular time intervals, with the time intervals and scope of testing activities being defined on the basis of the ASME code, section XI. (orig./CB) [de

  11. Codes maintained by the LAACG [Los Alamos Accelerator Code Group] at the NMFECC

    International Nuclear Information System (INIS)

    Wallace, R.; Barts, T.

    1990-01-01

    The Los Alamos Accelerator Code Group (LAACG) maintains two groups of design codes at the National Magnetic Fusion Energy Computing Center (NMFECC). These codes, principally electromagnetic field solvers, are used for the analysis and design of electromagnetic components for accelerators, e.g., magnets, rf structures, pickups, etc. In this paper, the status and future of the installed codes will be discussed with emphasis on an experimental version of one set of codes, POISSON/SUPERFISH
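
Field solvers such as those maintained by the LAACG ultimately compute electrostatic or magnetostatic potentials on a mesh. As a hedged illustration of the underlying numerics only, the sketch below uses Jacobi relaxation of Laplace's equation on a small grid; this is a toy stand-in, not the actual algorithm of POISSON or SUPERFISH:

```python
# Jacobi relaxation for Laplace's equation on a square grid: a toy
# stand-in for the kind of electromagnetic field solve performed by
# codes such as POISSON (not their actual algorithm).

def solve_laplace(n=20, top=1.0, iters=2000):
    """Potential on an n x n grid; top edge held at `top`, others at 0."""
    phi = [[0.0] * n for _ in range(n)]
    for j in range(n):
        phi[0][j] = top  # Dirichlet boundary: fixed potential on top edge
    for _ in range(iters):
        new = [row[:] for row in phi]
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # each interior point relaxes to the average of its neighbors
                new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j]
                                    + phi[i][j - 1] + phi[i][j + 1])
        phi = new
    return phi

field = solve_laplace()
# Maximum principle: interior values lie strictly between boundary values.
assert all(0.0 < field[i][j] < 1.0
           for i in range(1, 19) for j in range(1, 19))
```

Production solvers use far faster methods (successive over-relaxation, multigrid) and irregular meshes, but the boundary-value structure is the same.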

  12. Recent developments for the HEADTAIL code: updating and benchmarks

    CERN Document Server

    Quatraro, D; Salvant, B

    2010-01-01

    The HEADTAIL code models the evolution of a single bunch interacting with a localized impedance source or an electron cloud, optionally including space charge. The newest version of HEADTAIL relies on a more detailed optical model of the machine taken from MAD-X and is more flexible in handling and distributing the interaction and observation points along the simulated machine. In addition, the option of the interaction with the wake field of specific accelerator components has been added, such that the user can choose to load dipolar and quadrupolar components of the wake from the impedance database ZBASE. The case of a single LHC-type bunch interacting with the realistic distribution of the kicker wake fields inside the SPS has been successfully compared with a single integrated beta-weighted kick per turn. The current version of the code also contains a new module for the longitudinal dynamics to calculate the evolution of a bunch inside an accelerating bucket.

  13. Instrumentation for two-phase flow measurements in code verification experiments

    International Nuclear Information System (INIS)

    Fincke, J.R.; Anderson, J.L.; Arave, A.E.; Deason, V.A.; Lassahn, G.D.; Goodrich, L.D.; Colson, J.B.; Fickas, E.T.

    1981-01-01

    The development of instrumentation and techniques for the measurement of mass flow rate in two-phase flows conducted at the Idaho National Engineering Laboratory during the past year is briefly described. Instruments discussed are the modular drag-disc turbine transducer, the gamma densitometers, the ultrasonic densitometer, Pitot tubes, and full-flow drag screens. Steady state air-water and transient steam-water data are presented
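
The drag-disc turbine combination works because the two sensors respond to different moments of the flow: the drag disc responds to momentum flux (rho * V^2) and the turbine to velocity (V), so their ratio yields mass flux. A minimal sketch of that combination, with calibration constants and two-phase slip effects deliberately ignored:

```python
# Combining drag-disc and turbine readings to estimate mass flow rate.
# Drag disc ~ rho * V^2 (momentum flux), turbine ~ V (velocity), so the
# ratio gives mass flux rho * V. Illustrative only; real transducers need
# calibration and two-phase corrections.

def mass_flow_rate(drag_signal, turbine_velocity, pipe_area):
    """drag_signal ~ rho*V^2 [kg/(m*s^2)], turbine_velocity ~ V [m/s],
    pipe_area [m^2] -> mass flow rate [kg/s]."""
    if turbine_velocity <= 0:
        raise ValueError("turbine velocity must be positive")
    mass_flux = drag_signal / turbine_velocity  # rho * V
    return mass_flux * pipe_area

# Example: water-like flow, rho = 1000 kg/m^3, V = 2 m/s, 0.01 m^2 pipe
mdot = mass_flow_rate(drag_signal=1000 * 2**2, turbine_velocity=2.0,
                      pipe_area=0.01)
# -> 1000 * 2 * 0.01 = 20 kg/s
```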

  14. Heat source component development program. Report for July--December 1978

    International Nuclear Information System (INIS)

    Foster, E.L. Jr.

    1979-01-01

    This is the seventh of a series of reports describing the results of several analytical and experimental programs being conducted at Battelle-Columbus Laboratories to develop components for advanced radioisotope heat source applications. The heat sources will for the most part be used in advanced static and dynamic power conversion systems. Battelle's support of LASL during the current reporting period has been to determine the operational and reentry response of selected heat source trial designs, and their thermal response to a space shuttle solid propellant fire environment. Thermal, ablation, and thermal stress analyses were conducted using two-dimensional modeling techniques previously employed for the analysis of the earlier trial design versions, and modified in part to improve the modeling accuracy. Further modifications were made to improve the modeling accuracy as described herein. Thermal, ablation, and thermal stress analyses were then conducted for the trial design selected by LASL/DOE for more detailed studies using three-dimensional modeling techniques

  15. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
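
The rate-distortion function that anchors this analogy can be computed numerically for a discrete memoryless source with the standard Blahut-Arimoto iteration, in which the slope parameter s plays a role analogous to the contracting force. A minimal sketch (not taken from the paper):

```python
import math

# Blahut-Arimoto iteration for the rate-distortion function R(D) of a
# discrete memoryless source. The slope parameter s < 0 traces out the
# R(D) curve parametrically. Illustrative sketch only.

def blahut_arimoto_rd(p, d, s, iters=500):
    """p: source distribution, d[x][y]: distortion matrix, s < 0: slope.
    Returns (D, R) with R in nats."""
    nx, ny = len(p), len(d[0])
    q = [1.0 / ny] * ny  # reproduction distribution, start uniform
    for _ in range(iters):
        c = [sum(q[y] * math.exp(s * d[x][y]) for y in range(ny))
             for x in range(nx)]
        q = [q[y] * sum(p[x] * math.exp(s * d[x][y]) / c[x]
                        for x in range(nx))
             for y in range(ny)]
    c = [sum(q[y] * math.exp(s * d[x][y]) for y in range(ny))
         for x in range(nx)]
    D = sum(p[x] * q[y] * math.exp(s * d[x][y]) * d[x][y] / c[x]
            for x in range(nx) for y in range(ny))
    R = s * D - sum(p[x] * math.log(c[x]) for x in range(nx))
    return D, R

# Sanity check: binary symmetric source with Hamming distortion, where
# the closed form is R(D) = ln 2 - Hb(D).
D, R = blahut_arimoto_rd([0.5, 0.5], [[0, 1], [1, 0]], s=-2.0)
Hb = -D * math.log(D) - (1 - D) * math.log(1 - D)
assert abs(R - (math.log(2) - Hb)) < 1e-9
```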

  16. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.

  17. MPEG-compliant joint source/channel coding using discrete cosine transform and substream scheduling for visual communication over packet networks

    Science.gov (United States)

    Kim, Seong-Whan; Suthaharan, Shan; Lee, Heung-Kyu; Rao, K. R.

    2001-01-01

Quality of Service (QoS) guarantees in real-time communication are critically important for multimedia applications. An architectural framework for multimedia networks based on substreams or flows is effectively exploited for combining source and channel coding for multimedia data. But the existing frame-by-frame approach, which includes the Moving Pictures Expert Group (MPEG) formats, cannot be neglected because it is a standard. In this paper, first, we designed an MPEG transcoder which converts an MPEG coded stream into variable rate packet sequences to be used for our joint source/channel coding (JSCC) scheme. Second, we designed a classification scheme to partition the packet stream into multiple substreams which have their own QoS requirements. Finally, we designed a management (reservation and scheduling) scheme for substreams to support better perceptual video quality such as a bound on end-to-end jitter. We have shown by simulation and real video experiments in a TCP/IP environment that our JSCC scheme outperforms two other popular techniques.

  18. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof.
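
The low-field-size operation at intermediate nodes described in this record can be illustrated with plain GF(2) random linear network coding, where coded packets are XOR combinations of source packets and intermediate nodes recode without decoding. A minimal sketch of that principle only; it is not the Fulcrum codec itself, and the coefficient masks and payload values are made up for illustration:

```python
# Minimal linear network coding over GF(2): coded packets are XOR
# combinations of source packets, decoded by Gauss-Jordan elimination.
# This sketches only the low-field-size arithmetic used inside the
# network, not the Fulcrum high-field expansion.

def encode(sources, masks):
    """One coded packet per coefficient bitmask: XOR of selected sources."""
    coded = []
    for mask in masks:
        payload = 0
        for i, s in enumerate(sources):
            if mask >> i & 1:
                payload ^= s
        coded.append((mask, payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan over GF(2); returns the sources, or None if rank < k."""
    basis = {}  # pivot bit -> (coefficient mask, payload)
    for mask, payload in coded:
        for i, (m, p) in basis.items():      # reduce by existing pivots
            if mask >> i & 1:
                mask, payload = mask ^ m, payload ^ p
        if mask == 0:
            continue                          # linearly dependent packet
        pivot = (mask & -mask).bit_length() - 1
        for i, (m, p) in list(basis.items()):  # clear new pivot elsewhere
            if m >> pivot & 1:
                basis[i] = (m ^ mask, p ^ payload)
        basis[pivot] = (mask, payload)
    if len(basis) < k:
        return None
    return [basis[i][1] for i in range(k)]

sources = [0xDEAD, 0xBEEF, 0xCAFE, 0xF00D]
packets = encode(sources, [0b0001, 0b0011, 0b0111, 0b1111])
assert decode(packets, 4) == sources

# An intermediate node can recode without decoding: XOR two coded packets.
mixed = (packets[0][0] ^ packets[3][0], packets[0][1] ^ packets[3][1])
assert decode([mixed, packets[1], packets[2], packets[3]], 4) == sources
```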

  19. Performance Tuning of x86 OpenMP Codes with MAQAO

    Science.gov (United States)

    Barthou, Denis; Charif Rubial, Andres; Jalby, William; Koliai, Souad; Valensi, Cédric

Failing to find the best optimization sequence for a given application can lead to compiler-generated code with poor performance or inappropriate code. It is necessary to analyze performance from the generated assembly code to improve over the compilation process. This paper presents a tool for the performance analysis of multithreaded codes (OpenMP programs are supported at the moment). MAQAO relies on static performance evaluation to identify compiler optimizations and assess the performance of loops. It exploits static binary rewriting for reading and instrumenting object files or executables. Static binary instrumentation allows the insertion of probes at the instruction level. Memory accesses can be captured to help tune the code, but such traces must be compressed. MAQAO can analyze the results and provide hints for tuning the code. We show on some examples how this can help users improve their OpenMP applications.

  20. Labview Interface Concepts Used in NASA Scientific Investigations and Virtual Instruments

    Science.gov (United States)

    Roth, Don J.; Parker, Bradford H.; Rapchun, David A.; Jones, Hollis H.; Cao, Wei

    2001-01-01

This article provides an overview of several software control applications developed for NASA using LabVIEW. The applications covered here include (1) an Ultrasonic Measurement System for nondestructive evaluation of advanced structural materials, (2) an X-ray Spectral Mapping System for characterizing the quality and uniformity of developing photon detector materials, (3) a Life Testing System for these same materials, and (4) the instrument panel for an aircraft-mounted Cloud Absorption Radiometer that measures the light scattered by clouds in multiple spectral bands. Many of the software interface concepts employed are explained. Panel layout and block diagram (code) strategies for each application are described. In particular, some of the more unique features of the applications' interfaces and source code are highlighted. This article assumes that the reader has a beginner-to-intermediate understanding of LabVIEW methods.

  1. Troubleshooting in nuclear instruments

    International Nuclear Information System (INIS)

    1987-06-01

    This report on troubleshooting of nuclear instruments is the product of several scientists and engineers, who are closely associated with nuclear instrumentation and with the IAEA activities in the field. The text covers the following topics: Preamplifiers, amplifiers, scalers, timers, ratemeters, multichannel analyzers, dedicated instruments, tools, instruments, accessories, components, skills, interfaces, power supplies, preventive maintenance, troubleshooting in systems, radiation detectors. The troubleshooting and repair of instruments is illustrated by some real examples

  2. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to the coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algorithms...

  3. Fusion-component lifetime analysis

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1982-09-01

A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modeling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The individual coefficients within the equations are different for each material. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO and to analyze the limiter for FED/INTOR

  4. Current Status of the Elevated Temperature Structure Design Codes for VHTR

    International Nuclear Information System (INIS)

    Kim, Jong-Bum; Kim, Seok-Hoon; Park, Keun-Bae; Lee, Won-Jae

    2006-01-01

Elevated temperature structure design and analysis is one of the key issues in the VHTR (Very High Temperature Reactor) project to achieve economic production of hydrogen, which will be an essential energy source in the near future. Since the operating temperature of a VHTR is above 850 °C, existing codes and standards are insufficient for high temperature structure design. Thus the issues concerning material selection and behavior are being studied for the main structural components of a VHTR in leading countries such as the US, France, the UK, and Japan. In this study, the current status of the ASME code, the French RCC-MR, the UK R5, and the Japanese code was investigated and the necessary R and D items were discussed

  5. Open Source AV solution supporting In Situ Simulation

    DEFF Research Database (Denmark)

    Krogh, Kristian; Pociunas, Gintas; Dahl, Mads Ronald

… the software to meet our expectations for a portable AV system for VAD. The system would make use of “off the shelf” hardware components which are widely available and easily replaced or expanded. The developed AV software and coding is contracted to be available as Copyleft Open Source to ensure low cost … a stable AV software that has been developed and implemented for an in situ simulation initiative. This version (1.3) is the first one released as Open Source (Copyleft) software (see QR tag). We have found that it is possible to deliver multi-camera video-assisted debriefing in a mobile, in situ simulation environment using an AV system constructed from “off the shelf” components and Open Source software.

  6. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

This paper deals with a tool that enables import of coded data in a single text file into more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell-el Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of points' names coding or more complex attribute table creation).

  7. Simulation of the thermalhydraulic behavior of a molten core within a structure, with the three dimensions three components TOLBIAC code

    Energy Technology Data Exchange (ETDEWEB)

    Spindler, B.; Moreau, G.M.; Pigny S. [Centre d`Etudes Nucleaires de Grenoble (France)

    1995-09-01

The TOLBIAC code is devoted to the simulation of the behavior of a molten core within a structure (pressure vessel or core catcher), taking into account the relative position of the core components, the wall ablation and the crust formation. The code is briefly described: 3D model, physical properties and constitutive laws, wall ablation and crust model. Two results are presented: the simulation of the COPO experiment (natural convection with water in a 1/2-scale elliptic pressure vessel), and the simulation of the behavior of a corium in a PWR pressure vessel, with ablation and crust formation.

  8. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

During 1990 and 1991, in the frame of the Shared Cost Action Reactor Safety, the French software house CISI and IKE of the University of Stuttgart developed the informatic structure of the European Source TERm Evaluation System (ESTER). Through this work, tools became available which allow code development and code application to be unified on a European basis in the area of severe core accident research. The behaviour of reactor cores is determined by thermal hydraulic conditions. Therefore, for the development of ESTER it was important to investigate how to integrate thermal hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal hydraulic code system ATHLET and ESTER. As a result of the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de

  9. Instrumentation for Position Sensitive Detector-Powder diffractometer at CENM-Maamora

    International Nuclear Information System (INIS)

    Messous, M.-Y.; Belhorma, B.; Labrim, H.; El-Bakkari, B.; Jabri, H.

    2013-06-01

Linear position sensitive detectors are widely used to configure neutron diffractometers and other instruments. The necessary front-end electronics and data acquisition system were developed to support such instruments built around the research reactor. In this paper, the front-end electronics dedicated to the neutron powder diffractometer which will be installed in the axial beam port of the Triga Mark II research reactor (Center of Nuclear Studies of Maamora) is described. It consists of a high-voltage power supply, a position decoder, a multichannel analyzer, and data acquisition software. The response of the 3He PSD detector exposed to the neutron flux emitted by a 252Cf source held in paraffin spheres of distinct thicknesses (for the moderation effect) is shown. Monte Carlo N-Particle code (MCNP) simulations were also performed to study both the detector performance and the paraffin efficiency. (authors)

  10. Development and Verification of the Computer Codes for the Fast Reactors Nuclear Safety Justification

    International Nuclear Information System (INIS)

    Kisselev, A.E.; Mosunova, N.A.; Strizhov, V.F.

    2015-01-01

The information on the status of the work on development of the system of nuclear safety codes for fast liquid metal reactors is presented in this paper. The purpose of the work is to create an instrument for NPP neutronic, thermohydraulic and strength justification, including human and environmental radiation safety. The main task to be solved by the code system under development is the analysis of the broad spectrum of phenomena taking place at the NPP (including the reactor itself, NPP components, containment rooms, the industrial site and the surrounding area) and analysis of the impact of regular and accidental releases on the environment. The code system is oriented toward fully integrated modeling of NPP behavior in a coupled formulation, accounting for the wide range of significant phenomena taking place at the NPP under normal and accident conditions. It is based on models that meet the state-of-the-art knowledge level. The codes incorporate advanced numerical methods and modern programming technologies oriented toward high-performance computing systems. The information on the status of the work on verification of the separate codes of the system is also presented. (author)

  11. 3-component beamforming analysis of ambient seismic noise field for Love and Rayleigh wave source directions

    Science.gov (United States)

    Juretzek, Carina; Hadziioannou, Céline

    2014-05-01

Our knowledge about common and different origins of Love and Rayleigh waves observed in the microseism band of the ambient seismic noise field is still limited, including the understanding of source locations and source mechanisms. Multi-component array methods are suitable to address this issue. In this work we use a 3-component beamforming algorithm to obtain source directions and polarization states of the ambient seismic noise field within the primary and secondary microseism bands recorded at the Gräfenberg array in southern Germany. The method allows one to distinguish between differently polarized waves present in the seismic noise field and estimates Love and Rayleigh wave source directions and their seasonal variations using one year of array data. We find mainly coinciding directions for the strongest acting sources of both wave types in the primary microseism band and different source directions in the secondary microseism band.
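
The core idea of beamforming for source directions can be sketched with a scalar, single-component delay-and-sum beamformer on synthetic data: traces are time-shifted according to a trial plane-wave azimuth, stacked, and the azimuth maximizing stacked power is selected. This is a toy version of the principle only, with made-up array geometry and a fixed slowness, not the authors' 3-component polarization algorithm:

```python
import math

# Delay-and-sum beamforming over a back-azimuth grid (scalar toy model).
# Array geometry, slowness and frequency below are illustrative.

STATIONS = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (7.0, 7.0)]  # km
SLOWNESS = 0.25          # s/km, assumed known here
FREQ = 0.2               # Hz (5 s, microseism-like period)
DT, NSAMP = 0.1, 600     # sampling interval [s] and trace length

def _direction(azimuth_deg):
    a = math.radians(azimuth_deg)
    return math.sin(a), math.cos(a)   # propagation direction (east, north)

def synth_traces(azimuth_deg):
    """Synthetic plane wave propagating across the array along azimuth_deg."""
    dx, dy = _direction(azimuth_deg)
    return [[math.sin(2 * math.pi * FREQ
                      * (n * DT - SLOWNESS * (dx * x + dy * y)))
             for n in range(NSAMP)] for x, y in STATIONS]

def beam_power(traces, azimuth_deg):
    """Undo the trial moveout, stack, and return mean squared beam amplitude."""
    dx, dy = _direction(azimuth_deg)
    shifts = [int(round(SLOWNESS * (dx * x + dy * y) / DT))
              for x, y in STATIONS]
    lo = max(0, -min(shifts))          # window where all shifted traces exist
    hi = NSAMP - max(0, max(shifts))
    power = 0.0
    for n in range(lo, hi):
        b = sum(tr[n + s] for tr, s in zip(traces, shifts))
        power += b * b
    return power / (hi - lo)

traces = synth_traces(135.0)
best = max(range(0, 360, 15), key=lambda az: beam_power(traces, az))
assert best == 135  # grid search recovers the propagation azimuth
```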

  12. Steam 80 steam generator instrumentation

    International Nuclear Information System (INIS)

    Carson, W.H.; Harris, H.H.

    1980-01-01

    This paper describes two special instrumentation packages in an integral economizer (preheater) steam generator of one of the first System 80 plants scheduled to go into commercial operation. The purpose of the instrumentation is to obtain accurate operating information from regions of the secondary side of the steam generator inaccessible to normal plant instrumentation. In addition to verification of the System 80 steam generator design predictions, the data obtained will assist in verification of steam generator thermal/hydraulic computer codes developed for generic use in the industry

  13. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected life of the reactor core) of reactor operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnup and plutonium produced in the reactor core, the concentrations of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for the fuel burnup calculation in the MNSR reactor, since it is newer, has a larger isotope library, and is more accurate. (author)
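
For scale, the fuel consumption of a low-power reactor such as the MNSR can be estimated with the commonly quoted rule of thumb that roughly 1.05 g of U-235 is consumed per MW-day of thermal energy. The schedule numbers below are illustrative assumptions, not values from the GETERA study:

```python
# Back-of-the-envelope U-235 consumption for a low-power reactor, using
# the rule of thumb of ~1.05 g of U-235 consumed per MW-day of thermal
# energy. All schedule parameters are illustrative assumptions.

GRAMS_U235_PER_MWD = 1.05

def u235_consumed_grams(power_kw, hours_per_day, days):
    """U-235 consumed [g] for a reactor run at power_kw on this schedule."""
    energy_mwd = power_kw / 1000.0 * (hours_per_day / 24.0) * days
    return energy_mwd * GRAMS_U235_PER_MWD

# e.g. 30 kW, 2.5 h of operation per day, over 10 years
burn = u235_consumed_grams(30.0, 2.5, 3650)
# -> about 12 g of U-235 over the core life
```

The point of the sketch is that intermittent low-power operation consumes only grams of fuel over a decade, which is why a single MNSR core can last its full expected life.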

  14. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

To investigate the coding strategies that pigeons may use in a temporal discrimination task, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  15. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  16. Challenges to implementation of the WHO Global Code of Practice on International Recruitment of Health Personnel: the case of Sudan.

    Science.gov (United States)

    Abuagla, Ayat; Badr, Elsheikh

    2016-06-30

    The WHO Global Code of Practice on the International Recruitment of Health Personnel (hereafter the WHO Code) was adopted by the World Health Assembly in 2010 as a voluntary instrument to address challenges of health worker migration worldwide. To ascertain its relevance and effectiveness, the implementation of the WHO Code needs to be assessed based on country experience; hence, this case study on Sudan. This qualitative study depended mainly on documentary sources in addition to key informant interviews. The authors' experiences informed the analysis. Migration of Sudanese health workers represents a major health system challenge. Over half of Sudanese physicians practice abroad, and new trends show the involvement of other professions and increased feminization. Traditional destinations include Gulf States, especially Saudi Arabia and Libya, as well as the United Kingdom and the Republic of Ireland. Low salaries, a poor work environment, and a lack of adequate professional development are the leading push factors. Massive emigration of skilled health workers has jeopardized the coverage and quality of healthcare and health professional education. Poor evidence, lack of a national policy, and active recruitment, in addition to labour market problems, were barriers to effective migration management in Sudan. The response of destination countries in relation to cooperative arrangements with Sudan as a source country has always been suboptimal, demonstrating less attention to solidarity and ethical dimensions. The WHO Code boosted Sudan's efforts to address health worker migration and health workforce development in general. Improving migration evidence, fostering a national dialogue, and promoting bilateral agreements, in addition to catalysing health worker retention strategies, are some of the benefits accrued. There are, however, limitations in the publicity of the WHO Code and its incorporation into national laws and regulatory frameworks for ethical recruitment.

  17. Visualization of Instrumental Verification Information Details (VIVID) : code development, description, and usage.

    Energy Technology Data Exchange (ETDEWEB)

    Roy, Christopher John; Bainbridge, Bruce L.; Potter, Donald L.; Blottner, Frederick G.; Black, Amalia Rebecca

    2005-03-01

    The formulation, implementation and usage of a numerical solution verification code is described. This code uses the Richardson extrapolation procedure to estimate the order of accuracy and error of a computational program solution. It evaluates multiple solutions performed in numerical grid convergence studies to verify a numerical algorithm implementation. Analyses are performed on both structured and unstructured grid codes. Finite volume and finite element discretization programs are examined. Two and three-dimensional solutions are evaluated. Steady state and transient solution analysis capabilities are present in the verification code. Multiple input data bases are accepted. Benchmark options are included to allow for minimal solution validation capability as well as verification.
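
The core of the grid-convergence analysis such a verification code automates can be stated in a few lines: estimate the observed order of accuracy from three systematically refined grids, then form a Richardson-extrapolated error estimate. The solution values below are synthetic (exact value 1.0, a second-order scheme), chosen only to exercise the formulas.

```python
import math

# Observed order of accuracy and Richardson error estimate from three
# solutions on grids with a constant refinement ratio r.

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order p from three solutions with refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_error(f_medium, f_fine, r, p):
    """Estimated discretization error of the fine-grid solution."""
    return (f_medium - f_fine) / (r**p - 1.0)

r = 2.0
f1, f2, f3 = 1.0 + 0.04, 1.0 + 0.01, 1.0 + 0.0025   # h, h/2, h/4 solutions
p = observed_order(f1, f2, f3, r)       # -> 2 for this second-order data
err = richardson_error(f2, f3, r, p)    # -> 0.0025, the fine-grid error
```

A production tool like the one described adds the machinery around this kernel: reading multiple solution databases, handling structured and unstructured grids, and reporting when the observed order disagrees with the formal order.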

  18. Instrument Control (iC) – An Open-Source Software to Automate Test Equipment

    Science.gov (United States)

    Pernstich, K. P.

    2012-01-01

    It has become common practice to automate data acquisition from programmable instrumentation, and a range of different software solutions fulfill this task. Many routine measurements require sequential processing of certain tasks, for instance to adjust the temperature of a sample stage, take a measurement, and repeat that cycle for other temperatures. This paper introduces an open-source Java program that processes a series of text-based commands that define the measurement sequence. These commands are in an intuitive format which provides great flexibility and allows quick and easy adaptation to various measurement needs. For each of these commands, the iC-framework calls a corresponding Java method that addresses the specified instrument to perform the desired task. The functionality of iC can be extended with minimal programming effort in Java or Python, and new measurement equipment can be addressed by defining new commands in a text file without any programming. PMID:26900522
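
The command-dispatch pattern the paper describes — each text command in the measurement script maps to a method that addresses an instrument — can be sketched as follows. The command names (`SetTemp`, `Measure`) and the script are hypothetical, not actual iC syntax, and the "instrument" is simulated by a log.

```python
# Hypothetical sketch of a text-command sequencer in the style described
# above: one handler method per command token.

class Sequencer:
    def __init__(self):
        self.log = []

    def cmd_settemp(self, kelvin):
        self.log.append(f"stage -> {float(kelvin)} K")

    def cmd_measure(self, label):
        self.log.append(f"measured {label}")

    def run(self, script):
        for line in script.strip().splitlines():
            name, *args = line.split()
            # Look up the handler for this command token, much as the
            # iC framework calls a Java method per command.
            getattr(self, "cmd_" + name.lower())(*args)

seq = Sequencer()
seq.run("""
SetTemp 300
Measure IV-curve
SetTemp 320
Measure IV-curve
""")
```

Extending such a framework means adding one handler method per new command, which is why new equipment can be supported with little code.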

  19. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources; SCRIC: un code pour calculer l'absorption et l'emission detaillees de plasmas hors equilibre, inhomogenes et etendus; application aux sources EUV a base de xenon

    Energy Technology Data Exchange (ETDEWEB)

    Gaufridy de Dortan, F. de

    2006-07-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated, and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code can rapidly handle very large amounts of atomic data, so it is possible to process complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)

  20. Performance Engineering Technology for Scientific Component Software

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.

    2007-05-08

    Large-scale, complex scientific applications are beginning to benefit from the use of component software design methodology and technology for software development. Integral to the success of component-based applications is the ability to achieve high-performing code solutions through the use of performance engineering tools for both intra-component and inter-component analysis and optimization. Our work on this project aimed to develop performance engineering technology for scientific component software in association with the DOE CCTTSS SciDAC project (active during the contract period) and the broader Common Component Architecture (CCA) community. Our specific implementation objectives were to extend the TAU performance system and Program Database Toolkit (PDT) to support performance instrumentation, measurement, and analysis of CCA components and frameworks, and to develop performance measurement and monitoring infrastructure that could be integrated in CCA applications. These objectives have been met in the completion of all project milestones and in the transfer of the technology into the continuing CCA activities as part of the DOE TASCS SciDAC2 effort. In addition to these achievements, over the past three years, we have been an active member of the CCA Forum, attending all meetings and serving in several working groups, such as the CCA Toolkit working group, the CQoS working group, and the Tutorial working group. We have contributed significantly to CCA tutorials since SC'04, hosted two CCA meetings, participated in the annual ACTS workshops, and were co-authors on the recent CCA journal paper [24]. There are four main areas where our project has delivered results: component performance instrumentation and measurement, component performance modeling and optimization, performance database and data mining, and online performance monitoring. This final report outlines the achievements in these areas for the entire project period. The submitted progress
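
The intra-component measurement idea above — wrap a component's entry points so each invocation is timed and attributed — can be illustrated generically. This is only an analogy in Python: TAU and PDT instrument compiled CCA components through source and binary rewriting, not through decorators.

```python
import time
from functools import wraps

# Generic sketch of performance instrumentation: record wall-clock time
# per call into a shared profile, keyed by function name.

def timed(profile):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            t0 = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                dt = time.perf_counter() - t0
                profile.setdefault(fn.__name__, []).append(dt)
        return wrapper
    return decorate

profile = {}

@timed(profile)
def solve_step(n):
    # Stand-in for a component's numerical kernel.
    return sum(i * i for i in range(n))

for _ in range(3):
    solve_step(10_000)
```

The collected per-call timings are the raw material for the inter-component analysis, modeling, and online monitoring the report describes.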

  1. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, the readability of source code for programmers is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of...

  2. Pool scrubbing models for iodine components

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K [Battelle Ingenieurtechnik GmbH, Eschborn (Germany)

    1996-12-01

    Pool scrubbing is an important mechanism that prevents radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  3. Pool scrubbing models for iodine components

    International Nuclear Information System (INIS)

    Fischer, K.

    1996-01-01

    Pool scrubbing is an important mechanism that prevents radioactive fission products from being carried into the containment atmosphere or into the secondary piping system. A number of models and computer codes have been developed to predict the retention of aerosols and fission product vapours that are released from the core and injected into water pools of BWR and PWR type reactors during severe accidents. Important codes in this field are BUSCA, SPARC and SUPRA. The present paper summarizes the models for scrubbing of gaseous Iodine components in these codes, discusses the experimental validation, and gives an assessment of the state of knowledge reached and the open questions which persist. The retention of gaseous Iodine components is modelled by the various codes in a very heterogeneous manner. Differences show up in the chemical species considered, the treatment of mass transfer boundary layers on the gaseous and liquid sides, the gas-liquid interface geometry, calculation of equilibrium concentrations and numerical procedures. Especially important is the determination of the pool water pH value. This value is affected by basic aerosols deposited in the water, e.g. Cesium and Rubidium compounds. A consistent model requires a mass balance of these compounds in the pool, thus effectively coupling the pool scrubbing phenomena of aerosols and gaseous Iodine species. Since the water pool conditions are also affected by drainage flow of condensate water from different regions in the containment, and desorption of dissolved gases on the pool surface is determined by the gas concentrations above the pool, some basic limitations of specialized pool scrubbing codes are given. The paper draws conclusions about the necessity of coupling between containment thermal-hydraulics and pool scrubbing models, and proposes ways of further simulation model development in order to improve source term predictions. (author) 2 tabs., refs.

  4. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.

  5. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
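
The optimization pattern described — fix the top-level requirements, scan the two free design choices, and compare total system cost — can be sketched with a made-up cost model. The cost terms and numbers below are placeholders, not the code's actual cost factors.

```python
# Hypothetical cost scan over the two design choices named above:
# injector voltage rise time and ferrite core aspect ratio.

def system_cost(rise_time_ns, core_aspect_ratio):
    pulsed_power = 250.0 / rise_time_ns        # faster rise -> pricier drive
    core_material = 40.0 * core_aspect_ratio   # bigger cores -> more ferrite
    mechanical = 15.0 / core_aspect_ratio      # squat cores -> harder housing
    return pulsed_power + core_material + mechanical

designs = [
    (rt, ar, system_cost(rt, ar))
    for rt in (10.0, 20.0, 40.0)    # rise time, ns
    for ar in (0.5, 1.0, 2.0)       # core aspect ratio
]
best = min(designs, key=lambda d: d[2])   # cheapest feasible design
```

Plotting the nine costs over the two axes gives exactly the kind of cost surface the abstract says is used to select an optimum design.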

  6. Using open source music software to teach live electronics in pre-college music education

    OpenAIRE

    Roels, Hans

    2010-01-01

    A basic course of live electronics is needed in pre-college music education to teach children how to perform on a digital musical instrument. This paper describes the basic components of such a live electronics course, examines whether open source music software is suited to realize these components, and finally presents Abunch, a library in Pure Data created by the author, as a solution for the potential educational disadvantages of open source music software.

  7. Runtime Instrumentation of SystemC/TLM2 Interfaces for Fault Tolerance Requirements Verification in Software Cosimulation

    Directory of Open Access Journals (Sweden)

    Antonio da Silva

    2014-01-01

    This paper presents the design of a SystemC transaction-level modelling wrapping library that can be used for the assertion of system properties, protocol compliance, or fault injection. The library uses C++ virtual table hooks as a dynamic binary instrumentation technique to inline wrappers in the TLM2 transaction path. This technique can be applied after the elaboration phase and needs neither source code modifications nor recompilation of the top-level SystemC modules. The proposed technique has been successfully applied to the robustness verification of the on-board boot software of the Instrument Control Unit of the Solar Orbiter’s Energetic Particle Detector.

  8. Quantifying undesired parallel components in Thévenin-equivalent acoustic source parameters

    DEFF Research Database (Denmark)

    Nørgaard, Kren Rahbek; Neely, Stephen T.; Rasetshwane, Daniel M.

    2018-01-01

    in the source parameters. Such parallel components can result from, e.g., a leak in the ear tip or improperly accounting for evanescent modes, and introduce errors into subsequent measurements of impedance and reflectance. This paper proposes a set of additional error metrics that are capable of detecting...

  9. Strength evaluation code STEP for brittle materials

    International Nuclear Information System (INIS)

    Ishihara, Masahiro; Futakawa, Masatoshi.

    1997-12-01

    In structural design using brittle materials such as graphite and/or ceramics, it is necessary to evaluate the strength of components under complex stress conditions. The strength of ceramic materials is said to be influenced by the stress distribution. However, the structural design criteria had adopted simplified stress limits without taking into account the change of strength with the stress distribution. It is therefore important to evaluate the strength of a component on the basis of a fracture model for brittle materials. Consequently, the strength evaluation program STEP, which treats brittle fracture of ceramic materials on the basis of competing-risk theory, was developed. Two different brittle fracture modes, a surface-layer fracture mode dominated by surface flaws and an internal fracture mode dominated by internal flaws, are treated in the STEP code in order to evaluate the brittle fracture strength. The STEP code uses stress calculation results, including those for structures of complex shape, analyzed by the generalized FEM stress analysis code ABAQUS, so that the brittle fracture strength can be evaluated for structures having complicated shapes. This code is therefore useful for evaluating the structural integrity of components of arbitrary shape, such as the core graphite components in the HTTR, heat exchanger components made of ceramic materials, etc. This paper describes the basic equations applied in the STEP code, the code system combining the STEP and ABAQUS codes, and the result of the verification analysis. (author)
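
The competing-risk idea can be made concrete with the usual Weibull form for brittle strength: the two fracture modes act as independent risks, so their survival probabilities multiply. The Weibull moduli and characteristic strengths below are illustrative placeholders, not STEP input data.

```python
import math

# Competing-risk sketch: surface-flaw and volume-flaw fracture modes,
# each Weibull-distributed; the component fails if either mode fails.

def fracture_probability(stress, m_surf, s0_surf, m_vol, s0_vol):
    surv_surface = math.exp(-((stress / s0_surf) ** m_surf))
    surv_volume = math.exp(-((stress / s0_vol) ** m_vol))
    return 1.0 - surv_surface * surv_volume

p_low = fracture_probability(100.0, m_surf=10, s0_surf=250.0,
                             m_vol=15, s0_vol=300.0)   # well below s0
p_high = fracture_probability(280.0, m_surf=10, s0_surf=250.0,
                              m_vol=15, s0_vol=300.0)  # above s0_surf
```

In the full code the exponents become integrals of the stress field over the surface and volume of the FEM model, which is how the dependence of strength on the stress distribution enters.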

  10. LOFT instrumentation

    International Nuclear Information System (INIS)

    Bixby, W.W.

    1979-01-01

    A description of instrumentation used in the Loss-of-Fluid Test (LOFT) large break Loss-of-Coolant Experiments is presented. Emphasis is placed on hydraulic and thermal measurements in the primary system piping and components, reactor vessel, and pressure suppression system. In addition, instrumentation which is being considered for measurement of phenomena during future small break testing is discussed. (orig.)

  11. Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual

    Science.gov (United States)

    1979-09-01

    imaging of source axes for a magnetic source. [Equation garbled in extraction: it relates the source axis unit vectors, given by VSOURC(1,1), VSOURC(1,2), and VSOURC(1,3), to the image axes VIMAG(1,1), VIMAG(1,2), and VIMAG(1,3).] VNC: x, y, and z components of the end-cap unit normal. OUTPUT VARIABLE VIMAG: x, y, and z components defining the source image coordinate system axes

  12. Source Term Characteristics Analysis for Structural Components in PWR spent fuel assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kook, Dong Hak; Choi, Heui Joo; Cho, Dong Keun [KAERI, Daejeon (Korea, Republic of)

    2010-12-15

    Source terms of the metal waste comprising a spent fuel assembly become relatively important when the spent fuel is pyroprocessed, because cesium, strontium, and transuranics are no longer a concern in the source term for permanent disposal. In this study, the characteristics of the radiation source terms for each structural component in a spent fuel assembly were analyzed using ORIGEN-S, with the assumption that 10 metric tons of uranium is pyroprocessed. At first, the mass and volume of each structural component of the fuel assembly were calculated in detail. An activation cross-section library was generated using the KENO-VI/ORIGEN-S module for the top-end piece and bottom-end piece, because these are located outside the core under a neutron spectrum different from that of the inner core. As a result, the radioactivity, decay heat, and hazard index were revealed to be 1.32x10^15 becquerels, 238 watts, and 4.32x10^9 m^3 of water, respectively, at 10 years after discharge. These values correspond to 0.6 %, 1.1 %, and 0.1 %, respectively, of those of the spent fuel. The Inconel 718 grid plate was shown to be the most important component in all aspects of radioactivity, decay heat, and hazard index, although its mass occupies only 1 % of the total. It was also shown that if the Inconel 718 grid plate is managed separately, the radioactivity and hazard index of the metal waste could be decreased to 25~50 % and 35~40 %, respectively. As a whole, the decay heat of the metal waste was shown to be negligible for disposal system design, while the radioactivity and hazard index are important
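
The back-of-envelope version of such a source-term figure is just A = λN with decay. The sketch below uses Co-60 (half-life 5.27 y), a typical dominant activation product in activated steel and Inconel hardware; the atom inventory is an illustrative placeholder, not a value from the study.

```python
import math

# Activity of a single activation product after a cooling period:
# A(t) = lambda * N0 * exp(-lambda * t).

HALF_LIFE_S = 5.27 * 365.25 * 24 * 3600.0    # Co-60 half-life, seconds
lam = math.log(2) / HALF_LIFE_S              # decay constant, 1/s

def activity_bq(n_atoms, cooled_years):
    t = cooled_years * 365.25 * 24 * 3600.0
    return lam * n_atoms * math.exp(-lam * t)

a0 = activity_bq(1.0e24, 0.0)
a10 = activity_bq(1.0e24, 10.0)   # after 10 years' cooling
ratio = a10 / a0                  # 10 y is ~1.9 half-lives -> ~0.27
```

Codes like ORIGEN-S sum exactly this kind of term over the full activation inventory of each component, which is how a per-component radioactivity breakdown is obtained.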

  13. A Low-Cost, Simplified Platform of Interchangeable, Ambient Ionization Sources for Rapid, Forensic Evidence Screening on Portable Mass Spectrometric Instrumentation

    Directory of Open Access Journals (Sweden)

    Patrick W. Fedick

    2018-03-01

    Portable mass spectrometers (MS) are becoming more prevalent due to improved instrumentation, commercialization, and the robustness of new ionization methodologies. To increase utility towards diverse field-based applications, there is an inherent need for rugged ionization source platforms that are simple, yet robust towards analytical scenarios that may arise. Ambient ionization methodologies have evolved to target specific real-world problems and fulfill requirements of the analysis at hand. Ambient ionization techniques continue to advance towards higher performance, with specific sources showing variable proficiency depending on application area. To realize the full potential and applicability of ambient ionization methods, a selection of sources may be more prudent, showing a need for a low-cost, flexible ionization source platform. This manuscript describes a centralized system developed for portable MS systems that incorporates modular, rapidly-interchangeable ionization sources composed of low-cost, commercially-available parts. Herein, design considerations are reported for a suite of ambient ionization sources that can be crafted with minimal machining or customization. Representative spectral data is included to demonstrate applicability towards field processing of forensic evidence. While this platform is demonstrated on portable instrumentation, retrofitting to lab-scale MS systems is anticipated.

  14. MODELING THERMAL DUST EMISSION WITH TWO COMPONENTS: APPLICATION TO THE PLANCK HIGH FREQUENCY INSTRUMENT MAPS

    International Nuclear Information System (INIS)

    Meisner, Aaron M.; Finkbeiner, Douglas P.

    2015-01-01

    We apply the Finkbeiner et al. two-component thermal dust emission model to the Planck High Frequency Instrument maps. This parameterization of the far-infrared dust spectrum as the sum of two modified blackbodies (MBBs) serves as an important alternative to the commonly adopted single-MBB dust emission model. Analyzing the joint Planck/DIRBE dust spectrum, we show that two-component models provide a better fit to the 100-3000 GHz emission than do single-MBB models, though by a lesser margin than found by Finkbeiner et al. based on FIRAS and DIRBE. We also derive full-sky 6.'1 resolution maps of dust optical depth and temperature by fitting the two-component model to Planck 217-857 GHz along with DIRBE/IRAS 100 μm data. Because our two-component model matches the dust spectrum near its peak, accounts for the spectrum's flattening at millimeter wavelengths, and specifies dust temperature at 6.'1 FWHM, our model provides reliable, high-resolution thermal dust emission foreground predictions from 100 to 3000 GHz. We find that, in diffuse sky regions, our two-component 100-217 GHz predictions are on average accurate to within 2.2%, while extrapolating the Planck Collaboration et al. single-MBB model systematically underpredicts emission by 18.8% at 100 GHz, 12.6% at 143 GHz, and 7.9% at 217 GHz. We calibrate our two-component optical depth to reddening, and compare with reddening estimates based on stellar spectra. We find the dominant systematic problems in our temperature/reddening maps to be zodiacal light on large angular scales and the cosmic infrared background anisotropy on small angular scales
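
The parameterization in question is the sum of two modified blackbodies, I_ν ∝ f ν^β1 B_ν(T1) + (1−f) ν^β2 B_ν(T2). The sketch below evaluates such a model; the temperatures, emissivity indices, weight, and reference frequency are illustrative values in the general range of two-component fits, not the fitted Planck/DIRBE parameters.

```python
import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def planck(nu_hz, t_k):
    """Planck function B_nu in W m^-2 Hz^-1 sr^-1."""
    x = H * nu_hz / (K * t_k)
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(x)

def two_mbb(nu_hz, f=0.04, beta1=1.7, t1=9.0, beta2=2.7, t2=16.0, nu0=3.0e12):
    """Relative intensity of a two-MBB dust model (arbitrary normalization)."""
    cold = f * (nu_hz / nu0) ** beta1 * planck(nu_hz, t1)
    warm = (1.0 - f) * (nu_hz / nu0) ** beta2 * planck(nu_hz, t2)
    return cold + warm

i_100 = two_mbb(100e9)   # 100 GHz, where single-MBB extrapolations fall short
i_857 = two_mbb(857e9)   # near the far-infrared peak
```

Because the colder, flatter-index component contributes proportionally more at millimeter wavelengths, the summed spectrum flattens at low frequency, which is the behavior the two-component model captures and a single MBB misses.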

  15. Emission quantification using the tracer gas dispersion method: The influence of instrument, tracer gas species and source simulation

    DEFF Research Database (Denmark)

    Delre, Antonio; Mønster, Jacob; Samuelsson, Jerker

    2018-01-01

    The tracer gas dispersion method (TDM) is a remote sensing method used for quantifying fugitive emissions by relying on the controlled release of a tracer gas at the source, combined with concentration measurements of the tracer and target gas plumes. The TDM was tested at a wastewater treatment plant for plant-integrated methane emission quantification, using four analytical instruments simultaneously and four different tracer gases. Measurements performed using a combination of an analytical instrument and a tracer gas with a high ratio between the tracer gas release rate and instrument precision (a high release-precision ratio) resulted in well-defined plumes with a high signal-to-noise ratio and a high methane-to-tracer gas correlation factor. Measured methane emission rates differed by up to 18% from the mean value when measurements were performed using seven different instrument...

  16. When probabilistic seismic hazard climbs volcanoes: the Mt. Etna case, Italy – Part 1: Model components for sources parameterization

    Directory of Open Access Journals (Sweden)

    R. Azzaro

    2017-11-01

    Full Text Available The volcanic region of Mt. Etna (Sicily, Italy represents a perfect lab for testing innovative approaches to seismic hazard assessment. This is largely due to the long record of historical and recent observations of seismic and tectonic phenomena, the high quality of various geophysical monitoring and particularly the rapid geodynamics clearly demonstrate some seismotectonic processes. We present here the model components and the procedures adopted for defining seismic sources to be used in a new generation of probabilistic seismic hazard assessment (PSHA, the first results and maps of which are presented in a companion paper, Peruzza et al. (2017. The sources include, with increasing complexity, seismic zones, individual faults and gridded point sources that are obtained by integrating geological field data with long and short earthquake datasets (the historical macroseismic catalogue, which covers about 3 centuries, and a high-quality instrumental location database for the last decades. The analysis of the frequency–magnitude distribution identifies two main fault systems within the volcanic complex featuring different seismic rates that are controlled essentially by volcano-tectonic processes. We discuss the variability of the mean occurrence times of major earthquakes along the main Etnean faults by using an historical approach and a purely geologic method. We derive a magnitude–size scaling relationship specifically for this volcanic area, which has been implemented into a recently developed software tool – FiSH (Pace et al., 2016 – that we use to calculate the characteristic magnitudes and the related mean recurrence times expected for each fault. Results suggest that for the Mt. Etna area, the traditional assumptions of uniform and Poissonian seismicity can be relaxed; a time-dependent fault-based modeling, joined with a 3-D imaging of volcano-tectonic sources depicted by the recent instrumental seismicity, can therefore be

  17. Metrological support for aerosol radiometers: special aerosol sources

    Energy Technology Data Exchange (ETDEWEB)

    Belkina, S.K.; Zalmanzon, Yu.E.; Kuznetsov, Yu.V.; Fertman, D.E.

    1988-07-01

    A new method is described for transferring the measure of the unit of volume activity of radioactive aerosols from the state special standard to working instruments at the stage of regular operation. The differences from existing methods are examined. The principal distinction of the new method is the possibility of direct (rather than via a conversion factor) determination, and subsequent testing, of the fundamental metrological characteristics of the instrument by means of special aerosol sources, which significantly reduces the individual components of the associated errors.

  18. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

    A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal to noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims
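
The encode/decode cycle behind any such coded-aperture scheme can be shown in one dimension. A random binary mask stands in for the patent's modulated aperture codes, and all positions and fluxes are illustrative; real systems choose the codes to optimize the signal-to-noise ratio, as the abstract notes.

```python
import numpy as np

# 1D toy of coded-aperture imaging: the detector records the source
# distribution convolved (circularly here) with the mask's open/closed
# pattern; correlating with a decoding array recovers the source.

rng = np.random.default_rng(1)
n = 101
mask = (rng.random(n) < 0.5).astype(float)   # open fraction ~0.5
decoder = 2.0 * mask - 1.0                   # +1 for open, -1 for closed

# Two point sources; each shifts the mask shadow across the detector.
sources = [(30, 1.0), (70, 0.5)]
detector = np.zeros(n)
for pos, flux in sources:
    detector += flux * np.roll(mask, pos)

# Correlation decoding: peaks appear at the true source positions.
recon = np.array([np.dot(detector, np.roll(decoder, k)) for k in range(n)])
peak = int(np.argmax(recon))                 # strongest source position
```

With a well-chosen code (rather than a random one) the off-peak sidelobes of the correlation can be made flat, which is exactly the signal-to-noise optimization the patented system performs per source distribution.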

  19. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

    During the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, two distinct but interrelated subject areas were identified: the prevention of accidents involving radiation sources and the prevention of theft or any other unauthorized use of radioactive materials. Analysis reveals, rather, that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus, as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources, emerged into the arena of international relations. (N.C.)

  20. Demonstration study on shielding safety analysis code (8)

    Energy Technology Data Exchange (ETDEWEB)

    Sawamura, Sadashi [Hokkaido Univ., Sapporo (Japan)

    2001-03-01

    Dose evaluation for direct radiation and skyshine from nuclear fuel facilities is one of the environmental evaluation items. This evaluation is carried out using shielding calculation codes. Because benchmark data on skyshine are extremely scarce, the calculation has to be performed very conservatively. Therefore, benchmark data on skyshine and a well-investigated code for skyshine are necessary for a rational evaluation of nuclear facilities. The purpose of this study is to obtain benchmark data on skyshine and to investigate the calculation code for skyshine. In this fiscal year, the following items were investigated. (1) A {sup 3}He detector and some instruments were added to the former detection system to increase the detection sensitivity in pulsed neutron measurements. Using the new detection system, the skyshine of neutrons from the 45 MeV LINAC facility was measured at distances up to 350 m. (2) To estimate the spectrum of neutrons leaking from the facility, a {sup 3}He detector with moderators was constructed and the response functions of the detector were calculated using the MCNP simulation code. The leakage spectrum in the facility was measured and unfolded using the SAND-II code. (3) Using the EGS code and/or MCNP code, neutron yields from the photo-nuclear reaction in the lead target were calculated. Then, the neutron fluence at some points, including the duct (from which neutrons leak and which is considered to be a skyshine source), was simulated with the MCNP Monte Carlo code. (4) At distances up to 350 m from the facility, the neutron fluence due to the skyshine process was calculated and compared with the experimental results. The comparison gives fairly good agreement. (author)

  1. The Minimum Distance of Graph Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2011-01-01

    We study codes constructed from graphs where the code symbols are associated with the edges and the symbols connected to a given vertex are restricted to be codewords in a component code. In particular we treat such codes from bipartite expander graphs coming from Euclidean planes and other geometries. We give results on the minimum distances of the codes.

  2. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into nth-collided components. • The uncollided flux requires high quadrature; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained separately, with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high-order quadrature for the first iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. The code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
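The core observation (each collision makes the angular flux smoother, so it needs a lower quadrature order) can be sketched with a toy 1-D angular integral. The exponential flux shapes and tolerance below are illustrative assumptions, not TITAN's actual adaptation criteria:

```python
import numpy as np

def gl_integrate(f, n):
    """Integrate f over [-1, 1] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)
    return np.sum(w * f(x))

def minimal_order(f, exact, rtol=1e-6, max_n=64):
    """Smallest quadrature order meeting the tolerance for this flux shape."""
    for n in range(2, max_n + 1):
        if abs(gl_integrate(f, n) - exact) <= rtol * abs(exact):
            return n
    return max_n

# Angular flux stand-ins: the uncollided flux is strongly forward-peaked,
# and each scattering iteration makes it smoother (more isotropic).
def peaked(k):
    exact = 2.0 * np.sinh(k) / k        # ∫ exp(k·mu) d(mu) over [-1, 1]
    return (lambda mu: np.exp(k * mu)), exact

f0, e0 = peaked(40.0)   # uncollided: sharply peaked
f1, e1 = peaked(8.0)    # once-collided: smoother
f2, e2 = peaked(1.0)    # twice-collided: nearly isotropic

orders = [minimal_order(f, e) for f, e in [(f0, e0), (f1, e1), (f2, e2)]]
print(orders)           # required quadrature order drops with collision count
```

The decreasing order list is exactly the effect ACS exploits: spend a high-order quadrature only on the early, anisotropic collision components.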

  3. Coupled Hydrodynamic and Wave Propagation Modeling for the Source Physics Experiment: Study of Rg Wave Sources for SPE and DAG series.

    Science.gov (United States)

    Larmat, C. S.; Delorey, A.; Rougier, E.; Knight, E. E.; Steedman, D. W.; Bradley, C. R.

    2017-12-01

    This presentation reports numerical modeling efforts to improve knowledge of the processes that affect seismic wave generation and propagation from underground explosions, with a focus on Rg waves. The numerical model is based on the coupling of hydrodynamic simulation codes (Abaqus, CASH and HOSS) with a 3D full waveform propagation code, SPECFEM3D. Validation datasets are provided by the Source Physics Experiment (SPE), a series of highly instrumented chemical explosions at the Nevada National Security Site with yields from 100 kg to 5000 kg. A first series of explosions in a granite emplacement has just been completed, and a second series in an alluvium emplacement is planned for 2018. The long-term goal of this research is to review and improve currently existing seismic source models (e.g. Mueller & Murphy, 1971; Denny & Johnson, 1991) through first-principles calculations provided by the coupled-codes capability. The hydrodynamic codes, Abaqus, CASH and HOSS, model the shocked, hydrodynamic region via equations of state for the explosive, borehole stemming and jointed/weathered granite. A new material model for unconsolidated alluvium materials has been developed and validated with past nuclear explosions, including the 10 kT 1965 Merlin event (Perret, 1971; Perret and Bass, 1975). We use the efficient Spectral Element Method code SPECFEM3D (e.g. Komatitsch, 1998; 2002) and Geologic Framework Models to model the evolution of the wavefield as it propagates across 3D complex structures. The coupling interface is a series of grid points of the SEM mesh situated at the edge of the hydrodynamic code domain. We will present validation tests and waveforms modeled for several SPE tests, which provide evidence that the damage processes happening in the vicinity of the explosions create secondary seismic sources. These sources interfere with the original explosion moment and reduce the apparent seismic moment at the origin of Rg waves by up to 20%.

  4. Simulation based investigation of source-detector configurations for non-invasive fetal pulse oximetry

    Directory of Open Access Journals (Sweden)

    Böttrich Marcel

    2015-09-01

    Transabdominal fetal pulse oximetry is a method to monitor the oxygen supply of the unborn child non-invasively. Due to the measurement setup, the received signal at the detector is composed of photons coding purely maternal information and photons coding mixed fetal-maternal information. To analyze the wellbeing of the fetus, the fetal signal is extracted from the mixed component. In this paper we assess source-detector configurations such that the mixed fetal-maternal components of the acquired signals are maximized. The Monte Carlo method is used to simulate light propagation and photon distribution in tissue. We use a plane-layer and a spherical-layer geometry to model the abdomen of a pregnant woman. From the simulations we extracted the fluence at the detector side for several source-detector distances and analyzed the ratio of the mixed fluence component to the total fluence. Our simulations showed that the power of the mixed component depends on the source-detector distance, as expected. Furthermore, we were able to visualize hot-spot areas in the spherical-layer model where the mixed fluence ratio reaches the highest level. The results are of high importance for sensor design considering signal composition and quality for non-invasive fetal pulse oximetry.
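A stripped-down version of such a simulation can be sketched as an isotropic-scattering random walk in a homogeneous half-space. All optical parameters, the deep-layer depth and the distance bins below are illustrative assumptions, not the paper's layered model:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_photons=4000, mu_s=1.0, mu_a=0.02, fetal_depth=2.0):
    """Random-walk photons launched at the surface of a half-space (cm, 1/cm).
    Returns exit radius and a flag for photons that reached the deep layer."""
    out_r, mixed = [], []
    p_absorb = mu_a / (mu_a + mu_s)
    for _ in range(n_photons):
        pos = np.zeros(3)
        deepest = 0.0
        while True:
            step = rng.exponential(1.0 / mu_s)     # free path length
            mu = 2.0 * rng.random() - 1.0          # isotropic direction cosine
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            pos = pos + step * np.array([s * np.cos(phi), s * np.sin(phi), mu])
            deepest = max(deepest, pos[2])
            if rng.random() < p_absorb:            # absorbed in tissue
                break
            if pos[2] < 0.0:                       # re-emerged at the surface
                out_r.append(np.hypot(pos[0], pos[1]))
                mixed.append(deepest >= fetal_depth)
                break
    return np.array(out_r), np.array(mixed)

r, m = simulate()
near = m[r < 2.0].mean()   # 'mixed' fraction close to the source
far = m[r > 4.0].mean()    # 'mixed' fraction at larger separation
print(near, far)
```

The deep-reaching ("mixed") fraction grows with source-detector separation, which is the dependence the paper's configuration study optimizes.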

  5. Aeroacoustics of Musical Instruments

    NARCIS (Netherlands)

    Fabre, B.; Gilbert, J.; Hirschberg, Abraham; Pelorson, X.

    2012-01-01

    We are interested in the quality of sound produced by musical instruments and their playability. In wind instruments, a hydrodynamic source of sound is coupled to an acoustic resonator. Linear acoustics can predict the pitch of an instrument. This can significantly reduce the trial-and-error process

  6. On-site meteorological instrumentation requirements to characterize diffusion from point sources: workshop report. Final report Sep 79-Sep 80

    International Nuclear Information System (INIS)

    Strimaitis, D.; Hoffnagle, G.; Bass, A.

    1981-04-01

    Results of a workshop entitled 'On-Site Meteorological Instrumentation Requirements to Characterize Diffusion from Point Sources' are summarized and reported. The workshop was sponsored by the U.S. Environmental Protection Agency in Raleigh, North Carolina, on January 15-17, 1980. Its purpose was to provide EPA with a thorough examination of the meteorological instrumentation and data collection requirements needed to characterize airborne dispersion of air contaminants from point sources and to recommend, based on an expert consensus, specific measurement technique and accuracies. Secondary purposes of the workshop were to (1) make recommendations to the National Weather Service (NWS) about collecting and archiving meteorological data that would best support air quality dispersion modeling objectives and (2) make recommendations on standardization of meteorological data reporting and quality assurance programs

  7. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and putting some - if not most - legacy codes far beyond the level of performance expected on the new and future massively parallel supercomputers. SMILEI is a new open-source PIC code co-developed by plasma physicists and HPC specialists and applied to a wide range of physics studies, from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules that handle binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project and its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  8. Preserving Envelope Efficiency in Performance Based Code Compliance

    Energy Technology Data Exchange (ETDEWEB)

    Thornton, Brian A. [Thornton Energy Consulting (United States); Sullivan, Greg P. [Efficiency Solutions (United States); Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Baechler, Michael C. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-06-20

    The City of Seattle 2012 Energy Code (Seattle 2014), one of the most progressive in the country, is under revision for its 2015 edition. Additionally, city personnel participate in the development of the next generation of the Washington State Energy Code and the International Energy Code. Seattle has pledged carbon neutrality by 2050, including buildings, transportation and other sectors. The United States Department of Energy (DOE), through Pacific Northwest National Laboratory (PNNL), provided technical assistance to Seattle to understand the implications of one potential direction for its code development: limiting trade-offs in which long-lived building envelope components less stringent than the prescriptive code envelope requirements are offset by better-than-code but shorter-lived lighting and heating, ventilation, and air-conditioning (HVAC) components through the total building performance modeled energy compliance path. Weaker building envelopes can permanently limit building energy performance even as lighting and HVAC components are upgraded over time, because retrofitting the envelope is less likely and more expensive. Weaker building envelopes may also increase the required size, cost and complexity of HVAC systems and may adversely affect occupant comfort. This report presents the results of this technical assistance. The use of modeled energy code compliance to trade off envelope components against shorter-lived building components is not unique to Seattle, and the lessons and possible solutions described in this report have implications for other jurisdictions and energy codes.

  9. Code-Mixing and Code Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the determining factors behind those forms of code switching and code mixing. The research takes the form of a descriptive qualitative case study conducted at Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion presented in the preceding chapters, code mixing and code switching in learning activities at Al Mawaddah Boarding School occur between Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, sourcing from the original language and its variations, and sourcing from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and simple prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially at the Al Mawaddah boarding school, regarding the rules and characteristic variation in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  10. Storage, handling and movement of fuel and related components at nuclear power plants

    International Nuclear Information System (INIS)

    1979-01-01

    The report describes in general terms the various operations involved in the handling of fresh fuel, irradiated fuel, and core components such as control rods, neutron sources, burnable poisons and removable instruments. It outlines the principal safety problems in these operations and provides the broad safety criteria which must be observed in the design, operation and maintenance of equipment and facilities for handling, transferring, and storing nuclear fuel and core components at nuclear power reactor sites

  11. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, vapours and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  12. A Fast and Sensitive New Satellite SO2 Retrieval Algorithm based on Principal Component Analysis: Application to the Ozone Monitoring Instrument

    Science.gov (United States)

    Li, Can; Joiner, Joanna; Krotkov, A.; Bhartia, Pawan K.

    2013-01-01

    We describe a new algorithm to retrieve SO2 from satellite-measured hyperspectral radiances. We employ the principal component analysis technique in regions with no significant SO2 to capture radiance variability caused by both physical processes (e.g., Rayleigh and Raman scattering and ozone absorption) and measurement artifacts. We use the resulting principal components and SO2 Jacobians calculated with a radiative transfer model to directly estimate SO2 vertical column density in one step. Application to the Ozone Monitoring Instrument (OMI) radiance spectra in 310.5-340 nm demonstrates that this approach can greatly reduce biases in the operational OMI product and decrease the noise by a factor of 2, providing greater sensitivity to anthropogenic emissions. The new algorithm is fast, eliminates the need for instrument-specific radiance correction schemes, and can be easily adapted to other sensors. These attributes make it a promising technique for producing longterm, consistent SO2 records for air quality and climate research.
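The one-step fit can be sketched on synthetic spectra: principal components of SO2-free radiances model the background, and a Jacobian column carries the SO2 signal. The background modes, noise level and Jacobian shape below are invented for illustration; the real algorithm uses radiative-transfer Jacobians and OMI radiances:

```python
import numpy as np

rng = np.random.default_rng(2)
n_wave, n_train = 80, 500

# Synthetic training radiances over an SO2-free region: a few smooth
# physical modes (Rayleigh-, ozone-like) plus instrument noise.
waves = np.linspace(310.5, 340.0, n_wave)
modes = np.stack([np.ones(n_wave),
                  (waves - 325.0) / 15.0,
                  np.sin(2 * np.pi * (waves - 310.0) / 30.0)])
train = rng.normal(size=(n_train, 3)) @ modes \
        + 0.01 * rng.normal(size=(n_train, n_wave))

# Principal components of the SO2-free radiance variability
train_c = train - train.mean(axis=0)
_, _, vt = np.linalg.svd(train_c, full_matrices=False)
pcs = vt[:4]                                   # leading PCs

# Hypothetical SO2 Jacobian (localized absorption structure, arbitrary units)
jac = np.exp(-((waves - 313.0) / 4.0) ** 2) * np.cos(0.8 * (waves - 313.0))

# Measured spectrum = background + true column (0.7) times the Jacobian
truth = 0.7
meas = (rng.normal(size=3) @ modes) + truth * jac \
       + 0.01 * rng.normal(size=n_wave)

# One-step fit: PCs absorb the background, last coefficient is the column
A = np.vstack([pcs, jac]).T
coef, *_ = np.linalg.lstsq(A, meas - train.mean(axis=0), rcond=None)
print(coef[-1])    # retrieved column, close to the true 0.7
```

Because the PCs are built only from SO2-free scenes, the Jacobian coefficient is not contaminated by background variability, which is the source of the bias and noise reductions the abstract reports.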

  13. UK surplus source disposal programme - 16097

    International Nuclear Information System (INIS)

    John, Gordon H.; Reeves, Nigel; Nisbet, Amy C.; Garnett, Andrew; Williams, Clive R.

    2009-01-01

    The UK Surplus Source Disposal Programme (SSDP), managed by the Environment Agency, was designed to remove redundant radioactive sources from the public domain. The UK Government Department for Environment, Food and Rural Affairs (Defra) was concerned that disused sources were being retained by hospitals, universities and businesses, posing a risk to public health and the environment. AMEC provided a range of technical and administrative services to support the SSDP. A questionnaire was issued to registered source holders and the submitted returns compiled to assess the scale of the project. A member of AMEC staff was seconded to the Environment Agency to provide technical support and liaise directly with source holders during funding applications, which would cover disposal costs. Funding for disposal of different sources was partially based on a sliding scale of risk as determined by the IAEA hazard categorisation system. This funding was also sector dependent. The SSDP was subsequently expanded to include the disposal of luminised aircraft instruments from aviation museums across the UK. These museums often hold significant radiological inventories, with many items being unused and in a poor state of repair. These instruments were fully characterised on site by assessing surface dose rate, dimensions, source integrity and potential contamination issues. Calculations using the Microshield computer code allowed gamma radiation measurements to be converted into total activity estimates for each source. More than 11,000 sources were disposed of under the programme from across the medical, industrial, museum and academic sectors. The total activity disposed of was more than 8.5 E+14 Bq, and the project was delivered under budget. (authors)
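The dose-rate-to-activity conversion mentioned above rests on the point-source relation D = Γ·A/r². A minimal sketch of that inversion (using a commonly tabulated gamma dose-rate constant for Cs-137; this is not Microshield's detailed shielded-geometry model):

```python
# Gamma dose-rate constant for Cs-137, a commonly tabulated value
GAMMA_CS137 = 0.0927   # µSv·h⁻¹·m²·MBq⁻¹

def activity_from_dose_rate(dose_uSv_h, r_m, gamma=GAMMA_CS137):
    """Invert D = Γ·A/r² to estimate a point-source activity in MBq."""
    return dose_uSv_h * r_m ** 2 / gamma

# 5 µSv/h measured at 0.5 m from a suspected Cs-137 source:
print(activity_from_dose_rate(5.0, 0.5))   # ≈ 13.5 MBq
```

For the luminised instruments described above, a code such as Microshield additionally accounts for self-absorption and the surrounding materials, so the simple inverse-square estimate is only a first approximation.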

  14. A compact time-of-flight SANS instrument optimised for measurements of small sample volumes at the European Spallation Source

    Energy Technology Data Exchange (ETDEWEB)

    Kynde, Søren, E-mail: kynde@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark); Hewitt Klenø, Kaspar [Niels Bohr Institute, University of Copenhagen (Denmark); Nagy, Gergely [SINQ, Paul Scherrer Institute (Switzerland); Mortensen, Kell; Lefmann, Kim [Niels Bohr Institute, University of Copenhagen (Denmark); Kohlbrecher, Joachim, E-mail: Joachim.kohlbrecher@psi.ch [SINQ, Paul Scherrer Institute (Switzerland); Arleth, Lise, E-mail: arleth@nbi.ku.dk [Niels Bohr Institute, University of Copenhagen (Denmark)

    2014-11-11

    The high flux at European Spallation Source (ESS) will allow for performing experiments with relatively small beam-sizes while maintaining a high intensity of the incoming beam. The pulsed nature of the source makes the facility optimal for time-of-flight small-angle neutron scattering (ToF-SANS). We find that a relatively compact SANS instrument becomes the optimal choice in order to obtain the widest possible q-range in a single setting and the best possible exploitation of the neutrons in each pulse and hence obtaining the highest possible flux at the sample position. The instrument proposed in the present article is optimised for performing fast measurements of small scattering volumes, typically down to 2×2×2 mm{sup 3}, while covering a broad q-range from about 0.005 1/Å to 0.5 1/Å in a single instrument setting. This q-range corresponds to that available at a typical good BioSAXS instrument and is relevant for a wide set of biomacromolecular samples. A central advantage of covering the whole q-range in a single setting is that each sample has to be loaded only once. This makes it convenient to use the fully automated high-throughput flow-through sample changers commonly applied at modern synchrotron BioSAXS-facilities. The central drawback of choosing a very compact instrument is that the resolution in terms of δλ/λ obtained with the short wavelength neutrons becomes worse than what is usually the standard at state-of-the-art SANS instruments. Our McStas based simulations of the instrument performance for a set of characteristic biomacromolecular samples show that the resulting smearing effects still have relatively minor effects on the obtained data and can be compensated for in the data analysis. However, in cases where a better resolution is required in combination with the large simultaneous q-range characteristic of the instrument, we show that this can be obtained by inserting a set of choppers.
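The quoted q-range follows from the elastic-scattering relation q = (4π/λ)·sin(θ/2), evaluated over the wavelength band of the pulse and the angular span of the detector. A sketch with hypothetical compact-instrument numbers (the 3 m detector distance, detector radii and 4–10 Å band are assumptions, not the proposed design values):

```python
import numpy as np

def q_value(lam_A, theta_rad):
    """Momentum transfer q = (4π/λ)·sin(θ/2), in 1/Å for λ in Å."""
    return 4.0 * np.pi / lam_A * np.sin(theta_rad / 2.0)

# Hypothetical compact ToF-SANS geometry: detector radii 1 cm to 50 cm
# at 3 m from the sample, usable wavelength band 4-10 Å in each pulse.
L = 3.0
theta_min = np.arctan(0.01 / L)
theta_max = np.arctan(0.50 / L)
q_min = q_value(10.0, theta_min)   # longest λ, smallest angle
q_max = q_value(4.0, theta_max)    # shortest λ, largest angle
print(q_min, q_max)
```

Even this rough geometry spans roughly two decades in q in a single setting, which is why a short instrument combined with a broad ToF wavelength band can match the range of a typical BioSAXS measurement.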

  15. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    Science.gov (United States)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize, radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities; coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position sensitive NaI bars each 24x2.5x3 in3 called the detection array (DA). The CA array acts as both a coded aperture mask and scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, developed coded aperture, Compton and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as a Global Positioning System (GPS) and Inertial Navigation System (INS) must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. 
Results of image reconstruction algorithms at various speeds and distances will be presented as well as
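Each Compton event in such a system constrains the source to a cone whose half-angle follows from the energies deposited in the scattering (CA) and absorbing (DA) detectors. A minimal sketch of that kinematic step (the event energies are illustrative):

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_scatter_keV, e_absorb_keV):
    """Cone half-angle (rad) from the energy deposited in the scattering
    plane and in the absorbing detector; incident energy is their sum."""
    e0 = e_scatter_keV + e_absorb_keV
    cos_t = 1.0 - ME_C2 * (1.0 / e_absorb_keV - 1.0 / e0)
    return math.acos(cos_t)

# A 662 keV (Cs-137) photon depositing 200 keV in the CA array:
angle = math.degrees(compton_cone_angle(200.0, 462.0))
print(angle)   # cone half-angle in degrees
```

Back-projecting many such cones (and, for coded-aperture events, the mask shadow) onto the sky and intersecting them is what lets the hybrid reconstruction localize a source.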

  16. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance...... optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  17. Seismic instrumentation for nuclear power plants

    International Nuclear Information System (INIS)

    Senne Junior, M.

    1983-07-01

    A seismic instrumentation system used in nuclear power plants to monitor the design parameters of systems, structures and components needed to provide safety to those plants against the action of earthquakes is described. The instrumentation is based on nuclear standards; the components used, as well as their general localization, are indicated. The operation of the instrumentation system as a whole and the handling of the recovered data are dealt with accordingly. The accelerometer is described in detail. (Author) [pt

  18. Lifetime analysis of fusion-reactor components

    International Nuclear Information System (INIS)

    Mattas, R.F.

    1983-01-01

    A one-dimensional computer code has been developed to examine the lifetime of first-wall and impurity-control components. The code incorporates the operating and design parameters, the material characteristics, and the appropriate failure criteria for the individual components. The major emphasis of the modelling effort has been to calculate the temperature-stress-strain-radiation effects history of a component so that the synergistic effects between sputtering erosion, swelling, creep, fatigue, and crack growth can be examined. The general forms of the property equations are the same for all materials in order to provide the greatest flexibility for materials selection in the code. The code is capable of determining the behavior of a plate, composed of either a single or dual material structure, that is either totally constrained or constrained from bending but not from expansion. The code has been utilized to analyze the first walls for FED/INTOR and DEMO
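The crack-growth portion of such a lifetime calculation is typically a Paris-law integration over the stress history. A minimal sketch of that step (the constants C and m, geometry factor and stress range are illustrative, not values from the code):

```python
import math

C, m = 1e-12, 3.0          # Paris-law constants (per cycle, MPa·m^0.5 units)
delta_sigma = 100.0        # cyclic stress range, MPa
Y = 1.12                   # edge-crack geometry factor

def cycles_to_grow(a0, af, n_steps=100000):
    """Integrate da/dN = C·(ΔK)^m, with ΔK = Y·Δσ·sqrt(π·a), from a0 to af (m)."""
    a, cycles = a0, 0.0
    da = (af - a0) / n_steps
    while a < af:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

print(cycles_to_grow(1e-3, 1e-2))  # cycles to grow a crack from 1 mm to 10 mm
```

In the actual lifetime code this integration would be coupled to the evolving temperature, stress and irradiation state rather than run with constant coefficients.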

  19. The neutron instrument simulation package, NISP

    International Nuclear Information System (INIS)

    Seeger, P.A.; Daemen, L.L.

    2004-01-01

    The Neutron Instrument Simulation Package (NISP) performs complete source-to-detector simulations of neutron instruments, including neutrons that do not follow the expected path. The original user interface (MC_Web) is a web-based application, http://strider.lansce.lanl.gov/NISP/Welcome.html. This report describes in detail the newer standalone Windows version, NISP_Win. Instruments are assembled from menu-selected elements, including neutron sources, collimation and transport elements, samples, analyzers, and detectors. Magnetic field regions may also be specified for the propagation of polarized neutrons including spin precession. Either interface writes a geometry file that is used as input to the Monte Carlo engine (MC_Run) in the user's computer. Both the interface and the engine rely on a subroutine library, MCLIB. The package is completely open source. New features include capillary optics, temperature dependence of Al and Be, revised source files for ISIS, and visualization of neutron trajectories at run time. Also, a single-crystal sample type has been successfully imported from McStas (with more generalized geometry), demonstrating the capability of including algorithms from other sources, and NISP_Win may render the instrument in a virtual reality file. Results are shown for two instruments under development.
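Source-to-detector neutron simulations of this kind rest on the de Broglie relation v = h/(m_n·λ), which for time-of-flight instruments reduces to λ[Å] ≈ 3956·t/L. A minimal sketch of that conversion (the flight time and path length are illustrative):

```python
# Planck constant over neutron mass, in m·Å/s, so that λ[Å] = 3956·t[s]/L[m]
H_OVER_MN = 3956.034

def wavelength_A(tof_s, path_m):
    """Neutron wavelength in Å from time of flight (s) over a path (m)."""
    return H_OVER_MN * tof_s / path_m

# A 10 ms flight time over a 20 m source-to-detector path:
print(wavelength_A(0.010, 20.0))   # ≈ 1.98 Å
```

A Monte Carlo engine such as MC_Run applies this relation per simulated neutron, together with the geometry file, to build up time-of-flight spectra at the detector.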

  20. Physical, taxonomic code, and other data from current meter and other instruments in New York Bight from DOLPHIN and other platforms; 14 March 1971 to 03 August 1975 (NODC Accession 7601385)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Physical, taxonomic code, and other data were collected using current meter and other instruments from DOLPHIN and other platforms in New York Bight. Data were...

  1. UPTF test instrumentation. Measurement system identification, engineering units and computed parameters

    International Nuclear Information System (INIS)

    Sarkar, J.; Liebert, J.; Laeufer, R.

    1992-11-01

    This updated version of the previous report /1/ contains, besides the additional instrumentation needed for the 2D/3D Programme, the supplementary instrumentation in the inlet plenum of the SG simulator, the hot and cold legs of the broken loop, the cold legs of the intact loops and the upper plenum, required to meet the requirements (Test Phase A) of the UPTF Programme TRAM, sponsored by the Federal Minister of Research and Technology (BMFT) of the Federal Republic of Germany. For ease of understanding, the derivation and description of the identification codes for the entire set of conventional and advanced measurement systems, classifying the function and the equipment unit key as adopted in conventional power plants, have been included. Amendments have also been made to the appendices. In particular, the list of measurement systems covering the measurement identification code, instrument, measured quantity, measuring range, bandwidth, uncertainty and sensor location has been updated and extended to include the supplementary instrumentation. Beyond these amendments, the uncertainties of the measurements have been precisely specified. The measurement identification codes, which also identify the corresponding measured quantities in engineering units, and the identification codes derived therefrom for the computed parameters, have been adequately detailed. (orig.)

  2. Design and optimization of components and processes for plasma sources in advanced material treatments

    OpenAIRE

    Rotundo, Fabio

    2012-01-01

    The research activities described in the present thesis have been oriented to the design and development of components and technological processes aimed at optimizing the performance of plasma sources in advanced material treatments. Consumable components for high definition plasma arc cutting (PAC) torches were studied and developed. Experimental activities have in particular focussed on the modifications of the emissive insert with respect to the standard electrode configuration, whi...

  3. Comparison of economic instruments to reduce PM2.5 from industrial and residential sources

    International Nuclear Information System (INIS)

    Mardones, Cristian; Saavedra, Andrés

    2016-01-01

    In the literature it is possible to find different studies that compare the performance of economic instruments applied to the regulation of industrial sources; however, evidence about pollution from residential sources is scarce. For this reason, the present study simulates and compares an emission permit system (EPS) and an ambient permit system (APS) when fine particulate matter pollution (PM2.5) is generated from industrial and residential sources. Thus, this research contributes to the spatial, economic and environmental assessment of industrial and residential emissions. The options to reduce pollution include replacement of heating devices in residential sources and installation of end-of-pipe technologies in industrial sources. The results in terms of total cost and chosen technological options are similar under an APS and an EPS for targets below 80%. This is explained because it is more cost-effective to reduce emissions from residential sources than from industrial sources and, additionally, residential pollution has only a local impact. However, some industrial sources should install abatement technologies for more demanding targets; in this case, as industrial pollution is scattered across different areas, the total cost of an APS is lower than the total cost of an EPS. - Highlights: • The impact of wood burning on air quality can be significant in urban areas. • Residential and industrial sources in regulatory schemes for PM2.5 are analyzed. • Wood smoke pollution can be reduced by changing to more efficient heating devices. • Wood heater replacement is more cost-effective than abatement technologies. • The results are similar under APS and EPS for targets below 80%.

  4. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program, PRIMUS, reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Sections are also included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
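    The matrix-operator idea behind such decay-chain calculations can be sketched briefly. This is only a generic illustration, not the PRIMUS implementation: the chain is written as a linear system dN/dt = A·N and propagated with the operator exp(At), here via an eigen-decomposition (the two-member chain and all names are hypothetical).

```python
import numpy as np

def decay_matrix(lam, branching):
    """Build the decay-chain matrix A for dN/dt = A @ N.

    lam: decay constants of each nuclide in the chain.
    branching: branching[i][j] = fraction of parent-j decays feeding nuclide i.
    """
    n = len(lam)
    A = -np.diag(lam)
    for i in range(n):
        for j in range(n):
            if i != j and branching[i][j]:
                A[i, j] = branching[i][j] * lam[j]
    return A

def evolve(N0, A, t):
    """Propagate concentrations with the matrix operator exp(A t)."""
    w, V = np.linalg.eig(A)   # A has distinct real eigenvalues for a simple chain
    return np.real(V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V) @ N0)

# Two-member chain: parent (half-life 8 d) feeds daughter (half-life 2 d)
lam = np.array([np.log(2) / 8.0, np.log(2) / 2.0])   # decay constants in 1/day
B = [[0, 0], [1.0, 0]]                               # parent -> daughter
A = decay_matrix(lam, B)
N = evolve(np.array([1.0, 0.0]), A, t=4.0)           # concentrations after 4 days
```

For distinct decay constants this reproduces the analytic Bateman solution exactly; a production code would also handle branching to several daughters and near-degenerate half-lives.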

  5. Reduction of PM emissions from specific sources reflected on key components concentrations of ambient PM10

    Science.gov (United States)

    Minguillon, M. C.; Querol, X.; Monfort, E.; Alastuey, A.; Escrig, A.; Celades, I.; Miro, J. V.

    2009-04-01

    The relationship between specific particulate emission control and ambient levels of some PM10 components (Zn, As, Pb, Cs, Tl) was evaluated. To this end, the industrial area of Castellón (Eastern Spain) was selected, where around 40% of the EU glazed ceramic tiles and a high proportion of EU ceramic frits (middle product for the manufacture of ceramic glaze) are produced. The PM10 emissions from the ceramic processes were calculated over the period 2000 to 2007 taking into account the degree of implementation of corrective measures throughout the study period. Abatement systems (mainly bag filters) were implemented in the majority of the fusion kilns for frit manufacture in the area as a result of the application of the Directive 1996/61/CE, leading to a marked decrease in PM10 emissions. On the other hand, ambient PM10 sampling was carried out from April 2002 to July 2008 at three urban sites and one suburban site of the area and a complete chemical analysis was made for about 35 % of the collected samples, by means of different techniques (ICP-AES, ICP-MS, Ion Chromatography, selective electrode and elemental analyser). The series of chemical composition of PM10 allowed us to apply a source contribution model (Principal Component Analysis), followed by a multilinear regression analysis, so that PM10 sources were identified and their contribution to bulk ambient PM10 was quantified on a daily basis, as well as the contribution to bulk ambient concentrations of the identified key components (Zn, As, Pb, Cs, Tl). The contribution of the sources identified as the manufacture and use of ceramic glaze components, including the manufacture of ceramic frits, accounted for more than 65, 75, 58, 53, and 53% of ambient Zn, As, Pb, Cs and Tl levels, respectively (with the exception of Tl contribution at one of the sites). The important emission reductions of these sources during the study period had an impact on ambient key components levels, such that there was a high
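    The receptor-modelling chain described above, Principal Component Analysis followed by multilinear regression of bulk PM10 on the component scores, can be sketched with synthetic data. This is only a sketch of the generic technique, not the study's actual implementation, and all data below are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two hidden sources contributing to 5 chemical species
n_samples, n_species = 200, 5
profiles = rng.uniform(0.1, 1.0, size=(2, n_species))     # species signature per source
strength = rng.uniform(0.0, 10.0, size=(n_samples, 2))    # daily contribution per source
X = strength @ profiles + rng.normal(0, 0.01, (n_samples, n_species))
pm10 = strength.sum(axis=1)                               # bulk mass (here, the plain sum)

# Principal Component Analysis on standardized species concentrations
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores = U[:, :2] * s[:2]                                 # retain two components

# Multilinear regression of bulk PM10 on the component scores
G = np.column_stack([np.ones(n_samples), scores])
coef, *_ = np.linalg.lstsq(G, pm10, rcond=None)
pm10_hat = G @ coef
r2 = 1 - np.sum((pm10 - pm10_hat) ** 2) / np.sum((pm10 - pm10.mean()) ** 2)
```

The fitted coefficients apportion daily bulk PM10 among the retained components; in a real application the components must still be interpreted as physical sources from their species loadings.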

  6. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    Full Text Available The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural inter-unit analysis, which allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), as well as to support complex project assembly systems. Since the selected Clang Static Analyzer uses source code directly as input data, we needed to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even to such complicated projects. The system does not depend on the inter-procedural analysis method and allows an arbitrary change of its algorithm.

  7. Results of molten salt panel and component experiments for solar central receivers: Cold fill, freeze/thaw, thermal cycling and shock, and instrumentation tests

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco, J.E.; Ralph, M.E.; Chavez, J.M.; Dunkin, S.R.; Rush, E.E.; Ghanbari, C.M.; Matthews, M.W.

    1995-01-01

    Experiments have been conducted with a molten salt loop at Sandia National Laboratories in Albuquerque, NM to resolve issues associated with the operation of the 10MW{sub e} Solar Two Central Receiver Power Plant located near Barstow, CA. The salt loop contained two receiver panels, components such as flanges and a check valve, vortex shedding and ultrasonic flow meters, and an impedance pressure transducer. Tests were conducted on procedures for filling and thawing a panel, and assessing components and instrumentation in a molten salt environment. Four categories of experiments were conducted: (1) cold filling procedures, (2) freeze/thaw procedures, (3) component tests, and (4) instrumentation tests. Cold-panel and -piping fill experiments are described, in which the panels and piping were preheated to temperatures below the salt freezing point prior to initiating flow, to determine the feasibility of cold filling the receiver and piping. The transient thermal response was measured, and heat transfer coefficients and transient stresses were calculated from the data. Freeze/thaw experiments were conducted with the panels, in which the salt was intentionally allowed to freeze in the receiver tubes, then thawed with heliostat beams. Slow thermal cycling tests were conducted to measure both how well various designs of flanges (e.g., tapered flanges or clamp type flanges) hold a seal under thermal conditions typical of nightly shut down, and the practicality of using these flanges on high maintenance components. In addition, the flanges were thermally shocked to simulate cold starting the system. Instrumentation such as vortex shedding and ultrasonic flow meters were tested alongside each other, and compared with flow measurements from calibration tanks in the flow loop.

  8. IAEA's experience in compiling a generic component reliability data base

    International Nuclear Information System (INIS)

    Tomic, B.; Lederman, L.

    1991-01-01

    Reliability data are essential in probabilistic safety assessment, with component reliability parameters being particularly important. Plant-specific component failure data would be most appropriate, but such data are rather limited. However, similar components are used in different designs, so generic data, that is, all data not specific to the plant being analyzed but relating to components more generally, are important. The International Atomic Energy Agency has compiled the Generic Component Reliability Data Base from data available in the open literature. It is part of the IAEA computer code package for fault/event tree analysis. The Data Base contains 1010 different records covering most of the components used in probabilistic safety analyses of nuclear power plants. The data base input was quality controlled and data sources were noted. The data compilation procedure and problems associated with using generic data are explained. (UK)

  9. Web- and system-code based, interactive, nuclear power plant simulators

    International Nuclear Information System (INIS)

    Kim, K. D.; Jain, P.; Rizwan, U.

    2006-01-01

    Using two different approaches, on-line, web- and system-code based graphical user interfaces have been developed for reactor system analysis. Both are LabVIEW (graphical programming language developed by National Instruments) based systems that allow local users, as well as those at remote sites, to run the system code, interact with it, and view its results in a web browser. In the first approach, only the data written by the system code to a tab-separated ASCII output file are accessed and displayed graphically. In the second approach, LabVIEW virtual instruments are coupled with the system code as dynamic link libraries (DLL). RELAP5 is used as the system code to demonstrate the capabilities of these approaches. From collaborative projects between teams in geographically remote locations to providing system-code experience to distance education students, these tools can be very beneficial in many areas of teaching and R and D. (authors)

  10. Provenance study of obsidian samples by using portable and conventional X ray fluorescence spectrometers. Performance comparison of both instrumentations

    International Nuclear Information System (INIS)

    Cristina Vazquez

    2012-01-01

    The potential of portable instrumentation lies in the possibility of in situ determinations. Sampling, packaging and transport of samples from the site to the laboratory are avoided, and the analysis becomes entirely non-destructive. However, detection limits for light elements are, in most cases, a limitation for quantification purposes. In this work a comparison is performed between the results obtained with a laboratory-based X ray fluorescence spectrometer and a portable instrument. A set of 76 obsidian archaeological specimens from northwest Patagonia, Argentina, was used to carry out the study. Samples were collected in the area of the middle and high basin of the Limay River. The analytical information obtained with both instruments was complemented with Principal Component Analysis in order to define groups and identify provenance sources. The information from both instruments leads to the same conclusion about sample provenance and the mobility of hunter-gatherer groups. Three groups of sources were identified in both cases, matching the geographical information. Also, the same sets of outlier samples, not associated with these sources, were found. Artifact samples were associated mainly with the closest sources, but some of them are related to sources located more than three hundred kilometers away, evidencing the large mobility of the hunter-gatherers through obsidian interchange. No significant differences were found between the concentration values obtained by the laboratory-based instrument and the portable one. (author)

  11. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  12. Mobile Instruments Measure Atmospheric Pollutants

    Science.gov (United States)

    2009-01-01

    As a part of NASA's active research of the Earth's atmosphere, which has included missions such as the Atmospheric Laboratory of Applications and Science (ATLAS, launched in 1992) and the Total Ozone Mapping Spectrometer (TOMS, launched on the Earth Probe satellite in 1996), the Agency also performs ground-based air pollution research. The ability to measure trace amounts of airborne pollutants precisely and quickly is important for determining natural patterns and human effects on global warming and air pollution, but until recent advances in field-grade spectroscopic instrumentation, this rapid, accurate data collection was limited and extremely difficult. In order to understand causes of climate change and airborne pollution, NASA has supported the development of compact, low power, rapid response instruments operating in the mid-infrared "molecular fingerprint" portion of the electromagnetic spectrum. These instruments, which measure atmospheric trace gases and airborne particles, can be deployed in mobile laboratories - customized ground vehicles, typically - to map distributions of pollutants in real time. The instruments must be rugged enough to operate rapidly and accurately, despite frequent jostling that can misalign, damage, or disconnect sensitive components. By measuring quickly while moving through an environment, a mobile laboratory can correlate data and geographic points, revealing patterns in the environment's pollutants. Rapid pollutant measurements also enable direct determination of pollutant sources and sinks (mechanisms that remove greenhouse gases and pollutants), providing information critical to understanding and managing atmospheric greenhouse gas and air pollutant concentrations.

  13. Two-Component Structure of the Radio Source 0014+813 from VLBI Observations within the CONT14 Program

    Science.gov (United States)

    Titov, O. A.; Lopez, Yu. R.

    2018-03-01

    We consider a method of reconstructing the structure delay of extended radio sources without constructing their radio images. The residuals derived after the adjustment of geodetic VLBI observations are used for this purpose. We show that the simplest model of a radio source consisting of two point components can be represented by four parameters (the angular separation of the components, the mutual orientation relative to the poleward direction, the flux-density ratio, and the spectral index difference) that are determined for each baseline of a multi-baseline VLBI network. The efficiency of this approach is demonstrated by estimating the coordinates of the radio source 0014+813 observed during the two-week CONT14 program organized by the International VLBI Service (IVS) in May 2014. Large systematic deviations have been detected in the residuals of the observations for the radio source 0014+813. The averaged characteristics of the radio structure of 0014+813 at a frequency of 8.4 GHz can be calculated from these deviations. Our modeling using four parameters has confirmed that the source consists of two components at an angular separation of 0.5 mas in the north-south direction. Using the structure delay when adjusting the CONT14 observations leads to a correction of the average declination estimate for the radio source 0014+813 by 0.070 mas.
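    The two-point-component model behind such four-parameter fits can be written down directly. This is a standard interferometry sketch with symbols chosen here, not notation taken from the paper: for components with flux-density ratio r (secondary to primary, r ≤ 1), separation vector s, and projected baseline u measured in wavelengths, the complex visibility and the structure phase are

```latex
% Two point components: flux ratio r, separation vector \vec{s},
% projected baseline \vec{u} in wavelengths (symbols chosen for this sketch)
V(\vec{u}) = \frac{1 + r\, e^{-2\pi i\, \vec{u}\cdot\vec{s}}}{1 + r},
\qquad
\phi_{\mathrm{str}}(\vec{u}) = \arg V(\vec{u}),
\qquad
\tau_{\mathrm{str}} = \frac{\partial \phi_{\mathrm{str}}}{\partial \omega}
```

where the structure delay τ_str enters the group-delay observable as the frequency derivative of the structure phase (sign conventions vary between analysis packages). A frequency-dependent ratio r, via the spectral-index difference between the components, makes the structure phase vary across the observing band, which is what multi-baseline residuals constrain.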

  14. Fundamental principles for a nuclear design and structural analysis code for HTR components operating at temperatures above 800 °C

    International Nuclear Information System (INIS)

    Nickel, H.; Schubert, F.

    1985-01-01

    With reference to the special characteristics of an HTR plant for the supply of nuclear process heat, the investigation of the fundamental principles to form the basis of a high temperature nuclear structural design code has been described. As examples, preliminary design values are proposed for the creep rupture and fatigue behaviour. The linear damage accumulation rule is proposed, for practical reasons, for the determination of service life, and the difficulties in using this rule are discussed. Finally, using the data obtained in structural analysis, the main areas of investigation which will lead to improvements in the utilization of the materials are discussed. Based on the current information, the working group ''Design Code'' believes that a service life of 70000 h can be achieved for the heat-exchanging components operating above 800 °C. (orig.)

  15. CERCA's fuel elements instrumentation manufacturing

    International Nuclear Information System (INIS)

    Harbonnier, G.; Jarousse, C.; Pin, T.; Febvre, M.; Colomb, P.

    2005-01-01

    When research and test reactor operators wish to further understand fuel element behaviour during operation, as well as to master irradiation conditions, they carry out neutron and thermo-hydraulic analyses. For thermal calculations, the codes used have to be validated beforehand, at least in the range of the reactor's operational safety limits. When further investigations are requested, either by safety authorities or for the reactor's own needs, instrumented tools are the ultimate solution for providing representative measurements. Such measurements can be conducted for validating thermal calculation codes, at nominal operating conditions as well as during transient ones, or for providing numerous useful data in the frame of a new product qualification programme. CERCA, with many years of experience in implanting thermocouples in various product designs, presents in this poster its manufacturing background on instrumented elements, plates and targets. (author)

  16. Fetal source extraction from magnetocardiographic recordings by dependent component analysis

    Energy Technology Data Exchange (ETDEWEB)

    Araujo, Draulio B de [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Barros, Allan Kardec [Department of Electrical Engineering, Federal University of Maranhao, Sao Luis, Maranhao (Brazil); Estombelo-Montesco, Carlos [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Zhao, Hui [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Filho, A C Roque da Silva [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Baffa, Oswaldo [Department of Physics and Mathematics, FFCLRP, University of Sao Paulo, Ribeirao Preto, SP (Brazil); Wakai, Ronald [Department of Medical Physics, University of Wisconsin, Madison, WI (United States); Ohnishi, Noboru [Department of Information Engineering, Nagoya University (Japan)

    2005-10-07

    Fetal magnetocardiography (fMCG) has been extensively reported in the literature as a non-invasive, prenatal technique that can be used to monitor various functions of the fetal heart. However, fMCG signals often have low signal-to-noise ratio (SNR) and are contaminated by strong interference from the mother's magnetocardiogram signal. A promising, efficient tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). Herein we propose an algorithm based on a variation of ICA, where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We model the system using autoregression, and identify the signal component of interest from the poles of the autocorrelation function. We show that the method is effective in removing the maternal signal, and is computationally efficient. We also compare our results to more established ICA methods, such as FastICA.
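    The delay-based extraction idea can be sketched with a classical second-order method of the same family: an AMUSE-style eigen-decomposition of a time-lagged covariance, where the lag would come from an autocorrelation analysis. This is not the authors' algorithm, and the two synthetic "maternal"/"fetal" rhythms below are purely illustrative.

```python
import numpy as np

def amuse(X, lag):
    """AMUSE-style separation: whiten the mixtures, then eigendecompose one
    symmetrized time-lagged covariance of the whitened data."""
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(d ** -0.5) @ E.T                 # whitening matrix
    Z = W @ X
    C_tau = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C_tau = (C_tau + C_tau.T) / 2.0                  # symmetrize
    _, V = np.linalg.eigh(C_tau)
    return V.T @ Z                                   # estimated sources, one per row

# Two periodic rhythms with different periods, mixed into two channels
t = np.arange(20000)
s1 = np.sign(np.sin(2 * np.pi * t / 400))            # slower square-wave rhythm
s2 = np.sin(2 * np.pi * t / 100)                     # faster sinusoidal rhythm
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])               # arbitrary mixing matrix
X = A @ S
X = X - X.mean(axis=1, keepdims=True)
Y = amuse(X, lag=25)  # lag chosen where the two autocorrelations differ strongly
```

Separation succeeds here because the two sources have clearly different autocorrelations at the chosen lag; with real fMCG data the lag selection and preprocessing are the delicate parts.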

  17. A Monte Carlo multiple source model applied to radiosurgery narrow photon beams

    International Nuclear Information System (INIS)

    Chaves, A.; Lopes, M.C.; Alves, C.C.; Oliveira, C.; Peralta, L.; Rodrigues, P.; Trindade, A.

    2004-01-01

    Monte Carlo (MC) methods are nowadays often used in the field of radiotherapy. Through successive steps, radiation fields are simulated, producing source Phase Space Data (PSD) that enable a dose calculation with good accuracy. Narrow photon beams used in radiosurgery can also be simulated by MC codes. However, the poor efficiency in simulating these narrow photon beams produces PSD whose quality prevents calculating dose with the required accuracy. To overcome this difficulty, a multiple source model was developed that enhances the quality of the reconstructed PSD, also reducing the computation time and storage requirements. This multiple source model was based on the full MC simulation, performed with the MC code MCNP4C, of the Siemens Mevatron KD2 (6 MV mode) linear accelerator head and additional collimators. The full simulation allowed the characterization of the particles coming from the accelerator head and from the additional collimators that shape the narrow photon beams used in radiosurgery treatments. Eight relevant photon virtual sources were identified from the full characterization analysis. Spatial and energy distributions were stored in histograms for the virtual sources representing the accelerator head components and the additional collimators. The photon directions were calculated for the virtual sources representing the accelerator head components, whereas for the virtual sources representing the additional collimators they were recorded into histograms. All these histograms were included in the DPM MC code and, using a sampling procedure that reconstructed the PSDs, dose distributions were calculated in a water phantom divided into 20000 voxels of 1×1×5 mm³. The model accurately calculates dose distributions in the water phantom for all the additional collimators; for depth dose curves, associated errors at 2σ were lower than 2.5% down to a depth of 202.5 mm for all the additional collimators, and for profiles at various depths, deviations between measured
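    The "stored histograms plus sampling" step can be sketched generically. This is not the authors' DPM implementation; the spectrum shape, bin layout, and function names below are invented for illustration, showing only the standard inverse-CDF replay of a histogrammed virtual-source distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_from_histogram(bin_edges, counts, n, rng):
    """Draw n samples from a histogrammed distribution by inverse-CDF sampling,
    uniform within each bin (as when replaying a stored phase-space histogram)."""
    p = np.array(counts, dtype=float)
    p /= p.sum()
    cdf = np.cumsum(p)
    u = rng.random(n)
    idx = np.searchsorted(cdf, u)             # pick a bin from the CDF
    idx = np.minimum(idx, len(p) - 1)         # guard against round-off at u ~ 1
    lo, hi = bin_edges[idx], bin_edges[idx + 1]
    return lo + rng.random(n) * (hi - lo)     # uniform position inside the bin

# Illustrative "virtual source" energy spectrum for a 6 MV beam (made-up shape)
edges = np.linspace(0.0, 6.0, 61)                      # bin edges in MeV
centers = 0.5 * (edges[:-1] + edges[1:])
counts = centers * np.exp(-centers / 1.5)              # peaked, bremsstrahlung-like
energies = sample_from_histogram(edges, counts, 100000, rng)
```

The same routine, applied per histogrammed quantity (energy, radial position, direction), is how a stored multi-source model regenerates particles without rerunning the full accelerator-head simulation.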

  18. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
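    The kind of hierarchical, attribute-tagged representation described above can be sketched in a few lines. The element and attribute names here are invented for illustration and do not reproduce the actual ICD-10 markup or the CEN ClaML schema; the point is only that classification codes, labels, and hierarchy become directly machine-queryable.

```python
import xml.etree.ElementTree as ET

# Hypothetical markup for a fragment of a hierarchical classification
doc = """
<classification system="ICD-10" version="illustrative">
  <class code="E10-E14" kind="block" label="Diabetes mellitus">
    <class code="E10" kind="category" label="Insulin-dependent diabetes mellitus">
      <class code="E10.1" kind="subcategory" label="With ketoacidosis"/>
    </class>
  </class>
</classification>
"""
root = ET.fromstring(doc)

def find_class(node, code):
    """Depth-first lookup of a classification code in the hierarchy."""
    for cls in node.iter("class"):
        if cls.get("code") == code:
            return cls
    return None

hit = find_class(root, "E10.1")
label = hit.get("label")
```

Because the hierarchy is explicit in the document structure, coding software can walk parent blocks, validate code formats, or link entries to external sources (the topic-map idea) without any database beyond the XML itself.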

  19. Twenty-fifth water reactor safety information meeting: Proceedings. Volume 3: Thermal hydraulic research and codes; Digital instrumentation and control; Structural performance

    International Nuclear Information System (INIS)

    Monteleone, S.

    1998-04-01

    This three-volume report contains papers presented at the conference. The papers are printed in the order of their presentation in each session and describe progress and results of programs in nuclear safety research conducted in this country and abroad. Foreign participation in the meeting included papers presented by researchers from France, Japan, Norway, and Russia. The titles of the papers and the names of the authors have been updated and may differ from those that appeared in the final program of the meeting. This volume contains the following: (1) thermal hydraulic research and codes; (2) digital instrumentation and control; (3) structural performance

  20. Development and validation of corium oxidation model for the VAPEX code

    International Nuclear Information System (INIS)

    Blinkov, V.N.; Melikhov, V.I.; Davydov, M.V.; Melikhov, O.I.; Borovkova, E.M.

    2011-01-01

    In light water reactor core melt accidents, the molten fuel (corium) can be brought into contact with coolant water in the course of the melt relocation in-vessel and ex-vessel, as well as in an accident mitigation action of water addition. Mechanical energy release from such an interaction is of interest in evaluating the structural integrity of the reactor vessel as well as of the containment. Usually, the source of the energy release is considered to be the rapid transfer of heat from the molten fuel to the water ('vapor explosion'). When the fuel contains a chemically reactive metal component, there can be an additional source of energy release: the heat release and hydrogen production due to the metal-water chemical reaction. In the Electrogorsk Research and Engineering Center the computer code VAPEX (VAPor EXplosion) has been developed for analysis of the molten fuel coolant interaction. A multifield approach is used for modeling the dynamics of the following phases: water, steam, melt jet, melt droplets, and debris. The VAPEX code was successfully validated against FARO experimental data. Hydrogen generation was observed in the FARO tests even though the corium did not contain a metal component. Since the reason for this hydrogen generation was not clear, a simplified empirical model was implemented in the VAPEX code to take into account the contribution of hydrogen to the pressure increase. This paper describes a new, more detailed model of hydrogen generation due to the metal-water chemical reaction and the results of its validation against the ZREX experiments. (orig.)
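    The scale of the chemical energy and hydrogen source can be illustrated with plain stoichiometry. The VAPEX model itself is not reproduced here; this sketch assumes zirconium as the reactive metal and ignores all reaction kinetics, so it gives only an upper bound for a given reacted mass.

```python
# Stoichiometric bound on hydrogen from the zirconium-water reaction
#   Zr + 2 H2O -> ZrO2 + 2 H2   (exothermic, ~6.5 MJ per kg of Zr reacted)
# A real oxidation model limits the reacted fraction kinetically;
# this sketch only evaluates the stoichiometric upper bound.

M_ZR = 91.224e-3   # molar mass of Zr, kg/mol
M_H2 = 2.016e-3    # molar mass of H2, kg/mol

def h2_from_zr(mass_zr_reacted):
    """Upper-bound hydrogen mass (kg) from fully oxidizing the given Zr mass (kg)."""
    mol_zr = mass_zr_reacted / M_ZR
    return 2.0 * mol_zr * M_H2   # two moles of H2 per mole of Zr

h2_per_kg = h2_from_zr(1.0)      # roughly 0.044 kg of H2 per kg of Zr
```

Even this bound shows why hydrogen matters for the pressure load: tens of grams of H2 per kilogram of reacted metal is a large gas inventory at containment scale.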

  1. Version 4.00 of the MINTEQ geochemical code

    Energy Technology Data Exchange (ETDEWEB)

    Eary, L.E.; Jenne, E.A.

    1992-09-01

    The MINTEQ code is a thermodynamic model that can be used to calculate solution equilibria for geochemical applications. Included in the MINTEQ code are formulations for ionic speciation, ion exchange, adsorption, solubility, redox, gas-phase equilibria, and the dissolution of finite amounts of specified solids. Since the initial development of the MINTEQ geochemical code, a number of undocumented versions of the source code and data files have come into use at the Pacific Northwest Laboratory (PNL). This report documents these changes, describes source code modifications made for the Aquifer Thermal Energy Storage (ATES) program, and provides comprehensive listings of the data files. A version number of 4.00 has been assigned to the MINTEQ source code and the individual data files described in this report.

  3. Simulation of electromagnetic aggression on components

    International Nuclear Information System (INIS)

    Adoun, G.; Coumar, O.

    1997-01-01

    Numerical simulation techniques can be used to study the behaviour of electronic components exposed to electromagnetic aggression. This article discusses a CW analysis of a CMOS-technology logic NAND gate under electromagnetic aggression of different amplitudes (2.5 V and 5 V), frequencies (100 MHz and 1 GHz) and phases. Numerical simulations were conducted using three codes: the Spice code was used to solve the electronic circuit, while the Atlas and Dessis codes were used to examine internal component behaviour. (authors)

  4. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems, as well as the thiol redox proteome, in space and time in biological systems. The code is richly elaborated in oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology for assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for the identification of redox partners in redox proteomics and redox metabolomics. The complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis for a modern redox medicine.

  5. Novel mass spectrometric instrument for gaseous and particulate characterization and monitoring

    International Nuclear Information System (INIS)

    Coggiola, M.J.

    1994-02-01

    SRI International will develop a unique new instrument that will be capable of providing real-time (<1 minute), quantitative, chemical characterization of gaseous and particulate pollutants generated from DOE waste cleanup activities. The instrument will be capable of detecting and identifying volatile organic compounds, polynuclear aromatic hydrocarbons, heavy metals, and transuranic species released during waste cleanup activities. The instrument will be unique in its ability to detect and quantify in real-time these diverse pollutants in both vapor and particulate form. The instrument to be developed under this program will consist of several major components: (1) an isokinetic sampler capable of operating over a wide range of temperatures (up to 500 K) and flow rates; (2) a high-pressure to low-pressure transition and sampling region that efficiently separates particles from vapor-phase components for separate, parallel analyses; (3) two small mass spectrometers, one optimized for organic analysis using a unique field ionization source and one optimized for particulate characterization using thermal pyrolysis and electron-impact ionization (EI); and (4) a powerful personal computer for control and data acquisition. Initially, the instrument will be developed for targeted use in conjunction with the K-1435 Toxic Substances Control Act (TSCA) incinerator at the Oak Ridge National Laboratory K-25 site. Ultimately, the instrument will be designed to operate in the field at any cleanup site, located close to the stack or process vent, providing the plant operations personnel with real-time information and alarm capabilities. In addition, this instrument will be very broadly applicable to cleanup or sampling, for example, any time contaminated soil is moved or disturbed.

  6. New generation low power radiation survey instruments

    International Nuclear Information System (INIS)

    Waechter, D.A.; Bjarke, G.O.; Trujillo, F.; Umbarger, C.J.; Wolf, M.A.

    1984-01-01

    A number of new, ultra-low-powered radiation instruments have recently been developed at Los Alamos. Among these are two instruments which use a novel power source to eliminate costly batteries. The newly developed gamma-detecting radiac, nicknamed the Firefly, and the alpha-particle-detecting instrument, called the Simple Cordless Alpha Monitor, both use recent advances in miniaturization and power-saving electronics to yield devices which are small, rugged, and very power-frugal. The two instruments consume so little power that the need for batteries to run them is eliminated. They are, instead, powered by a charged capacitor which will operate the instruments for an hour or more. Use of a capacitor as a power source eliminates many problems commonly associated with battery-operated instruments, such as having to open the case to change batteries, battery storage life, and availability of batteries in the field, and also offers some savings in weight. Both line power and mechanical sources are used to charge the storage capacitors which power the instruments.
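    The capacitor-as-battery idea above comes down to simple energy bookkeeping: the stored energy is E = ½CV², and only the fraction above the electronics' minimum operating voltage is usable. The numbers in the sketch below are hypothetical, chosen only to show that an hour-scale runtime is plausible.

```python
# Energy bookkeeping for a capacitor-powered instrument.  Stored energy is
# E = 0.5 * C * V^2; only the part above the minimum operating voltage of
# the electronics is usable.  All numbers here are hypothetical.

def runtime_hours(capacitance_f, v_full, v_min, load_w):
    """Hours of operation discharging a capacitor from v_full down to v_min."""
    usable_j = 0.5 * capacitance_f * (v_full ** 2 - v_min ** 2)
    return usable_j / load_w / 3600.0

# A 100 F capacitor charged to 5 V, a 2 V electronics cutoff, and a
# 10 mW ultra-low-power load:
print(f"{runtime_hours(100.0, 5.0, 2.0, 0.010):.1f} h")
```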

  7. Network coding for multi-resolution multicast

    DEFF Research Database (Denmark)

    2013-01-01

    A method, apparatus and computer program product for utilizing network coding for multi-resolution multicast is presented. A network source partitions source content into a base layer and one or more refinement layers. The network source receives a respective one or more push-back messages from one or more network destination receivers, the push-back messages identifying the one or more refinement layers suited for each one of the one or more network destination receivers. The network source computes a network code involving the base layer and the one or more refinement layers for at least one of the one or more network destination receivers, and transmits the network code to the one or more network destination receivers in accordance with the push-back messages.
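    The layering idea above can be sketched with random linear network coding over GF(2). The actual scheme is more general (larger fields, in-network recoding), so this is only an illustration of how a push-back message restricts which layers enter a receiver's coded combinations.

```python
import random

# Layered random linear network coding over GF(2) -- an illustration only.
# Packets are small integers used as bit vectors; `encode` mixes only the
# layers a receiver requested in its push-back message.

def encode(packets, k):
    """Return k random GF(2) combinations of `packets` as (coeff_mask, payload)."""
    coded = []
    for _ in range(k):
        mask = random.randrange(1, 1 << len(packets))  # nonzero coefficients
        payload = 0
        for i, p in enumerate(packets):
            if mask >> i & 1:
                payload ^= p
        coded.append((mask, payload))
    return coded

def decodable(coded, n):
    """Gaussian elimination over GF(2): can all n source packets be recovered?"""
    rows = [mask for mask, _ in coded]
    rank = 0
    for bit in range(n):
        pivot_idx = next((i for i, r in enumerate(rows) if r >> bit & 1), None)
        if pivot_idx is None:
            continue
        pivot = rows.pop(pivot_idx)
        rows = [r ^ pivot if r >> bit & 1 else r for r in rows]
        rank += 1
    return rank == n

base = [0b1010, 0b0111]   # base-layer packets
refine = [0b1100]         # one refinement-layer packet
# Receiver that pushed back for the base layer only vs. base + refinement:
print(decodable(encode(base, 4), 2), decodable(encode(base + refine, 5), 3))
```

    With enough coded packets, each receiver recovers exactly the layers it asked for; the rank check plays the role of the decoder's ability to invert the received coefficient matrix.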

  8. Data Release Report for Source Physics Experiment 1 (SPE-1), Nevada National Security Site

    Energy Technology Data Exchange (ETDEWEB)

    Townsend, Margaret [NSTec; Mercadente, Jennifer [NSTec

    2014-04-28

    The first Source Physics Experiment shot (SPE-1) was conducted in May 2011. The explosive source was a chemical charge of approximately 100 kilograms TNT equivalent, set at a depth of 60 meters. It was recorded by an extensive set of instrumentation that includes sensors at both near-field (less than 100 meters) and far-field (more than 100 meters) distances. The near-field instruments consisted of three-component accelerometers deployed in boreholes around the shot and a set of single-component vertical accelerometers on the surface. The far-field network comprised a variety of seismic and acoustic sensors, including short-period geophones, broadband seismometers, three-component accelerometers, and rotational seismometers at distances of 100 meters to 25 kilometers. This report coincides with the release of these data to analysts and organizations that are not participants in this program. It describes the first Source Physics Experiment and the various types of near-field and far-field data that are available.

  9. Anthropomorphic Coding of Speech and Audio: A Model Inversion Approach

    Directory of Open Access Journals (Sweden)

    W. Bastiaan Kleijn

    2005-06-01

    Auditory modeling is a well-established methodology that provides insight into human perception and that facilitates the extraction of signal features that are most relevant to the listener. The aim of this paper is to provide a tutorial on perceptual speech and audio coding using an invertible auditory model. In this approach, the audio signal is converted into an auditory representation using an invertible auditory model. The auditory representation is quantized and coded. Upon decoding, it is then transformed back into the acoustic domain. This transformation converts a complex distortion criterion into a simple one, thus facilitating quantization with low complexity. We briefly review past work on auditory models and describe in more detail the components of our invertible model and its inversion procedure, that is, the method to reconstruct the signal from the output of the auditory model. We summarize attempts to use the auditory representation for low-bit-rate coding. Our approach also allows the exploitation of the inherent redundancy of the human auditory system for the purpose of multiple description (joint source-channel) coding.
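    The core idea, that an invertible transform turns a complex distortion criterion into a simple one, can be illustrated with a toy stand-in for the auditory model: mu-law companding, which is exactly invertible, followed by a uniform quantizer in the companded domain. This is only an analogy; the paper's auditory model is far richer.

```python
import math

# Toy stand-in for an invertible perceptual transform: mu-law companding.
# Quantization error that is uniform in the companded domain corresponds
# to a roughly relative (perceptually motivated) error in the signal domain.

MU = 255.0

def compress(x):
    """Forward transform (exactly invertible), x in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Exact inverse of `compress`."""
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1.0) / MU, y)

def quantize(y, levels=16):
    """Uniform quantizer in the transformed domain (the 'simple' criterion)."""
    step = 2.0 / levels
    return round(y / step) * step

signal = [0.9, 0.05, -0.4, 0.002]
decoded = [expand(quantize(compress(s))) for s in signal]
print([f"{d:+.4f}" for d in decoded])
```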

  10. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  11. Assessment and management of ageing of major nuclear power plant components important to safety: In-containment instrumentation and control cables. Volume I

    International Nuclear Information System (INIS)

    2000-12-01

    and technical support organizations dealing with specific plant components addressed in the reports. The component addressed in the present report is the in-containment instrumentation and control (I and C) cables. The report presents, in two volumes, results of a Co-ordinated Research Project (CRP) on the Management of Ageing of In-containment Instrumentation and Control cables. Part I, Volume 1 presents information on current methods for assessing and managing ageing degradation of Instrumentation and Control cables in real NPP environments prepared by the CRP team. An important complement of this information is user perspectives on the application of these methods which are presented in Part II, Volume 1. Volume 2 contains annexes supporting the guidance of Volume 1 with more detailed information and examples provided by individual CRP participants. For a quick overview, readers should see Section 8 of Part I, Volume 1, which describes a systematic ageing management programme for Instrumentation and Control cables utilizing methods presented in the report; Section 9 of Part I, Volume 1, which presents CRP conclusions and recommendations; and Part II providing the application guidance from the user's perspective

  12. Assessment and management of ageing of major nuclear power plant components important to safety: In-containment instrumentation and control cables. Volume II

    International Nuclear Information System (INIS)

    2000-12-01

    and technical support organizations dealing with specific plant components addressed in the reports. The component addressed in the present report is the in-containment instrumentation and control (I and C) cables. The report presents, in two volumes, results of a Co-ordinated Research Project (CRP) on the Management of Ageing of In-containment Instrumentation and Control cables. Part I, Volume 1 presents information on current methods for assessing and managing ageing degradation of Instrumentation and Control cables in real NPP environments prepared by the CRP team. An important complement of this information is user perspectives on the application of these methods which are presented in Part II, Volume 1. Volume 2 contains annexes supporting the guidance of Volume 1 with more detailed information and examples provided by individual CRP participants. For a quick overview, readers should see Section 8 of Part I, Volume 1, which describes a systematic ageing management programme for Instrumentation and Control cables utilizing methods presented in the report; Section 9 of Part I, Volume 1, which presents CRP conclusions and recommendations; and Part II providing the application guidance from the user's perspective

  13. Separation of radiated sound field components from waves scattered by a source under non-anechoic conditions

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn

    2010-01-01

    to the source. Thus the radiated free-field component is estimated simultaneously with solving the inverse problem of reconstructing the sound field near the source. The method is particularly suited to cases in which the overall contribution of reflected sound in the measurement plane is significant....

  14. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    Science.gov (United States)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-block-size transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
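    The threshold-driven selection among block coders can be sketched as a recursive quadtree split. For brevity, the sketch "codes" each block by its mean rather than by a vector-quantized DCT coder (an assumption made purely to keep the variable-block-size logic visible).

```python
# Quadtree sketch of mixture block coding's variable block sizes.  Each
# block is "coded" here by its mean (a stand-in for the vector-quantized
# DCT coders of the real method); a block is split into four sub-blocks
# whenever the resulting mean-squared error exceeds the threshold.

def code_block(img, x, y, size, threshold, out):
    pixels = [img[y + j][x + i] for j in range(size) for i in range(size)]
    mean = sum(pixels) / len(pixels)
    mse = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    if mse <= threshold or size == 1:
        out.append((x, y, size, mean))      # accept this block as a leaf
    else:
        h = size // 2                       # too distorted: recurse
        for dx, dy in ((0, 0), (h, 0), (0, h), (h, h)):
            code_block(img, x + dx, y + dy, h, threshold, out)

# An 8x8 "image" with two flat halves and a sharp vertical edge:
img = [[10] * 4 + [200] * 4 for _ in range(8)]
blocks = []
code_block(img, 0, 0, 8, threshold=25.0, out=blocks)
print(len(blocks), "blocks")   # the edge forces a split into four 4x4 blocks
```

    Smooth regions end up in large cheap blocks while busy regions get smaller blocks, which is exactly the rate-allocation effect MBC relies on.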

  15. Orthogonal transformations for change detection, Matlab code

    DEFF Research Database (Denmark)

    2005-01-01

    Matlab code to do multivariate alteration detection (MAD) analysis, maximum autocorrelation factor (MAF) analysis, canonical correlation analysis (CCA) and principal component analysis (PCA) on image data.
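    As a minimal illustration of the PCA component of such a package, the sketch below computes the principal components of two-band data from the closed-form eigendecomposition of a 2x2 covariance matrix (plain Python, not the Matlab code itself).

```python
import math

# PCA of two-band samples via the closed-form eigendecomposition of the
# 2x2 covariance matrix (a plain-Python sketch, not the Matlab package).

def pca_2d(samples):
    """Return (eigenvalues descending, unit first principal component)."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / (n - 1)
    cyy = sum((s[1] - my) ** 2 for s in samples) / (n - 1)
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / (n - 1)
    tr, det = cxx + cyy, cxx * cyy - cxy * cxy
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    if abs(cxy) > 1e-12:
        v = (cxy, l1 - cxx)        # eigenvector for the larger eigenvalue
    else:
        v = (1.0, 0.0) if cxx >= cyy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (l1, l2), (v[0] / norm, v[1] / norm)

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.1), (4.0, 8.0)]   # roughly y = 2x
(l1, l2), pc1 = pca_2d(data)
print(f"PC1 explains {l1 / (l1 + l2):.1%} of the variance")
```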

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated using the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can easily be adapted to any other lattice code. The description of the code assumes a basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors arising from different sources of library data. (author)
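    The homogenisation procedure mentioned above is, at its core, flux-volume weighting of region cross sections. The sketch below uses illustrative numbers, not WIMSD library data.

```python
# Flux-volume weighting behind lattice homogenisation: region cross
# sections are collapsed to one cell-averaged value,
#     Sigma_hom = sum_i(phi_i * V_i * Sigma_i) / sum_i(phi_i * V_i).
# The region data below are illustrative, not taken from WIMSD libraries.

def homogenize(regions):
    """regions: iterable of (flux, volume, sigma); returns the flux-volume average."""
    num = sum(phi * vol * sig for phi, vol, sig in regions)
    den = sum(phi * vol for phi, vol, _ in regions)
    return num / den

# fuel, clad and moderator regions of a pin cell (illustrative values):
sigma_a = homogenize([(1.0, 0.55, 0.35),
                      (0.95, 0.10, 0.002),
                      (1.1, 1.35, 0.012)])
print(f"homogenized absorption cross section: {sigma_a:.4f} cm^-1")
```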

  17. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
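    An S-factor computation of this kind is a sum over decay components; the sketch below uses the standard MIRD conversion constant k = 2.13 g·rad/(μCi·h·MeV) with made-up emission data for a hypothetical nuclide, and is not the SFACTOR implementation itself.

```python
# MIRD-style S-factor sum of the kind SFACTOR implements:
#     S(X <- Y) = k * sum_i(y_i * E_i * AF_i) / m_X
# with yield y_i, energy E_i (MeV) and absorbed fraction AF_i per decay
# component, target mass m_X in grams, and k = 2.13 converting to
# rad/(uCi * h).  The emission data below are made up for illustration.

K = 2.13  # g * rad / (uCi * h * MeV)

def s_factor(emissions, target_mass_g):
    """emissions: list of (yield_per_decay, energy_mev, absorbed_fraction)."""
    return K * sum(y * e * af for y, e, af in emissions) / target_mass_g

# Hypothetical nuclide: a 0.5 MeV photon (AF 0.03) plus a 0.3 MeV beta
# fully absorbed in the source organ (AF 1.0), 310 g target organ:
s = s_factor([(0.9, 0.5, 0.03), (1.0, 0.3, 1.0)], 310.0)
print(f"S = {s:.2e} rad/(uCi*h)")
```

    The alpha and spontaneous-fission extensions mentioned in the abstract simply add further (y, E, AF) terms to the same sum.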

  18. Validation of containment thermal hydraulic computer codes for VVER reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jiri Macek; Lubomir Denk [Nuclear Research Institute Rez plc Thermal-Hydraulic Analyses Department CZ 250 68 Husinec-Rez (Czech Republic)

    2005-07-01

    Full text of publication follows: The Czech Republic operates 4 VVER-440 units, and two VVER-1000 units are being finalized (one of them is undergoing commissioning). The Thermal-Hydraulics Department of the Nuclear Research Institute Rez performs accident analyses for these plants using a number of computer codes. To model the primary and secondary circuit behaviour, the system codes ATHLET, CATHARE, RELAP and TRAC are applied. The containment and pressure-suppression system are modelled with the COCOSYS and MELCOR codes, the reactor power calculations (point and space-neutron kinetics) are made with DYN3D and NESTLE, and CFD codes (FLUENT, TRIO) are used for some specific problems. An integral part of the current Czech project 'New Energy Sources' is the selection of a new nuclear source. Within this and the preceding projects financed by the Czech Ministry of Industry and Trade and the EU PHARE, the Department carries out the systematic validation of thermal-hydraulic and reactor physics computer codes, applying data obtained on several experimental facilities as well as real operational data. One of the important components of the VVER 440/213 NPP is its containment with pressure suppression system (bubble condenser). For safety analyses of this system, computer codes of the MELCOR and COCOSYS type are used in the Czech Republic. These codes were developed for the containments of classic PWRs or BWRs. In order to apply these codes to VVER 440 systems, their validation on experimental facilities must be performed. The paper provides concise information on these activities of the NRI and its Thermal-Hydraulics Department. The containment system of the VVER 440/213, its functions and the approaches to ensuring its safety are described, with a definition of acceptance criteria. A detailed example of containment code validation on the EREC test facility (LOCA and MSLB) and the consequent utilisation of the results for real NPP purposes is included. An approach to

  19. A Code Generator for Software Component Services in Smart Devices

    OpenAIRE

    Ahmad, Manzoor

    2010-01-01

    A component is built to be reused, and reusability has a significant impact on component generality and flexibility requirements. A component model plays a critical role in the reusability of software components and defines a set of standards for component implementation, evolution, composition and deployment, and for standardization of the run-time environment for the execution of components. In component based development (CBD), standardization of the runtime environment includes specification of component’s int...

  20. Application and analysis of performance of dqpsk advanced modulation format in spectral amplitude coding ocdma

    International Nuclear Information System (INIS)

    Memon, A.

    2015-01-01

    SAC (Spectral Amplitude Coding) is a technique in OCDMA (Optical Code Division Multiple Access) that encodes and decodes data bits by utilizing the spectral components of a broadband source. Usually the OOK (On-Off Keying) modulation format is used in this encoding scheme. To make a SAC OCDMA network spectrally efficient, the advanced modulation format DQPSK (Differential Quaternary Phase Shift Keying) is applied, simulated and analyzed. An m-sequence code is encoded in the simulated setup. Performance for various lengths of the m-sequence code is also analyzed and displayed in pictorial form. The results of the simulation are evaluated with the help of electrical constellation diagrams, eye diagrams and bit error rate graphs. All the graphs indicate better transmission quality when the advanced DQPSK modulation format is used in the SAC OCDMA network as compared with OOK. (author)
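    An m-sequence of the kind encoded in such a setup can be generated with a linear feedback shift register. The sketch below uses a degree-3 Galois LFSR whose tap mask is maximal, so one period is 2³ − 1 = 7 chips containing 2² = 4 ones; this balance property is what SAC decoding exploits.

```python
# m-sequence (maximal-length LFSR sequence) generator of the kind encoded
# in a SAC OCDMA setup.  A Galois LFSR with a maximal tap mask visits all
# 2^n - 1 nonzero states, so one period is 2^n - 1 chips with 2^(n-1) ones.

def m_sequence(tap_mask, degree, seed=1):
    """Return one period (2**degree - 1 chips) of a Galois-LFSR sequence."""
    state, out = seed, []
    for _ in range((1 << degree) - 1):
        out.append(state & 1)
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= tap_mask
    return out

seq = m_sequence(0b110, 3)   # a maximal degree-3 tap mask
print(seq)                   # 7 chips: 4 ones, 3 zeros
```

    Longer codes (degree 5, 7, ...) follow from other maximal tap masks; the code length 2ⁿ − 1 is what the record's "various lengths" comparison varies.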