WorldWideScience

Sample records for source code implementation

  1. Remodularizing Java Programs for Improved Locality of Feature Implementations in Source Code

    DEFF Research Database (Denmark)

    Olszak, Andrzej; Jørgensen, Bo Nørregaard

    2011-01-01

    Explicit traceability between features and source code is known to help programmers understand and modify programs during maintenance tasks. However, the complex relations between features and their implementations are not evident from the source code of object-oriented Java programs. Consequently, the implementations of individual features are difficult to locate, comprehend, and modify in isolation. In this paper, we present a novel remodularization approach that improves the representation of features in the source code of Java programs. Both forward and reverse restructurings are supported through on-demand bidirectional restructuring between feature-oriented and object-oriented decompositions. The approach includes a feature location phase based on tracing program execution, a feature representation phase that reallocates classes into a new package structure based on single...
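
    To make the feature location phase concrete: the paper derives feature-to-code links from execution traces. The following Python sketch (our illustration with hypothetical trace data and names; the paper itself targets Java tooling) groups classes by the features whose traces touch them, separating single-feature classes, candidates for relocation into a feature package, from shared ones.

        from collections import defaultdict

        def locate_features(traces):
            # traces: feature name -> list of (class, method) events observed
            # while exercising that feature.
            usage = defaultdict(set)
            for feature, events in traces.items():
                for cls, _method in events:
                    usage[cls].add(feature)
            # Classes touched by exactly one feature can be relocated into that
            # feature's package; shared classes stay in a common package.
            single = {cls: next(iter(fs)) for cls, fs in usage.items() if len(fs) == 1}
            shared = {cls for cls, fs in usage.items() if len(fs) > 1}
            return single, shared

        traces = {
            "search": [("QueryParser", "parse"), ("Index", "lookup")],
            "export": [("ReportWriter", "write"), ("Index", "lookup")],
        }
        print(locate_features(traces))
        # ({'QueryParser': 'search', 'ReportWriter': 'export'}, {'Index'})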

  2. A proposed metamodel for the implementation of object oriented software through the automatic generation of source code

    Directory of Open Access Journals (Sweden)

    CARVALHO, J. S. C.

    2008-12-01

    Full Text Available During the development of software, one of the most visible risks and perhaps the biggest implementation obstacle relates to time management. All delivery deadlines for software versions must be met, but this is not always possible, sometimes due to delays in coding. This paper presents a metamodel for software implementation, which will give rise to a development tool for the automatic generation of source code, in order to make any development pattern transparent to the programmer, significantly reducing the time spent coding the artifacts that make up the software.

  3. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
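
    As a rough illustration of the encoder side described above, assuming a toy (7,4) Hamming parity-check matrix in place of the long LDPC codes and sum-product decoding a real system would use: the compressed description is the syndrome of the source block, plus a few source bits sent uncoded as doping bits. The decoder (omitted) would combine the syndrome, the doping bits and the side information.

        import numpy as np

        H = np.array([[1, 1, 0, 1, 1, 0, 0],    # toy parity-check matrix; a real
                      [1, 0, 1, 1, 0, 1, 0],    # system would use a long LDPC code
                      [0, 1, 1, 1, 0, 0, 1]])

        def sw_encode(x, doping_positions):
            # Compressed description: syndrome bits plus a few uncoded doping bits.
            syndrome = H.dot(x) % 2
            doping = x[list(doping_positions)]
            return syndrome, doping

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, size=7)           # one source block
        print(sw_encode(x, doping_positions=[0, 3]))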

  4. Implementation of inter-unit analysis for C and C++ languages in a source-based static code analyzer

    Directory of Open Access Journals (Sweden)

    A. V. Sidorin

    2015-01-01

    Full Text Available The proliferation of automated testing capabilities gives rise to a need for thorough testing of large software systems, including system inter-component interfaces. The objective of this research is to build a method for inter-procedural, inter-unit analysis that allows us to analyse large and complex software systems, including multi-architecture projects (like Android OS), and to support projects with complex build systems. Since the selected Clang Static Analyzer uses source code directly as input data, we needed to develop a special technique to enable inter-unit analysis for such an analyzer. This problem is of a special nature because of C and C++ language features that assume and encourage the separate compilation of project files. We describe the build and analysis system that was implemented around Clang Static Analyzer to enable inter-unit analysis and consider problems related to the support of complex projects. We also consider the task of merging abstract syntax trees of translation units and its related problems, such as handling conflicting definitions and supporting complex build systems and complex projects, including multi-architecture projects, with examples. We consider both issues related to language design and human-related mistakes (which may be intentional). We describe some heuristics that were used in this work to make the merging process faster. The developed system was tested using Android OS as the input to show that it is applicable even to such complicated projects. The system does not depend on the inter-procedural analysis method and allows its algorithm to be changed arbitrarily.
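
    The merging task can be pictured with a toy model (ours, not Clang's actual AST merger): definitions indexed per translation unit are folded into one table, and same-named symbols with different bodies, e.g. violations of the one-definition rule, are flagged rather than silently overwritten.

        def merge_units(units):
            # Fold per-unit symbol tables into one index; flag conflicting
            # definitions instead of silently overwriting them.
            merged, conflicts = {}, []
            for unit, defs in units.items():
                for symbol, body_hash in defs.items():
                    if symbol in merged and merged[symbol][1] != body_hash:
                        conflicts.append((symbol, merged[symbol][0], unit))
                    else:
                        merged.setdefault(symbol, (unit, body_hash))
            return merged, conflicts

        units = {
            "a.c": {"init": 0xBEEF, "log": 0x1234},
            "b.c": {"log": 0x9999, "run": 0x5678},   # conflicting 'log' definition
        }
        print(merge_units(units)[1])                 # [('log', 'a.c', 'b.c')]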

  5. Cost reducing code implementation strategies

    International Nuclear Information System (INIS)

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

    Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; and assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciates the opportunity provided by the Korea Atomic Industrial Forum and the Korean Nuclear Society to share our ideas and enhance global cooperation through the exchange of information and views on relevant topics.

  6. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side by shifting processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  7. Study of cold neutron sources: Implementation and validation of a complete computation scheme for research reactor using Monte Carlo codes TRIPOLI-4.4 and McStas

    International Nuclear Information System (INIS)

    Campioni, Guillaume; Mounier, Claude

    2006-01-01

    The main goal of this thesis on studies of cold neutron sources (CNS) in research reactors was to create a complete set of tools to design CNS efficiently. The work addresses the problem of running, for parametric studies, accurate simulations of experimental devices inside the reactor reflector. On one hand, deterministic codes have reasonable computation times but introduce problems in the geometrical description. On the other hand, Monte Carlo codes can compute on a precise geometry, but require computation times so long that parametric studies are impossible. To decrease this computation time, several developments were made in the Monte Carlo code TRIPOLI-4.4. An uncoupling technique is used to isolate a study zone in the complete reactor geometry. By recording boundary conditions (incoming flux), further simulations can be launched for parametric studies with a computation time reduced by a factor of 60 (case of the cold neutron source of the Orphee reactor). The short response time makes parametric studies with a Monte Carlo code possible. Moreover, using biasing methods, the flux can be recorded on the surface of the neutron guide entries (low solid angle) with a further gain in running time. Finally, the implementation of a coupling module between TRIPOLI-4.4 and the Monte Carlo code McStas, used for research in the condensed matter field, makes it possible to obtain fluxes after transmission through the neutron guides, and thus the neutron flux received by the samples studied by condensed matter scientists. This set of developments, involving TRIPOLI-4.4 and McStas, represents a complete computation scheme for research reactors: from the nuclear core, where neutrons are created, to the exit of the neutron guides, at the samples of matter. This complete calculation scheme is tested against ILL4 measurements of flux in cold neutron guides. (authors)

  8. Implementation of LT codes based on chaos

    International Nuclear Information System (INIS)

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

    Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of an LT code, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
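
    A minimal sketch of the encoding step, assuming a toy degree CDF rather than the robust soliton distribution: one Kent-map orbit drives the degree choice and a second one picks the neighbours, replacing the usual pseudo-random number generator.

        def kent(x, a=0.7):
            # Kent (skew tent) chaotic map on the unit interval.
            return x / a if x < a else (1 - x) / (1 - a)

        def lt_encode_symbol(data, deg_state, nbr_state, degree_cdf):
            # One chaotic orbit drives the degree choice via inverse-CDF lookup...
            deg_state = kent(deg_state)
            degree = next(d for d, p in enumerate(degree_cdf, 1) if deg_state <= p)
            # ...and a second, independent orbit picks the distinct neighbours.
            neighbours = set()
            while len(neighbours) < degree:
                nbr_state = kent(nbr_state)
                neighbours.add(int(nbr_state * len(data)) % len(data))
            symbol = 0
            for i in neighbours:
                symbol ^= data[i]               # encoding symbol: XOR of neighbours
            return symbol, sorted(neighbours), deg_state, nbr_state

        data = [1, 0, 1, 1, 0, 0, 1, 0]         # toy source block
        cdf = [0.5, 0.8, 1.0]                   # toy degree CDF, not robust soliton
        print(lt_encode_symbol(data, 0.31, 0.62, cdf)[:2])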

  9. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM) codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.

  10. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low-density parity-check accumulate (LDPCA) codes...
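
    The feedback mechanism itself is protocol-shaped and easy to sketch. The Python fragment below is ours, with the BCH syndrome decoding abstracted behind a try_decode stub: the decoder requests syndrome bits in increments until its check passes, so the spent rate adapts to the actual correlation.

        def rate_adaptive_decode(request_bits, try_decode, step=8, max_bits=512):
            # Ask for syndrome bits in increments until decoding succeeds.
            received = request_bits(step)
            while True:
                ok, estimate = try_decode(received)   # uses side information Y
                if ok:
                    return estimate, len(received)    # rate actually spent
                if len(received) >= max_bits:
                    raise RuntimeError("decoding failed at maximum rate")
                received += request_bits(step)        # feedback: request more

        # Toy stand-ins: the "channel" serves stored syndrome bits and the
        # "decoder" succeeds once 24 bits have arrived.
        stored = iter([0, 1] * 128)
        request_bits = lambda n: [next(stored) for _ in range(n)]
        try_decode = lambda r: (len(r) >= 24, "X-estimate" if len(r) >= 24 else None)
        print(rate_adaptive_decode(request_bits, try_decode))  # ('X-estimate', 24)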

  11. Measuring Modularity in Open Source Code Bases

    Directory of Open Access Journals (Sweden)

    Roberto Milev

    2009-03-01

    Full Text Available Modularity of an open source software code base has been associated with the growth of the software development community, the incentives for voluntary code contribution, and a reduction in the number of users who take code without contributing back to the community. As a theoretical construct, modularity links OSS to other domains of research, including organization theory, the economics of industry structure, and new product development. However, measuring the modularity of an OSS design has proven difficult, especially for large and complex systems. In this article, we describe some preliminary results of recent research at Carleton University that examines the evolving modularity of large-scale software systems. We describe a measurement method and a new modularity metric for comparing code bases of different sizes, introduce an open source toolkit that implements this method and metric, and provide an analysis of the evolution of the Apache Tomcat application server as an illustrative example of the insights gained from this approach. Although these results are preliminary, they open the door to further cross-discipline research that quantitatively links the concerns of business managers, entrepreneurs, policy-makers, and open source software developers.
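
    Metrics of this family are typically ratios over a dependency graph. As a toy stand-in (our own simplistic definition, not the Carleton toolkit's metric), one can measure the fraction of dependency edges that stay inside a package:

        def modularity_ratio(dependencies, package_of):
            # Fraction of dependency edges that stay inside one package.
            internal = sum(1 for a, b in dependencies
                           if package_of[a] == package_of[b])
            return internal / len(dependencies)

        deps = [("A", "B"), ("A", "C"), ("B", "D")]
        pkg = {"A": "core", "B": "core", "C": "util", "D": "io"}
        print(modularity_ratio(deps, pkg))    # 1/3 of edges are intra-package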

  12. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  13. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (mk, tk), where mk is a message generated by the source and tk is a time instant...

  14. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies may increase sensing capacity by a factor of several hundred. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or...

  15. Evaluation of Code Blue Implementation Outcomes

    Directory of Open Access Journals (Sweden)

    Bengü Özütürk

    2015-09-01

    Full Text Available Aim: In this study, we aimed to emphasize the importance of Code Blue implementation and to determine deficiencies in this regard. Methods: After obtaining ethics committee approval, the data of 225 patients' Code Blue calls between 2012 and January 2014 were retrospectively analyzed. The age and gender of the patients, the date and time of the call, the clinics giving the Code Blue, the time needed for the Code Blue team to arrive, the rates of false Code Blue calls, the reasons for Code Blue calls, and patient outcomes were investigated. Results: A total of 225 patients (149 male, 76 female) were evaluated in the study. The mean age of the patients was 54.1 years. 142 (67.2%) Code Blue calls occurred after hours and from the emergency unit. The mean time for the Code Blue team to arrive was 1.10 minutes. Spontaneous circulation was restored in 137 patients (60.8%); 88 (39.1%) died. The most commonly identified possible causes were of cardiac origin. Conclusion: This study showed that Code Blue implementation by a professional team within an efficient and targeted time increases the survival rate. Therefore, we conclude that Code Blue implementation carried out by a trained team is an essential standard in hospitals. (The Medical Bulletin of Haseki 2015; 53:204-8)

  16. Transmission imaging with a coded source

    International Nuclear Information System (INIS)

    Stoner, W.W.; Sage, J.P.; Braun, M.; Wilson, D.T.; Barrett, H.H.

    1976-01-01

    The conventional approach to transmission imaging is to use a rotating anode x-ray tube, which provides the small, brilliant x-ray source needed to cast sharp images of acceptable intensity. Stationary anode sources, although inherently less brilliant, are more compatible with the use of large area anodes, and so they can be made more powerful than rotating anode sources. Spatial modulation of the source distribution provides a way to introduce detailed structure in the transmission images cast by large area sources, and this permits the recovery of high resolution images in spite of the source diameter. The spatial modulation is deliberately chosen to optimize recovery of image structure; the modulation pattern is therefore called a 'code'. A variety of codes may be used; the essential mathematical property is that the code possess a sharply peaked autocorrelation function, because this property permits the decoding of the raw image cast by the coded source. Random point arrays, non-redundant point arrays, and the Fresnel zone pattern are examples of suitable codes. This paper is restricted to the case of the Fresnel zone pattern code, which has the unique additional property of generating raw images analogous to Fresnel holograms. Because the spatial frequencies of these raw images are extremely coarse compared with actual holograms, a photoreduction step onto a holographic plate is necessary before the decoded image may be displayed with the aid of coherent illumination.
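
    The decoding principle, correlating the raw image against the code, rests only on the sharply peaked autocorrelation. A one-dimensional numpy toy (ours; a random +/-1 code standing in for the Fresnel zone pattern) makes it concrete:

        import numpy as np

        rng = np.random.default_rng(1)
        code = rng.integers(0, 2, 64) * 2 - 1    # +/-1 code, peaked autocorrelation
        obj = np.zeros(64)
        obj[20], obj[35] = 1.0, 0.5              # two point features

        # Raw image: the object circularly convolved with the code pattern.
        raw = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(code)))
        # Decoding: cross-correlate the raw image with the code.
        decoded = np.real(np.fft.ifft(np.fft.fft(raw) * np.conj(np.fft.fft(code))))
        decoded /= len(code)                     # normalise by autocorrelation peak
        print(decoded.argmax())                  # strongest response near index 20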

  17. Code Forking, Governance, and Sustainability in Open Source Software

    Directory of Open Access Journals (Sweden)

    Juho Lindman

    2013-01-01

    Full Text Available The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility of forking code, affects the governance and sustainability of open source initiatives on three distinct levels: software, community, and ecosystem. On the software level, the right to fork makes planned obsolescence, versioning, vendor lock-in, end-of-support issues, and similar initiatives all but impossible to implement. On the community level, forking impacts both sustainability and governance through the power it grants the community to safeguard against unfavourable actions by corporations or project leaders. On the business-ecosystem level, forking can serve as a catalyst for innovation while simultaneously promoting better quality software through natural selection. Thus, forking helps keep open source initiatives relevant and presents opportunities for the development and commercialization of current and abandoned programs.

  18. Present state of the SOURCES computer code

    International Nuclear Information System (INIS)

    Shores, Erik F.

    2002-01-01

    In various stages of development for over two decades, the SOURCES computer code continues to calculate neutron production rates and spectra from four types of problems: homogeneous media, two-region interfaces, three-region interfaces and that of a monoenergetic alpha particle beam incident on a slab of target material. Graduate work at the University of Missouri - Rolla, in addition to user feedback from a tutorial course, provided the impetus for a variety of code improvements. Recently upgraded to version 4B, initial modifications to SOURCES focused on updates to the 'tape5' decay data library. Shortly thereafter, efforts focused on development of a graphical user interface for the code. This paper documents the Los Alamos SOURCES Tape1 Creator and Library Link (LASTCALL) and describes additional library modifications in more detail. Minor improvements and planned enhancements are discussed.

  19. Importance biasing scheme implemented in the PRIZMA code

    International Nuclear Information System (INIS)

    Kandiev, I.Z.; Malyshkin, G.N.

    1997-01-01

    The PRIZMA code is intended for Monte Carlo calculations of linear radiation transport problems. The code has wide capabilities to describe geometry, sources and material composition, and to obtain parameters specified by the user. It can follow the paths of a particle cascade (including neutrons, photons, electrons, positrons and heavy charged particles), taking into account possible transmutations. An importance biasing scheme was implemented to solve problems which require the calculation of functionals related to small probabilities (for example, problems of protection against radiation, detection problems, etc.). The scheme makes it possible to adapt the trajectory-building algorithm to the peculiarities of the problem.
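
    The generic mechanism behind such importance biasing schemes is splitting and Russian roulette at importance-region boundaries; the details of PRIZMA's implementation differ. A minimal sketch:

        import random

        def cross_boundary(pos, weight, imp_old, imp_new, rng=random.random):
            # Return the particles replacing one that crosses a boundary between
            # importance regions; statistical weight is conserved in expectation.
            ratio = imp_new / imp_old
            if ratio >= 1.0:                     # entering a more important region
                n = max(1, round(ratio))
                return [(pos, weight / n)] * n   # split into n lighter copies
            if rng() < ratio:                    # less important: Russian roulette
                return [(pos, weight / ratio)]   # survivor carries more weight
            return []                            # particle killed

        print(cross_boundary("cell-7", 1.0, imp_old=1.0, imp_new=3.0))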

  20. Image authentication using distributed source coding.

    Science.gov (United States)

    Lin, Yao-Chung; Varodayan, David; Girod, Bernd

    2012-01-01

    We present a novel approach using distributed source coding for image authentication. The key idea is to provide a Slepian-Wolf encoded quantized image projection as authentication data. This version can be correctly decoded with the help of an authentic image as side information. Distributed source coding provides the desired robustness against legitimate variations while detecting illegitimate modification. The decoder incorporating expectation maximization algorithms can authenticate images which have undergone contrast, brightness, and affine warping adjustments. Our authentication system also offers tampering localization by using the sum-product algorithm.
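
    The flavour of the authentication data can be conveyed with a toy digest (ours; the actual system Slepian-Wolf encodes the projection and decodes it with the candidate image as side information, which is omitted here): a coarsely quantised random projection changes little under a mild global adjustment but substantially under localised tampering.

        import numpy as np

        rng = np.random.default_rng(42)
        P = rng.standard_normal((32, 256))     # projection shared with the verifier

        def digest(image, step=8.0):
            # Coarsely quantised random projection of a 16x16 image.
            return np.round(P @ image.ravel() / step).astype(int)

        original = rng.random((16, 16))
        brightened = np.clip(original + 0.02, 0.0, 1.0)   # legitimate adjustment
        tampered = original.copy()
        tampered[4:8, 4:8] = 1.0                          # localised modification

        print((digest(original) != digest(brightened)).sum())  # typically few flips
        print((digest(original) != digest(tampered)).sum())    # typically many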

  1. Network Coding Applications and Implementations on Mobile Devices

    DEFF Research Database (Denmark)

    Fitzek, Frank; Pedersen, Morten Videbæk; Heide, Janus

    2010-01-01

    Network coding has attracted a lot of attention lately. The goal of this paper is to demonstrate that the implementation of network coding is feasible on mobile platforms. The paper will guide the reader through some examples and demonstrate uses for network coding. Furthermore, the paper will also show that the implementation of network coding is feasible today on commercial mobile platforms.

  2. Code Forking, Governance, and Sustainability in Open Source Software

    OpenAIRE

    Juho Lindman; Linus Nyman

    2013-01-01

    The right to fork open source code is at the core of open source licensing. All open source licenses grant the right to fork their code, that is, to start a new development effort using an existing code as its base. Thus, code forking represents the single greatest tool available for guaranteeing sustainability in open source software. In addition to bolstering program sustainability, code forking directly affects the governance of open source initiatives. Forking, and even the mere possibility...

  3. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    Science.gov (United States)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
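
    A minimal sketch of the idea with a (7,4) Hamming code (a toy block length; distortion-free only when the source block matches a coset leader, i.e. weight at most 1 here): the 3-bit syndrome is the compressed data, and the decoder returns the minimum-weight sequence with that syndrome.

        import numpy as np
        from itertools import product

        H = np.array([[1, 0, 1, 0, 1, 0, 1],     # parity-check matrix of the
                      [0, 1, 1, 0, 0, 1, 1],     # (7,4) Hamming code
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(x):                          # 7 source bits -> 3 syndrome bits
            return tuple(H.dot(x) % 2)

        def decompress(s):                        # minimum-weight coset leader
            candidates = (np.array(v) for v in product([0, 1], repeat=7))
            return min((v for v in candidates if compress(v) == tuple(s)),
                       key=lambda v: v.sum())

        x = np.array([0, 0, 0, 0, 1, 0, 0])       # sparse source block
        print(decompress(compress(x)))            # recovered exactly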

  4. Implementing and Testing the LINTAB, HEATER and PLOTTAB code package

    International Nuclear Information System (INIS)

    Cullen, D.E.; Smith, J.J.

    1987-07-01

    Enclosed is a description of the magnetic tape or floppy diskette containing the LINTAB, HEATER and PLOTTAB code package. In addition, detailed information is provided on the implementation and testing of these codes. These codes are documented in IAEA-NDS-84. (author)

  5. On the Combination of Multi-Layer Source Coding and Network Coding for Wireless Networks

    DEFF Research Database (Denmark)

    Krigslund, Jeppe; Fitzek, Frank; Pedersen, Morten Videbæk

    2013-01-01

    ... quality is developed. A linear coding structure designed to gracefully encapsulate layered source coding provides both low complexity of the utilised linear coding and robust erasure correction in the form of fountain coding capabilities. The proposed linear coding structure advocates efficient...

  6. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or direction from automated procedures, and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications, with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, and is hopefully informative and useful to anyone developing a modular code system of much sophistication. Overall, this report summarizes in a general way the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out.

  7. Implementation of Energy Code Controls Requirements in New Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Rosenberg, Michael I. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hart, Philip R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hatten, Mike [Solarc Energy Group, LLC, Seattle, WA (United States); Jones, Dennis [Group 14 Engineering, Inc., Denver, CO (United States); Cooper, Matthew [Group 14 Engineering, Inc., Denver, CO (United States)

    2017-03-24

    Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet studies that measure the savings from energy codes assume that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step involved compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.

  8. Research on Primary Shielding Calculation Source Generation Codes

    Science.gov (United States)

    Zheng, Zheng; Mei, Qiliang; Li, Hui; Shangguan, Danhua; Zhang, Guangchun

    2017-09-01

    Primary Shielding Calculation (PSC) plays an important role in reactor shielding design and analysis. In order to facilitate PSC, a source generation code is developed to generate cumulative distribution functions (CDFs) for the source particle sampling code of the J Monte Carlo Transport (JMCT) code, and a source particle sampling code is developed to sample source particle directions, types, coordinates, energies and weights from the CDFs. A further source generation code is developed to transform three-dimensional (3D) power distributions in xyz geometry to source distributions in r-θ-z geometry for the J Discrete Ordinate Transport (JSNT) code. Validation on the PSC models of the Qinshan No.1 nuclear power plant (NPP) and the CAP1400 and CAP1700 reactors is performed. Numerical results show that the theoretical model and the codes are both correct.
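
    The two halves of such a tool chain, building CDFs from a power map and inverse-transform sampling from them, can be sketched in one dimension (a toy reduction; the real codes handle 3D power maps and also sample direction, type, energy and weight):

        import bisect, random

        def build_cdf(power):
            # Normalise a per-cell power distribution into a CDF.
            total = sum(power)
            acc, cdf = 0.0, []
            for p in power:
                acc += p / total
                cdf.append(acc)
            return cdf

        def sample_cell(cdf, rng=random.random):
            # Inverse-transform sampling: pick the emitting cell.
            return bisect.bisect_left(cdf, rng())

        power = [1.0, 4.0, 3.0, 2.0]           # toy relative power per cell
        cdf = build_cdf(power)
        counts = [0] * len(power)
        for _ in range(10000):
            counts[sample_cell(cdf)] += 1
        print(counts)                          # roughly proportional to power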

  9. The Visual Code Navigator : An Interactive Toolset for Source Code Investigation

    NARCIS (Netherlands)

    Lommerse, Gerard; Nossin, Freek; Voinea, Lucian; Telea, Alexandru

    2005-01-01

    We present the Visual Code Navigator, a set of three interrelated visual tools that we developed for exploring large source code software projects from three different perspectives, or views: The syntactic view shows the syntactic constructs in the source code. The symbol view shows the objects a...

  10. Complete permutation Gray code implemented by finite state machine

    Directory of Open Access Journals (Sweden)

    Li Peng

    2014-09-01

    Full Text Available An enumeration method for complete permutation arrays is proposed. The list of n! permutations, based on a Gray code defined over the finite symbol set Z(n) = {1, 2, …, n}, is implemented by a finite state machine, named n-RPGCF. An RPGCF can be used to search for permutation codes and provides improved lower bounds on the maximum cardinality of a permutation code in some cases.
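
    The paper's n-RPGCF construction is not reproduced here, but the classic Steinhaus-Johnson-Trotter algorithm illustrates the same idea: it enumerates all n! permutations as a Gray code, successive permutations differing by one adjacent swap, driven by a small amount of per-element state.

        def sjt(n):
            # Steinhaus-Johnson-Trotter: yield all n! permutations of 1..n,
            # successive ones differing by one adjacent swap (a permutation
            # Gray code); 'direction' is the per-element machine state.
            perm = list(range(1, n + 1))
            direction = [-1] * n                 # all elements initially look left
            yield tuple(perm)
            while True:
                mobile = -1                      # largest element facing a smaller one
                for i, v in enumerate(perm):
                    j = i + direction[i]
                    if 0 <= j < n and perm[j] < v and (mobile < 0 or v > perm[mobile]):
                        mobile = i
                if mobile < 0:
                    return
                i, j = mobile, mobile + direction[mobile]
                perm[i], perm[j] = perm[j], perm[i]
                direction[i], direction[j] = direction[j], direction[i]
                for k, v in enumerate(perm):     # larger elements change direction
                    if v > perm[j]:
                        direction[k] = -direction[k]
                yield tuple(perm)

        print(list(sjt(3)))
        # [(1,2,3), (1,3,2), (3,1,2), (3,2,1), (2,3,1), (2,1,3)]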

  11. Source Code Stylometry Improvements in Python

    Science.gov (United States)

    2017-12-14

    Just as a person can be identified via their handwriting, or an author identified by their style of prose, programmers can be identified by their code. Provided a labelled training set of code samples (example in Fig. 1), the techniques used in stylometry can identify the author of a piece of code or even...

  12. Bit rates in audio source coding

    NARCIS (Netherlands)

    Veldhuis, Raymond N.J.

    1992-01-01

    The goal is to introduce and solve the audio coding optimization problem. Psychoacoustic results such as masking and excitation pattern models are combined with results from rate distortion theory to formulate the audio coding optimization problem. The solution of the audio optimization problem is a...

  13. Open Source Wifi Hotspot Implementation

    Directory of Open Access Journals (Sweden)

    Tyler Sondag

    2007-06-01

    Full Text Available The goal of this paper is to describe a design, including the hardware, software, and configuration, for an open source wireless network. The network designed will require authentication. While care will be taken to keep the authentication exchange secure, the network will otherwise transmit data without encryption.

  14. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    . Zulfikar

    2012-10-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. VHDL code which limits the range of integer values occupies less area than code which does not. This VHDL coding method is suitable for multi-stage circuits.
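
    A back-of-the-envelope companion to the idea (in Python rather than VHDL): the register width a synthesiser must allocate follows from the value range at each stage, so declaring tight ranges per stage, instead of a default-width integer, saves area.

        from math import ceil, log2

        def bits(hi):
            # Unsigned register width needed for values in 0..hi.
            return max(1, ceil(log2(hi + 1)))

        a_hi = b_hi = 15                          # two 4-bit operands
        print(bits(a_hi), bits(a_hi + b_hi), bits(a_hi * b_hi))
        # 4 5 8 -- versus 32 bits per stage for an unconstrained integer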

  15. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar .

    2015-05-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. VHDL code which limits the range of integer values occupies less area than code which does not. This VHDL coding method is suitable for multi-stage circuits.

  16. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X, given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information.

  17. Implementation of the kinetics in the transport code AZTRAN

    International Nuclear Information System (INIS)

    Duran G, J. A.; Del Valle G, E.; Gomez T, A. M.

    2017-09-01

    This paper shows the implementation of time dependence in the three-dimensional transport code AZTRAN (AZtlan TRANsport), which belongs to the AZTLAN platform for the analysis of nuclear reactors (currently under development). With this implementation, the AZTRAN code is able to numerically solve the time-dependent transport equation in XYZ geometry for several energy groups, using the discrete ordinates method Sn for the discretization of the angular variable, the nodal method RTN-0 for the spatial discretization, and the θ method for the discretization in time. Initially, the code solved only the steady-state neutron transport equation, so the temporal part was implemented by integrating the neutron transport equation with respect to time, together with the balance equations for the concentrations of delayed neutron precursors, to which the θ method was applied. After the direct kinetics had been implemented, the improved quasi-static method was added as a tool for reducing computation time: the angular flux is factored into the product of two functions, a shape function and an amplitude function, where the first is calculated over long time steps, called macro-steps, and the second over small time steps, called micro-steps. With the new version of AZTRAN, several benchmark problems taken from the literature were simulated; the problems are two- and three-dimensional and allowed the accuracy and stability of the code to be corroborated, showing in general good behavior on the reference tests. (Author)

  18. Data processing with microcode designed with source coding

    Science.gov (United States)

    McCoy, James A; Morrison, Steven E

    2013-05-07

    Programming for a data processor to execute a data processing application is provided using microcode source code. The microcode source code is assembled to produce microcode that includes digital microcode instructions with which to signal the data processor to execute the data processing application.

  19. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models...

  20. Fault tree analysis. Implementation of the WAM-codes

    International Nuclear Information System (INIS)

    Bento, J.P.; Poern, K.

    1979-07-01

    The report describes work going on at Studsvik on the implementation of the WAM code package for fault tree analysis. These codes, originally developed under EPRI contract by Science Applications Inc., allow, in contrast with other fault tree codes, all Boolean operations, thus permitting the modeling of 'NOT' conditions and dependent components. To make the implementation of these codes concrete, the auxiliary feed-water system of the Swedish BWR Oskarshamn 2 was chosen for the reliability analysis. For this system, both the mean unavailability and the probability density function of the top event (the undesired event) of the system fault tree were calculated, the latter using a Monte Carlo simulation technique. The present study is the first part of a work performed under contract with the Swedish Nuclear Power Inspectorate. (author)

  1. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    OpenAIRE

    Zulfikar, Z

    2012-01-01

    A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared...

  2. An Implementation of Interfacial Transport Equation into the CUPID code

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Jeong, Jae Jun

    2009-11-15

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAS, semi-implicit ICE, and SIMPLE. The governing equations for a two-phase flow are composed of mass, momentum, and energy conservation equations for each phase. These equation sets are closed by the interfacial transfer rates of mass, momentum, and energy. The interfacial transfer of mass, momentum, and energy occurs through the interfacial area, and this area plays an important role in the transfer rate. Flow-regime-based correlations are used for calculating the interfacial area in the traditional style of two-phase flow model. This approach is dependent upon the flow regime and is limited to the fully developed two-phase flow region. Its application to multi-dimensional two-phase flow has some limitations because it adopts the measured results of two-phase flow in a one-dimensional tube. The interfacial area concentration transport equation has been suggested in order to calculate the interfacial area without the interfacial area correlations. The source terms that close the interfacial area transport equation should be further developed for wider usage. In this study, the one-group interfacial area concentration transport equation has been implemented into the CUPID code. This interfacial area concentration transport equation can be used instead of the interfacial area concentration correlations for the bubbly flow region.

  3. An Implementation of Interfacial Transport Equation into the CUPID code

    International Nuclear Information System (INIS)

    Park, Ik Kyu; Cho, Heong Kyu; Yoon, Han Young; Jeong, Jae Jun

    2009-11-01

    A component-scale thermal hydraulic analysis code, CUPID (Component Unstructured Program for Interfacial Dynamics), is being developed for the analysis of components of a nuclear reactor, such as the reactor vessel, steam generator, containment, etc. It adopts a three-dimensional, transient, two-phase, three-field model. In order to develop the numerical schemes for the three-field model, various numerical schemes have been examined, including the SMAS, semi-implicit ICE, and SIMPLE. The governing equations for a two-phase flow are composed of mass, momentum, and energy conservation equations for each phase. These equation sets are closed by the interfacial transfer rates of mass, momentum, and energy. The interfacial transfer of mass, momentum, and energy occurs through the interfacial area, and this area plays an important role in the transfer rate. Flow-regime-based correlations are used for calculating the interfacial area in the traditional style of two-phase flow model. This approach is dependent upon the flow regime and is limited to the fully developed two-phase flow region. Its application to multi-dimensional two-phase flow has some limitations because it adopts the measured results of two-phase flow in a one-dimensional tube. The interfacial area concentration transport equation has been suggested in order to calculate the interfacial area without the interfacial area correlations. The source terms that close the interfacial area transport equation should be further developed for wider usage. In this study, the one-group interfacial area concentration transport equation has been implemented into the CUPID code. This interfacial area concentration transport equation can be used instead of the interfacial area concentration correlations for the bubbly flow region.

  4. Iterative List Decoding of Concatenated Source-Channel Codes

    Directory of Open Access Journals (Sweden)

    Hedayat Ahmadreza

    2005-01-01

    Full Text Available Whenever variable-length entropy codes are used in the presence of a noisy channel, any channel errors will propagate and cause significant harm. Despite the use of channel codes, some residual errors always remain, whose effect will be magnified by error propagation. Mitigating this undesirable effect is of great practical interest. One approach is to use the residual redundancy of variable-length codes for joint source-channel decoding. In this paper, we improve the performance of residual-redundancy source-channel decoding via an iterative list decoder made possible by a nonbinary outer CRC code. We show that the list decoding of VLCs is beneficial for entropy codes that contain redundancy. Such codes are used in state-of-the-art video coders, for example. The proposed list decoder improves the overall performance significantly in AWGN and fully interleaved Rayleigh fading channels.

  5. The Astrophysics Source Code Library by the numbers

    Science.gov (United States)

    Allen, Alice; Teuben, Peter; Berriman, G. Bruce; DuPrie, Kimberly; Mink, Jessica; Nemiroff, Robert; Ryan, PW; Schmidt, Judy; Shamir, Lior; Shortridge, Keith; Wallin, John; Warmels, Rein

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) was founded in 1999 by Robert Nemiroff and John Wallin. ASCL editors seek both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and add entries for the found codes to the library. Software authors can submit their codes to the ASCL as well. This ensures a comprehensive listing covering a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL is indexed by both NASA’s Astrophysics Data System (ADS) and Web of Science, making software used in research more discoverable. This presentation covers the growth in the ASCL’s number of entries, the number of citations to its entries, and in which journals those citations appear. It also discusses what changes have been made to the ASCL recently, and what its plans are for the future.

  6. Solving Semantic Searches for Source Code

    Science.gov (United States)

    2012-11-01

    ... but of input and expected output pairs. In this domain, those inputs take the form of strings and the outputs could be one of several datatypes. ... for some relaxation of CPi that yields C′Pi. Encoding weakening is performed by systematically making the constraints on a particular datatype ... the datatypes that can hold concrete or symbolic values: integers, characters, booleans, and strings. The Java implementation uses all the data types...

  7. Parallel and vector implementation of APROS simulator code

    International Nuclear Information System (INIS)

    Niemi, J.; Tommiska, J.

    1990-01-01

    In this paper the vector and parallel processing implementation of a general-purpose simulator code is discussed. In this code the utilization of vector processing is straightforward. In addition to loop-level parallel processing, functional decomposition and domain decomposition have been considered. Results presented for a PWR plant simulation illustrate the potential speed-up factors of the alternatives. It turns out that loop-level parallelism and domain decomposition are the most promising alternatives for employing parallel processing. (author)

  8. Application and Implementation of Network Coding for Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk

    2012-01-01

    ... the initial development of systems and protocols and show that the potential is there. However, I also find that network coding needs to be implemented with care and protocols have to be designed with consideration to make use of this novel technique. 2) The final aspect of this PhD investigates different ways that cooperative models may be implemented to cover a wide range of applications. This addresses the development of user-cooperative protocols and how, in Device To Device (D2D) communication, we may reward users that contribute more to the network than they gain. In this area I suggest the use...

  9. Improving radiopharmaceutical supply chain safety by implementing bar code technology.

    Science.gov (United States)

    Matanza, David; Hallouard, François; Rioufol, Catherine; Fessi, Hatem; Fraysse, Marc

    2014-11-01

    The aim of this study was to describe and evaluate an approach to improving radiopharmaceutical supply chain safety by implementing bar code technology. We first evaluated the current state of our radiopharmaceutical supply chain and, by means of the ALARM protocol, analysed two dispensing errors that occurred in our department. Thereafter, we implemented a bar code system to secure selected key stages of the radiopharmaceutical supply chain. Finally, we evaluated the cost of this implementation, from overtime and overheads to the additional radiation exposure of workers. An analysis of the events that occurred revealed a lack of identification of prepared or dispensed drugs. Moreover, the evaluation of the current radiopharmaceutical supply chain showed that the dispensing and injection steps needed to be further secured. The bar code system was used to reinforce product identification at three selected key stages: at usable stock entry; at preparation-dispensing; and during administration, making it possible to check conformity between the labelling of the delivered product (identity and activity) and the prescription. The extra time needed for all these steps had no impact on the number and successful conduct of examinations. The investment cost was low (2600 euros for new material and 30 euros a year for additional supplies) because of pre-existing computing equipment. With regard to the radiation exposure of workers, there was an insignificant overexposure of the hands with this new organization, because of the labelling and scanning processes for radiolabelled preparation vials. Implementation of bar code technology is now an essential part of a global approach towards securing optimum patient management.

  10. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  11. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
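
    For orientation, here is a sketch of the classical Blahut-Arimoto iteration for the rate-distortion function of a discrete source (the paper's algorithm extends this style of alternating optimization to actions and costs; that extension is not reproduced here):

        import numpy as np

        def blahut_arimoto(p_x, d, beta, iters=200):
            # p_x: source pmf; d[i, j]: distortion between source symbol i and
            # reproduction j; beta: slope parameter tracing out the R(D) curve.
            n, m = d.shape
            q = np.full(m, 1.0 / m)                      # reproduction marginal
            for _ in range(iters):
                w = q * np.exp(-beta * d)                # unnormalised p(x_hat | x)
                cond = w / w.sum(axis=1, keepdims=True)
                q = p_x @ cond                           # re-estimate the marginal
            rate = np.sum(p_x[:, None] * cond *
                          np.log2(np.maximum(cond, 1e-30) / q[None, :]))
            dist = np.sum(p_x[:, None] * cond * d)
            return rate, dist

        p_x = np.array([0.5, 0.5])
        d = 1.0 - np.eye(2)                              # Hamming distortion
        print(blahut_arimoto(p_x, d, beta=4.0))          # one point on binary R(D)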

  12. An efficient chaotic source coding scheme with variable-length blocks

    International Nuclear Information System (INIS)

    Lin Qiu-Zhen; Wong Kwok-Wo; Chen Jian-Yong

    2011-01-01

    An efficient chaotic source coding scheme operating on variable-length blocks is proposed. With the source message represented by a trajectory in the state space of a chaotic system, data compression is achieved when the dynamical system is adapted to the probability distribution of the source symbols. For infinite-precision computation, the theoretical compression performance of this chaotic coding approach attains that of optimal entropy coding. In finite-precision implementation, it can be realized by encoding variable-length blocks using a piecewise linear chaotic map within the precision of register length. In the decoding process, the bit shift in the register can track the synchronization of the initial value and the corresponding block. Therefore, all the variable-length blocks are decoded correctly. Simulation results show that the proposed scheme performs well with high efficiency and minor compression loss when compared with traditional entropy coding. (general)

  13. Distributed coding of multiview sparse sources with joint recovery

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Deligiannis, Nikos; Forchhammer, Søren

    2016-01-01

    In support of applications involving multiview sources in distributed object recognition using lightweight cameras, we propose a new method for the distributed coding of sparse sources as visual descriptor histograms extracted from multiview images. The problem is challenging due to the computati... Evaluation with scale-invariant feature transform (SIFT) descriptors extracted from multiview images shows that our method leads to bit-rate savings of up to 43% compared to the state-of-the-art distributed compressed sensing method with independent encoding of the sources.

  14. Development of in-vessel source term analysis code, tracer

    International Nuclear Information System (INIS)

    Miyagi, K.; Miyahara, S.

    1996-01-01

    Analyses of radionuclide transport in fuel failure accidents (generally referred to as source terms) are considered to be important, especially in severe accident evaluation. The TRACER code has been developed to realistically predict the time-dependent behavior of FPs and aerosols within the primary cooling system for a wide range of fuel failure events. This paper presents the model description, the results of a validation study, the recent model advancement status of the code, and the results of checkout calculations under reactor conditions. (author)

  15. Tokamak equilibrium reconstruction code LIUQE and its real time implementation

    International Nuclear Information System (INIS)

    Moret, J.-M.; Duval, B.P.; Le, H.B.; Coda, S.; Felici, F.; Reimerdes, H.

    2015-01-01

    Highlights: • Vertical stabilisation of the reconstruction algorithm using a linear parametrisation of the current density. • Experimentally derived model of the vacuum vessel to account for vessel currents. • Real-time contouring algorithm for flux-surface-averaged 1.5D transport equations. • Full real-time implementation coded in SIMULINK runs in less than 200 μs. • Applications: shape control, safety factor profile control, coupling with RAPTOR. - Abstract: Equilibrium reconstruction consists of identifying, from experimental measurements, a distribution of the plasma current density that satisfies the pressure balance constraint. The LIUQE code adopts a computationally efficient method to solve this problem, based on an iterative solution of the Poisson equation coupled with a linear parametrisation of the plasma current density. This algorithm is unstable against gross vertical motion of the plasma column for elongated shapes, and its application to highly shaped plasmas on TCV requires a particular treatment of this instability. TCV's continuous vacuum vessel has a low resistance designed to enhance passive stabilisation of the vertical position. The eddy currents in the vacuum vessel have a sizeable influence on the equilibrium reconstruction and must be taken into account. A real-time version of LIUQE has been implemented on TCV's distributed digital control system with a cycle time shorter than 200 μs for a full spatial grid of 28 by 65, using all 133 experimental measurements and including the flux surface averages of quantities necessary for the real-time solution of 1.5D transport equations. This performance was achieved through a thoughtful choice of numerical methods and code optimisation techniques at every step of the algorithm, and was coded in MATLAB and SIMULINK for the off-line and real-time versions respectively.

  16. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2004-01-01

    The objectives of the Code of Conduct are, through the development, harmonization and implementation of national policies, laws and regulations, and through the fostering of international co-operation, to: (i) achieve and maintain a high level of safety and security of radioactive sources; (ii) prevent unauthorized access or damage to, and loss, theft or unauthorized transfer of, radioactive sources, so as to reduce the likelihood of accidental harmful exposure to such sources or the malicious use of such sources to cause harm to individuals, society or the environment; and (iii) mitigate or minimize the radiological consequences of any accident or malicious act involving a radioactive source. These objectives should be achieved through the establishment of an adequate system of regulatory control of radioactive sources, applicable from the stage of initial production to their final disposal, and a system for the restoration of such control if it has been lost. This Code relies on existing international standards relating to nuclear, radiation, radioactive waste and transport safety and to the control of radioactive sources. It is intended to complement existing international standards in these areas. The Code of Conduct serves as guidance in general issues, legislation and regulations, regulatory bodies as well as import and export of radioactive sources. A list of the radioactive sources covered by the Code is provided, including activities corresponding to the thresholds of the categories.

  18. Implementation of burnup in FERM nodal computer code

    International Nuclear Information System (INIS)

    Yoriyaz, H.; Nakata, H.

    1986-01-01

    In this work a spatial burnup scheme and feedback effects have been implemented into the FERM [1] ('Finite Element Response Matrix') program. The spatially dependent neutronic parameters have been considered at three levels: zonewise calculation, assemblywise calculation and pointwise calculation. The results have been compared with those obtained by the CITATION [2] program, showing that the processing time in the FERM code is hundreds of times shorter, with no significant difference observed in the assembly-average power distribution. (Author) [pt]

  19. Open-Source Development of the Petascale Reactive Flow and Transport Code PFLOTRAN

    Science.gov (United States)

    Hammond, G. E.; Andre, B.; Bisht, G.; Johnson, T.; Karra, S.; Lichtner, P. C.; Mills, R. T.

    2013-12-01

    Open-source software development has become increasingly popular in recent years. Open-source encourages collaborative and transparent software development and promotes unlimited free redistribution of source code to the public. Open-source development is good for science as it reveals implementation details that are critical to scientific reproducibility, but generally excluded from journal publications. In addition, research funds that would have been spent on licensing fees can be redirected to code development that benefits more scientists. In 2006, the developers of PFLOTRAN open-sourced their code under the U.S. Department of Energy SciDAC-II program. Since that time, the code has gained popularity among code developers and users from around the world seeking to employ PFLOTRAN to simulate thermal, hydraulic, mechanical and biogeochemical processes in the Earth's surface/subsurface environment. PFLOTRAN is a massively-parallel subsurface reactive multiphase flow and transport simulator designed from the ground up to run efficiently on computing platforms ranging from the laptop to leadership-class supercomputers, all from a single code base. The code employs domain decomposition for parallelism and is founded upon the well-established and open-source parallel PETSc and HDF5 frameworks. PFLOTRAN leverages modern Fortran (i.e. Fortran 2003-2008) in its extensible object-oriented design. The use of this progressive, yet domain-friendly programming language has greatly facilitated collaboration in the code's software development. Over the past year, PFLOTRAN's top-level data structures were refactored as Fortran classes (i.e. extendible derived types) to improve the flexibility of the code, ease the addition of new process models, and enable coupling to external simulators. For instance, PFLOTRAN has been coupled to the parallel electrical resistivity tomography code E4D to enable hydrogeophysical inversion while the same code base can be used as a third

  20. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  1. Implementation of three-dimension AFEN module to STAR code

    International Nuclear Information System (INIS)

    Kim, Young Il; Jeong, Hyung Kuk; Noh, Jae Man; Kim, Taek Kyum; Ju, Hyung Kuk; Kim, Young Jin.

    1997-05-01

    Recently, the AFEN method has been developed to overcome limitations caused by the transverse integration. The method solves the multi-dimensional diffusion equation directly by expanding its solution into non-separable analytic basis functions. The non-separable analytic function expansion, satisfying the multi-dimensional diffusion equation at any point in a node, makes it possible to accurately model the strong flux gradient near the interface of two fuel assemblies with quite different neutronic properties. In this study, we developed the three-dimensional AFEN formulations and implemented them into the Static/Transient Core Analysis computer code STAR. The accuracy of the implemented AFEN scheme was tested against two benchmark problems: the IAEA benchmark problem and a small core problem composed of MOX and UO2 fuel assemblies. In these tests the superiority of the AFEN method in predicting the neutron flux distribution and the effective core multiplication factor was verified. (author). 4 figs., 5 refs

  2. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    Modern multimedia systems are more and more shifting toward distributed and networked structures. This includes audio systems, where networks of wireless distributed microphones are replacing the traditional microphone arrays. This allows for flexibility of placement and high spatial diversity. However, it comes with the price of several challenges, including the limited power and bandwidth resources for wireless transmission of audio recordings. In such a setup, we study the problem of source coding for the compression of the audio recordings before the transmission in order to reduce the power consumption and/or transmission bandwidth by reduction in the transmission rates. Source coding for wireless microphones in reverberant environments has several special characteristics which make it more challenging in comparison with regular audio coding. The signals which are acquired by the microphones......

  3. Plagiarism Detection Algorithm for Source Code in Computer Science Education

    Science.gov (United States)

    Liu, Xin; Xu, Chan; Ouyang, Boyu

    2015-01-01

    Nowadays, computer programming is increasingly necessary in program design courses in college education. However, the trick of plagiarizing plus a little modification exists in some students' homework. It is not easy for teachers to judge whether source code has been plagiarized or not. Traditional detection algorithms cannot fit this…

  4. Automating RPM Creation from a Source Code Repository

    Science.gov (United States)

    2012-02-01

    [Fragmentary RPM spec-file excerpt; only the recoverable section markers and build/install commands survive extraction:]
        .../apps/usr --with-libpq=/apps/postgres ; make
        rm -rf $RPM_BUILD_ROOT
        umask 0077
        mkdir -p $RPM_BUILD_ROOT/usr/local/bin
        mkdir -p $RPM_BUILD_ROOT...
        %pre %prep %setup %build
        ./autogen.sh ; ./configure --with-db=/apps/db --with-libpq=/apps/postgres ; make
    ...from a source code repository.

  5. Source Coding in Networks with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2016-01-01

    ......results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused to maximize the output SNR while enforcing no linear distortion. We show that one can design......

  6. IBFAN Africa training initiatives: code implementation and lactation management.

    Science.gov (United States)

    Mbuli, A

    1994-01-01

    As part of an ongoing effort to halt the decline of breast feeding rates in Africa, 35 representatives of 12 different African countries met in Mangochi, Malawi, in February 1994. The Code of Marketing of Breastmilk Substitutes was scrutinized. National codes were drafted based on the "Model Law" of the IBFAN Code Documentation Centre (ICDC), Penang. Mechanisms of implementation, specific to each country, were developed. Strategies for the promotion, protection, and support of breast feeding, which is very important to child survival in Africa, were discussed. The training course was organized by ICDC, in conjunction with IBFAN Africa, and with the support of the United Nations Children's Fund (UNICEF) and the World Health Organization (WHO). Countries in eastern, central, and southern Africa were invited to send participants, who included professors, pediatricians, nutritionists, MCH personnel, nurses, and lawyers. IBFAN Africa has also been conducting lactation management workshops for a number of years in African countries. 26 health personnel (pediatricians, nutritionists, senior nursing personnel, and MCH workers), representing 7 countries in the southern African region, attended a training of trainers lactation management workshop in Swaziland in August, 1993 with the support of their UNICEF country offices. The workshop included lectures, working sessions, discussions, and slide and video presentations. Topics covered included national nutrition statuses, the importance of breast feeding, the anatomy and physiology of breast feeding, breast feeding problems, the International Code of Marketing, counseling skills, and training methods. The field trip to a training course covering primary health care that was run by the Traditional Healers Organization (THO) in Swaziland was of particular interest because of the strong traditional medicine sector in many African countries. IBFAN Africa encourages use of community workers (traditional healers, Rural Health

  7. Coded aperture imaging of alpha source spatial distribution

    International Nuclear Information System (INIS)

    Talebitaher, Alireza; Shutler, Paul M.E.; Springham, Stuart V.; Rawat, Rajdeep S.; Lee, Paul

    2012-01-01

    The Coded Aperture Imaging (CAI) technique has been applied with CR-39 nuclear track detectors to image alpha particle source spatial distributions. The experimental setup comprised: a 226Ra source of alpha particles, a laser-machined CAI mask, and CR-39 detectors, arranged inside a vacuum enclosure. Three different alpha particle source shapes were synthesized by using a linear translator to move the 226Ra source within the vacuum enclosure. The coded mask pattern used is based on a Singer Cyclic Difference Set, with 400 pixels and 57 open square holes (representing ρ = 1/7 = 14.3% open fraction). After etching of the CR-39 detectors, the area, circularity, mean optical density and positions of all candidate tracks were measured by an automated scanning system. Appropriate criteria were used to select alpha particle tracks, and a decoding algorithm applied to the (x, y) data produced the de-coded image of the source. Signal to Noise Ratio (SNR) values obtained for alpha particle CAI images were found to be substantially better than those for corresponding pinhole images, although the CAI-SNR values were below the predictions of theoretical formulae. Monte Carlo simulations of CAI and pinhole imaging were performed in order to validate the theoretical SNR formulae and also our CAI decoding algorithm. There was found to be good agreement between the theoretical formulae and SNR values obtained from simulations. Possible reasons for the lower SNR obtained for the experimental CAI study are discussed.
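
    The decoding step described above, correlating the recorded pattern with an array derived from the mask, can be sketched briefly. The example is hypothetical: a random mask with a similar open fraction stands in for the Singer cyclic difference set, the imaging model is noiseless cyclic convolution, and the two point sources are invented.

      # Toy coded-aperture encode/decode by cyclic (periodic) correlation.
      import numpy as np

      rng = np.random.default_rng(1)
      n = 20
      mask = (rng.random((n, n)) < 1 / 7).astype(float)  # ~14.3% open fraction
      obj = np.zeros((n, n))
      obj[8, 5] = 1.0                 # two made-up point sources
      obj[12, 14] = 0.5

      # Detector record: cyclic convolution of object with mask (no noise).
      rec = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(mask)))

      # Balanced decoding array: open holes -> +1, closed -> -rho/(1-rho).
      rho = mask.mean()
      dec = np.where(mask > 0, 1.0, -rho / (1 - rho))

      # Decoded image: cyclic correlation of the record with the decoder;
      # the brightest pixel should land at the stronger source, near (8, 5).
      img = np.real(np.fft.ifft2(np.fft.fft2(rec) * np.conj(np.fft.fft2(dec))))
      print(np.unravel_index(img.argmax(), img.shape))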

  8. Distributed Source Coding Techniques for Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Barni Mauro

    2007-01-01

    Full Text Available This paper deals with the application of distributed source coding (DSC) theory to remote sensing image compression. Although DSC exhibits a significant potential in many application fields, up till now the results obtained on real signals fall short of the theoretical bounds, and often impose additional system-level constraints. The objective of this paper is to assess the potential of DSC for lossless image compression carried out onboard a remote platform. We first provide a brief overview of DSC of correlated information sources. We then focus on onboard lossless image compression, and apply DSC techniques in order to reduce the complexity of the onboard encoder, at the expense of the decoder's, by exploiting the correlation of different bands of a hyperspectral dataset. Specifically, we propose two different compression schemes, one based on powerful binary error-correcting codes employed as source codes, and one based on simpler multilevel coset codes. The performance of both schemes is evaluated on a few AVIRIS scenes, and is compared with other state-of-the-art 2D and 3D coders. Both schemes turn out to achieve competitive compression performance, and one of them also has reduced complexity. Based on these results, we highlight the main issues that are still to be solved to further improve the performance of DSC-based remote sensing systems.
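
    The first scheme's use of an error-correcting code as a source code rests on syndrome (coset) encoding. The toy sketch below, with a (7,4) Hamming code and an assumed correlation of at most one differing bit between a source word X and the decoder's side information Y, shows the principle on a far smaller scale than the codes used in the paper.

      # Syndrome-based DSC toy: transmit 3 syndrome bits instead of 7.
      import numpy as np

      H = np.array([[1, 0, 1, 0, 1, 0, 1],
                    [0, 1, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])   # (7,4) Hamming parity checks

      x = np.array([1, 0, 1, 1, 0, 0, 1])     # source word
      y = x.copy(); y[4] ^= 1                 # side info, one bit differs

      s = H @ x % 2                           # encoder output: 3 bits only

      # Decoder: the syndrome of the difference pattern locates the
      # flipped bit (columns of H encode positions 1..7 in binary).
      diff = (H @ y + s) % 2
      pos = int("".join(map(str, diff[::-1])), 2) - 1
      x_hat = y.copy()
      if pos >= 0:
          x_hat[pos] ^= 1
      assert np.array_equal(x_hat, x)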

  9. DIANA Code: Design and implementation of an analytic core calculus code by two group, two zone diffusion

    International Nuclear Information System (INIS)

    Mochi, Ignacio

    2005-01-01

    The principal parameters of nuclear reactors are determined in the conceptual design stage. For that purpose, it is necessary to have flexible calculation tools that represent the principal dependencies of such parameters. This capability is of critical importance in the design of innovative nuclear reactors. In order to have a proper tool to assist the conceptual design of innovative nuclear reactors, we developed and implemented a neutronic core calculation code: DIANA (Diffusion Integral Analytic Neutron Analysis). To calculate the required parameters, this code generates its own cross sections using an analytic two-group, two-zone diffusion scheme based only on a minimal set of data (i.e. 2200 m/s and fission-averaged microscopic cross sections, Westcott factors and effective resonance integrals). Both to calculate cross sections and core parameters, DIANA takes into account heterogeneity effects that are included when it evaluates each zone. Among them lies the disadvantage factor of each energy group. DIANA was implemented entirely through object-oriented programming in the C++ language. This eases source code understanding and would allow a quick expansion of its capabilities if needed. The final product is a versatile and easy-to-use code that allows core calculations with a minimal amount of data. It also contains the tools needed to perform many variational calculations, such as the parameterisation of effective multiplication factors for different core radii. The diffusion scheme's simplicity allows an easy following of the involved phenomena, making DIANA a most suitable tool to design reactors whose physics lies beyond the parameters of present reactors. All these reasons make DIANA a good candidate for future innovative reactor analysis

  10. Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code

    Directory of Open Access Journals (Sweden)

    Marinkovic Slavica

    2006-01-01

    Full Text Available Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.

  11. Implementation of collisions on GPU architecture in the Vorpal code

    Science.gov (United States)

    Leddy, Jarrod; Averkin, Sergey; Cowan, Ben; Sides, Scott; Werner, Greg; Cary, John

    2017-10-01

    The Vorpal code contains a variety of collision operators allowing for the simulation of plasmas containing multiple charge species interacting with neutrals, background gas, and EM fields. These existing algorithms have been improved and reimplemented to take advantage of the massive parallelization allowed by GPU architecture. The use of GPUs is most effective when algorithms are single-instruction multiple-data, so particle collisions are an ideal candidate for this parallelization technique due to their nature as a series of independent processes with the same underlying operation. This refactoring required data memory reorganization and careful consideration of device/host data allocation to minimize memory access and data communication per operation. Successful implementation has resulted in an order of magnitude increase in simulation speed for a test-case involving multiple binary collisions using the null collision method. Work supported by DARPA under contract W31P4Q-16-C-0009.
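
    Because every tentative collision is an independent, identical operation per particle, the null-collision method is naturally single-instruction multiple-data. The vectorized sketch below mimics that pattern on the CPU with invented rates and a crude rescatter model; it is a schematic stand-in, not Vorpal code.

      # Null-collision sampling, vectorized the way a GPU kernel would
      # treat one particle per thread; all physics numbers are made up.
      import numpy as np

      rng = np.random.default_rng(2)
      n = 100_000
      v = rng.normal(0.0, 1.0, n)                   # particle speeds

      nu_max = 5.0                                  # majorant collision rate
      nu_real = 3.0 * np.abs(v) / (1 + np.abs(v))   # actual, speed-dependent rate

      # Each particle draws a tentative collision; a fraction
      # nu_real/nu_max is accepted, the rest are "null" collisions.
      collide = rng.random(n) < nu_real / nu_max
      v[collide] = rng.normal(0.0, 1.0, collide.sum())  # rescatter

      print(f"{collide.mean():.3f} of tentative collisions were real")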

  12. The Astrophysics Source Code Library: Supporting software publication and citation

    Science.gov (United States)

    Allen, Alice; Teuben, Peter

    2018-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net), established in 1999, is a free online registry for source codes used in research that has appeared in, or been submitted to, peer-reviewed publications. The ASCL is indexed by the SAO/NASA Astrophysics Data System (ADS) and Web of Science and is citable by using the unique ascl ID assigned to each code. In addition to registering codes, the ASCL can house archive files for download and assign them DOIs. The ASCL advocates for software citation on par with article citation, participates in multidisciplinary events such as Force11, OpenCon, and the annual Workshop on Sustainable Software for Science, works with journal publishers, and organizes Special Sessions and Birds of a Feather meetings at national and international conferences such as Astronomical Data Analysis Software and Systems (ADASS), European Week of Astronomy and Space Science, and AAS meetings. In this presentation, I will discuss some of the challenges of gathering credit for publishing software and ideas and efforts from other disciplines that may be useful to astronomy.

  13. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time, as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
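
    A crude version of such a scan fits in a few lines. The sketch below is not the study's tooling: it is a regex-based count over a hypothetical source tree (my_iot_project), and unlike a real static analyzer it cannot tell calls apart from comments, strings, or local redefinitions.

      # Count known-unsafe C/C++ calls in a source tree (illustrative only).
      import re, pathlib, collections

      UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "memcpy", "strlen", "strcmp")
      pattern = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

      counts = collections.Counter()
      for path in pathlib.Path("my_iot_project").rglob("*.[ch]*"):
          try:
              text = path.read_text(errors="ignore")
          except OSError:
              continue
          counts.update(m.group(1) for m in pattern.finditer(text))

      for call, hits in counts.most_common():
          print(f"{call}: {hits}")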

  14. Verification test calculations for the Source Term Code Package

    International Nuclear Information System (INIS)

    Denning, R.S.; Wooton, R.O.; Alexander, C.A.; Curtis, L.A.; Cybulskis, P.; Gieseke, J.A.; Jordan, H.; Lee, K.W.; Nicolosi, S.L.

    1986-07-01

    The purpose of this report is to demonstrate the reasonableness of the Source Term Code Package (STCP) results. Hand calculations have been performed spanning a wide variety of phenomena within the context of a single accident sequence, a loss of all ac power with late containment failure, in the Peach Bottom (BWR) plant, and compared with STCP results. The report identifies some of the limitations of the hand calculation effort. The processes involved in a core meltdown accident are complex and coupled. Hand calculations by their nature must deal with gross simplifications of these processes. Their greatest strength is as an indicator that a computer code contains an error, for example that it doesn't satisfy basic conservation laws, rather than in showing the analysis accurately represents reality. Hand calculations are an important element of verification but they do not satisfy the need for code validation. The code validation program for the STCP is a separate effort. In general the hand calculation results show that models used in the STCP codes (e.g., MARCH, TRAP-MELT, VANESA) obey basic conservation laws and produce reasonable results. The degree of agreement and significance of the comparisons differ among the models evaluated. 20 figs., 26 tabs

  15. Tangent: Automatic Differentiation Using Source Code Transformation in Python

    OpenAIRE

    van Merriënboer, Bart; Wiltschko, Alexander B.; Moldovan, Dan

    2017-01-01

    Automatic differentiation (AD) is an essential primitive for machine learning programming systems. Tangent is a new library that performs AD using source code transformation (SCT) in Python. It takes numeric functions written in a syntactic subset of Python and NumPy as input, and generates new Python functions which calculate a derivative. This approach to automatic differentiation is different from existing packages popular in machine learning, such as TensorFlow and Autograd. Advantages ar...
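
    To make the source-code-transformation idea concrete, here is a deliberately tiny illustration, not Tangent itself: it reads a Python function's AST, applies the sum and product rules to the returned expression (supporting only + and * of x and constants), and compiles a new derivative function.

      # Minimal SCT-style differentiation over the Python AST (toy only).
      import ast, inspect

      def d(node):
          if isinstance(node, ast.Name):                  # d/dx x = 1
              return ast.Constant(1.0)
          if isinstance(node, ast.Constant):              # d/dx c = 0
              return ast.Constant(0.0)
          if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Add):
              return ast.BinOp(d(node.left), ast.Add(), d(node.right))
          if isinstance(node, ast.BinOp) and isinstance(node.op, ast.Mult):
              return ast.BinOp(                           # product rule
                  ast.BinOp(d(node.left), ast.Mult(), node.right), ast.Add(),
                  ast.BinOp(node.left, ast.Mult(), d(node.right)))
          raise NotImplementedError(ast.dump(node))

      def grad(f):
          tree = ast.parse(inspect.getsource(f))
          ret = tree.body[0].body[-1].value               # return expression
          body = ast.fix_missing_locations(d(ret))
          code = compile(ast.Expression(body), "<derivative>", "eval")
          return lambda x: eval(code, {"x": x})

      def f(x):
          return 3.0 * x * x + 2.0 * x + 1.0

      print(grad(f)(2.0))   # 6x + 2 at x=2 -> 14.0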

  16. Revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources

    International Nuclear Information System (INIS)

    Wheatley, J. S.

    2004-01-01

    The revised Code of Conduct on the Safety and Security of Radioactive Sources is aimed primarily at Governments, with the objective of achieving and maintaining a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations; and through the fostering of international co-operation. It focuses on sealed radioactive sources and provides guidance on legislation, regulations and the regulatory body, and import/export controls. Nuclear materials (except for sources containing 239Pu), as defined in the Convention on the Physical Protection of Nuclear Materials, are not covered by the revised Code, nor are radioactive sources within military or defence programmes. An earlier version of the Code was published by IAEA in 2001. At that time, agreement was not reached on a number of issues, notably those relating to the creation of comprehensive national registries for radioactive sources, obligations of States exporting radioactive sources, and the possibility of unilateral declarations of support. The need to further consider these and other issues was highlighted by the events of 11th September 2001. Since then, the IAEA's Secretariat has been working closely with Member States and relevant International Organizations to achieve consensus. The text of the revised Code was finalized at a meeting of technical and legal experts in August 2003, and it was submitted to IAEA's Board of Governors for approval in September 2003, with a recommendation that the IAEA General Conference adopt it and encourage its wide implementation. The IAEA General Conference, in September 2003, endorsed the revised Code and urged States to work towards following the guidance contained within it. This paper summarizes the history behind the revised Code, its content and the outcome of the discussions within the IAEA Board of Governors and General Conference. (Author) 8 refs

  17. The cost of implementing inpatient bar code medication administration.

    Science.gov (United States)

    Sakowski, Julie Ann; Ketchel, Alan

    2013-02-01

    To calculate the costs associated with implementing and operating an inpatient bar-code medication administration (BCMA) system in the community hospital setting and to estimate the cost per harmful error prevented. This is a retrospective, observational study. Costs were calculated from the hospital perspective and a cost-consequence analysis was performed to estimate the cost per preventable adverse drug event averted. Costs were collected from financial records and key informant interviews at 4 not-for profit community hospitals. Costs included direct expenditures on capital, infrastructure, additional personnel, and the opportunity costs of time for existing personnel working on the project. The number of adverse drug events prevented using BCMA was estimated by multiplying the number of doses administered using BCMA by the rate of harmful errors prevented by interventions in response to system warnings. Our previous work found that BCMA identified and intercepted medication errors in 1.1% of doses administered, 9% of which potentially could have resulted in lasting harm. The cost of implementing and operating BCMA including electronic pharmacy management and drug repackaging over 5 years is $40,000 (range: $35,600 to $54,600) per BCMA-enabled bed and $2000 (range: $1800 to $2600) per harmful error prevented. BCMA can be an effective and potentially cost-saving tool for preventing the harm and costs associated with medication errors.
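
    The cost-consequence arithmetic can be checked with rough numbers. In the sketch below the annual dose volume per bed is an invented assumption (the study reports rates and final costs, not volumes); with it, the result lands in the same range as the reported $2000 per harmful error prevented.

      # Back-of-the-envelope replay of the reported arithmetic.
      doses_per_bed_per_year = 5_000    # assumed, not from the study
      years = 5
      cost_per_bed = 40_000             # reported 5-year cost per BCMA bed

      doses = doses_per_bed_per_year * years
      errors_intercepted = doses * 0.011        # errors caught in 1.1% of doses
      harmful_prevented = errors_intercepted * 0.09  # 9% potentially harmful

      # ~ $1,600 per harmful error prevented with these assumed volumes,
      # the same order as the study's $2,000 figure.
      print(cost_per_bed / harmful_prevented)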

  18. Implementation of IAEA Code of Conduct and Guidance – Exporting State Perspective

    International Nuclear Information System (INIS)

    Hayes, T.

    2010-01-01

    The Canadian Nuclear Safety Commission (CNSC) is a federal agency reporting to Parliament through the Minister of Natural Resources. It regulates all nuclear facilities and activities to protect the health, safety and security of persons and the environment, and to assure that Canada meets its international commitments and obligations on the peaceful use of nuclear energy, including through implementation of the IAEA Code and Guidance. As of July 2010, 99 States had committed to the IAEA Code of Conduct and 59 States had committed to the IAEA Guidance on Import and Export. In its control of radioactive sources, the CNSC uses risk-informed regulatory processes to optimize resource allocation and decision-making. As such, the Canadian Government is a strong proponent of the establishment and maintenance of an effective, efficient and harmonized international regime for ensuring the safety and security of such sources

  19. Asymmetric Joint Source-Channel Coding for Correlated Sources with Blind HMM Estimation at the Receiver

    Directory of Open Access Journals (Sweden)

    Ser Javier Del

    2005-01-01

    Full Text Available We consider the case of two correlated sources, X and Y. The correlation between them has memory, and it is modelled by a hidden Markov chain. The paper studies the problem of reliable communication of the information sent by the source X over an additive white Gaussian noise (AWGN) channel when the output of the other source, Y, is available as side information at the receiver. We assume that the receiver has no a priori knowledge of the correlation statistics between the sources. In particular, we propose the use of a turbo code for joint source-channel coding of the source X. The joint decoder uses an iterative scheme where the unknown parameters of the correlation model are estimated jointly within the decoding process. It is shown that reliable communication is possible at signal-to-noise ratios close to the theoretical limits set by the combination of the Shannon and Slepian-Wolf theorems.

  20. Towards Holography via Quantum Source-Channel Codes

    Science.gov (United States)

    Pastawski, Fernando; Eisert, Jens; Wilming, Henrik

    2017-07-01

    While originally motivated by quantum computation, quantum error correction (QEC) is currently providing valuable insights into many-body quantum physics, such as topological phases of matter. Furthermore, mounting evidence originating from holography research (AdS/CFT) indicates that QEC should also be pertinent for conformal field theories. With this motivation in mind, we introduce quantum source-channel codes, which combine features of lossy compression and approximate quantum error correction, both of which are predicted in holography. Through a recent construction for approximate recovery maps, we derive guarantees on its erasure decoding performance from calculations of an entropic quantity called conditional mutual information. As an example, we consider Gibbs states of the transverse field Ising model at criticality and provide evidence that they exhibit nontrivial protection from local erasure. This gives rise to the first concrete interpretation of a bona fide conformal field theory as a quantum error correcting code. We argue that quantum source-channel codes are of independent interest beyond holography.

  1. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
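
    The core trick, lossily coding the operator itself so that a dense matrix-vector product becomes a chain of cheap transforms and one sparse product, can be caricatured as follows. Everything here is invented for illustration: a smoothly varying Gaussian blur stands in for a stray-light kernel, a plain DCT for the paper's optimized transforms, and the transform sandwich is computed densely rather than via fast algorithms.

      # Transform the dense operator, threshold small coefficients, and
      # apply the sparse result: y ~ W^T (T (W x)).
      import numpy as np

      n = 256
      i = np.arange(n)
      width = 2.0 + 6.0 * i / n        # blur width varies slowly with row
      A = np.exp(-((i[None, :] - i[:, None]) ** 2) / (2 * width[:, None] ** 2))
      A /= A.sum(axis=1, keepdims=True)

      # Orthonormal DCT-II matrix as the sparsifying transform.
      W = np.cos(np.pi * (i + 0.5) * i[:, None] / n) * np.sqrt(2.0 / n)
      W[0] /= np.sqrt(2.0)

      T = W @ A @ W.T                  # transform-domain representation
      T[np.abs(T) < 1e-3] = 0.0        # lossy "coding" of the operator
      print(f"kept {np.count_nonzero(T) / n**2:.1%} of entries")

      x = np.random.default_rng(3).random(n)
      y = W.T @ (T @ (W @ x))
      print(np.linalg.norm(A @ x - y) / np.linalg.norm(A @ x))  # small error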

  2. Time-dependent anisotropic external sources in transient 3-D transport code TORT-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    This paper describes the implementation of a time-dependent distributed external source in TORT-TD by explicitly considering the external source in the "fixed-source" term of the implicitly time-discretised 3-D discrete ordinates transport equation. Anisotropy of the external source is represented by a spherical harmonics series expansion similar to the angular fluxes. The YALINA-Thermal subcritical assembly serves as a test case. The configuration with 280 fuel rods has been analysed with TORT-TD using cross sections in 18 energy groups and P1 scattering order generated by the KAPROS code system. Good agreement is achieved concerning the multiplication factor. The response of the system to an artificial time-dependent source consisting of two square-wave pulses demonstrates the time-dependent external source capability of TORT-TD. The result is physically plausible as judged from validation calculations. (orig.)

  3. Implementation of international code of marketing breast-milk substitutes in China.

    Science.gov (United States)

    Liu, Aihua; Dai, Yaohua; Xie, Xiaohua; Chen, Li

    2014-11-01

    Breastmilk is the best source of nourishment for infants and young children, and breastfeeding is one of the most effective ways to ensure child health and survival. In May 1981, the World Health Assembly adopted the International Code of Marketing Breast-Milk Substitutes. Since then several subsequent resolutions have been adopted by the World Health Assembly, which both update and clarify the articles within the International Code (hereinafter the term "Code" refers to both the International Code and all subsequent resolutions). The Code is designed to regulate "inappropriate sales promotion" of breastmilk substitutes and instructs signatory governments to ensure the implementation of its aims through legislation. The Chinese Regulations of the Code were adopted by six government sectors in 1995. However, challenges in promotion, protection, and support of breastfeeding remain. This study aimed to monitor the implementation of the Code in China. Six cities were selected with considerable geographic coverage. In each city three hospitals and six stores were surveyed. The International Baby Food Action Network Interview Form was adapted, and direct observations were made. Research assistants administered the questionnaires to a random sample of mothers of infants under 6 months old who were in the outpatient department of the hospitals. In total, 291 mothers of infants, 35 stores, 17 hospitals, and 26 companies were surveyed. From the whole sample of 291 mothers, the proportion who reported exclusively breastfeeding their infant was 30.9%; 69.1% of mothers reported feeding their infant with commercially available formula. Regarding violations of the Code, 40.2% of the mothers reported receiving free formula samples. Of these, 76.1% received the free samples in or near hospitals. Among the stores surveyed, 45.7% were found promoting products in a way that violates the Code. Also, 69.0% of the labeling on the formula products did not comply with the regulations set

  4. Health physics source document for codes of practice

    International Nuclear Information System (INIS)

    Pearson, G.W.; Meggitt, G.C.

    1989-05-01

    Personnel preparing codes of practice often require basic health physics information or advice relating to radiological protection problems, and this document is written primarily to supply such information. Certain technical terms used in the text are explained in the extensive glossary. Due to the pace of change in the field of radiological protection it is difficult to produce an up-to-date document. This document was compiled during 1988, however, and therefore contains the principal changes brought about by the introduction of the Ionising Radiations Regulations (1985). The paper covers the nature of ionising radiation, its biological effects and the principles of control. It is hoped that the document will provide a useful source of information for both codes of practice and wider areas, and stimulate readers to study radiological protection issues in greater depth. (author)

  5. Running the source term code package in Elebra MX-850

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-01-01

    The Source Term Code Package (STCP) is one of the main tools applied in calculations of the behavior of fission products from nuclear power plants. It is a set of computer codes to assist the calculation of the radioactive materials escaping from the metallic containment of power reactors to the environment during a severe reactor accident. The original version of STCP runs on SDC computer systems but, as it is written in FORTRAN 77, it is possible to run it on other systems such as IBM, Burroughs, Elebra, etc. The Elebra MX-8500 version of STCP contains five codes: MARCH3, TRAPMELT, TCCA, VANESSA and NAVA. The example presented in this report considers a small LOCA accident in a PWR-type reactor. (M.I.)

  6. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

    This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in alveolar lung tissue and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs.
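
    Methods (3) and (4) amount to the standard compound-Poisson construction: the multi-event distribution f(z) follows from the single-event distribution f1(z) as exp(lambda*(F1 - 1)) in the Fourier domain. The sketch below is generic, with a made-up Gaussian single-event spectrum in place of the code's alpha-particle data.

      # Compound Poisson via FFT: f = IFFT(exp(lam * (FFT(f1) - 1))).
      import numpy as np

      nbins, dz = 8192, 0.01
      z = np.arange(nbins) * dz

      f1 = np.exp(-(z - 5.0) ** 2 / 2.0)   # toy single-event distribution
      f1 /= f1.sum()

      lam = 2.5                            # mean number of events
      F1 = np.fft.rfft(f1)
      f = np.fft.irfft(np.exp(lam * (F1 - 1.0)), n=nbins)

      print(f.sum())                       # ~1.0 (includes the n=0 spike at z=0)
      print((f * z).sum(), lam * (f1 * z).sum())  # means agree: lam * mean(f1)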

  7. An Implementation of Error Minimization Data Transmission in OFDM using Modified Convolutional Code

    Directory of Open Access Journals (Sweden)

    Hendy Briantoro

    2016-04-01

    Full Text Available This paper presents error minimization for data transmission in an OFDM system. Conventional systems usually use channel coding such as a BCH code or a convolutional code, but the performance of these codes in an OFDM implementation is poor. The error bit rate of the OFDM system without channel coding is 5.77%; a convolutional code with code rate 1/2 reduces the error bit rate only to 3.85%. We therefore propose an OFDM system with a Modified Convolutional Code. In this implementation, we used Software Defined Radio (SDR), namely the Universal Software Radio Peripheral (USRP) NI 2920, as the transmitter and receiver. The OFDM system using the Modified Convolutional Code is able to recover all received characters, decreasing the error bit rate to 0%. The performance gain of the Modified Convolutional Code is about 1 dB at a BER of 10-4 over the BCH code and the convolutional code, so the Modified Convolutional Code performs better than either. Keywords: OFDM, BCH Code, Convolutional Code, Modified Convolutional Code, SDR, USRP

  8. Nifty Native Implemented Functions: low-level meets high-level code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Erlang Native Implemented Functions (NIFs) allow developers to implement functions in C (or C++) rather than Erlang. NIFs are useful for integrating high performance or legacy code in Erlang applications. The talk will cover how to implement NIFs, use cases, and common pitfalls when employing them. Further, we will discuss how and why Erlang applications, such as Riak, use NIFs. About the speaker Ian Plosker is the Technical Lead, International Operations at Basho Technologies, the makers of the open source database Riak. He has been developing software professionally for 10 years and programming since childhood. Prior to working at Basho, he developed everything from CMS to bioinformatics platforms to corporate competitive intelligence management systems. At Basho, he's been helping customers be incredibly successful using Riak.

  9. NRC needs and their implementation - ASME Section XI code

    International Nuclear Information System (INIS)

    Liaw, B.D.

    1985-01-01

    The guiding principle from the onset of government regulation for the peaceful use of nuclear energy has been to prescribe only the minimum requirements that are needed for safety. In the pioneer regulators' collective mind, the technical details could be left to the regulated industry through its agents, like NSSS vendors and A/Es, and their surrogate organizations, like ASME, ANS, AIF, etc. However, it has evolved through the years, due either to bureaucratic momentum or to a vacuum in industry leadership, into a situation where one sees an ever-increasing number of detailed ''requirements'' prescribed by the regulators. Within the scope of activities covered by Section XI, there is no exception: e.g., NUREG-067, -0531, -1061; NUREG-0313 Rev. 0, Rev. 1, and now Rev. 2; IE Bulletins 82-03, 83-02; and Generic Letters 84-11 and 84-07, etc., for the single issue of pipe cracking alone; and there are more to come. There appears to be a consensus among all concerned parties, including regulators, that this is not a desirable situation and that something must be done to reverse this trend. The purpose of this discussion is, therefore, to explore the areas where the Section XI Code can be restructured to meet this need, and to seek ideas from the representatives of the regulated industry on methods of implementation that are effective, efficient, and acceptable to all concerned parties

  10. Implementation of probabilistic safety concepts in international codes

    International Nuclear Information System (INIS)

    Borges, J.F.

    1977-01-01

    Recent progress in the implementation of safety concepts in international structural codes is briefly presented. Special attention is paid to the work of the Joint Committee on Structural Safety. The discussion is centered on some problems such as: safety differentiation, definition and combination of actions, spaces for checking safety and non-linear structural behaviour. When discussing safety differentiation it should be considered that the total probability of failure derives from a theoretical probability of failure and a probability of failure due to error and gross negligence. Optimization of design criteria should take into account both causes of failure. The quantification of reliability implies a probabilistic idealization of all basic variables. Steps taken to obtain an improved definition of different types of actions and rules for their combination are described. Safety checking can be carried out in terms of basic variables, action-effects, or any other suitable variable. However, the advantages and disadvantages of the different types of formulation should be discussed, particularly in the case of non-linear structural behaviour. (orig.) [de]

  11. COMPASS: A source term code for investigating capillary barrier performance

    International Nuclear Information System (INIS)

    Zhou, Wei; Apted, J.J.

    1996-01-01

    A computer code, COMPASS, based on a compartment model approach has been developed to calculate the near-field source term of a high-level-waste repository under unsaturated conditions. COMPASS is applied to evaluate the expected performance of Richard's (capillary) barriers as backfills to divert infiltrating groundwater at Yucca Mountain. A comparison of the release rates of four typical nuclides with and without the Richard's barrier shows that the barrier significantly decreases the peak release rates from the Engineered Barrier System (EBS) into the host rock

  12. Vectorization, parallelization and implementation of Quantum molecular dynamics codes (QQQF, MONTEV)

    Energy Technology Data Exchange (ETDEWEB)

    Kato, Kaori [High Energy Accelerator Research Organization, Tsukuba, Ibaraki (Japan); Kunugi, Tomoaki; Kotake, Susumu; Shibahara, Masahiko

    1998-03-01

    This report describes the parallelization, vectorization and implementation of two simulation codes, the quantum molecular dynamics simulation code QQQF and the photon Monte Carlo molecular dynamics simulation code MONTEV, that have been developed for the analysis of the thermalization of photon energies in molecules or materials. QQQF has been vectorized and parallelized on the Fujitsu VPP and then ported from the VPP to the Intel Paragon XP/S and parallelized. MONTEV has been ported from the VPP to the Paragon and parallelized. (author)

  13. A plug-in to Eclipse for VHDL source codes: functionalities

    Science.gov (United States)

    Niton, B.; Poźniak, K. T.; Romaniuk, R. S.

    The paper presents an original application, written by the authors, which supports the writing and editing of source code in the VHDL language. It is a step towards fully automatic, augmented code writing for photonic and electronic systems, including systems based on FPGAs and/or DSP processors. An implementation based on VEditor, a free-license program, is described; the work presented in this paper thus supplements and extends this free licence. The introduction briefly characterizes the tools available on the market which aid the design processes of electronic systems in VHDL. Particular attention is paid to plug-ins for the Eclipse environment and the Emacs program. Detailed properties of the written plug-in are presented, such as the programming extension conception and the results of the activities of the formatter, refactorizer, code hider, and other new additions to the VEditor program.

  14. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion, or at the lowest possible distortion given a specified bit rate...... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources which are sources that can...... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically...
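
    A bare-bones illustration of coding an AR source is differential (predictive) quantization: predict each sample from the previous reconstruction and quantize only the prediction error. The sketch below uses an AR(1) model with invented parameters and ignores the channel-loss aspect that the thesis actually addresses.

      # DPCM for an AR(1) source; distortion tracks the quantizer noise.
      import numpy as np

      rng = np.random.default_rng(4)
      a, n, step = 0.9, 10_000, 0.25     # AR coefficient, length, quantizer step

      x = np.zeros(n)
      for t in range(1, n):              # AR(1) source realisation
          x[t] = a * x[t - 1] + rng.standard_normal()

      xq = np.zeros(n)                   # decoder-side reconstruction
      for t in range(1, n):
          pred = a * xq[t - 1]
          e = np.round((x[t] - pred) / step) * step  # quantized residual
          xq[t] = pred + e

      print(np.var(x - xq), step**2 / 12)  # ~ equal: only residual noise remains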

  15. Corporate Governance Scorecards : Assessing and Promoting the Implementation of Codes of Corporate Governance

    OpenAIRE

    International Finance Corporation

    2014-01-01

    This is a supplement to IFC's second toolkit, Developing Corporate Governance Codes of Best Practice. The focus of the second toolkit is the development of codes of corporate governance. This supplement focuses narrowly on how to use scorecards to measure the observance and implementation of such codes. It does not cover the full panoply of governance assessment tools. This supplement provides...

  16. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this package provides...

  17. A Comparison of Source Code Plagiarism Detection Engines

    Science.gov (United States)

    Lancaster, Thomas; Culwin, Fintan

    2004-06-01

    Automated techniques for finding plagiarism in student source code submissions have been in use for over 20 years and there are many available engines and services. This paper reviews the literature on the major modern detection engines, providing a comparison of them based upon the metrics and techniques they deploy. Generally the most common and effective techniques are seen to involve tokenising student submissions then searching pairs of submissions for long common substrings, an example of what is defined to be a paired structural metric. Computing academics are recommended to use one of the two Web-based detection engines, MOSS and JPlag. It is shown that whilst detection is well established there are still places where further research would be useful, particularly where visual support of the investigation process is possible.
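
    The paired structural metric described above, tokenising two submissions and looking for long common substrings, is easy to prototype. The sketch below is illustrative only: it tokenises Python with the standard library rather than an engine's own tokenizer, and it normalises identifiers to a single class so that renaming does not hide copying.

      # Token-level longest common match between two "submissions".
      import tokenize, io, difflib, keyword

      def tokens(src):
          out = []
          for t in tokenize.generate_tokens(io.StringIO(src).readline):
              if t.type == tokenize.NAME:
                  out.append(t.string if keyword.iskeyword(t.string) else "ID")
              elif t.type in (tokenize.OP, tokenize.NUMBER):
                  out.append(t.string)
          return out

      a = tokens("def s(xs):\n total = 0\n for x in xs:\n  total += x\n return total\n")
      b = tokens("def add(ys):\n acc = 0\n for y in ys:\n  acc += y\n return acc\n")

      m = difflib.SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
      print(m.size, len(a))   # full-length match despite renamed identifiers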

  18. Source Code Verification for Embedded Systems using Prolog

    Directory of Open Access Journals (Sweden)

    Frank Flederer

    2017-01-01

    Full Text Available System-relevant embedded software needs to be reliable and, therefore, well tested, especially for aerospace systems. A common technique to verify programs is the analysis of their abstract syntax tree (AST). Tree structures can be elegantly analyzed with the logic programming language Prolog. Moreover, Prolog offers further advantages for a thorough analysis: on the one hand, it natively provides versatile options to efficiently process tree or graph data structures. On the other hand, Prolog's non-determinism and backtracking ease testing of different variations of the program flow without much effort. A rule-based approach with Prolog allows us to characterize the verification goals in a concise and declarative way. In this paper, we describe our approach to verify the source code of a flash file system with the help of Prolog. The flash file system is written in C++ and has been developed particularly for the use in satellites. We transform a given abstract syntax tree of C++ source code into Prolog facts and derive the call graph and the execution sequence (tree, which then are further tested against verification goals. The different program flow branching due to control structures is derived by backtracking as subtrees of the full execution sequence. Finally, these subtrees are verified in Prolog. We illustrate our approach with a case study, where we search for incorrect applications of semaphores in embedded software using the real-time operating system RODOS. We rely on computation tree logic (CTL) and have designed an embedded domain specific language (DSL) in Prolog to express the verification goals.

  19. GRHydro: a new open-source general-relativistic magnetohydrodynamics code for the Einstein toolkit

    International Nuclear Information System (INIS)

    Mösta, Philipp; Haas, Roland; Ott, Christian D; Reisswig, Christian; Mundim, Bruno C; Faber, Joshua A; Noble, Scott C; Bode, Tanja; Löffler, Frank; Schnetter, Erik

    2014-01-01

    We present the new general-relativistic magnetohydrodynamics (GRMHD) capabilities of the Einstein toolkit, an open-source community-driven numerical relativity and computational relativistic astrophysics code. The GRMHD extension of the toolkit builds upon previous releases and implements the evolution of relativistic magnetized fluids in the ideal MHD limit in fully dynamical spacetimes using the same shock-capturing techniques previously applied to hydrodynamical evolution. In order to maintain the divergence-free character of the magnetic field, the code implements both constrained transport and hyperbolic divergence cleaning schemes. We present test results for a number of MHD tests in Minkowski and curved spacetimes. Minkowski tests include aligned and oblique planar shocks, cylindrical explosions, magnetic rotors, Alfvén waves and advected loops, as well as a set of tests designed to study the response of the divergence cleaning scheme to numerically generated monopoles. We study the code’s performance in curved spacetimes with spherical accretion onto a black hole on a fixed background spacetime and in fully dynamical spacetimes by evolutions of a magnetized polytropic neutron star and of the collapse of a magnetized stellar core. Our results agree well with exact solutions where these are available and we demonstrate convergence. All code and input files used to generate the results are available on http://einsteintoolkit.org. This makes our work fully reproducible and provides new users with an introduction to applications of the code. (paper)

  20. Implementation of GNASH and auxiliary codes on the Harwell CRAY-1

    International Nuclear Information System (INIS)

    Muir, D.W.

    1985-07-01

    The report describes a version of the preequilibrium, statistical nuclear-model code GNASH which has been implemented, along with a set of small auxiliary codes, on the CRAY-1 at AERE Harwell. GNASH provides a flexible tool for calculating cross sections, isomer ratios and emission spectra. A detailed description of the current user input is provided along with a full listing of the actual FORTRAN code, as modified for this implementation. (author)

  1. Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

    DEFF Research Database (Denmark)

    Simonsen, Kent Inge; Kristensen, Lars Michael

    2014-01-01

    with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach as implemented in the PetriCode tool to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates the scalability of our approach to real...... protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test-suite resulting in 97% and 99% success rate for the client and server implementation, respectively. The tests show...

  2. Implementation of computer codes for performance assessment of the Republic repository of LLW/ILW Mochovce

    International Nuclear Information System (INIS)

    Hanusik, V.; Kopcani, I.; Gedeon, M.

    2000-01-01

    This paper describes selection and adaptation of computer codes required to assess the effects of radionuclide release from Mochovce Radioactive Waste Disposal Facility. The paper also demonstrates how these codes can be integrated into performance assessment methodology. The considered codes include DUST-MS for source term release, MODFLOW for ground-water flow and BS for transport through biosphere and dose assessment. (author)

  3. Source-term model for the SYVAC3-NSURE performance assessment code

    International Nuclear Information System (INIS)

    Rowat, J.H.; Rattan, D.S.; Dolinar, G.M.

    1996-11-01

    Radionuclide contaminants in wastes emplaced in disposal facilities will not remain in those facilities indefinitely. Engineered barriers will eventually degrade, allowing radioactivity to escape from the vault. The radionuclide release rate from a low-level radioactive waste (LLRW) disposal facility, the source term, is a key component in the performance assessment of the disposal system. This report describes the source-term model that has been implemented in Ver. 1.03 of the SYVAC3-NSURE (Systems Variability Analysis Code generation 3-Near Surface Repository) code. NSURE is a performance assessment code that evaluates the impact of near-surface disposal of LLRW through the groundwater pathway. The source-term model described here was developed for the Intrusion Resistant Underground Structure (IRUS) disposal facility, which is a vault that is to be located in the unsaturated overburden at AECL's Chalk River Laboratories. The processes included in the vault model are roof and waste package performance, and diffusion, advection and sorption of radionuclides in the vault backfill. The model presented here was developed for the IRUS vault; however, it is applicable to other near-surface disposal facilities. (author). 40 refs., 6 figs
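
    The diffusion, advection and sorption processes named above are conventionally combined in a one-dimensional transport equation; a generic textbook form (for orientation only, not the NSURE equations themselves) is

        R \frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial z^2} - v \frac{\partial C}{\partial z} - \lambda R C , \qquad R = 1 + \frac{\rho_b K_d}{\theta} ,

    where C is the porewater concentration, D the dispersion coefficient, v the porewater velocity, λ the decay constant, and the retardation factor R accounts for sorption through the bulk density ρ_b, distribution coefficient K_d and moisture content θ.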

  4. Code cases for implementing risk-based inservice testing in the ASME OM code

    Energy Technology Data Exchange (ETDEWEB)

    Rowley, C.W.

    1996-12-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices.

  5. Code cases for implementing risk-based inservice testing in the ASME OM code

    International Nuclear Information System (INIS)

    Rowley, C.W.

    1996-01-01

    Historically inservice testing has been reasonably effective, but quite costly. Recent applications of plant PRAs to the scope of the IST program have demonstrated that of the 30 pumps and 500 valves in the typical plant IST program, less than half of the pumps and ten percent of the valves are risk significant. The way the ASME plans to tackle this overly-conservative scope for IST components is to use the PRA and plant expert panels to create a two tier IST component categorization scheme. The PRA provides the quantitative risk information and the plant expert panel blends the quantitative and deterministic information to place the IST component into one of two categories: More Safety Significant Component (MSSC) or Less Safety Significant Component (LSSC). With all the pumps and valves in the IST program placed in MSSC or LSSC categories, two different testing strategies will be applied. The testing strategies will be unique for the type of component, such as centrifugal pump, positive displacement pump, MOV, AOV, SOV, SRV, PORV, HOV, CV, and MV. A series of OM Code Cases are being developed to capture this process for a plant to use. One Code Case will be for Component Importance Ranking. The remaining Code Cases will develop the MSSC and LSSC testing strategy for type of component. These Code Cases are planned for publication in early 1997. Later, after some industry application of the Code Cases, the alternative Code Case requirements will gravitate to the ASME OM Code as appendices

  6. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculations of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 main frames as well as on a Micro VAX 3800 system. In order to get a plant-specific source term, data on the CNLV including initial core inventory, burn-up, primary containment structures, and materials used for the calculations have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of high pressure core spray and reactor core isolation cooling systems. The probability for that event is approximately 4.5 x 10^-6. This sequence has been analysed in detail and the release fractions of radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  7. Topological color codes on Union Jack lattices: a stable implementation of the whole Clifford group

    International Nuclear Information System (INIS)

    Katzgraber, Helmut G.; Bombin, H.; Andrist, Ruben S.; Martin-Delgado, M. A.

    2010-01-01

    We study the error threshold of topological color codes on Union Jack lattices that allow for the full implementation of the whole Clifford group of quantum gates. After mapping the error-correction process onto a statistical mechanical random three-body Ising model on a Union Jack lattice, we compute its phase diagram in the temperature-disorder plane using Monte Carlo simulations. Surprisingly, topological color codes on Union Jack lattices have a similar error stability to color codes on triangular lattices, as well as to the Kitaev toric code. The enhanced computational capabilities of the topological color codes on Union Jack lattices with respect to triangular lattices and the toric code combined with the inherent robustness of this implementation show good prospects for future stable quantum computer implementations.

  8. 78 FR 41731 - Source Specific Federal Implementation Plan for Implementing Best Available Retrofit Technology...

    Science.gov (United States)

    2013-07-11

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 49 [EPA-R09-OAR-2013-0489; FRL-9830-5] Source Specific Federal Implementation Plan for Implementing Best Available Retrofit Technology for Four Corners Power... Implementation Plan (FIP) to implement the Best Available Retrofit Technology (BART) requirement of the Regional...

  9. Importance sampling implemented in the code PRIZMA for deep penetration and detection problems in reactor physics

    International Nuclear Information System (INIS)

    Kandiev, Y.Z.; Zatsepin, O.V.

    2013-01-01

    At RFNC-VNIITF, the PRIZMA code, which has been under development for more than 30 years, is used to model radiation transport by the Monte Carlo method. The code implements individual and coupled tracking of neutrons, photons, electrons, positrons and ions in one-dimensional (1D), 2D or 3D geometry. Attendance estimators are used for tallying, i.e., estimators whose scores are only nonzero for particles which cross a region or surface of interest. Importance sampling is used to make deep penetration and detection calculations more effective. However, its application to reactor analysis proved to have its own peculiarities and required further development. The paper reviews methods used for deep penetration and detection calculations by PRIZMA. It describes how these calculations differ when applied to reactor analysis and how we compute approximate importance functions and parameters for biased distributions. Methods to control the statistical weight of particles are also discussed. A number of test and applied calculations performed for verification purposes are provided. They are shown to agree with asymptotic solutions where these exist, or with results of analog calculations or predictions by other codes. The applied calculations include the estimation of ex-core detector response from neutron sources arranged in the core, and the estimation of in-core detector response. (authors)

  10. Implementation of the chemical PbLi/water reaction in the SIMMER code

    Energy Technology Data Exchange (ETDEWEB)

    Eboli, Marica, E-mail: marica.eboli@for.unipi.it [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Forgione, Nicola [DICI—University of Pisa, Largo Lucio Lazzarino 2, 56122 Pisa (Italy); Del Nevo, Alessandro [ENEA FSN-ING-PAN, CR Brasimone, 40032 Camugnano, BO (Italy)

    2016-11-01

    Highlights: • Updated predictive capabilities of SIMMER-III code. • Verification of the implemented PbLi/Water chemical reactions. • Identification of code capabilities in modelling phenomena relevant to safety. • Validation against BLAST Test No. 5 experimental data successfully completed. • Need for new experimental campaign in support of code validation on LIFUS5/Mod3. - Abstract: The availability of a qualified system code for the deterministic safety analysis of the in-box LOCA postulated accident is of primary importance. Considering the renewed interest for the WCLL breeding blanket, such a code shall be multi-phase, shall manage the thermodynamic interaction among the fluids, and shall include the exothermic chemical reaction between lithium-lead and water, generating oxides and hydrogen. The paper presents the implementation of the chemical correlations in the SIMMER-III code, the verification of the code model in simple geometries and the first validation activity based on BLAST Test No. 5 experimental data.

  11. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  13. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWTs, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation
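
    One common lumped-element reading of such a port approximation treats the gap voltage V as a damped oscillator driven by the beam current; the sketch below (Python, illustrative only; the exact form and constants vary between codes) advances that equivalent circuit by one explicit time step:

        def cavity_step(V, dVdt, dIdt, omega, Q, R_over_Q, dt):
            # Assumed cavity-mode equation:
            # d2V/dt2 + (omega/Q) dV/dt + omega^2 V = omega (R/Q) dI/dt
            d2Vdt2 = omega * R_over_Q * dIdt - (omega / Q) * dVdt - omega ** 2 * V
            return V + dt * dVdt, dVdt + dt * d2Vdt2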

  14. Implementation of JAERI's reflood model into TRAC-PF1/MOD1 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Ohnuki, Akira; Murao, Yoshio

    1993-02-01

    Selected physical models of the REFLA code, a reflood analysis code developed at JAERI, were implemented into the TRAC-PF1/MOD1 code in order to improve its predictive capability for core thermal-hydraulic behavior during the reflood phase of a PWR LOCA. Through comparison of the physical models of the two codes, (1) the Murao-Iguchi void fraction correlation, (2) the drag coefficient correlation acting on drops, (3) the correlation for the wall heat transfer coefficient in the film boiling regime, (4) the quench velocity correlation and (5) heat transfer correlations for the dispersed flow regime were selected from the REFLA code for implementation into the TRAC-PF1/MOD1 code. A method for transforming the void fraction correlation into an equivalent interfacial friction model was developed, and the effect of the transformation method on the stability of the solution is discussed. Through assessment calculations using data from the CCTF (Cylindrical Core Test Facility) flat power test, it was confirmed that the predictive capability of the TRAC code for core thermal-hydraulic behavior during reflood can be improved by implementing the selected physical models of the REFLA code. Several user guidelines for the modified TRAC code are proposed based on sensitivity studies on the number of fluid cells in the hydraulic calculation, and on the number of nodes and the effect of axial heat conduction in the heat conduction calculation of the fuel rod. (author)

  15. Schroedinger’s Code: A Preliminary Study on Research Source Code Availability and Link Persistence in Astrophysics

    Science.gov (United States)

    Allen, Alice; Teuben, Peter J.; Ryan, P. Wesley

    2018-05-01

    We examined software usage in a sample set of astrophysics research articles published in 2015 and searched for the source codes for the software mentioned in these research papers. We categorized the software to indicate whether the source code is available for download and whether there are restrictions to accessing it, and if the source code is not available, whether some other form of the software, such as a binary, is. We also extracted hyperlinks from one journal’s 2015 research articles, as links in articles can serve as an acknowledgment of software use and lead to the data used in the research, and tested them to determine which of these URLs are still accessible. For our sample of 715 software instances in the 166 articles we examined, we were able to categorize 418 records according to whether source code was available and found that 285 unique codes were used, 58% of which offered the source code for download. Of the 2558 hyperlinks extracted from 1669 research articles, at best, 90% of them were available over our testing period.
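
    The link-persistence test lends itself to a few lines of code; the sketch below (Python with the requests package, not the authors' actual script, and with placeholder URLs) checks which links still resolve:

        import requests

        def alive(url, timeout=10):
            try:
                r = requests.head(url, allow_redirects=True, timeout=timeout)
                if r.status_code >= 400:  # some servers reject HEAD; retry with GET
                    r = requests.get(url, stream=True, timeout=timeout)
                return r.status_code < 400
            except requests.RequestException:
                return False

        urls = ["https://example.org/code", "https://example.org/data"]
        print(sum(alive(u) for u in urls) / len(urls))  # fraction still accessible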

  16. Implementation of an OAIS Repository Using Free, Open Source Software

    Science.gov (United States)

    Flathers, E.; Gessler, P. E.; Seamon, E.

    2015-12-01

    The Northwest Knowledge Network (NKN) is a regional data repository located at the University of Idaho that focuses on the collection, curation, and distribution of research data. To support our home institution and others in the region, we offer services to researchers at all stages of the data lifecycle—from grant application and data management planning to data distribution and archive. In this role, we recognize the need to work closely with other data management efforts at partner institutions and agencies, as well as with larger aggregation efforts such as our state geospatial data clearinghouses, data.gov, DataONE, and others. In the past, one of our challenges with monolithic, prepackaged data management solutions is that customization can be difficult to implement and maintain, especially as new versions of the software are released that are incompatible with our local codebase. Our solution is to break the monolith up into its constituent parts, which offers us several advantages. First, any customizations that we make are likely to fall into areas that can be accessed through Application Program Interfaces (API) that are likely to remain stable over time, so our code stays compatible. Second, as components become obsolete or insufficient to meet new demands that arise, we can replace the individual components with minimal effect on the rest of the infrastructure, causing less disruption to operations. Other advantages include increased system reliability, staggered rollout of new features, enhanced compatibility with legacy systems, reduced dependence on a single software company as a point of failure, and the separation of development into manageable tasks. In this presentation, we describe our application of the Service Oriented Architecture (SOA) design paradigm to assemble a data repository that conforms to the Open Archival Information System (OAIS) Reference Model primarily using a collection of free and open-source software. We detail the design

  17. OSSMETER D3.4 – Language-Specific Source Code Quality Analysis

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim); H.J.S. Basten (Bas)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and prototypes of the tools that are needed for source code quality analysis in open source software projects. It builds upon the results of: • Deliverable 3.1 where infra-structure and

  18. An FPGA Implementation of (3,6)-Regular Low-Density Parity-Check Code Decoder

    Directory of Open Access Journals (Sweden)

    Tong Zhang

    2003-05-01

    Full Text Available Because of their excellent error-correcting performance, low-density parity-check (LDPC) codes have recently attracted a lot of attention. In this paper, we are interested in practical LDPC decoder hardware implementations. The direct fully parallel decoder implementation usually incurs too high a hardware complexity for many real applications, thus partly parallel decoder design approaches that can achieve appropriate trade-offs between hardware complexity and decoding throughput are highly desirable. Applying a joint code and decoder design methodology, we develop a high-speed (3,k)-regular LDPC code partly parallel decoder architecture, based on which we implement a 9216-bit, rate-1/2 (3,6)-regular LDPC code decoder on a Xilinx FPGA device. This partly parallel decoder supports a maximum symbol throughput of 54 Mbps and achieves BER 10^-6 at 2 dB over the AWGN channel while performing a maximum of 18 decoding iterations.
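
    The decoder in the paper is a soft-decision partly parallel message-passing design; as a minimal point of reference, the sketch below implements the simplest hard-decision relative of that family, a bit-flipping decoder for an arbitrary parity-check matrix H (toy matrix for illustration, not a (3,6)-regular code):

        import numpy as np

        def bit_flip_decode(H, y, max_iter=18):
            x = y.copy()
            for _ in range(max_iter):
                syndrome = H.dot(x) % 2
                if not syndrome.any():
                    return x, True                 # all parity checks satisfied
                counts = H.T.dot(syndrome)         # unsatisfied checks per bit
                x[counts == counts.max()] ^= 1     # flip the most suspect bits
            return x, False

        H = np.array([[1, 1, 0, 1, 0, 0],
                      [0, 1, 1, 0, 1, 0],
                      [1, 0, 1, 0, 0, 1]])
        y = np.array([1, 0, 1, 1, 0, 1])           # received hard decisions
        print(bit_flip_decode(H, y))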

  19. ENDF/B Pre-Processing Codes: Implementing and testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskettes containing the ENDF/B Pre-Processing codes by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a series of 7 diskettes. (author)

  20. Using National Drug Codes and drug knowledge bases to organize prescription records from multiple sources.

    Science.gov (United States)

    Simonaitis, Linas; McDonald, Clement J

    2009-10-01

    The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the in-patient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources for which invented codes comprised 1.7-7.4% of their prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of product codes used by inpatient sources.
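
    The coverage measurement described here reduces to a volume-weighted set lookup; a minimal sketch follows (Python; the product codes and counts are invented for illustration):

        from collections import Counter

        rx_volume = Counter({"00093-0058-01": 4000,     # NDC -> prescription count
                             "00781-1506-10": 900,
                             "LOCAL-123": 100})         # invented local code
        dkb_codes = {"00093-0058-01", "00781-1506-10"}  # codes a DKB table maps

        covered = sum(v for code, v in rx_volume.items() if code in dkb_codes)
        print(covered / sum(rx_volume.values()))        # coverage by Rx volume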

  1. Implementing the Netherlands Code of Conduct for Scientific Practice : A Case Study

    NARCIS (Netherlands)

    Schuurbiers, D.; Osseweijer, P.; Kinderlerer, J.

    2009-01-01

    Widespread enthusiasm for establishing scientific codes of conduct notwithstanding, the utility of such codes in influencing scientific practice is not self-evident. It largely depends on the implementation phase following their establishment—a phase which often receives little attention. The aim of

  2. Implementation of the critical points model in a SFM-FDTD code working in oblique incidence

    Energy Technology Data Exchange (ETDEWEB)

    Hamidi, M; Belkhir, A; Lamrous, O [Laboratoire de Physique et Chimie Quantique, Universite Mouloud Mammeri, Tizi-Ouzou (Algeria); Baida, F I, E-mail: omarlamrous@mail.ummto.dz [Departement d' Optique P.M. Duffieux, Institut FEMTO-ST UMR 6174 CNRS Universite de Franche-Comte, 25030 Besancon Cedex (France)

    2011-06-22

    We describe the implementation of the critical points model in a finite-difference time-domain (FDTD) code working in oblique incidence and dealing with dispersive media through the split field method. Some tests are presented to validate our code, in addition to an application devoted to the plasmon resonance of a gold nanoparticle grating.
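
    For reference, one widely used form of the critical points (CP) permittivity model combines a Drude term with CP resonances (shown for orientation; the number of CP terms and the parameter values are fitting choices, e.g. for gold):

        \varepsilon(\omega) = \varepsilon_\infty - \frac{\omega_D^2}{\omega^2 + i\gamma\omega}
          + \sum_p A_p \Omega_p \left( \frac{e^{i\phi_p}}{\Omega_p - \omega - i\Gamma_p}
          + \frac{e^{-i\phi_p}}{\Omega_p + \omega + i\Gamma_p} \right) .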

  3. 78 FR 60700 - Source Specific Federal Implementation Plan for Implementing Best Available Retrofit Technology...

    Science.gov (United States)

    2013-10-02

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 49 [EPA-R09-OAR-2013-0489; FRL-9901-58-Region 9] Source Specific Federal Implementation Plan for Implementing Best Available Retrofit Technology for Four... Plan (FIP) to implement the Best Available Retrofit Technology (BART) requirement of the Regional Haze...

  4. New Source Term Model for the RESRAD-OFFSITE Code Version 3

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Charley [Argonne National Lab. (ANL), Argonne, IL (United States); Gnanapragasam, Emmanuel [Argonne National Lab. (ANL), Argonne, IL (United States); Cheng, Jing-Jy [Argonne National Lab. (ANL), Argonne, IL (United States); Kamboj, Sunita [Argonne National Lab. (ANL), Argonne, IL (United States); Chen, Shih-Yew [Argonne National Lab. (ANL), Argonne, IL (United States)

    2013-06-01

    This report documents the new source term model developed and implemented in Version 3 of the RESRAD-OFFSITE code. This new source term model includes: (1) "first order release with transport" option, in which the release of the radionuclide is proportional to the inventory in the primary contamination and the user-specified leach rate is the proportionality constant, (2) "equilibrium desorption release" option, in which the user specifies the distribution coefficient which quantifies the partitioning of the radionuclide between the solid and aqueous phases, and (3) "uniform release" option, in which the radionuclides are released from a constant fraction of the initially contaminated material during each time interval and the user specifies the duration over which the radionuclides are released.
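
    The three options reduce to three simple release laws; the sketch below (Python) is illustrative only, and the function and parameter names are ours, not RESRAD-OFFSITE's:

        import math

        def first_order_release(inventory, leach_rate, dt):
            # Release rate proportional to the remaining inventory.
            return inventory * (1.0 - math.exp(-leach_rate * dt))

        def equilibrium_desorption(conc_solid, kd):
            # Kd partitions the radionuclide between solid and aqueous phases.
            return conc_solid / kd                       # aqueous concentration

        def uniform_release(initial_inventory, release_duration, dt):
            # Constant fraction of the initial inventory per time interval.
            return initial_inventory * min(dt / release_duration, 1.0)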

  5. Implementation of generalized quantum measurements: Superadditive quantum coding, accessible information extraction, and classical capacity limit

    International Nuclear Information System (INIS)

    Takeoka, Masahiro; Fujiwara, Mikio; Mizuno, Jun; Sasaki, Masahide

    2004-01-01

    Quantum-information theory predicts that when the transmission resource is doubled in quantum channels, the amount of information transmitted can be increased more than twice by quantum-channel coding technique, whereas the increase is at most twice in classical information theory. This remarkable feature, the superadditive quantum-coding gain, can be implemented by appropriate choices of code words and corresponding quantum decoding which requires a collective quantum measurement. Recently, an experimental demonstration was reported [M. Fujiwara et al., Phys. Rev. Lett. 90, 167906 (2003)]. The purpose of this paper is to describe our experiment in detail. Particularly, a design strategy of quantum-collective decoding in physical quantum circuits is emphasized. We also address the practical implication of the gain on communication performance by introducing the quantum-classical hybrid coding scheme. We show how the superadditive quantum-coding gain, even in a small code length, can boost the communication performance of conventional coding techniques

  6. Neutron spallation source and the Dubna cascade code

    CERN Document Server

    Kumar, V; Goel, U; Barashenkov, V S

    2003-01-01

    Neutron multiplicity per incident proton, n/p, in collisions of a high energy proton beam with voluminous Pb and W targets has been estimated from the Dubna cascade code and compared with the available experimental data for the purpose of benchmarking the code. Contributions of various atomic and nuclear processes to heat production and the isotopic yield of secondary nuclei are also estimated to assess the heat and radioactivity conditions of the targets. Results obtained from the code show excellent agreement with the experimental data at beam energies E < 1.2 GeV and differ by up to 25% at higher energies. (author)

  7. Stars with shell energy sources. Part 1. Special evolutionary code

    International Nuclear Information System (INIS)

    Rozyczka, M.

    1977-01-01

    A new version of the Henyey-type stellar evolution code is described and tested. It is shown, as a by-product of the tests, that the thermal time scale of the core of a red giant approaching the helium flash is of the order of the evolutionary time scale. The code itself appears to be a very efficient tool for investigations of the helium flash, carbon flash and the evolution of a white dwarf accreting mass. (author)

  8. Implementation of the International Code of Marketing of Breastmilk Substitutes in the Eastern Mediterranean Region.

    Science.gov (United States)

    Al Jawaldeh, Ayoub; Sayed, Ghada

    2018-04-05

    Optimal breastfeeding practices and appropriate complementary feeding improve child health, survival and development. The countries of the Eastern Mediterranean Region have made significant strides in the formulation and implementation of legislation to protect and promote breastfeeding based on the International Code of Marketing of Breast-milk Substitutes (the Code) and subsequent relevant World Health Assembly resolutions. The aim of this study was to assess the implementation of the Code in the Region. The assessment was conducted by the World Health Organization (WHO) Regional Office for the Eastern Mediterranean using a WHO standard questionnaire. Seventeen countries in the Region have enacted legislation to protect breastfeeding. Only 6 countries have comprehensive legislation or other legal measures reflecting all or most provisions of the Code; 4 countries have legal measures incorporating many provisions of the Code; 7 countries have legal measures that contain a few provisions of the Code; 4 countries are currently studying the issue; and only 1 country has no measures in place. Further analysis of the legislation found that the text of articles in the laws fully reflected the Code articles in only 6 countries. Most countries need to revisit and amend existing national legislation to implement the Code and relevant World Health Assembly resolutions fully, supported by systematic monitoring and reporting. Copyright © World Health Organization (WHO) 2018. Some rights reserved. This work is available under the CC BY-NC-SA 3.0 IGO license (https://creativecommons.org/licenses/by-nc-sa/3.0/igo).

  9. Implementation of Fast Emulator-based Code Calibration

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, Nathaniel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis; Denman, Matthew R [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Risk & Reliability Analysis

    2016-08-01

    Calibration is the process of using experimental data to gain more precise knowledge of simulator inputs. This process commonly involves the use of Markov-chain Monte Carlo, which requires running a simulator thousands of times. If we can create a faster program, called an emulator, that mimics the outputs of the simulator for an input range of interest, then we can speed up the process enough to make it feasible for expensive simulators. To this end, we implement a Gaussian-process emulator capable of reproducing the behavior of various long-running simulators to within acceptable tolerance. This fast emulator can be used in place of a simulator to run Markov-chain Monte Carlo in order to calibrate simulation parameters to experimental data. As a demonstration, this emulator is used to calibrate the inputs of an actual simulator against two sodium-fire experiments.
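
    A minimal sketch of the emulator-plus-MCMC loop described above (Python with scikit-learn; the toy simulator, flat prior and tuning constants are invented for illustration, and the real workflow replaces the stand-in with an expensive code):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(0)

        def simulator(theta):                      # stand-in for a long-running code
            return np.sin(3.0 * theta) + 0.5 * theta

        # 1. Train the emulator on a handful of expensive simulator runs.
        X = np.linspace(0.0, 2.0, 12).reshape(-1, 1)
        emulator = GaussianProcessRegressor(kernel=RBF(0.5)).fit(X, simulator(X).ravel())

        # 2. Metropolis MCMC against an observation, querying only the emulator.
        obs, sigma = simulator(1.3), 0.05

        def log_post(theta):
            if not 0.0 <= theta <= 2.0:            # flat prior on [0, 2]
                return -np.inf
            mu = emulator.predict(np.array([[theta]]))[0]
            return -0.5 * ((obs - mu) / sigma) ** 2

        theta, samples = 1.0, []
        for _ in range(5000):
            prop = theta + 0.1 * rng.standard_normal()
            if np.log(rng.random()) < log_post(prop) - log_post(theta):
                theta = prop
            samples.append(theta)
        print(np.mean(samples[1000:]))             # posterior mean, near 1.3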

  10. Status of SFR Codes and Methods QA Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Brunett, Acacia J. [Argonne National Lab. (ANL), Argonne, IL (United States); Briggs, Laural L. [Argonne National Lab. (ANL), Argonne, IL (United States); Fanning, Thomas H. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-01-31

    This report details the development of the SAS4A/SASSYS-1 SQA Program and describes the initial stages of Program implementation planning. The provisional Program structure, which is largely focused on the establishment of compliant SQA documentation, is outlined in detail, and Program compliance with the appropriate SQA requirements is highlighted. Additional program activities, such as improvements to testing methods and Program surveillance, are also described in this report. Given that the programmatic resources currently granted to development of the SAS4A/SASSYS-1 SQA Program framework are not sufficient to adequately address all SQA requirements (e.g. NQA-1, NUREG/BR-0167, etc.), this report also provides an overview of the gaps that remain in the SQA program and highlights recommendations on a path forward to resolution of these issues. One key finding of this effort is the identification of the need for an SQA program sustainable over multiple years within DOE annual R&D funding constraints.

  11. Report on the Implementation of the Code and the Guidance in Burkina Faso: Experiences and Lessons Learned

    International Nuclear Information System (INIS)

    Nabayaogo, Delwendé

    2015-01-01

    Burkina Faso has been implementing the Code of Conduct since 2008 as a Member State of the International Atomic Energy Agency (IAEA). The process went through several steps and actions. The first step was the establishment of a regulatory infrastructure, with the development of a legislative and regulatory framework and the creation of a regulatory body (National Authority for Radiation Protection and Nuclear Safety). The key legislation is law n°032-2012/AN on nuclear safety and safeguards. The country then undertook actions to enforce the regulations through a licensing and inspection regime. This led to the recovery of orphan sources and the establishment of a national register of radioactive sources using the RAIS software. In 2009 and 2010, the regulatory body carried out a widespread inventory of radioactive sources and a search for orphan sources covering all thirteen regions of the country according to the administrative division. The registered sources belong mainly to categories III and IV; there are few category I and II sources. As Burkina Faso has no facility for the management of disused sources and waste, a return contract is requested for their importation during the licensing process. The objective is to ensure good management of the sources and assure their security. Some sources imported many years ago have no remaining supplier and no return contract; an action is currently under way for the repatriation of some of them with the support of the IAEA. In support of the government's effort towards the safety and security of sources, an Integrated Nuclear Security Plan covering radioactive material has been developed by an INSERV mission. The proper implementation of these activities and of the principles of the Code requires a well trained staff. ARSN has developed a training programme for its regulatory staff and has them participate in IAEA training. Our success stories are likely the well drafted law on nuclear safety and

  12. A multi-GPU implementation of a D2Q37 lattice Boltzmann code

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Scagliarini, Andrea; Schifano, S.F.; Toschi, F.; Tripiccione, R.; Wyrzykowski, R.; Dongarra, J.; Karczewski, K.; Wasniewski, J.

    2012-01-01

    We describe a parallel implementation of a compressible Lattice Boltzmann code on a multi-GPU cluster based on Nvidia Fermi processors. We analyze how to optimize the algorithm for GP-GPU architectures, describe the implementation choices that we have adopted and compare our performance results with

  13. Process Model Improvement for Source Code Plagiarism Detection in Student Programming Assignments

    Science.gov (United States)

    Kermek, Dragutin; Novak, Matija

    2016-01-01

    In programming courses there are various ways in which students attempt to cheat. The most commonly used method is copying source code from other students and making minimal changes in it, like renaming variable names. Several tools like Sherlock, JPlag and Moss have been devised to detect source code plagiarism. However, for larger student…
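
    The fingerprinting idea behind such tools can be sketched in a few lines (Python; a deliberately crude version, since real tools use proper lexers and winnowing): hash k-grams of a normalized token stream, so that renamed identifiers still collide, and compare fingerprint sets:

        def fingerprints(code, k=5):
            tokens = code.split()                        # real tools tokenize properly
            tokens = ["ID" if t.isidentifier() else t for t in tokens]
            return {hash(tuple(tokens[i:i + k])) for i in range(len(tokens) - k + 1)}

        def similarity(a, b):
            fa, fb = fingerprints(a), fingerprints(b)
            return len(fa & fb) / max(1, len(fa | fb))   # Jaccard overlap

        print(similarity("int sum = a + b ;", "int total = a + b ;"))  # 1.0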

  14. OSSMETER D3.2 – Report on Source Code Activity Metrics

    NARCIS (Netherlands)

    J.J. Vinju (Jurgen); A. Shahi (Ashim)

    2014-01-01

    This deliverable is part of WP3: Source Code Quality and Activity Analysis. It provides descriptions and initial prototypes of the tools that are needed for source code activity analysis. It builds upon the Deliverable 3.1 where infra-structure and a domain analysis have been

  15. Tuning iteration space slicing based tiled multi-core code implementing Nussinov's RNA folding.

    Science.gov (United States)

    Palkowski, Marek; Bielecki, Wlodzimierz

    2018-01-15

    RNA folding is an ongoing compute-intensive task of bioinformatics. Parallelization and improving code locality for this kind of algorithm is one of the most relevant areas in computational biology. Fortunately, RNA secondary structure approaches, such as Nussinov's recurrence, involve mathematical operations over affine control loops whose iteration space can be represented by the polyhedral model. This allows us to apply powerful polyhedral compilation techniques based on the transitive closure of dependence graphs to generate parallel tiled code implementing Nussinov's RNA folding. Such techniques fall within the iteration space slicing framework: the transitive dependences are applied to the statement instances of interest to produce valid tiles. The main problem in generating parallel tiled code is defining a proper tile size and tile dimension, which impact the degree of parallelism and code locality. To choose the best tile size and tile dimension, we first construct parallel parametric tiled code (the parameters are variables defining the tile size). For this purpose, we generate two non-parametric tiled codes with different fixed tile sizes but the same code structure, and then derive a general affine model describing all integer factors available in the expressions of those codes. Using this model and the known integer factors present in those expressions (they define the left-hand side of the model), we find the unknown integers in the model for each integer factor occupying the same position in the fixed tiled code, and replace the expressions including integer factors with expressions including parameters. We then use this parallel parametric tiled code to implement the well-known tile size selection (TSS) technique, which allows us to discover, in a given search space, the best tile size and tile dimension maximizing target code performance. For a given search space, the presented approach allows us to choose the best tile size and tile dimension in
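
    For reference, the recurrence itself is short; an untiled Python version follows (the paper's contribution is generating tiled, parallel code for this loop nest, not the recurrence itself):

        def nussinov(seq, pairs={"AU", "UA", "GC", "CG", "GU", "UG"}):
            n = len(seq)
            N = [[0] * n for _ in range(n)]
            for j in range(1, n):
                for i in range(j - 1, -1, -1):
                    best = N[i + 1][j - 1] + (seq[i] + seq[j] in pairs)
                    for k in range(i, j):                # bifurcation point
                        best = max(best, N[i][k] + N[k + 1][j])
                    N[i][j] = best
            return N[0][n - 1]                           # max number of base pairs

        print(nussinov("GGGAAAUCC"))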

  16. Implementation of a database for the management of radioactive sources

    International Nuclear Information System (INIS)

    MOHAMAD, M.

    2012-01-01

    In Madagascar, applications of nuclear technology continue to develop. In order to protect human health and the environment against the harmful effects of ionizing radiation, each user of radioactive sources has to implement a nuclear safety and security programme and to declare their sources to the Regulatory Authority. This Authority must have access to all information relating to the sources and their uses. This work is based on the development of software using Python as the programming language and SQLite as the database. It makes it possible to computerize the management of radioactive sources. The application unifies the various existing databases and centralizes the activities of radioactive source management. The objective is to follow the movement of each source on Malagasy territory in order to avoid the risks related to the use of radioactive sources and illicit trafficking. [fr]
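
    A sketch of the kind of registry described (Python with the standard sqlite3 module; the table layout and sample record are ours, for illustration):

        import sqlite3

        con = sqlite3.connect("sources.db")
        con.execute("""CREATE TABLE IF NOT EXISTS sources (
            id INTEGER PRIMARY KEY,
            nuclide TEXT, activity_bq REAL, category INTEGER,
            licensee TEXT, location TEXT, status TEXT)""")
        con.execute("INSERT INTO sources (nuclide, activity_bq, category,"
                    " licensee, location, status) VALUES (?, ?, ?, ?, ?, ?)",
                    ("Co-60", 3.7e11, 2, "Hospital X", "Antananarivo", "in use"))
        con.commit()
        # Follow source movements: list every source registered at a location.
        for row in con.execute("SELECT nuclide, licensee FROM sources"
                               " WHERE location = ?", ("Antananarivo",)):
            print(row)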

  17. Security of Radioactive Sources. Implementing Guide (French Edition)

    International Nuclear Information System (INIS)

    2012-01-01

    There are concerns that terrorist or criminal groups could gain access to high activity radioactive sources and use these sources maliciously. The IAEA is working with Member States to increase control, accounting and security of radioactive sources to prevent their malicious use and the associated potential consequences. Based on extensive input from technical and legal experts, this implementation guide sets forth guidance on the security of sources and will serve as a useful tool for legislators and regulators, physical protection specialists and facility and transport operators, as well as for law enforcement officers.

  18. Implementation of reactor safety analysis code CATHARE and its use on FACOM M-380

    International Nuclear Information System (INIS)

    Ishiguro, Misako; Shinozawa, Naohisa; Tomiyama, Mineyoshi; Fujisaki, Masahide

    1986-05-01

    CATHARE is an advanced safety analysis code developed at the Nuclear Research Center of Grenoble in France. The code simulates thermohydraulic phenomena involved in loss of coolant accidents in pressurized water reactors. The code has been introduced into JAERI as a part of the technical exchange between the JAERI ROSA-IV Program and the French BETHSY-CATHARE Program. The code was delivered in the form of 23 files containing 115,000 statements in total. A large part of the CATHARE code has been written in 'Esope', an extended Fortran language mainly used for managing dynamic memory allocation. The JAERI version was created from the IBM version which has been used on the Amdahl computer at ISPRA. Some modifications were required in order to implement the CATHARE code at JAERI because of differences in software environments. In this report, the overview of the code structure, the JAERI usage, the implementation method, the error correction method, the problems specific to installing the code at JAERI, and the distribution of computing time are described. (author)

  19. Open Genetic Code: on open source in the life sciences

    OpenAIRE

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first ...

  20. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.

  1. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    Testing and calibration laboratories that comply with ISO/IEC 17025 demonstrate technical competence for the type of tests and calibrations SCALe undertakes [ISO/IEC 2005]. Successful conformance testing of a software system indicates that the SCALe analysis did not detect violations of rules defined by a CERT secure coding standard; conforming systems can be expected to be more secure than non-conforming systems, although no study has yet been performed to prove this. Conformance testing is an assessment in accordance with ISO/IEC 17000.

  2. Open Genetic Code : On open source in the life sciences

    NARCIS (Netherlands)

    Deibel, E.

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life

  3. Challenges to implementation of the WHO Global Code of Practice on International Recruitment of Health Personnel: the case of Sudan.

    Science.gov (United States)

    Abuagla, Ayat; Badr, Elsheikh

    2016-06-30

    The WHO Global Code of Practice on the International Recruitment of Health Personnel (hereafter the WHO Code) was adopted by the World Health Assembly in 2010 as a voluntary instrument to address challenges of health worker migration worldwide. To ascertain its relevance and effectiveness, the implementation of the WHO Code needs to be assessed based on country experience; hence this case study on Sudan. This qualitative study depended mainly on documentary sources, in addition to key informant interviews; the authors' experience has also informed the analysis. Migration of Sudanese health workers represents a major health system challenge. Over half of Sudanese physicians practice abroad, and new trends show the involvement of other professions and increased feminization. Traditional destinations include the Gulf States, especially Saudi Arabia, and Libya, as well as the United Kingdom and the Republic of Ireland. Low salaries, a poor work environment, and a lack of adequate professional development are the leading push factors. Massive emigration of skilled health workers has jeopardized the coverage and quality of healthcare and health professional education. Poor evidence, the lack of a national policy, and active recruitment, in addition to labour market problems, were barriers to effective migration management in Sudan. The response of destination countries in relation to cooperative arrangements with Sudan as a source country has always been suboptimal, demonstrating less attention to solidarity and ethical dimensions. The WHO Code boosted Sudan's efforts to address health worker migration and health workforce development in general. Improving migration evidence, fostering a national dialogue, and promoting bilateral agreements, in addition to catalysing health worker retention strategies, are some of the benefits accrued. There are, however, limitations in the publicity of the WHO Code and its incorporation into national laws and regulatory frameworks for ethical recruitment. The

  4. Real time implementation of a linear predictive coding algorithm on digital signal processor DSP32C

    International Nuclear Information System (INIS)

    Sheikh, N.M.; Usman, S.R.; Fatima, S.

    2002-01-01

    Pulse Code Modulation (PCM) has been widely used in speech coding. However, due to its high bit rate, PCM has severe limitations in applications where high spectral efficiency is desired, for example, in mobile communication, CD quality broadcasting systems etc. These limitations have motivated research in bit rate reduction techniques. Linear predictive coding (LPC) is one of the most powerful and complex techniques for bit rate reduction. With the introduction of powerful digital signal processors (DSPs) it has become possible to implement the complex LPC algorithm in real time. In this paper we present a real time implementation of the LPC algorithm on AT&T's DSP32C at a sampling frequency of 8192 Hz. Application of the LPC algorithm to two speech signals is discussed. Using this implementation, a bit rate reduction of 1:3 is achieved for better than toll quality speech, while a reduction of 1:16 is possible for the speech quality required in military applications. (author)
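
    The core of an LPC coder is the computation of the predictor coefficients; a compact sketch follows (Python/NumPy, autocorrelation method with the Levinson-Durbin recursion; the frame length and order are illustrative, in the style of low-bit-rate LPC-10 coders):

        import numpy as np

        def lpc_coefficients(frame, order):
            # Autocorrelation method followed by the Levinson-Durbin recursion.
            n = len(frame)
            r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
            a = np.zeros(order + 1)
            a[0], err = 1.0, r[0]
            for i in range(1, order + 1):
                k = -np.dot(a[:i], r[i:0:-1]) / err      # reflection coefficient
                a[:i + 1] += k * a[:i + 1][::-1]         # update predictor polynomial
                err *= 1.0 - k * k                       # prediction error energy
            return a, err

        t = np.arange(256)
        frame = np.sin(2 * np.pi * 440 * t / 8192)       # toy frame at 8192 Hz
        a, err = lpc_coefficients(frame, order=10)
        print(a, err)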

  5. Experimental implementation of the Bacon-Shor code with 10 entangled photons

    Science.gov (United States)

    Gimeno-Segovia, Mercedes; Sanders, Barry C.

    The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center(NSF Grant PHY-1125565) with support of the Moore Foundation(GBMF-2644).

  6. Open Genetic Code: on open source in the life sciences.

    Science.gov (United States)

    Deibel, Eric

    2014-01-01

    The introduction of open source in the life sciences is increasingly being suggested as an alternative to patenting. This is an alternative, however, that takes its shape at the intersection of the life sciences and informatics. Numerous examples can be identified wherein open source in the life sciences refers to access, sharing and collaboration as informatic practices. This includes open source as an experimental model and as a more sophisticated approach of genetic engineering. The first section discusses the greater flexibility with regard to patenting and the relationship to the introduction of open source in the life sciences. The main argument is that the ownership of knowledge in the life sciences should be reconsidered in the context of the centrality of DNA in informatic formats. This is illustrated by discussing a range of examples of open source models. The second part focuses on open source in synthetic biology as exemplary for the re-materialization of information into food, energy, medicine and so forth. The paper ends by raising the question whether another kind of alternative might be possible: one that looks at open source as a model for an alternative to the commodification of life that is understood as an attempt to comprehensively remove the restrictions from the usage of DNA in any of its formats.

  7. Emergency medicine summary code for reporting CT scan results: implementation and survey results.

    Science.gov (United States)

    Lam, Joanne; Coughlin, Ryan; Buhl, Luce; Herbst, Meghan; Herbst, Timothy; Martillotti, Jared; Coughlin, Bret

    2018-06-01

    The purpose of the study was to assess emergency department (ED) providers' interest in and satisfaction with ED CT result reporting before and after the implementation of a standardized summary code for all CT scan reporting. A summary code was provided at the end of all CTs ordered through the ED from August to October of 2016. A retrospective review was completed on all studies performed during this period. A pre- and post-survey was given to both ED and radiology providers. A total of 3980 CT scans (excluding CTAs) were ordered: 2240 CTs of the head and neck, 1685 CTs of the torso, and 55 CTs of the extremities. Approximately 74% of the CT scans were contrast enhanced. Of the 3980 ED CT examinations ordered, 69% had a summary code assigned. Fifteen percent of the coded CTs had a critical or diagnostically positive result. The introduction of an ED CT summary code did not show a definitive improvement in communication. However, the ED providers are in consensus that radiology reports are crucial to their patients' management. There is slightly higher satisfaction among providers with less than 5 years of experience with the ED CT codes compared to more seasoned providers. The implementation of a user-friendly summary code may allow better analysis of results, practice improvement, and quality measurement in the future.

  8. Implementing particle-in-cell plasma simulation code on the BBN TC2000

    International Nuclear Information System (INIS)

    Sturtevant, J.E.; Maccabe, A.B.

    1990-01-01

    The BBN TC2000 is a multiple instruction, multiple data (MIMD) machine that combines a physically distributed memory with a logically shared memory programming environment using the unique Butterfly switch. Particle-in-cell (PIC) plasma simulations model the interaction of charged particles with electric and magnetic fields. This paper describes the implementation of both a 1-D electrostatic and a 2 1/2-D electromagnetic PIC plasma simulation code on a BBN TC2000. Performance is compared to implementations of the same code on the shared memory Sequent Balance and distributed memory Intel iPSC hypercube

  9. Measuring the implementation of codes of conduct. An assessment method based on a process approach of the responsible organisation

    NARCIS (Netherlands)

    Nijhof, A.H.J.; Cludts, Stephan; Fisscher, O.A.M.; Laan, Albertus

    2003-01-01

    More and more organisations formulate a code of conduct in order to stimulate responsible behaviour among their members. Much time and energy is usually spent fixing the content of the code but many organisations get stuck in the challenge of implementing and maintaining the code. The code then

  10. Design and implementation of a scene-dependent dynamically selfadaptable wavefront coding imaging system

    Science.gov (United States)

    Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador

    2012-01-01

    A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically selfadaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.

    Program summary
    Program title: DynWFC (Dynamic WaveFront Coding)
    Catalogue identifier: AEKC_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 10 483
    No. of bytes in distributed program, including test data, etc.: 2 437 713
    Distribution format: tar.gz
    Programming language: Labview 8.5 and NI Vision and MinGW C Compiler
    Computer: Tested on PC Intel® Pentium®
    Operating system: Tested on Windows XP
    Classification: 18
    Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator

  11. Guidelines for the implementation of an open source information system

    Energy Technology Data Exchange (ETDEWEB)

    Doak, J.; Howell, J.A.

    1995-08-01

    This work was initially performed for the International Atomic Energy Agency (IAEA) to help with the Open Source Task of the 93 + 2 Initiative; however, the information should be of interest to anyone working with open sources. The authors cover all aspects of an open source information system (OSIS) including, for example, identifying relevant sources, understanding copyright issues, and making information available to analysts. They foresee this document as a reference point that implementors of a system could augment for their particular needs. The primary organization of this document focuses on specific aspects, or components, of an OSIS; they describe each component and often make specific recommendations for its implementation. This document also contains a section discussing the process of collecting open source data and a section containing miscellaneous information. The appendix contains a listing of various providers, producers, and databases that the authors have come across in their research.

  12. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 μm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
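
    The model-based least squares idea can be sketched generically: given a forward model A that maps the image x to detector data b, reconstruction reduces to minimizing ||Ax - b||^2. In the Python fragment below the matrix A is a random placeholder standing in for the modeled coded-mask and source response, so this is only an illustration of the optimization structure, not the ORNL system:

        import numpy as np

        # Projected gradient (Landweber) iteration for min ||Ax - b||^2, x >= 0.
        rng = np.random.default_rng(0)
        n_pix, n_meas = 100, 200
        A = rng.normal(size=(n_meas, n_pix))    # placeholder system model
        x_true = np.clip(rng.normal(size=n_pix), 0, None)
        b = A @ x_true + 0.01 * rng.normal(size=n_meas)

        x = np.zeros(n_pix)
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # step size from the spectral norm
        for _ in range(500):
            x -= step * (A.T @ (A @ x - b))     # gradient of 0.5*||Ax - b||^2
            x = np.clip(x, 0, None)             # keep intensities non-negative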

  13. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks.

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-07-09

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption.

  14. Source Authentication for Code Dissemination Supporting Dynamic Packet Size in Wireless Sensor Networks †

    Science.gov (United States)

    Kim, Daehee; Kim, Dongwan; An, Sunshin

    2016-01-01

    Code dissemination in wireless sensor networks (WSNs) is a procedure for distributing a new code image over the air in order to update programs. Due to the fact that WSNs are mostly deployed in unattended and hostile environments, secure code dissemination ensuring authenticity and integrity is essential. Recent works on dynamic packet size control in WSNs allow enhancing the energy efficiency of code dissemination by dynamically changing the packet size on the basis of link quality. However, the authentication tokens attached by the base station become useless in the next hop where the packet size can vary according to the link quality of the next hop. In this paper, we propose three source authentication schemes for code dissemination supporting dynamic packet size. Compared to traditional source authentication schemes such as μTESLA and digital signatures, our schemes provide secure source authentication under the environment, where the packet size changes in each hop, with smaller energy consumption. PMID:27409616

  15. On the implementation of a deterministic secure coding protocol using polarization entangled photons

    OpenAIRE

    Ostermeyer, Martin; Walenta, Nino

    2007-01-01

    We demonstrate a prototype-implementation of deterministic information encoding for quantum key distribution (QKD) following the ping-pong coding protocol [K. Bostroem, T. Felbinger, Phys. Rev. Lett. 89 (2002) 187902-1]. Due to the deterministic nature of this protocol the need for post-processing the key is distinctly reduced compared to non-deterministic protocols. In the course of our implementation we analyze the practicability of the protocol and discuss some security aspects of informat...

  16. Building guide : how to build Xyce from source code.

    Energy Technology Data Exchange (ETDEWEB)

    Keiter, Eric Richard; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Mei, Ting; Verley, Jason C.

    2013-08-01

    While Xyce uses the Autoconf and Automake system to configure builds, it is often necessary to perform more than the customary "./configure" build that many open source users have come to expect. This document describes the steps needed to get Xyce built on a number of common platforms.

  17. A new hybrid code (CHIEF) implementing the inertial electron fluid equation without approximation

    Science.gov (United States)

    Muñoz, P. A.; Jain, N.; Kilian, P.; Büchner, J.

    2018-03-01

    We present a new hybrid algorithm implemented in the code CHIEF (Code Hybrid with Inertial Electron Fluid) for simulations of electron-ion plasmas. The algorithm treats the ions kinetically, modeled by the Particle-in-Cell (PiC) method, and electrons as an inertial fluid, modeled by electron fluid equations without any of the approximations used in most of the other hybrid codes with an inertial electron fluid. This kind of code is appropriate to model a large variety of quasineutral plasma phenomena where the electron inertia and/or ion kinetic effects are relevant. We present here the governing equations of the model, how these are discretized and implemented numerically, as well as six test problems to validate our numerical approach. Our chosen test problems, where the electron inertia and ion kinetic effects play the essential role, are: 0) Excitation of parallel eigenmodes to check numerical convergence and stability, 1) parallel (to a background magnetic field) propagating electromagnetic waves, 2) perpendicular propagating electrostatic waves (ion Bernstein modes), 3) ion beam right-hand instability (resonant and non-resonant), 4) ion Landau damping, 5) ion firehose instability, and 6) 2D oblique ion firehose instability. Our results reproduce successfully the predictions of linear and non-linear theory for all these problems, validating our code. All properties of this hybrid code make it ideal to study multi-scale phenomena between electron and ion scales such as collisionless shocks, magnetic reconnection and kinetic plasma turbulence in the dissipation range above the electron scales.

  18. Low complexity source and channel coding for mm-wave hybrid fiber-wireless links

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Vegas Olmos, Juan José; Pang, Xiaodan

    2014-01-01

    We report on the performance of channel and source coding applied for an experimentally realized hybrid fiber-wireless W-band link. Error control coding performance is presented for a wireless propagation distance of 3 m and 20 km fiber transmission. We report on peak signal-to-noise ratio perfor...

  19. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors to treatment plans. The aim of this work was to implement a Monte Carlo based inverse planning model for clinical IMRT that minimizes the aforementioned errors. The strategy was to precalculate the dose matrices of beamlets in a Monte Carlo based method, followed by the optimization of beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand fold. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided a simple and accurate modeling of treatment beams. The dose matrix calculations were validated through measurements in phantoms. Agreements were better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel-platform-based optimization algorithm capable of escaping from local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases. The feasibility and capability of Monte Carlo based inverse planning for clinical IMRT was demonstrated. Systematic errors in treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations. Discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated.
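
    The optimization stage can be pictured as fitting non-negative beamlet weights so that the superposition of the precalculated per-beamlet dose matrices matches the prescription. The toy sketch below uses invented data and plain non-negative least squares, which lacks the local-minima escape mechanism of the algorithm described above, but it shows the structure of the problem:

        import numpy as np
        from scipy.optimize import nnls

        # Toy beamlet-weight optimization: dose(voxel) = sum_j w_j * D[voxel, j].
        rng = np.random.default_rng(1)
        n_voxels, n_beamlets = 50, 8
        D = rng.uniform(size=(n_voxels, n_beamlets))  # per-beamlet dose (placeholder)
        p = rng.uniform(1.8, 2.2, size=n_voxels)      # prescribed voxel doses

        w, residual = nnls(D, p)                      # non-negative beamlet weights
        dose = D @ w                                  # resulting dose distribution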

  20. Analysis of source term aspects in the experiment Phebus FPT1 with the MELCOR and CFX codes

    Energy Technology Data Exchange (ETDEWEB)

    Martin-Fuertes, F. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)]. E-mail: francisco.martinfuertes@upm.es; Barbero, R. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Martin-Valdepenas, J.M. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Jimenez, M.A. [Universidad Politecnica de Madrid, UPM, Nuclear Engineering Department, Jose Gutierrez Abascal 2, 28006 Madrid (Spain)

    2007-03-15

    Several aspects related to the source term in the Phebus FPT1 experiment have been analyzed with the help of MELCOR 1.8.5 and CFX 5.7 codes. Integral aspects covering circuit thermalhydraulics, fission product and structural material release, vapours and aerosol retention in the circuit and containment were studied with MELCOR, and the strong and weak points after comparison to experimental results are stated. Then, sensitivity calculations dealing with chemical speciation upon release, vertical line aerosol deposition and steam generator aerosol deposition were performed. Finally, detailed calculations concerning aerosol deposition in the steam generator tube are presented. They were obtained by means of an in-house code application, named COCOA, as well as with CFX computational fluid dynamics code, in which several models for aerosol deposition were implemented and tested, while the models themselves are discussed.

  1. Code of conduct on the safety and security of radioactive sources

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  2. Automated Source Code Analysis to Identify and Remove Software Security Vulnerabilities: Case Studies on Java Programs

    OpenAIRE

    Natarajan Meghanathan

    2013-01-01

    The high-level contribution of this paper is to illustrate the development of generic solution strategies to remove software security vulnerabilities that could be identified using automated tools for source code analysis on software programs (developed in Java). We use the Source Code Analyzer and Audit Workbench automated tools, developed by HP Fortify Inc., for our testing purposes. We present case studies involving a file writer program embedded with features for password validation, and ...

  3. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    2001-03-01

    The objective of this Code is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this Code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost.

  4. Implementation of QR Code and Digital Signature to Determine the Validity of KRS and KHS Documents

    Directory of Open Access Journals (Sweden)

    Fatich Fazlur Rochman

    2017-05-01

    Full Text Available Universitas Airlangga students often find it difficult to verify the marks reported in the Kartu Hasil Studi (KHS, the Study Result Card) or the courses taken in the Kartu Rencana Studi (KRS, the Study Plan Card) if there are changes to the data in the system used by Universitas Airlangga. This complicated KRS and KHS verification process arises because the printed KRS and KHS documents owned by students are easier to counterfeit than the data in the system. This work implements digital signature and QR Code technology as a solution that can prove the validity of a KRS or KHS. The KRS and KHS validation system was developed with Digital Signature and QR Code. A QR Code is a type of matrix code developed to allow its contents to be decoded at high speed, while a Digital Signature functions as a marker on the data to ensure that the data is the original data. The verification process is divided into two types: reading the Digital Signature, and printing the document, which works by scanning the data from the QR Code. Applying the system requires the addition of the QR Code on the KRS and KHS, as well as a readiness of human resources.
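
    A minimal sketch of the underlying mechanism might look as follows in Python (the payload layout, the Ed25519 signature scheme and the qrcode/cryptography packages are illustrative assumptions, not details taken from the paper):

        import base64
        import qrcode  # assumed third-party package for QR generation
        from cryptography.hazmat.primitives.asymmetric import ed25519

        # The issuer signs the KRS/KHS record (payload layout is invented).
        private_key = ed25519.Ed25519PrivateKey.generate()
        record = b"student=12345;course=CS101;grade=A"
        signature = private_key.sign(record)
        payload = record + b"|" + base64.b64encode(signature)
        qrcode.make(payload.decode()).save("khs.png")  # printed on the document

        # The verifier scans the QR code and checks the signature.
        data, sig_b64 = payload.rsplit(b"|", 1)
        public_key = private_key.public_key()
        public_key.verify(base64.b64decode(sig_b64), data)  # raises if tampered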

  5. Distributed Remote Vector Gaussian Source Coding for Wireless Acoustic Sensor Networks

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider the problem of remote vector Gaussian source coding for a wireless acoustic sensor network. Each node receives messages from multiple nodes in the network and decodes these messages using its own measurement of the sound field as side information. The node’s measurement...... and the estimates of the source resulting from decoding the received messages are then jointly encoded and transmitted to a neighboring node in the network. We show that for this distributed source coding scenario, one can encode a so-called conditional sufficient statistic of the sources instead of jointly...

  6. Implementation of thermo-viscoplastic constitutive equations into the finite element code ABAQUS

    International Nuclear Information System (INIS)

    Youn, Sam Son; Lee, Soon Bok; Kim, Jong Bum; Lee, Hyeong Yeon; Yoo, Bong

    1998-01-01

    Sophisticated viscoplastic constitutive laws describing material behavior at high temperature have been implemented in the general-purpose finite element code ABAQUS to predict the viscoplastic response of structures to cyclic loading. Because of the complexity of the viscoplastic constitutive equations, general implementation methods are developed. The solution of the non-linear system of algebraic equations arising from time discretization is determined using line search and backtracking in combination with Newton's method. The time integration of the constitutive equations is based on a semi-implicit method with efficient time step control. As numerical examples, the viscoplastic model proposed by Chaboche is implemented and several applications are illustrated.
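
    The solution strategy described above, Newton's method safeguarded by a backtracking line search, can be sketched generically. The two-equation residual below is a stand-in for a time-discretized constitutive update, not the Chaboche model itself:

        import numpy as np

        def newton_backtracking(residual, jacobian, x0, tol=1e-10, max_iter=50):
            """Newton's method with backtracking line search on the residual norm."""
            x = x0.copy()
            for _ in range(max_iter):
                r = residual(x)
                if np.linalg.norm(r) < tol:
                    break
                dx = np.linalg.solve(jacobian(x), -r)  # full Newton step
                t = 1.0
                # halve the step until the residual norm actually decreases
                while np.linalg.norm(residual(x + t * dx)) >= np.linalg.norm(r) and t > 1e-8:
                    t *= 0.5
                x = x + t * dx
            return x

        # Placeholder nonlinear system standing in for a constitutive update r(x) = 0.
        f = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] + np.exp(x[1]) - 2.0])
        J = lambda x: np.array([[2 * x[0], 1.0], [1.0, np.exp(x[1])]])
        sol = newton_backtracking(f, J, np.array([1.0, 0.0]))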

  7. Test of Effective Solid Angle code for the efficiency calculation of volume source

    Energy Technology Data Exchange (ETDEWEB)

    Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    It is hard to determine a full energy (FE) absorption peak efficiency curve for an arbitrary volume source by experiment. That is why simulation and semi-empirical methods have been preferred so far, and many works have progressed in various ways. Moens et al. introduced the concept of the effective solid angle by considering the attenuation effect of γ-rays in the source, the media and the detector. This concept is based on a semi-empirical method. An Effective Solid Angle code (ESA code) has been developed over the years by the Applied Nuclear Physics Group at Seoul National University. The ESA code converts an experimental FE efficiency curve determined by using a standard point source to that for a volume source. To test the performance of the ESA code, we measured point standard sources and voluminous certified reference material (CRM) sources of γ-rays, and compared them with the efficiency curves obtained in this study. The 200-1500 keV energy region is fitted well. NIST X-ray mass attenuation coefficient data are currently used to check for the effect of linear attenuation only. We will use the interaction cross-section data obtained from the XCOM code to check each contributing factor, such as the photoelectric effect, incoherent scattering and coherent scattering, in the future. In order to minimize the calculation time and simplify the code, optimization of the algorithm is needed.
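
    The geometric part of the effective solid angle concept can be illustrated numerically: the sketch below Monte Carlo averages the solid angle subtended by a disk detector over points sampled in a cylindrical source volume (the geometry values are invented, and the attenuation weighting applied by the ESA code is omitted):

        import numpy as np

        rng = np.random.default_rng(2)

        def solid_angle_mc(src, r_det=2.0, z_det=5.0, n=200_000):
            """Monte Carlo solid angle of a disk (radius r_det, plane z = z_det)
            as seen from the point src, by sampling isotropic directions."""
            u = rng.normal(size=(n, 3))
            u /= np.linalg.norm(u, axis=1, keepdims=True)
            dz = z_det - src[2]
            towards = u[:, 2] * dz > 0                 # heading for the disk plane
            t = dz / u[towards, 2]
            hit_xy = src[:2] + t[:, None] * u[towards, :2]
            hits = (hit_xy ** 2).sum(axis=1) <= r_det ** 2
            return 4 * np.pi * hits.sum() / n

        # Average over source points sampled uniformly in a cylindrical volume.
        n_src = 50
        r = 1.5 * np.sqrt(rng.uniform(size=n_src))
        phi = rng.uniform(0, 2 * np.pi, n_src)
        z = rng.uniform(0, 1.0, n_src)
        points = np.column_stack([r * np.cos(phi), r * np.sin(phi), z])
        omega_eff = np.mean([solid_angle_mc(p) for p in points])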

  8. RETRANS - A tool to verify the functional equivalence of automatically generated source code with its specification

    International Nuclear Information System (INIS)

    Miedl, H.

    1998-01-01

    According to the relevant technical standards (e.g. IEC 880), it is necessary to verify each step in the development process of safety-critical software. This holds also for the verification of automatically generated source code. To avoid human errors during this verification step and to limit the cost, a tool should be used which is developed independently of the code generator. For this purpose ISTec has developed the tool RETRANS, which demonstrates the functional equivalence of automatically generated source code with its underlying specification. (author)

  9. Use of source term code package in the ELEBRA MX-850 system

    International Nuclear Information System (INIS)

    Guimaraes, A.C.F.; Goes, A.G.A.

    1988-12-01

    The implementation of the source term code package in the ELEBRA MX-850 system is presented. The source term is formed when radioactive materials generated in the nuclear fuel leak toward the containment and the environment external to the reactor containment. The version implemented in the ELEBRA system is composed of five codes: MARCH 3, TRAPMELT 3, THCCA, VANESA and NAVA. The original example case was used. The example consists of a small LOCA in a PWR-type reactor. A sensitivity study for the TRAPMELT 3 code was carried out, modifying the 'TIME STEP' to estimate the CPU processing time for executing the original example case. (M.C.K.) [pt

  10. Eu-NORSEWInD - Assessment of Viability of Open Source CFD Code for the Wind Industry

    DEFF Research Database (Denmark)

    Stickland, Matt; Scanlon, Tom; Fabre, Sylvie

    2009-01-01

    Part of the overall NORSEWInD project is the use of LiDAR remote sensing (RS) systems mounted on offshore platforms to measure wind velocity profiles at a number of locations offshore. The data acquired from the offshore RS measurements will be fed into a large and novel wind speed dataset suitab...... between the results of simulations created by the commercial code FLUENT and the open source code OpenFOAM. An assessment of the ease with which the open source code can be used is also included....

  11. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallelly and serially concatenated codes, and particularly parallel and serial turbo codes, where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provably optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
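
    The binning idea behind the SF-ISF construction is easiest to see with an ordinary linear block code. The sketch below uses a (7,4) Hamming code: the encoder transmits only the syndrome of the source block, and the decoder recovers the block from the syndrome plus correlated side information. (The paper's contribution concerns concatenated and turbo codes; this only illustrates the basic principle.)

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def syndrome(v):
            return H @ v % 2

        x = np.array([1, 0, 1, 1, 0, 0, 1])   # source block at the encoder
        s = syndrome(x)                       # only 3 bits are transmitted

        # Side information at the decoder: x with at most one bit flipped.
        y = x.copy(); y[4] ^= 1
        diff = (syndrome(y) + s) % 2          # syndrome of the pattern x XOR y
        e = np.zeros(7, dtype=int)
        if diff.any():
            # the columns of H enumerate all single-bit error patterns
            pos = next(j for j in range(7) if np.array_equal(H[:, j], diff))
            e[pos] = 1
        x_hat = (y + e) % 2                   # recovers x exactly
        assert np.array_equal(x_hat, x)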

  12. Evaluating Open-Source Full-Text Search Engines for Matching ICD-10 Codes.

    Science.gov (United States)

    Jurcău, Daniel-Alexandru; Stoicu-Tivadar, Vasile

    2016-01-01

    This research presents the results of evaluating multiple free, open-source engines on matching ICD-10 diagnostic codes via full-text searches. The study investigates what it takes to get an accurate match when searching for a specific diagnostic code. For each code the evaluation starts by extracting the words that make up its text and continues with building full-text search queries from the combinations of these words. The queries are then run against all the ICD-10 codes until the code in question is returned as the match with the highest relative score. This method identifies the minimum number of words that must be provided in order for the search engines to choose the desired entry. The engines analyzed include a popular Java-based full-text search engine, a lightweight engine written in JavaScript which can even execute in the user's browser, and two popular open-source relational database management systems.

  13. Recycling source terms for edge plasma fluid models and impact on convergence behaviour of the BRAAMS 'B2' code

    International Nuclear Information System (INIS)

    Maddison, G.P.; Reiter, D.

    1994-02-01

    Predictive simulations of tokamak edge plasmas require the most authentic description of neutral particle recycling sources, not merely the most expedient numerically. Employing a prototypical ITER divertor arrangement under conditions of high recycling, trial calculations with the 'B2' steady-state edge plasma transport code, plus varying approximations of recycling, reveal marked sensitivity of both the results and the convergence behaviour to the details of the sources incorporated. Comprehensive EIRENE Monte Carlo resolution of recycling is implemented by full and so-called 'shot' intermediate cycles between the plasma fluid and statistical neutral particle models. As generally for coupled differencing and stochastic procedures, though, overall convergence properties become more difficult to assess. A pragmatic criterion for the 'B2'/EIRENE code system is proposed to determine its success, proceeding from a stricter condition previously identified for one particular analytic approximation of recycling in 'B2'. Certain procedures are also inferred potentially to improve convergence further. (orig.)

  14. Implementation of a structural dependent model for the superalloy IN738LC in ABAQUS-code

    International Nuclear Information System (INIS)

    Wolters, J.; Betten, J.; Penkalla, H.J.

    1994-05-01

    Superalloys, mainly consisting of nickel, are used for applications in aerospace as well as in stationary gas turbines. In the temperature range above 800 C the blades, which are manufactured from these superalloys, are subjected to high centrifugal forces and thermally induced loads. For computer-based analysis of the thermo-mechanical behaviour of the blades, models for the stress-strain behaviour are necessary. These models have to give a reliable description of the stress-strain behaviour, with emphasis on inelastic effects. The implementation of the model in finite element codes requires a numerical treatment of the constitutive equations with respect to the given interface of the code used. In this paper constitutive equations for the superalloy IN738LC are presented, and their implementation in the finite element code ABAQUS, together with the numerical preparation of the model, is described. In order to validate the model, calculations were performed for simple uniaxial loading conditions as well as for a complete cross section of a turbine blade under combined thermal and mechanical loading. The results achieved were compared with those of additional calculations using ABAQUS with Norton's law, which was already implemented in this code. (orig.) [de

  15. Lysimeter data as input to performance assessment source term codes

    International Nuclear Information System (INIS)

    McConnell, J.W. Jr.; Rogers, R.D.; Sullivan, T.

    1992-01-01

    The Field Lysimeter Investigation: Low-Level Waste Data Base Development Program is obtaining information on the performance of radioactive waste in a disposal environment. Waste forms fabricated using ion-exchange resins from EPICOR-II prefilters employed in the cleanup of the Three Mile Island (TMI) Nuclear Power Station are being tested to develop a low-level waste data base and to obtain information on the survivability of waste forms in a disposal environment. In this paper, radionuclide releases from waste forms in the first seven years of sampling are presented and discussed. The application of lysimeter data in performance assessment source term models is presented. Initial results from the use of the data in two models are discussed.

  16. SCATTER: Source and Transport of Emplaced Radionuclides: Code documentation

    International Nuclear Information System (INIS)

    Longsine, D.E.

    1987-03-01

    SCATTER simulates several processes leading to the release of radionuclides to the site subsystem and then simulates transport via the groundwater of the released radionuclides to the biosphere. The processes accounted for to quantify release rates to a ground-water migration path include radioactive decay and production, leaching, solubilities, and the mixing of particles with incoming uncontaminated fluid. Several decay chains of arbitrary length can be considered simultaneously. The release rates then serve as source rates to a numerical technique which solves convective-dispersive transport for each decay chain. The decay chains are allowed to have branches and each member can have a different retardation factor. Results are cast as radionuclide discharge rates to the accessible environment.
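
    The convective-dispersive transport with chain decay that SCATTER solves can be summarized, in a standard one-dimensional form assumed here rather than quoted from the code documentation, as

        R_i \frac{\partial C_i}{\partial t} = D \frac{\partial^2 C_i}{\partial x^2} - v \frac{\partial C_i}{\partial x} - \lambda_i R_i C_i + \lambda_{i-1} R_{i-1} C_{i-1} + S_i(x,t),

    where C_i is the concentration of chain member i, R_i its retardation factor, v the groundwater velocity, D the dispersion coefficient, \lambda_i the decay constant, and S_i the leach-limited source rate supplied by the release model.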

  17. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of candidate known authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails, posts etc., but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to solving authorship disputes or software plagiarism detection. This paper aims to propose a new method to identify the programmer of Java source code samples with a higher accuracy. To this end, it first introduces back propagation (BP) neural networks based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, the weights of which are output by the PSO and BP hybrid algorithm. The effectiveness of the proposed method is evaluated on a collected dataset with 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy. A comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, also with an acceptable overhead.

  18. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    International Nuclear Information System (INIS)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C

  19. ANEMOS: A computer code to estimate air concentrations and ground deposition rates for atmospheric nuclides emitted from multiple operating sources

    Energy Technology Data Exchange (ETDEWEB)

    Miller, C.W.; Sjoreen, A.L.; Begovich, C.L.; Hermann, O.W.

    1986-11-01

    This code estimates concentrations in air and ground deposition rates for Atmospheric Nuclides Emitted from Multiple Operating Sources. ANEMOS is one component of an integrated Computerized Radiological Risk Investigation System (CRRIS) developed for the US Environmental Protection Agency (EPA) for use in performing radiological assessments and in developing radiation standards. The concentrations and deposition rates calculated by ANEMOS are used in subsequent portions of the CRRIS for estimating doses and risks to man. The calculations made in ANEMOS are based on the use of a straight-line Gaussian plume atmospheric dispersion model with both dry and wet deposition parameter options. The code will accommodate a ground-level or elevated point and area source or windblown source. Adjustments may be made during the calculations for surface roughness, building wake effects, terrain height, wind speed at the height of release, the variation in plume rise as a function of downwind distance, and the in-growth and decay of daughter products in the plume as it travels downwind. ANEMOS can also accommodate multiple particle sizes and clearance classes, and it may be used to calculate the dose from a finite plume of gamma-ray-emitting radionuclides passing overhead. The output of this code is presented for 16 sectors of a circular grid. ANEMOS can calculate both the sector-average concentrations and deposition rates at a given set of downwind distances in each sector and the average of these quantities over an area within each sector bounded by two successive downwind distances. ANEMOS is designed to be used primarily for continuous, long-term radionuclide releases. This report describes the models used in the code, their computer implementation, the uncertainty associated with their use, and the use of ANEMOS in conjunction with other codes in the CRRIS. A listing of the code is included in Appendix C.
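
    For reference, the straight-line Gaussian plume model underlying ANEMOS has the standard textbook form (assumed here, with ground reflection included, rather than excerpted from the report)

        \chi(x,y,z) = \frac{Q}{2\pi u\, \sigma_y \sigma_z} \exp\left(-\frac{y^2}{2\sigma_y^2}\right) \left[\exp\left(-\frac{(z-H)^2}{2\sigma_z^2}\right) + \exp\left(-\frac{(z+H)^2}{2\sigma_z^2}\right)\right],

    where Q is the release rate, u the wind speed at the release height H, and \sigma_y(x), \sigma_z(x) the lateral and vertical dispersion parameters evaluated at downwind distance x.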

  20. Support of Maritime Education and Training Systems for the Implementation of the ISM Code

    Directory of Open Access Journals (Sweden)

    Elena Maggi

    2012-10-01

    Full Text Available The problem of safety improvement and pollution prevention in maritime transport has become more and more critical and urgent to solve. In fact, the number of accidents at sea has increased very quickly over time. Concern is growing about poor qualification on safety and poor management standards in the shipping industry. The ISM Code (International Safety Management Code) aims to provide international standards for the safe management of ship operations and for pollution prevention. The paper, which presents a part of the work done by the University of Trieste - ISTIEE within the framework of the METHAR project, aims to identify the expected support from the Maritime Education and Training (MET) systems to implement the ISM Code, consequently improving safety and preventing pollution. The paper first describes the origin and the objectives of the ISM Code and the standard requirements on MET identified by the Code. Secondly, it summarises the opinions of the operators collected through questionnaires. Finally, it identifies the possible enrichment of the MET system in order to better optimise the implementation of the Code.

  1. Implementation and evaluation of PM2.5 source contribution ...

    Science.gov (United States)

    Source culpability assessments are useful for developing effective emissions control programs. The Integrated Source Apportionment Method (ISAM) has been implemented in the Community Multiscale Air Quality (CMAQ) model to track contributions from source groups and regions to ambient levels and deposited amounts of primary and secondary inorganic PM2.5. Confidence in this approach is established by comparing ISAM source contribution estimates to emissions zero-out simulations recognizing that these approaches are not always expected to provide the same answer. The comparisons are expected to be most similar for more linear processes such as those involving primary emissions of PM2.5 and most different for non-linear systems like ammonium nitrate formation. Primarily emitted PM2.5 (e.g. elemental carbon), sulfur dioxide, ammonia, and nitrogen oxide contribution estimates compare well to zero-out estimates for ambient concentration and deposition. PM2.5 sulfate ion relationships are strong, but nonlinearity is evident and shown to be related to aqueous phase oxidation reactions in the host model. ISAM and zero-out contribution estimates are less strongly related for PM2.5 ammonium nitrate, resulting from instances of non-linear chemistry and negative responses (increases in PM2.5 due to decreases in emissions). ISAM is demonstrated in the context of an annual simulation tracking well characterized emissions source sectors and boundary conditions shows source contri

  2. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Directory of Open Access Journals (Sweden)

    Pierre Siohan

    2005-05-01

    Full Text Available Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey concludes with performance illustrations on real image and video decoding systems.

  3. Joint Source-Channel Decoding of Variable-Length Codes with Soft Information: A Survey

    Science.gov (United States)

    Guillemot, Christine; Siohan, Pierre

    2005-12-01

    Multimedia transmission over time-varying wireless channels presents a number of challenges beyond existing capabilities conceived so far for third-generation networks. Efficient quality-of-service (QoS) provisioning for multimedia on these channels may in particular require a loosening and a rethinking of the layer separation principle. In that context, joint source-channel decoding (JSCD) strategies have gained attention as viable alternatives to separate decoding of source and channel codes. A statistical framework based on hidden Markov models (HMM) capturing dependencies between the source and channel coding components sets the foundation for optimal design of techniques of joint decoding of source and channel codes. The problem has been largely addressed in the research community, by considering both fixed-length codes (FLC) and variable-length source codes (VLC) widely used in compression standards. Joint source-channel decoding of VLC raises specific difficulties due to the fact that the segmentation of the received bitstream into source symbols is random. This paper makes a survey of recent theoretical and practical advances in the area of JSCD with soft information of VLC-encoded sources. It first describes the main paths followed for designing efficient estimators for VLC-encoded sources, the key component of the JSCD iterative structure. It then presents the main issues involved in the application of the turbo principle to JSCD of VLC-encoded sources as well as the main approaches to source-controlled channel decoding. This survey concludes with performance illustrations on real image and video decoding systems.

  4. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  5. Comparison of DT neutron production codes MCUNED, ENEA-JSI source subroutine and DDT

    Energy Technology Data Exchange (ETDEWEB)

    Čufar, Aljaž, E-mail: aljaz.cufar@ijs.si [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Lengar, Igor; Kodeli, Ivan [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Milocco, Alberto [Culham Centre for Fusion Energy, Culham Science Centre, Abingdon, OX14 3DB (United Kingdom); Sauvan, Patrick [Departamento de Ingeniería Energética, E.T.S. Ingenieros Industriales, UNED, C/Juan del Rosal 12, 28040 Madrid (Spain); Conroy, Sean [VR Association, Uppsala University, Department of Physics and Astronomy, PO Box 516, SE-75120 Uppsala (Sweden); Snoj, Luka [Reactor Physics Department, Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia)

    2016-11-01

    Highlights: • Results of three codes capable of simulating accelerator-based DT neutron generators were compared on a simple model where only a thin target made of a mixture of titanium and tritium is present. Two typical deuteron beam energies, 100 keV and 250 keV, were used in the comparison. • Comparisons of the angular dependence of the total neutron flux and spectrum as well as the neutron spectrum of all the neutrons emitted from the target show general agreement of the results but also some noticeable differences. • A comparison of figures of merit of the calculations using different codes showed that the computational time necessary to achieve the same statistical uncertainty can vary by more than a factor of 30 when different codes for the simulation of the DT neutron generator are used. - Abstract: As the DT fusion reaction produces neutrons with energies significantly higher than in fission reactors, special fusion-relevant benchmark experiments are often performed using DT neutron generators. However, commonly used Monte Carlo particle transport codes such as MCNP or TRIPOLI cannot be directly used to analyze these experiments since they do not have the capabilities to model the production of DT neutrons. Three of the available approaches to model the DT neutron generator source are the MCUNED code, the ENEA-JSI DT source subroutine and the DDT code. The MCUNED code is an extension of the well-established and validated MCNPX Monte Carlo code. The ENEA-JSI source subroutine was originally prepared for the modelling of the FNG experiments using different versions of the MCNP code (−4, −5, −X) and was later extended to allow the modelling of both DT and DD neutron sources. The DDT code prepares the DT source definition file (SDEF card in MCNP) which can then be used in different versions of the MCNP code. In the paper the methods for the simulation of the DT neutron production used in the codes are briefly described and compared for the case of a

  6. MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data

    Science.gov (United States)

    Key, Kerry

    2016-10-01

    This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data

  7. IllinoisGRMHD: an open-source, user-friendly GRMHD code for dynamical spacetimes

    International Nuclear Information System (INIS)

    Etienne, Zachariah B; Paschalidis, Vasileios; Haas, Roland; Mösta, Philipp; Shapiro, Stuart L

    2015-01-01

    In the extreme violence of merger and mass accretion, compact objects like black holes and neutron stars are thought to launch some of the most luminous outbursts of electromagnetic and gravitational wave energy in the Universe. Modeling these systems realistically is a central problem in theoretical astrophysics, but has proven extremely challenging, requiring the development of numerical relativity codes that solve Einstein's equations for the spacetime, coupled to the equations of general relativistic (ideal) magnetohydrodynamics (GRMHD) for the magnetized fluids. Over the past decade, the Illinois numerical relativity (ILNR) group's dynamical spacetime GRMHD code has proven itself as a robust and reliable tool for theoretical modeling of such GRMHD phenomena. However, the code was written ‘by experts and for experts’ of the code, with a steep learning curve that would severely hinder community adoption if it were open-sourced. Here we present IllinoisGRMHD, which is an open-source, highly extensible rewrite of the original closed-source GRMHD code of the ILNR group. Reducing the learning curve was the primary focus of this rewrite, with the goal of facilitating community involvement in the code's use and development, as well as the minimization of human effort in generating new science. IllinoisGRMHD also saves computer time, generating roundoff-precision identical output to the original code on adaptive-mesh grids, but nearly twice as fast at scales of hundreds to thousands of cores. (paper)

  8. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Fossorier Marc

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  9. Combined Source-Channel Coding of Images under Power and Bandwidth Constraints

    Directory of Open Access Journals (Sweden)

    Marc Fossorier

    2007-01-01

    Full Text Available This paper proposes a framework for combined source-channel coding for a power and bandwidth constrained noisy channel. The framework is applied to progressive image transmission using constant envelope M-ary phase shift key (M-PSK) signaling over an additive white Gaussian noise channel. First, the framework is developed for uncoded M-PSK signaling (with M = 2^k). Then, it is extended to include coded M-PSK modulation using trellis coded modulation (TCM). An adaptive TCM system is also presented. Simulation results show that, depending on the constellation size, coded M-PSK signaling performs 3.1 to 5.2 dB better than uncoded M-PSK signaling. Finally, the performance of our combined source-channel coding scheme is investigated from the channel capacity point of view. Our framework is further extended to include powerful channel codes like turbo and low-density parity-check (LDPC) codes. With these powerful codes, our proposed scheme performs about one dB away from the capacity-achieving SNR value of the QPSK channel.

  10. From chain liability to chain responsibility: MNE approaches to implement safety and health codes in international supply chains

    NARCIS (Netherlands)

    van Tulder, R.; van Wijk, J.; Kolk, A.

    2009-01-01

    This article examines whether the involvement of stakeholders in the design of corporate codes of conduct leads to a higher implementation likelihood of the code. The empirical focus is on Occupational Safety and Health (OSH). The article compares the inclusion of OSH issues in the codes of conduct

  11. Same-source parallel implementation of the PSU/NCAR MM5

    Energy Technology Data Exchange (ETDEWEB)

    Michalakes, J.

    1997-12-31

    The Pennsylvania State/National Center for Atmospheric Research Mesoscale Model is a limited-area model of atmospheric systems, now in its fifth generation, MM5. Designed and maintained for vector and shared-memory parallel architectures, the official version of MM5 does not run on message-passing distributed memory (DM) parallel computers. The authors describe a same-source parallel implementation of the PSU/NCAR MM5 using FLIC, the Fortran Loop and Index Converter. The resulting source is nearly line-for-line identical with the original source code. The result is an efficient distributed memory parallel option to MM5 that can be seamlessly integrated into the official version.

  12. Review and Implementation of the Emerging CCSDS Recommended Standard for Multispectral and Hyperspectral Lossless Image Coding

    Science.gov (United States)

    Sanchez, Jose Enrique; Auge, Estanislau; Santalo, Josep; Blanes, Ian; Serra-Sagrista, Joan; Kiely, Aaron

    2011-01-01

    A new standard for image coding is being developed by the MHDC working group of the CCSDS, targeting onboard compression of multi- and hyper-spectral imagery captured by aircraft and satellites. The proposed standard is based on the "Fast Lossless" adaptive linear predictive compressor, and is adapted to better overcome issues of onboard scenarios. In this paper, we present a review of the state of the art in this field, and provide an experimental comparison of the coding performance of the emerging standard in relation to other state-of-the-art coding techniques. Our own independent implementation of the MHDC Recommended Standard, as well as of some of the other techniques, has been used to provide extensive results over the vast corpus of test images from the CCSDS-MHDC.
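
    The flavor of such adaptive linear predictive compression can be conveyed with a stripped-down sketch in Python (a fixed previous-sample predictor followed by Rice coding of zig-zag-mapped residuals; the actual Fast Lossless compressor adapts its prediction weights across spectral bands and is considerably more elaborate):

        import numpy as np

        def rice_encode(values, k=3):
            """Rice-code non-negative integers: unary quotient + k-bit remainder."""
            out = []
            for v in values:
                q, r = int(v) >> k, int(v) & ((1 << k) - 1)
                out.append("1" * q + "0" + format(r, f"0{k}b"))
            return "".join(out)

        samples = np.array([100, 102, 101, 105, 104, 104, 108])
        pred = np.concatenate([samples[:1], samples[:-1]])  # previous-sample predictor
        resid = samples - pred
        mapped = np.where(resid >= 0, 2 * resid, -2 * resid - 1)  # to non-negative
        bitstream = rice_encode(mapped)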

  13. The implementation of a toroidal limiter model into the gyrokinetic code ELMFIRE

    Energy Technology Data Exchange (ETDEWEB)

    Leerink, S.; Janhunen, S.J.; Kiviniemi, T.P.; Nora, M. [Euratom-Tekes Association, Helsinki University of Technology (Finland); Heikkinen, J.A. [Euratom-Tekes Association, VTT, P.O. Box 1000, FI-02044 VTT (Finland); Ogando, F. [Universidad Nacional de Educacion a Distancia, Madrid (Spain)

    2008-03-15

    The ELMFIRE full nonlinear gyrokinetic simulation code has been developed for calculations of plasma evolution and dynamics of turbulence in tokamak geometry. The code is applicable for calculations of strong perturbations in the particle distribution function, rapid transients and steep gradients in plasma. Benchmarking against experimental reflectometry data from the FT2 tokamak is discussed, and in this paper a model for comparing and studying the poloidal velocity is presented. To make the ELMFIRE code suitable for scrape-off layer simulations, a simplified toroidal limiter model has been implemented. The model is discussed and first results are presented. (copyright 2008 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim) (orig.)

  14. Implementation of a dry process fuel cycle model into the DYMOND code

    International Nuclear Information System (INIS)

    Park, Joo Hwan; Jeong, Chang Joon; Choi, Hang Bok

    2004-01-01

    For the analysis of a dry process fuel cycle, new modules were implemented into the fuel cycle analysis code DYMOND, which was developed by the Argonne National Laboratory. The modifications were made to the energy demand prediction model, the Canada Deuterium Uranium (CANDU) reactor model, the direct use of spent Pressurized Water Reactor (PWR) fuel in CANDU reactors (DUPIC) fuel cycle model, the fuel cycle calculation module, and the input/output modules. The performance of the modified DYMOND code was assessed for postulated once-through fuel cycle models including both the PWR and the CANDU reactor. This paper presents the modifications of the DYMOND code and the results of sample calculations for the PWR once-through and DUPIC fuel cycles

  15. Efficient data management techniques implemented in the Karlsruhe Monte Carlo code KAMCCO

    International Nuclear Information System (INIS)

    Arnecke, G.; Borgwaldt, H.; Brandl, V.; Lalovic, M.

    1974-01-01

    The Karlsruhe Monte Carlo Code KAMCCO is a forward neutron transport code with an eigenfunction and a fixed source option, including time-dependence. A continuous energy model is combined with a detailed representation of neutron cross sections, based on linear interpolation, Breit-Wigner resonances and probability tables. All input is processed into densely packed, dynamically addressed parameter fields and networks of pointers (addresses). Estimation routines are decoupled from random walk and analyze a storage region with sample records. This technique leads to fast execution with moderate storage requirements and without any I/O-operations except in the input and output stages. 7 references. (U.S.)

  16. ON CODE REFACTORING OF THE DIALOG SUBSYSTEM OF CDSS PLATFORM FOR THE OPEN-SOURCE MIS OPENMRS

    Directory of Open Access Journals (Sweden)

    A. V. Semenets

    2016-08-01

    The open-source MIS OpenMRS developer tools and software API are reviewed. The results of code refactoring of the dialog subsystem of the CDSS platform, which is implemented as a module for the open-source MIS OpenMRS, are presented. The structure of the information model of the database of the CDSS dialog subsystem was updated in accordance with MIS OpenMRS requirements. The Model-View-Controller (MVC) based approach to the CDSS dialog subsystem architecture was re-implemented in the Java programming language using the Spring and Hibernate frameworks. The MIS OpenMRS Encounter portlet form for the CDSS dialog subsystem integration is developed as an extension. The administrative module of the CDSS platform is recreated. The data exchange formats and methods for interaction between the OpenMRS CDSS dialog subsystem module and the DecisionTree GAE service are re-implemented with the help of AJAX technology via the jQuery library

  17. Implementation of mathematical phantom of hand and forearm in GEANT4 Monte Carlo code

    International Nuclear Information System (INIS)

    Pessanha, Paula Rocha; Queiroz Filho, Pedro Pacheco de; Santos, Denison de Souza

    2014-01-01

    In this work, a mathematical phantom of the hand and forearm is implemented in the Geant4 code for the subsequent evaluation of occupational exposure of the extremities to the decay of radionuclides manipulated during procedures involving the use of injection syringes. The simulation model offered by Geant4 includes a full set of features, with the reconstruction of trajectories, geometries and physical models. For this work, the values calculated in the simulation are compared with the rates measured by thermoluminescent dosimeters (TLDs) in the physical phantom REMAB®. From the analysis of the data obtained through simulation and experimentation, a discrepancy of only 8.2% in the kerma values was found for the 14 points studied, and these figures are considered compatible. The geometric phantom implemented in the Geant4 Monte Carlo code was validated and can later be used for the evaluation of doses to the extremities

  18. Uniform physical theory of diffraction equivalent edge currents for implementation in general computer codes

    DEFF Research Database (Denmark)

    Johansen, Peter Meincke

    1996-01-01

    New uniform closed-form expressions for physical theory of diffraction equivalent edge currents are derived for truncated incremental wedge strips. In contrast to previously reported expressions, the new expressions are well-behaved for all directions of incidence and observation and take a finite value for zero strip length. Consequently, the new equivalent edge currents are, to the knowledge of the author, the first that are well-suited for implementation in general computer codes.

  19. Capturing Energy-Saving Opportunities: Improving Building Efficiency in Rajasthan through Energy Code Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Tan, Qing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Yu, Sha [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Evans, Meredydd [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mathur, Jyotirmay [Malaviya National Institute of Technology, Jaipur (India); Vu, Linh D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-05-01

    India adopted the Energy Conservation Building Code (ECBC) in 2007. Rajasthan is the first state to make ECBC mandatory at the state level. In collaboration with Malaviya National Institute of Technology (MNIT) Jaipur, Pacific Northwest National Laboratory (PNNL) has been working with Rajasthan to facilitate the implementation of ECBC. This report summarizes milestones reached in Rajasthan and PNNL's contributions to institutional set-up, capacity building, compliance enforcement and pilot building construction.

  20. Implementation of the dynamic Monte Carlo method for transient analysis in the general purpose code Tripoli

    Energy Technology Data Exchange (ETDEWEB)

    Sjenitzer, Bart L.; Hoogenboom, J. Eduard, E-mail: B.L.Sjenitzer@TUDelft.nl, E-mail: J.E.Hoogenboom@TUDelft.nl [Delft University of Technology (Netherlands)

    2011-07-01

    A new Dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli 4.6.1. With this new method incorporated, a general purpose code can be used for safety transient analysis, such as the movement of a control rod or an accident scenario. To make the Tripoli code ready for calculations on dynamic systems, the Tripoli scheme had to be altered to incorporate time steps, to include the simulation of delayed neutron precursors and to simulate prompt neutron chains. The modified Tripoli code is tested on two sample cases, a steady-state system and a subcritical system, and the resulting neutron fluxes behave as expected. The steady-state calculation has a constant neutron flux over time, and this result shows the stability of the calculation. The neutron flux stays constant with acceptable variance. This also shows that the starting conditions are determined correctly. The subcritical case shows that the code can also handle dynamic systems with a varying neutron flux. (author)

  1. Implementation of the dynamic Monte Carlo method for transient analysis in the general purpose code Tripoli

    International Nuclear Information System (INIS)

    Sjenitzer, Bart L.; Hoogenboom, J. Eduard

    2011-01-01

    A new Dynamic Monte Carlo method is implemented in the general purpose Monte Carlo code Tripoli 4.6.1. With this new method incorporated, a general purpose code can be used for safety transient analysis, such as the movement of a control rod or an accident scenario. To make the Tripoli code ready for calculations on dynamic systems, the Tripoli scheme had to be altered to incorporate time steps, to include the simulation of delayed neutron precursors and to simulate prompt neutron chains. The modified Tripoli code is tested on two sample cases, a steady-state system and a subcritical system, and the resulting neutron fluxes behave as expected. The steady-state calculation has a constant neutron flux over time, and this result shows the stability of the calculation. The neutron flux stays constant with acceptable variance. This also shows that the starting conditions are determined correctly. The subcritical case shows that the code can also handle dynamic systems with a varying neutron flux. (author)
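
    To make the scheme concrete, the following is a minimal point-model sketch of a dynamic Monte Carlo census step with prompt fission chains and delayed neutron precursors. All constants and the point-kinetics reduction are illustrative assumptions; Tripoli itself tracks full space-energy transport with particle weights.

        import math, random

        # Illustrative point-model constants, not Tripoli data.
        TAU, K_EFF = 1e-4, 0.98         # mean neutron lifetime (s); k < 1
        BETA, LAMBDA = 0.0065, 0.08     # delayed fraction; precursor decay (1/s)

        def advance_step(neutrons, precursors, t0, dt):
            """One census interval [t0, t0 + dt): sample precursor decays,
            follow prompt chains, and bank neutrons surviving to the census."""
            t_end = t0 + dt
            decayed = sum(random.random() < 1 - math.exp(-LAMBDA * dt)
                          for _ in range(precursors))
            precursors -= decayed
            stack = list(neutrons) + [t0 + random.random() * dt
                                      for _ in range(decayed)]
            banked = []
            while stack:
                t = stack.pop() + random.expovariate(1 / TAU)  # flight time
                if t >= t_end:
                    banked.append(t_end)           # survives to the census
                elif random.random() < K_EFF:      # Bernoulli fission yield
                    if random.random() < BETA:
                        precursors += 1            # born delayed
                    else:
                        stack.append(t)            # prompt chain continues
            return banked, precursors

    Repeated calls, carrying the banked neutrons and precursors forward, trace the flux transient across census times.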

  2. Implementational Aspects of the Contourlet Filter Bank and Application in Image Coding

    Directory of Open Access Journals (Sweden)

    Truong T. Nguyen

    2009-02-01

    This paper analyzes the implementation aspects of the contourlet filter bank (or pyramidal directional filter bank, PDFB) and considers its application in image coding. First, details of the binary tree-structured directional filter bank (DFB) are presented, including a modification to minimize the phase delay factor and the necessary steps for handling rectangular images. The PDFB is viewed as an overcomplete filter bank, and the directional filters are expressed in terms of polyphase components of the pyramidal filter bank and the conventional DFB. The aliasing effect of the conventional DFB and the Laplacian pyramid on the directional filters is then considered, and the conditions for reducing this effect are presented. The new filters obtained by redesigning the PDFBs to satisfy these requirements have much better frequency responses. A hybrid multiscale filter bank consisting of the PDFB at higher scales and the traditional maximally decimated wavelet filter bank at lower scales is constructed to provide a sparse image representation. A novel embedded image coding system based on the image decomposition and a morphological dilation algorithm is then presented. The coding algorithm efficiently clusters the significant coefficients using progressive morphological operations. Context models for arithmetic coding are designed to exploit the intraband dependency and the correlation existing among the neighboring directional subbands. Experimental results show that the proposed coding algorithm outperforms current state-of-the-art wavelet-based coders, such as JPEG2000, for images with directional features.

  3. Transparent ICD and DRG coding using information technology: linking and associating information sources with the eXtensible Markup Language.

    Science.gov (United States)

    Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim

    2003-01-01

    With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnosis-related groups (DRGs), for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach.
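
    As a small illustration of the document-oriented approach, the snippet below encodes a slice of the ICD-10 hierarchy in XML and walks it with Python's ElementTree; the element and attribute names are hypothetical, not the authors' schema.

        import xml.etree.ElementTree as ET

        # Hypothetical markup for a slice of the ICD-10 hierarchy.
        chapter = ET.fromstring("""
        <chapter code="IX" title="Diseases of the circulatory system">
          <block code="I10-I15" title="Hypertensive diseases">
            <category code="I10" title="Essential (primary) hypertension"/>
          </block>
        </chapter>
        """)

        # The hierarchy is directly navigable, which is what coding
        # software needs for rubric lookup and rule checking.
        for category in chapter.iter("category"):
            print(category.get("code"), "-", category.get("title"))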

  4. Code of Conduct on Biosecurity for Biological Resource Centres: procedural implementation.

    Science.gov (United States)

    Rohde, Christine; Smith, David; Martin, Dunja; Fritze, Dagmar; Stalpers, Joost

    2013-07-01

    A globally applicable code of conduct specifically dedicated to biosecurity has been developed together with guidance for its procedural implementation. This is to address the regulations governing potential dual-use of biological materials, associated information and technologies, and reduce the potential for their malicious use. Scientists researching and exchanging micro-organisms have a responsibility to prevent misuse of the inherently dangerous ones, that is, those possessing characters such as pathogenicity or toxin production. The code of conduct presented here is based on best practice principles for scientists and their institutions working with biological resources with a specific focus on micro-organisms. It aims to raise awareness of regulatory needs and to protect researchers, their facilities and stakeholders. It reflects global activities in this area in response to legislation such as that in the USA, the PATRIOT Act of 2001, Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism Act of 2001; the Anti-Terrorism Crime and Security Act 2001 and subsequent amendments in the UK; the EU Dual-Use Regulation; and the recommendations of the Organization for Economic Co-operation and Development (OECD), under their Biological Resource Centre (BRC) Initiative at the beginning of the millennium (OECD, 2001). Two project consortia with international partners came together with experts in the field to draw up a Code of Conduct on Biosecurity for BRCs to ensure that culture collections and microbiologists in general worked in a way that met the requirements of such legislation. A BRC is the modern day culture collection that adds value to its holdings and implements common best practice in the collection and supply of strains for research and development. This code of conduct specifically addresses the work of public service culture collections and describes the issues of importance and the controls or

  5. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will be helpful in promoting a safety culture as well as in acquiring public acceptance for the nuclear power industry. The FADAS (Following Accident Dose Assessment System), which is a part of the Computerized Advisory System for a Radiological Emergency (CARE) system at KINS, is used for prevention against nuclear accidents. In order to make the FADAS system more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of the coupled interface system between the FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors

  6. Joint source/channel coding of scalable video over noisy channels

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, G.; Zakhor, A. [Department of Electrical Engineering and Computer Sciences University of California Berkeley, California94720 (United States)

    1997-01-01

    We propose an optimal bit allocation strategy for a joint source/channel video codec over a noisy channel when the channel state is assumed to be known. Our approach is to partition source and channel coding bits in such a way that the expected distortion is minimized. The particular source coding algorithm we use is rate scalable and is based on 3D subband coding with multi-rate quantization. We show that using this strategy, transmission of video over very noisy channels still renders acceptable visual quality and outperforms schemes that use equal error protection only. The flexibility of the algorithm also permits the bit allocation to be selected optimally when the channel state is given in the form of a probability distribution instead of a deterministic state. (copyright 1997 American Institute of Physics)
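
    The partitioning idea can be illustrated with a toy exhaustive search over source/channel bit splits; the exponential distortion and failure models below are illustrative stand-ins, not the paper's subband codec or channel model.

        import math

        R_TOTAL = 64                  # total bit budget per block
        D_LOST = 1.0                  # distortion if the block is lost

        def expected_distortion(r_src, r_chan, noise=0.5):
            d_src = math.exp(-0.1 * r_src)                  # rate-distortion proxy
            p_fail = math.exp(-0.2 * r_chan * (1 - noise))  # decoding failure
            return p_fail * D_LOST + (1 - p_fail) * d_src

        # With a channel-state distribution instead of a known state,
        # average expected_distortion over the states before minimizing.
        best_d, best_rs = min((expected_distortion(rs, R_TOTAL - rs), rs)
                              for rs in range(R_TOTAL + 1))
        print(f"{best_rs} source bits, {R_TOTAL - best_rs} channel bits")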

  7. Code of conduct on the safety and security of radioactive sources

    International Nuclear Information System (INIS)

    Anon.

    2001-01-01

    The objective of the code of conduct is to achieve and maintain a high level of safety and security of radioactive sources through the development, harmonization and enforcement of national policies, laws and regulations, and through the fostering of international co-operation. In particular, this code addresses the establishment of an adequate system of regulatory control from the production of radioactive sources to their final disposal, and a system for the restoration of such control if it has been lost. (N.C.)

  8. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Science.gov (United States)

    Blyth, Taylor S.

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of the quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  9. Development and Implementation of CFD-Informed Models for the Advanced Subchannel Code CTF

    Energy Technology Data Exchange (ETDEWEB)

    Blyth, Taylor S. [Pennsylvania State Univ., University Park, PA (United States); Avramova, Maria [North Carolina State Univ., Raleigh, NC (United States)

    2017-04-01

    The research described in this PhD thesis contributes to the development of efficient methods for the utilization of high-fidelity models and codes to inform low-fidelity models and codes in the area of nuclear reactor core thermal-hydraulics. The objective is to increase the accuracy of predictions of the quantities of interest using high-fidelity CFD models while preserving the efficiency of low-fidelity subchannel core calculations. An original methodology named Physics-based Approach for High-to-Low Model Information has been further developed and tested. The overall physical phenomena and corresponding localized effects, which are introduced by the presence of spacer grids in light water reactor (LWR) cores, are dissected into four basic building processes, and corresponding models are informed using high-fidelity CFD codes. These models are a spacer grid-directed cross-flow model, a grid-enhanced turbulent mixing model, a heat transfer enhancement model, and a spacer grid pressure loss model. The localized CFD models are developed and tested using the CFD code STAR-CCM+, and the corresponding global model development and testing in subchannel formulation is performed in the thermal-hydraulic subchannel code CTF. The improved CTF simulations utilize data files derived from CFD STAR-CCM+ simulation results covering the spacer grid design desired for inclusion in the CTF calculation. The current implementation of these models is examined and possibilities for improvement and further development are suggested. The validation experimental database is extended by including the OECD/NRC PSBT benchmark data. The outcome is an enhanced accuracy of CTF predictions while preserving the computational efficiency of a low-fidelity subchannel code.

  10. Applicability evaluation on the conservative metal-water reaction (MWR) model implemented into the SPACE code

    International Nuclear Information System (INIS)

    Lee, Suk Ho; You, Sung Chang; Kim, Han Gon

    2011-01-01

    The SBLOCA (Small Break Loss-of-Coolant Accident) evaluation methodology for the APR1400 (Advanced Power Reactor 1400) is under development using the SPACE code. The goal of the development of this methodology is to set up a conservative evaluation methodology in accordance with Appendix K of 10CFR50 by the end of 2012. In order to develop the Appendix K version of the SPACE code, the code is being modified to implement the required evaluation models. Among the conservative models required in the SPACE code, the metal-water reaction (MWR) model, the critical flow model, the Critical Heat Flux (CHF) model and the post-CHF model must be implemented. At present, the integration of the models to generate the Appendix K version of SPACE is in its preliminary stage. In this paper, the conservative MWR model and its applicability to the code are introduced
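
    For orientation, Appendix K prescribes the Baker-Just correlation for the zirconium-water reaction, a parabolic rate law of the general form sketched below; the rate constants here are placeholders to be checked against the regulation, not verified values.

        import math

        # Parabolic metal-water reaction kinetics for the reacted layer delta:
        #   d(delta^2)/dt = A * exp(-E / (R * T))
        # A and E are placeholders; Appendix K fixes them via Baker-Just.
        A_RATE, E_ACT, R_GAS = 3.33e7, 45500.0, 1.987   # illustrative only

        def react_layer(delta0, temp_K, dt):
            """Advance the reacted-layer thickness over one time step."""
            k_p = A_RATE * math.exp(-E_ACT / (R_GAS * temp_K))
            return math.sqrt(delta0 ** 2 + k_p * dt)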

  11. Implementation and testing of the CFDS-FLOW3D code

    International Nuclear Information System (INIS)

    Smith, B.L.

    1994-03-01

    FLOW3D is a multi-purpose, transient fluid dynamics and heat transfer code developed by Computational Fluid Dynamics Services (CFDS), a branch of AEA Technology, based at Harwell. The code is supplied with a SUN-based operating environment consisting of an interactive grid generator SOPHIA and a post-processor JASPER for graphical display of results. Both SOPHIA and JASPER are extensions of the support software originally written for the ASTEC code, also promoted by CFDS. The latest release of FLOW3D contains well-tested turbulence and combustion models and, in a less-developed form, a multi-phase modelling potential. This document describes briefly the modelling capabilities of FLOW3D (Release 3.2) and outlines implementation procedures for the VAX, CRAY and CONVEX computer systems. Additional remarks are made concerning the in-house support programs which have been specially written in order to adapt existing ASTEC input data for use with FLOW3D; these programs operate within a VAX-VMS environment. Three sample calculations have been performed and results compared with those obtained previously using the ASTEC code, and checked against other available data, where appropriate. (author) 35 figs., 3 tabs., 42 refs

  12. An Implementation Of Elias Delta Code And ElGamal Algorithm In Image Compression And Security

    Science.gov (United States)

    Rachmawati, Dian; Andri Budiman, Mohammad; Saffiera, Cut Amalia

    2018-01-01

    In data transmission, such as transferring an image, confidentiality, integrity, and efficiency of data storage are highly needed. To maintain the confidentiality and integrity of data, one of the techniques used is ElGamal. The strength of this algorithm lies in the difficulty of calculating discrete logarithms modulo a large prime. ElGamal belongs to the class of asymmetric key algorithms and results in an enlargement of the file size; therefore, data compression is required. Elias Delta Code is one of the compression algorithms that use a delta code table. The image was first compressed using the Elias Delta Code algorithm, and the result of the compression was then encrypted using the ElGamal algorithm. The primality test was implemented using the Agrawal-Biswas algorithm. The results showed that the ElGamal method could maintain the confidentiality and integrity of data, with MSE and PSNR values of 0 and infinity, respectively. The Elias Delta Code method achieved an average compression ratio of 62.49% and an average space saving of 37.51%.
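
    For reference, a compact sketch of Elias delta encoding and decoding over bit strings (the compression stage; in the scheme above, the packed bitstream would then be encrypted with ElGamal):

        def elias_delta_encode(n: int) -> str:
            # n >= 1: gamma-code the bit length of n, then n's low bits.
            nbits = n.bit_length()
            gamma = "0" * (nbits.bit_length() - 1) + bin(nbits)[2:]
            return gamma + bin(n)[3:]        # drop n's leading 1 bit

        def elias_delta_decode(bits: str) -> int:
            zeros = 0
            while bits[zeros] == "0":
                zeros += 1
            nbits = int(bits[zeros:2 * zeros + 1], 2)   # gamma-decoded length
            return int("1" + bits[2 * zeros + 1:2 * zeros + nbits], 2)

        assert elias_delta_encode(10) == "00100010"
        assert elias_delta_decode("00100010") == 10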

  13. Building energy, building leadership : recommendations for the adoption, development, and implementation of a commercial building energy code in Manitoba

    Energy Technology Data Exchange (ETDEWEB)

    Akerstream, T. [Manitoba Hydro, Winnipeg, MB (Canada); Allard, K. [City of Thompson, Thompson, MB (Canada); Anderson, N.; Beacham, D. [Manitoba Office of the Fire Commissioner, Winnipeg, MB (Canada); Andrich, R. [The Forks North Portage Partnership, MB (Canada); Auger, A. [Natural Resources Canada, Ottawa, ON (Canada). Office of Energy Efficiency; Downs, R.G. [Shindico Realty Inc., Winnipeg, MB (Canada); Eastwood, R. [Number Ten Architectural Group, Winnipeg, MB (Canada); Hewitt, C. [SMS Engineering Ltd., Winnipeg, MB (Canada); Joshi, D. [City of Winnipeg, Winnipeg, MB (Canada); Klassen, K. [Manitoba Dept. of Energy Science and Technology, Winnipeg, MB (Canada); Phillips, B. [Unies Ltd., Winnipeg, MB (Canada); Wiebe, R. [Ben Wiebe Construction Ltd., Winnipeg, MB (Canada); Woelk, D. [Bockstael Construction Ltd., Winnipeg, MB (Canada); Ziemski, S. [CREIT Management LLP, Winnipeg, MB (Canada)

    2006-09-15

    This report presented a strategy and a set of recommendations for the adoption, development and implementation of an energy code for new commercial construction in Manitoba. The report was compiled by an advisory committee comprised of industry representatives and government agency representatives. Recommendations were divided into 4 categories: (1) advisory committee recommendations; (2) code adoption recommendations; (3) code development recommendations; and (4) code implementation recommendations. It was suggested that Manitoba should adopt an amended version of the Model National Energy Code for Buildings (1997) as a regulation under the Buildings and Mobile Homes Act. Participation in a national initiative to update the Model National Energy Code for Buildings was also advised. It was suggested that the energy code should be considered as the first step in a longer-term process towards a sustainable commercial building code. However, the code should be adopted within the context of a complete market transformation approach. Other recommendations included: the establishment of a multi-stakeholder energy code task group; the provision of information and technical resources to help build industry capacity; the establishment of a process for energy code compliance; and an ongoing review of the energy code to assess impacts and progress. Supplemental recommendations for future discussion included the need for integrated design by building design teams in Manitoba; the development of a program to provide technical assistance to building design teams; and collaboration between post-secondary institutions to develop and deliver courses on integrated building design to students and professionals. 17 refs.

  14. Content-addressable memory processing: Multilevel coding, logical minimization, and an optical implementation

    International Nuclear Information System (INIS)

    Mirsalehi, M.M.; Gaylord, T.K.

    1986-01-01

    This paper describes the effect of the coding scheme on the number of reference patterns that need to be stored in a content-addressable memory. It is shown that a residue number system, in conjunction with multilevel coding and logical minimization, significantly reduces the number of reference patterns required for the implementation of an operation. The number of reference patterns and the total amount of information that needs to be stored are determined for the practical cases of 16-bit and 32-bit fixed-point addition and multiplication. The storage requirements were found to be achievable with state-of-the-art memory technologies. An optical holographic processor capable of parallel-input/parallel-output operation is described
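
    A small sketch of the residue number system idea: arithmetic splits into independent small-modulus digits, so each digit needs only a small table of reference patterns; the moduli below are an illustrative pairwise-coprime set, not those of the paper.

        from math import prod

        MODULI = (7, 11, 13, 15)   # pairwise coprime, illustrative choice

        def to_rns(x):
            return tuple(x % m for m in MODULI)

        def rns_mul(a, b):
            # Digit-wise products: each small modulus needs only a small
            # table of reference patterns in a content-addressable memory.
            return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

        def from_rns(r):
            # Chinese Remainder Theorem reconstruction.
            M = prod(MODULI)
            return sum(ri * (M // mi) * pow(M // mi, -1, mi)
                       for ri, mi in zip(r, MODULI)) % M

        assert from_rns(rns_mul(to_rns(123), to_rns(89))) == 123 * 89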

  15. Documentation for grants equal to tax model: Volume 3, Source code

    International Nuclear Information System (INIS)

    Boryczka, M.K.

    1986-01-01

    The GETT model is capable of forecasting the amount of tax liability associated with all property owned and all activities undertaken by the US Department of Energy (DOE) in site characterization and repository development. The GETT program is a user-friendly, menu-driven model developed using dBASE III™, a relational data base management system. The data base for GETT consists primarily of eight separate dBASE III™ files corresponding to each of the eight taxes (real property, personal property, corporate income, franchise, sales, use, severance, and excise) levied by State and local jurisdictions on business property and activity. Additional smaller files help to control model inputs and reporting options. Volume 3 of the GETT model documentation is the source code. The code is arranged primarily by the eight tax types. Other code files include those for JURISDICTION, SIMULATION, VALIDATION, TAXES, CHANGES, REPORTS, GILOT, and GETT. The code has been verified through hand calculations

  16. Lessons learned from a pilot implementation of the UMLS information sources map.

    Science.gov (United States)

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    OBJECTIVE: To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. DESIGN: A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. RESULTS: The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. CONCLUSION: An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this

  17. Lessons learned from a pilot implementation of the UMLS information sources map.

    Science.gov (United States)

    Miller, P L; Frawley, S J; Wright, L; Roderer, N K; Powsner, S M

    1995-01-01

    To explore the software design issues involved in implementing an operational information sources map (ISM) knowledge base (KB) and system of navigational tools that can help medical users access network-based information sources relevant to a biomedical question. A pilot biomedical ISM KB and associated client-server software (ISM/Explorer) have been developed to help students, clinicians, researchers, and staff access network-based information sources, as part of the National Library of Medicine's (NLM) multi-institutional Unified Medical Language System (UMLS) project. The system allows the user to specify and constrain a search for a biomedical question of interest. The system then returns a list of sources matching the search. At this point the user may request 1) further information about a source, 2) that the list of sources be regrouped by different criteria to allow the user to get a better overall appreciation of the set of retrieved sources as a whole, or 3) automatic connection to a source. The pilot system operates in client-server mode and currently contains coded information for 121 sources. It is in routine use from approximately 40 workstations at the Yale School of Medicine. The lessons that have been learned are that: 1) it is important to make access to different versions of a source as seamless as possible, 2) achieving seamless, cross-platform access to heterogeneous sources is difficult, 3) significant differences exist between coding the subject content of an electronic information resource versus that of an article or a book, 4) customizing the ISM to multiple institutions entails significant complexities, and 5) there are many design trade-offs between specifying searches and viewing sets of retrieved sources that must be taken into consideration. An ISM KB and navigational tools have been constructed. In the process, much has been learned about the complexities of development and evaluation in this new environment, which are different

  18. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes aspects other than the similarity between programs into account. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) from the field of information retrieval is applied. Each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods such as Sim and JPlag.
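
    A much-simplified sketch of the idea using Python's own ast module: node-type frequencies are weighted IDF-style over the corpus of submissions and compared by cosine similarity. WASTK proper computes a kernel over whole subtrees; this bag-of-node-types variant is only illustrative.

        import ast, math
        from collections import Counter

        def node_types(source):
            return Counter(type(n).__name__ for n in ast.walk(ast.parse(source)))

        def similarity(src_a, src_b, corpus):
            # IDF over the corpus: node types common to every submission
            # (boilerplate, instructor framework) get weight ~0.
            docs = [node_types(s) for s in corpus]
            idf = {t: math.log(len(docs) / sum(1 for d in docs if t in d))
                   for t in set().union(*docs)}
            a, b = node_types(src_a), node_types(src_b)
            dot = sum(a[t] * b[t] * idf.get(t, 0.0) ** 2
                      for t in a.keys() & b.keys())
            na = math.sqrt(sum((a[t] * idf.get(t, 0.0)) ** 2 for t in a))
            nb = math.sqrt(sum((b[t] * idf.get(t, 0.0)) ** 2 for t in b))
            return dot / (na * nb) if na and nb else 0.0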

  19. Implementation of the probability table method in a continuous-energy Monte Carlo code system

    International Nuclear Information System (INIS)

    Sutton, T.M.; Brown, F.B.

    1998-10-01

    RACER is a particle-transport Monte Carlo code that utilizes a continuous-energy treatment for neutrons and neutron cross section data. Until recently, neutron cross sections in the unresolved resonance range (URR) have been treated in RACER using smooth, dilute-average representations. This paper describes how RACER has been modified to use probability tables to treat cross sections in the URR, and the computer codes that have been developed to compute the tables from the unresolved resonance parameters contained in ENDF/B data files. A companion paper presents results of Monte Carlo calculations that demonstrate the effect of the use of probability tables versus the use of dilute-average cross sections for the URR. The next section provides a brief review of the probability table method as implemented in the RACER system. The production of the probability tables for use by RACER takes place in two steps. The first step is the generation of probability tables from the nuclear parameters contained in the ENDF/B data files. This step, and the code written to perform it, are described in Section 3. The tables produced are at energy points determined by the ENDF/B parameters and/or accuracy considerations. The tables actually used in the RACER calculations are obtained in the second step from those produced in the first. These tables are generated at energy points specific to the RACER calculation. Section 4 describes this step and the code written to implement it, as well as modifications made to RACER to enable it to use the tables. Finally, some results and conclusions are presented in Section 5
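
    The sampling step at the heart of the method is simple; below is a minimal sketch with a single made-up table (band probabilities and cross sections are illustrative numbers, not ENDF/B data).

        import bisect, random

        # One illustrative table at a single energy point: cumulative band
        # probabilities and the total cross section (barns) in each band.
        band_cdf = [0.20, 0.50, 0.80, 0.95, 1.00]
        band_xs = [4.1, 6.3, 9.8, 17.0, 41.0]

        def sample_total_xs():
            # Band sampling preserves the self-shielding that the single
            # dilute-average value sum(p_i * xs_i) would wash out.
            return band_xs[bisect.bisect_left(band_cdf, random.random())]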

  20. Rascal: A domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    P. Klint (Paul); T. van der Storm (Tijs); J.J. Vinju (Jurgen); A. Walenstein; S. Schuppe

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This

  1. RASCAL: a domain specific language for source code analysis and manipulation

    NARCIS (Netherlands)

    Klint, P.; Storm, van der T.; Vinju, J.J.

    2009-01-01

    Many automated software engineering tools require tight integration of techniques for source code analysis and manipulation. State-of-the-art tools exist for both, but the domains have remained notoriously separate because different computational paradigms fit each domain best. This impedance

  2. From system requirements to source code: transitions in UML and RUP

    Directory of Open Access Journals (Sweden)

    Stanisław Wrycza

    2011-06-01

    Among UML-related books there are many manuals explaining the language specification. Only some of these books concentrate on the practical aspects of using the UML language effectively with CASE tools and RUP. The current paper presents the transitions from a system requirements specification to structural source code that are useful while developing an information system.

  3. CODES AND PRACTICES OF IMPLEMENTATION OF CORPORATE GOVERNANCE IN ROMANIA AND RESULTS REPORTING

    Directory of Open Access Journals (Sweden)

    GROSU MARIA

    2011-12-01

    Corporate governance refers to the manner in which companies are directed and controlled. Business management has always been guided by certain principles, but the current meaning of corporate governance concerns the contribution that companies must make to the overall development of modern society. Romania was quite late in adopting a code of good practice in corporate governance, driven in particular by the privatization process, but also by the transfer of control and surveillance from political organizations to the Board of Directors (BD). The adoption of codes of corporate governance is necessary to harmonize internal business requirements with those of a functioning market economy. In addition, for the CEE countries, the European Commission adopted an action plan announcing measures to modernize company law and enhance corporate governance. Romania is taking steps in this direction by amending the Company Law and other regulations, although practice does not necessarily keep pace with the requirements. This study aims, on the one hand, at an analysis of the evolution of the corporate governance codes adopted in Romania and, on the other, at empirical research on the implementation of corporate governance principles in a representative sample of companies listed on the Bucharest Stock Exchange (BSE). We consider the research methodology relevant because the issuer of the corporate governance codes in Romania is the BSE, which requests their voluntary implementation by listed companies. The implementation results are summarized and interpreted on the basis of the public reports of the companies studied. Most studies undertaken in this direction have been carried out on multinational companies, which respect the corporate governance codes of their countries of origin. In addition, many studies emphasize the fair treatment of stakeholders rather than the models of governance adopted (monist/dualist), with implications for optimizing economic as well as social objectives. The research undertaken attempts to highlight, on the one

  4. VINE-A NUMERICAL CODE FOR SIMULATING ASTROPHYSICAL SYSTEMS USING PARTICLES. II. IMPLEMENTATION AND PERFORMANCE CHARACTERISTICS

    International Nuclear Information System (INIS)

    Nelson, Andrew F.; Wetzstein, M.; Naab, T.

    2009-01-01

    We continue our presentation of VINE. In this paper, we begin with a description of relevant architectural properties of the serial and shared memory parallel computers on which VINE is intended to run, and describe their influences on the design of the code itself. We continue with a detailed description of a number of optimizations made to the layout of the particle data in memory and to our implementation of a binary tree used to access that data for use in gravitational force calculations and searches for smoothed particle hydrodynamics (SPH) neighbor particles. We describe the modifications to the code necessary to obtain forces efficiently from special purpose 'GRAPE' hardware, the interfaces required to allow transparent substitution of those forces in the code instead of those obtained from the tree, and the modifications necessary to use both tree and GRAPE together as a fused GRAPE/tree combination. We conclude with an extensive series of performance tests, which demonstrate that the code can be run efficiently and without modification in serial on small workstations or in parallel using the OpenMP compiler directives on large-scale, shared memory parallel machines. We analyze the effects of the code optimizations and estimate that they improve its overall performance by more than an order of magnitude over that obtained by many other tree codes. Scaled parallel performance of the gravity and SPH calculations, together the most costly components of most simulations, is nearly linear up to at least 120 processors on moderate sized test problems using the Origin 3000 architecture, and to the maximum machine sizes available to us on several other architectures. At similar accuracy, performance of VINE, used in GRAPE-tree mode, is approximately a factor of 2 slower than that of VINE, used in host-only mode. Further optimizations of the GRAPE/host communications could improve the speed by as much as a factor of 3, but have not yet been implemented in VINE

  5. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    Science.gov (United States)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and it solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation and, as a result, are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at the Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and was completed in Fall 2015. Phase 2 is focused on WEC performance and is scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model tests, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable power-take-off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be

  6. Coded moderator approach for fast neutron source detection and localization at standoff

    Energy Technology Data Exchange (ETDEWEB)

    Littell, Jennifer [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Lukosi, Eric, E-mail: elukosi@utk.edu [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States); Institute for Nuclear Security, University of Tennessee, 1640 Cumberland Avenue, Knoxville, TN 37996 (United States); Hayward, Jason; Milburn, Robert; Rowan, Allen [Department of Nuclear Engineering, University of Tennessee, 305 Pasqua Engineering Building, Knoxville, TN 37996 (United States)

    2015-06-01

    Considering the need for directional sensing at standoff in some security applications and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We utilized MCNPX simulations to demonstrate that, by surrounding the boron straw detectors with an HDPE coded moderator, a source-detector orientation-specific response enables potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed in order to confirm the viability of this detector system's localization capabilities, which resulted in the identification of a 1 MeV neutron source with a strength equivalent to 8 kg of WGPu at 50 m standoff within ±11°.

  7. Uncertainties in source term calculations generated by the ORIGEN2 computer code for Hanford Production Reactors

    International Nuclear Information System (INIS)

    Heeb, C.M.

    1991-03-01

    The ORIGEN2 computer code is the primary calculational tool for computing isotopic source terms for the Hanford Environmental Dose Reconstruction (HEDR) Project. The ORIGEN2 code computes the amounts of radionuclides that are created or remain in spent nuclear fuel after neutron irradiation and radioactive decay have occurred as a result of nuclear reactor operation. ORIGEN2 was chosen as the primary code for these calculations because it is widely used and accepted by the nuclear industry, both in the United States and the rest of the world. Its comprehensive library of over 1,600 nuclides includes any possible isotope of interest to the HEDR Project. It is important to evaluate the uncertainties expected from use of ORIGEN2 in the HEDR Project because these uncertainties may have a pivotal impact on the final accuracy and credibility of the results of the project. There are three primary sources of uncertainty in an ORIGEN2 calculation: basic nuclear data uncertainty in neutron cross sections, radioactive decay constants, energy per fission, and fission product yields; calculational uncertainty due to input data; and code uncertainties (i.e., numerical approximations, and neutron spectrum-averaged cross-section values from the code library). 15 refs., 5 figs., 5 tabs

  8. Code of practice for the use of sealed radioactive sources in borehole logging (1998)

    International Nuclear Information System (INIS)

    1989-12-01

    The purpose of this code is to establish working practices, procedures and protective measures which will aid in keeping doses arising from the use of borehole logging equipment containing sealed radioactive sources as low as reasonably achievable, and to ensure that the dose-equivalent limits specified in the National Health and Medical Research Council's radiation protection standards are not exceeded. This code applies to all situations and practices where a sealed radioactive source or sources are used in wireline logging for investigating the physical properties of the geological sequence, or any fluids contained in the geological sequence, or the properties of the borehole itself, whether casing, mudcake or borehole fluids. The radiation protection standards specify dose-equivalent limits for two categories: radiation workers and members of the public. 3 refs., tabs., ills

  9. COMPUTATION FORMAT computer codes X4TOC4 and PLOTC4. Implementing and Testing on a Personal Computer

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1987-05-01

    This document describes the contents of the diskette containing the COMPUTATION FORMAT codes X4TOC4 and PLOTC4 by D.E. Cullen, and example data for use in implementing and testing these codes on a Personal Computer of the type IBM-PC/AT. Upon request the codes are available from the IAEA Nuclear Data Section, free of charge, on a single diskette. (author)

  10. Experimental benchmark of the NINJA code for application to the Linac4 H- ion source plasma

    Science.gov (United States)

    Briefi, S.; Mattei, S.; Rauner, D.; Lettry, J.; Tran, M. Q.; Fantz, U.

    2017-10-01

    For a dedicated performance optimization of negative hydrogen ion sources applied at particle accelerators, a detailed assessment of the plasma processes is required. Due to the compact design of these sources, diagnostic access is typically limited to optical emission spectroscopy yielding only line-of-sight integrated results. In order to allow for a spatially resolved investigation, the electromagnetic particle-in-cell Monte Carlo collision code NINJA has been developed for the Linac4 ion source at CERN. This code considers the RF field generated by the ICP coil as well as the external static magnetic fields and calculates self-consistently the resulting discharge properties. NINJA is benchmarked at the diagnostically well accessible lab experiment CHARLIE (Concept studies for Helicon Assisted RF Low pressure Ion sourcEs) at varying RF power and gas pressure. A good general agreement is observed between experiment and simulation although the simulated electron density trends for varying pressure and power as well as the absolute electron temperature values deviate slightly from the measured ones. This can be explained by the assumption of strong inductive coupling in NINJA, whereas the CHARLIE discharges show the characteristics of loosely coupled plasmas. For the Linac4 plasma, this assumption is valid. Accordingly, both the absolute values of the accessible plasma parameters and their trends for varying RF power agree well in measurement and simulation. At varying RF power, the H- current extracted from the Linac4 source peaks at 40 kW. For volume operation, this is perfectly reflected by assessing the processes in front of the extraction aperture based on the simulation results where the highest H- density is obtained for the same power level. In surface operation, the production of negative hydrogen ions at the converter surface can only be considered by specialized beam formation codes, which require plasma parameters as input. It has been demonstrated that

  11. Identification of Sparse Audio Tampering Using Distributed Source Coding and Compressive Sensing Techniques

    Directory of Open Access Journals (Sweden)

    Valenzise G

    2009-01-01

    In the past few years, a large number of techniques have been proposed to identify whether a multimedia content has been illegally tampered with or not. Nevertheless, very few efforts have been devoted to identifying which kind of attack has been carried out, especially due to the large amount of data required for this task. We propose a novel hashing scheme which exploits the paradigms of compressive sensing and distributed source coding to generate a compact hash signature, and we apply it to the case of audio content protection. The audio content provider produces a small hash signature by computing a limited number of random projections of a perceptual, time-frequency representation of the original audio stream; the audio hash is given by the syndrome bits of an LDPC code applied to the projections. At the content user side, the hash is decoded using distributed source coding tools. If the tampering is sparsifiable or compressible in some orthonormal basis or redundant dictionary, it is possible to identify the time-frequency position of the attack, with a hash size as small as 200 bits/second; the bit saving obtained by introducing distributed source coding ranges between 20% and 70%.
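
    The provider-side projection step can be sketched as follows; the dimensions, seeding and 1-bit quantization are illustrative assumptions. In the actual scheme the projections are conveyed as LDPC syndrome bits that the user decodes with the received audio as side information, and the sparse tampering is then localized by compressive-sensing recovery.

        import numpy as np

        def audio_hash(tf_repr, n_proj=64, seed=0):
            # Random projections of the time-frequency representation; the
            # provider and user share the seed to regenerate the matrix.
            rng = np.random.default_rng(seed)
            x = np.asarray(tf_repr, dtype=float).ravel()
            A = rng.standard_normal((n_proj, x.size))
            return (A @ x > 0).astype(np.uint8)   # coarsely quantized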

  12. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    Directory of Open Access Journals (Sweden)

    Sandeep Kakde

    2017-12-01

    For the binary field and long code lengths, low-density parity-check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that the BPSK modulation technique is better than the other modulation techniques in terms of BER. The paper also gives the error performance of an LDPC decoder over an AWGN channel using the min-sum algorithm. A VLSI architecture is proposed which uses the value-reuse property of the min-sum algorithm and gives high throughput. The proposed work has been implemented and tested on a Xilinx Virtex 5 FPGA. The MATLAB results for the LDPC decoder give bit error rates in the range of 10^-1 to 10^-3.5 at SNR = 1 to 2 for 20 iterations, which is good bit error rate performance. The latency of the parallel design of the LDPC decoder has also been reduced. The design achieves a maximum frequency of 141.22 MHz and a throughput of 2.02 Gbps while consuming less area.
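
    The value-reuse property mentioned above comes from the check-node update: every outgoing message is built from just the overall sign product and the two smallest input magnitudes. A minimal sketch (assuming nonzero LLRs):

        import numpy as np

        def min_sum_check_update(llrs):
            # Outgoing message on edge i: product of the other edges' signs
            # times the minimum of the other edges' magnitudes.
            signs = np.sign(llrs)
            total_sign = np.prod(signs)
            mags = np.abs(llrs)
            i_min = int(np.argmin(mags))
            min1, min2 = mags[i_min], np.min(np.delete(mags, i_min))
            out = np.full(llrs.size, min1)
            out[i_min] = min2              # the minimum edge sees the runner-up
            return total_sign * signs * out   # divides out each edge's own sign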

  13. Design and implementation of a software tool intended for simulation and test of real time codes

    International Nuclear Information System (INIS)

    Le Louarn, C.

    1986-09-01

    The objective of real-time software testing is to reveal processing errors and unsatisfied functional requirements or timing constraints in a code. In the perspective of the safety analysis of nuclear power plant equipment, testing should be carried out independently of the physical process (which is generally not available), and because accidental hardware failures must be considered. We propose here a simulation and test tool, implemented entirely in software, with extensive interactive capabilities for testing assembly code running on a microprocessor. The OST (outil d'aide a la simulation et au Test de logiciels temps reel) simulates code execution and the behaviour of the hardware or software environment. Test execution is closely monitored and much useful information is automatically saved. The present thesis, after reviewing methods and tools dedicated to real-time software, details the OST system. We show the internal mechanisms and objects of the system: particularly 'events' (which describe the evolution of the system under test) and mnemonics (which describe the variables). Then, we detail the interactive means available to the user for constructing the test data and the environment of the tested software. Finally, a prototype implementation is presented along with the results of the tests carried out, demonstrating the many advantages of using an automatic tool over a manual investigation. As a conclusion, further developments necessary to complete the final tool are reviewed [fr]

  14. Implementations of non-drag interfacial forces into the CUPID code

    International Nuclear Information System (INIS)

    Park, I.K.; Cho, H.K.; Kim, J.; Yoon, H.Y.; Jeong, J.J.

    2009-01-01

    A component-scale thermal-hydraulics analysis module, the CUPID code, is being developed for transient three-dimensional two-phase flow analysis of nuclear reactor components. CUPID is based on a two-fluid, three-field model, which is solved using an unstructured finite volume method. In the two-fluid momentum equation, the most important term to be modeled is the interfacial force. The simplest way to model this force is to formulate it as a linear combination of various known interfacial forces, such as the standard drag force, the virtual mass force, the Basset force, the lift force, the wall lift force, and the turbulent dispersion force. The standard drag force and the virtual mass force, which are essential for two-fluid computational models, are already considered in the CUPID code. In this paper, the wall lubrication force, the lift force, and the turbulent dispersion force, including turbulence models, which play an important role in the radial distribution of the void fraction in a two-phase flow, were implemented into the CUPID code, and the effect of these forces was verified qualitatively. (author)

  15. Implementation of 3D models in the Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    Lopes, Vivaldo; Millian, Felix M.; Guevara, Maria Victoria M.; Garcia, Fermin; Sena, Isaac; Menezes, Hugo

    2009-01-01

    In the area of numerical dosimetry applied to medical physics, the scientific community focuses on the elaboration of new hybrid models based on 3D models. However, different steps of the process of simulation with 3D models need improvement and optimization in order to expedite the calculations and improve the accuracy of this methodology. This project was developed with the aim of optimizing the process of introducing 3D models into the Monte Carlo radiation transport simulation code (MCNP). The fast implementation of these models in the simulation code allows the dose deposited in the patient's organs to be estimated in a more personalized way, increasing the accuracy of the estimates and reducing the health risks caused by ionizing radiation. The introduction of these models into MCNP was made through an input file, constructed from a sequence of bi-dimensional images of the 3D model generated using the program '3DSMAX', imported by the program 'TOMO MC' and thus introduced as the INPUT FILE of the MCNP code. (author)

  16. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

    The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on Newton's cooling equation [pt]
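
    CP1's internals are not given in this record, but the type of calculation described, point kinetics with Doppler feedback driven by a Newton cooling law, can be sketched as follows; every constant here is an illustrative assumption, not CP1 data.

      # Illustrative constants (not CP1 data): one delayed neutron group.
      BETA, LAM, GEN = 0.0065, 0.08, 1e-5   # delayed fraction, decay (1/s), Lambda (s)
      ALPHA_D = -2e-5                       # assumed Doppler coefficient (dk/k per K)
      H, KAPPA, T_COOL = 0.5, 10.0, 300.0   # cooling rate (1/s), heating (K/s), coolant (K)

      def step(n, c, temp, rho_ext, dt=1e-4):
          """One explicit-Euler step of point kinetics with feedback."""
          rho = rho_ext + ALPHA_D * (temp - T_COOL)          # net reactivity
          dn = (rho - BETA) / GEN * n + LAM * c              # neutron density
          dc = BETA / GEN * n - LAM * c                      # precursor density
          dT = KAPPA * n - H * (temp - T_COOL)               # Newton cooling law
          return n + dt * dn, c + dt * dc, temp + dt * dT

      n, c, temp = 1.0, BETA / (GEN * LAM), T_COOL           # critical equilibrium
      for _ in range(20_000):                                # 2 s transient
          n, c, temp = step(n, c, temp, rho_ext=0.001)       # +100 pcm reactivity step
      print(f"relative power {n:.3f}, fuel temperature {temp:.1f} K")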

  17. BSDWormer; an Open Source Implementation of a Poisson Wavelet Multiscale Analysis for Potential Fields

    Science.gov (United States)

    Horowitz, F. G.; Gaede, O.

    2014-12-01

    Wavelet multiscale edge analysis of potential fields (a.k.a. "worms") has been known since Moreau et al. (1997) and was independently derived by Hornby et al. (1999). The technique is useful for producing a scale-explicit overview of the structures beneath a gravity or magnetic survey, including establishing the location and estimating the attitude of surface features, as well as incorporating information about the geometric class (point, line, surface, volume, fractal) of the underlying sources — in a fashion much like traditional structural indices from Euler solutions albeit with better areal coverage. Hornby et al. (2002) show that worms form the locally highest concentration of horizontal edges of a given strike — which in conjunction with the results from Mallat and Zhong (1992) induces a (non-unique!) inversion where the worms are physically interpretable as lateral boundaries in a source distribution that produces a close approximation of the observed potential field. The technique has enjoyed widespread adoption and success in the Australian mineral exploration community — including "ground truth" via successfully drilling structures indicated by the worms. Unfortunately, to our knowledge, all implementations of the code to calculate the worms/multiscale edges (including Horowitz' original research code) are either part of commercial software packages, or have copyright restrictions that impede the use of the technique by the wider community. The technique is completely described mathematically in Hornby et al. (1999) along with some later publications. This enables us to re-implement from scratch the code required to calculate and visualize the worms. We are freely releasing the results under an (open source) BSD two-clause software license. A git repository is available at . We will give an overview of the technique, show code snippets using the codebase, and present visualization results for example datasets (including the Surat basin of Australia).

  18. Product information representation for feature conversion and implementation of group technology automated coding

    Science.gov (United States)

    Medland, A. J.; Zhu, Guowang; Gao, Jian; Sun, Jian

    1996-03-01

    Feature conversion, also called feature transformation and feature mapping, is defined as the process of converting features from one view of an object to another view of the object. In a relatively simple implementation, for each application the design features are automatically converted into features specific to that application. All modifications have to be made via the design features. This is the approach that has attracted most attention until now. In the ideal situation, however, conversions directly from application views to the design view, and to other application views, are also possible. In this paper, some difficulties faced in feature conversion are discussed. A new representation scheme for feature-based part models has been proposed for the purpose of one-way feature conversion. The part models consist of five different levels of abstraction, extending from an assembly level and its attributes, single parts and their attributes, single features and their attributes, one containing the geometric reference element, and finally one for detailed geometry. One implementation of feature conversion for rotational components within GT (Group Technology) has already been undertaken using an automated coding procedure operating on a design-feature database. This database has been generated by a feature-based design system, and the GT coding scheme used in this paper is a specific scheme created for a textile machine manufacturing plant. The feature conversion techniques presented here are only in their early stages of development, and further research is underway.

  19. Implementation of an iterative matching scheme for the Kapchinskij-Vladimirskij equations in the WARP code

    International Nuclear Information System (INIS)

    Chilton, Sven H.

    2008-01-01

    The WARP code is a robust electrostatic particle-in-cell simulation package used to model charged particle beams with strong space-charge forces. A fundamental operation associated with seeding detailed simulations of a beam transport channel is to generate initial conditions where the beam distribution is matched to the structure of a periodic focusing lattice. This is done by solving for periodic, matched solutions to a coupled set of ODEs called the Kapchinskij-Vladimirskij (KV) envelope equations, which describe the evolution of low-order beam moments subject to applied lattice focusing, space-charge defocusing, and thermal defocusing forces. Recently, an iterative numerical method was developed (Lund, Chilton, and Lee, 'Efficient computation of matched solutions to the KV envelope equations for periodic focusing lattices', Physical Review Special Topics - Accelerators and Beams 9, 064201 (2006)) to generate matching conditions in a highly flexible, convergent, and fail-safe manner. This method is extended and implemented in the WARP code as a Python package to vastly ease the setup of detailed simulations. In particular, the Python package accommodates any linear applied lattice focusing functions without skew coupling, and a more general set of beam parameter specifications than its predecessor. Lattice strength iteration tools were added to facilitate the implementation of problems with specific applied focusing strengths.

  20. Implementation of an iterative matching scheme for the Kapchinskij-Vladimirskij equations in the WARP code

    International Nuclear Information System (INIS)

    Chilton, Sven H.

    2008-01-01

    The WARP code is a robust electrostatic particle-in-cell simulation package used to model charged particle beams with strong space-charge forces. A fundamental operation associated with seeding detailed simulations of a beam transport channel is to generate initial conditions where the beam distribution is matched to the structure of a periodic focusing lattice. This is done by solving for periodic, matched solutions to a coupled set of ODEs called the Kapchinskij-Vladimirskij (KV) envelope equations, which describe the evolution of low-order beam moments subject to applied lattice focusing, space-charge defocusing, and thermal defocusing forces. Recently, an iterative numerical method was developed (Lund, Chilton, and Lee, 'Efficient computation of matched solutions to the KV envelope equations for periodic focusing lattices', Physical Review Special Topics - Accelerators and Beams 9, 064201 (2006)) to generate matching conditions in a highly flexible, convergent, and fail-safe manner. This method is extended and implemented in the WARP code as a Python package to vastly ease the setup of detailed simulations. In particular, the Python package accommodates any linear applied lattice focusing functions without skew coupling, and a more general set of beam parameter specifications than its predecessor. Lattice strength iteration tools were added to facilitate the implementation of problems with specific applied focusing strengths.
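
    The matched-envelope problem described in these two records can be illustrated on a simplified, axisymmetric KV-type envelope equation. The naive averaging relaxation below is our sketch of the general idea, not the fail-safe scheme of Lund, Chilton, and Lee, nor WARP's Python package; all parameters are illustrative.

      import numpy as np

      # Simplified axisymmetric envelope: r'' + kappa(s) r = K/r + eps^2/r^3.
      PERIOD, K_PERV, EMIT = 1.0, 5e-4, 1e-3

      def kappa(s):
          """Periodic hard-edge focusing: on during the first half of each period."""
          return 10.0 if (s % PERIOD) < 0.5 * PERIOD else 0.0

      def integrate_period(r0, rp0, steps=2000):
          """RK4 integration of the envelope over one lattice period."""
          ds = PERIOD / steps
          y, s = np.array([r0, rp0]), 0.0
          def f(s, y):
              r, rp = y
              return np.array([rp, -kappa(s) * r + K_PERV / r + EMIT**2 / r**3])
          for _ in range(steps):
              k1 = f(s, y)
              k2 = f(s + ds / 2, y + ds / 2 * k1)
              k3 = f(s + ds / 2, y + ds / 2 * k2)
              k4 = f(s + ds, y + ds * k3)
              y, s = y + ds / 6 * (k1 + 2 * k2 + 2 * k3 + k4), s + ds
          return y

      # Naive matching iteration: relax the initial condition toward the
      # fixed point of the one-period map (the matched envelope).
      r0, rp0 = 0.02, 0.0
      for it in range(200):
          r1, rp1 = integrate_period(r0, rp0)
          if abs(r1 - r0) < 1e-10 and abs(rp1 - rp0) < 1e-10:
              break
          r0, rp0 = 0.5 * (r0 + r1), 0.5 * (rp0 + rp1)
      print(f"matched: r0 = {r0:.6f}, r0' = {rp0:.6f} ({it} iterations)")

    For a stable lattice this averaging contracts toward the periodic solution; the published method is considerably more general and robust.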

  1. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
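
    A replicator network in the sense used above is an identity-mapping perceptron with a narrow middle hidden layer. A minimal sketch using scikit-learn as a stand-in, with the layer sizes, data, and denoising-style training target all being our assumptions rather than the paper's configuration:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Data on a 1-D manifold embedded in R^2, plus small "removable"
      # noise that pushes samples off the manifold.
      rng = np.random.default_rng(0)
      t = rng.uniform(-2.0, 2.0, 2000)
      clean = np.column_stack([t, np.sin(t)])
      noisy = clean + 0.05 * rng.standard_normal(clean.shape)

      # Replicator-style network: three hidden layers with a 1-unit
      # bottleneck, trained to minimize mean squared reconstruction error.
      net = MLPRegressor(hidden_layer_sizes=(32, 1, 32), activation="tanh",
                         solver="adam", max_iter=5000, tol=1e-7, random_state=0)
      net.fit(noisy, clean)

      recon = net.predict(noisy)
      print("reconstruction MSE:", float(np.mean((recon - clean) ** 2)))
      # The scalar activation of the bottleneck unit plays the role of a
      # natural coordinate along the manifold.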

  2. Implementation of decommissioning materials conditional clearance process to the OMEGA calculation code

    International Nuclear Information System (INIS)

    Zachar, Matej; Necas, Vladimir; Daniska, Vladimir

    2011-01-01

    The activities performed during the decommissioning of a nuclear installation inevitably lead to the production of a large amount of radioactive material to be managed. A significant part of this material has such a low radioactivity level that it can be released to the environment without any restriction on further use. On the other hand, materials with radioactivity slightly above the defined unconditional clearance level may be released conditionally for a specific purpose, in accordance with a developed scenario assuring that radiation exposure limits for the population are not exceeded. Managing decommissioning materials in this way could lead to the recycling and reuse of more solid material and save radioactive waste repository volume. In this paper, the implementation of the conditional release process in the OMEGA code, which is used for the calculation of decommissioning parameters, is analyzed in detail. The analytical approach to the material parameter assessment first assumes a definition of radiological limit conditions, based on the evaluation of possible scenarios for conditionally released materials, and their application to the appropriate sorter type in the existing material and radioactivity flow system. Other calculation procedures with relevant technological or economical parameters, mathematically describing e.g. final radiation monitoring or transport outside the locality, are applied to the OMEGA code in the next step. Together with the limits, the new procedures creating an independent material stream allow evaluation of the conditional material release process during decommissioning. Model calculations evaluating various scenarios with different input parameters and considering the conditional release of materials to the environment were performed to verify the implemented methodology. Output parameters and results of the model assessment are presented and discussed in the final part of the paper.

  3. SOURCES-3A: A code for calculating (α, n), spontaneous fission, and delayed neutron sources and spectra

    International Nuclear Information System (INIS)

    Perry, R.T.; Wilson, W.B.; Charlton, W.S.

    1998-04-01

    In many systems, it is imperative to have accurate knowledge of all significant sources of neutrons due to the decay of radionuclides. These sources can include neutrons resulting from the spontaneous fission of actinides, the interaction of actinide decay α-particles in (α,n) reactions with low- or medium-Z nuclides, and/or delayed neutrons from the fission products of actinides. Numerous systems exist in which these neutron sources could be important. These include, but are not limited to, clean and spent nuclear fuel (UO2, ThO2, MOX, etc.), enrichment plant operations (UF6, PuF4, etc.), waste tank studies, waste products in borosilicate glass or glass-ceramic mixtures, and weapons-grade plutonium in storage containers. SOURCES-3A is a computer code that determines neutron production rates and spectra from (α,n) reactions, spontaneous fission, and delayed neutron emission due to the decay of radionuclides in homogeneous media (i.e., a mixture of α-emitting source material and low-Z target material) and in interface problems (i.e., a slab of α-emitting source material in contact with a slab of low-Z target material). The code is also capable of calculating the neutron production rates due to (α,n) reactions induced by a monoenergetic beam of α-particles incident on a slab of target material. Spontaneous fission spectra are calculated with evaluated half-life, spontaneous fission branching, and Watt spectrum parameters for 43 actinides. The (α,n) spectra are calculated using an assumed isotropic angular distribution in the center-of-mass system with a library of 89 nuclide decay α-particle spectra, 24 sets of measured and/or evaluated (α,n) cross sections and product nuclide level branching fractions, and functional α-particle stopping cross sections for Z < 106. The delayed neutron spectra are taken from an evaluated library of 105 precursors. The code outputs the magnitude and spectra of the resultant neutron source. It also provides an
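
    One ingredient of such a code is easy to illustrate: sampling a Watt spontaneous-fission spectrum with the classical Maxwellian-shift trick. This sketch is ours, not SOURCES-3A source code, and the parameters are roughly those often quoted for Cf-252; evaluated data should be used for real work.

      import numpy as np

      rng = np.random.default_rng(42)

      def sample_maxwellian(T):
          """E ~ sqrt(E) exp(-E/T), via the standard two-log-cosine rule."""
          r1, r2, r3 = rng.random(3)
          return -T * (np.log(r1) + np.log(r2) * np.cos(np.pi * r3 / 2) ** 2)

      def sample_watt(a, b):
          """Watt spectrum exp(-E/a) sinh(sqrt(b E)): a Maxwellian variate
          shifted by a^2 b / 4 plus a correlated square-root term."""
          w = sample_maxwellian(a)
          return w + a * a * b / 4 + (2 * rng.random() - 1) * np.sqrt(a * a * b * w)

      # Watt parameters roughly as quoted for Cf-252 spontaneous fission.
      samples = [sample_watt(1.025, 2.926) for _ in range(100_000)]
      print(f"mean neutron energy ~ {np.mean(samples):.2f} MeV")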

  4. Time-dependent anisotropic distributed source capability in transient 3-d transport code tort-TD

    International Nuclear Information System (INIS)

    Seubert, A.; Pautz, A.; Becker, M.; Dagan, R.

    2009-01-01

    The transient 3-D discrete ordinates transport code TORT-TD has been extended to account for time-dependent anisotropic distributed external sources. The extension aims at the simulation of the pulsed neutron source in the YALINA-Thermal subcritical assembly. Since feedback effects are not relevant in this zero-power configuration, this offers a unique opportunity to validate the time-dependent neutron kinetics of TORT-TD with experimental data. The extensions made in TORT-TD to incorporate a time-dependent anisotropic external source are described. The steady state of the YALINA-Thermal assembly and its response to an artificial square-wave source pulse sequence have been analysed with TORT-TD using pin-wise homogenised cross sections in 18 prompt energy groups with P1 scattering order and 8 delayed neutron groups. The results demonstrate the applicability of TORT-TD to subcritical problems with a time-dependent external source. (authors)

  5. Imaging x-ray sources at a finite distance in coded-mask instruments

    International Nuclear Information System (INIS)

    Donnarumma, Immacolata; Pacciani, Luigi; Lapshov, Igor; Evangelista, Yuri

    2008-01-01

    We present a method for the correction of beam divergence when imaging sources at a finite distance through coded-mask instruments. We discuss the defocusing artifacts induced by the finite distance and show two different approaches to remove such spurious effects. We applied our method to one-dimensional (1D) coded-mask systems, although it is also applicable to two-dimensional systems. We provide a detailed mathematical description of the adopted method and of the systematics introduced in the reconstructed image (e.g., the fraction of source flux collected in the reconstructed peak counts). The accuracy of this method was tested by simulating pointlike and extended sources at a finite distance with the instrumental setup of the SuperAGILE experiment, the 1D coded-mask x-ray imager onboard the AGILE (Astro-rivelatore Gamma a Immagini Leggero) mission. We obtained reconstructed images of good quality and high source location accuracy. Finally, we show the results obtained by applying this method to real data collected during the calibration campaign of SuperAGILE. Our method was demonstrated to be a powerful tool for investigating the imaging response of the experiment, particularly the absorption due to the materials intercepting the line of sight of the instrument and the conversion between detector pixels and sky direction.
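
    The far-field coded-mask operation that this divergence correction builds on (encode through the mask, decode by cyclic correlation) can be sketched in one dimension; the finite-distance correction itself is not reproduced here, and all values are toy assumptions.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 127
      mask = (rng.random(n) < 0.5).astype(float)      # 1 = open mask element
      decoder = 2 * mask - 1                          # balanced decoding array

      sky = np.zeros(n)
      sky[40] = 100.0                                 # one point source

      # Far-field encoding: each detector pixel sees the sky through a
      # cyclically shifted copy of the mask, plus a noisy flat background.
      detector = np.array([np.roll(mask, -k) @ sky for k in range(n)])
      detector = detector + rng.poisson(5.0, n)

      # Decoding: cyclic cross-correlation with the balanced array puts a
      # peak at the source position and near-zero sidelobes elsewhere.
      image = np.array([np.roll(decoder, -m) @ detector for m in range(n)])
      print("reconstructed peak at pixel", int(np.argmax(image)))  # expect ~40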

  6. Hybrid digital-analog coding with bandwidth expansion for correlated Gaussian sources under Rayleigh fading

    Science.gov (United States)

    Yahampath, Pradeepa

    2017-12-01

    Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.

  7. Beyond the Business Model: Incentives for Organizations to Publish Software Source Code

    Science.gov (United States)

    Lindman, Juho; Juutilainen, Juha-Pekka; Rossi, Matti

    The software stack opened under Open Source Software (OSS) licenses is growing rapidly. Commercial actors have released considerable amounts of previously proprietary source code. These actions raise the question of why companies choose a strategy based on giving away software assets. Research on the outbound OSS approach has tried to answer this question with the concept of the “OSS business model”. When studying the reasons for code release, we have observed that the business model concept is too generic to capture the many incentives organizations have. Instead, in this paper we investigate empirically what the companies’ incentives are by means of an exploratory case study of three organizations in different stages of their code release. Our results indicate that the companies aim to promote standardization, obtain development resources, gain cost savings, improve the quality of software, increase the trustworthiness of software, or steer OSS communities. We conclude that future research on outbound OSS could benefit from focusing on the heterogeneous incentives for code release rather than on revenue models.

  8. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    Science.gov (United States)

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  9. Survey of source code metrics for evaluating testability of object oriented systems

    OpenAIRE

    Shaheen, Muhammad Rabee; Du Bousquet, Lydie

    2010-01-01

    Software testing is costly in terms of time and funds. Testability is a software characteristic that aims at producing systems that are easy to test. Several metrics have been proposed to identify testability weaknesses. But it is sometimes difficult to be convinced that those metrics are really related to testability. This article is a critical survey of the source-code-based metrics proposed in the literature for object-oriented software testability. It underlines the necessity to provide test...

  10. NEACRP comparison of source term codes for the radiation protection assessment of transportation packages

    International Nuclear Information System (INIS)

    Broadhead, B.L.; Locke, H.F.; Avery, A.F.

    1994-01-01

    The results for Problems 5 and 6 of the NEACRP code comparison as submitted by six participating countries are presented in summary. These problems concentrate on the prediction of the neutron and gamma-ray sources arising in fuel after a specified irradiation, the fuel being uranium oxide for problem 5 and a mixture of uranium and plutonium oxides for problem 6. In both problems the predicted neutron sources are in good agreement for all participants. For gamma rays, however, there are differences, largely due to the omission of bremsstrahlung in some calculations

  11. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly; Pettersson, Gustav M.; Kostina, Victoria; Hassibi, Babak

    2017-01-01

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  12. Multi-rate control over AWGN channels via analog joint source-channel coding

    KAUST Repository

    Khina, Anatoly

    2017-01-05

    We consider the problem of controlling an unstable plant over an additive white Gaussian noise (AWGN) channel with a transmit power constraint, where the signaling rate of communication is larger than the sampling rate (for generating observations and applying control inputs) of the underlying plant. Such a situation is quite common since sampling is done at a rate that captures the dynamics of the plant and which is often much lower than the rate that can be communicated. This setting offers the opportunity of improving the system performance by employing multiple channel uses to convey a single message (output plant observation or control input). Common ways of doing so are through either repeating the message, or by quantizing it to a number of bits and then transmitting a channel coded version of the bits whose length is commensurate with the number of channel uses per sampled message. We argue that such “separated source and channel coding” can be suboptimal and propose to perform joint source-channel coding. Since the block length is short we obviate the need to go to the digital domain altogether and instead consider analog joint source-channel coding. For the case where the communication signaling rate is twice the sampling rate, we employ the Archimedean bi-spiral-based Shannon-Kotel'nikov analog maps to show significant improvement in stability margins and linear-quadratic Gaussian (LQG) costs over simple schemes that employ repetition.

  13. D-DSC: Decoding Delay-based Distributed Source Coding for Internet of Sensing Things.

    Science.gov (United States)

    Aktas, Metin; Kuscu, Murat; Dinc, Ergin; Akan, Ozgur B

    2018-01-01

    Spatial correlation between densely deployed sensor nodes in a wireless sensor network (WSN) can be exploited to reduce power consumption through a proper source coding mechanism such as distributed source coding (DSC). In this paper, we propose Decoding Delay-based Distributed Source Coding (D-DSC) to improve the energy efficiency of classical DSC by employing the decoding delay concept, which enables the use of the maximum correlated portion of sensor samples during event estimation. In D-DSC, the network is partitioned into clusters, where the clusterheads communicate their uncompressed samples carrying the side information, and the cluster members send their compressed samples. The sink performs joint decoding of the compressed and uncompressed samples and then reconstructs the event signal using the decoded sensor readings. Based on the observed degree of correlation among sensor samples, the sink dynamically updates and broadcasts the varying compression rates back to the sensor nodes. Simulation results for the performance evaluation reveal that D-DSC can achieve reliable and energy-efficient event communication and estimation for practical signal detection/estimation applications with a massive number of sensors, towards the realization of the Internet of Sensing Things (IoST).

  14. The Norwegian system for implementing the IAEA code of practice based on absorbed dose to water

    International Nuclear Information System (INIS)

    Bjerke, H.

    2002-01-01

    The Norwegian Radiation Protection Authority (NRPA) SSDL recommended in 2000 the use of absorbed dose to water as the quantity for calibration and the code of practice in radiotherapy. An absorbed dose to water standard traceable to the BIPM was established in Norway in 1995, while the international code of practice, IAEA TRS 398, was under preparation. As part of the implementation of the new dosimetry system, the SSDL visited the radiotherapy departments in Norway in 2001. The aims of the visits were to: prepare and support the users in the implementation of TRS 398 by teaching, discussions and measurements on-site; gain experience for NRPA in the practical implementation of TRS 398 and perform comparisons between TRS 277 and TRS 398 for different beam qualities; and report experience from the implementation of TRS 398 to the IAEA. The NRPA 30×30×30 cm³ water phantom is equal to the BIPM calibration phantom. This was used for the photon measurements in 16 different beams. NRPA used three chambers for the photon measurements: NE 2571, NE 2611 and PR06C. As a quality control, the set-up was compared with the Finnish site-visit equipment at the University Hospital of Helsinki, and the measured absorbed dose to water agreed within 0.6%. The Finnish SSDL calibrated the Norwegian chambers, and the absorbed dose to water calibration factors given by the two SSDLs for the three chambers agreed within 0.3%. Local clinical dosimetry in Norway was based on TRS 277. For the site visits, the absorbed dose to water was determined by NRPA using its own equipment, including the three chambers and the hospital's reference chamber. The hospital determined the dose the same evening using its local equipment. For the 16 photon beams, the deviations between the two absorbed dose to water determinations for TRS 277 were in the range -1.7% to +4.0%. The uncertainty in the measurements was 1% (k=1). The deviations were explained by the local implementation of TRS 277, the use of plastic phantoms, and no recent calibration of

  15. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

    Full Text Available The moderator system of CANDU, a prototype PHWR (pressurized heavy-water reactor), has been modeled in multiple dimensions for computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested on modeled hydrothermal systems of heavy-water reactors: the commercial codes COMSOL Multiphysics and ANSYS-CFX, and the open-source code OpenFOAM. They are applied to various simplified and practical problems. All the implemented computational codes are tested on a benchmark problem of the STERN laboratory experiment with precise modeling of the tubes, and compared with each other, with the measured data, and with a porous model based on the experimental correlation of pressure drop. The effect of the turbulence model is also discussed for these low-Reynolds-number flows. As a result, the codes are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  16. Implementation Of Code And Carrier Tracking Loops For Software GPS Receivers

    Directory of Open Access Journals (Sweden)

    Win Kay Khaing

    2015-06-01

    Full Text Available Abstract GPS plays a very important role in our modern mobile societies. A software approach is much more flexible than traditional hardware receivers. A soft-GPS receiver includes two portions: hardware and software. The hardware portion contains an antenna, a filter, a down-converter from RF (radio frequency) to IF (intermediate frequency), and an ADC (analog-to-digital converter). The software portion performs signal processing such as acquisition, tracking and navigation, and runs on a general-purpose processor. The GPS signal is taken from the N-FUELS (Full Educational Library of Signals for Navigation) signal simulator. The heart of a soft-GPS receiver is the synchronization processes, i.e., acquisition and tracking. In tracking, there are two main loops, for code and carrier tracking. The objective of this paper is to analyse and find the optimum discriminator function for the code tracking loop in soft-GPS receivers. The delay lock loop (DLL) is a well-known technique to track the codes of GNSS spread spectrum systems. This paper also presents non-coherent square-law DLLs and the impact of some parameters on DLL discriminators, such as the number of samples per chip, the early-late spacing, different C/N0 values (where C denotes the signal power and N0 the noise spectral density), and the presence or absence of a front-end device. The discriminator outputs are illustrated using S-curves. Testing results with a real GPS signal are also described. The optimized discriminator functions can be implemented in any soft-GPS receiver.
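
    A non-coherent early-late power discriminator of the kind analysed in this paper can be evaluated directly as an S-curve. The sketch below uses a stand-in random PRN sequence and assumed parameters (8 samples per chip, 0.5-chip early-late spacing), not the paper's setup.

      import numpy as np

      rng = np.random.default_rng(7)
      chips = 2 * rng.integers(0, 2, 1023) - 1         # stand-in PRN sequence
      samples_per_chip = 8
      code = np.repeat(chips, samples_per_chip).astype(float)

      def corr(offset):
          """Cyclic correlation of the local code with a shifted replica."""
          return np.roll(code, offset) @ code / code.size

      def discriminator(tau_chips, spacing_chips=0.5):
          """Non-coherent early-late power discriminator, E^2 - L^2."""
          d = int(round(spacing_chips / 2 * samples_per_chip))
          t = int(round(tau_chips * samples_per_chip))
          early, late = corr(t - d), corr(t + d)
          return early**2 - late**2

      # S-curve: discriminator output versus code phase error.
      for tau in np.linspace(-1.0, 1.0, 9):
          print(f"tau = {tau:+.2f} chips -> D = {discriminator(tau):+.4f}")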

  17. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-01-01

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  18. Implementation of a 3D plasma particle-in-cell code on a MIMD parallel computer

    International Nuclear Information System (INIS)

    Liewer, P.C.; Lyster, P.; Wang, J.

    1993-01-01

    A three-dimensional plasma particle-in-cell (PIC) code has been implemented on the Intel Delta MIMD parallel supercomputer using the General Concurrent PIC (GCPIC) algorithm. The GCPIC algorithm uses a domain decomposition to divide the computation among the processors: a processor is assigned a subdomain and all the particles in it. Particles must be exchanged between processors as they move. Results are presented comparing the efficiency for 1-, 2- and 3-dimensional partitions of the three-dimensional domain. The algorithm has been found to be very efficient even when a large fraction (e.g., 30%) of the particles must be exchanged at every time step. On the 512-node Intel Delta, up to 125 million particles have been pushed with an electrostatic push time of under 500 ns/particle/time step.
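
    The bookkeeping at the heart of the GCPIC decomposition, deciding after each push which particles have left their processor's subdomain, can be sketched serially; interprocessor communication itself is omitted, and all values are toy assumptions.

      import numpy as np

      # Serial sketch of GCPIC bookkeeping: a 1-D slab decomposition among
      # "processors" and the per-step particle exchange after the push.
      num_procs, length = 4, 1.0
      edges = np.linspace(0.0, length, num_procs + 1)

      rng = np.random.default_rng(0)
      x = rng.random(100_000) * length                 # particle positions
      v = 0.05 * rng.standard_normal(x.size)           # particle velocities
      owner = np.digitize(x, edges[1:-1])              # subdomain of each particle

      x = (x + 0.1 * v) % length                       # push with periodic wrap
      new_owner = np.digitize(x, edges[1:-1])

      moved = new_owner != owner
      print(f"{moved.mean():.1%} of particles must be exchanged this step")
      # On the real machine, each node packs its departing particles by
      # destination neighbour and posts the sends before the next deposit.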

  19. BREESE-II: auxiliary routines for implementing the albedo option in the MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cain, V.R.; Emmett, M.B.

    1979-07-01

    The routines in the BREESE package implement the albedo option in the MORSE Monte Carlo Code by providing (1) replacements for the default routines ALBIN and ALBDO in the MORSE Code, (2) an estimating routine ALBDOE compatible with the SAMBO package in MORSE, and (3) a separate program that writes a tape of albedo data in the proper format for ALBIN. These extensions of the package initially reported in 1974 were performed jointly by ORNL, Bechtel Power Corporation, and Science Applications, Inc. The first version of BREESE had a fixed number of outgoing polar angles and the number of outgoing azimuthal angles was a function of the value of the outgoing polar angle only. An examination of differential albedo data led to this modified version which allows the number of outgoing polar angles to be dependent upon the value of the incoming polar angle and the number of outgoing azimuthal angles to be a function of the value of both incoming and outgoing polar angles

  20. Implementation and Analysis Audio Steganography Used Parity Coding for Symmetric Cryptography Key Delivery

    Directory of Open Access Journals (Sweden)

    Afany Zeinata Firdaus

    2013-12-01

    Full Text Available In today's era of communication, online data transactions are increasing, and ever more information is accessible for both upload and download. A capable security system is therefore required. Blowfish cryptography combined with audio steganography is one way to secure data so that they cannot be accessed by unauthorized parties. In this study, an audio steganography technique is implemented using the parity coding method to deliver the Blowfish cryptographic key in an Android-based e-commerce application. The results show that the average computation time of the insertion (embedding stage of the secret message is shorter than the average computation time of the retrieval (extracting stage. The test results also show that the more characters are embedded, the greater the received noise: the highest SNR, obtained when 506 characters were embedded, is 11.9905 dB, while the lowest SNR, obtained when 2006 characters were embedded, is 5.6897 dB. Keywords: audio steganography, parity coding, embedding, extracting, Blowfish cryptography.
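
    Parity coding itself is simple to sketch: each region of samples carries one bit in the parity of its least significant bits, and at most one sample per region is flipped. A minimal illustration, ours rather than the paper's Android implementation; region size and key length are assumptions.

      import numpy as np

      def embed_parity(samples, bits, region=64):
          """Hide bits in 16-bit audio via parity coding: the parity of the
          LSBs in each region encodes one bit; if it disagrees, one
          sample's LSB is flipped (an inaudible +/-1 change)."""
          out = samples.copy()
          for i, bit in enumerate(bits):
              seg = out[i * region:(i + 1) * region]
              if (int(np.bitwise_and(seg, 1).sum()) & 1) != bit:
                  seg[0] ^= 1
          return out

      def extract_parity(samples, n_bits, region=64):
          return [int(np.bitwise_and(samples[i * region:(i + 1) * region], 1).sum()) & 1
                  for i in range(n_bits)]

      rng = np.random.default_rng(0)
      audio = rng.integers(-2**15, 2**15, 64 * 128, dtype=np.int16)
      key_bits = [int(b) for b in rng.integers(0, 2, 128)]   # e.g. a session key
      stego = embed_parity(audio, key_bits)
      assert extract_parity(stego, 128) == key_bits
      print("key recovered; max sample change:", int(np.abs(stego - audio).max()))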

  1. Design, implementation and verification of software code for radiation dose assessment based on simple generic environmental model

    International Nuclear Information System (INIS)

    I Putu Susila; Arif Yuniarto

    2017-01-01

    Radiation dose assessment to determine the potential radiological impacts of the various installations within a nuclear facility complex is necessary to ensure environmental and public safety. A simple generic model-based method for calculating radiation doses caused by the release of radioactive substances into the environment has been published by the International Atomic Energy Agency (IAEA) as Safety Report Series No. 19 (SRS-19). To assist the application of the assessment method, and to serve as a basis for the development of more complex assessment methods, an open-source software code has been designed and implemented. The software comes with maps and is very easy to use because assessment scenarios can be set up through diagrams. Software verification was performed by comparing its results to SRS-19 and CROM software calculation results. Doses estimated by SRS-19 are higher than the results of the developed software; however, this is still acceptable since dose estimation in SRS-19 is based on a conservative approach. On the other hand, compared to the CROM software, the same results were obtained for three scenarios, and a non-significant difference of 2.25% in another scenario. These results indicate the correctness of our implementation and imply that the developed software is ready for use in real scenarios. In the future, various features and new models need to be added to improve the capability of the software. (author)

  2. Implementation of data management and effect on chronic disease coding in a primary care organisation: A parallel cohort observational study.

    Science.gov (United States)

    Greiver, Michelle; Wintemute, Kimberly; Aliarzadeh, Babak; Martin, Ken; Khan, Shahriar; Jackson, Dave; Leggett, Jannet; Lambert-Lanning, Anita; Siu, Maggie

    2016-10-12

    Consistent and standardized coding for chronic conditions is associated with better care; however, such coding may currently be limited in the electronic medical records (EMRs) used in Canadian primary care. Objectives: To implement data management activities in a community-based primary care organisation and to evaluate the effects on coding for chronic conditions. Fifty-nine family physicians in Toronto, Ontario, belonging to a single primary care organisation, participated in the study. The organisation implemented a central analytical data repository containing their EMR data, extracted, cleaned, standardized and returned by the Canadian Primary Care Sentinel Surveillance Network (CPCSSN), a large validated primary care EMR-based database. They used reporting software provided by CPCSSN to identify selected chronic conditions, and standardized codes were then added back to the EMR. We studied four chronic conditions (diabetes, hypertension, chronic obstructive pulmonary disease and dementia). We compared changes in coding over six months for physicians in the organisation with changes for 315 primary care physicians participating in CPCSSN across Canada. Chronic disease coding within the organisation increased significantly more than in other primary care sites; the adjusted difference in the increase in coding was 7.7% (95% confidence interval 7.1%-8.2%). Data management activities were associated with an increase in standardized coding for chronic conditions. Exploring requirements to scale and spread this approach in Canadian primary care organisations may be worthwhile.

  3. Implementation of the kinetics in the transport code AZTRAN

    Energy Technology Data Exchange (ETDEWEB)

    Duran G, J. A.; Del Valle G, E. [IPN, Escuela Superior de Fisica y Matematicas, Av. IPN s/n, San Pedro Zacatenco, 07738 Ciudad de Mexico (Mexico); Gomez T, A. M., E-mail: redfield1290@gmail.com [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2017-09-15

    This paper presents the implementation of time dependence in the three-dimensional transport code AZTRAN (AZtlan TRANsport), which belongs to the AZTLAN platform for the analysis of nuclear reactors (currently under development). With this implementation, the AZTRAN code is able to numerically solve the time-dependent transport equation in XYZ geometry, for several energy groups, using the discrete ordinates method SN for the discretization of the angular variable, the nodal method RTN-0 for the spatial discretization, and method 0 for the discretization in time. Initially, the code only solved the neutron transport equation in steady state, so the temporal part was implemented by integrating the neutron transport equation with respect to time, together with the balance equations corresponding to the concentrations of delayed neutron precursors, to which method 0 was applied. After the kinetics had been implemented directly, the improved quasi-static method was implemented as a tool for reducing computation time. In this method the angular flux is factorized into the product of two functions, called the shape function and the amplitude function; the first is calculated over long time steps, called macro-steps, and the second is solved over small time steps, called micro-steps. With the new version of AZTRAN, several benchmark problems taken from the literature were simulated. The problems are two- and three-dimensional and allowed the accuracy and stability of the code to be corroborated, showing in general good behavior on the reference tests. (Author)

  4. Verification of Advective Bar Elements Implemented in the Aria Thermal Response Code.

    Energy Technology Data Exchange (ETDEWEB)

    Mills, Brantley [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-01-01

    A verification effort was undertaken to evaluate the implementation of the new advective bar capability in the Aria thermal response code. Several approaches to the verification process were taken: a mesh refinement study to demonstrate solution convergence in the fluid and the solid, visual examination of the mapping of the advective bar element nodes to the surrounding surfaces, and a comparison of solutions produced using the advective bars for simple geometries with solutions from commercial CFD software. The mesh refinement study showed solution convergence for simple pipe flow in both temperature and velocity. Guidelines are provided to achieve appropriate meshes between the advective bar elements and the surrounding volume. Simulations of pipe flow using advective bar elements in Aria have been compared to simulations using the commercial CFD software ANSYS Fluent® and provided comparable solutions in temperature and velocity, supporting proper implementation of the new capability.

  5. The European source term code ESTER - basic ideas and tools for coupling of ATHLET and ESTER

    International Nuclear Information System (INIS)

    Schmidt, F.; Schuch, A.; Hinkelmann, M.

    1993-04-01

    The French software house CISI and IKE of the University of Stuttgart developed, during 1990 and 1991 and within the frame of the Shared Cost Action Reactor Safety, the informatic structure of the European Source TERm Evaluation System (ESTER). Through this work, tools became available which allow both code development and code application in the area of severe core accident research to be unified on a European basis. The behaviour of reactor cores is determined by thermal-hydraulic conditions. Therefore, for the development of ESTER it was important to investigate how to integrate thermal-hydraulic code systems with ESTER applications. This report describes the basic ideas of ESTER and improvements of the ESTER tools in view of a possible coupling of the thermal-hydraulic code system ATHLET and ESTER. Thanks to the work performed during this project, the ESTER tools became the most modern informatic tools presently available in the area of severe accident research. A sample application is given which demonstrates the use of the new tools. (orig.) [de]

  6. Sensitivity analysis and benchmarking of the BLT low-level waste source term code

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1993-07-01

    To evaluate the source term for low-level waste disposal, a comprehensive model has been developed and incorporated into a computer code called BLT (Breach-Leach-Transport). Since the release of the original version, many new features and improvements have been added to the Leach model of the code. This report consists of two different studies based on the new version of the BLT code: (1) a series of verification/sensitivity tests; and (2) benchmarking of the BLT code using field data. Based on the results of the verification/sensitivity tests, the authors conclude that the new version represents a significant improvement and is capable of providing more realistic simulations of the leaching process. Benchmarking work was carried out to provide a reasonable level of confidence in the model predictions. In this study, the experimentally measured release curves for nitrate, technetium-99 and tritium from the saltstone lysimeters operated by Savannah River Laboratory were used. The model results are in general agreement with the experimental data, within acceptable limits of uncertainty.

  7. Implementation of the SAMPO computer code in the Cyber 170-750

    International Nuclear Information System (INIS)

    Chagas, E.F.; Liguori Neto, R.; Gomes, P.R.S.

    1985-01-01

    The SAMPO code, in the available version, incorporates algorithms that determine energy, efficiency and peak shape. The code also includes processing subroutines that provide automatic surveys of peaks, listing all their characteristics. The handling of the code has been improved and its analysing capacity in each region of the spectrum has been extended. Practical information regarding the use of the code is enclosed. The tests performed confirm the good performance of the SAMPO code on the Cyber system at IEAv. (Author) [pt]

  8. Analyzing Capabilities of Commercial and Open-Source Routers to Implement Atomic BGP

    Directory of Open Access Journals (Sweden)

    A. Cvjetić

    2010-06-01

    Full Text Available The paper analyzes implementations of BGP protocol on commercial and open-source routers and presents how some existing BGP extensions and routing table isolation mechanisms may be used to solve issues found in standard BGP implementation.

  9. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.

  10. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications.

    Science.gov (United States)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J

    2004-09-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.

  11. Implementation of the DPM Monte Carlo code on a parallel architecture for treatment planning applications

    International Nuclear Information System (INIS)

    Tyagi, Neelam; Bose, Abhijit; Chetty, Indrin J.

    2004-01-01

    We have parallelized the Dose Planning Method (DPM), a Monte Carlo code optimized for radiotherapy class problems, on distributed-memory processor architectures using the Message Passing Interface (MPI). Parallelization has been investigated on a variety of parallel computing architectures at the University of Michigan-Center for Advanced Computing, with respect to efficiency and speedup as a function of the number of processors. We have integrated the parallel pseudo random number generator from the Scalable Parallel Pseudo-Random Number Generator (SPRNG) library to run with the parallel DPM. The Intel cluster, consisting of 800 MHz Intel Pentium III processors, shows an almost linear speedup up to 32 processors for simulating 1×10^8 or more particles. The speedup results are nearly linear on an Athlon cluster (up to 24 processors based on availability), which consists of 1.8 GHz+ Advanced Micro Devices (AMD) Athlon processors, on increasing the problem size up to 8×10^8 histories. For a smaller number of histories (1×10^8) the reduction of efficiency with the Athlon cluster (down to 83.9% with 24 processors) occurs because the processing time required to simulate 1×10^8 histories is less than the time associated with interprocessor communication. A similar trend was seen with the Opteron cluster (consisting of 1400 MHz, 64-bit AMD Opteron processors) on increasing the problem size. Because of the 64-bit architecture, Opteron processors are capable of storing and processing instructions at a faster rate and hence are faster than the 32-bit Athlon processors. We have validated our implementation with an in-phantom dose calculation study using a parallel pencil monoenergetic electron beam of 20 MeV energy. The phantom consists of layers of water, lung, bone, aluminum, and titanium. The agreement in the central axis depth dose curves and profiles at different depths shows that the serial and parallel codes are equivalent in accuracy.
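
    The parallelization pattern described in these two records (identical physics on every rank, independent random streams, histories split among processors, one final reduction) can be sketched with mpi4py. The "transport" below is a toy stand-in, not DPM physics, and SPRNG is replaced by independently seeded numpy generators.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      total_histories = 10_000_000
      local = total_histories // size + (rank < total_histories % size)
      rng = np.random.default_rng(12345 + rank)          # per-rank stream

      # Toy "history": an exponential pathlength scored on a depth tally
      # (a real code would transport particles through the layered phantom).
      depths = rng.exponential(scale=5.0, size=local)
      local_tally, _ = np.histogram(depths, bins=50, range=(0.0, 25.0))

      global_tally = np.zeros_like(local_tally)
      comm.Reduce(local_tally, global_tally, op=MPI.SUM, root=0)
      if rank == 0:
          print("histories:", total_histories, "| peak bin:", int(global_tally.argmax()))

    Run with, e.g., mpiexec -n 8 python sketch.py; as the abstract notes, efficiency drops once per-rank work no longer dominates the communication time.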

  12. A Paradigm Shift in the Implementation of Ethics Codes in Construction Organizations in Hong Kong: Towards an Ethical Behaviour.

    Science.gov (United States)

    Ho, Christabel Man-Fong; Oladinrin, Olugbenga Timo

    2018-01-30

    Due to economic globalization, which is characterized by business scandals, scholars and practitioners are increasingly engaged in the implementation of codes of ethics as a regulatory mechanism for stimulating ethical behaviours within an organization. The aim of this study is to examine various organizational practices regarding the effective implementation of codes of ethics within construction contracting companies. Views on ethics management in construction organizations, together with recommendations for improvement, were gleaned through 19 semi-structured interviews involving construction practitioners from various construction companies in Hong Kong. The findings suggest some practices for the effective implementation of codes of ethics in order to diffuse ethical behaviours in an organizational setting, which include: the introduction of effective reward schemes, the arrangement of ethics training for employees, and leadership responsiveness to reported wrongdoings. Since most construction companies in Hong Kong have codes of ethics, emphasis is placed on the practical implementation of the codes within the organizations. Hence, implications are drawn from the recommended measures to guide construction companies and policy makers.

  13. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation

    International Nuclear Information System (INIS)

    Kim, Sangroh; Yoshizumi, Terry T; Yin Fangfang; Chetty, Indrin J

    2013-01-01

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan—scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the ‘ISource = 8: Phase-Space Source Incident from Multiple Directions’ in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the

  14. Spiral computed tomography phase-space source model in the BEAMnrc/EGSnrc Monte Carlo system: implementation and validation.

    Science.gov (United States)

    Kim, Sangroh; Yoshizumi, Terry T; Yin, Fang-Fang; Chetty, Indrin J

    2013-04-21

    Currently, the BEAMnrc/EGSnrc Monte Carlo (MC) system does not provide a spiral CT source model for the simulation of spiral CT scanning. We developed and validated a spiral CT phase-space source model in the BEAMnrc/EGSnrc system. The spiral phase-space source model was implemented in the DOSXYZnrc user code of the BEAMnrc/EGSnrc system by analyzing the geometry of spiral CT scan-scan range, initial angle, rotational direction, pitch, slice thickness, etc. Table movement was simulated by changing the coordinates of the isocenter as a function of beam angles. Some parameters such as pitch, slice thickness and translation per rotation were also incorporated into the model to make the new phase-space source model, designed specifically for spiral CT scan simulations. The source model was hard-coded by modifying the 'ISource = 8: Phase-Space Source Incident from Multiple Directions' in the srcxyznrc.mortran and dosxyznrc.mortran files in the DOSXYZnrc user code. In order to verify the implementation, spiral CT scans were simulated in a CT dose index phantom using the validated x-ray tube model of a commercial CT simulator for both the original multi-direction source (ISOURCE = 8) and the new phase-space source model in the DOSXYZnrc system. Then the acquired 2D and 3D dose distributions were analyzed with respect to the input parameters for various pitch values. In addition, surface-dose profiles were also measured for a patient CT scan protocol using radiochromic film and were compared with the MC simulations. The new phase-space source model was found to simulate the spiral CT scanning in a single simulation run accurately. It also produced the equivalent dose distribution of the ISOURCE = 8 model for the same CT scan parameters. The MC-simulated surface profiles were well matched to the film measurement overall within 10%. The new spiral CT phase-space source model was implemented in the BEAMnrc/EGSnrc system. This work will be beneficial in estimating the spiral
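
    The table-movement idea in this model, shifting the isocenter as a function of gantry angle, reduces to a few lines. A schematic sketch follows; the function and parameter names are ours, not DOSXYZnrc's, and the single-slice pitch convention is an assumption.

        # Schematic isocenter translation for a spiral scan (names are illustrative).
        import numpy as np

        def spiral_isocenter_z(angles_deg, z_start, pitch, slice_thickness_cm, direction=1):
            """Isocenter z-coordinate at each beam angle.

            Assumes translation per rotation = pitch * slice thickness
            (single-slice convention); the table advances that amount
            per 360 degrees of gantry rotation.
            """
            per_rotation = pitch * slice_thickness_cm
            return z_start + direction * per_rotation * np.asarray(angles_deg) / 360.0

        # Four rotations sampled every 10 degrees, pitch 1.5, 0.5 cm slices
        angles = np.arange(0.0, 4 * 360.0, 10.0)
        z = spiral_isocenter_z(angles, z_start=0.0, pitch=1.5, slice_thickness_cm=0.5)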

  15. Chronos sickness: digital reality in Duncan Jones’s Source Code

    Directory of Open Access Journals (Sweden)

    Marcia Tiemy Morita Kawamoto

    2017-01-01

    Full Text Available http://dx.doi.org/10.5007/2175-8026.2017v70n1p249 The advent of digital technologies has unquestionably affected cinema. The indexical relation to, and realistic effect of, the photographed world much praised by André Bazin and Roland Barthes is just one of the affected aspects. This article discusses cinema in light of the new digital possibilities, reflecting on Steven Shaviro’s consideration of “how a nonindexical realism might be possible” (63) and how in fact a new kind of reality, a digital one, might emerge in the science fiction film Source Code (2011) by Duncan Jones.

  16. A Review on the Implementation of Nonlinear Source Emulators

    DEFF Research Database (Denmark)

    Nguyen-Duy, Khiem; Knott, Arnold; Andersen, Michael A. E.

    2014-01-01

    Renewable energy sources are playing an important role in industry as green sources of energy to reduce carbon dioxide emissions. They possess electrically nonlinear voltage-current characteristics. In the test and development of the downstream converters that utilize these renewable types of...

  17. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamic memory allocation and extension of the model...

  18. Domain-Specific Acceleration and Auto-Parallelization of Legacy Scientific Code in FORTRAN 77 using Source-to-Source Compilation

    OpenAIRE

    Vanderbauwhede, Wim; Davidson, Gavin

    2017-01-01

    Massively parallel accelerators such as GPGPUs, manycores and FPGAs represent a powerful and affordable tool for scientists who look to speed up simulations of complex systems. However, porting code to such devices requires a detailed understanding of heterogeneous programming tools and effective strategies for parallelization. In this paper we present a source-to-source compilation approach with whole-program analysis to automatically transform single-threaded FORTRAN 77 legacy code into Ope...

  19. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  20. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics
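
    As a concrete instance of the analogy, the rate-distortion function of a memoryless Bernoulli(p) source under Hamming distortion has the textbook closed form below; in the chain-of-particles picture, the free energy difference is proportional to R(D) and the contracting force to its derivative (the proportionality is the paper's claim; the formulas themselves are standard).

        % Rate-distortion function of a Bernoulli(p) source, Hamming distortion
        R(D) = h_2(p) - h_2(D), \qquad 0 \le D \le \min(p,\, 1-p),
        \quad\text{where}\quad h_2(x) = -x \log_2 x - (1-x)\log_2(1-x).
        % The "contracting force" of the analogy is proportional to the slope
        R'(D) = -h_2'(D) = \log_2 \frac{D}{1-D}.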

  1. Coded aperture detector for high precision gamma-ray burst source locations

    International Nuclear Information System (INIS)

    Helmken, H.; Gorenstein, P.

    1977-01-01

    Coded aperture collimators in conjunction with position-sensitive detectors are very useful in the study of transient phenomena because they combine a broad field of view, high sensitivity, and the ability to locate sources precisely. Since the preceding conference, a series of computer simulations of various detector designs has been carried out with the aid of a CDC 6400. Particular emphasis was placed on the development of a unit consisting of a one-dimensional random or periodic collimator in conjunction with a two-dimensional position-sensitive xenon proportional counter. A configuration involving four of these units has been incorporated into the preliminary design study of the Transient Explorer (ATREX) satellite and is applicable to any SAS- or HEAO-type satellite mission. Results of this study, including detector response, fields of view, and source location precision, will be presented
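
    The location principle behind such a unit can be illustrated in one dimension: the mask casts a shifted copy of its pattern onto the position-sensitive detector, and cross-correlating the recorded counts with the mask recovers the shift, hence the source direction. A toy sketch follows; the mask pattern and dimensions are invented, not the ATREX design.

        # Toy 1D coded-aperture reconstruction by cross-correlation.
        import numpy as np

        rng = np.random.default_rng(0)

        # Random 1D mask: 1 = open, 0 = opaque (stand-in for the flight mask)
        mask = rng.integers(0, 2, size=64)

        # A point source at offset `shift` projects a shifted mask onto the detector
        shift = 17
        detector = np.roll(mask, shift).astype(float)
        detector += rng.poisson(0.2, size=mask.size)   # counting noise

        # Cross-correlate counts with the mask; the peak locates the source
        corr = np.array([np.dot(detector, np.roll(mask, s)) for s in range(mask.size)])
        print("estimated shift:", int(np.argmax(corr)))  # ~17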

  2. Implementation of domain decomposition and data decomposition algorithms in RMC code

    International Nuclear Information System (INIS)

    Liang, J.G.; Cai, Y.; Wang, K.; She, D.

    2013-01-01

    The application of the Monte Carlo method in reactor physics analysis is somewhat restricted by the excessive memory demand of solving large-scale problems. Memory demand in MC simulation is analyzed first; it comprises geometry data, nuclear cross-section data, particle data, and tally data. It appears that tally data dominate the memory cost and should be the focus in solving the memory problem. Domain decomposition and tally data decomposition algorithms are separately designed and implemented in the reactor Monte Carlo code RMC. Basically, the domain decomposition algorithm is a 'divide and rule' strategy: problems are divided into sub-domains to be dealt with separately, and rules are established to make sure the overall results are correct. Tally data decomposition consists of two parts: data partition and data communication. Two algorithms with different communication synchronization mechanisms are proposed. Numerical tests have been executed to evaluate the performance of the new algorithms. The domain decomposition algorithm shows potential to speed up MC simulation as a spatially parallel method. As for the tally data decomposition algorithms, memory size is greatly reduced
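
    A minimal sketch of the tally-data-decomposition idea, assuming mpi4py (RMC itself is a compiled code, so all names here are illustrative): each rank owns one contiguous slice of the global tally array, and scores destined for remote slices are exchanged in a collective, so no rank ever holds the full tally.

        # Sketch: each MPI rank owns a slice of the tally; scores are routed to owners.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        N_TALLY = 1_000_000                          # global tally bins
        counts = [N_TALLY // size + (r < N_TALLY % size) for r in range(size)]
        offsets = np.cumsum([0] + counts)            # rank r owns [offsets[r], offsets[r+1])
        local = np.zeros(counts[rank])               # ~1/size of the tally memory per rank

        # Scores produced during this rank's particle transport (toy data)
        rng = np.random.default_rng(rank)
        scores = rng.integers(0, N_TALLY, size=10_000)

        # Bucket scores by owning rank, exchange in one collective, then accumulate
        buckets = [scores[(scores >= offsets[r]) & (scores < offsets[r + 1])]
                   for r in range(size)]
        received = comm.alltoall(buckets)
        for batch in received:
            np.add.at(local, np.asarray(batch) - offsets[rank], 1.0)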

  3. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey

    OpenAIRE

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospi...

  4. PRIMUS: a computer code for the preparation of radionuclide ingrowth matrices from user-specified sources

    International Nuclear Information System (INIS)

    Hermann, O.W.; Baes, C.F. III; Miller, C.W.; Begovich, C.L.; Sjoreen, A.L.

    1984-10-01

    The computer program PRIMUS reads a library of radionuclide branching fractions and half-lives and constructs a decay-chain data library and a problem-specific decay-chain data file. PRIMUS reads the decay data compiled for 496 nuclides from the Evaluated Nuclear Structure Data File (ENSDF). The ease of adding radionuclides to the input library allows the CRRIS system to further expand its comprehensive data base. The decay-chain library produced is input to the ANEMOS code. Also, PRIMUS produces a data set reduced to only the decay chains required in a particular problem, for input to the SUMIT, TERRA, MLSOIL, and ANDROS codes. Air concentrations and deposition rates are computed from the PRIMUS decay-chain data file. Source term data may be entered directly to PRIMUS to be read by MLSOIL, TERRA, and ANDROS. The decay-chain data prepared by PRIMUS are needed for a matrix-operator method that computes time-dependent decay products either from an initial concentration or from a constant input source. This document describes the input requirements and the output obtained. Also, sections are included on methods, applications, subroutines, and sample cases. A short appendix indicates a method of utilizing PRIMUS and the associated decay subroutines from TERRA or ANDROS for applications to other decay problems. 18 references
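
    The matrix-operator method mentioned here amounts to evaluating N(t) = exp(At) N(0), where the matrix A holds the decay constants and branching fractions of the chain. A self-contained sketch for a toy two-member chain (the decay constants are invented for illustration, not PRIMUS data):

        # Bateman equations via the matrix exponential: N(t) = expm(A*t) @ N(0).
        import numpy as np
        from scipy.linalg import expm

        # Decay constants (1/s) for a toy parent -> daughter -> stable chain
        lam1, lam2 = 1.0e-3, 5.0e-4

        # Decay matrix A such that dN/dt = A N
        A = np.array([
            [-lam1, 0.0],    # parent only decays
            [ lam1, -lam2],  # daughter grows from parent, decays itself
        ])

        N0 = np.array([1.0e6, 0.0])   # initial atoms
        t = 3600.0                    # one hour

        N_t = expm(A * t) @ N0        # matrix-operator solution
        print(N_t)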

  5. RMG An Open Source Electronic Structure Code for Multi-Petaflops Calculations

    Science.gov (United States)

    Briggs, Emil; Lu, Wenchang; Hodak, Miroslav; Bernholc, Jerzy

    RMG (Real-space Multigrid) is an open source, density functional theory code for quantum simulations of materials. It solves the Kohn-Sham equations on real-space grids, which allows for natural parallelization via domain decomposition. Either subspace or Davidson diagonalization, coupled with multigrid methods, is used to accelerate convergence. RMG is a cross-platform open source package which has been used in the study of a wide range of systems, including semiconductors, biomolecules, and nanoscale electronic devices. It can optionally use GPU accelerators to improve performance on systems where they are available. The recently released versions (>2.0) support multiple GPUs per compute node and have improved performance and scalability, enhanced accuracy, and support for additional hardware platforms. New versions of the code are regularly released at http://www.rmgdft.org. The releases include binaries for Linux, Windows and Macintosh systems, automated builds for clusters using cmake, as well as versions adapted to the major supercomputing installations and platforms. Several recent, large-scale applications of RMG will be discussed.

  6. GNU Data Language (GDL) - a free and open-source implementation of IDL

    Science.gov (United States)

    Arabas, Sylwester; Schellens, Marc; Coulais, Alain; Gales, Joel; Messmer, Peter

    2010-05-01

    GNU Data Language (GDL) is developed with the aim of providing an open-source drop-in replacement for ITT VIS's Interactive Data Language (IDL). It is free software developed by an international team of volunteers led by Marc Schellens, the project's founder (a list of contributors is available on the project's website). The development is hosted on SourceForge, where GDL continuously ranks in the 99th percentile of most active projects. GDL with its library routines is designed as a tool for numerical data analysis and visualisation. Like its proprietary counterparts (IDL and PV-WAVE), GDL is used particularly in geosciences and astronomy. GDL is dynamically typed, vectorized and has object-oriented programming capabilities. The library routines handle numerical calculations, data visualisation, signal/image processing, interaction with the host OS and data input/output. GDL supports several data formats such as netCDF, HDF4, HDF5, GRIB, PNG, TIFF, DICOM, etc. Graphical output is handled by X11, PostScript, SVG or z-buffer terminals, the last one allowing output to be saved in a variety of raster graphics formats. GDL is an incremental compiler with integrated debugging facilities. It is written in C++ using the ANTLR language-recognition framework. Most of the library routines are implemented as interfaces to open-source packages such as the GNU Scientific Library, PLplot, FFTW, ImageMagick, and others. GDL features a Python bridge (Python code can be called from GDL; GDL can be compiled as a Python module). Extensions to GDL can be written in C++, GDL, and Python. A number of open software libraries written in IDL, such as the NASA Astronomy Library, MPFIT, CMSVLIB and TeXtoIDL, are fully or partially functional under GDL. Packaged versions of GDL are available for several Linux distributions and Mac OS X. The source code compiles on some other UNIX systems, including BSD and OpenSolaris. The presentation will cover the current status of the project, the key

  7. Implementation of the Actuator Cylinder Flow Model in the HAWC2 code for Aeroelastic Simulations on Vertical Axis Wind Turbines

    DEFF Research Database (Denmark)

    Aagaard Madsen, Helge; Larsen, Torben J.; Schmidt Paulsen, Uwe

    2013-01-01

    The paper presents the implementation of the Actuator Cylinder (AC) flow model in the HAWC2 aeroelastic code originally developed for simulation of Horizontal Axis Wind Turbine (HAWT) aeroelasticity. This is done within the DeepWind project where the main objective is to explore the competitiveness...

  8. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.

    Science.gov (United States)

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
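
    Generating such a tag is a one-liner with the widely used Python qrcode package (our choice for illustration; the paper does not say which toolchain was used). The payload below is hypothetical:

        # Hypothetical identity payload: URL of the patient record in the system
        import qrcode

        payload = "https://example-hospital.example/id/000123"
        img = qrcode.make(payload)       # returns a PIL-compatible image
        img.save("identity_tag.png")     # print onto a bracelet, necklace or ID card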

  9. First step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1984-09-01

    The first step of the project for implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code is presented. This step consists of the introduction of a simplified model for simulating the steam generator. This model is the GEVAP computer code, an integral part of the LOOP code, which simulates the primary coolant circuit of PWR nuclear power plants during transients. The ALMOD3 computer code has a very detailed model for the steam generator, called UTSG. This model has spatial dependence, correlations for two-phase flow, and distinct correlations for different heat transfer processes. The GEVAP model assumes thermal equilibrium between phases (a homogeneous gas-liquid mixture), has no spatial dependence, and uses only one generalized correlation to treat several heat transfer processes. (Author) [pt

  10. Code of practice for the control and safe handling of radioactive sources used for therapeutic purposes (1988)

    International Nuclear Information System (INIS)

    1988-01-01

    This Code is intended as a guide to safe practices in the use of sealed and unsealed radioactive sources and in the management of patients being treated with them. It covers the procedures for the handling, preparation and use of radioactive sources, precautions to be taken for patients undergoing treatment, storage and transport of radioactive sources within a hospital or clinic, and routine testing of sealed sources [fr

  11. A Source Term Calculation for the APR1400 NSSS Auxiliary System Components Using the Modified SHIELD Code

    International Nuclear Information System (INIS)

    Park, Hong Sik; Kim, Min; Park, Seong Chan; Seo, Jong Tae; Kim, Eun Kee

    2005-01-01

    The SHIELD code has been used to calculate the source terms of the NSSS Auxiliary System (comprising the CVCS, SIS, and SCS) components of the OPR1000. Because the code was developed based upon the SYSTEM80 design, and the APR1400 NSSS Auxiliary System design differs considerably from that of SYSTEM80 or OPR1000, the SHIELD code cannot be used directly for APR1400 radiation design. Thus, hand calculations are needed for the changed portions of the design, using the results of the SHIELD code calculation. In this study, the SHIELD code is modified to incorporate the APR1400 design changes, and the source term calculation is performed for the APR1400 NSSS Auxiliary System components

  12. Implementation of Layered Decoding Architecture for LDPC Code using Layered Min-Sum Algorithm

    OpenAIRE

    Sandeep Kakde; Atish Khobragade; Shrikant Ambatkar; Pranay Nandanwar

    2017-01-01

    For the binary field and long code lengths, Low Density Parity Check (LDPC) codes approach Shannon-limit performance. LDPC codes provide remarkable error correction performance and therefore enlarge the design space for communication systems. In this paper, we compare different digital modulation techniques and find that the BPSK modulation technique is better than the other modulation techniques in terms of BER. It also gives the error performance of the LDPC decoder over an AWGN channel using the Min-Sum algori...
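
    The check-node update that gives min-sum decoding its name can be sketched compactly: each check node returns, toward every connected variable, the product of the other incoming signs times the minimum of the other incoming magnitudes. A toy numpy version follows (an algorithmic sketch, not the paper's hardware architecture):

        # Min-sum check-node update on log-likelihood-ratio (LLR) messages.
        import numpy as np

        def check_node_update(msgs):
            """Outgoing message toward each variable node: the product of the
            other incoming signs times the minimum of the other magnitudes."""
            msgs = np.asarray(msgs, dtype=float)
            signs = np.sign(msgs)
            mags = np.abs(msgs)
            order = np.argsort(mags)
            min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
            out_mag = np.where(np.arange(msgs.size) == order[0], min2, min1)
            return np.prod(signs) * signs * out_mag       # divides out each node's own sign

        print(check_node_update([2.0, -0.5, 1.3]))        # -> [-0.5, 1.3, -0.5]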

  13. User's manual for the code STAPRE as implemented at Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Vonach, H.

    1982-01-01

    This report gives a detailed description of the input and output of the statistical model code STAPRE for compound-nucleus reactions, including a special section on the various level density options of the code. It is to be used in conjunction with the report IRK 76/01 + Add 76 + Add 78 by B. Strohmaier and M. Uhl, which describes in detail the physical models on which the code is based and its general organization and structure

  14. Implementation of the KASKAD computer code system for WWER-440 at Kozloduy NPP

    International Nuclear Information System (INIS)

    Antonov, A.; Georgieva, N.; Spasova, V.

    2003-01-01

    Since 2002, the KASKAD code package has been used at Kozloduy NPP - EP1 for WWER-440 reactor core calculations. The main codes in this package are BIPR-7A, a 3-D diffusion and core analysis code, and PERMAK-A, a 2-D fine-mesh diffusion code. Burnup calculations were performed for all cycles of Kozloduy NPP Units 1, 2, 3 and 4. For the last 4-5 cycles of each unit, control rod worths, critical boron concentration at zero power, reactivity coefficients and linear power were calculated. These results were analysed and compared with experimental data. Some results are given in this paper

  15. Implementation of the SCDAP/RELAP5 Mod. 3.3 and MAAP/VVER codes

    International Nuclear Information System (INIS)

    Duspiva, J.; Vokac, P.; Dienstbier, J.

    2001-05-01

    The SR5 code was installed on a Hewlett-Packard workstation, and the test problems supplied with the software were solved. Finally, a tool for graphical processing of the calculation results was prepared and tested. The MAAP/VVER code was installed on an HP J210 workstation and, in particular, on a PC. The code was tested on two problems supplied with the software. The transformation of the MAAP/VVER output to graphical format was carried out using the support tools obtained, as well as tools that have been in use at the Institute for other severe accident analysis codes. (P.A.)

  16. Implementation of the full viscoresistive magnetohydrodynamic equations in a nonlinear finite element code

    Energy Technology Data Exchange (ETDEWEB)

    Haverkort, J.W. [Centrum Wiskunde & Informatica, P.O. Box 94079, 1090 GB Amsterdam (Netherlands); Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Blank, H.J. de [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Huysmans, G.T.A. [ITER Organization, Route de Vinon sur Verdon, 13115 Saint Paul Lez Durance (France); Pratt, J. [Dutch Institute for Fundamental Energy Research, P.O. Box 6336, 5600 HH Eindhoven (Netherlands); Koren, B., E-mail: b.koren@tue.nl [Eindhoven University of Technology, P.O. Box 513, 5600 MB Eindhoven (Netherlands)

    2016-07-01

    Numerical simulations form an indispensable tool to understand the behavior of a hot plasma that is created inside a tokamak for providing nuclear fusion energy. Various aspects of tokamak plasmas have been successfully studied through the reduced magnetohydrodynamic (MHD) model. The need for more complete modeling through the full MHD equations is addressed here. Our computational method is presented along with measures against possible problems regarding pollution, stability, and regularity. The problem of ensuring continuity of solutions in the center of a polar grid is addressed in the context of a finite element discretization of the full MHD equations. A rigorous and generally applicable solution is proposed here. Useful analytical test cases are devised to verify the correct implementation of the momentum and induction equation, the hyperdiffusive terms, and the accuracy with which highly anisotropic diffusion can be simulated. A striking observation is that highly anisotropic diffusion can be treated with the same order of accuracy as isotropic diffusion, even on non-aligned grids, as long as these grids are generated with sufficient care. This property is shown to be associated with our use of a magnetic vector potential to describe the magnetic field. Several well-known instabilities are simulated to demonstrate the capabilities of the new method. The linear growth rate of an internal kink mode and a tearing mode are benchmarked against the results of a linear MHD code. The evolution of a tearing mode and the resulting magnetic islands are simulated well into the nonlinear regime. The results are compared with predictions from the reduced MHD model. Finally, a simulation of a ballooning mode illustrates the possibility to use our method as an ideal MHD method without the need to add any physical dissipation.
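
    The highly anisotropic diffusion benchmarked above is governed by a tensor that is large along the magnetic field and small across it; building it from a unit field vector b takes a few lines. A generic sketch follows, not the code's finite element implementation:

        # Anisotropic diffusion tensor D = k_par * b b^T + k_perp * (I - b b^T).
        import numpy as np

        def anisotropic_diffusion_tensor(b, kappa_par=1.0e6, kappa_perp=1.0):
            """b: magnetic-field direction vector. The ratio kappa_par/kappa_perp
            can reach many orders of magnitude in hot tokamak plasmas, which is
            what makes the equal-order accuracy result nontrivial."""
            b = np.asarray(b, dtype=float)
            b = b / np.linalg.norm(b)
            bbT = np.outer(b, b)
            return kappa_par * bbT + kappa_perp * (np.eye(b.size) - bbT)

        D = anisotropic_diffusion_tensor([0.0, 0.1, 1.0])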

  17. Detecting Source Code Plagiarism on .NET Programming Languages using Low-level Representation and Adaptive Local Alignment

    Directory of Open Access Journals (Sweden)

    Oscar Karnalim

    2017-01-01

    Full Text Available Even though there are various source code plagiarism detection approaches, only a few works focus on low-level representation for deducing similarity. Most of them focus only on the lexical token sequence extracted from source code. In our view, low-level representation is more beneficial than lexical tokens since its form is more compact than the source code itself: it considers only semantic-preserving instructions and ignores many source code delimiter tokens. This paper proposes a source code plagiarism detection approach which relies on low-level representation. As a case study, we focus our work on .NET programming languages with the Common Intermediate Language as the low-level representation. In addition, we incorporate Adaptive Local Alignment for detecting similarity. According to Lim et al., this algorithm outperforms the state-of-the-art code similarity algorithm (i.e., Greedy String Tiling) in terms of effectiveness. According to our evaluation, which involves various plagiarism attacks, our approach is more effective and efficient when compared with the standard lexical-token approach.
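
    The similarity stage can be pictured with plain Smith-Waterman local alignment over instruction opcodes; the adaptive variant of Lim et al. adjusts the scoring, but the core recurrence is the classic one below (a simplified sketch, not the authors' exact algorithm):

        # Smith-Waterman local alignment score between two opcode sequences.
        def local_alignment_score(a, b, match=2, mismatch=-1, gap=-1):
            H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    s = match if a[i - 1] == b[j - 1] else mismatch
                    H[i][j] = max(0, H[i - 1][j - 1] + s,
                                  H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best

        # CIL-like opcode streams from two submissions (toy data)
        p1 = ["ldarg.0", "ldarg.1", "add", "ret"]
        p2 = ["nop", "ldarg.0", "ldarg.1", "add", "ret"]
        print(local_alignment_score(p1, p2))   # high score despite the inserted nop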

  18. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Yidong Xia; Mitch Plummer; Robert Podgorney; Ahmad Ghassemi

    2016-02-01

    Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that (1) the horizontal fracture spacing has a profound effect on the long-term performance of heat production, (2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and (3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user-customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  19. Rural School District Dress Code Implementation: Perceptions of Stakeholders after First Year

    Science.gov (United States)

    Wright, Krystal M.

    2012-01-01

    Schools are continuously searching for solutions to solve truancy, academic, behavioral, safety, and climate issues. One of the latest trends in education is requiring students to adhere to dress codes as a solution to these issues. Dress codes can range from slightly restrictive clothing to the requiring of a uniform. Many school district…

  20. Disposal of Pesticide Wastes as Implemented in A Proposed Code of Practice in Egypt

    International Nuclear Information System (INIS)

    Sherif El-Hamady, E.

    1999-01-01

    In the present study, an Egyptian Code of Practice for the safe use of pesticides on farms and holdings is suggested. The Code is to be issued for the purpose of providing practical guidance to citizens, especially farmers and growers engaged in crop production, or to the authorities of companies and plants manufacturing and formulating pesticides in Egypt

  1. Living Up to the Code's Exhortations? Social Workers' Political Knowledge Sources, Expectations, and Behaviors.

    Science.gov (United States)

    Felderhoff, Brandi Jean; Hoefer, Richard; Watson, Larry Dan

    2016-01-01

    The National Association of Social Workers' (NASW's) Code of Ethics urges social workers to engage in political action. However, little recent research has been conducted to examine whether social workers support this admonition and the extent to which they actually engage in politics. The authors gathered data from a survey of social workers in Austin, Texas, to address three questions. First, because keeping informed about government and political news is an important basis for action, the authors asked what sources of knowledge social workers use. Second, they asked what the respondents believe are appropriate political behaviors for other social workers and NASW. Third, they asked for self-reports regarding respondents' own political behaviors. Results indicate that social workers use the Internet and traditional media services to stay informed; expect other social workers and NASW to be active; and are, overall, more active than the general public in many types of political activities. The comparisons between expectations for others and respondents' own behaviors yield interestingly complex outcomes. Social workers should strive for higher levels of adherence to the code's urgings on political activity. Implications for future work are discussed.

  2. RIES - Rijnland Internet Election System: A Cursory Study of Published Source Code

    Science.gov (United States)

    Gonggrijp, Rop; Hengeveld, Willem-Jan; Hotting, Eelco; Schmidt, Sebastian; Weidemann, Frederik

    The Rijnland Internet Election System (RIES) is a system designed for voting in public elections over the internet. A rather cursory scan of the RIES source code showed a significant lack of security-awareness among the programmers which, among other things, appears to have left RIES vulnerable to near-trivial attacks. If it had not been for independent studies finding problems, RIES would have been used in the 2008 Water Board elections, possibly handling a million votes or more. While RIES was more extensively studied to find cryptographic shortcomings, our work shows that more down-to-earth secure design practices can be at least as important, and that these aspects need to be examined much sooner than right before an election.

  3. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover this level of detail as accurately as possible within a small additional computing time compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, where simulations in transport theory were compared with simulations in diffusion theory followed by pin-power reconstruction. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. Results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
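
    At its simplest, pin power reconstruction multiplies the smooth intra-node power shape from the diffusion (core) solution by pin-wise form factors stored from the lattice (transport) calculation, then renormalizes to conserve the nodal power. A schematic sketch follows; the array shapes and names are ours, not the DRAGON5/DONJON5 interfaces.

        # Schematic de-homogenization: pin powers from nodal power, shape, form factors.
        import numpy as np

        def reconstruct_pin_powers(node_power, homogeneous_shape, form_factors):
            """node_power: total power of the node (from the core solver).
            homogeneous_shape: smooth intra-node power shape from diffusion theory.
            form_factors: pin-wise heterogeneous/homogeneous power ratios from
            the single-assembly lattice calculation."""
            pin = homogeneous_shape * form_factors   # de-homogenize
            return pin * (node_power / pin.sum())    # conserve the nodal power

        shape = np.ones((17, 17))                    # flat shape for illustration
        ff = np.random.default_rng(1).uniform(0.9, 1.1, (17, 17))
        pins = reconstruct_pin_powers(1.0, shape, ff)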

  4. Adaptation and implementation of the TRACE code for transient analysis on designs of cooled lead fast reactors

    International Nuclear Information System (INIS)

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2014-01-01

    The article describes the changes implemented in the TRACE code to include thermodynamic tables of liquid lead drawn from experimental results. It then explains the process of developing a thermal-hydraulic model for the ALFRED prototype and the analysis of a selection of representative transients conducted within the framework of international research projects. The study demonstrates the applicability of the TRACE code to simulating designs of lead-cooled fast reactors and shows the high safety margins present in this technology to accommodate the most severe transients identified in its safety study. (Author)

  5. GARLIC - A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    Science.gov (United States)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-04-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code - GARLIC - is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus.

  6. Implementation and training methodology of subcritical reactors neutronic calculations triggered by external neutron source and applications

    International Nuclear Information System (INIS)

    Carluccio, Thiago

    2011-01-01

    This work investigated calculation methodologies for subcritical source-driven reactors, such as the Accelerator Driven Subcritical Reactor (ADSR) and the Fusion Driven Subcritical Reactor (FDSR). Intense R and D has been done on these subcritical concepts, mainly due to the possibilities of Minor Actinide (MA) and Long Lived Fission Product (LLFP) transmutation. In this work, particular emphasis has been given to: (1) complementing and improving the calculation methodology with neutronic transmutation and decay capabilities and implementing it computationally; (2) using this methodology in the Coordinated Research Project (CRP) of the International Atomic Energy Agency on Analytical and Experimental Benchmark Analysis of ADS and in the Collaborative Work on Use of Low Enriched Uranium in ADS, especially in reproducing the experimental results of the Yalina Booster subcritical assembly and studying a subcritical core of the IPEN/MB-01 reactor; (3) comparing calculations with different nuclear data libraries of integral parameters, such as k_eff and k_src, and differential distributions, such as spectrum and flux, and nuclide inventories; and (4) applying the developed methodology in a study that may help future choices about dedicated transmutation systems. The following tools have been used in this work: MCNP (Monte Carlo N-Particle transport code), MCB (an enhanced version of MCNP that allows burnup calculation) and NJOY to process nuclear data from evaluated nuclear data files. (author)

  7. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used as a routine in industry (e.g. the food, cosmetic or pharmaceutical industry) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited for supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  8. Open source cardiology electronic health record development for DIGICARDIAC implementation

    Science.gov (United States)

    Dugarte, Nelson; Medina, Rubén; Huiracocha, Lourdes; Rojas, Rubén

    2015-12-01

    This article presents the development of a Cardiology Electronic Health Record (CEHR) system. The software consists of a structured algorithm designed under Health Level-7 (HL7) international standards. The novelty of the system is the integration of high-resolution ECG (HRECG) signal acquisition and processing tools, patient information management tools and telecardiology tools. The acquisition tools manage and control the DIGICARDIAC electrocardiograph functions. The processing tools support HRECG signal analysis, searching for patterns indicative of cardiovascular pathologies. The incorporation of telecardiology tools allows the system to communicate with other health care centers, decreasing the access time to patient information. The CEHR system was developed entirely using open source software. Preliminary results of the process validation showed the system's efficiency.

  9. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill the gap in the current state of the art regarding recommender systems for software reuse, for which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that assessments of these systems show low precision and recall measures, and in some of these systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method which solves these problems.

  10. Adaptation and implementation of the TRACE code for transient analysis in designs lead cooled fast reactors

    International Nuclear Information System (INIS)

    Lazaro, A.; Ammirabile, L.; Martorell, S.

    2015-01-01

    The Lead-Cooled Fast Reactor (LFR) has been identified as one of the promising future reactor concepts in the technology road map of the Generation IV International Forum (GIF) as well as in the Deployment Strategy of the European Sustainable Nuclear Industrial Initiative (ESNII), both aiming at improved sustainability, enhanced safety, economic competitiveness, and proliferation resistance. This new nuclear reactor concept requires the development of computational tools to be applied in design and safety assessments to confirm the improved inherent and passive safety features of this design. One approach to this issue is to modify current computational codes developed for the simulation of Light Water Reactors towards their applicability to the new designs. This paper reports on the modifications performed on the TRACE system code to make it applicable to LFR safety assessments. The capabilities of the modified code are demonstrated on a series of benchmark exercises performed against other safety analysis codes. (Author)

  11. Implementation and Performance Evaluation of Distributed Cloud Storage Solutions using Random Linear Network Coding

    DEFF Research Database (Denmark)

    Fitzek, Frank; Toth, Tamas; Szabados, Áron

    2014-01-01

    This paper advocates the use of random linear network coding for storage in distributed clouds in order to reduce storage and traffic costs in dynamic settings, i.e. when adding and removing numerous storage devices/clouds on-the-fly and when the number of reachable clouds is limited. We introduce...... various network coding approaches that trade off reliability, storage and traffic costs, and system complexity, relying on probabilistic recoding for cloud regeneration. We compare these approaches with other approaches based on data replication and Reed-Solomon codes. A simulator has been developed...... to carry out a thorough performance evaluation of the various approaches when relying on different system settings, e.g., finite fields, and network/storage conditions, e.g., storage space used per cloud, limited network use, and limited recoding capabilities. In contrast to standard coding approaches, our...
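
    The storage principle can be illustrated over GF(2): every stored block is a random linear (XOR) combination of the original packets, and any set of blocks whose coefficient matrix has full rank decodes by Gaussian elimination. A toy sketch follows (binary field for brevity; practical systems typically use larger fields such as GF(2^8)):

        # Toy random linear network coding over GF(2): encode, then decode by
        # joint Gaussian elimination of coefficients and payloads.
        import numpy as np

        rng = np.random.default_rng(42)
        K = 4                                            # original packets
        data = rng.integers(0, 256, size=(K, 8), dtype=np.uint8)

        def gf2_eliminate(A, B):
            """Decode payloads B given GF(2) coefficients A; raises if rank deficient."""
            A, B = A.copy(), B.copy()
            row = 0
            for col in range(A.shape[1]):
                piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
                if piv is None:
                    raise ValueError("rank deficient: need more coded blocks")
                A[[row, piv]], B[[row, piv]] = A[[piv, row]], B[[piv, row]]
                for r in range(A.shape[0]):
                    if r != row and A[r, col]:
                        A[r] ^= A[row]
                        B[r] ^= B[row]
                row += 1
            return B[:A.shape[1]]

        # Keep drawing random coefficient matrices until the K+2 blocks are decodable
        while True:
            coeffs = rng.integers(0, 2, size=(K + 2, K), dtype=np.uint8)
            coded = np.zeros((K + 2, data.shape[1]), dtype=np.uint8)
            for i in range(K + 2):                       # payload = XOR combination
                for j in range(K):
                    if coeffs[i, j]:
                        coded[i] ^= data[j]
            try:
                recovered = gf2_eliminate(coeffs, coded)
                break
            except ValueError:
                continue

        assert np.array_equal(recovered, data)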

  12. Implementation of CFD module in the KORSAR thermal-hydraulic system code

    Energy Technology Data Exchange (ETDEWEB)

    Yudov, Yury V.; Danilov, Ilia G.; Chepilko, Stepan S. [Alexandrov Research Inst. of Technology (NITI), Sosnovy Bor (Russian Federation)

    2015-09-15

    The Russian KORSAR/GP (hereinafter KORSAR) computer code was developed by a joint team from Alexandrov NITI and OKB 'Gidropress' for VVER safety analysis and certified by the Rostechnadzor of Russia in 2009. The code functionality is based on a 1D two-fluid model for calculation of two-phase flows. A 3D CFD module in the KORSAR computer code is being developed by Alexandrov NITI for representing 3D effects in the downcomer and lower plenum during asymmetrical loop operation. The CFD module uses a Cartesian grid method with a cut-cell approach. The paper presents a numerical algorithm for coupling the 1D and 3D thermal-hydraulic modules in the KORSAR code. The combined pressure field is calculated by the multigrid method. The performance efficiency of the algorithm for coupling the 1D and 3D modules was demonstrated by solving the benchmark problem of mixing cold and hot flows in a T-junction.

  13. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    International Nuclear Information System (INIS)

    O'Connor, Evan; Ott, Christian D

    2010-01-01

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  14. A new open-source code for spherically symmetric stellar collapse to neutron stars and black holes

    Energy Technology Data Exchange (ETDEWEB)

    O'Connor, Evan; Ott, Christian D, E-mail: evanoc@tapir.caltech.ed, E-mail: cott@tapir.caltech.ed [TAPIR, Mail Code 350-17, California Institute of Technology, Pasadena, CA 91125 (United States)

    2010-06-07

    We present the new open-source spherically symmetric general-relativistic (GR) hydrodynamics code GR1D. It is based on the Eulerian formulation of GR hydrodynamics (GRHD) put forth by Romero-Ibanez-Gourgoulhon and employs radial-gauge, polar-slicing coordinates in which the 3+1 equations simplify substantially. We discretize the GRHD equations with a finite-volume scheme, employing piecewise-parabolic reconstruction and an approximate Riemann solver. GR1D is intended for the simulation of stellar collapse to neutron stars and black holes and will also serve as a testbed for modeling technology to be incorporated in multi-D GR codes. Its GRHD part is coupled to various finite-temperature microphysical equations of state in tabulated form that we make available with GR1D. An approximate deleptonization scheme for the collapse phase and a neutrino-leakage/heating scheme for the postbounce epoch are included and described. We also derive the equations for effective rotation in 1D and implement them in GR1D. We present an array of standard test calculations and also show how simple analytic equations of state in combination with presupernova models from stellar evolutionary calculations can be used to study qualitative aspects of black hole formation in failing rotating core-collapse supernovae. In addition, we present a simulation with microphysical equations of state and neutrino leakage/heating of a failing core-collapse supernova and black hole formation in a presupernova model of a 40 M⊙ zero-age main-sequence star. We find good agreement on the time of black hole formation (within 20%) and last stable protoneutron star mass (within 10%) with predictions from simulations with full Boltzmann neutrino radiation hydrodynamics.

  15. Neutrons Flux Distributions of the Pu-Be Source and its Simulation by the MCNP-4B Code

    Science.gov (United States)

    Faghihi, F.; Mehdizadeh, S.; Hadad, K.

    The neutron fluence rate of a low-intensity Pu-Be source is measured by Neutron Activation Analysis (NAA) of 197Au foils. Also, the neutron fluence rate distribution versus energy is calculated using the MCNP-4B code based on the ENDF/B-V library. The theoretical simulation, together with our experimental work, is a new experience for Iranian researchers in establishing confidence in the code for further research. In our theoretical investigation, an isotropic Pu-Be source with a cylindrical volume distribution is simulated, and the relative neutron fluence rate versus energy is calculated using the MCNP-4B code. The variations of the fast and thermal neutron fluence rates, measured by the NAA method and calculated with the MCNP code, are compared.

  16. Calculation Of Fuel Burnup And Radionuclide Inventory In The Syrian Miniature Neutron Source Reactor Using The GETERA Code

    International Nuclear Information System (INIS)

    Khattab, K.; Dawahra, S.

    2011-01-01

    Calculations of the fuel burnup and radionuclide inventory in the Syrian Miniature Neutron Source Reactor (MNSR) after 10 years (the expected life of the reactor core) of operation are presented in this paper using the GETERA code. The code is used to calculate the fuel group constants and the infinite multiplication factor versus the reactor operating time for 10, 20, and 30 kW operating power levels. The amounts of uranium burnt up and plutonium produced in the reactor core, the concentrations of the most important fission product and actinide radionuclides accumulated in the reactor core, and the total radioactivity of the reactor core were calculated using the GETERA code as well. It is found that the GETERA code is better suited than the WIMSD4 code for the fuel burnup calculation in the MNSR reactor since it is newer, has a larger isotope library, and is more accurate. (author)

  17. Implementation into a CFD code of neutron kinetics and fuel pin models for nuclear reactor transient analyses

    International Nuclear Information System (INIS)

    Chen Zhao; Chen, Xue-Nong; Rineiski, Andrei; Zhao Pengcheng; Chen Hongli

    2014-01-01

    Safety analysis is an important tool for justifying the safety of nuclear reactors. Traditionally, nuclear reactor safety analysis is performed with system codes, which use a one-dimensional lumped-parameter method to model real reactor systems. However, many multi-dimensional thermal-hydraulic phenomena cannot be predicted by one-dimensional system codes; this limitation is particularly important for pool-type nuclear systems. Computational fluid dynamics (CFD) codes are powerful numerical simulation tools for multi-dimensional thermal-hydraulics problems and are widely used in industrial applications for single-phase flows. To apply general CFD codes to nuclear reactor transient problems, some additional models are required, two important ones being a neutron kinetics model for the power calculation and a fuel pin model for the fuel pin temperature calculation. The motivation of this work is to develop an advanced numerical simulation method for nuclear reactor safety analysis by implementing these models into a general CFD code. In this paper, a Point Kinetics Model (PKM) and a Fuel Pin Model (FPM) are implemented into the general CFD code FLUENT; the extended code is called FLUENT/PK. The mathematical models and the implementation method of FLUENT/PK are described, and two demonstration cases are presented: the unprotected transient overpower (UTOP) accident of a liquid metal cooled fast reactor (LMFR) and the unprotected beam overpower (UBOP) accident of an accelerator driven system (ADS). (author)
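
    For orientation, the point kinetics equations that such a coupling solves are, for one delayed-neutron group, dn/dt = ((rho - beta)/Lambda) n + lambda c and dc/dt = (beta/Lambda) n - lambda c. The sketch below integrates them for a reactivity step; the parameters are generic illustrative values, not those of the FLUENT/PK implementation.

        import numpy as np
        from scipy.integrate import solve_ivp

        BETA, GEN_TIME, DECAY = 0.0065, 1.0e-5, 0.08  # beta, Lambda [s], lambda [1/s]

        def point_kinetics(t, y, rho):
            n, c = y                                   # power, precursor concentration
            dn = (rho(t) - BETA) / GEN_TIME * n + DECAY * c
            dc = BETA / GEN_TIME * n - DECAY * c
            return [dn, dc]

        rho_step = lambda t: 0.5 * BETA if t > 0.1 else 0.0  # +0.5 $ step at t = 0.1 s
        y0 = [1.0, BETA / (GEN_TIME * DECAY)]                # critical equilibrium state
        sol = solve_ivp(point_kinetics, (0.0, 5.0), y0, args=(rho_step,),
                        method="LSODA", rtol=1e-8)
        print(f"relative power at t = 5 s: {sol.y[0, -1]:.2f}")

    In a coupled CFD calculation, rho(t) would instead be assembled each time step from the feedback (e.g. Doppler and coolant density) computed on the CFD mesh.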

  18. Source term estimation via monitoring data and its implementation to the RODOS system

    International Nuclear Information System (INIS)

    Bohunova, J.; Duranova, T.

    2000-01-01

    A methodology and computer code for the interpretation of environmental data from an on-line environmental monitoring network, i.e. for source term assessment, were developed. The method is based on the conversion of measured dose rates to the source term, i.e. the airborne radioactivity release rate, taking into account real meteorological data and the locations of the monitoring points. The bootstrap estimation methodology and the bipivot method are used to estimate the source term from on-site gamma dose rate monitors. These methods provide an estimate of the mean value of the source term and a confidence interval for it. (author)

  19. Implementation of an implicit method into heat conduction calculation of TRAC-PF1/MOD2 code

    International Nuclear Information System (INIS)

    Akimoto, Hajime; Abe, Yutaka; Ohnuki, Akira; Murao, Yoshio

    1990-08-01

    A two-dimensional unsteady heat conduction equation is solved in the TRAC-PF1/MOD2 code to calculate temperature transients in the fuel rod. A large CPU time is often required to obtain a stable solution of the temperature transients in a TRAC calculation with a small axial node size (less than 1.0 mm), because the heat conduction equation is discretized explicitly. To remove the limit that the heat conduction calculation places on the maximum time step size, an implicit method for solving the heat conduction equation was developed and implemented in the TRAC code. Several assessment calculations were performed with the original and modified TRAC codes. Comparison with theoretical solutions and the assessment results confirms that the implicit method is reliable and has been successfully implemented into the TRAC code. It is demonstrated that the implicit method makes the heat conduction calculation practical even for analyses of temperature transients with axial node sizes below 0.1 mm. (author)
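
    The gain from the implicit treatment can be seen in a one-dimensional analogue: discretizing T_t = alpha T_xx with backward Euler yields a tridiagonal system that is stable for any time step. The sketch below is a generic illustration, not the TRAC-PF1/MOD2 coding.

        import numpy as np
        from scipy.linalg import solve_banded

        def implicit_heat_step(T, dt, dx, alpha):
            """One backward-Euler step of T_t = alpha*T_xx with fixed-temperature
            ends; unconditionally stable, so dt is not capped by dx**2/(2*alpha)."""
            n, r = len(T), alpha * dt / dx**2
            ab = np.zeros((3, n))          # banded storage for solve_banded
            ab[0, 1:] = -r                 # super-diagonal
            ab[1, :] = 1.0 + 2.0 * r       # main diagonal
            ab[2, :-1] = -r                # sub-diagonal
            ab[1, 0] = ab[1, -1] = 1.0     # Dirichlet boundary rows
            ab[0, 1] = ab[2, -2] = 0.0
            return solve_banded((1, 1), ab, T.copy())

        T = np.linspace(300.0, 600.0, 51)  # initial temperature profile [K]
        for _ in range(100):               # r = 100 here; explicit needs r <= 0.5
            T = implicit_heat_step(T, dt=1.0, dx=1.0e-4, alpha=1.0e-6)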

  20. Time for change: a roadmap to guide the implementation of the World Anti-Doping Code 2015.

    Science.gov (United States)

    Dvorak, Jiri; Baume, Norbert; Botré, Francesco; Broséus, Julian; Budgett, Richard; Frey, Walter O; Geyer, Hans; Harcourt, Peter Rex; Ho, Dave; Howman, David; Isola, Victor; Lundby, Carsten; Marclay, François; Peytavin, Annie; Pipe, Andrew; Pitsiladis, Yannis P; Reichel, Christian; Robinson, Neil; Rodchenkov, Grigory; Saugy, Martial; Sayegh, Souheil; Segura, Jordi; Thevis, Mario; Vernec, Alan; Viret, Marjolaine; Vouillamoz, Marc; Zorzoli, Mario

    2014-05-01

    A medical and scientific multidisciplinary consensus meeting was held from 29 to 30 November 2013 on Anti-Doping in Sport at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the antidoping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment, (2) prevalence measurement, (3) sport-specific test distribution plans, (4) storage and reanalysis, (5) analytical challenges, (6) forensic intelligence, (7) psychological approach to optimise the most deterrent effect, (8) the Athlete Biological Passport (ABP) and confounding factors, (9) data management systems (Anti-Doping Administration & Management System, ADAMS), (10) education, (11) research needs and necessary advances, (12) inadvertent doping and (13) management and ethics: biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sports (IFs), the International Olympic Committee (IOC) and the World Anti-Doping Agency (WADA), are ideally placed to lead transformational change with the unwavering support of the wider antidoping community. The outcome of the consensus meeting was the creation of an ad hoc Working Group charged with the responsibility of moving this agenda forward.

  1. Protecting breastfeeding in West and Central Africa: over 25 years of implementation of the International Code of Marketing of Breastmilk Substitutes.

    Science.gov (United States)

    Sokol, Ellen; Clark, David; Aguayo, Victor M

    2008-09-01

    In 1981 the World Health Assembly (WHA) adopted the International Code of Marketing of Breastmilk Substitutes out of concern that inappropriate marketing of breastmilk substitutes was contributing to the alarming decline in breastfeeding worldwide and the increase in child malnutrition and mortality, particularly in developing countries. To document progress, challenges, and lessons learned in the implementation of the International Code in West and Central Africa. Data were obtained by literature review and interviews with key informants. Twelve of the 24 countries have laws, decrees, or regulations that implement all or most of the provisions of the Code, 6 countries have a draft law or decree that is awaiting government approval or have a government committee that is studying how best to implement the Code, 3 countries have a legal instrument that enacts a few provisions of the Code, and 3 countries have not taken any action to implement the Code. International declarations and initiatives for child nutrition and survival have provided impetus for national implementation of the Code. National action to regulate the marketing of breastmilk substitutes needs to be linked to national priorities for nutrition and child survival. A clearly defined scope is essential for effective implementation of national legislation. Leadership and support by health professionals is essential to endorse and enforce national legislation. Training on Code implementation is instrumental for national action; national implementation of the Code requires provisions and capacity to monitor and enforce the legislative framework and needs to be part of a multipronged strategy to advance national child nutrition and survival goals. Nations in West and Central Africa have made important progress in implementing the International Code. More than 25 years after its adoption by the WHA, the Code remains as important as ever for child survival and development in West and Central Africa.

  2. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes.

    Science.gov (United States)

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-02-01

    To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.

  3. Uncertainty analysis methods for quantification of source terms using a large computer code

    International Nuclear Information System (INIS)

    Han, Seok Jung

    1997-02-01

    Quantification of uncertainties in the source term estimations by a large computer code, such as MELCOR and MAAP, is an essential process of the current probabilistic safety assessments (PSAs). The main objectives of the present study are (1) to investigate the applicability of a combined procedure of the response surface method (RSM), based on input determined from a statistical design, and the Latin hypercube sampling (LHS) technique for the uncertainty analysis of CsI release fractions under a hypothetical severe accident sequence of a station blackout at the Young-Gwang nuclear power plant, using the MAAP3.0B code as a benchmark problem; and (2) to propose a new measure of uncertainty importance based on distributional sensitivity analysis. On the basis of the results obtained in the present work, the RSM is recommended as the principal tool for an overall uncertainty analysis in source term quantifications, while the LHS is used in the calculations of standardized regression coefficients (SRC) and standardized rank regression coefficients (SRRC) to determine the subset of the most important input parameters in the final screening step and to check the cumulative distribution functions (cdfs) obtained by the RSM. Verification of the response surface model for sufficient accuracy is a prerequisite for the reliability of the final results obtained by the combined procedure proposed in the present work. In the present study a new measure has been developed that utilizes the metric distance between cumulative distribution functions (cdfs). The measure has been evaluated for three different cases of distributions in order to assess its characteristics: in the first two cases the distributions are known analytically, while in the third the distribution is unknown. The first case uses symmetric analytical distributions; the second consists of two asymmetric distributions whose skewness is nonzero.
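
    The LHS step referred to above stratifies each input distribution so that every equiprobable interval is sampled exactly once. A compact sketch on the unit hypercube follows; it is illustrative only, and the mapping to the actual MAAP input ranges is problem specific.

        import numpy as np

        def latin_hypercube(n_samples, n_params, seed=None):
            """LHS on [0,1]^n_params: each axis is cut into n_samples strata,
            one point per stratum, with strata paired at random across axes."""
            rng = np.random.default_rng(seed)
            u = (np.arange(n_samples)[:, None]
                 + rng.random((n_samples, n_params))) / n_samples
            for j in range(n_params):      # shuffle each column independently
                rng.shuffle(u[:, j])
            return u

        # e.g. 100 runs over 3 uncertain inputs with assumed uniform ranges
        lo = np.array([0.1, 1.0, 300.0])
        hi = np.array([0.9, 5.0, 600.0])
        x = lo + latin_hypercube(100, 3, seed=1) * (hi - lo)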

  4. 77 FR 24148 - Revision to the Hawaii State Implementation Plan, Minor New Source Review Program

    Science.gov (United States)

    2012-04-23

    ... ENVIRONMENTAL PROTECTION AGENCY 40 CFR Part 52 [EPA-R09-OAR-2012-0213; FRL-9661-6] Revision to the Hawaii State Implementation Plan, Minor New Source Review Program AGENCY: Environmental Protection Agency... final action to approve revisions to the Hawaii State Implementation Plan (SIP). These revisions would...

  5. Implementing an Open Source Learning Management System: A Critical Analysis of Change Strategies

    Science.gov (United States)

    Uys, Philip M.

    2010-01-01

    This paper analyses the change and innovation strategies that Charles Sturt University (CSU) used from 2007 to 2009 during the implementation and mainstreaming of an open source learning management system (LMS), Sakai, named locally as "CSU Interact". CSU was in January 2008 the first Australian University to implement an open source…

  6. Implementation of a tree algorithm in MCNP code for nuclear well logging applications.

    Science.gov (United States)

    Li, Fusheng; Han, Xiaogang

    2012-07-01

    The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. These missing capabilities can greatly help the design of certain nuclear tools, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: a zone tally, a neutron interaction tally, a gamma ray index tally, and an enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length.

  7. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
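
    For reference, the radial dose function compared in these studies is defined in TG-43U1 as gL(r) = [D(r, theta0) GL(r0, theta0)] / [D(r0, theta0) GL(r, theta0)] with the line-source geometry function GL. Below is a minimal sketch of that bookkeeping, assuming a tabulated transverse-axis dose-rate profile (e.g. from an MCNP tally) and an active source length L; the function names are illustrative.

        import numpy as np

        def g_line(r, L):
            """Line-source geometry function at theta0 = 90 deg:
            G_L(r) = 2*arctan(L/(2r)) / (L*r)."""
            return 2.0 * np.arctan(L / (2.0 * r)) / (L * r)

        def radial_dose_function(r, dose_rate, L, r0=1.0):
            """g_L(r) per TG-43U1 from transverse-axis dose rates,
            normalized at the reference distance r0 = 1 cm."""
            d0 = np.interp(r0, r, dose_rate)
            return (dose_rate * g_line(r0, L)) / (d0 * g_line(r, L))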

  8. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.

  9. Prometheus: the implementation of clinical coding schemes in French routine general practice

    Directory of Open Access Journals (Sweden)

    Laurent Letrilliart

    2006-09-01

    Conclusions: Coding health problems on a routine basis proved to be feasible. However, this process can be used on a more widespread basis and linked to other management data only if physicians are specially trained and rewarded, and if the software incorporates large terminologies mapped to classifications.

  10. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    Science.gov (United States)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  11. Assessment of Programming Language Learning Based on Peer Code Review Model: Implementation and Experience Report

    Science.gov (United States)

    Wang, Yanqing; Li, Hang; Feng, Yuqiang; Jiang, Yu; Liu, Ying

    2012-01-01

    The traditional assessment approach, in which one single written examination counts toward a student's total score, no longer meets new demands of programming language education. Based on a peer code review process model, we developed an online assessment system called "EduPCR" and used a novel approach to assess the learning of computer…

  12. Teacher Candidates Implementing Universal Design for Learning: Enhancing Picture Books with QR Codes

    Science.gov (United States)

    Grande, Marya; Pontrello, Camille

    2016-01-01

    The purpose of this study was to investigate if teacher candidates could gain knowledge of the principles of Universal Design for Learning by enhancing traditional picture books with Quick Response (QR) codes and to determine if the process of making these enhancements would impact teacher candidates' comfort levels with using technology on both…

  13. Challenges for Knowledge Management in the Context of IT Global Sourcing Models Implementation

    OpenAIRE

    Perechuda, Kazimierz; Sobińska, Małgorzata

    2014-01-01

    Part 2: Models and Functioning of Knowledge Management; The article gives a literature overview of the current challenges connected with the implementation of the newest IT sourcing models. In the dynamic environment, organizations are required to build their competitive advantage not only on their own resources, but also on resources commissioned from external providers, accessed through various forms of sourcing, including the sourcing of IT services. This paper pres...

  14. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    Science.gov (United States)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  15. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Medley, S. S. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Liu, D. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy; Gorelenkova, M. V. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Heidbrink, W. W. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy; Stagner, L. [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States); Univ. of California, Irvine, CA (United States). Dept. of Physics and Astronomy

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  16. GARLIC — A general purpose atmospheric radiative transfer line-by-line infrared-microwave code: Implementation and evaluation

    International Nuclear Information System (INIS)

    Schreier, Franz; Gimeno García, Sebastián; Hedelt, Pascal; Hess, Michael; Mendrok, Jana; Vasquez, Mayte; Xu, Jian

    2014-01-01

    A suite of programs for high resolution infrared-microwave atmospheric radiative transfer modeling has been developed with emphasis on efficient and reliable numerical algorithms and a modular approach appropriate for simulation and/or retrieval in a variety of applications. The Generic Atmospheric Radiation Line-by-line Infrared Code — GARLIC — is suitable for arbitrary observation geometry, instrumental field-of-view, and line shape. The core of GARLIC's subroutines constitutes the basis of forward models used to implement inversion codes to retrieve atmospheric state parameters from limb and nadir sounding instruments. This paper briefly introduces the physical and mathematical basics of GARLIC and its descendants and continues with an in-depth presentation of various implementation aspects: An optimized Voigt function algorithm combined with a two-grid approach is used to accelerate the line-by-line modeling of molecular cross sections; various quadrature methods are implemented to evaluate the Schwarzschild and Beer integrals; and Jacobians, i.e. derivatives with respect to the unknowns of the atmospheric inverse problem, are implemented by means of automatic differentiation. For an assessment of GARLIC's performance, a comparison of the quadrature methods for solution of the path integral is provided. Verification and validation are demonstrated using intercomparisons with other line-by-line codes and comparisons of synthetic spectra with spectra observed on Earth and from Venus. - Highlights: • High resolution infrared-microwave radiative transfer model. • Discussion of algorithmic and computational aspects. • Jacobians by automatic/algorithmic differentiation. • Performance evaluation by intercomparisons, verification, validation
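
    As an aside on the line-by-line core, a reference Voigt evaluation (against which an optimized algorithm such as GARLIC's can be checked) follows directly from the complex Faddeeva function. The sketch below uses SciPy and is not GARLIC's own algorithm.

        import numpy as np
        from scipy.special import wofz

        def voigt_profile(nu, nu0, gamma_l, sigma_g):
            """Voigt line shape: Lorentzian (HWHM gamma_l) convolved with a
            Gaussian (std dev sigma_g), via the Faddeeva function w(z)."""
            z = ((nu - nu0) + 1j * gamma_l) / (sigma_g * np.sqrt(2.0))
            return np.real(wofz(z)) / (sigma_g * np.sqrt(2.0 * np.pi))

        nu = np.linspace(-5.0, 5.0, 2001)   # frequency offset, arbitrary units
        phi = voigt_profile(nu, 0.0, 0.1, 0.5)
        print(np.trapz(phi, nu))            # ~1.0: the profile is area-normalized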

  17. Development and implementation of a set of numerical quadratures SQN and EQN type in the transport code AZTRAN

    International Nuclear Information System (INIS)

    Chepe P, M.; Xolocostli M, J. V.; Gomez T, A. M.; Del Valle G, E.

    2015-09-01

    Deterministic transport codes for the analysis of nuclear reactors have been in use for several years; these codes have evolved in methodology and accuracy as more computing power has become available. The transport code considered in this paper uses the classical multi-group technique for the energy discretization, nodal methods for the spatial discretization, and the discrete ordinates method for the angular discretization. The paper presents the development and implementation of a set of SQN-type symmetric numerical quadratures with the same weight for each angular direction, and compares them with quadratures of EQN type. The two sets of numerical quadratures were implemented in the AZTRAN program for a problem with an isotropic medium in XYZ geometry, in steady state, using the RTN-0 (Raviart-Thomas-Nedelec) nodal method. The analyzed results correspond to the effective multiplication factor keff and the neutron angular flux with approximations from S4 to S16. (Author)
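
    To make the equal-weight idea concrete: a level-symmetric S4 set has, per octant, the three permutations of (mu1, mu1, mu2) with 2*mu1^2 + mu2^2 = 1, all carrying the same weight. The sketch below builds such a set from the textbook S4 value of mu1 and checks that it integrates the even angular moment exactly; it is an illustration, not the AZTRAN implementation.

        import numpy as np
        from itertools import permutations

        MU1 = 0.3500212                    # standard level-symmetric S4 cosine
        MU2 = np.sqrt(1.0 - 2.0 * MU1**2)

        octant = sorted(set(permutations((MU1, MU1, MU2))))  # 3 directions
        dirs = np.array(octant)
        weights = np.full(len(octant), 1.0 / len(octant))    # equal weights

        assert np.allclose((dirs**2).sum(axis=1), 1.0)       # unit direction cosines
        print((weights * dirs[:, 0]**2).sum())               # 1/3: exact <mu^2>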

  18. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system.

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  19. Source coherence impairments in a direct detection direct sequence optical code-division multiple-access system

    Science.gov (United States)

    Fsaifes, Ihsan; Lepers, Catherine; Lourdiane, Mounia; Gallion, Philippe; Beugin, Vincent; Guignard, Philippe

    2007-02-01

    We demonstrate that direct sequence optical code-division multiple-access (DS-OCDMA) encoders and decoders using sampled fiber Bragg gratings (S-FBGs) behave as multipath interferometers. In that case, chip pulses of the prime sequence codes generated by spreading in time-coherent data pulses can result from multiple reflections in the interferometers that can superimpose within a chip time duration. We show that the autocorrelation function has to be considered as the sum of complex amplitudes of the combined chip as the laser source coherence time is much greater than the integration time of the photodetector. To reduce the sensitivity of the DS-OCDMA system to the coherence time of the laser source, we analyze the use of sparse and nonperiodic quadratic congruence and extended quadratic congruence codes.

  20. A strategy of implementation of the improved constitutive equations for the advanced subchannel code

    International Nuclear Information System (INIS)

    Shirai, Hiroshi; Hotta, Akitoshi; Ninokata, Hisashi

    2004-01-01

    To develop an advanced subchannel analysis code, the dominant factors that influence the boiling transition process must be taken into account through mechanistic constitutive equations based on the flow geometry and fluid properties. These dominant factors are (1) gas-liquid redistribution by cross flow, (2) liquid film dryout, (3) two-phase flow regime transition, (4) droplet deposition, and (5) spacer-droplet interaction. We first outline a strategy for developing the constitutive equations for these five factors, based on an experimental database obtained with the latest measurement techniques and on state-of-the-art computational fluid dynamics methods. We then identify the shortcomings of the present constitutive equations and a plan for their improvement. Finally, the layered structure of a two-phase/three-field subchannel code incorporating the new constitutive equations is designed. (author)

  1. Grid-based Parallel Data Streaming Implemented for the Gyrokinetic Toroidal Code

    International Nuclear Information System (INIS)

    Klasky, S.; Ethier, S.; Lin, Z.; Martins, K.; McCune, D.; Samtaney, R.

    2003-01-01

    We have developed a threaded parallel data streaming approach using Globus to transfer multi-terabyte simulation data from a remote supercomputer to the scientist's home analysis/visualization cluster, as the simulation executes, with negligible overhead. Data transfer experiments show that this concurrent data transfer approach is more favorable compared with writing to local disk and then transferring this data to be post-processed. The present approach is conducive to using the grid to pipeline the simulation with post-processing and visualization. We have applied this method to the Gyrokinetic Toroidal Code (GTC), a 3-dimensional particle-in-cell code used to study microturbulence in magnetic confinement fusion from first principles plasma theory.

  2. Implementation of a tree algorithm in MCNP code for nuclear well logging applications

    Energy Technology Data Exchange (ETDEWEB)

    Li Fusheng, E-mail: fusheng.li@bakerhughes.com [Baker Hughes Incorporated, 2001 Rankin Rd. Houston, TX 77073-5101 (United States); Han Xiaogang [Baker Hughes Incorporated, 2001 Rankin Rd. Houston, TX 77073-5101 (United States)

    2012-07-15

    The goal of this paper is to develop some modeling capabilities that are missing in the current MCNP code. These missing capabilities can greatly help the design of certain nuclear tools, such as a nuclear lithology/mineralogy spectroscopy tool. The new capabilities developed in this paper include the following: a zone tally, a neutron interaction tally, a gamma ray index tally, and an enhanced pulse-height tally. The patched MCNP code can also be used to compute the neutron slowing-down length and the thermal neutron diffusion length. - Highlights: • Tree structure programming is suitable for Monte-Carlo based particle tracking. • An enhanced pulse-height tally is developed for oil-well logging tool simulation. • Neutron interaction and gamma ray index tallies are provided for geochemical logging.

  3. Gaze strategies can reveal the impact of source code features on the cognitive load of novice programmers

    DEFF Research Database (Denmark)

    Wulff-Jensen, Andreas; Ruder, Kevin Vignola; Triantafyllou, Evangelia

    2018-01-01

    As shown by several studies, the readability of source code for programmers is influenced by its structural and textual features. In order to assess the importance of these features, we conducted an eye-tracking experiment with programming students. To assess the readability and comprehensibility of

  4. Use of WIMS-E lattice code for prediction of the transuranic source term for spent fuel dose estimation

    International Nuclear Information System (INIS)

    Schwinkendorf, K.N.

    1996-01-01

    A recent source term analysis has shown a discrepancy between ORIGEN2 transuranic isotopic production estimates and those produced with the WIMS-E lattice physics code. Excellent agreement between relevant experimental measurements and WIMS-E was shown, thus exposing an error in the cross section library used by ORIGEN2.

  5. Lessons learned from new construction utility demand side management programs and their implications for implementing building energy codes

    Energy Technology Data Exchange (ETDEWEB)

    Wise, B.K.; Hughes, K.R.; Danko, S.L.; Gilbride, T.L.

    1994-07-01

    This report was prepared for the US Department of Energy (DOE) Office of Codes and Standards by the Pacific Northwest Laboratory (PNL) through its Building Energy Standards Program (BESP). The purpose of this task was to identify demand-side management (DSM) strategies for new construction that utilities have adopted or developed to promote energy-efficient design and construction. PNL conducted a survey of utilities and used the information gathered to extrapolate lessons learned and to identify evolving trends in utility new-construction DSM programs. The ultimate goal of the task is to identify opportunities where states might work collaboratively with utilities to promote the adoption, implementation, and enforcement of energy-efficient building energy codes.

  6. Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues

    Science.gov (United States)

    Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.

    Next generation wireless communication networks are expected to achieve ever increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for reaching the expected performance, because it combines the high capacity achievable with a MIMO channel with the benefits of space division multiple access. In MU-MIMO systems, the base stations transmit signals to two or more users over the same channel; as a result, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for the implementation of the analyzed technique in a possible FPGA-based software defined radio.
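
    The standard zero-forcing construction referred to in the title inverts the multi-user channel so that each user sees only its own stream: W = H^H (H H^H)^{-1}. A small numerical sketch follows; it is generic, not the paper's WiMAX transceiver code.

        import numpy as np

        def zero_forcing_precoder(H):
            """ZF pre-coder W = H^H (H H^H)^{-1}, scaled to unit total power.
            H is the K x Nt downlink channel (K users, Nt transmit antennas)."""
            W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
            return W / np.linalg.norm(W, "fro")

        rng = np.random.default_rng(0)
        K, Nt = 2, 4
        H = (rng.standard_normal((K, Nt))
             + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2.0)
        W = zero_forcing_precoder(H)
        print(np.round(np.abs(H @ W), 6))   # ~diagonal: inter-user terms nulled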

  7. Implementation of non-condensable gases condensation suppression model into the WCOBRA/TRAC-TF2 LOCA safety evaluation code

    Energy Technology Data Exchange (ETDEWEB)

    Liao, J.; Cao, L.; Ohkawa, K.; Frepoli, C. [LOCA Integrated Services I, Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States)

    2012-07-01

    A model of condensation suppression by non-condensable gases is important for a realistic LOCA safety analysis code. A condensation suppression model for direct contact condensation was previously developed by Westinghouse from first principles and is believed to describe accurately the direct contact condensation process in the presence of non-condensable gases. This Westinghouse condensation suppression model is further revised here by applying a more physical model, and the revised model is implemented into the WCOBRA/TRAC-TF2 LOCA safety evaluation code for both the 3-D module (COBRA-TF) and the 1-D module (TRAC-PF1). A parametric study using the revised Westinghouse condensation suppression model is conducted. Additionally, the performance of the non-condensable gases condensation suppression model is examined against the ACHILLES (ISP-25) separate effects test and the LOFT L2-5 (ISP-13) integral effects test. (authors)

  8. A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry

    Science.gov (United States)

    Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb

    2013-01-01

    A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…

  9. VULCAN: An Open-source, Validated Chemical Kinetics Python Code for Exoplanetary Atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Tsai, Shang-Min; Grosheintz, Luc; Kitzmann, Daniel; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Lyons, James R. [Arizona State University, School of Earth and Space Exploration, Bateman Physical Sciences, Tempe, AZ 85287-1404 (United States); Rimmer, Paul B., E-mail: shang-min.tsai@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch, E-mail: jimlyons@asu.edu [University of St. Andrews, School of Physics and Astronomy, St. Andrews, KY16 9SS (United Kingdom)

    2017-02-01

    We present an open-source and validated chemical kinetics code for studying hot exoplanetary atmospheres, which we name VULCAN. It is constructed for gaseous chemistry from 500 to 2500 K, using a reduced C–H–O chemical network with about 300 reactions. It uses eddy diffusion to mimic atmospheric dynamics and excludes photochemistry. We have provided a full description of the rate coefficients and thermodynamic data used. We validate VULCAN by reproducing chemical equilibrium and by comparing its output versus the disequilibrium-chemistry calculations of Moses et al. and Rimmer and Helling. It reproduces the models of HD 189733b and HD 209458b by Moses et al., which employ a network with nearly 1600 reactions. We also use VULCAN to examine the theoretical trends produced when the temperature–pressure profile and carbon-to-oxygen ratio are varied. Assisted by a sensitivity test designed to identify the key reactions responsible for producing a specific molecule, we revisit the quenching approximation and find that it is accurate for methane but breaks down for acetylene, because the disequilibrium abundance of acetylene is not directly determined by transport-induced quenching, but is rather indirectly controlled by the disequilibrium abundance of methane. Therefore we suggest that the quenching approximation should be used with caution and must always be checked against a chemical kinetics calculation. A one-dimensional model atmosphere with 100 layers, computed using VULCAN, typically takes several minutes to complete. VULCAN is part of the Exoclimes Simulation Platform (ESP; exoclime.net) and publicly available at https://github.com/exoclime/VULCAN.
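
    The disequilibrium-versus-equilibrium comparisons discussed above reduce, in the simplest case, to integrating stiff rate equations until the forward and reverse rates balance. A toy single-reaction sketch is given below, with illustrative rate constants that are not values from VULCAN's C-H-O network.

        import numpy as np
        from scipy.integrate import solve_ivp

        KF = 1.0e-12     # forward rate constant, A + B -> C  [cm^3 s^-1]
        KR = 1.0e-3      # reverse rate constant, C -> A + B  [s^-1]

        def rhs(t, n):
            a, b, c = n
            r = KF * a * b - KR * c        # net forward rate [cm^-3 s^-1]
            return [-r, -r, r]

        n0 = [1.0e10, 2.0e10, 0.0]         # initial number densities [cm^-3]
        sol = solve_ivp(rhs, (0.0, 1.0e4), n0, method="BDF",
                        rtol=1e-10, atol=1.0)
        a, b, c = sol.y[:, -1]
        print(c / (a * b), KF / KR)        # ratio relaxes to k_f / k_r

    A full network adds one such net-rate term per reaction plus an eddy-diffusion transport term in each layer, which is what makes quenching possible in the first place.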

  10. Code of Conduct on the Safety and Security of Radioactive Sources and the Supplementary Guidance on the Import and Export of Radioactive Sources

    International Nuclear Information System (INIS)

    2005-01-01

    In operative paragraph 4 of its resolution GC(47)/RES/7.B, the General Conference, having welcomed the approval by the Board of Governors of the revised IAEA Code of Conduct on the Safety and Security of Radioactive Sources (GC(47)/9), and while recognizing that the Code is not a legally binding instrument, urged each State to write to the Director General that it fully supports and endorses the IAEA's efforts to enhance the safety and security of radioactive sources and is working toward following the guidance contained in the IAEA Code of Conduct. In operative paragraph 5, the Director General was requested to compile, maintain and publish a list of States that have made such a political commitment. The General Conference, in operative paragraph 6, recognized that this procedure 'is an exceptional one, having no legal force and only intended for information, and therefore does not constitute a precedent applicable to other Codes of Conduct of the Agency or of other bodies belonging to the United Nations system'. In operative paragraph 7 of resolution GC(48)/RES/10.D, the General Conference welcomed the fact that more than 60 States had made political commitments with respect to the Code in line with resolution GC(47)/RES/7.B and encouraged other States to do so. In operative paragraph 8 of resolution GC(48)/RES/10.D, the General Conference further welcomed the approval by the Board of Governors of the Supplementary Guidance on the Import and Export of Radioactive Sources (GC(48)/13), endorsed this Guidance while recognizing that it is not legally binding, noted that more than 30 countries had made clear their intention to work towards effective import and export controls by 31 December 2005, and encouraged States to act in accordance with the Guidance on a harmonized basis and to notify the Director General of their intention to do so as supplementary information to the Code of Conduct, recalling operative paragraph 6 of resolution GC(47)/RES/7.B.

  11. Implementation and adaptation of the Computer Code ECOSYS/EXCEL for Austria as OECOSYS/EXCEL

    International Nuclear Information System (INIS)

    Hick, H.; Suda, M.; Mueck, K.

    1998-03-01

    During 1989, under contract to the Austrian Federal Chancellery, department VII, the radioecological forecast model OECOSYS was implemented by the Austrian Research Centre Seibersdorf on a VAX computer using VAX Fortran. OECOSYS allows the prediction of the consequences of a large-scale contamination event. During 1992, under contract to the Austrian Federal Ministry of Health, Sports and Consumer Protection, department III, OECOSYS - in the version of 1989 - was implemented on PCs in Seibersdorf and at the Ministry using OS/2 and Microsoft Fortran. In March 1993, the Ministry ordered an update, which had become necessary, and the evaluation of two exercise scenarios. Since that time the prognosis model, with its auxiliary program and communication facilities, has been kept on stand-by, and yearly exercises are performed to maintain its readiness. The current report describes the implementation and adaptation to Austrian conditions of the newly available EXCEL version of the German ECOSYS prognosis model as OECOSYS. (author)

  12. On the implementation of new technology modules for fusion reactor systems codes

    Energy Technology Data Exchange (ETDEWEB)

    Franza, F., E-mail: fabrizio.franza@kit.edu [Institute of Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany); Boccaccini, L.V.; Fisher, U. [Institute of Neutron Physics and Reactor Technology, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany); Gade, P.V.; Heller, R. [Institute for Technical Physics, Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen, 76344 (Germany)

    2015-10-15

    Highlights: • At KIT, new technology modules for systems codes are under development. • A new algorithm for the definition of the main reactor components is defined. • A new blanket model based on 1D neutronics analysis is described. • A new TF coil stress model based on 3D electromagnetic analysis is described. • The models were successfully benchmarked against more detailed models. - Abstract: In the frame of the pre-conceptual design of the next-generation fusion power plant (DEMO), systems codes have been used for nearly 20 years. In such computational tools the main reactor components (e.g. plasma, blanket, magnets) are integrated into a single computational algorithm and simulated by means of rather simplified mathematical models (e.g. steady-state, zero-dimensional models). The systems code tries to identify the main design parameters (e.g. major radius, net electrical power, toroidal field) such that the reactor's requirements and constraints are satisfied simultaneously. In fusion applications, requirements and constraints can be of either a physics or a technology kind. Concerning the latter category, a new modelling activity has recently been launched at Karlsruhe Institute of Technology, aiming to develop improved models for the main technology areas, such as neutronics, thermal-hydraulics, electromagnetics, structural mechanics, the fuel cycle and vacuum systems. These activities started with the development of: (1) a geometry model for the definition of the poloidal profiles of the main reactor components, (2) a blanket model based on neutronics analyses, and (3) a toroidal field coil model based on electromagnetic analysis, focusing first on stress calculations. The objective of this paper is therefore to give a short outline of these models.

  13. On the implementation of new technology modules for fusion reactor systems codes

    International Nuclear Information System (INIS)

    Franza, F.; Boccaccini, L.V.; Fisher, U.; Gade, P.V.; Heller, R.

    2015-01-01

    Highlights: • At KIT, new technology modules for systems codes are under development. • A new algorithm for the definition of the main reactor components is defined. • A new blanket model based on 1D neutronics analysis is described. • A new TF coil stress model based on 3D electromagnetic analysis is described. • The models were successfully benchmarked against more detailed models. - Abstract: In the frame of the pre-conceptual design of the next-generation fusion power plant (DEMO), systems codes have been used for nearly 20 years. In such computational tools the main reactor components (e.g. plasma, blanket, magnets) are integrated into a single computational algorithm and simulated by means of rather simplified mathematical models (e.g. steady-state, zero-dimensional models). The systems code tries to identify the main design parameters (e.g. major radius, net electrical power, toroidal field) such that the reactor's requirements and constraints are satisfied simultaneously. In fusion applications, requirements and constraints can be of either a physics or a technology kind. Concerning the latter category, a new modelling activity has recently been launched at Karlsruhe Institute of Technology, aiming to develop improved models for the main technology areas, such as neutronics, thermal-hydraulics, electromagnetics, structural mechanics, the fuel cycle and vacuum systems. These activities started with the development of: (1) a geometry model for the definition of the poloidal profiles of the main reactor components, (2) a blanket model based on neutronics analyses, and (3) a toroidal field coil model based on electromagnetic analysis, focusing first on stress calculations. The objective of this paper is therefore to give a short outline of these models.

  14. Implementation of second moment closure turbulence model for incompressible flows in the industrial finite element code N3S

    International Nuclear Information System (INIS)

    Pot, G.; Laurence, D.; Rharif, N.E.; Leal de Sousa, L.; Compe, C.

    1995-12-01

    This paper deals with the introduction of a second moment closure turbulence model (Reynolds Stress Model) into an industrial finite element code, N3S, developed at Electricite de France. The numerical implementation of the model in N3S is detailed in 2D and 3D, and some details are given concerning the finite element computations and solvers. Results are then presented, including a comparison between the standard k-ε model, the R.S.M. model, and experimental data for several test cases. (authors). 22 refs., 3 figs

  15. Structural integrity assessment of a pressure container component. Design and service code implementation. Case studies

    International Nuclear Information System (INIS)

    Sanzi, H.C.

    2006-01-01

    In the present work, the most important results for the local stresses in pipes with an external axial through-wall crack, produced during operation of a petrochemical plant, are presented; they were obtained using the finite element method. As requested, the component has been verified by a 3D FE plastic analysis under the postulated failure loading, a method that assures a high degree of accuracy in the results. Design and service codes, such as ASME Section VIII Div. 2 and API 579, have been used in the analysis. (author) [es]

  16. Implementation, reliability, and feasibility test of an Open-Source PACS.

    Science.gov (United States)

    Valeri, Gianluca; Zuccaccia, Matteo; Badaloni, Andrea; Ciriaci, Damiano; La Riccia, Luigi; Mazzoni, Giovanni; Maggi, Stefania; Giovagnoni, Andrea

    2015-12-01

    To implement a hardware and software system able to perform the major functions of an Open-Source PACS, and to analyze it in a simulated real-world environment. A small home network was implemented, and the Open-Source operating system Ubuntu 11.10 was installed on a laptop together with the Dcm4chee suite and the required software components. The Open-Source PACS implemented is compatible with Linux, Microsoft, and Mac OS X operating systems; furthermore, it was used with the operating systems of portable devices (smartphones, tablets), namely Android and iOS. An Open-Source PACS is useful for tutorials and workshops on post-processing techniques for educational and training purposes.

  17. Implementation of creep-fatigue model into finite-element code to assess cooled turbine blade.

    CSIR Research Space (South Africa)

    Dedekind, MO

    1994-01-01

    Turbine blades which are designed with airfoil cooling are subject to thermo-mechanical fatigue as well as creep damage. These problems arise due to thermal cycling and high operating temperatures in service. An implementation of fatigue and creep...

  18. Implementation of refined core thermal-hydraulic calculation feature in the MARS/MASTER code

    International Nuclear Information System (INIS)

    Joo, H. K.; Jung, J. J.; Cho, B. O.; Ji, S. K.; Lee, W. J.; Jang, M. H.

    2000-01-01

    As an effort to enhance the fidelity of the core thermal-hydraulic calculation in the MARS/MASTER code, a best-estimate coupled system/core code, the COBRA-III module of MASTER is activated, enabling refined core T/H calculations. Since the COBRA-III module is capable of using fuel-assembly-sized nodes, the resolution of the T/H solution is high, so that local T/H feedback effects can be incorporated accurately. The COBRA-III module is utilized such that the refined core T/H calculation is performed using the coarse-mesh flow boundary conditions specified by MARS at both ends of the core. The results of an application to the OECD MSLB benchmark indicate that the local peaking factor can be reduced by up to 15% with the refined calculation, through accurate evaluation of the local Doppler effect, although the prediction of global transient behaviors such as the total core power change remains essentially unaffected.

  19. Numeric implementation of a nucleation, growth and transport model for helium bubbles in lead-lithium HCLL breeding blanket channels: Theory and code development

    Energy Technology Data Exchange (ETDEWEB)

    Batet, L., E-mail: lluis.batet@upc.edu [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Physics and Nuclear Engineering (DFEN), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Fradera, J. [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Physics and Nuclear Engineering (DFEN), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Valls, E. Mas de les [Technical University of Catalonia (UPC), Energy and Radiation Studies Research Group (GREENER), Technology for Fusion T4F, Barcelona (Spain); UPC, Department of Heat Engines (DMMT), ETSEIB, Av. Diagonal 647, 08028 Barcelona (Spain); Sedano, L.A. [EURATOM-CIEMAT Association, Fusion Technology Division, Av. Complutense 22, 28040 Madrid (Spain)

    2011-06-15

    Large helium (He) production rates in the liquid metal breeding blankets of a DT fusion reactor might have a significant influence on the system design. Low He solubility together with high local concentrations may create the conditions for He cavitation, which would have an impact on the performance of the components. The paper argues that such a possibility is not remote in a helium cooled lithium-lead breeding blanket design. A model based on the Classical Nucleation Theory (CNT) has been developed and implemented in order to have a specific tool able to simulate HCLL systems and identify the key parameters and sensitivities. The nucleation and growth model has been implemented in the open source CFD code OpenFOAM so that the transport of dissolved atomic He and of nucleated He bubbles can be simulated. At the current level of development it is assumed that the void fraction is small enough not to affect either the hydrodynamics or the properties of the liquid metal; thus, bubbles can be represented by means of a passive scalar. He growth and transport have been implemented using the mean-radius approach in order to save computational time. Limitations and capabilities of the model are shown by means of a zero-dimensional simulation and a sensitivity analysis under HCLL breeding unit conditions.
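
    In CNT, the homogeneous nucleation rate takes the form J = J0 exp(-W*/kT), with critical work W* = 16*pi*sigma^3/(3*dp^2) and critical radius r* = 2*sigma/dp. The sketch below just evaluates these textbook expressions; the numbers are placeholders rather than Pb-15.7Li design values, and the kinetic pre-factor J0 in particular is strongly model dependent.

        import math

        KB = 1.380649e-23                  # Boltzmann constant [J/K]

        def cnt_bubble_nucleation(j0, sigma, dp, T):
            """Classical Nucleation Theory estimate for bubble nucleation.
            sigma: surface tension [N/m]; dp: bubble over-pressure [Pa];
            j0: kinetic pre-factor [m^-3 s^-1] (assumed)."""
            w_star = 16.0 * math.pi * sigma**3 / (3.0 * dp**2)
            r_star = 2.0 * sigma / dp
            return j0 * math.exp(-w_star / (KB * T)), r_star

        # Placeholder inputs, for illustration only:
        J, r_crit = cnt_bubble_nucleation(j0=1.0e38, sigma=0.45,
                                          dp=2.0e9, T=700.0)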

  20. Monte Carlo method implemented in a finite element code with application to dynamic vacuum in particle accelerators

    CERN Document Server

    Garion, C

    2009-01-01

    Modern particle accelerators require UHV conditions during their operation. In the accelerating cavities, breakdowns can occur, releasing large amounts of gas into the vacuum chamber. To determine the pressure profile along the cavity as a function of time, the time-dependent behaviour of the gas has to be simulated. To do that, it is useful to apply an accurate three-dimensional method, such as Test Particle Monte Carlo. In this paper, a time-dependent Test Particle Monte Carlo method is used. It has been implemented in a Finite Element code, CASTEM. The principle is to track a sample of molecules over time. The complex geometry of the cavities can be created either in the FE code or in a CAD software (CATIA in our case). The interface between the two programs to export the geometry from CATIA to CASTEM is given. The algorithm of particle tracking for collisionless flow in the FE code is shown. Thermal outgassing, pumping surfaces and electron- and/or ion-stimulated desorption can all be generated as well as differ...
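
    The core of such a collisionless (free-molecular) Test Particle Monte Carlo is simple to picture: fly straight lines between wall hits and re-emit diffusely with the Knudsen cosine law. The sketch below computes the transmission probability of a plain cylindrical tube; the geometry and sampling are generic illustrations, not the CASTEM implementation described in the record.

```python
# Minimal collisionless Test Particle Monte Carlo: transmission probability of
# a cylinder with diffuse (cosine-law) wall reflection. Generic illustration.
import numpy as np

rng = np.random.default_rng(0)
R, L, N = 1.0, 5.0, 20000  # tube radius, length, number of test particles

def cosine_direction(normal):
    """Sample a direction from the cosine (Lambert) law about a unit normal."""
    t = np.array([1.0, 0.0, 0.0]) if abs(normal[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(normal, t); u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    phi = 2 * np.pi * rng.random()
    sin_theta = np.sqrt(rng.random())          # cosine-law polar angle
    cos_theta = np.sqrt(1 - sin_theta**2)
    return cos_theta * normal + sin_theta * (np.cos(phi) * u + np.sin(phi) * v)

transmitted = 0
for _ in range(N):
    # start on the inlet plane z = 0, cosine-distributed into the tube
    r0 = R * np.sqrt(rng.random()); a0 = 2 * np.pi * rng.random()
    pos = np.array([r0 * np.cos(a0), r0 * np.sin(a0), 0.0])
    d = cosine_direction(np.array([0.0, 0.0, 1.0]))
    while True:
        # distance to the cylindrical wall: solve |(x,y) + s*(dx,dy)| = R
        a = d[0]**2 + d[1]**2
        b = pos[0]*d[0] + pos[1]*d[1]
        s_wall = (-b + np.sqrt(b*b + a*(R*R - pos[0]**2 - pos[1]**2))) / a if a > 0 else np.inf
        z_exit = L if d[2] > 0 else 0.0
        s_axial = (z_exit - pos[2]) / d[2] if d[2] != 0 else np.inf
        if s_axial < s_wall:          # leaves through an end plane
            transmitted += d[2] > 0
            break
        pos = pos + s_wall * d        # hits the wall: diffuse re-emission
        normal = np.array([-pos[0], -pos[1], 0.0]) / R
        d = cosine_direction(normal)

print("transmission probability ~", transmitted / N)
```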

  1. Implications caused by SARex on the implementation of the IMO polar code on survival at sea

    Science.gov (United States)

    Solberg, K. E.

    2017-12-01

    The International Code for Ships Operating in Polar Waters goes into effect on 01 January 2018 for all ships. This puts additional strain on vessel owners and operators, as they will have to comply with an additional set of requirements, including the functional requirement of a minimum of 5 days' survival time. The SARex exercise has elaborated on the issue of survival in close cooperation with the different stakeholders associated with the marine industry. Being an objective third party is important when organizing and executing these activities, as all of the stakeholders have different agendas and priorities. Developing sustainable solutions is a balancing act, incorporating economic and political aspects as well as technology, and requires a mutual understanding of the mechanisms involved.

  2. Implementation of Finite Volume based Navier Stokes Algorithm Within General Purpose Flow Network Code

    Science.gov (United States)

    Schallhorn, Paul; Majumdar, Alok

    2012-01-01

    This paper describes a finite volume based numerical algorithm that allows multi-dimensional computation of fluid flow within a system-level network flow analysis. There are several thermo-fluid engineering problems where higher fidelity solutions are needed that are not within the capacity of system-level codes. The proposed algorithm will allow NASA's Generalized Fluid System Simulation Program (GFSSP) to perform multi-dimensional flow calculation within the framework of GFSSP's typical system-level flow network consisting of fluid nodes and branches. The paper presents several classical two-dimensional fluid dynamics problems that have been solved by GFSSP's multi-dimensional flow solver. The numerical solutions are compared with the analytical and benchmark solutions of Poiseuille flow, Couette flow and flow in a driven cavity.
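
    For reference, the sketch below solves one such benchmark, fully developed plane Poiseuille flow, with a minimal finite-volume discretization checked against the analytical profile. It is a generic illustration of the finite-volume idea, not GFSSP code.

```python
# Finite-volume solution of mu * d2u/dy2 = dp/dx on N cells with no-slip walls,
# compared to the analytical Poiseuille profile. Generic illustration.
import numpy as np

N, H, mu, dpdx = 50, 1.0, 1.0e-3, -1.0   # cells, channel height, viscosity, dp/dx
dy = H / N
y = (np.arange(N) + 0.5) * dy            # cell centres

A = np.zeros((N, N)); b = np.full(N, dpdx / mu * dy * dy)
for i in range(N):
    A[i, i] = -2.0
    if i > 0: A[i, i - 1] = 1.0
    if i < N - 1: A[i, i + 1] = 1.0
# Ghost-cell treatment of the no-slip walls (u_ghost = -u_adjacent).
A[0, 0] = A[-1, -1] = -3.0

u = np.linalg.solve(A, b)
u_exact = -dpdx / (2.0 * mu) * y * (H - y)   # analytical Poiseuille profile
print("max relative error:", np.max(np.abs(u - u_exact)) / u_exact.max())
```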

  3. Managing ethics in higher education : implementing a code or embedding virtue ?

    OpenAIRE

    Moore, G.

    2006-01-01

    This paper reviews a publication entitled 'Ethics Matters. Managing Ethical Issues in Higher Education', which was distributed to all UK universities and equivalent (HEIs) in October 2005. The publication proposed that HEIs should put in place an institution-wide ethical policy framework, well beyond the customary focus on research ethics, together with the mechanisms necessary to ensure its implementation. Having summarised the processes that led to the publication and the publication itself...

  4. Parallel Implementation of the Multi-Dimensional Spectral Code SPECT3D on large 3D grids.

    Science.gov (United States)

    Golovkin, Igor E.; Macfarlane, Joseph J.; Woodruff, Pamela R.; Pereyra, Nicolas A.

    2006-10-01

    The multi-dimensional collisional-radiative, spectral analysis code SPECT3D can be used to study radiation from complex plasmas. SPECT3D can generate instantaneous and time-gated images and spectra, space-resolved and streaked spectra, which makes it a valuable tool for post-processing hydrodynamics calculations and for direct comparison between simulations and experimental data. On large three-dimensional grids, transporting radiation along lines of sight (LOS) requires substantial memory and CPU resources. Currently, the parallel option in SPECT3D is based on parallelization over photon frequencies and allows for a nearly linear speed-up for a variety of problems. In addition, we are introducing a new parallel mechanism that will greatly reduce memory requirements. In the new implementation, spatial domain decomposition will be utilized, allowing transport along a LOS to be performed only on the mesh cells the LOS crosses. The ability to operate on a fraction of the grid is crucial for post-processing the results of large-scale three-dimensional hydrodynamics simulations. We will present a parallel implementation of the code and provide a scalability study performed on a Linux cluster.

  5. Implementation of wall film condensation model to two-fluid model in component thermal hydraulic analysis code CUPID - 15237

    International Nuclear Information System (INIS)

    Lee, J.H.; Park, G.C.; Cho, H.K.

    2015-01-01

    In the containment of a nuclear reactor, wall condensation occurs when the containment cooling system and structures remove the released mass and energy, and this phenomenon is of great importance in ensuring containment integrity. If the phenomenon occurs in the presence of non-condensable gases, their accumulation near the condensate film leads to a significant reduction in heat transfer during condensation. This study aims at simulating wall film condensation in the presence of non-condensable gas using CUPID, a computational multi-fluid dynamics code developed by the Korea Atomic Energy Research Institute (KAERI) for the analysis of transient two-phase flows in nuclear reactor components. In order to simulate wall film condensation in containment, the code requires a proper wall condensation model and a liquid film model applicable to the analysis of large-scale systems. In the present study, the liquid film model and wall film condensation model were implemented in the two-fluid model of CUPID. For the condensation simulation, a wall function approach with the heat and mass transfer analogy was applied in order to save computational time without requiring considerable mesh refinement in the boundary layer. This paper presents the implemented wall film condensation model and then introduces the simulation results using CUPID with the model for a conceptual condensation problem in a large system. (authors)
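
    The heat and mass transfer analogy can be pictured with the simple sketch below: a convective heat transfer coefficient is converted into a mass transfer coefficient (Chilton-Colburn form) and combined with a log-mean driving force that accounts for the non-condensable accumulation at the film. This is a generic textbook-style closure with assumed property values, not necessarily the exact CUPID model.

```python
# Wall-function-style condensation mass flux with a non-condensable gas, using
# the heat/mass transfer analogy. Constants and properties are illustrative.
import math

def condensation_mass_flux(h_conv, cp, rho, D, nu, alpha, X_v_bulk, X_v_wall):
    """Condensation mass flux [kg/m^2/s]: Chilton-Colburn analogy converts
    h_conv to a mass transfer coefficient; the log term is the Stefan-flow
    (suction) correction from the diffusion-layer model."""
    Sc = nu / D            # Schmidt number
    Pr = nu / alpha        # Prandtl number
    h_m = h_conv / (rho * cp) * (Pr / Sc) ** (2.0 / 3.0)
    return rho * h_m * math.log((1.0 - X_v_wall) / (1.0 - X_v_bulk))

# Example: steam-air mixture near a cold containment wall (assumed values).
m_dot = condensation_mass_flux(h_conv=50.0, cp=1.05e3, rho=1.2, D=2.5e-5,
                               nu=1.6e-5, alpha=2.2e-5,
                               X_v_bulk=0.40, X_v_wall=0.15)
print(f"condensation mass flux ~ {m_dot:.4f} kg/m^2/s")
```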

  6. Open-source tool for automatic import of coded surveying data to multiple vector layers in GIS environment

    Directory of Open Access Journals (Sweden)

    Eva Stopková

    2016-12-01

    Full Text Available This paper deals with a tool that enables the import of coded data in a single text file to more than one vector layer (including attribute tables), together with automatic drawing of line and polygon objects and with optional conversion to CAD. The Python script v.in.survey is available as an add-on for the open-source software GRASS GIS (GRASS Development Team). The paper describes a case study based on surveying at the archaeological mission at Tell el-Retaba (Egypt). Advantages of the tool (e.g. significant optimization of surveying work) and its limits (demands on keeping conventions for the points' names coding) are discussed here as well. Possibilities of future development are suggested (e.g. generalization of points' names coding or more complex attribute table creation).
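
    The idea of splitting coded points into layers can be shown in a few lines of Python. The naming convention and file format below are hypothetical stand-ins for the add-on's actual conventions, chosen only to illustrate the parsing step.

```python
# Split coded survey points from a single text file into per-layer geometries.
# Assumed (hypothetical) convention: "<layer>_<object-id>_<vertex-no> E N Z".
from collections import defaultdict

raw = """wall_01_1 1012.3 2208.1 12.5
wall_01_2 1015.7 2209.0 12.4
well_02_1 1003.2 2201.8 11.9
"""

layers = defaultdict(lambda: defaultdict(list))
for line in raw.strip().splitlines():
    name, e, n, z = line.split()
    layer, obj, _vertex = name.split("_")
    layers[layer][obj].append((float(e), float(n), float(z)))

for layer, objects in layers.items():
    for obj, vertices in objects.items():
        kind = "line" if len(vertices) > 1 else "point"
        print(layer, obj, kind, vertices)
```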

  7. BLT [Breach, Leach, and Transport]: A source term computer code for low-level waste shallow land burial

    International Nuclear Information System (INIS)

    Suen, C.J.; Sullivan, T.M.

    1990-01-01

    This paper discusses the development of a source term model for low-level waste shallow land burial facilities and separates the problem into four individual compartments. These are water flow, corrosion and subsequent breaching of containers, leaching of the waste forms, and solute transport. For the first and the last compartments, we adopted the existing codes, FEMWATER and FEMWASTE, respectively. We wrote two new modules for the other two compartments in the form of two separate Fortran subroutines -- BREACH and LEACH. They were incorporated into a modified version of the transport code FEMWASTE. The resultant code, which contains all three modules of container breaching, waste form leaching, and solute transport, was renamed BLT (for Breach, Leach, and Transport). This paper summarizes the overall program structure and logistics, and presents two examples from the results of verification and sensitivity tests. 6 refs., 7 figs., 1 tab

  8. Computational methods and implementation of the 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction

    International Nuclear Information System (INIS)

    Aragones, J.M.; Ahnert, C.

    1995-01-01

    New computational methods have been developed in our 3-D PWR core dynamics SIMTRAN code for online surveillance and prediction. They improve the accuracy and efficiency of the coupled neutronic/thermal-hydraulic solution and extend its scope to provide, mainly, the calculation of: the fission reaction rates at the incore mini-detectors; the responses at the excore detectors (power range); the temperatures at the thermocouple locations; and the in-vessel distribution of the loop cold-leg inlet coolant conditions in the reflector and core channels, and to the hot-leg outlets per loop. The functional capabilities implemented in the extended SIMTRAN code for online utilization include: online surveillance, incore-excore calibration, evaluation of peak power factors and thermal margins, nominal update and cycle follow, prediction of maneuvers and diagnosis of fast transients and oscillations. The new code has been installed at the Vandellos-II PWR unit in Spain since the startup of its cycle 7 in mid-June, 1994. The computational implementation has been performed on HP-700 workstations under the HP-UX Unix system, including the man-machine interfaces for online acquisition of measured data and interactive graphical utilization, in C and X11. The agreement of the simulated results with the measured data, during the startup tests and the first months of actual operation, is well within the accuracy requirements. The performance and usefulness shown during the testing and demo phase, to be extended along this cycle, have proved that SIMTRAN and the man-machine graphic user interface have the qualities for fast, accurate, user-friendly, reliable, detailed and comprehensive online core surveillance and prediction.

  9. Expected Range of Cooperation Between Transmission System Operators and Distribution System Operators After Implementation of ENTSO-E Grid Codes

    Directory of Open Access Journals (Sweden)

    Tomasz Pakulski

    2015-06-01

    Full Text Available The authors present the prospects of cooperation between transmission system operators (TSO) and distribution system operators (DSO) after the entry into force of the ENTSO-E (European Network of Transmission System Operators for Electricity) grid codes. New areas of DSO activities are presented, associated with offering the TSO aggregated services for national power system regulation based on the regulation resources connected to the distribution grid, and with services at the distribution system level as part of the creation of local balancing areas (LBA). The paper also presents the possibilities of providing ancillary services by different types of distributed generation sources in the distribution network. The LBA concept, which involves integrated management of local regulation resources including generation, demand, and energy storage, is described. The options for using renewable energy sources (RES) for voltage and reactive power control in the distribution network, with wind farms (WF) connected to the distribution system, are characterized.

  10. SCRIC: a code dedicated to the detailed emission and absorption of heterogeneous NLTE plasmas; application to xenon EUV sources

    International Nuclear Information System (INIS)

    Gaufridy de Dortan, F. de

    2006-01-01

    Nearly all spectral opacity codes for LTE and NLTE plasmas rely on approximate configuration modelling, or even supra-configuration modelling, for mid-Z plasmas. But in some cases, configuration interaction (both relativistic and non-relativistic) induces dramatic changes in spectral shapes. We propose here a new detailed emissivity code with configuration mixing to allow for a realistic description of complex mid-Z plasmas. A collisional-radiative calculation, based on precise HULLAC energies and cross sections, determines the populations. Detailed emissivities and opacities are then calculated and the radiative transfer equation is solved for wide inhomogeneous plasmas. This code is able to cope rapidly with very large amounts of atomic data. It is therefore possible to use complex hydrodynamic files, even on personal computers, in a very limited time. We used this code for comparison with xenon EUV sources within the framework of nano-lithography developments. It appears that configuration mixing strongly shifts satellite lines and must be included in the description of these sources to enhance their efficiency. (author)
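
    At the heart of any collisional-radiative calculation sits a steady-state solve of the level population rate equations. The toy example below shows the structure with a 3-level system; the rate coefficients are arbitrary illustrative numbers (a real code uses HULLAC-scale data sets with many thousands of levels).

```python
# Steady-state collisional-radiative populations: solve A n = 0 subject to a
# normalization constraint sum(n) = 1. Rates are arbitrary illustrations.
import numpy as np

# R[i, j] = total rate from level j into level i (collisional + radiative).
R = np.array([[0.0,   2.0e8, 1.0e7],
              [5.0e7, 0.0,   3.0e8],
              [1.0e6, 4.0e7, 0.0]])
A = R - np.diag(R.sum(axis=0))   # diagonal = total loss; columns sum to zero

# Replace one equation by the normalization to get a unique solution.
M = A.copy(); M[-1, :] = 1.0
b = np.zeros(3); b[-1] = 1.0
n = np.linalg.solve(M, b)
print("fractional populations:", n)
```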

  11. Implementation of VOC source reduction practices in a manufactured house and in school classrooms

    International Nuclear Information System (INIS)

    Hodgson, A.T.; Apte, M.G.; Shendell, D.G.; Beal, D.; McIlvaine, J.E.R.

    2002-01-01

    Detailed studies of a new manufactured house and four new industrialized relocatable school classrooms were conducted to determine the emission sources of formaldehyde and other VOCs and to identify and implement source reduction practices. Procedures were developed to generate VOC emission factors that allowed reasonably accurate predictions of indoor air VOC concentrations. Based on the identified sources of formaldehyde and other aldehydes, practices were developed to reduce the concentrations of these compounds in new house construction. An alternate ceiling panel reduced formaldehyde concentrations in the classrooms. Overall, the classrooms had relatively low VOC concentrations

  12. Implementation of Design Changes Towards a More Reliable, Hands-off Magnetron Ion Source

    Energy Technology Data Exchange (ETDEWEB)

    Sosa, A. [Fermilab; Bollinger, D. S. [Fermilab; Karns, P. R. [Fermilab; Tan, C. Y. [Fermilab

    2017-12-07

    As the main H- ion source for the accelerator complex, magnetron ion sources have been used at Fermilab since the 1970s. At the offline test stand, new R&D is carried out to develop and upgrade the present magnetron-type sources of H- ions of up to 80 mA and 35 keV beam energy in the context of the Proton Improvement Plan. The aim of this plan is to provide high-power proton beams for the experiments at FNAL. In order to reduce the amount of tuning and monitoring of these ion sources, a new electronic system consisting of a current-regulated arc discharge modulator allows the ion source to run at a constant arc current for improved beam output and operation. A solenoid-type gas valve feeds H2 gas into the source precisely and independently of ambient temperature. This summary will cover several studies and design changes that have been tested and will eventually be implemented on the operational magnetron sources at Fermilab. Innovative results for this type of ion source include cathode geometries, solenoid gas valves, a current-controlled arc pulser, a cesium boiler redesign, gas mixtures of hydrogen and nitrogen, and duty factor reduction, with the aim of improving source lifetime and stability and reducing the amount of tuning needed. In this summary, I will highlight the advances made in ion sources at Fermilab and will outline the directions of the continuing R&D effort.

  13. Use of CITATION code for flux calculation in neutron activation analysis with voluminous sample using an Am-Be source

    International Nuclear Information System (INIS)

    Khelifi, R.; Idiri, Z.; Bode, P.

    2002-01-01

    The CITATION code, based on neutron diffusion theory, was used for flux calculations inside voluminous samples in prompt gamma activation analysis with an isotopic neutron source (Am-Be). The code uses specific parameters related to the source energy spectrum and the irradiation system materials (shielding, reflector). The flux distribution (thermal and fast) was calculated in three-dimensional geometry for the system: air, polyethylene and a cuboidal water sample (50x50x50 cm). The thermal flux was calculated at a series of points inside the sample. The results agreed reasonably well with observed values. The maximum thermal flux was observed at a depth of 3.2 cm, while CITATION gave 3.7 cm. Beyond a depth of 7.2 cm, the thermal-to-fast flux ratio increases by up to a factor of two, which allows us to optimise the detection system position for in-situ PGAA.

  14. OpenPSTD : The open source implementation of the pseudospectral time-domain method

    NARCIS (Netherlands)

    Krijnen, T.; Hornikx, M.C.J.; Borkowski, B.

    2014-01-01

    An open source implementation of the pseudospectral time-domain method for the propagation of sound is presented, which is geared towards applications in the built environment. Being a wave-based method, PSTD captures phenomena like diffraction, but maintains efficiency in processing time and memory.

  15. The AAM-API: An Open Source Active Appearance Model Implementation

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2003-01-01

    This paper presents a public domain implementation of the Active Appearance Model framework and gives examples using it for segmentation and analysis of medical images. The software is open source, designed with efficiency in mind, and has been thoroughly tested and evaluated in several medical...

  16. Implementation of an approximate zero-variance scheme in the TRIPOLI Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, S.; Hoogenboom, J. E. [Delft Univ. of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); Dumonteil, E.; Petit, O.; Diop, C. [Commissariat a l' Energie Atomique CEA, Gif-sur-Yvette (France)

    2006-07-01

    In an accompanying paper it is shown that theoretically a zero-variance Monte Carlo scheme can be devised for criticality calculations if the space, energy and direction dependent adjoint function is exactly known. This requires biasing of the transition and collision kernels with the appropriate adjoint function. In this paper it is discussed how an existing general purpose Monte Carlo code like TRIPOLI can be modified to approach the zero-variance scheme. This requires modifications for reading in the adjoint function obtained from a separate deterministic calculation for a number of space intervals, energy groups and discrete directions. Furthermore, a function has to be added to supply the direction-dependent and the averaged adjoint function at a specific position in the system by interpolation. The initial particle weights of a certain batch must be set inversely proportional to the averaged adjoint function, and proper normalization of the initial weights must be secured. The sampling of the biased transition kernel requires cumulative integrals of the biased kernel along the flight path until a certain value, depending on a selected random number, is reached to determine a new collision site. The weight of the particle must be adapted accordingly. The sampling of the biased collision kernel (in a multigroup treatment) is much more like the normal sampling procedure. A numerical example is given for a 3-group calculation with a simplified transport model (two-direction model), demonstrating that the zero-variance scheme can be approximated quite well for this simplified case. (authors)
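
    The flight-path sampling step can be pictured as follows: tabulate the biased kernel along the path, build its cumulative integral, invert it with a random number, and correct the particle weight by the true-to-biased kernel ratio. The one-dimensional sketch below uses a made-up adjoint shape purely for illustration; TRIPOLI itself reads the adjoint from a deterministic calculation.

```python
# Sampling a biased transition kernel by cumulative integration along the
# flight path, with a weight correction T(s)/T~(s). One-dimensional sketch.
import numpy as np

rng = np.random.default_rng(1)
sigma_t = 1.0                            # total cross section [1/cm]
adjoint = lambda x: np.exp(0.5 * x)      # assumed importance (adjoint) shape

def sample_biased_flight(x0, smax=20.0, ns=2000):
    s = np.linspace(0.0, smax, ns)
    true_k = sigma_t * np.exp(-sigma_t * s)   # analog transition kernel T(s)
    biased = true_k * adjoint(x0 + s)         # unnormalized biased kernel
    ds = s[1] - s[0]
    norm = biased.sum() * ds                  # normalization of the biased kernel
    cdf = np.cumsum(biased) * ds / norm       # cumulative integral along the path
    s_new = s[np.searchsorted(cdf, rng.random())]
    w_factor = norm / adjoint(x0 + s_new)     # weight correction T(s)/T~(s)
    return s_new, w_factor

x, w = 0.0, 1.0
s_new, w_factor = sample_biased_flight(x)
x, w = x + s_new, w * w_factor
print(f"collision at x = {x:.3f}, particle weight = {w:.3f}")
```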

  17. Code implementation of partial-range angular scattering cross sections: GAMMER and MORSE

    International Nuclear Information System (INIS)

    Ward, J.T. Jr.

    1978-01-01

    A partial-range (finite-element) method has been previously developed for representing multigroup angular scattering in Monte Carlo photon transport. Computer application of the method, with preliminary quantitative results, is discussed here. A multigroup photon cross section processing code, GAMMER, was written which utilizes ENDF File 23 point data and the Klein-Nishina formula for Compton scattering. The cross section module of MORSE, along with several execution routines, was rewritten to permit use of the method with photon transport. Both conventional and partial-range techniques were applied, for comparison, to calculating the angular and spectral penetration of 6-MeV photons through a six-inch iron slab. GAMMER was found to run 90% faster than SMUG, with further improvement evident for multiple-media situations; MORSE cross section storage was reduced by one-third; cross section processing was greatly simplified; and execution time was reduced by 15%. Particle penetration was clearly more forward peaked, as moment accuracy is retained to extremely high order. This method of cross section treatment offers potential savings in both storage and handling, as well as improved accuracy and running time in the actual execution phase. 3 figures, 4 tables

  18. Assessment of cavity dispersal correlations for possible implementation in the CONTAIN code

    International Nuclear Information System (INIS)

    Williams, D.C.; Griffith, R.O.

    1996-02-01

    Candidate models and correlations describing the entrainment and dispersal of core debris from reactor cavities in a direct containment heating (DCH) event are assessed against a database of approximately 600 experiments performed previously at Brookhaven National Laboratory and Sandia National Laboratories, in which debris dispersal from scaled reactor cavities was studied. Cavity geometries studied are those of the Surry and Zion nuclear power plants, and scale factors of 1/42 and 1/10 were studied for both geometries. Other parameters varied in the experiments include the gas pressure driving the dispersal, the identities of the driving gas and of the simulant fluid, the orifice diameter in the pressure vessel, and the volume of the gas pressure vessel. Correlations were assessed in terms of their ability to reproduce the observed trends in the fractions dispersed as the experimental parameters were varied. For the fraction of the debris dispersed, the correlations recommended for inclusion in the CONTAIN code are the Tutu-Ginsberg correlations, the integral form of the correlation proposed by Levy, and a modified form of the Whalley-Hewitt correlation. For entrainment rates, the recommended correlations are the time-dependent forms of the Levy correlation, a correlation suggested by Tutu, and the modified Whalley-Hewitt correlation.

  19. EchoSeed Model 6733 Iodine-125 brachytherapy source: Improved dosimetric characterization using the MCNP5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Mosleh-Shirazi, M. A.; Hadad, K.; Faghihi, R.; Baradaran-Ghahfarokhi, M.; Naghshnezhad, Z.; Meigooni, A. S. [Center for Research in Medical Physics and Biomedical Engineering and Physics Unit, Radiotherapy Department, Shiraz University of Medical Sciences, Shiraz 71936-13311 (Iran, Islamic Republic of); Radiation Research Center and Medical Radiation Department, School of Engineering, Shiraz University, Shiraz 71936-13311 (Iran, Islamic Republic of); Comprehensive Cancer Center of Nevada, Las Vegas, Nevada 89169 (United States)

    2012-08-15

    This study primarily aimed to obtain the dosimetric characteristics of the Model 6733 {sup 125}I seed (EchoSeed) with improved precision and accuracy using a more up-to-date Monte Carlo code and data (MCNP5) compared to previously published results, including an uncertainty analysis. Its secondary aim was to compare the results obtained using the MCNP5, MCNP4c2, and PTRAN codes for simulation of this low-energy photon-emitting source. The EchoSeed geometry and chemical compositions together with a published {sup 125}I spectrum were used to perform dosimetric characterization of this source as per the updated AAPM TG-43 protocol. These simulations were performed in liquid water in order to obtain the clinically applicable dosimetric parameters for this source model. Dose rate constants in liquid water, derived from MCNP4c2 and MCNP5 simulations, were found to be 0.993 cGyh{sup -1} U{sup -1} ({+-}1.73%) and 0.965 cGyh{sup -1} U{sup -1} ({+-}1.68%), respectively. Overall, the MCNP5-derived radial dose and 2D anisotropy function results were generally closer to the measured data (within {+-}4%) than those of MCNP4c2 and the published data for the PTRAN code (Version 7.43), while the opposite was seen for the dose rate constant. The generally improved MCNP5 Monte Carlo simulation may be attributed to a more recent and accurate cross-section library. However, some of the data points in the results obtained from the above-mentioned Monte Carlo codes showed no statistically significant differences. Derived dosimetric characteristics in liquid water are provided for clinical applications of this source model.
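
    For orientation, the sketch below shows how TG-43 parameters of the kind derived above combine into a dose rate, using the point-source (1D) approximation of the AAPM TG-43 formalism, Ddot(r) = S_K * Lambda * (r0/r)^2 * g(r) * phi_an(r). The dose rate constant is the MCNP5 value quoted in the abstract; the tabulated radial dose function and anisotropy factor values are illustrative placeholders, not the EchoSeed data.

```python
# Point-source TG-43 dose rate from tabulated parameters (illustrative table).
import numpy as np

S_K    = 1.0     # air-kerma strength [U]
Lambda = 0.965   # dose rate constant [cGy/h/U] (MCNP5 value quoted above)
r0     = 1.0     # reference distance [cm]

r_tab   = np.array([0.5, 1.0, 2.0, 3.0, 5.0])       # radii [cm]
g_tab   = np.array([1.05, 1.00, 0.85, 0.70, 0.45])  # radial dose function (made up)
phi_tab = np.array([0.97, 0.96, 0.95, 0.94, 0.93])  # anisotropy factor (made up)

def dose_rate(r):
    g  = np.interp(r, r_tab, g_tab)
    ph = np.interp(r, r_tab, phi_tab)
    return S_K * Lambda * (r0 / r) ** 2 * g * ph     # [cGy/h]

for r in (0.5, 1.0, 2.0, 5.0):
    print(f"r = {r:3.1f} cm : {dose_rate(r):6.3f} cGy/h")
```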

  20. Design and implementation of low-Q diffractometers at spallation sources

    International Nuclear Information System (INIS)

    Seeger, P.A.; Hjelm, R.P.

    1993-01-01

    Low-Q diffractometers at spallation sources that use time-of-flight methods have been successfully implemented at several facilities, including the Los Alamos Neutron Scattering Center. Proposals to build new, more powerful spallation sources using advanced moderator concepts will provide luminosity greater than 20 times that of the brightest spallation source available today. These developments provide the opportunity and the challenge to expand the capabilities of present instruments with new designs. The authors review the use of time of flight for low-Q measurements and introduce new designs to extend the capabilities of present-day instruments. They introduce Monte Carlo methods to optimize the design and simulate the performance of these instruments. The expected performance of the new instruments is compared to present-day pulsed-source- and reactor-based small-angle neutron scattering instruments. They review some of the new developments that will be needed to use the power of brighter sources effectively.
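
    The time-of-flight relations underlying such an instrument are compact: a neutron's wavelength follows from its flight time over a known path, lambda = h t / (m_n L), and the momentum transfer from wavelength and scattering angle, Q = 4 pi sin(theta) / lambda. The instrument dimensions in the sketch below are illustrative.

```python
# Time-of-flight to wavelength and momentum transfer for a low-Q instrument.
import math

h   = 6.62607015e-34      # Planck constant [J s]
m_n = 1.67492749804e-27   # neutron mass [kg]

def wavelength(t, L):
    """Neutron wavelength [Angstrom] from flight time t [s] over path L [m]."""
    return h * t / (m_n * L) * 1e10

def momentum_transfer(lam, two_theta):
    """Q [1/Angstrom] for wavelength lam [Angstrom] and scattering angle 2theta [rad]."""
    return 4.0 * math.pi * math.sin(two_theta / 2.0) / lam

L = 10.0                        # moderator-to-detector path [m] (assumed)
for t_ms in (5.0, 10.0, 20.0):  # flight times [ms]
    lam = wavelength(t_ms * 1e-3, L)
    q = momentum_transfer(lam, math.radians(1.0))
    print(f"t = {t_ms:4.1f} ms -> lambda = {lam:5.2f} A, Q(2theta = 1 deg) = {q:.4f} 1/A")
```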

  1. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in performance gain if the memory usage is optimized, memory locality improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation for getting a simple parallel design, with almost no interference between threads. This reduces the need of rewriting the develop...

  2. Methodology and Toolset for Model Verification, Hardware/Software co-simulation, Performance Optimisation and Customisable Source-code generation

    DEFF Research Database (Denmark)

    Berger, Michael Stübert; Soler, José; Yu, Hao

    2013-01-01

    The MODUS project aims to provide a pragmatic and viable solution that will allow SMEs to substantially improve their positioning in the embedded-systems development market. The MODUS tool will provide a model verification and Hardware/Software co-simulation tool (TRIAL) and a performance optimisation and customisable source-code generation tool (TUNE). The concept is depicted in automated modelling and optimisation of embedded-systems development. The tool will enable model verification by guiding the selection of existing open-source model verification engines, based on the automated analysis...

  3. OFF, Open source Finite volume Fluid dynamics code: A free, high-order solver based on parallel, modular, object-oriented Fortran API

    Science.gov (United States)

    Zaghi, S.

    2014-07-01

    OFF, an open source (free software) code for performing fluid dynamics simulations, is presented. The aim of OFF is to solve, numerically, the unsteady (and steady) compressible Navier-Stokes equations of fluid dynamics by means of finite volume techniques: the research background is mainly focused on high-order (WENO) schemes for multi-fluid, multi-phase flows over complex geometries. To this purpose a highly modular, object-oriented application program interface (API) has been developed. In particular, the concepts of data encapsulation and inheritance available within the Fortran language (from standard 2003) have been stressed in order to represent each fluid dynamics "entity" (e.g. the conservative variables of a finite volume, its geometry, etc.) by a single object, so that a large variety of computational libraries can be easily (and efficiently) developed upon these objects. The main features of OFF can be summarized as follows. Programming language: OFF is written in standard (compliant) Fortran 2003; its design is highly modular in order to enhance simplicity of use and maintenance without compromising efficiency. Parallel frameworks supported: the development of OFF has also been targeted at maximizing computational efficiency; the code is designed to run on shared-memory multi-core workstations and distributed-memory clusters of shared-memory nodes (supercomputers); the code's parallelization is based on the Open Multiprocessing (OpenMP) and Message Passing Interface (MPI) paradigms. Usability, maintenance and enhancement: in order to improve the usability, maintenance and enhancement of the code, the documentation has also been carefully taken into account; the documentation is built upon comprehensive comments placed directly into the source files (no external documentation files needed); these comments are parsed by means of the doxygen free software, producing high-quality html and latex documentation pages. The distributed versioning system referred to as git...

  4. Study of the source term of radiation of the CDTN GE-PET trace 8 cyclotron with the MCNPX code

    Energy Technology Data Exchange (ETDEWEB)

    Benavente C, J. A.; Lacerda, M. A. S.; Fonseca, T. C. F.; Da Silva, T. A. [Centro de Desenvolvimento da Tecnologia Nuclear / CNEN, Av. Pte. Antonio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R., E-mail: jhonnybenavente@gmail.com [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico)

    2015-10-15

    Full text: The knowledge of the neutron spectra in a PET cyclotron is important for the optimization of radiation protection of the workers and individuals of the public. The main objective of this work is to study the source term of radiation of the GE PETtrace 8 cyclotron of the Development Center of Nuclear Technology (CDTN/CNEN) using computer simulation by the Monte Carlo method. The MCNPX version 2.7 code was used to calculate the flux of neutrons produced from the interaction of the primary proton beam with the target body and other cyclotron components during {sup 18}F production. The estimate of the source term and the corresponding radiation field was performed from the bombardment of a H{sub 2}{sup 18}O target with protons of 75 μA current and 16.5 MeV energy. The values of the simulated fluxes were compared with those reported by the accelerator manufacturer (GE Healthcare). Results showed that the fluxes estimated with the MCNPX code were about 70% lower than those reported by the manufacturer. The mean energies of the neutrons were also different from those reported by GE Healthcare. It is recommended to investigate other cross-section data and the use of the physical models of the code itself for a complete characterization of the source term of radiation. (Author)

  5. Supporting the Cybercrime Investigation Process: Effective Discrimination of Source Code Authors Based on Byte-Level Information

    Science.gov (United States)

    Frantzeskou, Georgia; Stamatatos, Efstathios; Gritzalis, Stefanos

    Source code authorship analysis is the particular field that attempts to identify the author of a computer program by treating each program as a linguistically analyzable entity. This is usually based on other undisputed program samples from the same author. There are several cases where the application of such a method could be of major benefit, such as tracing the source of code left in the system after a cyber attack, authorship disputes, proof of authorship in court, etc. In this paper, we present our approach, which is based on byte-level n-gram profiles and is an extension of a method that has been successfully applied to natural language text authorship attribution. We propose a simplified profile and a new similarity measure which is less complicated than the algorithm followed in text authorship attribution and seems more suitable for source code identification, since it is better able to deal with very small training sets. Experiments were performed on two different data sets, one with programs written in C++ and the second with programs written in Java. Unlike the traditional language-dependent metrics used by previous studies, our approach can be applied to any programming language with no additional cost. The presented accuracy rates are much better than the best reported results for the same data sets.
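
    The general idea is easy to sketch: build each author's profile from their most frequent byte n-grams and score a disputed sample by its overlap with each profile. The parameters (n, profile length) and the set-based similarity below follow the general approach described above, not the paper's exact tuning.

```python
# Byte-level n-gram authorship profiling with a simplified profile and a
# set-overlap similarity measure. Toy training data for illustration.
from collections import Counter

def ngram_profile(data: bytes, n: int = 3, L: int = 500) -> set:
    """The L most frequent byte n-grams of a code sample."""
    grams = Counter(data[i:i + n] for i in range(len(data) - n + 1))
    return {g for g, _ in grams.most_common(L)}

def similarity(profile_a: set, profile_b: set) -> float:
    """Normalized size of the profile intersection."""
    return len(profile_a & profile_b) / max(len(profile_a | profile_b), 1)

train = {
    "alice": b"for(int i=0;i<n;i++){sum+=a[i];}",
    "bob":   b"while (k < n) { total = total + values[k]; k += 1; }",
}
profiles = {author: ngram_profile(code) for author, code in train.items()}

disputed = b"for(int j=0;j<m;j++){acc+=b[j];}"
scores = {a: similarity(ngram_profile(disputed), p) for a, p in profiles.items()}
print(max(scores, key=scores.get), scores)
```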

  6. Performance Analysis for Bit Error Rate of DS- CDMA Sensor Network Systems with Source Coding

    Directory of Open Access Journals (Sweden)

    Haider M. AlSabbagh

    2012-03-01

    Full Text Available The minimum energy (ME) coding scheme combined with a DS-CDMA wireless sensor network is analyzed in order to reduce the energy consumed and the multiple access interference (MAI) in relation to the number of users (receivers). Minimum energy coding exploits redundant bits to save power, utilizing the RF link with On-Off Keying modulation. The relations are presented and discussed, for several levels of expected channel error, in terms of the bit error rate and the SNR for different numbers of users (receivers).
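
    The principle behind ME coding with On-Off Keying is that energy is spent only on '1' bits, so the most probable source symbols should get the redundant codewords with the fewest ones. The sketch below builds such a codebook; the symbol probabilities are made up for illustration.

```python
# Minimum-energy (ME) codebook for an OOK radio: assign low-Hamming-weight
# codewords to the most probable symbols. Probabilities are illustrative.
from itertools import product

def me_codebook(n_symbols, codeword_len):
    """Codewords of the given length sorted by Hamming weight (fewest 1s first)."""
    words = sorted(product([0, 1], repeat=codeword_len), key=lambda w: sum(w))
    return words[:n_symbols]

symbols = {"s0": 0.5, "s1": 0.25, "s2": 0.15, "s3": 0.10}   # assumed probabilities
ordered = sorted(symbols, key=symbols.get, reverse=True)
code = dict(zip(ordered, me_codebook(len(symbols), codeword_len=4)))

avg_ones_me = sum(symbols[s] * sum(code[s]) for s in symbols)
avg_ones_plain = 0.5 * 2   # 2-bit fixed-length code with ~equiprobable bits
print(code)
print(f"average transmitted 1s: ME = {avg_ones_me:.2f}, plain binary = {avg_ones_plain:.2f}")
```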

  7. Numerical modeling of the Linac4 negative ion source extraction region by 3D PIC-MCC code ONIX

    CERN Document Server

    Mochalskyy, S; Minea, T; Lifschitz, AF; Schmitzer, C; Midttun, O; Steyaert, D

    2013-01-01

    At CERN, a high performance negative ion (NI) source is required for the 160 MeV H- linear accelerator Linac4. The source is planned to produce 80 mA of H- with an emittance of 0.25 mm mrad N-RMS, which is technically and scientifically very challenging. The optimization of the NI source requires a deep understanding of the underlying physics concerning the production and extraction of the negative ions. The extraction mechanism from the negative ion source is complex, involving a magnetic filter in order to cool down the electron temperature. The ONIX (Orsay Negative Ion eXtraction) code is used to address this problem. ONIX is a self-consistent 3D electrostatic code using the Particle-in-Cell Monte Carlo Collisions (PIC-MCC) approach. It was written to handle the complex boundary conditions between the plasma, the source walls, and the beam formation at the extraction hole. Both the positive extraction potential (25 kV) and the magnetic field map are taken from the experimental set-up, in construction at CERN. This contrib...

  8. First massively parallel algorithm to be implemented in Apollo-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability (CP) method in neutron transport, as applied to arbitrary 2D XY geometries, like the TDT module in APOLLO-II, is very time consuming. Consequently, RZ or 3D extensions became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, give new life to this method. In this paper we present a CM5 implementation of the CP method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we use 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future fine multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (of the order of hundreds of processors). (author). 3 tabs., 4 figs., 4 refs

  9. First massively parallel algorithm to be implemented in APOLLO-II code

    International Nuclear Information System (INIS)

    Stankovski, Z.

    1994-01-01

    The collision probability method in neutron transport, as applied to arbitrary 2-dimensional geometries, like the two-dimensional transport module in APOLLO-II, is very time consuming. Consequently, 3-dimensional extension became prohibitive. Fortunately, this method is very suitable for parallelization. Massively parallel computer architectures, especially MIMD machines, give new life to this method. In this paper we present a CM5 implementation of the collision probability method. Parallelization is applied to the energy groups, using the CMMD message passing library. In our case we used 32 processors for the standard 99-group APOLLIB-II library. The real advantage of this algorithm will appear in the calculation of the future multigroup library (about 8000 groups) of the SAPHYR project with a massively parallel computer (of the order of hundreds of processors). (author). 4 refs., 4 figs., 3 tabs

  10. Implementation of advanced finite element technology in structural analysis computer codes

    International Nuclear Information System (INIS)

    Kohli, T.D.; Wiley, J.W.; Koss, P.W.

    1975-01-01

    Advances in finite element technology over the last several years have been rapid and have largely outstripped the ability of general purpose programs in the public domain to assimilate them. As a result, it has become the burden of the structural analyst to incorporate these advances himself. This paper discusses the implementation and extension of specific technological advances in Bechtel structural analysis programs. In general these advances belong in two categories: (1) the finite elements themselves and (2) equation solution algorithms. Improvements in the finite elements involve increased accuracy of the elements and extension of their applicability to various specialized modelling situations. Improvements in solution algorithms have been almost exclusively aimed at expanding problem solving capacity. (Auth.)

  11. Active Fault Near-Source Zones Within and Bordering the State of California for the 1997 Uniform Building Code

    Science.gov (United States)

    Petersen, M.D.; Toppozada, Tousson R.; Cao, T.; Cramer, C.H.; Reichle, M.S.; Bryant, W.A.

    2000-01-01

    The fault sources in the Project 97 probabilistic seismic hazard maps for the state of California were used to construct maps for defining near-source seismic coefficients, Na and Nv, incorporated in the 1997 Uniform Building Code (ICBO 1997). The near-source factors are based on the distance from a known active fault that is classified as either Type A or Type B. To determine the near-source factor, four pieces of geologic information are required: (1) recognizing a fault and determining whether or not the fault has been active during the Holocene, (2) identifying the location of the fault at or beneath the ground surface, (3) estimating the slip rate of the fault, and (4) estimating the maximum earthquake magnitude for each fault segment. This paper describes the information used to produce the fault classifications and distances.

  12. U3Si2 Fabrication and Testing for Implementation into the BISON Fuel Performance Code

    Energy Technology Data Exchange (ETDEWEB)

    Knight, Travis W.

    2018-04-23

    • A creep test stand was designed and constructed for compressive creep testing of U3Si2 pellets. This is described in Chapter 3.
    • Creep testing of U3Si2 pellets was completed. In total, 13 compressive creep tests of U3Si2 pellets were successfully completed. This is reported in Chapter 3.
    • A secondary creep model of U3Si2 was developed and implemented in BISON (a sketch of a correlation of this form follows this list). This is described in Chapter 4.
    • Properties of U3Si2 were implemented in BISON. This is described in Chapter 4.
    • A resonant frequency and damping analyzer (RFDA) using the impulse excitation technique (IET) was set up, tested, and used to analyze U3Si2 samples to measure the Young's and shear moduli, which were then used to calculate the Poisson ratio for U3Si2. This is described in Chapter 5.
    • Characterization of U3Si2 samples was completed. Samples were prepared and analyzed by XRD, SEM, and optical microscopy. Grain size analysis was conducted on images. SEM with EDS was used to analyze second-phase precipitates. The impulse excitation technique was used to determine the Young's and shear moduli of a tile specimen, which allowed for the determination of the Poisson ratio. Helium pycnometry and mercury intrusion porosimetry were performed and used with image analysis to determine the porosity size distribution. The Vickers microindentation method was used to evaluate the mechanical properties of U3Si2, including toughness, hardness, and Vickers hardness. Electrical resistivity measurement was done using the four-point probe method. This is reported in Chapter 5.
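
    Secondary creep correlations of the kind a fuel performance code evaluates at each material point usually take a power-law-with-Arrhenius form, creep_rate = A * sigma^n * exp(-Q/(R*T)). The sketch below shows that form; the constants A, n and Q are placeholders, not the fitted U3Si2 values from this work.

```python
# Generic secondary (steady-state) creep correlation; constants are assumed.
import math

R_GAS = 8.314   # gas constant [J/mol/K]

def secondary_creep_rate(stress_mpa, temp_k, A=5.0e-4, n=2.0, Q=2.8e5):
    """Steady-state creep strain rate [1/s] (illustrative constants)."""
    return A * stress_mpa ** n * math.exp(-Q / (R_GAS * temp_k))

for T in (900.0, 1000.0, 1100.0):
    print(f"T = {T:.0f} K, 60 MPa -> {secondary_creep_rate(60.0, T):.3e} 1/s")
```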

  13. Implementation of the International Code of Practice on Dosimetry in Diagnostic Radiology (TRS 457): Review of Test Results

    International Nuclear Information System (INIS)

    2011-01-01

    In 2007, the IAEA published Dosimetry in Diagnostic Radiology: An International Code of Practice (IAEA Technical Reports Series No. 457). This publication recommends procedures for calibration and dosimetric measurement for the attainment of standardized dosimetry. It also addresses requirements both in standards dosimetry laboratories, especially Secondary Standards Dosimetry Laboratories (SSDLs), and in clinical centres for radiology, as found in most hospitals. The implementation of TRS No. 457 decreases the uncertainty in the dosimetry of diagnostic radiology beams and provides Member States with a unified and consistent framework for dosimetry in diagnostic radiology, which previously did not exist. A coordinated research project (CRP E2.10.06) was established in order to provide practical guidance to professionals at SSDLs and to clinical medical physicists on the implementation of TRS No. 457. This includes the calibration of radiological dosimetry instrumentation, the dissemination of calibration coefficients to clinical centres and the establishment of dosimetric measurement processes in clinical settings. The main goals of the CRP were to: Test the procedures recommended in TRS No. 457 for calibration of radiation detectors in different types of diagnostic beams and measuring instruments for varying diagnostic X ray modalities; Test the clinical dosimetry procedures, including the use of phantoms and patient dose surveys; Report on the practical implementation of TRS No. 457 at both SSDLs and hospital sites. Testing of TRS No. 457 was performed by a group of medical physicists from hospitals and SSDLs from various institutions worldwide

  14. Transmission from theory to practice: Experiences using open-source code development and a virtual short course to increase the adoption of new theoretical approaches

    Science.gov (United States)

    Harman, C. J.

    2015-12-01

    Even amongst the academic community, new theoretical tools can remain underutilized due to the investment of time and resources required to understand and implement them. This surely limits the frequency with which new theory is rigorously tested against data by scientists outside the group that developed it, and limits the impact that new tools could have on the advancement of science. Reducing the barriers to adoption through online education and open-source code can bridge the gap between theory and data, forging new collaborations and advancing science. A pilot venture aimed at increasing the adoption of a new theory of time-variable transit time distributions was begun in July 2015 as a collaboration between Johns Hopkins University and the Consortium of Universities for the Advancement of Hydrologic Science (CUAHSI). There were four main components to the venture: a public online seminar covering the theory, an open source code repository, a virtual short course designed to help participants apply the theory to their data, and an online forum to maintain discussion and build a community of users. 18 participants were selected for the non-public components based on their responses in an application, and were asked to fill out a course evaluation at the end of the short course, and again several months later. These evaluations, along with participation in the forum and on-going contact with the organizer, suggest strengths and weaknesses in this combination of components to assist participants in adopting new tools.

  15. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever-evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single-source inversion followed by a double point source inversion, with centroid locations fixed at the single-source solution location, can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed, with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model, and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
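
    The model-selection step can be illustrated with the standard least-squares form of the AIC, AIC = N ln(RSS/N) + 2k, where N is the number of data, RSS the residual sum of squares and k the number of free parameters; the lower AIC wins. The misfits and parameter counts below are made-up numbers for illustration, not the paper's values.

```python
# AIC comparison between single and double point-source fits (toy numbers).
import math

def aic(rss, n_data, k_params):
    """Akaike information criterion for a Gaussian least-squares fit."""
    return n_data * math.log(rss / n_data) + 2 * k_params

n_data = 5000
rss_single, k_single = 12.0, 6    # one moment tensor (assumed count)
rss_double, k_double = 10.5, 12   # two sources, centroids fixed (assumed count)

a1 = aic(rss_single, n_data, k_single)
a2 = aic(rss_double, n_data, k_double)
print("single:", round(a1, 1), " double:", round(a2, 1),
      "-> prefer", "double" if a2 < a1 else "single")
```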

  16. Large-eddy simulation of convective boundary layer generated by highly heated source with open source code, OpenFOAM

    International Nuclear Information System (INIS)

    Hattori, Yasuo; Suto, Hitoshi; Eguchi, Yuzuru; Sano, Tadashi; Shirai, Koji; Ishihara, Shuji

    2011-01-01

    Spatial and temporal characteristics of turbulence structures in the close vicinity of a heat source, which is a horizontal upward-facing round plate heated at high temperature, are examined by using well-resolved large-eddy simulations. The verification is carried out through comparison with experiments: the predicted statistics, including the PDF distribution of temperature fluctuations, agree well with measurements, indicating that the present simulations are capable of appropriately reproducing the turbulence structures near the heat source. The reproduced three-dimensional thermal and fluid fields in the close vicinity of the heat source reveal the development of coherent structures along the surface: stationary and streaky flow patterns appear near the edge, and such patterns randomly shift to cell-like patterns with incursion into the center region, resulting in thermal-plume meandering. Both patterns have very thin structures, but the depth of the streaky structure is considerably smaller than that of the cell-like patterns; this discrepancy causes the layered structures. The structure is the source of peculiar turbulence characteristics, the prediction of which is quite difficult with RANS-type turbulence models. The understanding of such structures obtained in the present study should be helpful in improving the turbulence models used in nuclear engineering. (author)

  17. Limiting precision in differential equation solvers. II Sources of trouble and starting a code

    International Nuclear Information System (INIS)

    Shampine, L.F.

    1978-01-01

    The reasons a class of codes for solving ordinary differential equations might want to use an extremely small step size are investigated. For this class, the likelihood of precision difficulties is evaluated and remedies are examined. The investigation suggests a way of automatically selecting an initial step size which should be reliably on scale.
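
    A common way to pick an "on scale" starting step is to compare the size of the solution with the size of its derivative and then shrink the result toward the method's accuracy. The sketch below is a generic textbook-style heuristic of that kind, not Shampine's exact rule.

```python
# Heuristic initial step size for y' = f(t, y) from the local solution scale.
import numpy as np

def initial_step(f, t0, y0, tol=1e-6, order=4):
    """Starting step: ~1% relative change, shrunk by tol^(1/(order+1))."""
    f0 = f(t0, y0)
    scale_y = np.linalg.norm(y0) + tol          # guard against y0 = 0
    scale_f = np.linalg.norm(f0) + tol          # guard against f0 = 0
    h = 0.01 * scale_y / scale_f                # step giving ~1% relative change
    return h * tol ** (1.0 / (order + 1))       # shrink toward method accuracy

f = lambda t, y: -50.0 * y                      # a fast decay problem
print("h0 =", initial_step(f, 0.0, np.array([1.0])))
```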

  18. Beacon- and Schema-Based Method for Recognizing Algorithms from Students' Source Code

    Science.gov (United States)

    Taherkhani, Ahmad; Malmi, Lauri

    2013-01-01

    In this paper, we present a method for recognizing algorithms from students' programming submissions coded in Java. The method is based on the concept of "programming schemas" and "beacons". Schemas are high-level programming knowledge with detailed knowledge abstracted out, and beacons are statements that imply specific…

  19. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    Full Text Available This project is intended to analyze, design and implement a real-time sound source localization system using a mobile robot as the medium. The implemented system uses 2 microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent-magnet DC motors as the actuators for the mobile robot and a servo motor as the actuator to rotate the webcam towards the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show the system could localize, in real time, a sound source placed randomly on a half-circle area (0° - 180°) with a radius of 0.3 m - 3 m, assuming the system is at the center point of the circle. Due to the low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
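
    With only two microphones, beamforming reduces to estimating the time difference of arrival (TDOA) and converting it to a bearing via the far-field relation tau = d cos(theta)/c. The sketch below does this with a simple cross-correlation on synthetic signals; the sampling rate and microphone spacing are illustrative values, not the project's hardware parameters.

```python
# Two-microphone bearing estimate: TDOA by cross-correlation, then
# theta = arccos(tau * c / d). Synthetic data for illustration.
import numpy as np

fs, d, c = 16000, 0.20, 343.0        # sample rate [Hz], mic spacing [m], c [m/s]
rng = np.random.default_rng(0)

# Synthesize a source at 60 degrees: mic2 hears mic1's signal delayed by tau.
theta_true = np.deg2rad(60.0)
lag_true = int(round(d * np.cos(theta_true) / c * fs))
sig = rng.standard_normal(4096)
mic1, mic2 = sig, np.roll(sig, lag_true)

# Cross-correlate over physically possible lags and take the maximum.
max_lag = int(np.ceil(d / c * fs))
lags = np.arange(-max_lag, max_lag + 1)
corr = [np.dot(mic1, np.roll(mic2, -int(l))) for l in lags]
lag_est = lags[int(np.argmax(corr))]

theta_est = np.arccos(np.clip(lag_est / fs * c / d, -1.0, 1.0))
print(f"estimated bearing: {np.degrees(theta_est):.1f} deg (true 60.0 deg)")
```

    The coarse angular resolution reported above is consistent with this picture: at a modest sampling rate, one sample of delay corresponds to several degrees of bearing.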

  20. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-03-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimised to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the dataset. As a test case we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model we find that the best fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  1. SPIDERMAN: an open-source code to model phase curves and secondary eclipses

    Science.gov (United States)

    Louden, Tom; Kreidberg, Laura

    2018-06-01

    We present SPIDERMAN (Secondary eclipse and Phase curve Integrator for 2D tempERature MAppiNg), a fast code for calculating exoplanet phase curves and secondary eclipses with arbitrary surface brightness distributions in two dimensions. Using a geometrical algorithm, the code solves exactly the area of sections of the disc of the planet that are occulted by the star. The code is written in C with a user-friendly Python interface, and is optimized to run quickly, with no loss in numerical precision. Approximately 1000 models can be generated per second in typical use, making Markov Chain Monte Carlo analyses practicable. The modular nature of the code allows easy comparison of the effect of multiple different brightness distributions for the data set. As a test case, we apply the code to archival data on the phase curve of WASP-43b using a physically motivated analytical model for the two-dimensional brightness map. The model provides a good fit to the data; however, it overpredicts the temperature of the nightside. We speculate that this could be due to the presence of clouds on the nightside of the planet, or additional reflected light from the dayside. When testing a simple cloud model, we find that the best-fitting model has a geometric albedo of 0.32 ± 0.02 and does not require a hot nightside. We also test for variation of the map parameters as a function of wavelength and find no statistically significant correlations. SPIDERMAN is available for download at https://github.com/tomlouden/spiderman.

  2. Pre-coding method and apparatus for multiple source or time-shifted single source data and corresponding inverse post-decoding method and apparatus

    Science.gov (United States)

    Yeh, Pen-Shu (Inventor)

    1998-01-01

    A pre-coding method and device for improving data compression performance by removing correlation between a first original data set and a second original data set, each having M members, respectively. The pre-coding method produces a compression-efficiency-enhancing double-difference data set. The method and device produce a double-difference data set, i.e., an adjacent-delta calculation performed on a cross-delta data set or a cross-delta calculation performed on two adjacent-delta data sets, from either one of (1) two adjacent spectral bands coming from two discrete sources, respectively, or (2) two time-shifted data sets coming from a single source. The resulting double-difference data set is then coded using either a distortionless data encoding scheme (entropy encoding) or a lossy data compression scheme. Also, a post-decoding method and device for recovering a second original data set having been represented by such a double-difference data set.
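
    The double-difference operation itself is tiny. The sketch below applies a cross-delta between two made-up correlated bands followed by an adjacent-delta, then inverts the result, mirroring the post-decoding step; the sample data are illustrative.

```python
# Double-difference pre-coding: cross-delta between two correlated data sets,
# then an adjacent-delta on the residual; plus the inverse reconstruction.
import numpy as np

band_a = np.array([100, 104, 109, 115, 122, 130], dtype=np.int64)
band_b = np.array([102, 107, 111, 118, 124, 133], dtype=np.int64)  # correlated

cross_delta = band_b - band_a          # removes inter-band correlation
double_diff = np.diff(cross_delta)     # adjacent-delta removes the remainder

print("cross-delta :", cross_delta)
print("double-diff :", double_diff)

# Inverse post-decoding: rebuild band_b from band_a, the first cross-delta
# value and the double-difference residuals.
rebuilt = band_a + np.concatenate(([cross_delta[0]],
                                   cross_delta[0] + np.cumsum(double_diff)))
assert np.array_equal(rebuilt, band_b)
```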

  3. Pre-Test Analysis of the MEGAPIE Spallation Source Target Cooling Loop Using the TRAC/AAA Code

    International Nuclear Information System (INIS)

    Bubelis, Evaldas; Coddington, Paul; Leung, Waihung

    2006-01-01

    A pilot project is being undertaken at the Paul Scherrer Institute in Switzerland to test the feasibility of installing a Lead-Bismuth Eutectic (LBE) spallation target in the SINQ facility. Efforts are coordinated under the MEGAPIE project, the main objectives of which are to design, build, operate and decommission a 1 MW spallation neutron source. The technology and experience of building and operating a high power spallation target are of general interest in the design of an Accelerator Driven System (ADS) and in this context MEGAPIE is one of the key experiments. The target cooling is one of the important aspects of the target system design that needs to be studied in detail. Calculations were performed previously using the RELAP5/Mod 3.2.2 and ATHLET codes, but in order to verify the previous code results and to provide another capability to model LBE systems, a similar study of the MEGAPIE target cooling system has been conducted with the TRAC/AAA code. In this paper a comparison is presented for the steady-state results obtained using the above codes. Analysis of transients, such as unregulated cooling of the target, loss of heat sink, the main electro-magnetic pump trip of the LBE loop and unprotected proton beam trip, were studied with TRAC/AAA and compared to those obtained earlier using RELAP5/Mod 3.2.2. This work extends the existing validation data-base of TRAC/AAA to heavy liquid metal systems and comprises the first part of the TRAC/AAA code validation study for LBE systems based on data from the MEGAPIE test facility and corresponding inter-code comparisons. (authors)

  4. Multiple implementation of a reactor protection code in PHI2, PASCAL, and IFTRAN on the SIEMENS-330 computer

    International Nuclear Information System (INIS)

    Gmeiner, L.; Lemperle, W.; Voges, U.

    1978-01-01

    In safety related computer applications, as in the case of the reactor protection system considered here, multi-computer systems are usually necessary for reasons of reliability and availability. The hardware structure of the protection system and the software requirements derived from it are explained. In order to study the effects of diversified programming of the three computers, the protection codes were implemented in the languages IFTRAN, PASCAL, and PHI2. According to the experience gained, diversified programming seems to be a proper means to prevent identical programming errors in all three computers on the one hand and to detect ambiguities in the specification on the other. Throughout the experiment the errors that occurred were recorded in detail and are currently being evaluated. (orig./WB) [de
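
    The record does not spell out how the three diversely programmed channels are combined, but in protection systems this is typically a two-out-of-three majority vote; a generic, purely illustrative sketch of that logic:

    def vote_2oo3(trip_a: bool, trip_b: bool, trip_c: bool) -> bool:
        """Two-out-of-three majority vote: actuate the trip if at least two of
        the three diversely programmed channels demand it, so that a single
        erroneous channel can neither cause nor block an actuation."""
        return (trip_a and trip_b) or (trip_a and trip_c) or (trip_b and trip_c)

    assert vote_2oo3(True, True, False)        # one deviating channel is outvoted
    assert not vote_2oo3(True, False, False)   # a single spurious trip is ignored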

  5. Second and third stages of project for the implementation of two asymmetric cooling loops modeled by the ALMOD3 code

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1985-04-01

    The second and third steps of the project for the implementation of two non-symmetric cooling loops modeled by the ALMOD3 computer code are presented. These steps consist of activating the two-loop option already present in the original ALMOD3 version and introducing the GEVAP model for one of the two steam generators. In the original ALMOD3 version, the simulation of two non-symmetric loops was only possible using external functions, which provided the heat removed at each time step for one of the steam generators. With the introduction of the GEVAP model, more accurate results can be obtained. Due to its simplicity, the computer time required for execution is short. The results obtained in Angra 1 simulations are presented, analysed and compared with results obtained using one loop to simulate symmetric transients. (Author) [pt

  6. Field programmable gate array (FPGA) implementation of a novel complex PN-code-generator-based data scrambler and descrambler

    Directory of Open Access Journals (Sweden)

    Shabir A. Parah

    2010-04-01

    A novel technique for the generation of complex and lengthy code sequences using low-length linear feedback shift registers (LFSRs) for data scrambling and descrambling is proposed. The scheme has been implemented using the VHSIC hardware description language (VHDL) approach, which allows the reconfigurability of the proposed system such that the length of the generated sequences can be changed as per the security requirements. In the present design the power consumption and chip area requirements are small and the operating speed is high compared to a conventional discrete-IC design, which is a prerequisite for any system designer. The design has been synthesised on device EP2S15F484C3 of the Stratix II FPGA family, using Altera Quartus version 8.1. The simulation results have been found satisfactory and are in conformity with the theoretical observations.
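
    An additive scrambler of the kind described is easy to express in software: the same LFSR keystream is XORed with the data at both ends, so descrambling is simply rescrambling with an identical seed and taps. A minimal Python sketch with an illustrative 8-bit register; the actual polynomial and register lengths are configurable in the VHDL design and are not taken from the paper.

    def lfsr_stream(seed: int, taps: tuple, nbits: int, length: int):
        """Generate a pseudo-noise bit stream from a Fibonacci LFSR."""
        state = seed
        for _ in range(length):
            yield state & 1
            feedback = 0
            for t in taps:                      # XOR of the tapped stages
                feedback ^= (state >> t) & 1
            state = (state >> 1) | (feedback << (nbits - 1))

    def scramble(bits, seed=0xA5, taps=(0, 2, 3, 4), nbits=8):
        """Additive scrambler; applying it twice restores the original bits."""
        return [b ^ k for b, k in zip(bits, lfsr_stream(seed, taps, nbits, len(bits)))]

    data = [1, 0, 1, 1, 0, 0, 1, 0]
    assert scramble(scramble(data)) == data     # descrambler = scrambler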

  7. Physical analysis and modelling of aerosol transport. Implementation in a finite element code. Experimental validation in laminar and turbulent flows

    International Nuclear Information System (INIS)

    Armand, Patrick

    1995-01-01

    The aim of this work is the coupling of fluid mechanics with aerosol physics. In the first part, an order-of-magnitude analysis of the dynamics of a particle embedded in a non-uniform unsteady flow is carried out. Flow approximations around the inclusion are described and the corresponding aerodynamic drag formulae are given. The possible situations arising from the problem data are extensively listed. In the second part, turbulent particle transport is studied. The Eulerian approach, which is particularly well suited to industrial codes, is preferred over Lagrangian methods. We choose the two-fluid formalism, in which the slip between the carrier gas and the particles is taken into account. Turbulence is modelled with a k-epsilon model modulated by the action of the inclusions on the flow. The model is implemented in a finite element code. Finally, in the third part, the modelling is validated in laminar and turbulent cases. We compare simulations to various experiments (settling battery, inertial impaction in a bend, jets loaded with glass-bead particles) taken from the literature or performed in our own laboratory. The agreement is very close, which is encouraging for future use of the particle transport model and the associated software. (author) [fr

  8. A parallel Monte Carlo code for planar and SPECT imaging: implementation, verification and applications in (131)I SPECT.

    Science.gov (United States)

    Dewaraja, Yuni K; Ljungberg, Michael; Majumdar, Amitava; Bose, Abhijit; Koral, Kenneth F

    2002-02-01

    This paper reports the implementation of the SIMIND Monte Carlo code on an IBM SP2 distributed memory parallel computer. Basic aspects of running Monte Carlo particle transport calculations on parallel architectures are described. Our parallelization is based on equally partitioning photons among the processors and uses the Message Passing Interface (MPI) library for interprocessor communication and the Scalable Parallel Random Number Generator (SPRNG) to generate uncorrelated random number streams. These parallelization techniques are also applicable to other distributed memory architectures. A linear increase in computing speed with the number of processors is demonstrated for up to 32 processors. This speed-up is especially significant in Single Photon Emission Computed Tomography (SPECT) simulations involving higher energy photon emitters, where explicit modeling of the phantom and collimator is required. For (131)I, the accuracy of the parallel code is demonstrated by comparing simulated and experimental SPECT images from a heart/thorax phantom. Clinically realistic SPECT simulations using the voxel-man phantom are carried out to assess scatter and attenuation correction.
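
    The partitioning scheme described above, with equal photon counts per rank, independent random streams and a reduction of the tallies, can be sketched with mpi4py; numpy's SeedSequence spawning stands in here for SPRNG, and the one-line "history" is a placeholder for real photon transport.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n_total = 1_000_000                          # photon histories overall
    n_local = n_total // size + (1 if rank < n_total % size else 0)

    # One uncorrelated stream per rank (the role SPRNG plays in the paper).
    rng = np.random.default_rng(np.random.SeedSequence(12345).spawn(size)[rank])

    # Placeholder history: count photons surviving a simple attenuation test.
    mu_t, path = 0.15, 5.0                       # illustrative cm^-1 and cm
    survived = int(np.sum(rng.random(n_local) < np.exp(-mu_t * path)))

    total = comm.reduce(survived, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"transmitted fraction ~ {total / n_total:.4f}")

    Run with, e.g., mpiexec -n 8 python mc_photons.py; the linear speed-up reported in the paper corresponds to each rank transporting its share of histories independently between reductions.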

  9. Real-time implementation of logo detection on open source BeagleBoard

    Science.gov (United States)

    George, M.; Kehtarnavaz, N.; Estevez, L.

    2011-03-01

    This paper presents the real-time implementation of our previously developed logo detection and tracking algorithm on the open source BeagleBoard mobile platform. This platform has an OMAP processor that incorporates an ARM Cortex processor. The algorithm combines Scale Invariant Feature Transform (SIFT) with k-means clustering, online color calibration and moment invariants to robustly detect and track logos in video. Various optimization steps that are carried out to allow the real-time execution of the algorithm on BeagleBoard are discussed. The results obtained are compared to the PC real-time implementation results.

  10. Radiation Shielding Information Center: a source of computer codes and data for fusion neutronics studies

    International Nuclear Information System (INIS)

    McGill, B.L.; Roussin, R.W.; Trubey, D.K.; Maskewitz, B.F.

    1980-01-01

    The Radiation Shielding Information Center (RSIC), established in 1962 to collect, package, analyze, and disseminate information, computer codes, and data in the area of radiation transport related to fission, is now being utilized to support fusion neutronics technology. The major activities include: (1) answering technical inquiries on radiation transport problems, (2) collecting, packaging, testing, and disseminating computing technology and data libraries, and (3) reviewing literature and operating a computer-based information retrieval system containing material pertinent to radiation transport analysis. The computer codes emphasize methods for solving the Boltzmann equation such as the discrete ordinates and Monte Carlo techniques, both of which are widely used in fusion neutronics. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  11. kspectrum: an open-source code for high-resolution molecular absorption spectra production

    International Nuclear Information System (INIS)

    Eymet, V.; Coustet, C.; Piaud, B.

    2016-01-01

    We present kspectrum, a scientific code that produces high-resolution synthetic absorption spectra from public molecular transition parameter databases. This code was originally required by the atmospheric and astrophysics communities, and its evolution is now driven by new scientific projects among the user community. Since it was designed without any optimization specific to a particular application field, its use can also be extended to other domains. kspectrum produces spectral data that can subsequently be used either for high-resolution radiative transfer simulations, or for producing statistical spectral model parameters using additional tools. This is an open project that aims at providing an up-to-date tool that takes advantage of modern computational hardware and recent parallelization libraries. It is currently provided by Méso-Star (http://www.meso-star.com) under the CeCILL license, and benefits from regular updates and improvements. (paper)

  12. Implementation of Japanese male and female tomographic phantoms to multi-particle Monte Carlo code for ionizing radiation dosimetry

    International Nuclear Information System (INIS)

    Lee, Choonsik; Nagaoka, Tomoaki; Lee, Jai-Ki

    2006-01-01

    Japanese male and female tomographic phantoms, which were developed for radio-frequency electromagnetic-field dosimetry, were implemented into a multi-particle Monte Carlo transport code to evaluate realistic dose distributions in a human body exposed to a radiation field. The phantoms, developed from whole-body magnetic resonance images of average adult Japanese males and females, were processed as follows for implementation into the general-purpose multi-particle Monte Carlo code MCNPX2.5. The original array sizes of the male and female phantoms, 320 x 160 x 866 voxels and 320 x 160 x 804 voxels, respectively, were reduced to 320 x 160 x 433 voxels and 320 x 160 x 402 voxels owing to the memory limitations of MCNPX2.5. The 3D voxel arrays of the phantoms were processed using the built-in repeated-structure algorithm, in which the human anatomy is described by a repeated lattice of tiny cubes containing the material composition and organ index number. The original phantom data were converted into an ASCII file that can be ported directly into the lattice card of an MCNPX2.5 input deck using an in-house code. A total of 30 material compositions obtained from International Commission on Radiation Units and Measurements (ICRU) report 46 were assigned to the 54 and 55 organs and tissues in the male and female phantoms, respectively, and imported into the material card of MCNPX2.5 along with the corresponding cross-section data. Illustrative calculations of the absorbed doses for 26 internal organs and of the effective dose were performed for idealized broad parallel photon and neutron beams in anterior-posterior irradiation geometry, which is typical for workers at nuclear power plants. The results were compared with data from other Japanese and Caucasian tomographic phantoms and from International Commission on Radiological Protection (ICRP) report 74. The further investigation of the difference in organ dose and effective dose among tomographic
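
    The in-house conversion code is not published; the sketch below shows, under stated assumptions, the two steps the abstract describes: halving the axial resolution to fit memory, then flattening the organ-index array to an ASCII stream suitable for a repeated-structure lattice fill. Array sizes here are reduced stand-ins, and the traversal order of a real MCNPX lattice card is an assumption.

    import numpy as np

    def downsample_axial(phantom: np.ndarray) -> np.ndarray:
        """Halve the number of axial slices (e.g. 866 -> 433) by keeping every
        other slice; a majority rule over slice pairs would be an alternative."""
        return phantom[:, :, ::2]

    def to_lattice_ascii(phantom: np.ndarray, path: str, per_line: int = 20):
        """Write organ index numbers, one per voxel, as the ASCII fill values
        of a repeated-structure lattice."""
        flat = phantom.ravel(order="F")          # x varying fastest (assumed)
        with open(path, "w") as f:
            for i in range(0, flat.size, per_line):
                f.write(" ".join(str(v) for v in flat[i:i + per_line]) + "\n")

    # Reduced stand-in for the 320 x 160 x 866 organ-index array.
    voxels = np.random.default_rng(0).integers(0, 55, size=(32, 16, 86))
    to_lattice_ascii(downsample_axial(voxels), "lattice_fill.txt")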

  13. Impact on DNB predictions of mixing models implemented into the three-dimensional thermal-hydraulic code Thyc

    International Nuclear Information System (INIS)

    Banner, D.

    1993-10-01

    The objective of this paper is to show how departure from nucleate boiling (DNB) predictions can be improved by the THYC software. The EPRI/Columbia University E161 data base has been used for this study. In a first step, three thermal-hydraulic mixing models were implemented into the code in order to obtain more accurate calculations of local void fractions at the DNB location. The three investigated models (A, B and C) are presented in order of increasing complexity. Model A assumes a constant turbulent viscosity throughout the flow. In model B, a k-L turbulence transport equation has been implemented to model the generation and decay of turbulence in the DNB test section. Model C additionally represents the oriented transverse flows due to mixing vanes on top of the k-L equation. A parametric study carried out with the three mixing models identifies the most significant parameters. The occurrence of departure from nucleate boiling is then predicted using a DNB correlation. Similar results are obtained as long as the DNB correlation is kept unchanged. In a second step, an attempt was made to replace correlations with another statistical approach (a pseudo-cubic thin-plate spline method). It is shown that the standard deviations of P/M (predicted to measured) ratios can be greatly improved by advanced statistics. (author). 7 figs., 2 tabs., 9 refs

  14. Experience with the Open Source based implementation for ATLAS Conditions Data Management System

    CERN Document Server

    Amorim, A; Oliveira, C; Pedro, L; Barros, N

    2003-01-01

    Conditions Data in high energy physics experiments is frequently understood as all the data needed for reconstruction besides the event data itself. This includes all sorts of slowly evolving data such as detector alignment, calibration and robustness, and data from the detector control system. Every Conditions Data Object is associated with a time interval of validity and a version, and it is often useful to tag collections of Conditions Data Objects together. These issues have already been investigated and a data model has been proposed and used for different implementations based on commercial DBMSs, both at CERN and for the BaBar experiment. The special case of the ATLAS complex trigger, which requires online access to calibration and alignment data, poses new challenges that have to be met using a flexible and customizable solution more in line with Open Source components. Motivated by the ATLAS challenges we have developed an alternative implementation, based on an Open Source RDBMS. Several issues...
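
    The data model sketched in this record, with every conditions object carrying an interval of validity and a version and with tags grouping collections, can be illustrated in a few lines; the class and field names below are illustrative, not the ATLAS schema.

    from dataclasses import dataclass
    from typing import Any, List

    @dataclass
    class ConditionsObject:
        payload: Any       # e.g. alignment constants for one detector element
        since: int         # start of the interval of validity (run or timestamp)
        until: int         # end of the interval of validity
        version: int
        tag: str           # named collection, e.g. "reco_v3"

    def lookup(db: List[ConditionsObject], tag: str, time: int) -> ConditionsObject:
        """Return the highest-version object of `tag` whose IOV covers `time`."""
        valid = [c for c in db if c.tag == tag and c.since <= time < c.until]
        return max(valid, key=lambda c: c.version)

    db = [ConditionsObject({"x_shift": 0.12}, 0, 100, 1, "reco_v3"),
          ConditionsObject({"x_shift": 0.15}, 0, 100, 2, "reco_v3")]
    assert lookup(db, "reco_v3", 42).version == 2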

  15. Four energy group neutron flux distribution in the Syrian miniature neutron source reactor using the WIMSD4 and CITATION code

    International Nuclear Information System (INIS)

    Khattab, K.; Omar, H.; Ghazi, N.

    2009-01-01

    A 3-D (R, θ, Z) neutronic model for the Miniature Neutron Source Reactor (MNSR) was developed earlier to conduct the reactor neutronic analysis. The group constants for all the reactor components were generated using the WIMSD4 code. The reactor excess reactivity and the four-group neutron flux distributions were calculated using the CITATION code. This model is used in this paper to calculate the point-wise four-energy-group neutron flux distributions in the MNSR along the radial, angular and axial directions. Good agreement is found between the measured and the calculated thermal neutron flux in the inner and outer irradiation sites, with relative differences of less than 7% and 5%, respectively. (author)

  16. New implementation of an SX700 undulator beamline at the Advanced Light Source

    International Nuclear Information System (INIS)

    Warwick, T.; Andresen, N.; Comins, J.; Kaznacheyev, K.; Kortright, J.B.; McKean, P.J.; Padmore, H.A.; Shuh, D.K.; Stevens, T.; Tyliszczak, T.

    2004-01-01

    A newly engineered implementation of a collimated SX700-style beam line for soft x-rays is described. This facility is operational at the Advanced Light Source and delivers high brightness undulator beams to a scanning zone plate microscope and to an array of end stations for x-ray spectroscopic studies of wet surfaces. Switching between branches is motorized, servo-steering systems maintain throughput and the monochromator works together with the elliptical undulator for a fully automated facility

  17. Implementing a bar-code assisted medication administration system: effects on the dispensing process and user perceptions.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Cheng, K; Lai, K; Chui, W C M; Cheung, B M Y

    2014-06-01

    We assessed the effects of a bar-code assisted medication administration (BCMA) system used without the support of computerised prescribing (stand-alone BCMA) on the dispensing process and its users. The stand-alone BCMA system was implemented in one ward of a teaching hospital. The number of dispensing steps, the dispensing time and potential dispensing errors (PDEs) were directly observed one month before and eight months after the intervention. Attitudes of pharmacy and nursing staff were assessed using a questionnaire (Likert scale) and interviews. Among the 1291 and 471 drug items observed before and after the introduction of the technology, respectively, the number of dispensing steps increased from five to eight and the time (standard deviation) for one member of staff to dispense one drug item increased from 0.8 (0.09) to 1.5 (0.12) min. Among the 2828 and 471 drug items observed before and after the intervention, respectively, the number of PDEs increased significantly. Pharmacy staff perceived that the system offered less benefit to the dispensing process (9/16). Nursing staff perceived the system as useful in improving the accuracy of drug administration (7/10). Implementing a stand-alone BCMA system may slow down and complicate the dispensing process. Nursing staff believe the stand-alone BCMA system could improve the drug administration process, but pharmacy staff believe the technology would be more helpful if supported by computerised prescribing. Periodical assessments are needed to identify weaknesses in the process after implementation, and all users should be educated on the benefits of using this technology. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Investigation of Anisotropy Caused by Cylinder Applicator on Dose Distribution around Cs-137 Brachytherapy Source using MCNP4C Code

    Directory of Open Access Journals (Sweden)

    Sedigheh Sina

    2011-06-01

    Introduction: Brachytherapy is a type of radiotherapy in which radioactive sources are used in the proximity of tumors, normally for the treatment of malignancies of the head, prostate and cervix. Materials and Methods: The Cs-137 Selectron source is a low-dose-rate (LDR) brachytherapy source used in a remote afterloading system for the treatment of different cancers. This system uses active and inactive spherical sources of 2.5 mm diameter, which can be arranged in different configurations inside the applicator to obtain different dose distributions. In this study, the dose distribution at different distances from the source was first obtained around a single pellet inside the applicator in a water phantom, using the MCNP4C Monte Carlo code. The simulations were then repeated for six active pellets in the applicator and for six point sources. Results: The anisotropy of the dose distribution due to the presence of the applicator was obtained by dividing the dose at each distance and angle by the dose at the same distance at an angle of 90 degrees. According to the results, the doses decrease towards the applicator tips. For example, for points at distances of 5 and 7 cm from the source and an angle of 165 degrees, the discrepancies reach 5.8% and 5.1%, respectively. By increasing the number of pellets to six, these values reach 30% at an angle of 5 degrees. Discussion and Conclusion: The results indicate that the presence of the applicator causes a significant dose decrease at the tip of the applicator compared with the dose in the transverse plane. However, treatment planning systems assume an isotropic dose distribution around the source, and this causes significant errors in treatment planning which are not negligible, especially for a large number of sources inside the applicator.
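
    The anisotropy quantity used above is simply the dose at (r, θ) normalized to the dose at the same radius on the transverse plane; a short helper makes the definition explicit, with stand-in tally values chosen to be consistent with the 5.8% figure quoted in the record.

    def anisotropy(dose, r_cm, theta_deg):
        """F(r, theta) = D(r, theta) / D(r, 90 deg), from a tabulated dose
        lookup such as an MCNP tally (values here are illustrative)."""
        return dose[(r_cm, theta_deg)] / dose[(r_cm, 90)]

    dose = {(5, 90): 1.000e-2, (5, 165): 0.942e-2}   # dose falls off at the tip
    print(f"F(5 cm, 165 deg) = {anisotropy(dose, 5, 165):.3f}")   # ~0.942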

  19. Developing open-source codes for electromagnetic geophysics using industry support

    Science.gov (United States)

    Key, K.

    2017-12-01

    Funding for open-source software development in academia often takes the form of grants and fellowships awarded by government bodies and foundations where there is no conflict-of-interest between the funding entity and the free dissemination of the open-source software products. Conversely, funding for open-source projects in the geophysics industry presents challenges to conventional business models where proprietary licensing offers value that is not present in open-source software. Such proprietary constraints make it easier to convince companies to fund academic software development under exclusive software distribution agreements. A major challenge for obtaining commercial funding for open-source projects is to offer a value proposition that overcomes the criticism that such funding is a give-away to the competition. This work draws upon a decade of experience developing open-source electromagnetic geophysics software for the oil, gas and minerals exploration industry, and examines various approaches that have been effective for sustaining industry sponsorship.

  20. Calculation of the effective dose from natural radioactivity sources in soil using MCNP code

    International Nuclear Information System (INIS)

    Krstic, D.; Nikezic, D.

    2008-01-01

    Full text: The effective dose delivered by photons emitted from natural radioactivity in soil was calculated in this report. Calculations were done for the most common natural radionuclides in soil: the 238U and 232Th series and 40K. An ORNL age-dependent phantom and the Monte Carlo transport code MCNP-4B were employed to calculate the energy deposited in all organs of the phantom. The effective dose was calculated according to ICRP 74 recommendations. Conversion coefficients of effective dose per air kerma were determined. The results obtained here were compared with those of other authors.

  1. In-vessel source term analysis code TRACER version 2.3. User's manual

    International Nuclear Information System (INIS)

    Toyohara, Daisuke; Ohno, Shuji; Hamada, Hirotsugu; Miyahara, Shinya

    2005-01-01

    A computer code TRACER (Transport Phenomena of Radionuclides for Accident Consequence Evaluation of Reactor) version 2.3 has been developed to evaluate the species and quantities of fission products (FPs) released into the cover gas during a fuel pin failure accident in an LMFBR. TRACER version 2.3 includes the new or modified models listed below. a) Booth model: a new model for FP release from fuel. b) Modified model for FP transfer from fuel to bubbles or the sodium coolant. c) Modified model for bubble dynamics in the coolant. The computational models, input data and output data of TRACER version 2.3 are described in this user's manual. (author)

  2. Proposal for implementation of alternative source term in the nuclear power plant of Laguna Verde

    International Nuclear Information System (INIS)

    Bazan L, A.; Lopez L, M.; Vargas A, A.; Cardenas J, J. B.

    2009-10-01

    In 2010 the Laguna Verde nuclear power plant will implement an extended power uprate in both units of the plant. In accordance with the methodology of NEDC-33004P-A (constant pressure power uprate), the core source term used for accident evaluations was increased in proportion to the ratio of power levels. This means that, for a design basis loss-of-coolant accident, a power increase of 15% produced a 15% increase in the dose to the main control room. Using the method of NEDC-33004P-A at extended power uprate conditions, it was determined that the dose to the main control room is very near the regulatory limit established by SRP 6.4. Therefore, in order to recover the margin, the Laguna Verde nuclear power plant will calculate an alternative source term following the criteria established in RG 1.183 (alternative radiological source terms for evaluating design basis accidents at nuclear power reactors). This approach also yields a more realistic dose value using the criterion of 10 CFR 50.67, and additional operational flexibilities are expected as a further benefit. This paper presents the proposal for implementing the alternative source term at Laguna Verde. (Author)

  3. Design and implementation of embedded hardware accelerator for diagnosing HDL-CODE in assertion-based verification environment

    Directory of Open Access Journals (Sweden)

    C. U. Ngene

    2013-08-01

    The use of assertions for monitoring the designer's intention in a hardware description language (HDL) model is gaining popularity, as it helps the designer to observe internal errors at the output ports of the device under verification. During verification, assertions are synthesised and the generated data are represented in tabular form. The amount of data generated can be enormous, depending on the size of the code and the number of modules that constitute it. Furthermore, manually inspecting these data to diagnose the module with a functional violation is a time-consuming process that negatively affects the overall product development time. To locate the module with a functional violation within an acceptable diagnostic time, the data processing and analysis procedure must be accelerated. In this paper a multi-array processor (hardware accelerator) was designed, implemented in a Virtex6 field programmable gate array (FPGA), and integrated into the verification environment. The design was captured in very high speed integrated circuit HDL (VHDL), synthesised with the Xilinx design suite ISE 13.1 and simulated with Xilinx ISIM. The multi-array processor (MAP) executes three logical operations (AND, OR, XOR) and a one's compaction operation on arrays of data in parallel. An improvement in processing and analysis time was recorded, compared to the manual procedure, after the multi-array processor was integrated into the verification environment. It was also found that the multi-array processor, which was developed as an Intellectual Property (IP) core, can be used in applications where output responses and a golden model represented in the form of matrices must be compared for searching, recognition and decision-making.

  4. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ - supplementary report

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, Jr, D E; Pleasant, J C; Killough, G G

    1980-05-01

    The purpose of this report is to describe revisions in the SFACTOR computer code and to provide useful documentation for that program. The SFACTOR computer code has been developed to implement current methodologies for computing the average dose equivalent rate S(X ← Y) to specified target organs in man due to 1 μCi of a given radionuclide uniformly distributed in designated source organs. The SFACTOR methodology is largely based upon that of Snyder; however, it has been expanded to include components of S from alpha and spontaneous fission decay, in addition to electron and photon radiations. With this methodology, S-factors can be computed for any radionuclide for which decay data are available. The tabulations in Appendix II provide a reference compilation of S-factors for several dosimetrically important radionuclides which are not available elsewhere in the literature. These S-factors are calculated for an adult with characteristics similar to those of the International Commission on Radiological Protection's Reference Man. Corrections to tabulations from Dunning are presented in Appendix III, based upon the methods described in Section 2.3. 10 refs.
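
    In the MIRD-style schema that SFACTOR implements, each radiation component contributes the mean energy emitted per decay times the fraction of that energy absorbed in the target, weighted by a quality factor and divided by the target mass. A simplified, illustrative sketch of that sum; the inputs are invented and the unit conversion to the tabulated rem per μCi-day is omitted.

    def s_factor(components, target_mass_g):
        """Simplified S(X <- Y): sum over radiation components of
        (mean energy per decay) x (absorbed fraction) x (quality factor),
        divided by the target mass. Result in MeV per gram per decay."""
        return sum(energy * absorbed_fraction * quality
                   for energy, absorbed_fraction, quality in components) / target_mass_g

    # Illustrative components: (MeV per decay, absorbed fraction, quality factor).
    components = [(0.50, 0.30, 1.0),    # photons, partially escaping the target
                  (0.20, 1.00, 1.0),    # electrons, absorbed locally
                  (5.30, 1.00, 20.0)]   # alphas, with a high quality factor
    print(s_factor(components, target_mass_g=310.0))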

  5. Implementation of a Transition Model in a NASA Code and Validation Using Heat Transfer Data on a Turbine Blade

    Science.gov (United States)

    Ameri, Ali A.

    2012-01-01

    The purpose of this report is to summarize and document the work done to enable a NASA CFD code to model the laminar-turbulent transition process on an isolated turbine blade. The ultimate purpose of the present work is to down-select a transition model that would allow the flow simulation of a variable speed power turbine to be performed accurately. The flow modeling in its final form will account for blade row interactions and their effects on transition, which would lead to accurate accounting for losses. The present work only concerns itself with steady flows of variable inlet turbulence. The low Reynolds number k-ω model of Wilcox and a modified version of the same model are used for modeling of transition against experimentally measured blade pressure and heat transfer. It is shown that the k-ω model and its modified variant fail to simulate the transition with any degree of accuracy. A case is thus made for the adoption of more accurate transition models. Three-equation models based on the work of Mayle on laminar kinetic energy were explored. The three-equation model of Walters and Leylek was considered to be in a relatively mature state of development and was implemented in the Glenn-HT code. Two-dimensional heat transfer predictions of flat plate flow and two- and three-dimensional heat transfer predictions on a turbine blade were performed and are reported herein. The surface heat transfer rate serves as a sensitive indicator of transition. With the newly implemented model, it is shown that the simulation of the transition process is much improved over the baseline k-ω model for the single Reynolds number and pressure ratio attempted, and agreement with heat transfer data became more satisfactory. Armed with the new transition model, total-pressure losses of the computed three-dimensional flow of the E3 tip section cascade were compared to the experimental data for a range of incidence angles. The results obtained form a partial loss bucket for the chosen blade

  6. SMILEI: A collaborative, open-source, multi-purpose PIC code for the next generation of super-computers

    Science.gov (United States)

    Grech, Mickael; Derouillat, J.; Beck, A.; Chiaramello, M.; Grassi, A.; Niel, F.; Perez, F.; Vinci, T.; Fle, M.; Aunai, N.; Dargent, J.; Plotnikov, I.; Bouchard, G.; Savoini, P.; Riconda, C.

    2016-10-01

    Over the last decades, Particle-In-Cell (PIC) codes have been central tools for plasma simulations. Today, new trends in High-Performance Computing (HPC) are emerging, dramatically changing HPC-relevant software design and leaving some - if not most - legacy codes far below the level of performance expected on the new and future massively-parallel super computers. SMILEI is a new open-source PIC code co-developed by both plasma physicists and HPC specialists, and applied to a wide range of physics studies: from laser-plasma interaction to astrophysical plasmas. It benefits from an innovative parallelization strategy that relies on a super-domain-decomposition allowing for enhanced cache use and efficient dynamic load balancing. Beyond these HPC-related developments, SMILEI also benefits from additional physics modules allowing it to deal with binary collisions, field and collisional ionization, and radiation back-reaction. This poster presents the SMILEI project, its HPC capabilities, and illustrates some of the physics problems tackled with SMILEI.

  7. Simulation of droplet impact onto a deep pool for large Froude numbers in different open-source codes

    Science.gov (United States)

    Korchagova, V. N.; Kraposhin, M. V.; Marchevsky, I. K.; Smirnova, E. V.

    2017-11-01

    A droplet impact on a deep pool can induce macro-scale or micro-scale effects such as a crown splash, a high-speed jet, or the formation of secondary droplets or thin liquid films. Which regime occurs depends on the diameter and velocity of the droplet, the liquid properties, the effects of external forces and other factors that a set of dimensionless criteria can account for. In the present research, we consider a droplet and pool consisting of the same viscous incompressible liquid. We take surface tension into account but neglect gravity forces. We used two open-source codes (OpenFOAM and Gerris) for our computations and review their suitability for simulating the free-surface flows that may take place after a droplet impact on the pool. Both codes simulated several modes of droplet impact. We estimated the effect of the liquid properties in terms of the Reynolds number and the Weber number. Numerical simulation enabled us to find the boundaries between different modes of droplet impact on a deep pool and to plot the corresponding mode maps. The ratio of the liquid density to that of the surrounding gas induces several changes in the mode maps: increasing this density ratio suppresses the crown splash.
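
    The governing dimensionless groups mentioned above are straightforward to evaluate for a given impact; a short helper, assuming SI units and water-like properties:

    def impact_numbers(d, v, rho, mu, sigma, g=9.81):
        """Reynolds, Weber and Froude numbers for a droplet of diameter d [m]
        hitting the pool at speed v [m/s]."""
        re = rho * v * d / mu           # inertia / viscosity
        we = rho * v ** 2 * d / sigma   # inertia / surface tension
        fr = v ** 2 / (g * d)           # inertia / gravity; large Fr justifies
        return re, we, fr               # neglecting gravity, as in the paper

    # A 2 mm water droplet at 3 m/s: Re ~ 6000, We ~ 250, Fr ~ 459.
    print(impact_numbers(2e-3, 3.0, rho=1000.0, mu=1.0e-3, sigma=0.072))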

  8. Bug-Fixing and Code-Writing: The Private Provision of Open Source Software

    DEFF Research Database (Denmark)

    Bitzer, Jürgen; Schröder, Philipp

    2002-01-01

    Open source software (OSS) is a public good. A self-interested individual would consider providing such software, if the benefits he gained from having it justified the cost of programming. Nevertheless each agent is tempted to free ride and wait for others to develop the software instead...

  9. SETMDC: Preprocessor for CHECKR, FIZCON, INTER, etc. ENDF Utility source codes

    International Nuclear Information System (INIS)

    Dunford, Charles L.

    2002-01-01

    Description of program or function: SETMDC-6.13 is a utility program that converts the source decks of the following set of programs to different computers: CHECKR-6.13; FIZCON-6.13; GETMAT-6.13; INTER-6.13; LISTEF-6; PLOTEF-6; PSYCHE-6; STANEF-6.13

  10. Implementing an Open Source Electronic Health Record System in Kenyan Health Care Facilities: Case Study.

    Science.gov (United States)

    Muinga, Naomi; Magare, Steve; Monda, Jonathan; Kamau, Onesmus; Houston, Stuart; Fraser, Hamish; Powell, John; English, Mike; Paton, Chris

    2018-04-18

    The Kenyan government, working with international partners and local organizations, has developed an eHealth strategy, specified standards, and guidelines for electronic health record adoption in public hospitals and implemented two major health information technology projects: District Health Information Software Version 2, for collating national health care indicators and a rollout of the KenyaEMR and International Quality Care Health Management Information Systems, for managing 600 HIV clinics across the country. Following these projects, a modified version of the Open Medical Record System electronic health record was specified and developed to fulfill the clinical and administrative requirements of health care facilities operated by devolved counties in Kenya and to automate the process of collating health care indicators and entering them into the District Health Information Software Version 2 system. We aimed to present a descriptive case study of the implementation of an open source electronic health record system in public health care facilities in Kenya. We conducted a landscape review of existing literature concerning eHealth policies and electronic health record development in Kenya. Following initial discussions with the Ministry of Health, the World Health Organization, and implementing partners, we conducted a series of visits to implementing sites to conduct semistructured individual interviews and group discussions with stakeholders to produce a historical case study of the implementation. This case study describes how consultants based in Kenya, working with developers in India and project stakeholders, implemented the new system into several public hospitals in a county in rural Kenya. The implementation process included upgrading the hospital information technology infrastructure, training users, and attempting to garner administrative and clinical buy-in for adoption of the system. The initial deployment was ultimately scaled back due to a

  11. An alternative technique for simulating volumetric cylindrical sources in the Morse code utilization

    International Nuclear Information System (INIS)

    Vieira, W.J.; Mendonca, A.G.

    1985-01-01

    In the solution of deep-penetration problems using the Monte Carlo method, calculation techniques and strategies are used in order to increase the particle population in the regions of interest. A common procedure is the coupling of two-dimensional calculations, with (r,z) discrete ordinates results transformed into source data, and three-dimensional Monte Carlo calculations. An alternative technique for this procedure is presented, which proved effective when applied to a sample problem. (F.E.) [pt
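
    Whatever coupling scheme is used, the Monte Carlo stage must sample starting positions uniformly over the cylindrical source volume. The standard recipe below (square root on the radial coordinate) avoids the classic mistake of clustering points near the axis; it is a generic sketch, not the specific technique of the record.

    import numpy as np

    def sample_cylinder(n, radius, height, rng=None):
        """Uniformly distributed source points inside a cylinder of the given
        radius and height, centred on the z axis."""
        rng = rng or np.random.default_rng()
        r = radius * np.sqrt(rng.random(n))    # sqrt => uniform density in area
        phi = 2.0 * np.pi * rng.random(n)
        z = height * rng.random(n)
        return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

    points = sample_cylinder(10_000, radius=1.0, height=2.0)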

  12. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    Science.gov (United States)

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals, were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  13. Coding For Compression Of Low-Entropy Data

    Science.gov (United States)

    Yeh, Pen-Shu

    1994-01-01

    Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
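
    The sub-1-bit claim follows directly from the source entropy: for a strongly biased binary source, the Shannon entropy is far below the 1 bit/symbol floor of any scheme, such as Huffman coding, that assigns at least one whole codeword per symbol. A quick check:

    from math import log2

    def binary_entropy(p):
        """Shannon entropy, in bits per symbol, of a binary source with P(1) = p."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * log2(p) - (1 - p) * log2(1 - p)

    # A low-information-content source in which ones occur 2% of the time:
    print(binary_entropy(0.02))    # ~0.141 bits/symbol, well below Huffman's 1.0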

  14. Case study of open-source enterprise resource planning implementation in a small business

    Science.gov (United States)

    Olson, David L.; Staley, Jesse

    2012-02-01

    Enterprise resource planning (ERP) systems have been recognised as offering great benefit to some organisations, although they are expensive and problematic to implement. The cost and risk make well-developed proprietorial systems unaffordable to small businesses. Open-source software (OSS) has become a viable means of producing ERP system products. The question this paper addresses is the feasibility of OSS ERP systems for small businesses. A case is reported involving two efforts to implement freely distributed ERP software products in a small US make-to-order engineering firm. The case emphasises the potential of freely distributed ERP systems, as well as some of the hurdles involved in their implementation. The paper briefly reviews highlights of OSS ERP systems, with the primary focus on reporting the case experiences for efforts to implement ERPLite software and xTuple software. While both systems worked from a technical perspective, both failed due to economic factors. While these economic conditions led to imperfect results, the case demonstrates the feasibility of OSS ERP for small businesses. Both experiences are evaluated in terms of risk dimension.

  15. Development and implementation of the regulatory control of sources in Latin American Model Project countries

    International Nuclear Information System (INIS)

    Ferruz Cruz, P.

    2001-01-01

    After a general assessment of the situation regarding radiation safety and the radiation protection infrastructure in Latin American countries, several of them were invited to participate in a Model Project oriented, in some cases, towards establishing a mechanism for national regulatory control of radiation sources, and in others, towards upgrading their national control programme. All these activities aimed at reaching an effective and sustainable radiation protection infrastructure based on international basic safety standards. The paper presents a general overview of the current situation with regard to radiation protection within the Model Project countries in Latin America after almost five years of activities. It includes: the implementation of regulatory issues; the control of occupational, medical and public exposures; emergency response and waste safety issues. The paper also presents some lessons learned during implementation concerning the numerous activities involved in this interregional project. (author)

  16. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Science.gov (United States)

    Lin, Hui; Liu, Tianyu; Su, Lin; Bednarz, Bryan; Caracappa, Peter; Xu, X. George

    2017-09-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and an Nvidia GPU using ARCHER, and to explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.

  17. Modeling of Radiotherapy Linac Source Terms Using ARCHER Monte Carlo Code: Performance Comparison for GPU and MIC Parallel Computing Devices

    Directory of Open Access Journals (Sweden)

    Lin Hui

    2017-01-01

    Monte Carlo (MC) simulation is well recognized as the most accurate method for radiation dose calculations. For radiotherapy applications, accurate modelling of the source term, i.e. the clinical linear accelerator, is critical to the simulation. The purpose of this paper is to perform source modelling and examine the accuracy and performance of the models on Intel Many Integrated Core coprocessors (aka Xeon Phi) and an Nvidia GPU using ARCHER, and to explore potential optimization methods. Phase space-based source modelling has been implemented. Good agreement was found in a tomotherapy prostate patient case and a TrueBeam breast case. In terms of performance, the whole simulation for the prostate plan and the breast plan took about 173 s and 73 s, respectively, with 1% statistical error.
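
    Phase space-based source modelling, as used in both ARCHER records above, amounts to replaying stored accelerator-head particles: each record carries position, direction, energy and statistical weight, and the transport code draws starting particles from the file instead of sampling an analytic source. A schematic sketch with an assumed record layout; real formats, e.g. the IAEA phase-space format, differ in detail.

    import numpy as np

    # Stand-in phase space: columns x, y [cm], u, v, w (direction cosines),
    # E [MeV], statistical weight. A real file would be loaded from disk.
    rng = np.random.default_rng(0)
    phsp = rng.random((1_000_000, 7))

    def draw_source_particles(phsp, n, rng=rng):
        """Resample n starting particles (with replacement) from the stored
        phase space; each carries its weight into the tallies unchanged."""
        return phsp[rng.integers(0, phsp.shape[0], size=n)]

    batch = draw_source_particles(phsp, 100_000)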

  18. Implementation of the Third Energy Package and Renewable Energy Sources on Croatian Liberalised Market

    International Nuclear Information System (INIS)

    Toljan, I.

    2016-01-01

    The Croatian Third Energy Package was adopted in 2012, and its implementation in the period since has accelerated changes in all areas of the Croatian energy sector. The EU's Third Energy Package consists of two directives and three regulations. Directives are implemented into the national legislation of EU Member States (which choose the methods), while regulations apply directly across the entire EU. The main goal is to establish a single electricity and gas market with market prices (or lower) and high safety and public service standards. Croatia began offering incentives for generation from renewable energy sources in 2004, and by the end of this year the first contract in that system (the Ravne wind power plant on the island of Pag) will end. The question that presents itself is where and how the owner will sell electricity from now on. The current market suppliers, as well as the new organisation in the Croatian energy sector, the Croatian Power Exchange, are both realistic options. The balancing market led by the Croatian TSO is growing and gaining importance because of the larger amount of installed power connected to the grid (mostly wind power plants). Can the Croatian transmission operator still guarantee safe operational planning as before (the last blackout happened 10 years ago)? The existing electricity and gas market design no longer satisfies its participants, so an adjustment to the new realities of a free market is necessary (a power exchange, no more subsidies for renewable sources). What changes should be made in legislation so that the free market can develop and be harmonised with the European market? Decarbonisation and digitalisation are the basis of European energy policy but are still waiting for wider and stronger application in Croatia; is the current legislation sufficient? With these analyses the paper contributes to the lessons learned about the implementation of the Third Energy Package in Croatia and a common energy policy in the EU. (author)

  19. Advanced Neutron Source Dynamic Model (ANSDM) code description and user guide

    International Nuclear Information System (INIS)

    March-Leuba, J.

    1995-08-01

    A mathematical model is designed that simulates the dynamic behavior of the Advanced Neutron Source (ANS) reactor. Its main objective is to model important characteristics of the ANS systems as they are being designed, updated, and employed; its primary design goal, to aid in the development of safety and control features. During the simulations the model is also found to aid in making design decisions for thermal-hydraulic systems. Model components, empirical correlations, and model parameters are discussed; sample procedures are also given. Modifications are cited, and significant development and application efforts are noted focusing on examination of instrumentation required during and after accidents to ensure adequate monitoring during transient conditions

  20. Basic design of the HANARO cold neutron source using MCNP code

    International Nuclear Information System (INIS)

    Yu, Yeong Jin; Lee, Kye Hong; Kim, Young Jin; Hwang, Dong Gil

    2005-01-01

    The design of the Cold Neutron Source (CNS) for the HANARO research reactor is in progress. The CNS produces neutrons in the low energy range below 5 meV, using liquid hydrogen at around 21.6 K as the moderator. The primary goal of the CNS design is to maximize the cold neutron flux at wavelengths of around 2-12 Å while minimizing the nuclear heat load. In this paper, the basic design of the HANARO CNS is described.
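
    The quoted energy cut-off and wavelength band are linked by the de Broglie relation for neutrons, λ[Å] ≈ 9.045 / √E[meV]; a two-line converter makes the correspondence explicit:

    def mev_to_angstrom(energy_mev: float) -> float:
        """Neutron de Broglie wavelength [Angstrom] for a kinetic energy in meV."""
        return 9.045 / energy_mev ** 0.5

    print(mev_to_angstrom(5.0))    # ~4.0 Angstrom, at the 5 meV cut-off
    print(mev_to_angstrom(0.57))   # ~12 Angstrom, the long-wavelength end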

  1. Developing and implementing safety culture in the uses of radiation sources

    International Nuclear Information System (INIS)

    Rojkind, R.H.

    1998-01-01

    This paper presents an approach to developing and implementing safety culture in the uses of radiation sources in medicine, industry, agriculture, research and teaching, and makes reference to the experience gained by the industry where that culture has been developed and improved, i.e. the nuclear industry. Suggestions to assist progress towards safety culture are described for regulators, organisations using those sources, and professional associations. Even though emphasis is given to small organisations or teams of workers, this approach may also be useful to larger organisations such as industrial irradiation companies or governmental research laboratories. In each case, the parties that are the principal focus of the learning process towards a progressive safety culture should be identified. (author)

  2. Personalized reminiscence therapy M-health application for patients living with dementia: Innovating using open source code repository.

    Science.gov (United States)

    Zhang, Melvyn W B; Ho, Roger C M

    2017-01-01

    Dementia is an illness that causes marked disability among elderly individuals. Patients living with dementia at times also experience non-cognitive symptoms, including hallucinations, delusional beliefs, emotional lability, sexualized behaviours and aggression. According to the National Institute of Clinical Excellence (NICE) guidelines, non-pharmacological techniques are typically the first-line option before adjuvant pharmacological options are considered. Reminiscence and music therapy are thus viable options. Lazar et al. [3] previously performed a systematic review of the use of technology to deliver reminiscence-based therapy to individuals living with dementia and highlighted that technology does have benefits in the delivery of reminiscence therapy. However, to date there has been a paucity of m-health innovations in this area. In addition, most of the current innovations are not personalized for each person living with dementia. Prior research has highlighted the utility of open source code repositories in bioinformatics research. The authors explain how they made use of an open source code repository in the development of a personalized m-health reminiscence therapy innovation for patients living with dementia. The availability of open source code repositories has changed the way healthcare professionals and developers develop smartphone applications today. Conventionally, a long iterative process is needed in the development of a native application, mainly because of the need for native programming and coding, especially if the application needs interactive or personalizable features. Such repositories enable rapid and cost-effective application development, and developers are also able to innovate further, as less time is spent in the iterative process.

  3. Self characterization of a coded aperture array for neutron source imaging

    Energy Technology Data Exchange (ETDEWEB)

    Volegov, P. L., E-mail: volegov@lanl.gov; Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H. [Los Alamos National Laboratory, Los Alamos, New Mexico 87544 (United States); Fittinghoff, D. N. [Livermore National Laboratory, Livermore, California 94550 (United States)

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  4. Microcontroller based implementation of fuel cell and battery integrated hybrid power source

    International Nuclear Information System (INIS)

    Fahad, A.; Ali, S.M.; Bhatti, A.A.; Nasir, M

    2013-01-01

    This paper presents the implementation of a digitally controlled hybrid power source composed of a fuel cell and a battery. The use of individual fuel cell stacks as a power source encounters many problems in achieving the desired load characteristics. A battery-integrated, digitally controlled hybrid system is proposed for high pulse-load requirements. The proposed hybrid power source fulfils these peak demands with a more efficient flow of energy than the individual operation of a fuel cell or battery system. A dc/dc converter provides optimal control of the power flow among the fuel cell, the battery and the load. The proposed system efficiently handles electrochemical constraints such as overcurrent, battery leakage current, and over- and under-voltage dips. By formulating an intelligent algorithm and incorporating digital technology (an AVR microcontroller), efficient control is achieved over the fuel cell current limit and the battery charge, voltage and current. The hybrid power source is tested and analyzed through simulations in MATLAB Simulink. Along with the attainment of the desired complex load profiles, the proposed design can also be used for power enhancement and optimization at different capacities. (author)
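
    The control objective described, letting the fuel cell carry the load up to its current limit while the battery absorbs pulses and recharges from the surplus, can be captured in a few lines of illustrative control logic. All limits and names are assumptions, not the paper's firmware.

    def power_split(load_w, fc_max_w, batt_soc, soc_min=0.2, soc_max=0.9):
        """Return (fuel cell power, battery power) in watts. Positive battery
        power discharges the battery; negative power recharges it."""
        if load_w > fc_max_w and batt_soc > soc_min:
            return fc_max_w, load_w - fc_max_w    # battery covers the pulse
        if load_w < fc_max_w and batt_soc < soc_max:
            return fc_max_w, load_w - fc_max_w    # surplus recharges the battery
        return load_w, 0.0                        # fuel cell alone

    print(power_split(1500.0, 1000.0, batt_soc=0.8))   # -> (1000.0, 500.0)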

  5. Delaunay Tetrahedralization of the Heart Based on Integration of Open Source Codes

    International Nuclear Information System (INIS)

    Pavarino, E; Neves, L A; Machado, J M; Momente, J C; Zafalon, G F D; Pinto, A R; Valêncio, C R; Godoy, M F de; Shiyou, Y; Nascimento, M Z do

    2014-01-01

    The Finite Element Method (FEM) is a numerical solution technique applied in many areas, such as the simulations used in studies to improve cardiac ablation procedures. For this purpose, the meshes should match the size and histological features of the structures of interest. Some methods and tools used to generate tetrahedral meshes are limited mainly by their conditions of use. In this paper, the integration of open source software is presented as an alternative for solid modeling and automatic mesh generation. To demonstrate its efficiency, cardiac structures were considered as a first application context: atria, ventricles, valves, arteries and pericardium. The proposed method makes it feasible to obtain refined meshes in an acceptable time and with the quality required for simulations using FEM

  6. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)
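
    The design point of GLASS, one algorithm source shared between a code that computes its matrix elements on the fly (Sn) and one that stores them (SPn), has a natural Python analogue in duck typing: the solver only ever asks for a matrix-vector product. An illustrative sketch, not the GLASS C++ interface:

    import numpy as np

    def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
        """CG written against any object exposing matvec(x); the storage
        policy of A is invisible to the algorithm."""
        x = np.zeros_like(b)
        r = b - A.matvec(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = A.matvec(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    class StoredMatrix:                    # SPn-style: elements kept in memory
        def __init__(self, M): self.M = M
        def matvec(self, x): return self.M @ x

    class OnTheFlyMatrix:                  # Sn-style: elements built per use
        def matvec(self, x):
            y = 2.0 * x                    # same SPD tridiagonal operator,
            y[1:] -= x[:-1]                # never materialized as an array
            y[:-1] -= x[1:]
            return y

    n = 50
    M = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    assert np.allclose(conjugate_gradient(StoredMatrix(M), b),
                       conjugate_gradient(OnTheFlyMatrix(), b))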

  7. An advanced boundary element method (BEM) implementation for the forward problem of electromagnetic source imaging

    International Nuclear Information System (INIS)

    Akalin-Acar, Zeynep; Gencer, Nevzat G

    2004-01-01

    The forward problem of electromagnetic source imaging has two components: a numerical model to solve the related integral equations and a model of the head geometry. This study is on the boundary element method (BEM) implementation for numerical solutions and on realistic head modelling. The use of second-order (quadratic) isoparametric elements and a recursive integration technique increases the accuracy of the solutions. Two new formulations are developed for the calculation of the transfer matrices used to obtain the potential and magnetic field patterns with realistic head models. The formulations incorporate the isolated problem approach for increased accuracy in the solutions. If a personal computer is used for the computations, each transfer matrix is calculated in 2.2 h. After this pre-computation period, solutions for arbitrary source configurations can be obtained in milliseconds for a realistic head model. A hybrid algorithm that uses snakes, morphological operations, region growing and thresholding is used for segmentation. The scalp, skull, grey matter, white matter and eyes are segmented from multimodal magnetic resonance images, and meshes for the corresponding surfaces are created. A mesh generation algorithm is developed for modelling intersecting tissue compartments, such as the eyes. To obtain more accurate results, quadratic elements are used in the realistic meshes. The resulting BEM implementation provides more accurate forward problem solutions and more efficient calculations, and can thus form a firm basis for future inverse problem solutions.

  8. Microwave implementation of two-source energy balance approach for estimating evapotranspiration

    Directory of Open Access Journals (Sweden)

    T. R. H. Holmes

    2018-02-01

    Full Text Available A newly developed microwave (MW) land surface temperature (LST) product is used to substitute for thermal infrared (TIR)-based LST in the Atmosphere–Land Exchange Inverse (ALEXI) modeling framework for estimating evapotranspiration (ET) from space. ALEXI implements a two-source energy balance (TSEB) land surface scheme in a time-differential approach, designed to minimize sensitivity to absolute biases in input records of LST through analysis of the rate of temperature change in the morning. Thermal infrared retrievals of the diurnal LST curve, traditionally from geostationary platforms, are hindered by cloud cover, reducing model coverage on any given day. This study tests the utility of diurnal temperature information retrieved from a constellation of satellites with microwave radiometers that together provide six to eight observations of Ka-band brightness temperature per location per day. This represents the first ever attempt at a global implementation of ALEXI with MW-based LST and is intended as the first step towards providing all-weather capability to the ALEXI framework. The analysis is based on 9-year-long, global records of ALEXI ET generated using both MW- and TIR-based diurnal LST information as input. In this study, the MW-LST (MW-based LST) sampling is restricted to the same clear-sky days as in the IR-based implementation, in order to analyze the impact of changing the LST dataset separately from the impact of sampling all-sky conditions. The results show that long-term bulk ET estimates from both LST sources agree well, with a spatial correlation of 92 % for total ET in the Europe–Africa domain and agreement in seasonal (3-month) totals of 83–97 % depending on the time of year. Most importantly, ALEXI-MW (MW-based ALEXI) also matches ALEXI-IR (IR-based ALEXI) very closely in terms of 3-month inter-annual anomalies, demonstrating its ability to capture the development and extent of drought conditions. Weekly ET output

  9. Summary report for ITER task - D10: Update and implementation of neutron transport and activation codes and processed libraries

    International Nuclear Information System (INIS)

    Attaya, H.

    1995-01-01

    The primary goal of this task is to provide capabilities in the activation code RACC to treat pulsed operation modes. In addition, the code is required to utilize the same spatial mesh and geometrical models as employed in the one- or multi-dimensional neutron transport codes used in the ITER design. This ensures that the same neutron flux generated by those codes is used to calculate the different activation parameters. Capabilities for generating graphical output of the calculated activation parameters are also required.

  10. The Journey of a Source Line: How your Code is Translated into a Controlled Flow of Electrons

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    In this series we help you understand the bits and pieces that make your code command the underlying hardware. A multitude of layers translate and optimize source code, written in compiled and interpreted programming languages such as C++, Python or Java, to machine language. We explain the role and behavior of the layers in question in a typical usage scenario. While our main focus is on compilers and interpreters, we also talk about other facilities - such as the operating system, instruction sets and instruction decoders. Biography: Andrzej Nowak runs TIK Services, a technology and innovation consultancy based in Geneva, Switzerland. In the recent past, he co-founded and sold an award-winning Fintech start-up focused on peer-to-peer lending. Earlier, Andrzej worked at Intel and in the CERN openlab. At openlab, he managed a lab collaborating with Intel and was part of the Chief Technology Office, which set up next-generation technology projects for CERN and the openlab partners.

  12. Raviart–Thomas-type sources adapted to applied EEG and MEG: implementation and results

    International Nuclear Information System (INIS)

    Pursiainen, S

    2012-01-01

    This paper numerically studies electroencephalography and magnetoencephalography (EEG and MEG), two non-invasive imaging modalities in which external measurements of the electric potential and the magnetic field, respectively, are utilized to reconstruct the primary current density (neuronal activity) of the human brain. The focus is on adapting a Raviart–Thomas-type source model to meet the needs of EEG and MEG applications. The goal is to construct a model that provides an accurate approximation of dipole source currents and can be flexibly applied to different reconstruction strategies as well as to realistic computation geometries. The finite element method is applied in the simulation of the data. Least-squares fit interpolation is used to establish Cartesian source directions, which guarantee that the recovered current field is minimally dependent on the underlying finite element mesh. The implementation is explained in detail and made accessible, e.g., by using quadrature-free formulae and the Gaussian one-point rule in numerical integration. Numerical results are presented concerning, for example, the iterative alternating sequential inverse algorithm as well as the resolution, smoothness and local refinement of the finite element mesh. Both spherical and pseudo-realistic head models, as well as real MEG data, are utilized in the numerical experiments. (paper)

  13. Review of the status of validation of the computer codes used in the severe accident source term reassessment study (BMI-2104)

    International Nuclear Information System (INIS)

    Kress, T.S.

    1985-04-01

    The determination of severe accident source terms must, by necessity it seems, rely heavily on the use of complex computer codes. Source term acceptability, therefore, rests on the assessed validity of such codes. Consequently, one element of NRC's recent efforts to reassess LWR severe accident source terms is to provide a review of the status of validation of the computer codes used in the reassessment. The results of this review are the subject of this document. The separate review documents compiled in this report were used as a resource, along with the results of the BMI-2104 study by BCL and the QUEST study by SNL, to arrive at a more-or-less independent appraisal of the status of source term modeling at this time.

  14. California Air Quality State Implementation Plans; Final Approval; Butte County Air Quality Management District; Stationary Source Permits

    Science.gov (United States)

    EPA is taking final action to approve a revision to the Butte County Air Quality Management District (BCAQMD) portion of the California State Implementation Plan (SIP). This revision concerns the District's New Source Review (NSR) permitting program.

  15. Fourth and last part in the modelling implementation project of two asymmetric cooling systems for ALMOD 3 computer codes

    International Nuclear Information System (INIS)

    Dominguez, L.; Camargo, C.T.M.

    1985-01-01

    A two-loop simulation capability was developed for the ALMOD 3W2 code by modelling the steam header line connecting the secondary sides of the steam generators. A brief description of the model is presented and two test cases are shown. Basic code changes are addressed. (Author)

  16. Implementation, verification, and validation of the FPIN2 metal fuel pin mechanics model in the SASSYS/SAS4A LMR transient analysis codes

    International Nuclear Information System (INIS)

    Sofu, T.; Kramer, J.M.

    1994-01-01

    The metal fuel version of the FPIN2 code, which provides a validated pin mechanics model, is coupled with SASSYS/SAS4A Version 3.0 for single-pin calculations. In this implementation, SASSYS/SAS4A provides pin temperatures, and FPIN2 performs the analysis of pin deformation and predicts the time and location of cladding failure. FPIN2 results are also used to estimate the axial expansion of fuel and the associated reactivity effects. Revalidation of the integrated SAS-FPIN2 code system is performed using TREAT tests.

  17. The design and implementation of multi-source application middleware based on service bus

    Science.gov (United States)

    Li, Yichun; Jiang, Ningkang

    2017-06-01

    With the rapid development of the Internet of Things (IoT), real-time monitoring data are growing in both variety and volume. To take full advantage of these data, we designed and implemented an application middleware that not only supports the three-layer architecture of IoT information systems but also enables the flexible configuration of multiple resource accesses and other accessory modules. The middleware platform is lightweight, secure, aspect-oriented (AOP), distributed and real-time, allowing application developers to construct information processing systems for related areas in a short period. Its functions include, but are not limited to: pre-processing of data formats, definition of data entities, calling and handling of distributed services, and massive data processing. Experimental results show that the middleware outperforms some message queue constructions to a degree, and that its throughput scales well as the number of distributed nodes increases, while the code remains simple. Currently, the middleware is applied in the system of the Shanghai Pudong environmental protection agency and has achieved great success.

  18. Implementation of Wolsong Pump Model, Pressure Tube Deformation Model and Off-take Model into MARS Code for Regulatory Auditing of CANDU Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Yoon, C.; Rhee, B. W.; Chung, B. D. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Cho, Y. J.; Kim, M. W. [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2008-05-15

    Korea currently has four operating units of the CANDU-6 type reactor in Wolsong. However, a safety assessment system for CANDU reactors has not been fully established due to the lack of self-reliant technology. Although the CATHENA code was introduced from AECL, it is undesirable to use a vendor's code for regulatory auditing analysis. In Korea, the MARS code has been developed for decades and is being considered by KINS as a thermal-hydraulic regulatory auditing tool for nuclear power plants. Before this decision, KINS (Korea Institute of Nuclear Safety) had developed the RELAP5/MOD3/CANDU code for CANDU safety analyses by modifying the models of the existing PWR auditing tool, RELAP5/MOD3. The main purpose of this study is to transplant the CANDU models of the RELAP5/MOD3/CANDU code to the MARS code, including quality assurance of the developed models. This first part of the research series presents the implementation and verification of the Wolsong pump model, the pressure tube deformation model, and the off-take model for arbitrary-angled branch pipes.

  20. An Open Source Web Map Server Implementation For California and the Digital Earth: Lessons Learned

    Science.gov (United States)

    Sullivan, D. V.; Sheffner, E. J.; Skiles, J. W.; Brass, J. A.; Condon, Estelle (Technical Monitor)

    2000-01-01

    This paper describes an Open Source implementation of the Open GIS Consortium's Web Map interface. It is based on the very popular Apache WWW server, the Sun Microsystems Java Servlet Development Kit, and a C-language shared library interface to a spatial datastore. This server was initially written as a proof of concept, to support a National Aeronautics and Space Administration (NASA) Digital Earth test bed demonstration. It will also find use in the California Land Science Information Partnership (CaLSIP), a joint program between NASA and the State of California. At least one Web Map-enabled server will be installed in every one of the state's 58 counties. This server will form the basis for a simple, easily maintained installation for those entities that do not yet require one of the larger, more expensive commercial offerings.

  1. A Pipelining Implementation for Parsing X-ray Diffraction Source Data and Removing the Background Noise

    International Nuclear Information System (INIS)

    Bauer, Michael A; Biem, Alain; McIntyre, Stewart; Xie Yuzhen

    2010-01-01

    Synchrotrons can be used to generate X-rays in order to probe materials at the atomic level. One approach is to use X-ray diffraction (XRD). The data from an XRD experiment consist of a sequence of digital image files; a single scan can consist of hundreds or even thousands of digital images. Existing analysis software processes these images individually and sequentially, and is usually run after the experiment is completed. The results from an XRD detector can be thought of as a sequence of images generated during the scan by the X-ray beam. If these images could be analyzed in near real time, the results could be sent to the researcher running the experiment and used to improve the overall experimental process and results. In this paper, we report on a stream processing application that removes background from XRD images using a pipelining implementation. We describe our techniques for using IBM InfoSphere Streams to parse XRD source data and remove the background. We present experimental results showing the super-linear speedup attained over a purely sequential version of the algorithm on a quad-core machine. These results demonstrate the potential of making good use of multi-core processors for high-performance stream processing of XRD images.
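
    As a rough illustration of the pipelining pattern (the paper uses IBM InfoSphere Streams; the snippet below is a generic Python stand-in that assumes frames stored as .npy files):

        import numpy as np
        from multiprocessing import Process, Queue

        def parse_stage(paths, out_q):
            """Stage 1: read frames and push them downstream."""
            for p in paths:
                out_q.put((p, np.load(p)))
            out_q.put(None)                      # end-of-stream marker

        def background_stage(in_q, out_q, alpha=0.05):
            """Stage 2: subtract a running background estimate."""
            bg = None
            while (item := in_q.get()) is not None:
                path, frame = item
                f = frame.astype(float)
                bg = f if bg is None else (1 - alpha) * bg + alpha * f
                out_q.put((path, np.clip(f - bg, 0.0, None)))
            out_q.put(None)

        # wiring (sketch): q1, q2 = Queue(), Queue()
        # Process(target=parse_stage, args=(paths, q1)).start()
        # Process(target=background_stage, args=(q1, q2)).start()

    Because the two stages run in separate processes, file I/O and background subtraction overlap, which is the source of the reported speedup on multi-core machines.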

  2. WEB 2.0-SUPPORT FOR CHANGE MANAGEMENT DURING BPMS IMPLEMENTATION USING AN OPEN SOURCE APPROACH

    Directory of Open Access Journals (Sweden)

    P.G. Van Schalkwyk

    2012-01-01

    Full Text Available The authors argue that business process management systems (BPMS) are exposed to risks of failure similar to those of traditional enterprise resource planning (ERP) systems. Change management is a significant critical success factor, and has to be managed well. Given the socio-technical nature of the implementation environment, communication and collaboration are crucially important to the success of change management. Based on the use of Web 2.0 tools and an open source approach, the authors provide a practical example of how collaboration and communication can be achieved as part of change management during BPMS implementation.

  3. Implementation of Neutronics Analysis Code using the Features of Object Oriented Programming via Fortran90/95

    Energy Technology Data Exchange (ETDEWEB)

    Han, Tae Young; Cho, Beom Jin [KEPCO Nuclear Fuel, Daejeon (Korea, Republic of)

    2011-05-15

    The object-oriented programming (OOP) concept became firmly established during the 1990s and was successfully incorporated into Fortran 90/95. OOP features such as information hiding, encapsulation, modularity and inheritance lead to code that satisfies the three R's: reusability, reliability and readability. The major OOP concepts, however, with the exception of modules, are rarely used in neutronics analysis codes, even when the code is written in Fortran 90/95. In this work, we show that OOP concepts can be employed via Fortran 90/95 to develop the neutronics analysis code ASTRA1D (Advanced Static and Transient Reactor Analyzer for 1-Dimension), and that they constitute a more efficient and reasonable programming approach.

  4. Dispersed flow film boiling: An investigation of the possibility to improve the models implemented in the NRC computer codes for the reflooding phase of the LOCA

    International Nuclear Information System (INIS)

    Andreani, M.; Yadigaroglu, G.; Paul Scherrer Inst.

    1992-08-01

    Dispersed flow film boiling is the heat transfer regime that occurs at high void fractions in a heated channel. The way this heat transfer mode is modelled in the NRC computer codes (RELAP5 and TRAC), and the validity of the assumptions and empirical correlations used, are discussed. An extensive review of the theoretical and experimental work related to heat transfer in highly dispersed mixtures reveals the basic deficiencies of these models: the investigation refers mostly to the typical conditions of low-rate bottom reflooding, since the simulation of this physical situation by the computer codes has often shown poor results. The alternative models available in the literature are reviewed, and their merits and limits are highlighted. The modifications that could improve the physics of the models implemented in the codes are identified.

  5. Assessment of gamma irradiation heating and damage in miniature neutron source reactor vessel using computational methods and SRIM - TRIM code

    International Nuclear Information System (INIS)

    Appiah-Ofori, F. F.

    2014-07-01

    The effects of gamma radiation heating and irradiation damage in the reactor vessel of Ghana Research Reactor-1 (GHARR-1), a Miniature Neutron Source Reactor (MNSR), were assessed using an implicit control volume finite difference numerical computation and validated by the SRIM-TRIM code. It was assumed that 5.0 MeV gamma rays from the reactor core generate heat that is absorbed completely by the interior surface of the MNSR vessel, affecting its performance through the induced displacement damage. This displacement damage is the result of lattice defects being created, which impair the vessel through the formation of point defect clusters such as vacancies and interstitials; these can develop into dislocation loops and networks, voids and bubbles, causing changes through the thickness of the vessel wall. The microscopic defects produced in the vessel by gamma radiation are referred to as radiation damage, while the modifications these defects produce in the macroscopic properties of the vessel are known as radiation effects. These effects are of major concern for materials used in nuclear energy production. The overall objective of this study was to assess the effects of gamma radiation heating and damage in the GHARR-1 MNSR vessel by a well-developed mathematical model with analytical and numerical solutions simulating the radiation damage in the vessel. The SRIM-TRIM code was used as a computational tool to determine the displacements per atom (dpa) associated with radiation damage, while the implicit control volume finite difference method was used to determine the temperature profile within the vessel due to gamma heating. The methodology adopted for assessing gamma heating in the vessel involved the one-dimensional steady-state Fourier heat conduction equation with volumetric heat generation, solved both analytically and with the implicit control volume finite difference approach, to determine the maximum temperature and
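
    A compact sketch of the numerical core described here, assuming placeholder material data rather than GHARR-1 values: an implicit finite-difference (control-volume style) solution of the 1-D steady conduction equation k*T'' + q''' = 0 across a slab wall with fixed surface temperatures:

        import numpy as np

        def slab_temperature(n=51, L=0.01, k=16.0, q=5.0e6,
                             t_left=300.0, t_right=300.0):
            """Solve k*T'' + q = 0 on [0, L] with Dirichlet ends
            (assumed thickness, conductivity and heating rate)."""
            dx = L / (n - 1)
            A = np.zeros((n, n))
            b = np.full(n, -q * dx**2 / k)       # interior right-hand side
            for i in range(1, n - 1):
                A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
            A[0, 0] = A[-1, -1] = 1.0
            b[0], b[-1] = t_left, t_right        # wall surface temperatures
            return np.linalg.solve(A, b)         # T profile; peak at centre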

  6. IMPLEMENTATION OF OPEN-SOURCE WEB MAPPING TECHNOLOGIES TO SUPPORT MONITORING OF GOVERNMENTAL SCHEMES

    Directory of Open Access Journals (Sweden)

    B. R. Pulsani

    2015-10-01

    Full Text Available Several schemes are undertaken by the government to uplift the social and economic condition of people. The monitoring of these schemes is done through information technology, where the involvement of Geographic Information Systems (GIS) is lacking. To demonstrate the benefits of thematic mapping as a tool for assisting officials in making decisions, a web mapping application was built for three government programs: the Mother and Child Tracking System (MCTS), Telangana State Housing Corporation Limited (TSHCL) and Ground Water Quality Mapping (GWQM). The three applications depicted the distribution of various parameters thematically and helped in identifying areas with higher and weaker distributions. Based on the three applications, the study finds that many government schemes share the nature of thematic mapping, and hence deduces that this kind of approach can be implemented for other schemes as well. The applications were developed using the SharpMap C# library, a free and open source mapping library for developing geospatial applications. The study highlights the cost benefits of SharpMap, brings out the advantages of this library over proprietary vendors, and further discusses its advantages over other open source libraries as well.

  7. HELIOS: An Open-source, GPU-accelerated Radiative Transfer Code for Self-consistent Exoplanetary Atmospheres

    Science.gov (United States)

    Malik, Matej; Grosheintz, Luc; Mendonça, João M.; Grimm, Simon L.; Lavie, Baptiste; Kitzmann, Daniel; Tsai, Shang-Min; Burrows, Adam; Kreidberg, Laura; Bedell, Megan; Bean, Jacob L.; Stevenson, Kevin B.; Heng, Kevin

    2017-02-01

    We present the open-source radiative transfer code named HELIOS, which is constructed for studying exoplanetary atmospheres. In its initial version, the model atmospheres of HELIOS are one-dimensional and plane-parallel, and the equation of radiative transfer is solved in the two-stream approximation with nonisotropic scattering. A small set of the main infrared absorbers is employed, computed with the opacity calculator HELIOS-K and combined using a correlated-k approximation. The molecular abundances originate from validated analytical formulae for equilibrium chemistry. We compare HELIOS with the work of Miller-Ricci & Fortney using a model of GJ 1214b, and perform several tests, where we find: model atmospheres with single-temperature layers struggle to converge to radiative equilibrium; k-distribution tables constructed with ≳0.01 cm^-1 resolution in the opacity function (≲10^3 points per wavenumber bin) may result in errors of ≳1%-10% in the synthetic spectra; and a diffusivity factor of 2 approximates the exact radiative transfer solution well in the limit of pure absorption. We construct "null-hypothesis" models (chemical equilibrium, radiative equilibrium, and solar elemental abundances) for six hot Jupiters. We find that the dayside emission spectra of HD 189733b and WASP-43b are consistent with the null hypothesis, while the null-hypothesis models consistently underpredict the observed fluxes of WASP-8b, WASP-12b, WASP-14b, and WASP-33b. We demonstrate that our results are somewhat insensitive to the choice of stellar models (blackbody, Kurucz, or PHOENIX) and metallicity, but are strongly affected by higher carbon-to-oxygen ratios. The code is publicly available as part of the Exoclimes Simulation Platform (exoclime.net).

  8. Radiological Threat Reduction (RTR) program: implementing physical security to protect large radioactive sources worldwide

    International Nuclear Information System (INIS)

    Lowe, Daniel L.

    2004-01-01

    The U.S. Department of Energy's Radiological Threat Reduction (RTR) Program strives to reduce the threat of a Radiological Dispersion Device (RDD) incident that could affect U.S. interests worldwide. Sandia National Laboratories supports the RTR Program on many different levels. Sandia works directly with DOE to develop strategies, including the selection of countries to receive support and the identification of radioactive materials to be protected. Sandia also works with DOE in the development of guidelines and in training DOE project managers in physical protection principles. Other support to DOE includes performing rapid assessments and providing guidance for establishing foreign regulatory and knowledge infrastructure. Sandia works directly with foreign governments to establish the cooperative agreements necessary to implement the RTR Program's efforts to protect radioactive sources. Once the necessary agreements are in place, Sandia works with in-country organizations to implement various security-related initiatives, such as installing security systems and searching for (and securing) orphaned radioactive sources. The radioactive materials of interest to the RTR Program include cobalt-60, cesium-137, strontium-90, iridium-192, radium-226, plutonium-238, americium-241, californium-252 and others. Security systems are implemented using a standardized approach that provides consistency throughout the RTR Program efforts at Sandia. The approach incorporates a series of major tasks that overlap in order to provide continuity. The major task sequence is: establish in-country contacts (integrators); obtain material characterizations; perform site assessments and vulnerability assessments; develop upgrade plans; procure and install equipment; conduct acceptance testing and performance testing; develop procedures; and conduct training. Other tasks are incorporated as appropriate, commonly including support for reconfiguring infrastructure and developing security

  9. Ibmdbpy-spatial: An Open-source implementation of in-database geospatial analytics in Python

    Science.gov (United States)

    Roy, Avipsa; Fouché, Edouard; Rodriguez Morales, Rafael; Moehler, Gregor

    2017-04-01

    As the amount of spatial data acquired from several geodetic sources has grown over the years and as data infrastructure has become more powerful, the need for the adoption of in-database analytic technology within the geosciences has grown rapidly. In-database analytics on spatial data stored in a traditional enterprise data warehouse enables much faster retrieval and analysis for making better predictions about risks and opportunities, identifying trends and spotting anomalies. Although a number of open-source spatial analysis libraries like geopandas and shapely are available today, most of them are restricted to the manipulation and analysis of geometric objects, with a dependency on GEOS and similar libraries. We present an open-source software package, written in Python, to fill the gap between spatial analysis and in-database analytics. Ibmdbpy-spatial provides a geospatial extension to the ibmdbpy package, implemented in 2015. It provides an interface for spatial data manipulation and access to in-database algorithms in IBM dashDB, a data warehouse platform with a spatial extender that runs as a service on IBM's cloud platform, Bluemix. Working in-database reduces the network overhead, as the complete data need not be replicated to the user's local system and only a subset of the entire dataset is fetched into memory in a single instance. Ibmdbpy-spatial accelerates Python analytics by seamlessly pushing operations written in Python into the underlying database for execution using the dashDB spatial extender, thereby benefiting from in-database performance-enhancing features such as columnar storage and parallel processing. The package is currently supported on Python versions from 2.7 up to 3.4. The basic architecture of the package consists of three main components - 1) a connection to dashDB represented by the instance IdaDataBase, which uses a middleware API, namely pypyodbc or jaydebeapi, to establish the database connection via

  10. Criticality calculations on pebble-bed HTR-PROTEUS configuration as a validation for the pseudo-scattering tracking method implemented in the MORET 5 Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Forestier, Benoit; Miss, Joachim; Bernard, Franck; Dorval, Aurelien [Institut de Radioprotection et Surete Nucleaire, Fontenay aux Roses (France); Jacquet, Olivier [Independent consultant (France); Verboomen, Bernard [Belgian Nuclear Research Center - SCK-CEN (Belgium)

    2008-07-01

    The MORET code is a three-dimensional Monte Carlo criticality code. It is designed to calculate the effective multiplication factor (k_eff) of any geometrical configuration as well as the reaction rates in the various volumes and the neutron leakage out of the system. A recent development for the MORET code is the implementation of an alternative neutron tracking method, known as the pseudo-scattering tracking method. This method has been successfully implemented in the MORET code and its performance has been tested by means of an extensive parametric study on very simple geometrical configurations. In this context, the goal of the present work is to validate the pseudo-scattering method against realistic configurations. In this perspective, pebble-bed cores are particularly well-adapted cases to model, as they exhibit a large number of volumes stochastically arranged on two different levels (the pebbles in the core and the TRISO particles inside each pebble). This paper will introduce the techniques and methods used to model pebble-bed cores in a realistic way. The results of the criticality calculations, as well as the pseudo-scattering tracking method's performance in terms of computation time, will also be presented. (authors)

  11. Documentation of Source Code.

    Science.gov (United States)

    1988-05-12

    the "load IC" menu option. A prompt will appear in the typescript window requesting the name of the knowledge base to be loaded. Enter...highlighted and then a prompt will appear in the typescript window. The prompt will be requesting the name of the file containing the message to be read in...the file name, the system will begin reading in the message. The listified message is echoed back in the typescript window. After that, the screen

  12. An implementation of a security infrastructure compliant with the Italian Personal Data Protection Code in a web-based cooperative work system.

    Science.gov (United States)

    Eccher, Claudio; Eccher, Lorenzo; Izzo, Umberto

    2005-01-01

    In this poster we describe the security solutions implemented in a web-based cooperative work framework for managing heart failure patients among the different health care professionals involved in the care process. The solution, developed in close collaboration with the Law Department of the University of Trento, is compliant with the new Italian Personal Data Protection Code, issued in 2003, which also regulates the storing and processing of health data.

  13. Open-source implementation of an algorithm for photopeaks search and analysis in gamma-ray spectrometry with semiconductor detectors

    International Nuclear Information System (INIS)

    Maduar, Marcelo F.; Pecequilo, Brigitte R.S.

    2009-01-01

    Radioactivity quantification of gamma-ray-emitting radionuclides in samples measured by HPGe gamma spectrometers relies on the analysis of the photopeaks present in the spectra, especially on the accurate determination of their net areas. This paper presents a methodology and an algorithm for peak search and analysis, to obtain the relevant peak parameters and their uncertainties. The procedure is performed in a three-step approach: a preliminary search is done using the second-difference method; experimental peak widths are assessed in order to obtain a width vs. channel relationship and to define regions with single or overlapping peaks; and a non-linear fit is applied to each region of the spectrum with candidate peaks. The final target function has the form G(x) = B(x) + F(x), where B(x) is the baseline, composed of a sum of weighted left-side B_L(x) and right-side B_R(x) quadratic baseline functions, and the photopeak term F(x) is a sum of Gaussian functions. The computational implementation is released entirely under an open-source license. The code was developed in the C++ language and the interface was developed with the Qt GUI software toolkit. The GNU Scientific Library (GSL) was employed to perform linear and non-linear fitting procedures as needed. Spectra previously generated at our laboratories were analyzed with the presented methodology and with the commercial software package WinnerGamma. The results obtained are consistent with those obtained with the aforementioned package, suggesting that the code can be safely used in general-purpose gamma-ray spectrometry. (author)
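
    A simplified sketch of the three-step scheme (hypothetical thresholds, and a single Gaussian over one quadratic baseline rather than the paper's weighted two-sided baseline):

        import numpy as np
        from scipy.optimize import curve_fit

        def candidate_peaks(counts, k=5.0):
            """Second-difference search: strong negative dips mark peaks."""
            d2 = counts[:-2] - 2.0 * counts[1:-1] + counts[2:]
            sigma = np.sqrt(np.maximum(counts[1:-1], 1.0))
            return np.where(d2 < -k * sigma)[0] + 1

        def fit_region(x, y, centroid_guess):
            """Fit G(x) = B(x) + F(x); return parameters and uncertainties."""
            def model(x, a0, a1, a2, area, mu, sd):
                b = a0 + a1 * x + a2 * x**2                   # baseline B(x)
                f = (area * np.exp(-0.5 * ((x - mu) / sd)**2)
                     / (sd * np.sqrt(2.0 * np.pi)))           # photopeak F(x)
                return b + f
            p0 = [float(y.min()), 0.0, 0.0,
                  float(y.sum() - y.size * y.min()), centroid_guess, 2.0]
            popt, pcov = curve_fit(model, x, y, p0=p0)
            return popt, np.sqrt(np.diag(pcov))   # popt[3] is the net area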

  14. Validation of the coupling of mesh models to GEANT4 Monte Carlo code for simulation of internal sources of photons

    International Nuclear Information System (INIS)

    Caribe, Paulo Rauli Rafeson Vasconcelos; Cassola, Vagner Ferreira; Kramer, Richard; Khoury, Helen Jamil

    2013-01-01

    The use of three-dimensional models described by polygonal meshes in numerical dosimetry enables more accurate modeling of complex objects than the use of simple solids. The objectives of this work were to validate the coupling of mesh models to the Monte Carlo code GEANT4 and to evaluate the influence of the number of vertices in the simulations used to obtain absorbed fractions of energy (AFEs). Validation of the coupling was performed for internal photon sources with energies between 10 keV and 1 MeV, for spherical geometries described by GEANT4 and for three-dimensional models with different numbers of vertices and triangular or quadrilateral faces modeled using the Blender program. As a result, it was found that there were no significant differences between the AFEs for objects described by mesh models and objects described using GEANT4 solid volumes. Provided the shape and volume are maintained, decreasing the number of vertices used to describe an object does not significantly influence the dosimetric data, but it significantly decreases the time required for the dosimetric calculations, especially for energies below 100 keV.

  15. Evaluation of the scale dependent dynamic SGS model in the open source code caffa3d.MBRi in wall-bounded flows

    Science.gov (United States)

    Draper, Martin; Usera, Gabriel

    2015-04-01

    The Scale Dependent Dynamic Model (SDDM) has been widely validated in large-eddy simulations using pseudo-spectral codes [1][2][3]. The scale dependency, particularly the power law, has also been proved in a priori studies [4][5]. To the authors' knowledge there have been only a few attempts to use the SDDM in finite difference (FD) and finite volume (FV) codes [6][7], finding some improvements with the dynamic procedures (scale-independent or scale-dependent approach), but not showing the behavior of the scale-dependence parameter when using the SDDM. The aim of the present paper is to evaluate the SDDM in the open source code caffa3d.MBRi, an updated version of the code presented in [8]. caffa3d.MBRi is a FV code, second-order accurate, parallelized with MPI, in which the domain is divided into unstructured blocks of structured grids. To accomplish this, two cases are considered: flow between flat plates and flow over a rough surface with the presence of a model wind turbine, taking for the latter case the experimental data presented in [9]. In both cases the standard Smagorinsky Model (SM), the Scale Independent Dynamic Model (SIDM) and the SDDM are tested. As presented in [6][7], slight improvements are obtained with the SDDM. Nevertheless, the behavior of the scale-dependence parameter supports the generalization of the dynamic procedure proposed in the SDDM, particularly taking into account that no explicit filter is used (the implicit filter is unknown). [1] F. Porté-Agel, C. Meneveau, M.B. Parlange. "A scale-dependent dynamic model for large-eddy simulation: application to a neutral atmospheric boundary layer". Journal of Fluid Mechanics, 2000, 415, 261-284. [2] E. Bou-Zeid, C. Meneveau, M. Parlange. "A scale-dependent Lagrangian dynamic model for large eddy simulation of complex turbulent flows". Physics of Fluids, 2005, 17, 025105 (18p). [3] R. Stoll, F. Porté-Agel. "Dynamic subgrid-scale models for momentum and scalar fluxes in large-eddy simulations of

  16. Design and implementation of safety traceability system for candied fruits based on two-dimension code technology

    Directory of Open Access Journals (Sweden)

    ZHAO Kun

    2014-12-01

    Full Text Available Traceability is a basic principle of food safety. A food safety traceability system based on QR code and cloud computing technology is introduced in this paper. First of all, we introduce QR code technology and the concept of traceability. Then, through a field investigation, we analyze the traceability process. We also present the design of the system and its database, and study the consumer-facing query technology. Finally, we describe the collection, transmission and final presentation of traceability information, and discuss the expected future development of the traceability system.
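
    As a minimal illustration of the labeling side (the paper gives no code; the URL scheme below is an assumption, and the widely used "qrcode" Python package stands in for whatever generator such a system uses), each batch identifier can be encoded so that a scan resolves to its trace record:

        import qrcode

        def make_trace_label(batch_id, base_url="https://trace.example.com/item/"):
            """Encode a traceability URL for one batch into a QR image."""
            img = qrcode.make(base_url + batch_id)   # record looked up at scan time
            img.save(f"label_{batch_id}.png")        # printable label
            return img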

  17. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
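
    A toy numerical illustration of the vector coding operation itself (not the authors' optimization algorithms): an intermediate node with two incoming length-L packets multiplies each by an L x L coding matrix over GF(2) and forwards the sum:

        import numpy as np

        L = 4
        rng = np.random.default_rng(0)
        A1 = rng.integers(0, 2, (L, L))   # coding matrices, entries in GF(2)
        A2 = rng.integers(0, 2, (L, L))
        x1 = rng.integers(0, 2, L)        # incoming packets (bit vectors)
        x2 = rng.integers(0, 2, L)
        y = (A1 @ x1 + A2 @ x2) % 2       # outgoing coded packet, over GF(2)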

  18. Implementation of an Imaging Spectrometer for Localization and Identification of Radioactive Sources

    International Nuclear Information System (INIS)

    Hermine, Lemaire; Carrel, Frederick; Gmar, Mehdi; Menesguen Yves; Normand, Stephane; Schoepff, Vincent; Abou-Khalil, Roger; Amgarou, Khalil; Menaa, Nabil; Tebug, Timi; Angelique, Jean-Claude; Bonnet, Florent; De-Toro, Daniel; Giarmana, Olivier; Patoz, Audrey; Talent, Philippe

    2013-06-01

    Spatial localization of radioactive sources is currently a main issue of interest for the nuclear industry as well as for homeland security applications, and can be achieved using gamma cameras. For several years, CEA LIST has been designing a new system, called GAMPIX, with improved sensitivity, portability and ease of use. The main remaining limitation is the lack of spectrometric information, preventing the identification of radioactive materials. This article describes the development of an imaging spectrometer based on the GAMPIX technology. Experimental tests have been carried out with both spectrometric methods enabled by the pixelated Timepix readout chip used in the GAMPIX gamma camera. The first method is based on the size of the impacts produced by a gamma-ray energy deposition in the detection matrix. The second uses the Time over Threshold (ToT) mode of the Timepix chip and deals with the time spent by the pulses generated by the charge preamplifiers above a user-specified threshold. Both energy resolution and sensitivity studies proved the superiority of the ToT approach, which will consequently be explored further. Energy calibration, tests of several pixel sizes and use of the Medipix3 readout chip are avenues for improving the performance of the newly implemented imaging spectrometer. (authors)

  19. Developing a Vacuum Electrospray Source To Implement Efficient Atmospheric Sampling for Miniature Ion Trap Mass Spectrometer.

    Science.gov (United States)

    Yu, Quan; Zhang, Qian; Lu, Xinqiong; Qian, Xiang; Ni, Kai; Wang, Xiaohao

    2017-12-05

    The performance of a miniature mass spectrometer in atmospheric analysis is closely related to the design of its sampling system. In this study, a simplified vacuum electrospray ionization (VESI) source was developed based on a combination of several techniques, including the discontinuous atmospheric pressure interface, direct capillary sampling, and pneumatic-assisted electrospray. Pulsed air was used as a vital factor to facilitate the operation of electrospray ionization in the vacuum chamber. This VESI device can be used as an efficient atmospheric sampling interface when coupled with a miniature rectilinear ion trap (RIT) mass spectrometer. The developed VESI-RIT instrument enables regular ESI analysis of liquids, and its qualitative and quantitative capabilities have been characterized using various solution samples. A limit of detection of 8 ppb was attained for arginine in a methanol solution. In addition, extractive electrospray ionization of organic compounds can be implemented with the same VESI device, as long as the gas analytes are injected with the pulsed auxiliary air. This methodology can extend the use of the proposed VESI technique to rapid, online analysis of gaseous and volatile samples.

  20. A Web GIS Framework for Participatory Sensing Service: An Open Source-Based Implementation

    Directory of Open Access Journals (Sweden)

    Yu Nakayama

    2017-04-01

    Full Text Available Participatory sensing is the process in which individuals or communities collect and analyze systematic data using mobile phones and cloud services. To efficiently develop participatory sensing services, some server-side technologies have been proposed. Although they provide a good platform for participatory sensing, they are not optimized for spatial data management and processing. For the purpose of spatial data collection and management, many web GIS approaches have been studied. However, they have not focused on an optimal framework for participatory sensing services. This paper presents a web GIS framework for participatory sensing service (FPSS). The proposed FPSS enables an integrated deployment of spatial data capture, storage, and data management functions. In various types of participatory sensing experiments, users can collect and manage spatial data in a unified manner. This feature is realized by an optimized system architecture and use case based on the general requirements for participatory sensing. We developed an open source GIS-based implementation of the proposed framework, which can overcome the financial difficulties that are among the major problems of deploying sensing experiments. We confirmed with the prototype that participatory sensing experiments can be performed efficiently with the proposed FPSS.

  1. Hydrolysis of ammonia borane as a hydrogen source: fundamental issues and potential solutions towards implementation.

    Science.gov (United States)

    Sanyal, Udishnu; Demirci, Umit B; Jagirdar, Balaji R; Miele, Philippe

    2011-12-16

    In today's era of energy crisis and global warming, hydrogen has been projected as a sustainable alternative to depleting, CO(2)-emitting fossil fuels. However, its deployment as an energy source is impeded by many issues, one of the most important being storage. Chemical hydrogen storage materials, in particular B-N compounds such as ammonia borane, with a potential storage capacity of 19.6 wt % H(2) and 0.145 kg(H2) L(-1), have been intensively studied from the standpoint of addressing the storage issues. Ammonia borane undergoes dehydrogenation through hydrolysis at room temperature in the presence of a catalyst, but its practical implementation is hindered by several problems affecting all of the chemical species in the reaction scheme, including ammonia borane, water, borate byproducts, and hydrogen. In this Minireview, we exhaustively survey the state of the art, discuss the fundamental problems and, where applicable, propose solutions with a view to technological applications. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
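
    For reference, the room-temperature catalytic hydrolysis referred to above is commonly idealized as releasing three equivalents of hydrogen per ammonia borane:

        NH3BH3 + 2 H2O -> NH4(+) + BO2(-) + 3 H2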

  2. OpenSWPC: an open-source integrated parallel simulation code for modeling seismic wave propagation in 3D heterogeneous viscoelastic media

    Science.gov (United States)

    Maeda, Takuto; Takemura, Shunsuke; Furumura, Takashi

    2017-07-01

    We have developed an open-source software package, Open-source Seismic Wave Propagation Code (OpenSWPC), for parallel numerical simulations of seismic wave propagation in 3D and 2D (P-SV and SH) viscoelastic media, based on the finite difference method at local to regional scales. The code is equipped with a frequency-independent attenuation model based on the generalized Zener body and an efficient perfectly matched layer absorbing boundary condition. A hybrid-style parallelization using OpenMP and the Message Passing Interface (MPI) is adopted for efficient parallel computation. OpenSWPC has wide applicability for seismological studies and great portability, allowing excellent performance from PC clusters to supercomputers. Without modifying the code, users can conduct seismic wave propagation simulations using their own velocity structure models and the necessary source representations by specifying them in an input parameter file. The code has various modes for different types of velocity structure model input and different source representations, such as single force, moment tensor and plane-wave incidence, which can easily be selected via the input parameters. Widely used binary data formats, the Network Common Data Form (NetCDF) and the Seismic Analysis Code (SAC), are adopted for the input of the heterogeneous structure model and the outputs of the simulation results, so users can easily handle the input/output datasets. All codes are written in Fortran 2003 and are available with detailed documentation in a public repository.
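
    The finite-difference core is conceptually simple; a minimal 1-D SH-wave sketch (homogeneous medium with rigid ends; OpenSWPC adds viscoelasticity, PML absorbing boundaries and MPI parallelism on top of this basic update) illustrates the scheme:

        import numpy as np

        nx, nt, c, dx = 400, 1000, 3000.0, 10.0      # grid, steps, speed (m/s)
        dt = 0.5 * dx / c                            # satisfies CFL stability
        r2 = (c * dt / dx) ** 2
        u_prev, u = np.zeros(nx), np.zeros(nx)

        for it in range(nt):
            lap = np.roll(u, -1) - 2.0 * u + np.roll(u, 1)
            u_next = 2.0 * u - u_prev + r2 * lap     # leapfrog time update
            u_next[0] = u_next[-1] = 0.0             # rigid boundaries here
            if it == 0:
                u_next[nx // 2] += 1.0               # impulsive point source
            u_prev, u = u, u_next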

  3. 77 FR 35273 - Approval and Promulgation of Implementation Plans; New Mexico; Minor New Source Review (NSR...

    Science.gov (United States)

    2012-06-13

    ... industrial classification code 0724 (cotton ginning) and the North American industrial standard... practices in the rule. All burr hoppers must be completely enclosed. There can be no visible fugitive...

  4. 75 FR 8496 - Approval and Promulgation of Implementation Plans; Ohio New Source Review Rules

    Science.gov (United States)

    2010-02-25

    ... Skinner. This letter, included as Additional material in paragraph (145)(ii)(B) below, removes references... Regional Administrator Thomas Skinner, titled Request for Approval of Ohio Administrative Code (``OAC...

  5. Implementation and verification of nuclear interactions in a Monte-Carlo code for the Procom-ProGam proton therapy planning system

    International Nuclear Information System (INIS)

    Kostyuchenko, V.I.; Makarova, A.S.; Ryazantsev, O.B.; Samarin, S.I.; Uglov, A.S.

    2013-01-01

    Proton interaction with the material of an exposed object needs to be modeled with account taken of three basic processes: electromagnetic stopping of protons in matter, multiple Coulomb scattering and nuclear interactions. The last type of process is the topic of this paper. Monte Carlo codes are often used to simulate high-energy particle interactions with matter. However, the nuclear interaction models implemented in these codes are rather extensive and their use in treatment planning systems requires huge computational resources. We selected the IThMC code for its ability to reproduce experiments that measure the distribution of the projected ranges of nuclear secondary particles generated by proton beams in a multi-layer Faraday cup. Multi-layer Faraday cup detectors measure charge rather than dose and allow distinguishing between electromagnetic and nuclear interactions. The event generator used in the IThMC code is faster, but less accurate, than any other used in testing. Our model of nuclear reactions demonstrates quite good agreement with experiment in terms of their effect on the Bragg peak in therapeutic applications.

  6. Validation of the MCNP-DSP Monte Carlo code for calculating source-driven noise parameters of subcritical systems

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1995-01-01

    This paper describes calculations performed to validate the modified version of the MCNP code, MCNP-DSP, covering: the neutron and photon spectra of the spontaneous fission of californium-252; the representation of the detection processes for scattering detectors; the timing of the detection process; and the calculation of the frequency analysis parameters of the MCNP-DSP code.

  7. Configurations and implementation of payroll system using open source erp: a case study of Koperasi PT Sri

    Science.gov (United States)

    Terminanto, A.; Swantoro, H. A.; Hidayanto, A. N.

    2017-12-01

    Enterprise Resource Planning (ERP) is an integrated information system for managing the business processes of companies of various scales. Because of the high cost of ERP investment, ERP implementation is usually undertaken by large-scale enterprises. Due to the complexity of implementation problems, the success rate of ERP implementation is still low. Open source ERP systems have become an alternative for SME companies in terms of cost and customization. This study aims to identify the characteristics of, and configure, the implementation of an OSS ERP payroll module at KKPS (Employee Cooperative of PT SRI) using OSS ERP Odoo and the ASAP method. This study is classified as case study research and action research. Implementation of the OSS ERP payroll module was undertaken because the HR section of KKPS had not been integrated with the other parts. The results of this study are the characteristics and configuration of the OSS ERP payroll module at KKPS.

  8. A study on Prediction of Radioactive Source-term from the Decommissioning of Domestic NPPs by using CRUDTRAN Code

    Energy Technology Data Exchange (ETDEWEB)

    Song, Jong Soon; Lee, Sang Heon; Cho, Hoon Jo [Department of Nuclear Engineering Chosun University, Gwangju (Korea, Republic of)

    2016-10-15

    For this study, the behavior mechanism of corrosion products in the primary system of Kori unit 1 was analyzed, and the inventory of activated corrosion products in the primary system was assessed based on domestic plant data, using the CRUDTRAN code for the prediction. The results of the prediction of the radioactive nuclide inventory in the primary system are expected to serve as baseline data for estimating the volume of radioactive wastes when decommissioning a nuclear power plant, an important criterion in classifying the level of radioactive wastes and computing their quantity. The data are also expected to be useful in reducing the radiation exposure of workers performing maintenance and repairs in high radiation areas and in selecting decontamination and decommissioning processes for the primary system. In future research, it is planned to conduct source term assessments for other NPP types, such as CANDU and OPR-1000, in addition to Westinghouse-type nuclear plants.

  9. Implementation of the thermal-hydraulic transient analysis code RELAP4/MOD5 and MOD6 on the FACOM 230/75 computer system

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Ishigai, Takahiro; Kumakura, Toshimasa; Naraoka, Ken-itsu

    1979-03-01

    Development efforts have continued on the extensively used LOCA analysis code RELAP4 throughout its history, from the prototype version MOD2 to the latest version MOD6, which is capable of one-through calculations from the blowdown to the reflood phase of a PWR LOCA. Many improvements and refinements of the models have enlarged the scope and extent of the phenomena treated. Correspondingly, the size of the program has increased from version to version, and special programming techniques have continuously been introduced to keep the program within the limited capacity of core memory. For example, the Dynamic Storage Allocation of MOD5 and the PRELOAD preprocessor newly incorporated in MOD6 were designed for the CDC computer with its relatively small core size. These programming techniques are described in detail, together with experiences in implementing the codes on the FACOM 230/75 and some results of confirmatory calculations. (author)

  10. Implementation and verification of nuclear interactions in a Monte-Carlo code for the Procom-ProGam proton therapy planning system

    Science.gov (United States)

    Kostyuchenko, V. I.; Makarova, A. S.; Ryazantsev, O. B.; Samarin, S. I.; Uglov, A. S.

    2014-06-01

    A great breakthrough in proton therapy has happened in the new century: several tens of dedicated centers are now operated throughout the world, and their number increases every year. An important component of proton therapy is the treatment planning system. To make calculations faster, these systems usually use analytical methods whose reliability and accuracy do not allow the advantages of this treatment modality to be exploited to the full extent. Predictions by the Monte Carlo (MC) method are the "gold" standard for verifying calculations made with these systems. At the Institute for Theoretical and Experimental Physics (ITEP), one of the oldest proton therapy centers in the world, an MC code is an integral part of the treatment planning system. This code, called IThMC, was developed by scientists from RFNC-VNIITF (Snezhinsk) under ISTC Project 3563.

  11. Implementation of an extended model of the safety and relief valves in the integral plant model for the RELAP/SCDAPSIM code

    International Nuclear Information System (INIS)

    Amador G, R.; Ortiz V, J.; Castillo D, R.; Hernandez L, E. J.; Galeana R, J. C.; Gutierrez, V. H.

    2013-10-01

    The present work describes the implementation of a new model of the safety and relief valve logic in the integral model of the Laguna Verde Nuclear Power Plant for the thermal-hydraulic computer code RELAP/SCDAPSIM Mod 3.4. The new model was developed with the SIMULINK-MATLAB package and covers all operating options of the safety and relief valves, including the availability options of the valves in all operating modes and blockage in the relief and low-low modes. The implementation involved eliminating the old safety valve model and analyzing the set of logical, discharge, and available control-system variables in order to associate them with the SIMULINK-MATLAB model. The implementation has been practically transparent, and 27 cases corresponding to a turbine discharge were analyzed with RELAP/SCDAPSIM Mod 3.4, with satisfactory results. (Author)
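
    As an illustration of the kind of open/close logic such a valve model encodes, the sketch below implements a generic safety/relief valve with an opening setpoint, a lower reclosure setpoint, and availability and blockage flags; the setpoint values and attribute names are hypothetical and are not taken from the Laguna Verde model.

      # Illustrative safety/relief valve logic: opens on high pressure, recloses
      # at a lower pressure (hysteresis), honors availability/blockage options.

      from dataclasses import dataclass

      @dataclass
      class SafetyReliefValve:
          open_setpoint: float      # pressure at which the valve opens
          close_setpoint: float     # lower pressure at which it recloses
          available: bool = True    # availability option from the input deck
          blocked: bool = False     # blockage option (e.g. relief mode blocked)
          is_open: bool = False

          def update(self, pressure):
              """Advance the valve state for the current system pressure."""
              if not self.available or self.blocked:
                  self.is_open = False
              elif pressure >= self.open_setpoint:
                  self.is_open = True
              elif pressure <= self.close_setpoint:
                  self.is_open = False
              return self.is_open

      # A small bank of valves with staggered setpoints, polled each time step:
      bank = [SafetyReliefValve(7.72, 7.30), SafetyReliefValve(7.79, 7.37)]
      for p in (7.5, 7.8, 7.4, 7.2):
          print(f"p = {p} MPa -> open: {[v.update(p) for v in bank]}")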

  12. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.

  13. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
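
    The per-decay summation that an S-factor code of this type evaluates can be sketched in a few lines: each radiation component contributes its mean energy per decay times an absorbed fraction and a quality factor, and the sum is scaled by a unit-conversion constant and divided by the target-organ mass. The constant, component data, and function names below are placeholders, not values from SFACTOR.

      # Sketch of an MIRD-style S-factor sum over the decay components
      # (alpha, electron, gamma, ...) of a radionuclide.

      def s_factor(components, target_mass_g, k=2.13e-3):
          """Dose-equivalent factor per unit cumulated activity in a source organ.

          components: (mean energy per decay [MeV], absorbed fraction in the
                       target, quality factor) for each radiation type.
          k: unit-conversion constant (placeholder value).
          """
          total = 0.0
          for energy, absorbed_fraction, quality in components:
              total += energy * absorbed_fraction * quality
          return k * total / target_mass_g

      # Hypothetical nuclide with one gamma and one electron component:
      decay_components = [
          (0.364, 0.30, 1.0),   # gamma: most energy escapes the target
          (0.192, 1.00, 1.0),   # electron: absorbed locally
      ]
      print(s_factor(decay_components, target_mass_g=20.0))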

  14. The IAEA code of conduct on the safety of radiation sources and the security of radioactive materials. A step forwards or backwards?

    International Nuclear Information System (INIS)

    Boustany, K.

    2001-01-01

    During the finalization of the Code of Conduct on the Safety and Security of Radioactive Sources, two distinct but interrelated subject areas were identified: the prevention of accidents involving radiation sources, and the prevention of theft or any other unauthorized use of radioactive materials. What analysis reveals, rather, is that there are gaps in both the content of the Code and the processes relating to it. Nevertheless, new standards have been introduced as a result of this exercise and have thus, as an enactment of what constitutes appropriate behaviour in the field of the safety and security of radioactive sources, entered the arena of international relations. (N.C.)

  15. Implementation of the flow-modulated skew-upwind difference scheme in the COMMIX-1C code: A first assessment

    International Nuclear Information System (INIS)

    Bottoni, M.; Chien, T.H.; Domanus, H.M.; Sha, W.T.; Shen, Y.; Laster, R.

    1991-01-01

    This paper explains in detail the implementation of the Flow-Modulated Skew-Upwind Difference (FMSUD) scheme in the momentum equation of the COMMIX-1C computer program, where the scheme had so far been used only in the energy equation. Because the three scalar components of the momentum equation are solved on different meshes, staggered with respect to the mesh used for the energy equation and displaced in the respective coordinate directions, implementation of the FMSUD scheme in the momentum equations is far more demanding than its implementation for a single scalar equation in centered cells. For this reason, a new approach has been devised to treat the problem mathematically, in full generality and for all flow conditions, automatically taking into account the direction of the velocity vector and thus automatically choosing the weighting factors associated with the different cells in the skew-upwind discretization. The new approach is straightforward for the inner cells of the fluid-dynamic definition domain, but particular care must be taken for boundary cells, where the appropriate boundary conditions must be applied. The paper explains the test cases to which the FMSUD method has been applied and discusses the quality of the numerical results against the correct solution, when the latter is known. 14 refs., 2 figs., 1 tab
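
    The essence of a skew-upwind weighting of this kind can be sketched for a single scalar on a uniform two-dimensional grid: the value convected through a cell face is taken from the normal upstream cell, blended toward the diagonal (skew) upstream neighbour in proportion to the transverse velocity component. This is only a schematic illustration, not the COMMIX-1C discretization.

      # Schematic flow-modulated skew-upwind face value on a uniform 2-D grid.

      def fmsud_face_value(phi, i, j, u, v):
          """phi: 2-D list; (i, j): cell left of an x-face; u, v: face velocity."""
          di = 0 if u >= 0.0 else 1      # normal upwind cell: left or right of face
          dj = -1 if v >= 0.0 else 1     # transverse (skew) upwind direction
          upwind = phi[i + di][j]
          skew = phi[i + di][j + dj]
          # Flow modulation: the skew-cell weight grows with |v| / (|u| + |v|).
          w = abs(v) / (abs(u) + abs(v) + 1e-30)
          return (1.0 - w) * upwind + w * skew

      # Cross-flow from below pulls the face value toward the diagonal neighbour:
      phi = [[1.0, 2.0, 3.0],
             [4.0, 5.0, 6.0],
             [7.0, 8.0, 9.0]]
      print(fmsud_face_value(phi, 1, 1, u=1.0, v=0.5))   # (2/3)*5 + (1/3)*4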

  16. HELIOS–RETRIEVAL: An Open-source, Nested Sampling Atmospheric Retrieval Code; Application to the HR 8799 Exoplanets and Inferred Constraints for Planet Formation

    Energy Technology Data Exchange (ETDEWEB)

    Lavie, Baptiste; Mendonça, João M.; Malik, Matej; Demory, Brice-Olivier; Grimm, Simon L. [University of Bern, Space Research and Planetary Sciences, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Mordasini, Christoph; Oreshenko, Maria; Heng, Kevin [University of Bern, Center for Space and Habitability, Sidlerstrasse 5, CH-3012, Bern (Switzerland); Bonnefoy, Mickaël [Université Grenoble Alpes, IPAG, F-38000, Grenoble (France); Ehrenreich, David, E-mail: baptiste.lavie@space.unibe.ch, E-mail: kevin.heng@csh.unibe.ch [Observatoire de l’Université de Genève, 51 chemin des Maillettes, 1290, Sauverny (Switzerland)

    2017-09-01

    We present an open-source retrieval code named HELIOS–RETRIEVAL, designed to obtain chemical abundances and temperature–pressure profiles by inverting the measured spectra of exoplanetary atmospheres. In our forward model, we use an exact solution of the radiative transfer equation, in the pure absorption limit, which allows us to analytically integrate over all of the outgoing rays. Two chemistry models are considered: unconstrained chemistry and equilibrium chemistry (enforced via analytical formulae). The nested sampling algorithm allows us to formally implement Occam’s Razor based on a comparison of the Bayesian evidence between models. We perform a retrieval analysis on the measured spectra of the four HR 8799 directly imaged exoplanets. Chemical equilibrium is disfavored for HR 8799b and c. We find supersolar C/H and O/H values for the outer HR 8799b and c exoplanets, while the inner HR 8799d and e exoplanets have a range of C/H and O/H values. The C/O values range from being superstellar for HR 8799b to being consistent with stellar for HR 8799c and being substellar for HR 8799d and e. If these retrieved properties are representative of the bulk compositions of the exoplanets, then they are inconsistent with formation via gravitational instability (without late-time accretion) and consistent with a core accretion scenario in which late-time accretion of ices occurred differently for the inner and outer exoplanets. For HR 8799e, we find that spectroscopy in the K band is crucial for constraining C/O and C/H. HELIOS–RETRIEVAL is publicly available as part of the Exoclimes Simulation Platform (http://www.exoclime.org).
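
    The nested-sampling machinery referred to above can be summarized in a toy form: live points drawn from the prior are repeatedly replaced by new draws constrained to higher likelihood, while the evidence is accumulated from the shrinking prior volume. The sketch below estimates the evidence for a one-dimensional Gaussian likelihood under a uniform prior; it is a pedagogical illustration, not the sampler used by HELIOS-RETRIEVAL.

      # Toy nested-sampling loop: estimates the Bayesian evidence Z for a 1-D
      # Gaussian likelihood with a uniform prior on [0, 1].

      import math
      import random

      def log_likelihood(theta):
          return -0.5 * ((theta - 0.5) / 0.1) ** 2   # toy Gaussian likelihood

      def nested_sampling(n_live=100, n_iter=600, seed=0):
          rng = random.Random(seed)
          live = [rng.random() for _ in range(n_live)]    # draws from the prior
          log_l = [log_likelihood(t) for t in live]
          z, x_prev = 0.0, 1.0                            # evidence, prior volume
          for i in range(1, n_iter + 1):
              worst = min(range(n_live), key=lambda k: log_l[k])
              x_i = math.exp(-i / n_live)                 # shrunken prior volume
              z += math.exp(log_l[worst]) * (x_prev - x_i)
              x_prev = x_i
              # Replace the worst point by a prior draw with higher likelihood
              # (brute-force rejection; real samplers are far more efficient).
              threshold = log_l[worst]
              theta = rng.random()
              while log_likelihood(theta) <= threshold:
                  theta = rng.random()
              live[worst], log_l[worst] = theta, log_likelihood(theta)
          z += x_prev * sum(math.exp(l) for l in log_l) / n_live  # remainder
          return z

      print(nested_sampling())   # analytic value: 0.1 * sqrt(2*pi) ~ 0.2507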

  17. Performance Demonstration Initiative: U.S. implementation of ASME B and PV Code Section XI, Appendix VIII

    International Nuclear Information System (INIS)

    Becker, F.L.; Ammirato, F.; Huffman, K.

    1994-01-01

    New requirements have now been added to Section XI as the mandatory Appendix VIII, "Performance Demonstration Requirements for Ultrasonic Examination Systems". The appendix was recently published and incorporates performance demonstration requirements for ultrasonic examination equipment, procedures, and personnel. These new requirements will have a far-reaching and significant impact on the conduct of ISI at all nuclear power plants. For the first time since Section XI was issued in 1970, the effectiveness of ultrasonic examination procedures and the proficiency of examiners must be demonstrated on reactor pressure vessel (RPV), piping, and bolting mockups containing real flaws. Recognizing the importance and complexity of Appendix VIII implementation, representatives from all US nuclear utilities have formed the Performance Demonstration Initiative (PDI) to provide for its uniform implementation.

  18. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. The temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.
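
    As a generic illustration of an explicit finite-difference update on a two-dimensional Eulerian grid, the sketch below advances a scalar field one time step with first-order upwind differences; it advects a passive scalar and is not ANIMAL's MHD system or its actual scheme.

      # Explicit upwind step for d(phi)/dt + u d(phi)/dx + v d(phi)/dy = 0
      # on a uniform 2-D Eulerian grid (boundary cells held fixed).

      def advect_step(phi, u, v, dx, dy, dt):
          ni, nj = len(phi), len(phi[0])
          new = [row[:] for row in phi]
          for i in range(1, ni - 1):
              for j in range(1, nj - 1):
                  dphi_dx = ((phi[i][j] - phi[i - 1][j]) if u > 0
                             else (phi[i + 1][j] - phi[i][j])) / dx
                  dphi_dy = ((phi[i][j] - phi[i][j - 1]) if v > 0
                             else (phi[i][j + 1] - phi[i][j])) / dy
                  new[i][j] = phi[i][j] - dt * (u * dphi_dx + v * dphi_dy)
          return new

      # A square pulse advected one step at a CFL number of 0.25 per direction:
      phi = [[1.0 if 2 <= i <= 4 and 2 <= j <= 4 else 0.0 for j in range(8)]
             for i in range(8)]
      print(advect_step(phi, u=1.0, v=1.0, dx=1.0, dy=1.0, dt=0.25)[3][2])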

  19. California State Implementation Plan; Butte County Air Quality Management District; New Source Review (NSR) Permitting Program

    Science.gov (United States)

    EPA is proposing to approve a revision to the Butte County Air Quality Management District (BCAQMD) portion of the California SIP concerning the District's New Source Review (NSR) permitting program for new and modified sources of air pollution.

  20. Implementing Open Source Platform for Education Quality Enhancement in Primary Education: Indonesia Experience

    Science.gov (United States)

    Kisworo, Marsudi Wahyu

    2016-01-01

    Information and Communication Technology (ICT)-supported learning using free and open source platforms has drawn little attention, as open source initiatives have focused on secondary or tertiary education. This study investigates the possibilities of ICT-supported learning using an open source platform for primary education. The data of this study is taken…