WorldWideScience

Sample records for system optimization codes

  1. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this package provides...
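
    The record above concerns predicting a level-crossing event over a fixed prediction horizon. As a rough illustration of the idea only (this is not the ARC code itself; the AR(1) model, threshold, and all parameters are assumptions), a Monte Carlo sketch in Python:

```python
import random

def crossing_probability(x0, threshold, horizon, phi=0.9, sigma=0.5,
                         trials=5000, seed=1):
    """Monte Carlo estimate of P(process crosses the threshold within the
    horizon), for a simple AR(1) model x[t+1] = phi*x[t] + noise.
    Illustrative only; real alarm designs use the process's true dynamics."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = x0
        for _ in range(horizon):
            x = phi * x + rng.gauss(0.0, sigma)
            if x >= threshold:
                hits += 1
                break
    return hits / trials

# An alarm fires when the predicted crossing probability exceeds a design level:
p = crossing_probability(x0=0.0, threshold=2.0, horizon=20)
alarm = p >= 0.1
```

An optimal alarm design would choose the firing level to trade off missed detections against false alarms over the horizon.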

  2. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty-two programs for linear and nonlinear optimization problems with or without constraints have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system SCOOP-I has been completed. SCOOP-I is designed to be an efficient, reliable, useful and also flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate optimization method built into it. (author)

  3. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    Science.gov (United States)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties, between intended and interfering subscribers, significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis by referring to bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
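
    As a hedged illustration of the BER/SNR relation such SAC-OCDMA analyses commonly use (the exact expression in the paper above may differ; BER = 0.5*erfc(sqrt(SNR/8)) is a standard form in this literature, and the SNR value below is an assumption):

```python
import math

def ber_from_snr(snr):
    """BER estimate commonly used in SAC-OCDMA performance analyses:
    BER = 0.5 * erfc(sqrt(SNR/8)). Illustrative; not the paper's derivation."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# A design point meets the 1e-9 BER target when the SNR is high enough:
meets_target = ber_from_snr(300.0) <= 1e-9
```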

  4. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  5. Power Optimization of Wireless Media Systems With Space-Time Block Codes

    OpenAIRE

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-01-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission over multiple transmit antennas. In our study, we consider Gauss-Markov and...

  6. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    International Nuclear Information System (INIS)

    Watkins, E.F.; Greenspan, E.

    1995-01-01

    SWAN is a useful code for optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost for an ICF target chamber shield, and the background radiation for explosive detection systems and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module could handle only a single constraint and was inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability

  7. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  8. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes whose component codes are cyclic codes, viewed as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  9. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by moving each component of the detachable phase mask asymmetrically. An improved Fisher-information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  10. The SWAN-SCALE code for the optimization of critical systems

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-01-01

    The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition

  11. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of a network coding approach focused on graph-theoretic and algebraic methods, a linear coding program for solving both unicast and groupcast index coding problems is presented. The coding is proved to be optimal from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  12. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    Full Text Available With the development of wireless technologies, mobile communication is applied ever more widely across many walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are in use, so different users in this networked agent system may need multimedia data at different coding rates to match their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system where various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data of different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, the algorithm provides user nodes with data at differentiated coding rates and corresponding flow rates. On the other hand, it networks the data and user nodes across coding rates, which realizes the sharing of upload bandwidth among user nodes that require data at different coding rates. The study builds a mathematical model of the proposed algorithm and compares a system that adopts it with an existing system through simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization at the user nodes and lower upload bandwidth consumption at the source node.

  13. Application of Flow and Transport Optimization Codes to Groundwater Pump and Treat Systems- VOLUME 2

    National Research Council Canada - National Science Library

    Minsker, Barbara

    2004-01-01

    .... Recent studies completed by the EPA and the Navy indicate that the majority of pump and treat systems are not operating as designed, have unachievable or undefined goals, and have not been optimized since installation...

  14. Low-complexity BCH codes with optimized interleavers for DQPSK systems with laser phase noise

    DEFF Research Database (Denmark)

    Leong, Miu Yoong; Larsen, Knud J.; Jacobsen, Gunnar

    2017-01-01

    The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose...... simulations. For a target post-FEC BER of 10−6, codes selected using our method result in BERs around 3× target and achieve the target with around 0.2 dB extra signal-to-noise ratio....
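
    The scheme above pairs BCH codes with block interleavers so that clustered errors caused by laser phase noise are spread across many codewords. A minimal sketch of such a block interleaver (illustrative only; the dimensions and the row/column convention are assumptions, not the paper's optimized parameters):

```python
def interleave(bits, rows, cols):
    """Block interleaver: write row-by-row into a rows x cols array,
    read out column-by-column. A burst in the channel then touches at
    most one symbol per row (i.e., per codeword)."""
    assert len(bits) == rows * cols
    return [bits[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(bits, rows, cols):
    """Inverse operation: write column-by-column, read row-by-row."""
    assert len(bits) == rows * cols
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = bits[i]
            i += 1
    return out
```

With each row holding one BCH codeword, a burst of length up to `rows` lands on distinct codewords after deinterleaving, which is what lets a short BCH code handle phase-noise error bursts.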

  15. Optimized reversible binary-coded decimal adders

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert

    2008-01-01

    Abstract Babu and Chowdhury [H.M.H. Babu, A.R. Chowdhury, Design of a compact reversible binary coded decimal adder circuit, Journal of Systems Architecture 52 (5) (2006) 272-282] recently proposed, in this journal, a reversible adder for binary-coded decimals. This paper corrects and optimizes their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and fewer garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit... Keywords: Reversible logic circuit; Full-adder; Half-adder; Parallel adder; Binary-coded decimal; Application of reversible logic synthesis
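
    For reference, the arithmetic that a 1-decimal BCD full-adder implements can be sketched in plain, non-reversible Python (the reversible 13 × 13 circuit of the paper is not reproduced here; this only shows the add-6 decimal correction the circuit realizes):

```python
def bcd_full_adder(a, b, carry_in=0):
    """One-decimal BCD full adder: binary add, then add 6 when the raw sum
    exceeds 9, producing a decimal digit (4 bits) and a carry-out."""
    assert 0 <= a <= 9 and 0 <= b <= 9 and carry_in in (0, 1)
    s = a + b + carry_in
    if s > 9:
        return (s + 6) & 0xF, 1   # +6 correction wraps into the next decade
    return s, 0

# Chaining m such adders, carry to carry, gives an m-decimal BCD adder:
digit, carry = bcd_full_adder(7, 5)   # 7 + 5 = 12 -> digit 2, carry 1
```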

  16. System Based Code: Principal Concept

    International Nuclear Information System (INIS)

    Yasuhide Asada; Masanori Tashimo; Masahiro Ueta

    2002-01-01

    This paper introduces the concept of the 'System Based Code', initially proposed by the authors with the intention of giving the nuclear industry a leap forward in system reliability, performance improvement, and cost reduction. The concept of the System Based Code is to provide a theoretical procedure for optimizing the reliability of a system by administering every related engineering requirement throughout the life of the system, from design to decommissioning. (authors)

  17. Progress on DART code optimization

    International Nuclear Information System (INIS)

    Taboada, Horacio; Solis, Diego; Rest, Jeffrey

    1999-01-01

    This work describes progress on the design and development of a new optimized version of the DART code (DART-P), a mechanistic computer model for the performance calculation and assessment of aluminum dispersion fuel. It is part of a collaboration agreement between CNEA and ANL in the area of Low Enriched Uranium Advanced Fuels, held under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy, signed on October 16, 1997 between the US DOE and the National Atomic Energy Commission of the Argentine Republic. DART optimization is a biannual program, operative since February 8, 1999, with the following goals: 1. Design and develop a new DART calculation kernel for implementation within a parallel processing architecture. 2. Design and develop new user-friendly I/O routines to be resident on a Personal Computer (PC)/Workstation (WS) platform. 2.1. The new input interface will be designed and developed as a visual interface, able to guide the user in constructing the problem to be analyzed with the aid of a new database (described in item 3, below). The new input interface will include input data checks in order to avoid corrupted input data. 2.2. The new output interface will be designed and developed with graphical tools able to translate numeric output into 'on line' graphic information. 3. Design and develop a new irradiated-materials database, to be resident on the PC/WS platform, so as to facilitate the analysis of the behavior of different fuel and meat compositions with DART-P. Currently, a different version of DART is used for oxide, silicide, and advanced alloy fuels. 4. Develop rigorous general inspection algorithms in order to provide valuable DART-P benchmarks. 5. Design and develop new models, such as superplasticity, elastoplastic feedback, and improved models for the calculation of fuel deformation and the evolution of the fuel microstructure for

  18. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimization techniques were applied to a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimization techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. Statistical analysis of the total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) for MALADOP during one time step

  19. Performance of an improved logarithmic phase mask with optimized parameters in a wavefront-coding system.

    Science.gov (United States)

    Zhao, Hui; Li, Yingcai

    2010-01-10

    In two papers [Proc. SPIE 4471, 272-280 (2001) and Appl. Opt. 43, 2709-2721 (2004)], a logarithmic phase mask was proposed and proved to be effective in extending the depth of field; however, according to our research, this mask is not ideal because the corresponding defocused modulation transfer function has large oscillations in the low-frequency region, even when the mask is optimized. So, in a previously published paper [Opt. Lett. 33, 1171-1173 (2008)], we proposed an improved logarithmic phase mask by making a small modification. The new mask can not only eliminate the drawbacks to a certain extent but is also even less sensitive to focus errors according to Fisher information criteria. However, the performance comparison was carried out without the modified mask being optimized, which was not reasonable. In this manuscript, we first optimize the modified logarithmic phase mask before analyzing its performance, and more convincing results are obtained based on the analysis of several frequently used metrics.

  20. Optimal coding-decoding for systems controlled via a communication channel

    Science.gov (United States)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Different from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  1. Optimized logarithmic phase masks used to generate defocus invariant modulation transfer function for wavefront coding system.

    Science.gov (United States)

    Zhao, Hui; Li, Yingcai

    2010-08-01

    In a previous Letter [Opt. Lett. 33, 1171 (2008)], we proposed an improved logarithmic phase mask by making modifications to the original one designed by Sherif. However, further studies in another paper [Appl. Opt. 49, 229 (2010)] show that even when the Sherif mask and the improved one are optimized, their corresponding defocused modulation transfer functions (MTFs) are still not stable with respect to focus errors. So, by further modifying their phase profiles, we design another two logarithmic phase masks that exhibit more stable defocused MTF. However, with the defocus-induced phase effect considered, we find that the performance of the two masks proposed in this Letter is better than the Sherif mask, but worse than our previously proposed phase mask, according to the Hilbert space angle.

  2. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  3. Two-Layer Coding Rate Optimization in Relay-Aided Systems

    DEFF Research Database (Denmark)

    Sun, Fan

    2011-01-01

    We consider a three-node transmission system, where a source node conveys a data block to a destination node with the help of a half-duplex decode and-forward (DF) relay node. The whole data block is transmitted as a sequence of packets. For reliable transmission in the three-node system, a two...

  4. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  5. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  7. VVER-440 loading patterns optimization using ATHENA code

    International Nuclear Information System (INIS)

    Katovsky, K.; Sustek, J.; Bajgl, J.; Cada, R.

    2009-01-01

    In this paper the Czech optimization state-of-the-art, the goals of new code system development, and the OPAL optimization system are briefly mentioned. The algorithms, mathematics, present status and future developments of the ATHENA code are described. A calculation exercise using ATHENA for the Dukovany NPP cycles at increased power is presented, starting with the upcoming 24th cycle (303 FPD) and continuing with the 25th (322 FPD) and 26th (336 FPD); for all cycles K_R ≤ 1.54.

  8. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic has concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctable burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency.
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure
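
    The block-interleaved MDS construction described above can be sketched with the simplest MDS code, a single parity check: interleaving d codewords turns a burst of up to d erasures into at most one erasure per codeword, which the parity then repairs. (Illustrative sketch only; the symbol values and dimensions below are assumptions, not the record's design.)

```python
def spc_encode(data):
    """Single Parity Check codeword: data symbols plus one XOR parity symbol."""
    parity = 0
    for d in data:
        parity ^= d
    return data + [parity]

def recover_erasure(codeword):
    """Recover at most one erased symbol (marked None) via the parity relation."""
    missing = [i for i, s in enumerate(codeword) if s is None]
    if len(missing) > 1:
        raise ValueError("SPC corrects only a single erasure per codeword")
    if missing:
        x = 0
        for s in codeword:
            if s is not None:
                x ^= s
        codeword[missing[0]] = x
    return codeword

# Block-interleave three codewords (depth d = 3) by column-wise read-out:
words = [spc_encode([1, 2, 3]), spc_encode([4, 5, 6]), spc_encode([7, 8, 9])]
stream = [words[r][c] for c in range(4) for r in range(3)]
stream[4:7] = [None, None, None]     # a burst erasure of length 3 in the channel
# Deinterleaving leaves at most one erasure per codeword, so each is repairable:
rebuilt = [[stream[c * 3 + r] for c in range(4)] for r in range(3)]
repaired = [recover_erasure(w) for w in rebuilt]
```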

  9. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  10. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support the selection of off-site protective action in nuclear reactor accidents, and provides a user's manual for a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidates for the protective action zones of evacuation, sheltering and no action based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned the population risks of fatality and injury and the cost as the objective functions, and the distances from the nuclear power plant characterizing the above three protective action zones as the decision variables. (author)

  11. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, using the m-sequence as a case study. Drawing on coding theory, we introduce jamming methods and simulate the interference effect and its probability model in MATLAB. Based on the length of decoding time the adversary spends, we derive the optimal formula and optimal coefficients through machine learning, and thereby obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Next, we use laser active deception jamming to simulate the interference process in the tracking phase. To improve the interference performance, we simulate the model in MATLAB and find the least number of pulse intervals that must be received, from which we can determine the precise number of laser-pulse intervals for m-sequence encoding. To find the shortest interval, we use the greatest-common-divisor method. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code already received. Finally, we can control the time period of laser interference, obtain the optimal interference code, and increase the probability of successful interference.
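
    For reference, an m-sequence of the kind analyzed above is generated by a maximal-length linear feedback shift register (LFSR). A minimal sketch (in Python rather than the paper's MATLAB; the register length and taps below are assumptions chosen for illustration):

```python
def m_sequence(taps, n, state=1):
    """Generate one period of an m-sequence from an n-bit Fibonacci LFSR.
    `taps` lists the 0-based feedback bit positions; for a maximal-length
    sequence the taps must correspond to a primitive polynomial."""
    period = (1 << n) - 1          # an m-sequence has period 2^n - 1
    out = []
    for _ in range(period):
        out.append(state & 1)      # output the least significant bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (n - 1))
    return out

# x^4 + x + 1 is primitive, so taps [0, 1] with n = 4 give period 15:
seq = m_sequence([0, 1], 4)
```

A maximal-length sequence of period 2^n - 1 contains exactly 2^(n-1) ones, one of the balance properties that makes m-sequences useful as pseudo-random codes.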

  12. Optimized iterative decoding method for TPC coded CPM

    Science.gov (United States)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) systems (TPC-CPM) have been widely used in aeronautical telemetry and satellite communication. This paper investigates the improvement and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system converges poorly. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves the convergence performance.

  13. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and of decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  14. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    Energy Technology Data Exchange (ETDEWEB)

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  15. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  16. Optimal source coding, removable noise elimination, and natural coordinate system construction for general vector sources using replicator neural networks

    Science.gov (United States)

    Hecht-Nielsen, Robert

    1997-04-01

    A new universal one-chart smooth manifold model for vector information sources is introduced. Natural coordinates (a particular type of chart) for such data manifolds are then defined. Uniformly quantized natural coordinates form an optimal vector quantization code for a general vector source. Replicator neural networks (a specialized type of multilayer perceptron with three hidden layers) are then introduced. As properly configured examples of replicator networks approach minimum mean squared error (e.g., via training and architecture adjustment using randomly chosen vectors from the source), these networks automatically develop a mapping which, in the limit, produces natural coordinates for arbitrary source vectors. The new concept of removable noise (a noise model applicable to a wide variety of real-world noise processes) is then discussed. Replicator neural networks, when configured to approach minimum mean squared reconstruction error (e.g., via training and architecture adjustment on randomly chosen examples from a vector source, each with randomly chosen additive removable noise contamination), in the limit eliminate removable noise and produce natural coordinates for the data vector portions of the noise-corrupted source vectors. Considerations regarding the selection of the dimension of a data manifold source model and the training/configuration of replicator neural networks are discussed.
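
    The replicator architecture described above (three hidden layers, with a narrow middle layer producing the natural coordinates) can be sketched as a randomly initialized forward pass; the layer sizes and nonlinearity below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def replicator(dims):
    """Layer weights for input -> h1 -> bottleneck -> h3 -> output."""
    return [(0.1 * rng.standard_normal((a, b)), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(layers, x):
    for k, (W, b) in enumerate(layers):
        x = x @ W + b
        if k < len(layers) - 1:
            x = np.tanh(x)          # nonlinearity on all hidden layers
    return x

layers = replicator([8, 16, 2, 16, 8])   # 2-D bottleneck = natural coordinates
x = rng.standard_normal((5, 8))
mse = np.mean((forward(layers, x) - x) ** 2)   # reconstruction error to minimize
```

    Training would adjust the weights to drive `mse` toward its minimum, at which point reading out the 2-wide middle layer yields the natural coordinates of each input vector.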

  17. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  18. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.

  19. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100  Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7 of large girth. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
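
    The control-plane adaptation logic amounts to mapping the monitored OSNR to one of the three LDPC code rates; the switching thresholds in this sketch are hypothetical placeholders, not values from the experiment.

```python
def select_code_rate(osnr_db):
    """Pick the highest code rate whose (hypothetical) OSNR threshold is met."""
    thresholds = [(18.0, 0.8), (15.0, 0.75)]   # assumed switching points, dB
    for thr, rate in thresholds:
        if osnr_db >= thr:
            return rate
    return 0.7                                  # most robust fallback rate
```

    Feeding each OSNR measurement through such a rule and messaging the result back to the transmitter is what lets the link carry the highest rate the current channel supports instead of a fixed worst-case rate.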

  20. Optimizing Extender Code for NCSX Analyses

    International Nuclear Information System (INIS)

    Richman, M.; Ethier, S.; Pomphrey, N.

    2008-01-01

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch

  1. Scaling Optimization of the SIESTA MHD Code

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  2. Code Differentiation for Hydrodynamic Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint mode run time appreciably, which is a distinct advantage for this method. Obtaining ''accurate'' sensitivities for the jet problem parameters remains problematic.
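
    Forward-mode AD of the kind compared above can be illustrated with dual numbers: each parameter carries its derivative through the computation. This is a generic sketch, not the instrumentation actually applied to the hydrodynamics code (whose adjoint mode, as noted, scales better when there are many parameters and few outputs).

```python
class Dual:
    """Minimal forward-mode AD value: (primal value, derivative)."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carries the derivative forward
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)
    __rmul__ = __mul__

def sensitivity(f, x):
    """df/dx at x via a single forward-mode pass (seed derivative = 1)."""
    return f(Dual(x, 1.0)).d
```

    One such pass yields the sensitivity with respect to one parameter; for 12 parameters the forward mode needs 12 passes, whereas a single adjoint pass yields all 12 at once.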

  3. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross section calculations) and the three-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tab.

  4. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead

  5. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to affect this reduced run time are described. Various test cases are detailed

  6. Optimized Method for Generating and Acquiring GPS Gold Codes

    Directory of Open Access Journals (Sweden)

    Khaled Rouabah

    2015-01-01

    Full Text Available We propose a simpler and faster Gold codes generator, which can be efficiently initialized to any desired code with minimum delay. Its principle consists of generating only one sequence (code number 1) from which we can produce all the other different signal codes. This is realized by simply shifting this sequence by different delays that are judiciously determined by using the bicorrelation function characteristics. This is in contrast to the classical Linear Feedback Shift Register (LFSR)-based Gold codes generator, which requires, in addition to the shift process, a significant number of logic XOR gates and a phase selector to change the code. The presence of all these logic XOR gates in the classical LFSR-based generator adds delay to the generation and acquisition processes. In addition to its simplicity and rapidity, the proposed architecture, due to the total absence of XOR gates, uses fewer resources than the conventional Gold generator and can thus be produced at lower cost. Digital Signal Processing (DSP) implementations have shown that the proposed architecture presents a solution for acquiring Global Positioning System (GPS) satellite signals optimally and in parallel.
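
    The delay-based construction can be sketched with the standard GPS C/A polynomials: the m-sequence pair G1 and G2 is generated once, and every code in the family is obtained by XOR-ing G1 with a circularly delayed G2. (The specific delays assigned to real PRNs are tabulated in the GPS interface specification and are not reproduced here; the delays below are arbitrary examples.)

```python
def lfsr_seq(taps, n=10, length=1023):
    """One period of a maximal-length sequence from an n-stage Fibonacci LFSR."""
    reg = [1] * n                     # all-ones initial state
    out = []
    for _ in range(length):
        out.append(reg[-1])           # output is the last stage
        fb = 0
        for t in taps:                # feedback = XOR of tapped stages
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return out

G1 = lfsr_seq([3, 10])               # GPS C/A G1: 1 + x^3 + x^10
G2 = lfsr_seq([2, 3, 6, 8, 9, 10])   # GPS C/A G2 polynomial

def gold_code(delay):
    """Gold code: G1 XOR a circularly delayed copy of G2."""
    return [G1[i] ^ G2[(i + delay) % 1023] for i in range(1023)]
```

    Because G1 and G2 form a preferred pair, the cross-correlation between any two distinct family members takes only the three values {-1, 63, -65}, which is what keeps the codes separable during parallel acquisition.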

  7. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the Mac- Williams' identities based on the probability of undetected error, and two important tools for algebraic decoding-namely, the finite field Fourier transform and the Euclidean algorithm f...

  8. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  9. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL]

    2016-04-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries (CE and MG) with enhanced group structures,
    • neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • covariance data for fission product yields and decay constants,
    • stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • parallel calculations with KENO,
    • problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  10. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  11. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactors nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005 the U.S. needs to invest in developing fission reactor technology for ground based terrestrial power plants. Such plants would make a significant contribution toward drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric power AC generators. However, the advantages of transmitting power in high voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternate heat source the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid 1960's, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the Thorium --> U233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also for providing thermal energy for hydrogen (H2) production during "off peak" periods. This approach will both supply electric power by using environmentally clean nuclear heat which does not generate green house gases, and also provide a clean fuel H2 for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. 
This is

  12. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...... level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown....

  13. Optimal super dense coding over memory channels

    OpenAIRE

    Shadman, Zahra; Kampermann, Hermann; Macchiavello, Chiara; Bruß, Dagmar

    2011-01-01

    We study the super dense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and non-unitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The super dense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where non-unitary encoding leads to an improvement in the super dense coding capacity.

  14. On Optimal Policies for Network-Coded Cooperation

    DEFF Research Database (Denmark)

    Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Pahlevani, Peyman

    2015-01-01

    Network-coded cooperative communication (NC-CC) has been proposed and evaluated as a powerful technology that can provide a better quality of service in the next-generation wireless systems, e.g., D2D communications. Previous contributions have focused on performance evaluation of NC-CC scenarios...... rather than searching for optimal policies that can minimize the total cost of reliable packet transmission. We break from this trend by initially analyzing the optimal design of NC-CC for a wireless network with one source, two receivers, and half-duplex erasure channels. The problem is modeled...... as a special case of Markov decision process (MDP), which is called stochastic shortest path (SSP), and is solved for any field size, arbitrary number of packets, and arbitrary erasure probabilities of the channels. The proposed MDP solution results in an optimal transmission policy per time slot, and we use...

  15. Efficient topology optimization in MATLAB using 88 lines of code

    DEFF Research Database (Denmark)

    Andreassen, Erik; Clausen, Anders; Schevenels, Mattias

    2011-01-01

    The paper presents an efficient 88 line MATLAB code for topology optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. The original code has been extended by a density filter, and a considerable improvemen...... of the basic code to include recent PDE-based and black-and-white projection filtering methods. The complete 88 line code is included as an appendix and can be downloaded from the web site www.topopt.dtu.dk....
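
    The density filter mentioned above (one of the extensions over the 99-line code) can be sketched in Python; this mirrors the standard linear-hat-weight filter rather than the vectorized MATLAB implementation in the paper.

```python
import numpy as np

def density_filter(x, rmin):
    """Distance-weighted density filter over an (nely, nelx) array of
    element densities, with linearly decaying weights inside radius rmin."""
    nely, nelx = x.shape
    r = int(np.ceil(rmin)) - 1
    xf = np.zeros_like(x)
    for i in range(nelx):
        for j in range(nely):
            wsum, val = 0.0, 0.0
            for ii in range(max(i - r, 0), min(i + r + 1, nelx)):
                for jj in range(max(j - r, 0), min(j + r + 1, nely)):
                    w = rmin - np.hypot(i - ii, j - jj)
                    if w > 0:
                        wsum += w
                        val += w * x[jj, ii]
            xf[j, i] = val / wsum     # weighted average of neighborhood
    return xf
```

    Applying the filter to each design update smears densities over neighboring elements, which is what suppresses checkerboard patterns and enforces a minimum length scale in the optimized layout.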

  16. Constellation labeling optimization for bit-interleaved coded APSK

    Science.gov (United States)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates the constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in Digital Video Broadcasting Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its merits of power and spectral efficiency together with the robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A Binary switching algorithm and its modified version are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulated results validate the proposed constellation labeling optimization scheme which yields better performance against conventional 32-APSK constellation defined in DVB-S2 standard.
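
    A minimal version of the binary switching idea used in the mapping search: repeatedly try pairwise label swaps and keep any swap that lowers the cost function. The toy cost used in the test is a stand-in for the paper's Euclidean-distance and mutual-information cost functions, and this greedy variant is a simplification of the full algorithm.

```python
import itertools

def binary_switching(labels, cost):
    """Greedy pairwise-swap descent on a labeling cost function."""
    labels = list(labels)
    improved = True
    while improved:
        improved = False
        best = cost(labels)
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            c = cost(labels)
            if c < best:
                best, improved = c, True                       # keep the swap
            else:
                labels[i], labels[j] = labels[j], labels[i]    # undo it
    return labels
```

    The loop terminates at a local optimum of the cost; in the constellation-labeling setting, `labels` would be the bit patterns assigned to the 32 APSK points.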

  17. Using Peephole Optimization on Intermediate Code

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Staveren, H.; Stevenson, J.W.

    1982-01-01

    Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is
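
    A peephole pass over such a stack-machine intermediate code can be sketched with a two-instruction window; the IR opcodes here are invented for illustration and are not the actual instruction set described in the paper.

```python
def peephole(code):
    """One pass of window-2 peephole rewriting over a toy stack IR,
    where each instruction is a tuple like ("PUSH", 1) or ("ADD",)."""
    out = []
    i = 0
    while i < len(code):
        if i + 1 < len(code):
            a, b = code[i], code[i + 1]
            if a[0] == "PUSH" and b == ("POP",):   # push then pop: dead code
                i += 2
                continue
            if a == ("NEG",) and b == ("NEG",):    # double negation cancels
                i += 2
                continue
        out.append(code[i])
        i += 1
    return out
```

    Running such passes to a fixed point on the intermediate code cleans up patterns the front end emits mechanically, before the final translation to the target machine's assembly language.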

  18. ESCADRE and ICARE code systems

    International Nuclear Information System (INIS)

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program follows two parallel approaches: the first deals with ''integral codes'', which are designed to give immediate engineering answers; the second follows a more mechanistic path, in order to have the capability of detailed analysis of experiments, to get a better understanding of the scaling problem, and to reach greater confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermalhydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The ''integral'' ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  19. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. The EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that EDW significantly outperforms the Hadamard and MFH codes.

  20. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  1. A Robust Cross Coding Scheme for OFDM Systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2010-01-01

    In wireless OFDM-based systems, coding jointly over all the sub-carriers simultaneously performs better than coding separately per sub-carrier. However, the joint coding is not always optimal because its achievable channel capacity (i.e. the maximum data rate) is inversely proportional to the

  2. SASSYS LMFBR systems analysis code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.

    1982-01-01

    The SASSYS code provides detailed steady-state and transient thermal-hydraulic analyses of the reactor core, inlet and outlet coolant plenums, primary and intermediate heat-removal systems, steam generators, and emergency shut-down heat removal systems in liquid-metal-cooled fast-breeder reactors (LMFBRs). The main purpose of the code is to analyze the consequences of failures in the shut-down heat-removal system and to determine whether this system can perform its mission adequately even with some of its components inoperable. The code is not plant-specific. It is intended for use with any LMFBR, using either a loop or a pool design, a once-through steam generator or an evaporator-superheater combination, and either a homogeneous core or a heterogeneous core with internal-blanket assemblies

  3. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
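
    The MLEM update has a compact form once the projection matrix A is available; the sketch below uses a small random system in place of the Monte Carlo-simulated matrix from the paper.

```python
import numpy as np

def mlem(A, y, iters=50):
    """MLEM: x <- x / (A^T 1) * A^T (y / (A x)), starting from a uniform image.
    A maps image voxels to detector bins; y holds the measured projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])      # sensitivity (normalization) image
    for _ in range(iters):
        x = x / sens * (A.T @ (y / (A @ x)))
    return x
```

    The multiplicative update keeps the image nonnegative and increases the Poisson likelihood of the data at every iteration; the reconstruction quality hinges entirely on how faithfully A models the coded mask and camera.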

  4. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-01-01

    RT-CUDA takes a C program and converts it into an optimized CUDA kernel with user directives in a configuration file for guiding the compiler. RT-CUDA also allows transparent invocation of the most optimized external math libraries, such as cuSparse and cuBLAS, enabling efficient design of linear algebra solvers.

  5. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-03-13

    In this work we have developed a restructuring software tool (RT-CUDA) following the proposed optimization specifications to bridge the gap between high-level languages and the machine-dependent CUDA environment. RT-CUDA takes a C program and converts it into an optimized CUDA kernel with user directives in a configuration file for guiding the compiler. RT-CUDA also allows transparent invocation of the most optimized external math libraries, such as cuSparse and cuBLAS, enabling efficient design of linear algebra solvers. We expect RT-CUDA to be needed by many KSA industries dealing with science and engineering simulation on massively parallel computers like NVIDIA GPUs.

  6. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  7. Optimizing the ATLAS code with different profilers

    CERN Document Server

    Kama, S; The ATLAS collaboration

    2013-01-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...

  8. Optimization of Particle-in-Cell Codes on RISC Processors

    Science.gov (United States)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for RISC processors, which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
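
    The cache-friendly data reorganization the abstract describes can be illustrated with a struct-of-arrays particle push; NumPy arrays stand in here for the paper's Fortran arrays, and the field and step values are purely illustrative.

```python
import numpy as np

# Struct-of-arrays layout: each particle coordinate lives in its own
# contiguous array, so the push streams through memory at unit stride
# (the cache-friendly reorganization the abstract describes).
def push_soa(x, v, E, q_over_m, dt):
    v = v + q_over_m * E * dt   # accelerate every particle
    x = x + v * dt              # advance positions
    return x, v

rng = np.random.default_rng(0)
n = 1000
x0 = rng.random(n)
E = np.ones(n)                  # uniform field, purely illustrative
x2, v2 = push_soa(x0, np.zeros(n), E, q_over_m=1.0, dt=0.1)
```

    The contrast is with an array-of-structs layout (one record per particle), where each field access strides over the whole record and wastes cache lines.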

  9. System Design Description for the TMAD Code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System

  10. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  11. Code systems for effective and precise calculation of the basic neutron characteristics, core loading optimization, analysis and estimation of the operation regimes of WWER type reactors

    International Nuclear Information System (INIS)

    Apostolov, T.; Ivanov, K.; Prodanova, R.; Manolova, M.; Petrova, T.; Alekova, G.

    1993-01-01

    Two directions for investigations are suggested: 1) Analysis and evaluation of the real loading patterns and operational regimes for Kozloduy NPP WWER-440 and WWER-1000 in the frame of the recent safety criteria and nuclear power plant operating limits. 2) Development of a modern code system for WWER type reactor core analysis with advanced features: new design and materials for fuel and control rods, increasing the fuel enrichment, using the integral and discrete burnable absorbers etc. The fuel technology design evolution maximizes the fuel utilization efficiency, improves operation performance and enhances safety margins. By the joint efforts of specialists from INRNE, Sofia (BG) and KAB, Berlin (GE), the codes NESSEL-IV-EC, PYTHIA and DERAB have been developed and verified. In the frame of the PHARE programme the joint project ASPERCA has been proposed, intended for reactor physics calculations with the PHYBER-WWER code for safety enhancement and operation reliability improvement. In-core fuel management benchmarks for 4 cycles of unit 2 (WWER-440) and 2 cycles of unit 5 (WWER-1000) have been performed. The coordination of burnable absorber design implementation, low leakage loadings usage, reloading enrichment increase and steel content reduction in the core has made the reactor core analysis more demanding and the definition of loading patterns - more difficult. This complexity requires routine use of a three-dimensional, fast, accurate core model with extended and updated cross section libraries. To meet the needs of WWER advanced loading patterns and in-core fuel management improvements the HEXANES code system is being developed and qualified. Some test calculations have been carried out by the HEXANES code system investigating the influence of Gd in the fuel on the main reactor physics parameters. For reevaluation of the core safety-related design limits forming the basis of the licensing procedure, the code DYN3D/M2 is used. 16 refs., 3 figs. (author)

  12. Optimal Alarm Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It...

  13. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    International Nuclear Information System (INIS)

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-01-01

    Purpose: Aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility consisting of the beam tube, two quadrupole magnets and a beam monitor system was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12-ions through this ion-optic system was calculated by using a user-routine to implement magnetic fields. The fluence along the beam-axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user-routine was successful. Analyzing the fluence-pattern along the beam-axis the characteristic focusing and de-focusing effects of the quadrupole magnets could be reproduced. Furthermore the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system
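
    The matrix formalism the abstract describes can be sketched with thin-lens quadrupole and drift matrices and a grid scan over the two field strengths; the drift lengths, focal-length range, and doublet layout below are illustrative assumptions, not the FAST geometry or the authors' Matlab tool.

```python
import numpy as np

def drift(L):
    # Transfer matrix of a field-free region of length L (metres)
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    # Thin-lens quadrupole of focal length f; f < 0 means defocusing
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def doublet_m12(f1, f2):
    # Doublet: 1 m drift, quad 1, 0.5 m drift, quad 2, 1 m drift.
    # A quad focusing horizontally (f1) defocuses vertically (-f1).
    Mx = drift(1.0) @ thin_quad(-f2) @ drift(0.5) @ thin_quad(f1) @ drift(1.0)
    My = drift(1.0) @ thin_quad(f2) @ drift(0.5) @ thin_quad(-f1) @ drift(1.0)
    return Mx[0, 1], My[0, 1]

# For a point source, the spot radius at the iso-center scales with |M12|
# in each plane; a circular, thin spot needs both terms small at once.
best_obj, best_f1, best_f2 = np.inf, None, None
for f1 in np.linspace(0.3, 3.0, 271):
    for f2 in np.linspace(0.3, 3.0, 271):
        m12x, m12y = doublet_m12(f1, f2)
        obj = abs(m12x) + abs(m12y)
        if obj < best_obj:
            best_obj, best_f1, best_f2 = obj, f1, f2
```

    For this layout the stigmatic solution works out to f1 = f2 = 1/sqrt(5) m, so the grid scan should land near 0.45 m in both planes; an unfocused 2.5 m drift would give |M12| = 2.5 per plane.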

  14. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    Full Text Available Multiantenna multicarrier code-division multiple access (MC-CDMA technique has been attracting much attention for designing future broadband wireless systems. In addition, low-density parity-check (LDPC code, a promising near-optimal error correction code, is also being widely considered in next generation communication systems. In this paper, we propose a simple method to construct a regular quasicyclic low-density parity-check (QC-LDPC code to improve the transmission performance over the precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of the Reed-Solomon codes, and the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.
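
    The regular QC-LDPC construction the abstract mentions builds the parity-check matrix from circulant permutation blocks; a minimal sketch, where the shift table and lifting size Z are illustrative choices rather than the paper's code parameters:

```python
import numpy as np

def circulant_perm(Z, shift):
    # Z x Z identity with its columns cyclically shifted
    return np.roll(np.eye(Z, dtype=int), shift, axis=1)

def qc_ldpc_H(shifts, Z):
    # shifts: J x L table of cyclic-shift exponents; H is (J*Z) x (L*Z).
    # Every block is a circulant permutation, so H is (J, L)-regular:
    # column weight J, row weight L.
    return np.vstack([np.hstack([circulant_perm(Z, s) for s in row])
                      for row in shifts])

shifts = np.array([[0, 1, 2, 3],
                   [0, 2, 4, 6]])
H = qc_ldpc_H(shifts, Z=7)
```

    The quasi-cyclic structure is what makes these codes hardware-friendly: encoding and decoding need only the small shift table, not the full matrix.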

  15. Transoptr-a second order beam transport design code with automatic internal optimization and general constraints

    International Nuclear Information System (INIS)

    Heighway, E.A.

    1980-07-01

    A second order beam transport design code with parametric optimization is described. The code analyzes the transport of charged particle beams through a user defined magnet system. The magnet system parameters are varied (within user defined limits) until the properties of the transported beam and/or the system transport matrix match those properties requested by the user. The code uses matrix formalism to represent the transport elements and optimization is achieved using the variable metric method. Any constraints that can be expressed algebraically may be included by the user as part of his design. Instruction in the use of the program is given. (auth)

  16. Italian electricity supply contracts optimization: ECO computer code

    International Nuclear Information System (INIS)

    Napoli, G.; Savelli, D.

    1993-01-01

    The ECO (Electrical Contract Optimization) code written in the Microsoft WINDOWS 3.1 language can be handled with a 286 PC and a minimum of RAM. It consists of four modules, one for the calculation of ENEL (Italian National Electricity Board) tariffs, one for contractual time-of-use tariffs optimization, a table of tariff coefficients, and a module for monthly power consumption calculations based on annual load diagrams. The optimization code was developed by ENEA (Italian Agency for New Technology, Energy and the Environment) to help Italian industrial firms comply with new and complex national electricity supply contractual regulations and tariffs. In addition to helping industrial firms determine optimum contractual arrangements, the code also assists them in optimizing their choice of equipment and production cycles

  17. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    System Performance Optimization has become an important and difficult field for large scientific computer centres: important because the centres must satisfy increasing user demands at the lowest possible cost, and difficult because it requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on changes in hardware configuration, the current level of the operating system and the user-generated workload. With the increasing complexity of computer systems and software, the field for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described. The description concentrates on the organization of the operating systems, job scheduling and I/O handling. Performance definitions, workload specification and tools for system simulation are given. The measurement tools for System Performance Optimization are described. The results of the measurements and various methods used for operating system tuning are discussed. (Auth.)

  18. The computer code system for reactor radiation shielding in design of nuclear power plant

    International Nuclear Information System (INIS)

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes and processing codes for data libraries. This computer code system offers a satisfactory variety of codes and complete data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities.

  19. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
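
    The rectangle challenge-response idea in this patent abstract can be sketched as follows; single characters stand in for the patent's character subsets, and the no-reuse bookkeeping that defeats eavesdropping is omitted for brevity.

```python
import random
import string

def make_matrix(rows, cols, seed=0):
    # Distinct characters stand in for the patent's character subsets
    rng = random.Random(seed)
    chars = rng.sample(string.ascii_uppercase + string.digits, rows * cols)
    return [chars[r * cols:(r + 1) * cols] for r in range(rows)]

def challenge(matrix, rng):
    # Pick two cells sharing neither a row nor a column (opposite
    # corners of a rectangle); the valid response is the pair of
    # characters at the other two corners.
    rows, cols = len(matrix), len(matrix[0])
    r1, r2 = rng.sample(range(rows), 2)
    c1, c2 = rng.sample(range(cols), 2)
    sent = (matrix[r1][c1], matrix[r2][c2])
    expected = {matrix[r1][c2], matrix[r2][c1]}
    return sent, expected

m = make_matrix(4, 5)
chal, resp = challenge(m, random.Random(1))
```

    An eavesdropper who records one exchange learns only four cells; in the patent's scheme those subsets are then retired, so the recorded pair never grants access again.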

  20. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  1. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids) named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up is demonstrated with some examples, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  2. Software exorcism a handbook for debugging and optimizing legacy code

    CERN Document Server

    Blunden, Bill

    2013-01-01

    Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code takes an unflinching, no-nonsense look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that you are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our attention...

  3. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
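
    The analytical discrete solution the abstract refers to can be sketched as a generic proximal-linearized step over {-1, +1} codes, which reduces to a sign operation; the quadratic toy loss below is an illustrative assumption, not the paper's supervised or unsupervised hashing losses.

```python
import numpy as np

def dplm_step(B, grad_f, mu):
    # Proximal-linearized step: minimize <grad_f(B), B'> + (mu/2)||B' - B||^2
    # over B' in {-1, +1}^(n x r); the minimizer is sign(B - grad_f(B)/mu).
    B_new = np.sign(B - grad_f(B) / mu)
    B_new[B_new == 0] = 1          # break (measure-zero) ties
    return B_new

# Toy smooth loss: f(B) = 0.5 * ||B - X||^2 for real-valued embeddings X
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
grad = lambda B: B - X
B = np.where(rng.standard_normal(X.shape) > 0, 1.0, -1.0)
for _ in range(10):
    B = dplm_step(B, grad, mu=1.0)
```

    With this toy loss and mu = 1 the iteration reaches the obvious fixed point sign(X) immediately; the point of the sketch is only that each iteration stays exactly binary instead of relaxing to continuous values.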

  4. Optimization in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geraldo R.M. da [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia

    1994-12-31

    This paper discusses, in part, the advantages and disadvantages of the optimal power flow. It shows some of the difficulties of implementation and proposes solutions. An analysis is made comparing the power flow code, BIGPOWER/CESP, and the optimal power flow code, FPO/SEL, developed by the author, when applied to the CEPEL-ELETRONORTE and CESP systems. (author) 8 refs., 5 tabs.

  5. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  6. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
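
    The roofline methodology the abstract uses bounds attainable performance by the lower of the compute ceiling and the memory ceiling at a kernel's arithmetic intensity; a minimal sketch, where the peak and bandwidth figures are illustrative placeholders, not measured Cori/KNL numbers:

```python
def roofline(peak_gflops, bw_gbs, ai_flops_per_byte):
    # Attainable GF/s is capped by whichever ceiling is lower at this
    # arithmetic intensity: raw compute, or bandwidth x intensity.
    return min(peak_gflops, bw_gbs * ai_flops_per_byte)

# Illustrative machine parameters only:
low_ai = roofline(3000.0, 450.0, 1.0)    # memory-bound kernel
high_ai = roofline(3000.0, 450.0, 10.0)  # compute-bound kernel
```

    Plotting a kernel on this model tells the optimizer which lever matters: vectorization moves a compute-bound kernel, while memory-layout work moves a bandwidth-bound one, which matches the two optimization directions reported in the abstract.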

  7. Optimization of the FAST ICRF antenna using TOPICA code

    International Nuclear Information System (INIS)

    Sorba, M.; Milanesio, D.; Maggiora, R.; Tuccillo, A.

    2010-01-01

    Ion Cyclotron Resonance Heating is one of the most important auxiliary heating systems in most plasma confinement experiments. Because of this, the need for very accurate design of ion cyclotron (IC) launchers has grown dramatically in recent years. Furthermore, a reliable simulation tool is a crucial requirement for the successful design of these antennas, since full testing is impossible outside experiments. One of the most advanced and validated simulation codes is TOPICA, which offers the possibility to handle the geometrical level of detail of a real antenna in front of an accurately described plasma scenario. Adopting this essential tool made it possible to reach a refined design of the ion cyclotron radio frequency antenna for the FAST (Fusion Advanced Studies Torus) experiment. Starting from a streamlined antenna model and then following well-defined refinement procedures, an optimized launcher design in terms of power delivered to plasma has finally been achieved. The computer-assisted geometry refinements allowed an increase in the performance of the antenna, notably in power handling: the extent of the gained improvements was not experienced in the past, essentially due to the absence of predictive tools capable of analyzing the detailed effects of antenna geometry in plasma-facing conditions. Thus, with the help of the TOPICA code, it has been possible to comply with the FAST experiment requirements in terms of vacuum chamber constraints and power delivered to plasma. Once an antenna geometry was optimized with a reference plasma profile, the analysis of the performance of the launcher was extended with respect to two plasma scenarios. Exploiting all TOPICA features, it has been possible to predict the behavior of the launcher in real operating conditions, for instance varying the position of the separatrix surface. In order to fulfil the analysis of the FAST IC antenna, the study of the RF potentials, which depend on the parallel electric field computation...

  8. Expansion of the CHR bone code system

    International Nuclear Information System (INIS)

    Farnham, J.E.; Schlenker, R.A.

    1976-01-01

    This report describes the coding system used in the Center for Human Radiobiology (CHR) to identify individual bones and portions of bones of a complete skeletal system. It includes illustrations of various bones and bone segments with their respective code numbers. Codes are also presented for bone groups and for nonbone materials

  9. Development of a coupled code system based on system transient code, RETRAN, and 3-D neutronics code, MASTER

    International Nuclear Information System (INIS)

    Kim, K. D.; Jung, J. J.; Lee, S. W.; Cho, B. O.; Ji, S. K.; Kim, Y. H.; Seong, C. K.

    2002-01-01

    A coupled code system, RETRAN/MASTER, has been developed for best-estimate simulations of interactions between reactor core neutron kinetics and plant thermal-hydraulics by incorporating the 3-D reactor core kinetics analysis code MASTER into the system transient code RETRAN. The soundness of the consolidated code system is confirmed by simulating the MSLB benchmark problem, developed by OECD/NEA to verify the performance of coupled kinetics and system transient codes

  10. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

    In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of the rate-distortion curve. The novelty is that visual weighting is achieved not by lifting the coefficients in the wavelet domain but through code-stream organization. The scheme retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error-bit spread, and compatibility with lossless compression. Performing better than other methods, it keeps the standard codestream and decompression time shortest and supports Visual Progressive (VIP) coding.

  11. Coding-Spreading Tradeoff in CDMA Systems

    National Research Council Canada - National Science Library

    Bolas, Eduardo

    2002-01-01

    .... Comparing different combinations of coding and spreading with a traditional DS-CDMA, as defined in the IS-95 standard, allows the criteria to be defined for the best coding-spreading tradeoff in CDMA systems...

  12. Spectral-Amplitude-Coded OCDMA Optimized for a Realistic FBG Frequency Response

    Science.gov (United States)

    Penon, Julien; El-Sahn, Ziad A.; Rusch, Leslie A.; Larochelle, Sophie

    2007-05-01

    We develop a methodology for numerical optimization of fiber Bragg grating frequency response to maximize the achievable capacity of a spectral-amplitude-coded optical code-division multiple-access (SAC-OCDMA) system. The optimal encoders are realized, and we experimentally demonstrate an incoherent SAC-OCDMA system with seven simultaneous users. We report a bit error rate (BER) of 2.7 x 10^-8 at 622 Mb/s for a fully loaded network (seven users) using a 9.6-nm optical band. We achieve error-free transmission (BER < 1 x 10^-9) for up to five simultaneous users.

  13. Distributed Optimization System

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  14. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
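
    The exploitation of common intermediates the abstract mentions can be illustrated by hoisting a shared subexpression out of vectorized integrand evaluations; the radial terms below are an illustrative toy, not actual molecular integrals or Maple-generated code.

```python
import numpy as np

def radial_terms(r, a=1.2):
    # Hoist the shared exponential once (the "common intermediate"),
    # then reuse it in both integrand factors; vectorized over r,
    # mirroring how generated code targets arithmetic pipelines.
    t = np.exp(-a * r * r)      # computed once, used twice
    return t * r, t * r * r

r = np.linspace(0.0, 5.0, 1001)
f, g = radial_terms(r)
```

    A naive evaluation would recompute the exponential for each term; code generators spot such shared pieces automatically and emit them as temporaries, which is exactly the optimization being adjusted for serial or vector hardware.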

  15. Cooperative optimization and their application in LDPC codes

    Science.gov (United States)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding global optima of complicated functions of many variables. The proposed algorithm belongs to a class of message-passing algorithms and has solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4 x 10^-7. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor once Eb/No exceeds 1.8 dB.

  16. The octopus burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L.; Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de

    1996-09-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  17. The OCTOPUS burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.).

  18. The octopus burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de.

    1996-01-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  19. The OCTOPUS burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.)

  20. Fundamentals of an Optimal Multirate Subband Coding of Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    D. Kula

    2000-06-01

    A consistent theory of optimal subband coding of zero-mean wide-sense cyclostationary signals, with N-periodic statistics, is presented in this article. An M-channel orthonormal uniform filter bank, employing N-periodic analysis and synthesis filters, is used, and an average variance condition is applied to evaluate the output distortion. In three lemmas and a final theorem, the necessity of decorrelating the blocked subband signals and the requirement of a specific ordering of power spectral densities are proven.

  1. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2012-12-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in', with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system, with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method for optimizing this process, ensuring that all required components fit optimally inside the Notebook system. © 2012 IEEE.

  2. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2013-01-10

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in', with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system, with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method for optimizing this process, ensuring that all required components fit optimally inside the Notebook system.

  3. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  4. Iterative optimization of performance libraries by hierarchical division of codes

    International Nuclear Information System (INIS)

    Donadio, S.

    2007-09-01

    The increasing complexity of hardware features incorporated in modern processors makes high performance code generation very challenging. Library generators such as ATLAS, FFTW and SPIRAL overcome this issue by empirically searching the space of possible program versions for the one that performs best. This thesis explores a fully automatic solution for adapting a compute-intensive application to the target architecture. By mimicking complex sequences of transformations useful for optimizing real codes, we show that generative programming is a practical tool for implementing a new hierarchical compilation approach for the generation of high performance code relying on the use of state-of-the-art compilers. As opposed to ATLAS, this approach is not application-dependent but can be applied to fairly generic loop structures. Our approach relies on the decomposition of the original loop nest into simpler kernels. These kernels are much simpler to optimize, and furthermore, using such codes makes the performance trade-off problem much simpler to express and to solve. Finally, we propose a new approach for the generation of performance libraries based on this decomposition method. We show that our method generates high-performance libraries, in particular for BLAS. (author)
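The decomposition of a loop nest into simpler kernels can be sketched with the classic example of a matrix multiply restructured into small blocked updates. This is only an illustration of the idea in Python (the thesis targets generated, compiled code), but the structure is the same: each small block update is a "kernel" that is far easier to tune than the original triple loop.

```python
def matmul_naive(A, B):
    """Reference triple-loop matrix multiply for square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matmul_blocked(A, B, bs=2):
    """The same computation decomposed into bs x bs 'kernel' updates.
    Each kernel touches a small working set, which is the unit a library
    generator would specialize (unrolling, register allocation)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for ii in range(0, n, bs):
        for kk in range(0, n, bs):
            for jj in range(0, n, bs):
                # kernel: C[ii:ii+bs, jj:jj+bs] += A[ii:ii+bs, kk:kk+bs] @ B[kk:kk+bs, jj:jj+bs]
                for i in range(ii, min(ii + bs, n)):
                    for k in range(kk, min(kk + bs, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + bs, n)):
                            C[i][j] += a * B[k][j]
    return C
```

Both versions compute the same product; only the iteration structure differs.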

  5. High dynamic range coding imaging system

    Science.gov (United States)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images with extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images are reconstructed. We use existing algorithms to fuse those LDR images and display an HDR image. We build an optical simulation model and obtain simulation images to verify the novel system.
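The final fusion step (merging multiple exposures into one HDR estimate) can be sketched with a simple weighted average, here on flat pixel lists. This is a generic multi-exposure fusion, not the paper's coded-aperture reconstruction; the hat-shaped weighting is a common textbook choice.

```python
def fuse_exposures(images, exposures):
    """Merge LDR frames (pixel values in [0,1]) into an HDR radiance estimate.

    Each pixel is divided by its exposure time and averaged with weights
    favoring well-exposed mid-range values; saturated or dark pixels
    contribute little.
    """
    def w(v):                      # hat weight: peaks at 0.5, zero at 0 and 1
        return 1.0 - abs(2.0 * v - 1.0)

    n_pix = len(images[0])
    hdr = []
    for p in range(n_pix):
        num = den = 0.0
        for img, t in zip(images, exposures):
            v = img[p]
            num += w(v) * v / t
            den += w(v)
        hdr.append(num / den if den > 0 else 0.0)
    return hdr
```

A saturated pixel (value 1.0) gets zero weight, so the shorter exposure dominates the estimate for bright regions.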

  6. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system.
It can be argued that although some of the important
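The unit-stride point can be made concrete with the two traversal orders of a row-major 2-D array. Pure Python hides the hardware effect, so this sketch shows only the access structure; in a compiled language the first version walks memory contiguously while the second strides by a full row per access.

```python
def sum_row_major(M):
    """Row-first traversal: consecutive inner-loop accesses are adjacent in
    memory for a row-major layout, the cache-friendly unit-stride pattern."""
    total = 0
    for row in M:
        for v in row:          # unit stride within a row
            total += v
    return total

def sum_col_major(M):
    """Column-first traversal: each inner-loop access strides by the row
    length, wasting most of every fetched cache line in a compiled language."""
    total = 0
    n_cols = len(M[0])
    for j in range(n_cols):
        for row in M:          # stride = row length
            total += row[j]
    return total
```

Both return the same sum; only the memory access pattern differs, which is exactly the property the abstract argues cache-based systems are sensitive to.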

  7. Random mask optimization for fast neutron coded aperture imaging

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, Kyle [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Univ. of California, Los Angeles, CA (United States); Marleau, Peter [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brubaker, Erik [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-05-01

    In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma rays are being imaged, the optimum is seldom achieved. In such cases, more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
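A toy version of heuristic mask optimization: greedily improve a 1-D binary mask so that its periodic autocorrelation sidelobes become flatter (URAs are constructed to have ideally flat sidelobes). This uses a simple autocorrelation metric and random flips, not Sandia's MLEM-based image-quality objective.

```python
import random

def sidelobe_badness(mask):
    """Sum of squared off-peak periodic autocorrelation values;
    smaller means flatter sidelobes, closer to URA-like behavior."""
    n = len(mask)
    total = 0
    for shift in range(1, n):
        c = sum(mask[i] * mask[(i + shift) % n] for i in range(n))
        total += c * c
    return total

def optimize_mask(n=16, open_frac=0.5, iters=500, seed=1):
    """Greedy random search over binary masks: flip one random element and
    keep the change if the sidelobe metric does not worsen. A toy stand-in
    for the heuristic optimization described in the record above."""
    rng = random.Random(seed)
    mask = [1 if i < int(n * open_frac) else 0 for i in range(n)]
    rng.shuffle(mask)
    best = sidelobe_badness(mask)
    for _ in range(iters):
        j = rng.randrange(n)
        mask[j] ^= 1
        score = sidelobe_badness(mask)
        if score <= best:
            best = score          # keep the flip
        else:
            mask[j] ^= 1          # revert
    return mask, best
```

The returned score never exceeds the starting mask's score, so the search can only improve (or hold) the metric.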

  8. BWROPT: A multi-cycle BWR fuel cycle optimization code

    Energy Technology Data Exchange (ETDEWEB)

    Ottinger, Keith E.; Maldonado, G. Ivan, E-mail: Ivan.Maldonado@utk.edu

    2015-09-15

    Highlights: • A multi-cycle BWR fuel cycle optimization algorithm is presented. • New fuel inventory and core loading pattern determination. • The parallel simulated annealing algorithm was used for the optimization. • Variable sampling probabilities were compared to constant sampling probabilities. - Abstract: A new computer code for performing BWR in-core and out-of-core fuel cycle optimization for multiple cycles simultaneously has been developed. Parallel simulated annealing (PSA) is used to optimize the new fuel inventory and the placement of new and reload fuel for each cycle considered. Several algorithm improvements were implemented and evaluated. The most significant of these are variable sampling probabilities and sampling new fuel types from an ordered array. A heuristic control rod pattern (CRP) search algorithm was also implemented, which is useful for single CRP determinations; however, this feature requires significant computational resources and is currently not practical for use in a full multi-cycle optimization. The PSA algorithm was demonstrated to be capable of significantly reducing the objective function and finding candidate loading patterns without constraint violations. The use of variable sampling probabilities was shown to reduce runtime while producing better results compared to constant sampling probabilities. Sampling new fuel types from an ordered array had a mixed effect compared to random sampling: combining random and ordered sampling produced better results but required longer runtimes.
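The "variable sampling probabilities" idea can be sketched as a serial simulated annealing loop in which move types that recently produced improvements are sampled more often. This is a toy analogue on a trivial permutation problem, not the BWROPT implementation or its parallel variant.

```python
import math
import random

def anneal(cost, x0, moves, iters=2000, t0=1.0, alpha=0.995, seed=0):
    """Simulated annealing with variable move-sampling probabilities:
    move types whose proposals get accepted as improvements earn extra
    weight, so productive moves are drawn more often as the search runs."""
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    weights = [1.0] * len(moves)
    t = t0
    for _ in range(iters):
        k = rng.choices(range(len(moves)), weights=weights)[0]
        y = moves[k](list(x), rng)
        cy = cost(y)
        if cy < c or rng.random() < math.exp((c - cy) / t):
            if cy < c:
                weights[k] += 0.1     # reward productive move types
            x, c = y, cy
        t *= alpha                    # geometric cooling
    return x, c

# toy problem: reorder a permutation to minimize total displacement
def swap_adjacent(v, rng):
    i = rng.randrange(len(v) - 1)
    v[i], v[i + 1] = v[i + 1], v[i]
    return v

def swap_random(v, rng):
    i, j = rng.randrange(len(v)), rng.randrange(len(v))
    v[i], v[j] = v[j], v[i]
    return v

displacement = lambda v: sum(abs(a - i) for i, a in enumerate(v))
best_x, best_c = anneal(displacement, [4, 3, 2, 1, 0],
                        [swap_adjacent, swap_random])
```

The accepted state and its cost are always updated together, so the returned cost matches the returned configuration.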

  9. Office lighting systems: Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Dagnino, U. (ENEL, Milan (Italy))

    1990-09-01

    With reference to office lighting systems, in particular those using tubular fluorescent lamps currently available on the international market, this paper develops lighting system design optimization criteria. The comparative assessment of the various design possibilities considers operating cost, energy consumption, and occupational comfort/safety aspects such as lighting level uniformity and balance, reduction of glare and reflection, natural/artificial lighting balance, programmed switching, computerized control systems for multi-use requirements in large areas, and programmed maintenance for greater efficiency and reliability.

  10. Investigation of Navier-Stokes Code Verification and Design Optimization

    Science.gov (United States)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. This dissertation focuses on the finite volume (FV) formulation. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature).
A preliminary multi-objective optimization
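The response-surface idea reduces, in its simplest one-variable form, to fitting a low-order polynomial through a few expensive "CFD" samples and optimizing the cheap surrogate instead. A minimal sketch (exact quadratic interpolation through three design points; real RSM fits least-squares polynomials over many runs):

```python
def quadratic_surrogate(samples):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) design points
    using divided differences: the simplest response-surface surrogate."""
    (x1, y1), (x2, y2), (x3, y3) = samples
    d21 = (y2 - y1) / (x2 - x1)
    d32 = (y3 - y2) / (x3 - x2)
    a = (d32 - d21) / (x3 - x1)
    b = d21 - a * (x1 + x2)
    c = y1 - a * x1 * x1 - b * x1
    return a, b, c

def surrogate_minimizer(a, b, c):
    """Stationary point of the surrogate; a minimum when a > 0."""
    return -b / (2.0 * a)
```

For samples drawn from y = (x - 2)^2 + 1 the surrogate recovers the true minimizer x = 2 exactly, since the underlying response is itself quadratic.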

  11. CRM System Optimization

    OpenAIRE

    Fučík, Ivan

    2015-01-01

    This thesis is focused on CRM solutions in small and medium-sized organizations with respect to the quality of their customer relationship. The main goal of this work is to design an optimal CRM solution in the environment of real organization. To achieve this goal it is necessary to understand the theoretical basis of several topics, such as organizations and their relationship with customers, CRM systems, their features and trends. On the basis of these theoretical topics it is possible to ...

  12. Power system optimization

    International Nuclear Information System (INIS)

    Bogdan, Zeljko; Cehil, Mislav

    2007-01-01

    Long-term gas purchase contracts usually determine delivery and payment for gas on a regular hourly basis, independently of demand-side consumption. In order to use fuel gas in an economically viable way, optimization of gas distribution for covering consumption must be introduced. In this paper, a mathematical model of the electric utility system which is used for optimization of gas distribution over electric generators is presented. The utility system comprises installed capacity of 1500 MW of thermal power plants, 400 MW of combined heat and power plants, 330 MW of a nuclear power plant and 1600 MW of hydro power plants. Based on the known demand curve, the optimization model selects plants according to the prescribed criteria. First it engages run-of-river hydro plants, then the public cogeneration plants, the nuclear plant and thermal power plants. Storage hydro plants are used for covering peak load consumption. In case of a shortage of installed capacity, cross-border purchases are allowed. Usage of dual fuel equipment (gas-oil), which is available in some thermal plants, is also controlled by the optimization procedure. It is shown that by using such a model it is possible to properly plan the amount of fuel gas to be contracted. The contracted amount can easily be distributed over generators efficiently and without losses (no breaks in delivery). The model helps in optimizing the gas-oil ratio for plants with combined burners and enables planning of power plant overhauls over the year in a viable and efficient way. (author)
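The plant-selection logic described above is a merit-order dispatch: commit capacity in a fixed priority order and import any shortfall. A minimal sketch (the capacities follow the record; the greedy rule is a simplification of the paper's full optimization model):

```python
def dispatch(demand_mw, plants):
    """Greedy merit-order dispatch: commit plants in the given priority order
    (run-of-river hydro, cogeneration, nuclear, thermal, storage hydro for
    peaks), importing any remaining shortfall as a cross-border purchase."""
    schedule = {}
    remaining = demand_mw
    for name, capacity in plants:       # plants listed in merit order
        used = min(capacity, remaining)
        schedule[name] = used
        remaining -= used
    schedule["import"] = remaining      # cross-border purchase if short
    return schedule

# capacities (MW) from the record; run-of-river/storage split is assumed
merit_order = [
    ("run-of-river hydro", 600),
    ("cogeneration", 400),
    ("nuclear", 330),
    ("thermal", 1500),
    ("storage hydro", 1000),
]
```

For a 2000 MW demand hour, the first three sources are fully committed and thermal plants cover the remainder, with no imports needed.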

  13. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito (KEK, Tsukuba); Bielajew, Alex F.; Wilderman, Scott J. (Univ. of Michigan); Nelson, Walter R. (SLAC)

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial revision than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary.
With the release of the EGS4 version

  14. Optimization and Validation of the Developed Uranium Isotopic Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    γ-ray spectroscopy is a representative non-destructive assay (NDA) for nuclear material, and is less time-consuming and less expensive than destructive analysis. The destructive technique is more precise than the NDA technique; however, correction algorithms can improve the performance of γ-spectroscopy. For this reason, an analysis code for uranium isotopic analysis was developed by the Applied Nuclear Physics Group at Seoul National University. Overlapped γ- and x-ray peaks in the 89-101 keV X{sub α}-region are fitted with Gaussian and Lorentzian distribution peak functions, tail and background functions. In this study, optimizations of the full-energy peak efficiency calibration and of the fitting parameters for peak tail and background are performed, and validated with 24-hour acquisitions of CRM uranium samples. The analysis performance is improved for HEU samples, but further optimization of the fitting parameters is required for LEU sample analysis. In the future, optimization of the fitting parameters with various types of uranium samples will be performed, and {sup 234}U isotopic analysis algorithms and correction algorithms (coincidence effect, self-attenuation effect) will be developed.

  15. Turbine Airfoil Optimization Using Quasi-3D Analysis Codes

    Directory of Open Access Journals (Sweden)

    Sanjay Goel

    2009-01-01

    A new approach to optimizing the geometry of a turbine airfoil by simultaneously designing multiple 2D sections of the airfoil is presented in this paper. The complexity of 3D geometry modeling is circumvented by generating multiple 2D airfoil sections and constraining their geometry in the radial direction using first- and second-order polynomials that ensure smoothness in the radial direction. The flow fields of candidate geometries obtained during optimization are evaluated using a quasi-3D, inviscid CFD analysis code. An inviscid flow solver is used to reduce the execution time of the analysis. Multiple evaluation criteria based on the Mach number profile obtained from the analysis of each airfoil cross-section are used to compute a quality metric. A key contribution of the paper is the development of metrics that emulate the perception of the human designer in visually evaluating the Mach number distribution. A mathematical representation of the evaluation criteria coupled with a parametric geometry generator enables the use of formal optimization techniques in the design. The proposed approach is implemented in the optimal design of a low-pressure turbine nozzle.
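The radial-smoothness constraint can be sketched as a quadratic interpolation of a section parameter given at hub, mid and tip radii, so any intermediate section inherits a smooth value. The stagger angles below are made-up illustrative numbers, not data from the paper.

```python
def radial_profile(params):
    """Quadratic Lagrange interpolation of a 2D-section parameter specified
    at hub (r=0), mid (r=0.5) and tip (r=1), enforcing the kind of radial
    smoothness the paper imposes with first- and second-order polynomials."""
    p0, p1, p2 = params          # hub, mid, tip values
    def at(r):
        return (p0 * (r - 0.5) * (r - 1.0) / 0.5
                + p1 * r * (r - 1.0) / -0.25
                + p2 * r * (r - 0.5) / 0.5)
    return at

# hypothetical stagger angles (degrees) at hub, mid and tip
stagger = radial_profile((30.0, 38.0, 44.0))
```

Evaluating the profile at any radius between 0 and 1 yields a section value consistent with the three defining sections, so an optimizer can vary only the three anchors while every generated section stays radially smooth.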

  16. Data exchange between zero dimensional code and physics platform in the CFETR integrated system code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Guoliang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Shi, Nan [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Zhou, Yifu; Mao, Shifeng [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Jian, Xiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronics Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Jiale [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Liu, Li; Chan, Vincent [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Ye, Minyou, E-mail: yemy@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China)

    2016-11-01

    Highlights: • The workflow of the zero dimensional code and the multi-dimension physics platform of the CFETR integrated system code is introduced. • The iteration process among the codes in the physics platform is described. • The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and the justification for performance parameters, is discussed. - Abstract: The China Fusion Engineering Test Reactor (CFETR) integrated system code contains three parts: a zero dimensional code, a physics platform and an engineering platform. We use the zero dimensional code to identify a set of preliminary physics and engineering parameters for CFETR, which is used as input to initiate multi-dimension studies using the physics and engineering platform for design, verification and validation. Effective data exchange between the zero dimensional code and the physics platform is critical for the optimization of the CFETR design. For example, in evaluating the impact of impurity radiation on core performance, an open field line code is used to calculate the impurity transport from the first-wall boundary to the pedestal. The impurity particles at the pedestal are used as boundary conditions in a transport code for calculating impurity transport in the core plasma and the impact of core radiation on core performance. Comparison of the results from the multi-dimensional study to those from the zero dimensional code is used to further refine the controlled radiation model. The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and the justification for performance parameters, will be presented in this paper.

  17. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use
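The file-based wrapper pattern described above (fill an input template with design-variable values, run the code, pull response values out of its text output) can be sketched in a few lines. Token and variable names here are hypothetical illustrations, not actual NDARC or CAMRAD II keywords:

```python
import re

def write_input(template, values):
    """Fill a text input deck by replacing {NAME} tokens with design-variable
    values, the substitution step an RCOTOOLS-style wrapper performs before
    launching a file-based analysis code."""
    return template.format(**values)

def read_response(output_text, name):
    """Extract 'name = value' from a code's text output with a regex,
    the parsing step that turns file output into a response variable."""
    m = re.search(rf"{re.escape(name)}\s*=\s*([-+0-9.Ee]+)", output_text)
    if m is None:
        raise KeyError(f"{name} not found in output")
    return float(m.group(1))

# hypothetical template and output text for illustration
deck = write_input("rotor_radius = {RADIUS}\ndisk_loading = {DL}\n",
                   {"RADIUS": 6.5, "DL": 95.0})
gw = read_response("iteration converged\ngross_weight = 7432.1 lb\n",
                   "gross_weight")
```

In a real optimization loop, an OpenMDAO component would call `write_input`, execute the analysis code on the generated deck, and feed `read_response` values back to the optimizer.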

  18. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    International Nuclear Information System (INIS)

    Li, Jia; Jiang, Kecheng; Zhang, Xiaokang; Nie, Xingchen; Zhu, Qinjun; Liu, Songlin

    2016-01-01

    Highlights: • A nuclear-thermal-coupled predesign code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module speeds up the design process by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm seeks the optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as material temperature limits and available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of function zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study develops a pre-design code to optimize the radial build of the blanket, considering the nuclear and thermal-hydraulic performance simultaneously. Two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis process; and (2) a preliminary optimization algorithm using one or multiple specified objectives subject to design criteria, in the form of constraints imposed on design variables and performance parameters within the possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeding blanket in the China Fusion Engineering Testing Reactor (CFETR) project.
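The coupled nuclear-thermal search can be caricatured as a constrained scan over a single design variable (breeder-zone thickness), evaluating a neutronics response (TBR) and a thermal response (peak temperature) together. The monotonic surrogate models below are made up for illustration, not real blanket physics:

```python
def optimize_radial_build(thicknesses, tbr_model, temp_model,
                          tbr_min=1.1, temp_max=550.0):
    """Scan candidate breeder-zone thicknesses, keeping only builds that
    satisfy both the tritium-breeding-ratio floor and the material
    temperature limit, and return the thinnest feasible build (None if
    no candidate is feasible)."""
    feasible = [t for t in thicknesses
                if tbr_model(t) >= tbr_min and temp_model(t) <= temp_max]
    return min(feasible) if feasible else None

# hypothetical surrogates: both TBR and peak temperature rise with thickness
tbr = lambda t: 0.9 + 0.6 * t           # t in meters
temp = lambda t: 400.0 + 300.0 * t      # degrees C
```

With these surrogates, thin builds fail the TBR floor and thick builds approach the temperature limit, so the optimizer lands on the thinnest build satisfying both, which is the essence of the coupled constraint handling the record describes.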

  19. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jia, E-mail: lijia@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Jiang, Kecheng; Zhang, Xiaokang [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China); Nie, Xingchen [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Zhu, Qinjun; Liu, Songlin [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China)

    2016-12-15

    Highlights: • A nuclear-thermal-coupled predesign code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module speeds up the design process by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm seeks the optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as material temperature limits and available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of function zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study develops a pre-design code to optimize the radial build of the blanket, considering the nuclear and thermal-hydraulic performance simultaneously. Two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis process; and (2) a preliminary optimization algorithm using one or multiple specified objectives subject to design criteria, in the form of constraints imposed on design variables and performance parameters within the possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeding blanket in the China Fusion Engineering Testing Reactor (CFETR) project.

  20. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier  approaches, based on some degree of heuristics, to the use of  more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie...

  1. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance

  2. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO in continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO for continuous unconstrained optimization problems. The key operations of IRPEO include generation of real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to power-law probability distribution, generation of new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with the dimension N=30 have shown that IRPEO is competitive or even better than the recently reported various genetic algorithm (GA versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO to other evolutionary algorithms such as original population-based EO, particle swarm optimization (PSO, and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
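The key operations listed above can be sketched in a few lines; this is a minimal illustration on an assumed sphere objective, with parameter names and values that are not from the paper:

```python
import random

def irpeo_sketch(dim=5, pop_size=10, iters=2000, tau=1.5, lo=-5.0, hi=5.0):
    """Minimal sketch of a real-coded population-based EO loop: random
    initial population, rank components by badness, pick one per a
    power-law distribution, mutate it uniformly, accept unconditionally."""
    def fitness(x):
        return sum(v * v for v in x)  # sphere function (assumed objective)

    # power-law rank probabilities P(k) ~ k^(-tau), worst component first
    weights = [k ** -tau for k in range(1, dim + 1)]

    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(iters):
        new_pop = []
        for x in pop:
            # rank components by "badness" (largest contribution to fitness)
            ranked = sorted(range(dim), key=lambda i: -abs(x[i]))
            # select a bad component according to the power-law distribution
            k = random.choices(range(dim), weights=weights)[0]
            y = x[:]
            y[ranked[k]] = random.uniform(lo, hi)  # uniform random mutation
            new_pop.append(y)  # accept the new individual unconditionally
        pop = new_pop
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best, fitness(best)
```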

  3. ETF system code: composition and applications

    International Nuclear Information System (INIS)

    Reid, R.L.; Wu, K.F.

    1980-01-01

    A computer code has been developed for application to ETF tokamak system and conceptual design studies. The code determines cost, performance, configuration, and technology requirements as a function of tokamak parameters. The ETF code is structured in a modular fashion in order to allow independent modeling of each major tokamak component. The primary benefit of modularization is that it allows updating of a component module, such as the TF coil module, without disturbing the remainder of the system code as long as the input/output to the modules remains unchanged. The modules may be run independently to perform specific design studies, such as determining the effect of allowable strain on TF coil structural requirements, or the modules may be executed together as a system to determine global effects, such as defining the impact of aspect ratio on the entire tokamak system

  4. A bar coding system for environmental projects

    International Nuclear Information System (INIS)

    Barber, R.B.; Hunt, B.J.; Burgess, G.M.

    1988-01-01

    This paper presents BeCode systems, a bar coding system that provides both nuclear and commercial clients with a data capture and custody management program that is accurate, timely, and beneficial to all levels of project operations. Using bar code identifiers is an essentially paperless and error-free method; its menu card-driven structure speeds the collection of essential data for uploading to a compatible device. The effects of this sequence include real-time information for operator analysis, management review, audits, planning, scheduling, and cost control

  5. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    This technical note provides a brief description of a Java library for Arabic (and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix...

  6. Fusion PIC code performance analysis on the Cori KNL system

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Raman, Karthic [INTEL Corp. (United States)

    2017-05-25

    We study the attainable performance of Particle-In-Cell (PIC) codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance, and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to an 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.

  7. Optimization and Optimal Control in Automotive Systems

    NARCIS (Netherlands)

    Waschl, H.; Kolmanovsky, I.V.; Steinbuch, M.; Re, del L.

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and

  8. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    International Nuclear Information System (INIS)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  9. Optimal Training Systems STTR

    National Research Council Canada - National Science Library

    Best, Brad; Lovett, Marsha

    2005-01-01

    .... Using an optimal model of task performance subject to human constraints may be a more efficient way to develop models of skilled human performance for use in training, especially since optimal models...

  10. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse-height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of standard spectra and of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
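A bare-bones PSO unfolding loop of the kind this abstract describes can be sketched as follows; this is not the SDPSO code itself, and all parameter values are illustrative defaults:

```python
import random

def pso_unfold(response, measured, n_particles=30, iters=300,
               w=0.7, c1=1.5, c2=1.5, seed=1):
    """Sketch of PSO-based spectrum unfolding: minimize the squared
    residual || R*phi - M || over non-negative flux bins phi."""
    random.seed(seed)
    n = len(response[0])

    def cost(phi):
        err = 0.0
        for row, m in zip(response, measured):
            r = sum(a * p for a, p in zip(row, phi)) - m
            err += r * r
        return err

    pos = [[random.random() for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # personal best positions
    gbest = min(pbest, key=cost)         # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = max(0.0, pos[i][d] + vel[i][d])  # flux stays non-negative
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=cost)
    return gbest
```

A quick sanity check with an identity response matrix recovers the measured distribution as the unfolded spectrum.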

  11. User effects on the transient system code calculations. Final report

    International Nuclear Information System (INIS)

    Aksan, S.N.; D'Auria, F.

    1995-01-01

    Large thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and to optimize operational procedures and the plant design itself. The capabilities of these codes are evaluated by comparing code predictions with measured experimental data obtained from various types of separate-effects and integral test facilities. In recent years, some attempts have been made to establish methodologies to evaluate the accuracy and uncertainty of code predictions and consequently to judge the acceptability of the codes. In none of these methodologies has the influence of the code user on the calculated results been directly addressed. In this paper, the results of investigations on user effects in thermal-hydraulic transient system codes are presented and discussed on the basis of some case studies. The general findings show that, in addition to user effects, there are other factors that affect the results of the calculations and that are hidden under user effects. Both the hidden factors and the direct user effects are discussed in detail, and general recommendations and conclusions are presented to control and limit them

  12. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension...

  13. Optimization strategies in complex systems

    NARCIS (Netherlands)

    Bussolari, L.; Contucci, P.; Giardinà, C.; Giberti, C.; Unguendoli, F.; Vernia, C.

    2003-01-01

    We consider a class of combinatorial optimization problems that emerge in a variety of domains among which: condensed matter physics, theory of financial risks, error correcting codes in information transmissions, molecular and protein conformation, image restoration. We show the performances of two

  14. Fast decoding algorithms for coded aperture systems

    International Nuclear Information System (INIS)

    Byard, Kevin

    2014-01-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques

  15. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes degrades when the code word length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced in both the check node and the bit node of the min-sum algorithm. The optimization factor is obtained before running the decoding program, and the sam...
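The check-node half of such a scaled ("normalized") min-sum update can be sketched as follows; the scaling factor `alpha` stands in for the paper's optimization factor, and its value here is illustrative:

```python
def check_node_update(llrs, alpha=0.8):
    """Normalized min-sum check-node update: each outgoing message is the
    scaled product of signs times the minimum magnitude of the *other*
    incoming log-likelihood ratios."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        out.append(alpha * sign * mag)
    return out
```

For inputs `[2.0, -3.0, 1.5]` with `alpha = 0.8`, the outgoing messages are approximately `[-1.2, 1.2, -1.6]`.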

  16. Code system for fast reactor neutronics analysis

    International Nuclear Information System (INIS)

    Nakagawa, Masayuki; Abe, Junji; Sato, Wakaei.

    1983-04-01

    A code system for the analysis of fast reactor neutronics has been developed for ease of use and error reduction. The JOINT code produces the input data file used by the neutronics calculation codes and also prepares the cross section library file in an assigned format. The effective cross sections are saved in the PDS file in a unified format. At the present stage, this code system includes the following codes: SLAROM, ESELEM5 and EXPANDA-G for the production of effective cross sections, and CITATION-FBR, ANISN-JR, TWOTRAN2, PHENIX, 3DB, MORSE, CIPER and SNPERT. In the course of the development, some utility and service programs have been additionally developed. These are used for accessing the PDS file, editing the cross sections, and graphic display. Included in this report are descriptions of the input data format of JOINT and the other programs, and of the function of each subroutine and utility program. The usage of the PDS file is also explained. Appendix A describes the input formats for the revised version of the CIPER code. (author)

  17. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    Science.gov (United States)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
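The algebraic connectivity used here as the optimality measure is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). A dependency-free way to estimate it for a small graph is power iteration on a shifted Laplacian, projecting out the trivial constant eigenvector; this implementation detail is our own, not the paper's:

```python
def algebraic_connectivity(adj, iters=500):
    """Estimate lambda_2 of the Laplacian L = D - A of an undirected graph
    given by adjacency matrix `adj`, via power iteration on c*I - L."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    c = 2 * max(deg) + 1  # shift so that c*I - L is positive definite
    # start from a vector with a component orthogonal to the all-ones vector
    v = [(-1.0) ** i * (i + 1) for i in range(n)]
    for _ in range(iters):
        # w = (c*I - L) v, with L v = D v - A v
        w = [c * v[i] - (deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n)))
             for i in range(n)]
        mean = sum(w) / n
        w = [x - mean for x in w]  # project out the 0-eigenvalue eigenvector
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Rayleigh quotient of L at the converged vector gives lambda_2
    Lv = [deg[i] * v[i] - sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
    return sum(v[i] * Lv[i] for i in range(n))
```

For the path graph on three nodes this returns 1, and for the triangle (complete graph K3) it returns 3, matching the known Laplacian spectra.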

  18. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, permitting ready access by user input instructions or directions from automated procedures, and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications, with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability, and should be informative and useful to anyone developing a modular code system of comparable sophistication. Overall, this report summarizes, in a general way, the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background against which work on HTGR reactor physics is being carried out

  19. Plotting system for the MINCS code

    International Nuclear Information System (INIS)

    Watanabe, Tadashi

    1990-08-01

    The plotting system for the MINCS code is described. The transient two-phase flow analysis code MINCS has been developed to provide a computational tool for analysing various two-phase flow phenomena in one-dimensional ducts. Two plotting systems, namely the SPLPLOT system and the SDPLOT system, can be used as the plotting functions. The SPLPLOT system is used for plotting time transients of variables, while the SDPLOT system is for spatial distributions. The SPLPLOT system is based on the SPLPACK system, which is used as a general tool for plotting results of transient analysis codes or experiments. The SDPLOT is based on the GPLP program, which is also regarded as one of the general plotting programs. In the SPLPLOT and the SDPLOT systems, the standardized data format called the SPL format is used in reading data to be plotted. The output data format of MINCS is translated into the SPL format by using the conversion system called the MINTOSPL system. In this report, how to use the plotting functions is described. (author)

  20. Topology optimized permanent magnet systems

    Science.gov (United States)

    Bjørk, R.; Bahl, C. R. H.; Insinga, A. R.

    2017-09-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0.472 is reached, which is an increase of 100% compared to a previous optimized design.

  1. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  2. Optimal Control of Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Vadim Azhmyakov

    2007-01-01

    Full Text Available In the present work, we consider a class of nonlinear optimal control problems, which can be called “optimal control problems in mechanics.” We deal with control systems whose dynamics can be described by a system of Euler-Lagrange or Hamilton equations. Using the variational structure of the solution of the corresponding boundary-value problems, we reduce the initial optimal control problem to an auxiliary problem of multiobjective programming. This technique makes it possible to apply some consistent numerical approximations of a multiobjective optimization problem to the initial optimal control problem. For solving the auxiliary problem, we propose an implementable numerical algorithm.
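For orientation, the controlled Euler-Lagrange dynamics referred to above take the standard form (notation assumed, not taken from the paper):

```latex
% Controlled Euler-Lagrange equations: L(q, \dot q) is the Lagrangian,
% q the generalized coordinates and u the control input.
\frac{\mathrm{d}}{\mathrm{d}t}\,\frac{\partial L}{\partial \dot q_i}
  - \frac{\partial L}{\partial q_i} = u_i , \qquad i = 1, \dots, n
```

The boundary-value problems mentioned in the abstract arise when these equations are solved subject to prescribed initial and terminal states.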

  3. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by the CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-bit symbols, followed by a block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both decoders perform repeated decoding trials and decoding information is exchanged between them

  4. NALAP: an LMFBR system transient code

    International Nuclear Information System (INIS)

    Martin, B.A.; Agrawal, A.K.; Albright, D.C.; Epel, L.G.; Maise, G.

    1975-07-01

    NALAP is an LMFBR system transient code. This code, adapted from the light-water reactor transient code RELAP 3B, simulates the thermal-hydraulic response of sodium-cooled fast breeder reactors when subjected to postulated accidents, such as a massive pipe break, as well as a variety of other upset conditions that do not disrupt the system geometry. Various components of the plant are represented by control volumes. These control volumes are connected by junctions, some of which may be leak or fill junctions. The fluid flow equations are modeled as compressible, single-stream flow with momentum flux in one dimension. The transient response is computed by integrating the thermal-hydraulic conservation equations from user-initialized operating conditions with an implicit numerical scheme. A point kinetics approximation is used to represent the time-dependent heat generation in the reactor core

  5. An engineering code to analyze hypersonic thermal management systems

    Science.gov (United States)

    Vangriethuysen, Valerie J.; Wallace, Clark E.

    1993-01-01

    Thermal loads on current and future aircraft are increasing and, as a result, are stressing the energy collection, control, and dissipation capabilities of current thermal management systems and technology. The thermal loads for hypersonic vehicles will be no exception. In fact, with their projected high heat loads and fluxes, hypersonic vehicles are a prime example of systems that will require thermal management systems (TMS) optimized and integrated with the entire vehicle to the maximum extent possible during the initial design stages. This is necessary not only to meet operational requirements, but also to satisfy weight and performance constraints so that the vehicle can take off and complete its mission successfully. To meet this challenge, the TMS can no longer consist of two or more entirely independent systems, nor can thermal management be an afterthought in the design process, the pervasive approach in the past. Instead, a TMS that is integrated throughout the entire vehicle and subsequently optimized will be required. To accomplish this, a method that iteratively optimizes the TMS throughout the vehicle will be not only highly desirable but advantageous, in order to reduce the man-hours normally required to conduct the necessary tradeoff studies and comparisons. A thermal management engineering computer code that is under development and managed at Wright Laboratory, Wright-Patterson AFB, is discussed. The primary goal of the code is to aid in the development of a hypersonic vehicle TMS that has been optimized and integrated on a total vehicle basis.

  6. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.
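The effect of alternate symbol inversion on transition-free runs can be illustrated directly; this is a toy sketch, and the function names are ours:

```python
def longest_transition_free_run(bits):
    """Length of the longest constant (transition-free) run in a bit stream."""
    best = run = 1
    for a, b in zip(bits, bits[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def alternate_symbol_inversion(bits):
    """Invert every other channel symbol, the scheme discussed above for
    enriching symbol transitions for the receiver's symbol synchronizer."""
    return [b ^ 1 if i % 2 else b for i, b in enumerate(bits)]
```

An all-zero stream, which would starve the synchronizer of transitions, becomes a strictly alternating one. Note that an input that already alternates every symbol is mapped back to a constant run, which is why a bound on the encoder output's transition-free runs is still needed.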

  7. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.

  8. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Today many optimization codes based on expert experience or search algorithms (such as SA, GA, ANN and ACO) have been developed, but how to improve their search efficiency and engineering usability still needs further research. CSA (Characteristic Statistic Algorithm) is a global optimization algorithm with high efficiency developed by our team. The performance of CSA has been proved on many problems (such as Traveling Salesman Problems). The idea of CSA is to guide the search direction using the statistical distribution of characteristic values. This algorithm is quite suitable for fuel management optimization. An optimization code with CSA has been developed and used on many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code, such as: 1. Considering the asymmetry of burn-up within one assembly, the rotation of each assembly is added as a new optimization variable. 2. The worth of control rods must satisfy the given constraint, so the corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered in this code. 4. To confirm the accuracy of the optimization results, extensive verification of the physics calculation module has been performed, and the parameters of the optimized schemes are checked with the SCIENCE code. The improved optimization code with CSA has been used on the Qinshan nuclear plant of China. The reloadings of cycles 7, 8 and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle reloading (with burnable poisons) are optimized. Many optimized schemes are found by the CSA code.

  9. Bar-code automated waste tracking system

    International Nuclear Information System (INIS)

    Hull, T.E.

    1994-10-01

    The Bar-Code Automated Waste Tracking System was designed as a site-specific program with a general-purpose application for transportability to other facilities. The system is user-friendly, totally automated, and incorporates the use of a drive-up window close to the areas dealing with container preparation, delivery, pickup, and disposal. The system features "stop-and-go" operation rather than long, tedious, error-prone manual entry. The system is designed for automation but allows operators to concentrate on proper handling of waste while maintaining manual entry of data as a backup. A large wall plaque filled with bar-code labels is used to input specific details about any movement of waste

  10. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved... solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification applications may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...
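The greedy side of such a comparison can be sketched with plain matching pursuit, one of several greedy sparse coders; the L1 route would instead solve a lasso-type convex program, which is omitted here:

```python
def matching_pursuit(signal, dictionary, n_nonzero=2):
    """Greedy sparse coding sketch: repeatedly pick the (unit-norm) atom
    most correlated with the residual, record its coefficient, and subtract
    its contribution from the residual."""
    residual = signal[:]
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_nonzero):
        # correlation of each atom with the current residual
        corr = [sum(a * r for a, r in zip(atom, residual)) for atom in dictionary]
        k = max(range(len(dictionary)), key=lambda i: abs(corr[i]))
        coeffs[k] += corr[k]
        residual = [r - corr[k] * a for r, a in zip(residual, dictionary[k])]
    return coeffs, residual
```

On an orthonormal dictionary the recovered coefficients are exact; the sparsity/reconstruction-error trade-off the abstract evaluates appears once the dictionary is overcomplete.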

  11. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar

    2012-10-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and the synthesis results are compared. The VHDL code that limits the range of integer values occupies less area than the code that does not. This VHDL coding method is suitable for multi-stage circuits.

  12. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar

    2015-05-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding. It addresses arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculations. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. The VHDL code that limits the range of integer values occupies less area than the one that does not. This VHDL coding method is suitable for multi-stage circuits.

  13. Topology optimized permanent magnet systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian; Insinga, Andrea Roberto

    2017-01-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron...... and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0......

  14. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst case information leakage...... to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes, whose GRWs are all optimal for fixed packet and code sizes but varying...... length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank...

  15. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  16. CASKETSS: a computer code system for thermal and structural analysis of nuclear fuel shipping casks

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1989-02-01

    A computer program CASKETSS has been developed for the thermal and structural analysis of nuclear fuel shipping casks. CASKETSS means a modular code system for CASK Evaluation code system Thermal and Structural Safety. The main features of CASKETSS are as follows: (1) Thermal and structural analysis computer programs for one-, two- and three-dimensional geometries are contained in the code system. (2) Some of the computer programs in the code system have been programmed to provide near optimal speed on vector processing computers. (3) Data libraries for thermal and structural analysis are provided in the code system. (4) An input data generator is provided in the code system. (5) A graphic computer program is provided in the code system. In the paper, a brief illustration of the calculation method, input data and sample calculations are presented. (author)

  17. Optimizing queries in distributed systems

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2006-01-01

    Full Text Available This research presents the main elements of query optimization in distributed systems. First, the data architecture is presented according to the system-level architecture of a distributed environment. Then the architecture of a distributed database management system (DDBMS) is described on the conceptual level, followed by a presentation of the steps of distributed query execution on these information systems. The research ends with a presentation of some aspects of distributed database query optimization and the strategies used for it.

  18. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    Science.gov (United States)

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
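
The core idea of diversity coding, a dedicated protection link carrying a combination of the data links so any single link failure is recoverable at the receiver without signaling or retransmission, can be sketched with bytewise XOR parity. This is a minimal stand-in, not the paper's exact coding scheme.

```python
from functools import reduce

def parity(streams):
    """Protection link: bytewise XOR of equal-length data streams."""
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*streams))

def recover(surviving, p):
    """Rebuild a single failed stream from the survivors and the parity.
    XOR-ing the parity with all surviving streams cancels them out."""
    return parity(surviving + [p])

links = [b"alpha", b"bravo", b"gamma"]   # three equal-length data links
p = parity(links)                        # sent on a diverse protection path
# Suppose link 1 fails: the receiver restores it immediately from p.
restored = recover([links[0], links[2]], p)
print("recovered:", restored)            # b'bravo'
```

Because recovery is a pure receiver-side XOR over already-arrived data, restoration time is bounded by one decoding pass rather than a signaling round trip, which is what makes the approach "near-hitless".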

  19. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-04-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited for differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified “convergence-constraint” density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional “threshold-constraint” method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  20. System floorplanning optimization

    KAUST Repository

    Browning, David W.; Elansary, Ayman; Shalaby, Mohamed

    2012-01-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system

  1. System floorplanning optimization

    KAUST Repository

    Browning, David W.; Ansary, Ayman M. El; Shalaby, Mohamed

    2013-01-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system

  2. HELIAS module development for systems codes

    Energy Technology Data Exchange (ETDEWEB)

    Warmer, F., E-mail: Felix.Warmer@ipp.mpg.de; Beidler, C.D.; Dinklage, A.; Egorov, K.; Feng, Y.; Geiger, J.; Schauer, F.; Turkin, Y.; Wolf, R.; Xanthopoulos, P.

    2015-02-15

    In order to study and design next-step fusion devices such as DEMO, comprehensive systems codes are commonly employed. In this work HELIAS-specific models are proposed which are designed to be compatible with systems codes. The subsequently developed models include: a geometry model based on Fourier coefficients which can represent the complex 3-D plasma shape, a basic island divertor model which assumes diffusive cross-field transport and high radiation at the X-point, and a coil model which combines scaling aspects based on the HELIAS 5-B reactor design in combination with analytic inductance and field calculations. In addition, stellarator-specific plasma transport is discussed. A strategy is proposed which employs a predictive confinement time scaling derived from 1-D neoclassical and 3-D turbulence simulations. This paper reports on the progress of the development of the stellarator-specific models while an implementation and verification study within an existing systems code will be presented in a separate work. This approach is investigated to ultimately allow one to conduct stellarator system studies, develop design points of HELIAS burning plasma devices, and to facilitate a direct comparison between tokamak and stellarator DEMO and power plant designs.

  3. An Optimal Lower Eigenvalue System

    Directory of Open Access Journals (Sweden)

    Yingfan Liu

    2011-01-01

    Full Text Available An optimal lower eigenvalue system is studied, and main theorems, including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability, are obtained. As applications, solvability results for some von-Neumann-type input-output inequalities, growth and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, are also obtained.

  4. Outage Analysis and Optimization of SWIPT in Network-Coded Two-Way Relay Networks

    Directory of Open Access Journals (Sweden)

    Ruihong Jiang

    2017-01-01

    Full Text Available This paper investigates the outage performance of simultaneous wireless information and power transfer (SWIPT) in network-coded two-way relay systems, where a relay first harvests energy from the signals transmitted by two sources and then uses the harvested energy to forward the received information to the two sources. We consider two transmission protocols, the power splitting two-way relay (PS-TWR) and time switching two-way relay (TS-TWR) protocols. We present two explicit expressions for the system outage probability of the two protocols and further derive approximate expressions for them in high and low SNR cases. To explore the system performance limits, two optimization problems are formulated to minimize the system outage probability. Since the problems are nonconvex and have no known solution methods, a genetic algorithm (GA) based algorithm is designed. Numerical and simulation results validate our theoretical analysis. It is shown that, by jointly optimizing the time assignment and SWIPT receiver parameters, a great performance gain can be achieved for both PS-TWR and TS-TWR. Moreover, the optimized PS-TWR always outperforms the optimized TS-TWR in terms of outage performance. Additionally, the effects of parameters including relay location and transmit powers are also discussed, which provide some insights for the SWIPT-enabled two-way relay networks.
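
The GA-based optimization the record describes, minimizing outage probability jointly over the SWIPT receiver parameter and the time assignment, can be sketched with a toy stand-in objective. The `outage` function below is invented purely for illustration (the paper's actual outage expressions depend on channel statistics and protocol details); only the GA mechanics of selection, crossover, and mutation are the point.

```python
import random
random.seed(1)

def outage(rho, tau):
    """Toy stand-in for the system outage probability as a function of the
    power-splitting ratio rho and the time assignment tau (NOT the paper's
    expression): penalizes extreme splits between harvesting and decoding."""
    return (rho - 0.3) ** 2 + (tau - 0.5) ** 2 + 0.05

def ga(pop_size=30, gens=60):
    """Minimal elitist genetic algorithm over (rho, tau) in [0,1]^2."""
    pop = [(random.random(), random.random()) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: outage(*p))
        elite = pop[: pop_size // 2]                       # selection
        children = []
        while len(elite) + len(children) < pop_size:
            (r1, t1), (r2, t2) = random.sample(elite, 2)   # crossover
            child = ((r1 + r2) / 2 + random.gauss(0, 0.05),  # + mutation
                     (t1 + t2) / 2 + random.gauss(0, 0.05))
            children.append((min(max(child[0], 0.0), 1.0),
                             min(max(child[1], 0.0), 1.0)))
        pop = elite + children
    return min(pop, key=lambda p: outage(*p))

best = ga()
print("best (rho, tau):", best, "outage:", outage(*best))
```

A population-based search like this needs no convexity or gradients, which is why the authors reach for a GA once the joint problem is shown to be nonconvex.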

  5. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation...... be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter...

  6. Analog system for computing sparse codes

    Science.gov (United States)

    Rozell, Christopher John; Johnson, Don Herrick; Baraniuk, Richard Gordon; Olshausen, Bruno A.; Ortman, Robert Lowell

    2010-08-24

    A parallel dynamical system for computing sparse representations of data, i.e., where the data can be fully represented in terms of a small number of non-zero code elements, and for reconstructing compressively sensed images. The system is based on the principles of thresholding and local competition that solve a family of sparse approximation problems corresponding to various sparsity metrics. The system utilizes Locally Competitive Algorithms (LCAs), in which nodes in a population continually compete with neighboring units using (usually one-way) lateral inhibition to calculate coefficients representing an input in an overcomplete dictionary.
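
The thresholding-plus-lateral-inhibition dynamics described here can be sketched numerically: each node integrates its feed-forward drive, is inhibited by active neighbors in proportion to dictionary coherence, and emits a soft-thresholded output. Parameters (threshold, step size, dictionary size) are illustrative, not taken from the patent.

```python
import numpy as np

def lca(D, x, lam=0.2, dt=0.05, steps=400):
    """Locally Competitive Algorithm sketch: leaky-integrator dynamics with
    lateral inhibition D^T D and soft thresholding (L1 sparsity metric)."""
    b = D.T @ x                          # feed-forward drive per node
    G = D.T @ D - np.eye(D.shape[1])     # lateral inhibition, no self-term
    u = np.zeros(D.shape[1])             # internal (membrane) potentials
    for _ in range(steps):
        a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded output
        u += dt * (b - u - G @ a)        # competition: active nodes inhibit
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

rng = np.random.default_rng(3)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)           # unit-norm overcomplete dictionary
truth = np.zeros(64)
truth[[5, 40]] = [1.5, -2.0]             # two active dictionary elements
a = lca(D, D @ truth)
print("active units:", np.flatnonzero(np.abs(a) > 1e-3))
```

At convergence the few nodes that win the competition carry the sparse code; the same dynamics map naturally onto the parallel analog hardware the record targets.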

  7. A Consistent System for Coding Laboratory Samples

    Science.gov (United States)

    Sih, John C.

    1996-07-01

    A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.
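
The scheme described, a constant research-notebook page number per reaction, a letter per product, and a marker for the purification stage, can be sketched as a small label generator. The exact field order and stage letters below are hypothetical; the paper's precise format may differ.

```python
def sample_code(initials, notebook_page, product, stage):
    """Compose a lab-sample label from researcher initials, the notebook
    page (kept constant for every sample of one reaction), a product
    letter, and a purification-stage marker. Format is hypothetical."""
    stages = {"crude": "C", "recrystallized": "R",
              "chromatographed": "K", "distilled": "D"}
    return f"{initials}-{notebook_page}-{product}{stages[stage]}"

# Two products of the same reaction (page 142) at different stages:
print(sample_code("JCS", 142, "A", "crude"))            # JCS-142-AC
print(sample_code("JCS", 142, "B", "chromatographed"))  # JCS-142-BK
```

Because the page number is shared across all samples of a reaction, sorting labels lexicographically groups a reaction's crude mixtures and purified products together, which is the filing property the abstract emphasizes.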

  8. Burnup calculation code system COMRAD96

    International Nuclear Information System (INIS)

    Suyama, Kenya; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu.

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering the change of the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  9. Burnup calculation code system COMRAD96

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering the change of the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  10. A mean field theory of coded CDMA systems

    International Nuclear Information System (INIS)

    Yano, Toru; Tanaka, Toshiyuki; Saad, David

    2008-01-01

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems

  11. A mean field theory of coded CDMA systems

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Toru [Graduate School of Science and Technology, Keio University, Hiyoshi, Kohoku-ku, Yokohama-shi, Kanagawa 223-8522 (Japan); Tanaka, Toshiyuki [Graduate School of Informatics, Kyoto University, Yoshida Hon-machi, Sakyo-ku, Kyoto-shi, Kyoto 606-8501 (Japan); Saad, David [Neural Computing Research Group, Aston University, Birmingham B4 7ET (United Kingdom)], E-mail: yano@thx.appi.keio.ac.jp

    2008-08-15

    We present a mean field theory of code-division multiple-access (CDMA) systems with error-control coding. On the basis of the relation between the free energy and mutual information, we obtain an analytical expression of the maximum spectral efficiency of the coded CDMA system, from which a mean-field description of the coded CDMA system is provided in terms of a bank of scalar Gaussian channels whose variances in general vary at different code symbol positions. Regular low-density parity-check (LDPC)-coded CDMA systems are also discussed as an example of the coded CDMA systems.

  12. SRAC95; general purpose neutronics code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Tsuchihashi, Keichiro; Kaneko, Kunio.

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author)

  13. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION) and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual, which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  14. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    OpenAIRE

    Zulfikar, Z

    2012-01-01

    A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding. It addresses arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculations. Conventional and efficient VHDL coding methods are presented and the synthesis result is compared....

  15. Scaling of Thermal-Hydraulic Phenomena and System Code Assessment

    International Nuclear Information System (INIS)

    Wolfert, K.

    2008-01-01

    In the last five decades large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analyses of transients and accidents in nuclear power plants. Many separate effects tests and integral system tests were carried out to establish a data base for code development and code validation. In this context the question has to be answered, to what extent the results of down-scaled test facilities represent the thermal-hydraulic behaviour expected in a full-scale nuclear reactor under accident conditions. Scaling principles, developed by many scientists and engineers, present a scientific-technical basis and give a valuable orientation for the design of test facilities. However, it is impossible for a down-scaled facility to reproduce all physical phenomena in the correct temporal sequence and in the kind and strength of their occurrence. The designer needs to optimize a down-scaled facility for the processes of primary interest. This inevitably leads to scaling distortions of other processes of lesser importance. Taking these weak points into account, a goal-oriented code validation strategy is required, based on the analyses of separate effects tests and integral system tests as well as transients that have occurred in full-scale nuclear reactors. The CSNI validation matrices are an excellent basis for fulfilling this task. Separate effects tests in full scale play an important role here.

  16. Optimizing electrical distribution systems

    International Nuclear Information System (INIS)

    Scott, W.G.

    1990-01-01

    Electrical utility distribution systems are in the middle of an unprecedented technological revolution in planning, design, maintenance and operation. The prime movers of the revolution are the major economic shifts that affect decision making. The major economic influence on the revolution is the cost of losses (technical and nontechnical). The vehicle of the revolution is the computer, which enables decision makers to examine alternatives in greater depth and detail than their predecessors could. The more important elements of the technological revolution are: system planning, computers, load forecasting, analytical systems (primary systems, transformers and secondary systems), system losses and coming technology. The paper is directed towards the rather unique problems encountered by engineers of utilities in developing countries - problems that are being solved through high technology, such as the recent World Bank-financed engineering computer system for Sri Lanka. This system includes a DEC computer, digitizer, plotter and engineering software to model the distribution system via a digitizer, analyse the system and plot single-line diagrams. (author). 1 ref., 4 tabs., 6 figs

  17. SALT [System Analysis Language Translater]: A steady state and dynamic systems code

    International Nuclear Information System (INIS)

    Berry, G.; Geyer, H.

    1983-01-01

    SALT (System Analysis Language Translater) is a lumped parameter approach to system analysis which is totally modular. The modules are all precompiled and only the main program, which is generated by SALT, needs to be compiled for each unique system configuration. This is a departure from other lumped parameter codes where all models are written as MACROS and then compiled for each unique configuration, usually after all of the models are lumped together and sorted to eliminate undetermined variables. The SALT code contains a robust and sophisticated steady-state finder (non-linear equation solver), optimization capability and an enhanced GEAR integration scheme which makes use of sparsity and algebraic constraints. The SALT systems code has been used for various technologies. The code was originally developed for open-cycle magnetohydrodynamic (MHD) systems. It was easily extended to liquid metal MHD systems by simply adding the appropriate models and property libraries. Similarly, the model and property libraries were expanded to handle fuel cell systems, flue gas desulfurization systems, combined cycle gasification systems, fluidized bed combustion systems, ocean thermal energy conversion systems, geothermal systems, nuclear systems, and conventional coal-fired power plants. Obviously, the SALT systems code is extremely flexible to be able to handle all of these diverse systems. At present, the dynamic option has only been used for LMFBR nuclear power plants and geothermal power plants. However, it can easily be extended to other systems and can be used for analyzing control problems. 12 refs
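
The role of a steady-state finder (non-linear equation solver) in a lumped-parameter systems code can be sketched with a Newton iteration driving the system residuals to zero. The two-node balance below is an invented toy, not a SALT model; only the solver pattern is the point.

```python
import numpy as np

def newton_steady_state(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Newton iteration for the steady state f(x) = 0 of a lumped-parameter
    system (the role a systems-code steady-state finder plays; sketch only)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            return x
        x = x - np.linalg.solve(jacobian(x), f)   # Newton step: J dx = -f
    raise RuntimeError("steady-state finder did not converge")

# Toy coupled balance: solve x0^2 + x1 = 3 and x0 + x1^2 = 3
f = lambda x: np.array([x[0] ** 2 + x[1] - 3, x[0] + x[1] ** 2 - 3])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(newton_steady_state(f, J, [2.0, 2.0]))  # symmetric root (-1+sqrt(13))/2
```

In a full code the residuals come from the precompiled component modules, and the same Jacobian machinery supports a sparse formulation with algebraic constraints, as the abstract describes.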

  18. Integrated burnup calculation code system SWAT

    International Nuclear Information System (INIS)

    Suyama, Kenya; Hirakawa, Naohiro; Iwasaki, Tomohiko.

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. It enables us to analyze a burnup problem using a neutron spectrum that depends on the irradiation environment, combining SRAC, the Japanese standard thermal reactor analysis code system, with ORIGEN2, a burnup code widely used all over the world. SWAT builds an effective cross-section library based on the results of SRAC, and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has an original cross-section library based on JENDL-3.2, and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, the user can use the latest data in SWAT calculations besides the effective cross sections prepared by SRAC. The user can also make an original ORIGEN2 library using the output file of SWAT. This report presents the concept and user's manual of SWAT. (author)

  19. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  20. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures

  1. The Structural Optimization System CAOS

    DEFF Research Database (Denmark)

    Rasmussen, John

    1990-01-01

    CAOS is a system for structural shape optimization. It is closely integrated in a Computer Aided Design environment and controlled entirely from the CAD-system AutoCAD. The mathematical foundation of the system is briefly presented and a description of the CAD-integration strategy is given together...

  2. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to estimate the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
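
The fitness measure and search described here can be illustrated with a heavily simplified toy: two-base codons, four invented amino acid "polarity" values, a cost equal to the mean property change caused by a point mutation, and a minimal evolutionary search over codon assignments. Everything below (alphabet size, property values, search operator) is an assumption for illustration, not the paper's model.

```python
import itertools, random
random.seed(7)

BASES = "ACGU"
CODONS = ["".join(p) for p in itertools.product(BASES, repeat=2)]  # 16 toy codons
POLARITY = {"aa0": 0.0, "aa1": 1.0, "aa2": 2.0, "aa3": 3.0}        # invented values

def neighbors(codon):
    """All single-point mutants of a codon."""
    return ["".join(codon[:i] + b + codon[i + 1:])
            for i in range(2) for b in BASES if b != codon[i]]

def cost(code):
    """Mean polarity change over all point mutations (lower = better adapted)."""
    deltas = [abs(POLARITY[code[c]] - POLARITY[code[m]])
              for c in CODONS for m in neighbors(c)]
    return sum(deltas) / len(deltas)

def evolve(code, steps=2000):
    """Minimal evolutionary search: swap two codon assignments, keep improvements."""
    best, best_cost = dict(code), cost(code)
    for _ in range(steps):
        a, b = random.sample(CODONS, 2)
        trial = dict(best)
        trial[a], trial[b] = trial[b], trial[a]
        c = cost(trial)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost

random_code = {c: random.choice(list(POLARITY)) for c in CODONS}
opt, opt_cost = evolve(random_code)
print(f"random code cost {cost(random_code):.3f} -> evolved cost {opt_cost:.3f}")
```

Where an evolved code lands relative to a given starting code is exactly the kind of fitness-landscape evidence the paper uses, at full scale with 64 codons and real amino acid properties, to judge how hard the canonical code is to beat.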

  3. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences of point mutations that lead to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternatives, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and, as a way to gauge the difficulty of finding such alternative codes, to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum under both models, when the more realistic model of codon reassignments was used, the evolutionary algorithm had more difficulty in overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the
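The GA-based comparison the abstract describes can be sketched on a deliberately tiny toy code: 16 two-letter codons and four hypothetical "amino acids" with made-up property values. Nothing here reproduces the paper's models; the fitness, the property values and the mutation operator are all illustrative assumptions.

```python
import random

random.seed(1)
BASES = "ACGU"
# Hypothetical polar-requirement-like values for four toy "amino acids".
PROP = {0: 4.8, 1: 7.0, 2: 9.1, 3: 12.5}

def cost(code):
    """Mean squared property change over all single-base point mutations."""
    total, n = 0.0, 0
    for codon, aa in code.items():
        for pos in range(2):
            for b in BASES:
                if b == codon[pos]:
                    continue
                mutant = codon[:pos] + b + codon[pos + 1:]
                total += (PROP[aa] - PROP[code[mutant]]) ** 2
                n += 1
    return total / n

def random_code():
    """Random toy code: each of the 4 amino acids gets a block of 4 codons."""
    codons = [a + b for a in BASES for b in BASES]
    random.shuffle(codons)
    return {c: i // 4 for i, c in enumerate(codons)}

def mutate(code):
    """Codon reassignment: swap the amino acids assigned to two codons."""
    child = dict(code)
    a, b = random.sample(list(child), 2)
    child[a], child[b] = child[b], child[a]
    return child

def evolve(generations=200, pop_size=20):
    pop = [random_code() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)            # elitist selection: keep the better half
        pop = pop[:pop_size // 2]
        pop += [mutate(random.choice(pop)) for _ in range(pop_size - len(pop))]
    return min(pop, key=cost)

best = evolve()
print("best cost:", round(cost(best), 3))
```

Comparing `cost(best)` against the costs of many `random_code()` draws is the statistical approach in miniature; the evolved `best` itself plays the role of the engineering-approach benchmark.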

  4. Development of System Based Code: Case Study of Life-Cycle Margin Evaluation

    International Nuclear Information System (INIS)

    Tai Asayama; Masaki Morishita; Masanori Tashimo

    2006-01-01

    For a leap of progress in the structural design of nuclear plant components, the late Professor Emeritus Yasuhide Asada proposed the System Based Code. Its key concepts are: (1) life-cycle margin optimization, (2) expansion of technical options, as well as combinations of technical options, beyond the current codes and standards, and (3) designing to clearly defined target reliabilities. These concepts are very new to most nuclear power plant designers, who are naturally obliged to design to current codes and standards; applying the concepts of the System Based Code will entirely change practices that designers have long been accustomed to. On the other hand, experienced designers are expected to have expertise that can support and accelerate the development of the System Based Code. Therefore, interfacing with experienced designers is of crucial importance for its development. The authors conducted a survey on the acceptability of the System Based Code concept. The results were analyzed for the possibility of improving structural design, both in terms of reliability and of cost effectiveness, by introducing the System Based Code concept. It was concluded that the System Based Code is beneficial for those purposes. Also described is the expertise elicited from the survey results that can be reflected in the development of the System Based Code. (authors)

  5. Information rates of next-generation long-haul optical fiber systems using coded modulation

    NARCIS (Netherlands)

    Liga, G.; Alvarado, A.; Agrell, E.; Bayvel, P.

    2017-01-01

    A comprehensive study of the coded performance of long-haul spectrally-efficient WDM optical fiber transmission systems with different coded modulation decoding structures is presented. Achievable information rates are derived for three different square QAM formats and the optimal format is
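As a rough illustration of how such achievable information rates are estimated, the sketch below computes a Monte Carlo mutual-information estimate for square QAM over a plain AWGN channel. This is only the memoryless baseline; the paper's fiber-channel modelling and coded-modulation decoder structures are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def qam_constellation(M):
    """Square M-QAM points normalized to unit average symbol energy."""
    m = int(np.sqrt(M))
    pam = 2 * np.arange(m) - (m - 1)
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))

def mi_qam(M, snr_db, n=20000):
    """Monte Carlo estimate of I(X;Y) in bits/symbol for M-QAM on AWGN."""
    x = qam_constellation(M)
    sigma2 = 10 ** (-snr_db / 10)        # noise power for unit-energy symbols
    tx = x[rng.integers(0, len(x), n)]
    noise = np.sqrt(sigma2 / 2) * (rng.normal(size=n) + 1j * rng.normal(size=n))
    y = tx + noise
    num = np.exp(-np.abs(y - tx) ** 2 / sigma2)      # p(y|x) up to a constant
    den = np.exp(-np.abs(y[:, None] - x[None, :]) ** 2 / sigma2).sum(axis=1)
    return np.log2(M) + np.mean(np.log2(num / den))

print("16-QAM at 20 dB:", round(mi_qam(16, 20), 3), "bits/symbol")
```

At high SNR the estimate saturates at log2(M) bits/symbol; at low SNR it falls toward the Shannon limit of the underlying Gaussian channel.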

  6. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng; Alloum, Amira; Al-Naffouri, Tareq Y.

    2016-01-01

    present a configuration of the factors of the second-parity formula, such that the arithmetic complexity can reach the optimal complexity bound when the code length approaches infinity. In the proposed approach, the intermediate data used for the first
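The P/Q parity structure that RAID-6 Reed-Solomon codes build on can be illustrated as follows. This is a plain GF(2^8) sketch with the 0x11D polynomial commonly used in RAID-6 implementations, not the asymptotically optimal-complexity construction of the paper; the disk contents are invented.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1D
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def raid6_parity(disks):
    """P is the plain XOR parity; Q weights disk i by g**i with g = 2."""
    size = len(disks[0])
    P, Q = bytearray(size), bytearray(size)
    for i, disk in enumerate(disks):
        g = gf_pow(2, i)
        for j, byte in enumerate(disk):
            P[j] ^= byte
            Q[j] ^= gf_mul(g, byte)
    return bytes(P), bytes(Q)

disks = [b"\x12\x34", b"\xab\xcd", b"\x55\xaa"]
P, Q = raid6_parity(disks)

# Single-disk recovery needs only P: the lost disk is the XOR of P and
# the surviving disks (Q is needed when two disks fail at once).
lost = 1
rec = bytearray(P)
for i, d in enumerate(disks):
    if i != lost:
        for j, byte in enumerate(d):
            rec[j] ^= byte
```

The arithmetic-complexity question the paper addresses concerns exactly the second (Q) parity formula, whose multiplications dominate the cost.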

  7. Systemization of burnup sensitivity analysis code

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2004-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoints of improvements in plant efficiency with rationally high performance cores and of reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of the critical experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as that of the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden on users caused by the complexity of the burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functionalities in the existing large system. It is not sufficient to unify each computational component, since the computational sequence may change with each item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, it is necessary to systemize the current burnup sensitivity analysis code into functional component blocks that can be divided or combined as the occasion demands.
For this

  8. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of sets of binary sequences, the aim is to form them based on a minimax criterion, by which a sequence is considered optimal according to the intended application. In the course of the research, optimal sequences of order up to 52 were obtained, and a Run Length Encoding analysis was carried out. The analysis showed regularities in the distribution of runs of different lengths in the codes that are optimal on the chosen criterion, which should make it possible to optimize the search process for such codes in the future.
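For short lengths, the minimax search described above can be done by brute force; the sketch below enumerates all ±1 sequences of length 11 (the orders up to 52 in the article require smarter search), picks the one with the smallest peak autocorrelation sidelobe, and reports its run-length encoding.

```python
from itertools import product

def max_sidelobe(seq):
    """Peak absolute aperiodic autocorrelation sidelobe of a +/-1 sequence."""
    n = len(seq)
    return max(abs(sum(seq[i] * seq[i + k] for i in range(n - k)))
               for k in range(1, n))

def rle(seq):
    """Run lengths of the sequence, e.g. (1, 1, -1, 1) -> [2, 1, 1]."""
    runs, cur = [], 1
    for a, b in zip(seq, seq[1:]):
        if a == b:
            cur += 1
        else:
            runs.append(cur)
            cur = 1
    runs.append(cur)
    return runs

n = 11                              # small enough for all 2**11 candidates
best = min(product((1, -1), repeat=n), key=max_sidelobe)
print(best, "sidelobe:", max_sidelobe(best), "RLE:", rle(best))
```

At length 11 a Barker sequence exists, so the minimax optimum has peak sidelobe 1; the RLE of the winner shows the kind of run-length regularity the article analyzes.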

  9. Truss systems and shape optimization

    Science.gov (United States)

    Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana

    2017-07-01

    Structure optimization is an important topic because of its benefits and wide applicability range, from civil engineering to the aerospace and automotive industries, contributing to a greener industry and life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix, and they naturally match the requirements of cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Globally locked structures are obtained using locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure to CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.
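The Optimality Criteria update particularized for truss members can be sketched as below: member areas are the design variables, compliance sensitivities drive the update, and a bisection on the volume multiplier enforces the material budget. The geometry, load and bounds are invented for illustration and do not come from the article.

```python
import numpy as np

# 2D truss ground structure: two pinned left-side nodes, tip load at node 2.
nodes = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)
elems = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 4),
         (3, 1), (1, 5), (4, 2), (1, 4), (2, 5)]
E, ndof = 1.0, 2 * len(nodes)
fixed = [0, 1, 6, 7]                     # pin nodes 0 and 3
free = [d for d in range(ndof) if d not in fixed]
f = np.zeros(ndof)
f[5] = -1.0                              # downward load at node 2

def element_k(e):
    """Unit-area bar stiffness, its global dofs and its length."""
    i, j = e
    d = nodes[j] - nodes[i]
    length = np.linalg.norm(d)
    c, s = d / length
    B = np.array([-c, -s, c, s])
    return np.outer(B, B) * E / length, [2*i, 2*i+1, 2*j, 2*j+1], length

ks = [element_k(e) for e in elems]
L = np.array([k[2] for k in ks])
x = np.full(len(elems), 0.5)             # member areas (design variables)
vol = (x * L).sum()                      # volume budget

def solve(xv):
    K = np.zeros((ndof, ndof))
    for xe, (ke, dof, _) in zip(xv, ks):
        K[np.ix_(dof, dof)] += xe * ke
    u = np.zeros(ndof)
    u[free] = np.linalg.solve(K[np.ix_(free, free)], f[free])
    return u

for it in range(30):
    u = solve(x)
    dc = np.array([-u[dof] @ ke @ u[dof] for ke, dof, _ in ks])  # dC/dx_e
    lo, hi = 1e-9, 1e9                   # bisection on the volume multiplier
    while hi - lo > 1e-8 * (lo + hi):
        lam = 0.5 * (lo + hi)
        xn = np.clip(x * np.sqrt(-dc / (lam * L)), 1e-4, 1.0)
        if (xn * L).sum() > vol:
            lo = lam
        else:
            hi = lam
    x = xn

print("areas:", x.round(3), "compliance:", round(f @ solve(x), 4))
```

Members driven to the lower area bound are the ones a post-processing step would prune before export to CAD.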

  10. Buncher system parameter optimization

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1981-01-01

    A least-squares algorithm is presented to calculate the RF amplitudes and cavity spacings for a series of buncher cavities, each resonating at a frequency that is a multiple of a fundamental frequency of interest. The longitudinal phase-space distribution, obtained by particle tracing through the bunching system, is compared to a desired distribution function of energy and phase. The buncher cavity parameters are adjusted to minimize the difference between these two distributions. Examples are given for zero space charge. The manner in which the method can be extended to include space charge using the 3-D space-charge calculation procedure is indicated.

  11. Transmission over UWB channels with OFDM system using LDPC coding

    Science.gov (United States)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concentrates on Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work, the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased by using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It is placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, distinguished by the type of parity check matrix: randomly generated, and constructed deterministically and optimized for a practical decoder architecture implemented in an FPGA device.

  12. Optimization theory for large systems

    CERN Document Server

    Lasdon, Leon S

    2002-01-01

    Important text examines most significant algorithms for optimizing large systems and clarifying relations between optimization procedures. Much data appear as charts and graphs and will be highly valuable to readers in selecting a method and estimating computer time and cost in problem-solving. Initial chapter on linear and nonlinear programming presents all necessary background for subjects covered in rest of book. Second chapter illustrates how large-scale mathematical programs arise from real-world problems. Appendixes. List of Symbols.

  13. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, it is a very important subject to improve prediction accuracy for neutronic properties in LMFBR cores, from the viewpoints of improvements in plant efficiency with rationally high performance cores and of reliability and safety margins. A distinct improvement in accuracy in nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of the criticality experiments of JUPITER and others are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores such as that of the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden on users caused by the complexity of the burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reason: the computational sequence may change with each item being analyzed or with the purpose, such as interpretation of physical meaning. Therefore, it is necessary to systemize the current burnup sensitivity analysis code into functional component blocks that can be divided or combined as the occasion demands.
For

  14. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  15. PlayNCool: Opportunistic Network Coding for Local Optimization of Routing in Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces PlayNCool, an opportunistic protocol with local optimization based on network coding to increase the throughput of a wireless mesh network (WMN). PlayNCool aims to enhance current routing protocols by (i) allowing random linear network coding transmissions end-to-end, (ii) r...
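Random linear network coding of the kind PlayNCool builds on can be sketched over GF(2): each coded packet is a random linear combination (here, a random XOR subset) of the source packets, and any full-rank set of coded packets decodes by Gaussian elimination. The packet contents, redundancy level and field choice are illustrative; the protocol's relay logic is not reproduced.

```python
import random

random.seed(7)

def encode(packets):
    """One coded packet: XOR of a random nonempty subset of the sources."""
    k = len(packets)
    coeffs = [random.getrandbits(1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1
    payload = bytearray(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            for i, byte in enumerate(p):
                payload[i] ^= byte
    return coeffs, bytes(payload)

def decode(coded, k):
    """Online Gaussian elimination over GF(2); None until rank k is reached."""
    basis = {}
    for c, p in ((list(c), bytearray(p)) for c, p in coded):
        for col in range(k):
            if not c[col]:
                continue
            if col in basis:                 # eliminate with the stored pivot row
                bc, bp = basis[col]
                for j in range(k):
                    c[j] ^= bc[j]
                for j in range(len(p)):
                    p[j] ^= bp[j]
            else:                            # new pivot for this column
                basis[col] = (c, p)
                break
    if len(basis) < k:
        return None
    out = [None] * k
    for col in sorted(basis, reverse=True):  # back-substitution
        c, p = basis[col]
        for j in range(col + 1, k):
            if c[j]:
                for t in range(len(p)):
                    p[t] ^= out[j][t]
        out[col] = p
    return [bytes(p) for p in out]

src = [b"pktA", b"pktB", b"pktC"]
decoded = None
while decoded is None:          # retry on the rare rank-deficient draw
    coded = [encode(src) for _ in range(8)]
    decoded = decode(coded, 3)
```

Because any k linearly independent coded packets suffice, a helper node can inject extra combinations without coordinating which specific packets were lost, which is the property such protocols exploit.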

  16. Multidisciplinary Aerospace Systems Optimization: Computational AeroSciences (CAS) Project

    Science.gov (United States)

    Kodiyalam, S.; Sobieski, Jaroslaw S. (Technical Monitor)

    2001-01-01

    The report describes a method for performing optimization of a system whose analysis is so expensive that it is impractical to let the optimization code invoke it directly, because excessive computational cost and elapsed time might result. In such a situation it is imperative to let the user control the number of times the analysis is invoked. The reported method achieves that with two techniques from the Design of Experiments category: a uniform dispersal of the trial design points over an n-dimensional hypersphere combined with response surface fitting, and the technique of kriging. Analyses of all the trial designs, whose number may be set by the user, are performed before activation of the optimization code, and the results are stored as a database. That code is then executed and refers to the above database. Two applications, one of an airborne laser system and one of an aircraft optimization, illustrate the application of the method.
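The response-surface half of the approach (trial points dispersed on a hypersphere, all expensive analyses run once and stored as a database, optimization run on the cheap fit) can be sketched as below. The kriging variant is omitted, and the "expensive" analysis is a stand-in quadratic test function, not anything from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_analysis(x):
    """Stand-in for the costly solver (invented quadratic test function)."""
    return (x[0] - 1) ** 2 + 2 * (x[1] + 0.5) ** 2 + 3.0

def sphere_points(center, radius, m):
    """m trial points uniformly dispersed in direction over a hypersphere."""
    d = rng.normal(size=(m, len(center)))
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    return center + radius * d

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x1, x2 * x2, x1 * x2])

# 1) Run the expensive analyses once, user-controlled count, store as a database.
center = np.zeros(2)
X = np.vstack([center, sphere_points(center, 1.0, 12)])
y = np.array([expensive_analysis(p) for p in X])

# 2) Fit a quadratic response surface to the database.
beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

# 3) Optimize the cheap surrogate instead of the analysis itself.
grid = np.mgrid[-2:2:81j, -2:2:81j].reshape(2, -1).T
best = grid[np.argmin(quad_features(grid) @ beta)]
print("surrogate minimum near:", best)
```

The optimizer never calls `expensive_analysis` after the database is built, which is exactly the cost-control property the report emphasizes.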

  17. Code system BCG for gamma-ray skyshine calculation

    International Nuclear Information System (INIS)

    Ryufuku, Hiroshi; Numakunai, Takao; Miyasaka, Shun-ichi; Minami, Kazuyoshi.

    1979-03-01

    A code system, BCG, has been developed for conveniently and efficiently calculating gamma-ray skyshine doses using the transport calculation codes ANISN and DOT and the point-kernel calculation codes G-33 and SPAN. To simplify the input to the system, the input forms for these codes are unified, twelve geometric patterns are introduced to define material regions, and standard data are available as a library. To treat complex arrangements of source and shield, it is further possible to use the codes successively, so that the results from one code may be used as input data to the same or another code. (author)

  18. Concatenated coding system with iterated sequential inner decoding

    DEFF Research Database (Denmark)

    Jensen, Ole Riis; Paaske, Erik

    1995-01-01

    We describe a concatenated coding system with iterated sequential inner decoding. The system uses convolutional codes of very long constraint length and operates on iterations between an inner Fano decoder and an outer Reed-Solomon decoder.

  19. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    Science.gov (United States)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over, and difference from, existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient system for visualizing the data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from extracting the spectra of several stars that are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. In the current version, optimal extraction of 2D spectra with an aperture and a Gaussian PSF (point spread function) is provided. In the future, the package will be supplemented with the possibility of building a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show the results of extracting several interesting spectra of objects from different telescopes.
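Optimal extraction with a Gaussian PSF reduces to profile-weighted, variance-weighted averaging along the spatial direction; a minimal Horne-style sketch on a synthetic frame is shown below. The frame, trace center and variances are invented, and none of SPEXTRA's GUI, nebula subtraction or crowded-field handling is reproduced.

```python
import numpy as np

def optimal_extract(frame, center, fwhm, var):
    """Horne-style optimal extraction with a Gaussian spatial profile.
    frame: 2D array (spatial rows x wavelength columns); var: pixel variances."""
    y = np.arange(frame.shape[0])
    sigma = fwhm / 2.355                           # FWHM -> Gaussian sigma
    P = np.exp(-0.5 * ((y - center) / sigma) ** 2)
    P /= P.sum()                                   # normalized profile
    num = (P[:, None] * frame / var).sum(axis=0)
    den = (P[:, None] ** 2 / var).sum(axis=0)
    return num / den                               # flux per wavelength bin

# Synthetic frame: a flat spectrum of flux 100 spread over a Gaussian trace.
y = np.arange(21)
profile = np.exp(-0.5 * ((y - 10) / 1.5) ** 2)
profile /= profile.sum()
frame = 100.0 * profile[:, None] * np.ones((1, 50))
flux = optimal_extract(frame, center=10, fwhm=1.5 * 2.355,
                       var=np.ones_like(frame))
print(flux[:3])
```

Because the weights down-weight pixels where the profile is faint, this estimator is what gives "optimal" extraction its noise advantage over a plain aperture sum.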

  20. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto, E-mail: alan@lmp.ufrj.br, E-mail: andressa@lmp.ufrj.br, E-mail: schirru@lmp.ufrj.br, E-mail: ffreire@eletronuclear.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Eletrobras Termonuclear S.A. (ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    The nuclear reload of a Pressurized Water Reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, it cannot keep the nuclear power plant operating at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation with minimum cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy symmetry and safety constraints. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem, since around 1% of the candidate patterns are local optima: a core of 121 fuel elements has approximately 10^13 combinations and 10^11 local optima. With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed for the nuclear reactor reload optimization problem. ACO has been used successfully in combinatorial problems, and it is expected that ACO-GENES will prove a robust optimization system, since in addition to the ACO optimization it allows important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and will be compared with the actual results of the cycle. (author)
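The ACO side of such a system can be sketched on a toy loading problem: six hypothetical assemblies with given reactivities assigned to six positions with given importance weights, minimizing the peak weighted reactivity. All numbers are invented, and the GENES physics evaluation is replaced by a trivial peak-factor objective.

```python
import random

random.seed(3)

# Toy loading problem: reactivity per assembly, importance weight per position.
react = [1.30, 1.18, 1.05, 0.95, 0.88, 0.80]
weight = [1.00, 0.90, 0.75, 0.60, 0.45, 0.30]
n = 6
tau = [[1.0] * n for _ in range(n)]      # pheromone[position][assembly]

def peak(pattern):
    """Stand-in objective: the peak weighted reactivity of the pattern."""
    return max(w * react[a] for w, a in zip(weight, pattern))

def build():
    """One ant builds a loading pattern, sampling by pheromone level."""
    avail = list(range(n))
    pattern = []
    for pos in range(n):
        a = random.choices(avail, weights=[tau[pos][i] for i in avail])[0]
        avail.remove(a)
        pattern.append(a)
    return pattern

best = None
for it in range(200):
    for p in (build() for _ in range(10)):
        if best is None or peak(p) < peak(best):
            best = p
    for row in tau:                      # pheromone evaporation
        for a in range(n):
            row[a] *= 0.9
    for pos, a in enumerate(best):       # reinforce the best-so-far pattern
        tau[pos][a] += 1.0 / peak(best)

print("pattern:", best, "peak:", round(peak(best), 3))
```

In a real ACO-GENES-style loop, `peak` would be replaced by a call to the physics evaluation, and prior knowledge such as k-infinity or burnup could bias the initial pheromone levels.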

  1. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto

    2017-01-01

    The nuclear reload of a Pressurized Water Reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, it cannot keep the nuclear power plant operating at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation with minimum cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy symmetry and safety constraints. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem, since around 1% of the candidate patterns are local optima: a core of 121 fuel elements has approximately 10^13 combinations and 10^11 local optima. With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed for the nuclear reactor reload optimization problem. ACO has been used successfully in combinatorial problems, and it is expected that ACO-GENES will prove a robust optimization system, since in addition to the ACO optimization it allows important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and will be compared with the actual results of the cycle. (author)

  2. Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel

    Directory of Open Access Journals (Sweden)

    Declercq David

    2007-01-01

    Full Text Available We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC). The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main result, compared to existing work, is to express analytically the EXIT functions of the multiuser decoder with two different approximations of density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially considering the very simple optimization procedure.

  3. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, Ispra, on the use of computer codes for complex systems analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the RALLY computer code package, and the BOUNDS codes. Two reference study cases were executed with each code. The results of the logic/probabilistic analysis, as well as the computation times, are compared.

  4. Development of a graphical interface computer code for reactor fuel reloading optimization

    International Nuclear Information System (INIS)

    Do Quang Binh; Nguyen Phuoc Lan; Bui Xuan Huy

    2007-01-01

    This report presents the results of the project performed in 2007. The aim of the project is to develop a graphical interface computer code that allows refueling engineers to design fuel reloading patterns for a research reactor using a simulated graphical model of the reactor core. In addition, the code can perform refueling optimization calculations based on genetic algorithms as well as simulated annealing. The computer code was verified on a sample problem that relies on operational and experimental data of the Dalat research reactor. The code can play a significant role in in-core fuel management practice at nuclear research reactor centers and in training. (author)

  5. A code system for ADS transmutation studies

    International Nuclear Information System (INIS)

    Brolly, A.; Vertes, P.

    2001-01-01

    An accelerator-driven reactor physical system can be divided into two subsystems: one is the neutron source, the other is the subcritical reactor. Similarly, the modelling of such a system is also split into two parts. The first step is the determination of the spatial distribution and angle-energy spectrum of the neutron source in the target region; the second is the calculation of the neutron flux, which drives the transmutation process in the subcritical system. Accelerators can make neutrons from high-energy protons by spallation, or photoneutrons from accelerated electrons by Bremsstrahlung (e-n converter). The Monte Carlo approach is the only way of modelling such processes, and it might be extended to the whole subcritical system as well. However, a subcritical reactor may be large, it may contain thermal regions, and the lifetime of neutrons may be long. Therefore, a comprehensive Monte Carlo model of such a system is a very time-consuming computational process. It is also unprofitable when applied to system optimization, which requires a comparative study of a large number of system variants. An appropriate method of deterministic transport calculation may adequately satisfy these requirements. Thus, we have built up a coupled calculational model for an ADS to be used for the transmutation of nuclear waste, which we refer to below as the M-c-T system. A flow chart is shown in the Figure. (author)

  6. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki; Kaneko, Kunio.

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author)

  7. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author).

  8. Quo vadis code optimization in high energy physics

    International Nuclear Information System (INIS)

    Jarp, S.

    1994-01-01

    Although performance tuning and optimization can be considered less critical than in the past, there are still many High Energy Physics (HEP) applications and application domains that can profit from such an undertaking. In CERN's CORE (Centrally Operated RISC Environment) where all major RISC vendors are present, this implies an understanding of the various computer architectures, instruction sets and performance analysis tools from each of these vendors. This paper discusses some initial observations after having evaluated the situation and makes some recommendations for further progress

  9. Optical code division multiple access secure communications systems with rapid reconfigurable polarization shift key user code

    Science.gov (United States)

    Gao, Kaiqiang; Wu, Chongqing; Sheng, Xinzhi; Shang, Chao; Liu, Lanlan; Wang, Jian

    2015-09-01

    An optical code division multiple access (OCDMA) secure communications system scheme with a rapidly reconfigurable polarization shift keying (Pol-SK) bipolar user code is proposed and demonstrated. Compared to fixed-code OCDMA, constantly changing the user code greatly improves the anti-eavesdropping performance. A Pol-SK OCDMA experiment with a 10 Gchip/s user code and a 1.25 Gb/s user payload data rate has been realized, which shows that this scheme is tolerant and could be easily realized.

  10. Status of reactor core design code system in COSINE code package

    International Nuclear Information System (INIS)

    Chen, Y.; Yu, H.; Liu, Z.

    2014-01-01

    For self-reliance, COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented. The preliminary verification results are illustrated. And some special efforts, such as updated theory models and direct data access application, are also made to achieve better software product. (author)

  11. Status of reactor core design code system in COSINE code package

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.; Yu, H.; Liu, Z., E-mail: yuhui@snptc.com.cn [State Nuclear Power Software Development Center, SNPTC, National Energy Key Laboratory of Nuclear Power Software (NEKLS), Beijiing (China)

    2014-07-01

    For self-reliance, the COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, the recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented, and preliminary verification results are illustrated. Some special efforts, such as updated theory models and direct data access, have also been made to achieve a better software product. (author)

  12. Development and verification of a coupled code system RETRAN-MASTER-TORC

    International Nuclear Information System (INIS)

    Cho, J.Y.; Song, J.S.; Joo, H.G.; Zee, S.Q.

    2004-01-01

    Recently, coupled thermal-hydraulics (T-H) and three-dimensional kinetics codes have been widely used for best-estimate simulations such as the main steam line break (MSLB) and locked rotor problems. This work develops and verifies one such code system by coupling the system T-H code RETRAN, the 3-D kinetics code MASTER and the sub-channel analysis code TORC. The MASTER code has already been applied to such simulations after coupling with the MARS or RETRAN-3D multi-dimensional system T-H codes. The MASTER code contains a sub-channel analysis code, COBRA-III C/P, and the coupled systems MARS-MASTER-COBRA and RETRAN-MASTER-COBRA had already been developed and verified. Building on these previous studies, a new coupled system, RETRAN-MASTER-TORC, is developed and verified as the standard best-estimate simulation code package in Korea. The TORC code has already been applied to the thermal-hydraulics design of several ABB/CE type plants and the Korean Standard Nuclear Power Plants (KSNP), which justifies the choice of TORC rather than COBRA. Because the coupling between the RETRAN and MASTER codes is already established and verified, this work reduces to coupling the TORC sub-channel T-H code with the MASTER neutronics code. The TORC code is a standalone code that solves the T-H equations for a given core problem by reading the input file and finally printing the converged solutions. In the coupled system, however, because TORC receives the pin power distributions from the neutronics code MASTER and transfers the T-H results back to MASTER iteratively, TORC needs to be controlled by the MASTER code and does not need to solve the given problem completely at each iteration step. For this reason, the coupling of the TORC code with the MASTER code requires several modifications in the I/O treatment, flow iteration and calculation logics. The next section of this paper describes the modifications in the TORC code, followed by the TORC control logic of the MASTER code.
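
    The iterative exchange described above, in which the neutronics code supplies pin power distributions and receives T-H feedback until both fields converge, amounts to a Picard (fixed-point) iteration. The sketch below uses hypothetical toy models as stand-ins for the MASTER and TORC solvers; it illustrates the control flow, not the actual coupling interface of either code:

```python
# Minimal sketch of a neutronics <-> sub-channel T-H coupling loop.
# Both "solvers" are hypothetical toy models, not MASTER or TORC.

def neutronics_solve(fuel_temp):
    # Toy model: pin power drops as fuel heats up (Doppler-like feedback).
    return [1.0 / (1.0 + 1e-4 * (t - 300.0)) for t in fuel_temp]

def thermal_hydraulics_solve(pin_power):
    # Toy model: fuel temperature responds linearly to local pin power.
    return [300.0 + 600.0 * p for p in pin_power]

def coupled_solve(n_pins=4, tol=1e-8, max_iter=100):
    """Picard iteration: exchange pin powers and fuel temperatures
    between the two solvers until the temperature field stops changing."""
    fuel_temp = [300.0] * n_pins
    for _ in range(max_iter):
        power = neutronics_solve(fuel_temp)
        new_temp = thermal_hydraulics_solve(power)
        if max(abs(a - b) for a, b in zip(new_temp, fuel_temp)) < tol:
            return power, new_temp
        fuel_temp = new_temp
    raise RuntimeError("coupled iteration did not converge")

power, temp = coupled_solve()
```

With these toy feedback coefficients the update map is a contraction, so the loop converges in a handful of iterations; real coupled codes add relaxation or tighter convergence control around the same basic exchange.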

  13. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation combines cooperation gain and channel coding gain well, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
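
    The girth-4 cycles the abstract refers to can be detected directly on the exponent matrix of a QC-LDPC code. The sketch below implements the standard 2x2-subgrid test (Fossorier's condition) for 4-cycles in a lifted quasi-cyclic graph; it is a generic check, not the paper's joint source-relay design:

```python
from itertools import combinations

def has_girth4(exponents, lift):
    """Check a QC-LDPC exponent matrix for length-4 cycles.
    A 4-cycle exists iff some 2x2 sub-grid of circulant shifts satisfies
    p[i1][j1] - p[i1][j2] + p[i2][j2] - p[i2][j1] == 0 (mod lift).
    Entries of -1 mark all-zero blocks, which cannot close a cycle."""
    rows, cols = len(exponents), len(exponents[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            block = [exponents[i1][j1], exponents[i1][j2],
                     exponents[i2][j2], exponents[i2][j1]]
            if -1 in block:          # an all-zero block breaks the cycle
                continue
            if (block[0] - block[1] + block[2] - block[3]) % lift == 0:
                return True
    return False

# Two identical columns of shifts always close a 4-cycle:
assert has_girth4([[0, 0], [1, 1]], lift=8)
# Distinct shift differences avoid it:
assert not has_girth4([[0, 0], [0, 1]], lift=8)
```

A joint design in the spirit of the paper would run a check like this over the equivalent parity-check matrix that stacks the source and relay codes, rejecting candidate shift values that close a cycle.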

  14. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit.
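
    The trade-off described above can be made concrete with a back-of-the-envelope model: small Galois fields waste transmissions on linearly dependent packets, while large fields inflate the coding-vector header carried by every packet. The sketch below uses the standard full-rank argument for dense RLNC over GF(q); the payload size is an illustrative assumption, not a figure from the paper:

```python
import math

def expected_tx(g, q):
    """Expected packets a receiver must collect to decode a generation
    of g packets with dense RLNC over GF(q) (full-rank argument:
    the j-th innovative packet arrives with probability 1 - q^-j)."""
    return sum(1.0 / (1.0 - q ** (-j)) for j in range(1, g + 1))

def bytes_per_generation(g, q, payload=100):
    """Total bytes on air: linear-dependence overhead plus the coding
    vector (g symbols of log2(q) bits) carried in every packet."""
    header = g * math.log2(q) / 8.0      # coding vector, in bytes
    return expected_tx(g, q) * (payload + header)

# Small fields cost extra (dependent) packets; large fields cost header
# bytes -- the energy-optimal choice depends on g and the payload size.
for q in (2, 16, 256):
    print(q, expected_tx(32, q) - 32, bytes_per_generation(32, q))
```

Multiplying the byte count by a per-byte radio energy cost turns this into the energy-per-bit objective the paper optimises; the paper additionally accounts for the coding computations on the sensor's CPU.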

  15. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    Science.gov (United States)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found.
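
    The forward/reverse distinction is easy to see in a toy tape-based sketch: one backward sweep delivers the gradient with respect to every input at once, which is the property ADJIFOR exploits at FORTRAN scale. The class below is a minimal illustration, not ADJIFOR's interface; the simple stack traversal is only safe here because only the leaf inputs are shared (a general implementation sweeps nodes in reverse topological order):

```python
import math

class Var:
    """Records each operation's local partial derivatives for replay
    in reverse (the essence of reverse-mode differentiation)."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

def sin(x):
    return Var(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    """Propagate d(out)/d(node) back through the recorded operations."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += node.grad * local
            stack.append(parent)

x1, x2 = Var(2.0), Var(3.0)
y = x1 * x2 + sin(x1)       # f(x1, x2) = x1*x2 + sin(x1)
backward(y)
# Analytic gradient: df/dx1 = x2 + cos(x1), df/dx2 = x1
```

Forward-mode tools such as ADIFOR instead carry derivatives alongside the computation, which costs one pass per independent variable; the backward sweep above costs roughly one extra pass regardless of how many inputs there are.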

  16. SCALE Code System 6.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  17. SCALE Code System 6.2.2

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  18. SCALE Code System 6.2.1

    International Nuclear Information System (INIS)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    2016-01-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  19. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses, but they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
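
    A representative locality transformation of the kind evaluated in papers like this one is loop blocking (tiling). The sketch below shows the structure on a matrix transpose; in pure Python the cache effect is not observable, but the tiling pattern mirrors what one would write in C or Fortran, where the naive column walk strides through memory while the tiled version keeps one small block cache-resident:

```python
# Cache-blocking sketch: tile sizes are illustrative; on real hardware
# the best block size depends on the cache parameters -- which is
# precisely the portability problem the paper documents.

def transpose_naive(a, n):
    return [[a[j][i] for j in range(n)] for i in range(n)]

def transpose_blocked(a, n, b=32):
    out = [[0] * n for _ in range(n)]
    for ii in range(0, n, b):            # iterate over b x b tiles...
        for jj in range(0, n, b):
            for i in range(ii, min(ii + b, n)):      # ...then within a
                for j in range(jj, min(jj + b, n)):  # cache-resident tile
                    out[j][i] = a[i][j]
    return out

n = 100
a = [[i * n + j for j in range(n)] for i in range(n)]
assert transpose_blocked(a, n) == transpose_naive(a, n)
```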

  20. Integrated solar energy system optimization

    Science.gov (United States)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.
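
    The hourly energy/mass balance bookkeeping described above can be sketched in a few lines. The profiles and the dispatch rule below are illustrative assumptions, not data or logic from SYSOPT:

```python
# Toy hourly energy-balance loop at one interconnection point:
# meet load from wind + solar first, draw any shortfall from the grid,
# and record any surplus for export (all numbers illustrative).

def hourly_balance(wind_kw, solar_kw, load_kw):
    grid_import, surplus = [], []
    for w, s, l in zip(wind_kw, solar_kw, load_kw):
        net = w + s - l
        grid_import.append(max(0.0, -net))
        surplus.append(max(0.0, net))
    return grid_import, surplus

wind  = [5.0, 3.0, 0.0, 8.0]
solar = [0.0, 6.0, 2.0, 0.0]
load  = [4.0, 4.0, 4.0, 4.0]
gi, su = hourly_balance(wind, solar, load)
```

Summing the import column over a simulated year and pricing it against subsystem capital costs gives the kind of sizing objective a program like SYSOPT optimises.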

  1. The role of stochasticity in an information-optimal neural population code

    International Nuclear Information System (INIS)

    Stocks, N G; Nikitin, A P; McDonnell, M D; Morse, R P

    2009-01-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
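
    A reduced version of the MP-array computation is easy to reproduce: for a binary input, each unit fires when input plus Gaussian noise crosses a threshold, the output is the population spike count, and the MI follows exactly from the binomial count distributions. This simplified sketch (identical thresholds, binary input) only traces MI as a function of the noise level; it does not reproduce the paper's heterogeneous-population optimisation:

```python
import math

def phi(z):                       # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mutual_information(n, sigma, theta=0.5):
    """MI between a binary input x in {0, 1} (equiprobable) and the
    spike count of n threshold units with i.i.d. Gaussian noise."""
    p_fire = [1.0 - phi((theta - x) / sigma) for x in (0.0, 1.0)]
    def binom(k, p):              # count given x is Binomial(n, p)
        return math.comb(n, k) * p**k * (1 - p)**(n - k)
    mi = 0.0
    for k in range(n + 1):
        p_k = 0.5 * (binom(k, p_fire[0]) + binom(k, p_fire[1]))
        for x in (0, 1):
            joint = 0.5 * binom(k, p_fire[x])
            if joint > 0.0:
                mi += joint * math.log2(joint / (0.5 * p_k))
    return mi

# With identical thresholds the MI simply falls as the noise grows;
# the paper's point is that letting the thresholds differ changes the
# optimal architecture as a function of this same noise parameter.
curve = {s: mutual_information(5, s) for s in (0.2, 0.5, 2.0)}
```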

  2. The role of stochasticity in an information-optimal neural population code

    Energy Technology Data Exchange (ETDEWEB)

    Stocks, N G; Nikitin, A P [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom); McDonnell, M D [Institute for Telecommunications Research, University of South Australia, SA 5095 (Australia); Morse, R P, E-mail: n.g.stocks@warwick.ac.u [School of Life and Health Sciences, Aston University, Birmingham B4 7ET (United Kingdom)

    2009-12-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each displays Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.

  3. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
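
    The dynamic-programming event localization can be illustrated with a generic optimal-segmentation sketch: split a 1-D parameter track into K segments so that total within-segment squared error is minimal. This simplified criterion is a stand-in for the paper's TD model accuracy measure, not its actual cost function:

```python
def segment(x, k):
    """Optimal split of sequence x into k contiguous segments,
    minimising the summed within-segment squared error via DP."""
    n = len(x)
    pre, pre2 = [0.0], [0.0]          # prefix sums of x and x^2
    for v in x:
        pre.append(pre[-1] + v)
        pre2.append(pre2[-1] + v * v)

    def sse(i, j):                    # cost of one segment x[i:j]
        s, s2, m = pre[j] - pre[i], pre2[j] - pre2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    cost = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    cost[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = cost[seg - 1][i] + sse(i, j)
                if c < cost[seg][j]:
                    cost[seg][j], back[seg][j] = c, i
    bounds, j = [], n                 # recover the optimal boundaries
    for seg in range(k, 0, -1):
        j = back[seg][j]
        bounds.append(j)
    return cost[k][n], sorted(bounds[:-1])

x = [0.0, 0.1, 0.0, 5.0, 5.1, 5.0, 9.0, 9.1]
best, cuts = segment(x, 3)            # event boundaries at the jumps
```

In the speech-coding setting, x would be a track of spectral parameters and the recovered boundaries play the role of the TD event locations, with the DP guaranteeing a globally optimal placement under the chosen cost.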

  4. Next generation Zero-Code control system UI

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Developing ergonomic user interfaces for control systems is challenging, especially during machine upgrade and commissioning, where several small changes may suddenly be required. Zero-code systems, such as *Inspector*, provide agile features for creating and maintaining control system interfaces. Moreover, these next-generation Zero-code systems bring simplicity and uniformity and break the boundaries between users and developers. In this talk we present *Inspector*, a CERN-made Zero-code application development system, and we introduce the major differences and advantages of using Zero-code control systems to develop operational UIs.

  5. Variable-length code construction for incoherent optical CDMA systems

    Science.gov (United States)

    Lin, Jen-Yung; Jhou, Jhih-Syue; Wen, Jyh-Horng

    2007-04-01

    The purpose of this study is to investigate multirate transmission in fiber-optic code-division multiple-access (CDMA) networks. In this article, we propose a variable-length code construction for any existing optical orthogonal code to implement a multirate optical CDMA system (called the multirate code system). For comparison, a multirate system where the lower-rate user sends each symbol twice is implemented and called the repeat code system. The repetition as an error-detection code in an ARQ scheme in the repeat code system is also investigated. Moreover, a parallel approach for optical CDMA systems, proposed by Marić et al., is compared with the other systems proposed in this study. Theoretical analysis shows that the bit error probability of the proposed multirate code system is smaller than that of the other systems, especially when the number of lower-rate users is large. Moreover, if there is at least one lower-rate user in the system, the multirate code system accommodates more users than the other systems when the error probability of the system is set below 10^-9.

  6. System for optimizing activation measurements

    International Nuclear Information System (INIS)

    Antonov, V.A.

    1993-01-01

    Optimization procedures make it possible to perform committed activation investigations, reduce the number of experiments, make them less laborious, and increase their productivity. Separate mathematical functions were investigated for given optimization conditions, and these enable numerical optimal parameter values to be established only in the particular cases of specific techniques and mathematical computer programs. In the known mathematical models insufficient account is taken of the variety and complexity of real nuclide mixtures, the influence of background radiation, and the wide diversity of activation measurement conditions, while numerical methods for solving the optimization problem fail to reveal the laws governing the variations of the activation parameters and their functional interdependences. An optimization method was proposed which was mainly used to estimate the time intervals for activation measurements of a mononuclide, binary or ternary nuclide mixture. However, by forming a mathematical model of activation processes it becomes possible to extend the number of nuclides in the mixture and to take account of the influence of background radiation and the diversity of the measurement alternatives. The analytical expressions and nomograms obtained can be used to determine the number of measurements, their minimum errors, their sensitivities when estimating the quantity of the tracer nuclide, the permissible quantity of interfering nuclides, the permissible background radiation intensity, and the flux of activating radiation. In the work described herein these investigations are generalized to include spectrally resolved detection of the activation effect in the presence of the tracer and the interfering nuclides. The analytical expressions are combined into a system from which the optimal activation parameters can be found under different given conditions.
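
    One such optimization, the choice of the counting interval, can be sketched numerically: with a long-lived tracer and a short-lived interfering nuclide, delaying a fixed-length counting window trades tracer counts against decaying interference, giving an interior optimum. All decay constants and activities below are illustrative, not values from the paper:

```python
import math

def window_counts(t, dt, a0, half_life):
    """Counts collected in [t, t+dt] from an activity a0*exp(-lam*t)."""
    lam = math.log(2.0) / half_life
    return (a0 / lam) * (math.exp(-lam * t) - math.exp(-lam * (t + dt)))

def significance(t, dt=60.0):
    s = window_counts(t, dt, a0=1e3, half_life=600.0)   # tracer line
    i = window_counts(t, dt, a0=1e5, half_life=30.0)    # interfering nuclide
    return s / math.sqrt(s + i)   # simple counting-statistics figure of merit

# Waiting lets the short-lived interference die away faster than the
# tracer decays, so the best window start is neither 0 nor very late.
best_t = max(range(0, 901, 10), key=significance)
```

The analytical expressions in the paper play the role of this numerical sweep, giving the optimal intervals in closed form and extending the treatment to background radiation and multi-nuclide mixtures.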

  7. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement.

  8. THE OPTIMAL CONTROL IN THE MODEL OF NETWORK SECURITY FROM MALICIOUS CODE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available The paper deals with a mathematical model of network security. The model is described in terms of nonlinear optimal control. As the quality criterion of the control problem, the total cost of the damage inflicted by the malicious code is chosen, under the additional restriction that the number of recovered nodes is maximized. The Pontryagin maximum principle for the construction of optimal decisions is formulated. The number of switching points of the optimal control is found. The explicit form of the optimal control is given using the method of Lagrange multipliers.

  9. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption.

  10. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple ''figure-of-merit'' cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  11. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T.; Rollstin, J.A.; Chanin, D.I.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs

  12. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Rollstin, J.A.; Chanin, D.I.; Jow, H.N.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management

  13. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  14. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-01-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detections. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified "convergence-constraint" density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional "threshold-constraint" method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  15. Variable code gamma ray imaging system

    International Nuclear Information System (INIS)

    Macovski, A.; Rosenfeld, D.

    1979-01-01

A gamma-ray source distribution in the body is imaged onto a detector using an array of apertures. The transmission of each aperture is modulated using a code such that the individual views of the source through each aperture can be decoded and separated. The codes are chosen to maximize the signal-to-noise ratio for each source distribution. These codes determine the photon collection efficiency of the aperture array. Planar arrays are used for volumetric reconstructions and circular arrays for cross-sectional reconstructions. 14 claims

  16. Channel coding in the space station data system network

    Science.gov (United States)

    Healy, T.

    1982-01-01

    A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly support the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.

  17. Recent developments in the Los Alamos radiation transport code system

    International Nuclear Information System (INIS)

    Forster, R.A.; Parsons, K.

    1997-01-01

A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code, and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results

  18. Module type plant system dynamics analysis code (MSG-COPD). Code manual

    International Nuclear Information System (INIS)

    Sakai, Takaaki

    2002-11-01

MSG-COPD is a module-type plant system dynamics analysis code which incorporates a multi-dimensional thermal-hydraulics calculation module to analyze pool-type fast breeder reactors. Explanations of each module and the methods for preparing the input data are given in this code manual. (author)

  19. Cogeneration system simulation/optimization

    International Nuclear Information System (INIS)

    Puppa, B.A.; Chandrashekar, M.

    1992-01-01

Companies are increasingly turning to computer software programs to improve and streamline the analysis of cogeneration systems. This paper introduces a computer program which originated with research at the University of Waterloo. The program can simulate and optimize any type of cogeneration plant layout. An application of the program to a cogeneration feasibility study for a university campus is described. The Steam and Power Plant Optimization System (SAPPOS) is a PC software package which allows users to model any type of steam/power plant on a component-by-component basis. Individual energy/steam balances can be done quickly to model any scenario. A typical-days-per-month cogeneration simulation can also be carried out to provide a detailed monthly cash flow and energy forecast. This paper reports that SAPPOS can be used for scoping, feasibility, and preliminary design work, along with financial studies, gas contract studies, and optimizing the operation of completed plants. In the feasibility study presented, SAPPOS is used to evaluate both diesel engine and gas turbine combined cycle options

  20. The application of LDPC code in MIMO-OFDM system

    Science.gov (United States)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication; it can overcome the frequency-selective fading of the wireless channel, increase system capacity, and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low-density parity-check) code is a kind of error-correcting code which can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in the MIMO-OFDM system.

  1. SRAC2006: A comprehensive neutronics calculation code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Kugo, Teruhiko; Kaneko, Kunio; Tsuchihashi, Keichiro

    2007-02-01

The SRAC is a code system applicable to neutronics analysis of a variety of reactor types. Since the publication of the second version of the users manual (JAERI-1302) in 1986 for the SRAC system, a number of additions and modifications to the functions and the library data have been made to establish a comprehensive neutronics code system. The current system includes major neutron data libraries (JENDL-3.3, JENDL-3.2, ENDF/B-VII, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, etc.), and integrates five elementary codes for neutron transport and diffusion calculation: PIJ, based on the collision probability method and applicable to 16 kinds of lattice models; the SN transport codes ANISN (1D) and TWOTRAN (2D); and the diffusion codes TUD (1D) and CITATION (multi-D). The system also includes an auxiliary code, COREBN, for multi-dimensional core burn-up calculation. (author)

  2. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides a complete description of the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system, and point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to that of RELAP. This similarity to the RELAP5 input is intentional, as this input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  3. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

Full Text Available We propose a Stereoscopic Visual Attention (SVA) based regional bit allocation optimization for Multiview Video Coding (MVC) by exploiting visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA-model-based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention clues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by  dB at the cost of insensitive image quality degradation of the background image.

  4. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    Science.gov (United States)

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and Eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
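The closed-form PCA calculation described above can be illustrated on synthetic two-attribute data with the principal correlation r = 0.97 reported for the stimuli; the data-generation recipe and helper function below are our own illustrative assumptions, not the authors' simulations:

```python
import math
import random

def covariance_eigenvalues(xs, ys):
    """Eigenvalues of the 2x2 covariance matrix of two attribute dimensions."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n
    syy = sum((y - my) ** 2 for y in ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 + disc, tr / 2 - disc  # shared vs. unshared variance

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(5000)]
# second attribute correlated with the first at r ~ 0.97
ys = [0.97 * x + math.sqrt(1 - 0.97 ** 2) * random.gauss(0, 1) for x in xs]
lam1, lam2 = covariance_eigenvalues(xs, ys)
# lam1 captures the shared covariance (close to 1 + r); lam2 the orthogonal residual
```

The dominant eigenvalue absorbs nearly all the variance when the attributes covary strongly, which mirrors the finding that perceptual organization weights shared over unshared covariance.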

  5. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems, errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  6. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP-based systems the most important performance constraint is node-adapter contention, while for 3D-torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP-based systems, at 3% to 7%. On torus-based systems, errors of 29% are higher, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms

  7. Demonstration of Vibrational Braille Code Display Using Large Displacement Micro-Electro-Mechanical Systems Actuators

    Science.gov (United States)

    Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa

    2012-06-01

    In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones. Therefore, when each cell of the Braille code vibrates at optimal frequencies, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display that used actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM) as cells. The HDAM that encapsulated incompressible liquids in microchambers with two flexible polymer membranes could amplify the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies higher than 50 Hz than at vibration frequencies lower than 50 Hz, which verified that the proposed vibrational Braille code display is efficient by successfully exploiting the characteristics of human tactile receptors.

  8. Basic concept of common reactor physics code systems. Final report of working party on common reactor physics code systems (CCS)

    International Nuclear Information System (INIS)

    2004-03-01

A working party on common reactor physics code systems was organized for two years (2001-2002) under the Research Committee on Reactor Physics of JAERI. This final report is a compilation of the working party's activity on common reactor physics code systems during those two years. The objective of the working party is to clarify the basic concept of common reactor physics code systems, so as to improve the convenience of reactor physics code systems for reactor physics researchers in Japan in their various fields of research and development. We held four meetings during the two years, investigated the status of reactor physics code systems and innovative software technologies, and discussed the basic concept of common reactor physics code systems. (author)

  9. Los Alamos radiation transport code system on desktop computing platforms

    International Nuclear Information System (INIS)

    Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.; West, J.T.

    1990-01-01

The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss the hardware systems on which the codes run and present code performance comparisons for various machines

  10. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Chanin, D.I.; Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian

    1990-02-01

This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems

  11. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  12. Performance enhancement of optical code-division multiple-access systems using transposed modified Walsh code

    Science.gov (United States)

    Sikder, Somali; Ghosh, Shila

    2018-02-01

    This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.
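The abstract does not reproduce the TMWC construction itself, but the property its MAI robustness rests on, low in-phase cross-correlation between distinct unipolar codewords, can be illustrated with hypothetical weight-3 codewords (these are not the actual TMWC words):

```python
def in_phase_xcorr(a, b):
    """In-phase cross-correlation of two unipolar (0/1) codewords."""
    return sum(x & y for x, y in zip(a, b))

# Hypothetical weight-3 unipolar codewords -- not the actual TMWC construction
c1 = [1, 1, 0, 1, 0, 0, 0, 0]
c2 = [0, 0, 1, 0, 1, 1, 0, 0]

assert in_phase_xcorr(c1, c1) == 3  # autocorrelation equals the code weight w = 3
assert in_phase_xcorr(c1, c2) == 0  # zero cross-correlation -> no MAI contribution
```

A receiver correlating against its own codeword recovers its chips at full weight while an interferer with zero (or bounded) cross-correlation contributes nothing, which is the mechanism behind the reported MAI suppression.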

  13. The Effect of Slot-Code Optimization in Warehouse Order Picking

    Directory of Open Access Journals (Sweden)

    Andrea Fumi

    2013-07-01

    most appropriate material handling resource configuration. Building on previous work on the effect of slot-code optimization on travel times in single/dual command cycles, the authors broaden the scope to include the most general picking case, thus widening the range of applicability and realising former suggestions for future research.

  14. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology, and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded aperture. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR below that of the phase coded aperture reconstructions.

  15. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

Full Text Available We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) codes for spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the most important noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can reach −10 dBm, which is lower than that required by the other systems.

  16. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  17. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  18. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng

    2016-12-24

In computer storage, RAID 6 is a level of RAID that can tolerate two failed drives. When RAID-6 is implemented with Reed-Solomon (RS) codes, the write-performance penalty lies in the field multiplications for the second parity. In this paper, we present a configuration of the factors of the second-parity formula such that the arithmetic complexity can reach the optimal complexity bound as the code length approaches infinity. In the proposed approach, the intermediate data used for the first parity is also utilized to calculate the second parity. To the best of our knowledge, this is the first approach that allows RAID-6 RS codes to approach the optimal arithmetic complexity.
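For background, the standard RAID-6 P/Q parity over GF(2^8) whose multiplication cost the paper's factor configuration targets can be sketched as follows; this is the textbook scheme (P is a plain XOR, Q weights drive i by g^i with generator g = 2), not the paper's proposed factorization:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11D)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D  # reduce by the field polynomial
    return p

def raid6_parity(data):
    """P is the XOR of the data bytes; Q weights byte i by g^i with g = 2."""
    p = q = 0
    coeff = 1
    for d in data:
        p ^= d
        q ^= gf_mul(coeff, d)
        coeff = gf_mul(coeff, 2)
    return p, q

p, q = raid6_parity([0x11, 0x22, 0x33])
```

The per-byte `gf_mul` calls in Q are exactly the field multiplications the abstract identifies as the write penalty, which is why reusing the first-parity intermediates to form Q reduces the arithmetic cost.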

  19. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

The Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. The Modified Double Weight (MDW) code for even weights and the Enhanced Double Weight (EDW) code for odd weights are two algorithms extending the use of the DW code for SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping, for any weight greater than 2, is proposed. A single code construction algorithm gives the same length increment, bit error rate (BER) calculation, and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  20. Optimal concentrations in transport systems

    Science.gov (United States)

    Jensen, Kaare H.; Kim, Wonjung; Holbrook, N. Michele; Bush, John W. M.

    2013-01-01

Many biological and man-made systems rely on transport systems for the distribution of material, for example matter and energy. Material transfer in these systems is determined by the flow rate and the concentration of material. While the most concentrated solutions offer the greatest potential in terms of material transfer, impedance typically increases with concentration, thus making them the most difficult to transport. We develop a general framework for describing systems for which impedance increases with concentration, and consider material flow in four different natural systems: blood flow in vertebrates, sugar transport in vascular plants, and two modes of nectar drinking in birds and insects. The model provides a simple method for determining the optimum concentration c_opt in these systems. The model further suggests that the impedance at the optimum concentration μ_opt may be expressed in terms of the impedance of the pure (c = 0) carrier medium μ_0 as μ_opt ∼ 2^α μ_0, where the power α is prescribed by the specific flow constraints, for example constant pressure for blood flow (α = 1) or constant work rate for certain nectar-drinking insects (α = 6). Comparing the model predictions with experimental data from more than 100 animal and plant species, we find that the simple model rationalizes the observed concentrations and impedances. The model provides a universal framework for studying flows impeded by concentration, and yields insight into optimization in engineered systems, such as traffic flow. PMID:23594815
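A minimal numerical sketch of finding c_opt for the constant-pressure case, where flow rate scales inversely with impedance so material flux goes as J(c) = c/μ(c). The exponential impedance law μ(c) = μ0·exp(c/c*) below is an illustrative assumption; the paper's impedance laws are system-specific:

```python
import math

def optimal_concentration(mu, c_grid):
    """Maximize the material flux J(c) = c / mu(c) over a grid of concentrations
    (constant-pressure case: flow rate scales inversely with impedance)."""
    return max(c_grid, key=lambda c: c / mu(c))

# Assumed exponential impedance law mu(c) = mu0 * exp(c / c_star) -- purely
# illustrative; the paper treats different laws for blood, sap, and nectar.
mu0, c_star = 1.0, 0.25
mu = lambda c: mu0 * math.exp(c / c_star)

c_grid = [i / 1000 for i in range(1, 1001)]
c_opt = optimal_concentration(mu, c_grid)
# analytically, d/dc [c * exp(-c/c_star)] = 0 gives c_opt = c_star = 0.25
```

The trade-off is visible directly: raising c increases the payload per unit flow but inflates μ(c), and the optimum sits where the two effects balance.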

  1. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE, and PHITS, respectively. Conclusion: We evaluated the dependence of the PDD results obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics model, particle transport mechanics, and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  2. The in-core fuel management code system for VVER reactors

    International Nuclear Information System (INIS)

    Cada, R.; Krysl, V.; Mikolas, P.; Sustek, J.; Svarny, J.

    2004-01-01

    The structure and methodology of a fuel management system for NPP VVER 1000 (NPP Temelin) and VVER 440 (NPP Dukovany) are described. The system is under development at SKODA JS a.s. and is followed by practical applications. The general objectives of the system are maximization of end-of-cycle reactivity, minimization of the fresh fuel inventory through minimization of feed enrichment, and minimization of the burnable poisons (BPs) inventory. There are also safety-related constraints, in which minimization of power peaking plays a dominant role. The general structure of the system consists of preparation of input data for macrocode calculation, algorithms (codes) for optimization of fuel loading, calculation of fuel enrichment, and BPs assignment. At present, core loading can be calculated (optimized) by a Tabu search algorithm (code ATHENA), a genetic algorithm (code Gen1), and a hybrid algorithm, a simplex procedure with application of Tabu search to binary shuffling (code OPAL B). Enrichment search is realized by application of a simplex algorithm (OPAL B code), and BPs assignment by the module BPASS and the simplex algorithm in the OPAL B code. Calculations of real core loadings are presented and a comparison of the different optimization methods is provided. (author)
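The record above combines a Tabu search over binary shuffling with simplex-based enrichment search. A minimal sketch of the Tabu search idea follows, with a toy "power peaking" objective and pairwise assembly swaps standing in for ATHENA's actual move set and objective, which are not given in the abstract:

```python
import random

def tabu_search(initial, objective, neighbors, tenure=5, iters=200, seed=0):
    """Generic tabu search: a short-term memory of recent moves is kept
    to escape local optima. 'neighbors' yields (move, candidate) pairs."""
    rng = random.Random(seed)
    current = best = initial
    best_val = objective(best)
    tabu = {}  # move -> iteration index until which the move is forbidden
    for it in range(iters):
        chosen = None
        for move, cand in neighbors(current, rng):
            val = objective(cand)
            # aspiration rule: a tabu move is allowed only if it beats the best
            if tabu.get(move, -1) >= it and val >= best_val:
                continue
            if chosen is None or val < chosen[2]:
                chosen = (move, cand, val)
        if chosen is None:
            continue
        move, current, val = chosen
        tabu[move] = it + tenure
        if val < best_val:
            best, best_val = current, val
    return best, best_val

# Toy stand-in for a loading-pattern objective: "peaking" modeled as the
# maximum adjacent-pair sum of assembly reactivity worths (hypothetical data).
def peaking(pattern):
    return max(pattern[i] + pattern[i + 1] for i in range(len(pattern) - 1))

def swap_neighbors(pattern, rng, k=10):
    out = []
    n = len(pattern)
    for _ in range(k):
        i, j = rng.sample(range(n), 2)
        cand = list(pattern)
        cand[i], cand[j] = cand[j], cand[i]
        out.append(((min(i, j), max(i, j)), cand))
    return out

layout = [1.30, 1.25, 1.20, 1.10, 1.05, 1.00, 0.95, 0.90]
best, val = tabu_search(layout, peaking, swap_neighbors)
```

The tabu tenure forbids recently used swaps for a few iterations, which lets the search climb out of local minima; the aspiration rule re-admits a tabu move only when it improves on the best pattern found so far.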

  3. Uncertainty and sensitivity analysis using probabilistic system assessment code. 1

    International Nuclear Information System (INIS)

    Honma, Toshimitsu; Sasahara, Takashi.

    1993-10-01

    This report presents the results obtained when applying the probabilistic system assessment code under development to the PSACOIN Level 0 intercomparison exercise organized by the Probabilistic System Assessment Code User Group in the Nuclear Energy Agency (NEA) of the OECD. This exercise is one of a series designed to compare and verify probabilistic codes used in the performance assessment of geological radioactive waste disposal facilities. The computations were performed using the Monte Carlo sampling code PREP and the post-processor code USAMO. The submodels of the waste disposal system were described and coded according to the specification of the exercise. Besides the results required for the exercise, additional uncertainty and sensitivity analyses were performed, and the details of these are also included. (author)
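As an illustration of what such a sampling-and-post-processing chain does, here is a hedged sketch: a hypothetical three-parameter dose model (not the PSACOIN Level 0 specification) is sampled Monte Carlo style, and Spearman rank correlations stand in for a simple sensitivity measure:

```python
import math
import random
import statistics

def rank(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos
    return r

def spearman(xs, ys):
    """Rank correlation: a robust sensitivity indicator for monotone models."""
    rx, ry = rank(xs), rank(ys)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def loguniform(rng, lo, hi):
    return math.exp(rng.uniform(math.log(lo), math.log(hi)))

# Hypothetical toy performance-assessment model: dose ~ inventory * leach / dilution.
rng = random.Random(42)
inv, leach, dil, dose = [], [], [], []
for _ in range(2000):
    i = loguniform(rng, 1e2, 1e4)    # inventory (arbitrary units)
    l = loguniform(rng, 1e-6, 1e-4)  # leach rate (1/yr)
    d = loguniform(rng, 1e3, 1e5)    # dilution volume
    inv.append(i); leach.append(l); dil.append(d)
    dose.append(i * l / d)

mean_dose = statistics.mean(dose)
s_leach = spearman(leach, dose)  # expected strongly positive
s_dil = spearman(dil, dose)      # expected strongly negative
```

The rank correlations identify which sampled parameter drives the output uncertainty, which is essentially what a post-processor like USAMO reports on a far richer model.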

  4. 14 CFR Sec. 1-4 - System of accounts coding.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false System of accounts coding. Sec. 1-4 Section... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is assigned for each balance sheet and profit and loss account. Each balance sheet account is numbered...

  5. Performance Analysis of Optical Code Division Multiplex System

    Science.gov (United States)

    Kaur, Sandeep; Bhatia, Kamaljit Singh

    2013-12-01

    This paper presents a pseudo-orthogonal code generator for an Optical Code Division Multiple Access (OCDMA) system, which helps to reduce the need for bandwidth expansion and improves spectral efficiency. In this paper we investigate the performance of a multi-user OCDMA system to achieve data rates of more than 1 Tbit/s.
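The paper's specific generator is not reproduced in the abstract; as a hedged stand-in, the classical prime-sequence construction is one standard family of pseudo-orthogonal OCDMA codes, with weight p, length p², and in-phase cross-correlation of exactly 1 between distinct codewords:

```python
def prime_code(p, a):
    """Prime-sequence codeword of length p*p and weight p:
    a mark is placed in slot j*p + (a*j mod p) for each j in 0..p-1."""
    word = [0] * (p * p)
    for j in range(p):
        word[j * p + (a * j) % p] = 1
    return word

def correlation(x, y):
    # in-phase (zero-shift) correlation between two on-off codewords
    return sum(a * b for a, b in zip(x, y))

p = 7
codes = [prime_code(p, a) for a in range(p)]
auto = correlation(codes[1], codes[1])   # equals the code weight p
cross = correlation(codes[1], codes[3])  # equals 1 for distinct sequences
```

Distinct codewords collide only at j = 0 (since (a-b)·j ≡ 0 mod p forces j = 0 for a ≠ b), which is the near-orthogonality that limits multiple-access interference.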

  6. Biometric iris image acquisition system with wavefront coding technology

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other is dependent on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high-quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, and uncontrolled illumination. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limit of the working distance when the camera lens and CCD sensor were known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post-signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation results as well as experiments validate the proposed scheme.

  7. Modern Nuclear Data Evaluation with the TALYS Code System

    Science.gov (United States)

    Koning, A. J.; Rochman, D.

    2012-12-01

    This paper presents a general overview of nuclear data evaluation and its applications as developed at NRG, Petten. Based on concepts such as robustness, reproducibility and automation, modern calculation tools are exploited to produce original nuclear data libraries that meet the current demands on quality and completeness. This requires a system which comprises differential measurements, theory development, nuclear model codes, resonance analysis, evaluation, ENDF formatting, data processing and integral validation in one integrated approach. Software, built around the TALYS code, will be presented in which all these essential nuclear data components are seamlessly integrated. Besides the quality of the basic data and its extensive format testing, a second goal lies in the diversity of processing for different types of users. The implications of this scheme are unprecedented. The most important are: 1. Complete ENDF-6 nuclear data files, in the form of the TENDL library, including covariance matrices, for many isotopes, particles, energies, reaction channels and derived quantities. All isotopic data files are mutually consistent and are supposed to rival those of the major world libraries. 2. More exact uncertainty propagation from basic nuclear physics to applied (reactor) calculations based on a Monte Carlo approach: "Total" Monte Carlo (TMC), using random nuclear data libraries. 3. Automatic optimization in the form of systematic feedback from integral measurements back to the basic data. This method of work also opens a new way of approaching the analysis of nuclear applications, with consequences in both applied nuclear physics and safety of nuclear installations, and several examples are given here. This applied experience and feedback is integrated in a final step to improve the quality of the nuclear data, to change the users' vision and finally to orchestrate their integration into simulation codes.

  8. Modern Nuclear Data Evaluation with the TALYS Code System

    International Nuclear Information System (INIS)

    Koning, A.J.; Rochman, D.

    2012-01-01

    This paper presents a general overview of nuclear data evaluation and its applications as developed at NRG, Petten. Based on concepts such as robustness, reproducibility and automation, modern calculation tools are exploited to produce original nuclear data libraries that meet the current demands on quality and completeness. This requires a system which comprises differential measurements, theory development, nuclear model codes, resonance analysis, evaluation, ENDF formatting, data processing and integral validation in one integrated approach. Software, built around the TALYS code, will be presented in which all these essential nuclear data components are seamlessly integrated. Besides the quality of the basic data and its extensive format testing, a second goal lies in the diversity of processing for different types of users. The implications of this scheme are unprecedented. The most important are: 1. Complete ENDF-6 nuclear data files, in the form of the TENDL library, including covariance matrices, for many isotopes, particles, energies, reaction channels and derived quantities. All isotopic data files are mutually consistent and are supposed to rival those of the major world libraries. 2. More exact uncertainty propagation from basic nuclear physics to applied (reactor) calculations based on a Monte Carlo approach: “Total” Monte Carlo (TMC), using random nuclear data libraries. 3. Automatic optimization in the form of systematic feedback from integral measurements back to the basic data. This method of work also opens a new way of approaching the analysis of nuclear applications, with consequences in both applied nuclear physics and safety of nuclear installations, and several examples are given here. This applied experience and feedback is integrated in a final step to improve the quality of the nuclear data, to change the users' vision and finally to orchestrate their integration into simulation codes.

  9. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best-estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal-hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes.

  10. Accelerator-driven transmutation reactor analysis code system (ATRAS)

    Energy Technology Data Exchange (ETDEWEB)

    Sasa, Toshinobu; Tsujimoto, Kazufumi; Takizuka, Takakazu; Takano, Hideki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1999-03-01

    JAERI is proceeding with a design study of a hybrid-type minor actinide transmutation system which mainly consists of an intense proton accelerator and a fast subcritical core. The neutronics and burnup characteristics of the accelerator-driven system are important from the viewpoint of the maintenance of subcriticality and the energy balance during system operation. To determine those characteristics accurately, it is necessary to include reactions in the high-energy region, which are not treated by ordinary reactor analysis codes. The authors developed a code system named ATRAS to analyze the neutronics and burnup characteristics of accelerator-driven subcritical reactor systems. ATRAS has a function of burnup analysis taking account of the effect of the spallation neutron source. ATRAS consists of a spallation analysis code, neutron transport codes and a burnup analysis code. Utility programs for fuel exchange, pre-processing and post-processing are also incorporated. (author)

  11. LOLA SYSTEM: A code block for nodal PWR simulation. Part I - Simula-3 Code

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J M; Ahnert, C; Gomez Santamaria, J; Rodriguez Olabarria, I

    1985-07-01

    A description of the theory and a user's manual for the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3 is the main module of the system; it uses a modified nodal theory with interface leakages equivalent to those of diffusion theory. (Author) 4 refs.

  12. LOLA SYSTEM: A code block for nodal PWR simulation. Part I - Simula-3 Code

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    A description of the theory and a user's manual for the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3 is the main module of the system; it uses a modified nodal theory with interface leakages equivalent to those of diffusion theory. (Author) 4 refs

  13. Optimizing cooling systems in Egyptian arid urbans

    International Nuclear Information System (INIS)

    Medhat, Ahmed A.; Khalil, Essam E.

    2006-01-01

    The present study is devoted to climatic and site-oriented investigations carried out in a new rural development in Upper Egypt. Bioclimatic classifications consider the Upper Egypt region, near the Sudan border, a hot and dry climatic region [1] that is affected by solar heat intensities that can reach 900 W/m2 for a period of 5 to 7 hours per day, with the presence of dust storms. The cooling season extends up to eight months per year, with upper dry-bulb temperatures ranging from 40 to 47 degrees centigrade, lower dry-bulb temperatures ranging from 28 to 32 degrees centigrade, and relative humidity ranging from 10% to 37% RH [2]. Site surveys and field experiments and analyses of the commonly used cooling systems were carried out, and the systems were evaluated and optimized for optimum indoor comfort conditions at high energy efficiency [3]. Extensive analyses were performed based on psychrometric formulae to evaluate the energy consumption of different cooling systems such as direct expansion, chilled water, and evaporative systems. The present study enables critical investigation of the influence of arid outdoor conditions and the required indoor thermal parameters on the energy efficiency of the HVAC system. This work focuses on the suggestion of a suitable system that should be implemented by local energy codes in these arid urban areas. (Author)

  14. SWAT2: The improved SWAT code system by incorporating the continuous energy Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Mochizuki, Hiroki; Suyama, Kenya; Okuno, Hiroshi

    2003-01-01

    SWAT is a code system which performs burnup calculations by combining the neutronics calculation code SRAC95 and the one-group burnup calculation code ORIGEN2.1. The SWAT code system can deal with the cell geometries available in SRAC95. However, a precise treatment of resonance absorption by the SRAC95 code using the ultra-fine-group cross-section library is not directly applicable to two- or three-dimensional geometry models, because of restrictions in SRAC95. To overcome this problem, SWAT2, which newly introduces the continuous-energy Monte Carlo code MVP into SWAT, was developed. Thereby, burnup calculations with continuous-energy Monte Carlo in arbitrary geometries became possible. Moreover, using the 147-group cross-section library called the SWAT library, reactions which are not dealt with by SRAC95 and MVP can be treated. The OECD/NEA burnup credit criticality safety benchmark problems Phase-IB (PWR, a single pin cell model) and Phase-IIIB (BWR, fuel assembly model) were calculated as a verification of SWAT2, and the results were compared with the average values of the burnup calculation results of each participating organization. Through the two benchmark problems, it was confirmed that SWAT2 is applicable to burnup calculations in complicated geometries. (author)
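The burnup step that SWAT delegates to ORIGEN2.1 amounts to integrating a depletion chain. A minimal sketch, assuming an illustrative two-nuclide chain with invented rate constants (real ORIGEN-style calculations solve far larger transmutation chains with production terms from the neutronics), compares the analytic Bateman solution with a crude Euler integration:

```python
import math

# dN1/dt = -l1*N1 ;  dN2/dt = l1*N1 - l2*N2  (parent -> daughter chain)
def bateman_two(n1_0, l1, l2, t):
    """Analytic Bateman solution for a two-member chain."""
    n1 = n1_0 * math.exp(-l1 * t)
    n2 = n1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    return n1, n2

def euler_two(n1_0, l1, l2, t, steps=100000):
    """Naive forward-Euler integration of the same chain."""
    dt = t / steps
    n1, n2 = n1_0, 0.0
    for _ in range(steps):
        dn1 = -l1 * n1
        dn2 = l1 * n1 - l2 * n2
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

l1, l2, t = 0.05, 0.2, 10.0   # illustrative rate constants and time
a1, a2 = bateman_two(1.0, l1, l2, t)
e1, e2 = euler_two(1.0, l1, l2, t)
```

With a fine enough step, the numeric and analytic answers agree closely; production codes use matrix-exponential or stiff ODE solvers instead of explicit Euler.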

  15. Development of the next generation reactor analysis code system, MARBLE

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Hazama, Taira; Nagaya, Yasunobu; Chiba, Go; Kugo, Teruhiko; Ishikawa, Makoto; Tatsumi, Masahiro; Hirai, Yasushi; Hyoudou, Hideaki; Numata, Kazuyuki; Iwai, Takehiko; Jin, Tomoyuki

    2011-03-01

    A next generation reactor analysis code system, MARBLE, has been developed. MARBLE is a successor of the fast reactor neutronics analysis code systems JOINT-FR and SAGEP-FR (the conventional systems), which were developed for the so-called JUPITER standard analysis methods. MARBLE has analysis capability equivalent to the conventional systems because it can utilize the sub-codes included in them without any change. On the other hand, burnup analysis functionality for power reactors is improved compared with the conventional systems by introducing models of fuel exchange treatment, control rod operation and so on. In addition, MARBLE has newly developed solvers and some new features, such as burnup calculation by the Krylov subspace method and nuclear design accuracy evaluation by the extended bias factor method. In the development of MARBLE, object-oriented technology was adopted from the viewpoint of improving software qualities such as flexibility, extensibility, facilitation of verification through modularization, and support for co-development. A software structure called the two-layer system, consisting of a scripting language and a system development language, was applied. As a result, MARBLE is not an independent analysis code system which simply receives input and returns output, but an assembly of components for building an analysis code system (i.e. a framework). Furthermore, MARBLE provides some pre-built analysis code systems, such as the fast reactor neutronics analysis code system SCHEME, which corresponds to the conventional codes, and the fast reactor burnup analysis code system ORPHEUS. (author)

  16. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or the elastic net. Both allow the construction of a model for a binary classification problem and establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model, consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains.
The validity and applicability

  17. Performance Analysis of Spectral Amplitude Coding Based OCDMA System with Gain and Splitter Mismatch

    Science.gov (United States)

    Umrani, Fahim A.; Umrani, A. Waheed; Umrani, Naveed A.; Memon, Kehkashan A.; Kalwar, Imtiaz Hussain

    2013-09-01

    This paper presents a practical analysis of optical code-division multiple-access (O-CDMA) systems based on perfect difference codes. The work uses an SNR criterion to select the optimal value of avalanche photodiode (APD) gain and shows how mismatch in the splitters and in the APD gains used in the transmitters and receivers of the network can degrade the BER performance of the system. The investigation also reveals that higher APD gains are not suitable for such systems, even at higher powers. The system performance is also investigated with consideration of shot noise, thermal noise, and bulk and surface leakage currents.

  18. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    International Nuclear Information System (INIS)

    Baratta, A.J.

    1997-01-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed-pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system code, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.
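A hedged sketch of the time-step-coupled alternative described at the end of the record: two toy models (the flow and pressurization relations and every number below are invented for illustration, not taken from RELAP or CONTAIN) exchange boundary conditions every step, so the containment back-pressure feeds back on the primary-side flow within the same transient:

```python
class Primary:
    """Toy primary system: blowdown mass flow shrinks as back-pressure rises."""
    def __init__(self):
        self.pressure = 7.0e6  # Pa, illustrative initial primary pressure

    def step(self, dt, back_pressure):
        # crude choked-flow surrogate: flow scales with pressure margin
        mdot = max(0.0, 50.0 * (1.0 - back_pressure / self.pressure))
        self.pressure = max(back_pressure,
                            self.pressure - 2.0e4 * dt * mdot / 50.0)
        return mdot

class Containment:
    """Toy containment: pressure rises with the injected mass flow."""
    def __init__(self):
        self.pressure = 1.0e5  # Pa, roughly atmospheric

    def step(self, dt, mdot_in):
        self.pressure += 800.0 * mdot_in * dt  # crude pressurization model
        return self.pressure

primary, containment = Primary(), Containment()
back_p = containment.pressure
for _ in range(200):
    dt = 0.05
    mdot = primary.step(dt, back_p)      # primary advances with containment BC
    back_p = containment.step(dt, mdot)  # containment advances with primary source
```

Unlike the one-way iterative procedure, the data exchange at every step lets the rising containment pressure throttle the break flow during the same run.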

  19. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Transients and accidents such as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed-pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system code, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step, allowing discrete codes to be coupled together.

  20. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
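A minimal sketch of the mixed-integer PSO idea, assuming an invented two-node distortion function (the paper's actual video-distortion model is not reproduced in the abstract): continuous dimensions carry transmission powers, and discrete dimensions are rounded to the nearest admissible rate index before each evaluation:

```python
import random

def pso_minimize(f, bounds, discrete, n=20, iters=150, seed=1):
    """Particle swarm over a mixed space: dimensions flagged in 'discrete'
    are rounded to the nearest integer before evaluating the objective."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clamp(x, d):
        lo, hi = bounds[d]
        return max(lo, min(hi, x))

    def decode(x):
        return [round(v) if discrete[d] else v for d, v in enumerate(x)]

    pos = [[rng.uniform(*bounds[d]) for d in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(decode(p)) for p in pos]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = clamp(pos[i][d] + vel[i][d], d)
            v = f(decode(pos[i]))
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return decode(gbest), gval

# Hypothetical objective: distortion falls with power and with the channel-code
# index, plus a cross-interference penalty that discourages high joint power.
def distortion(x):
    p1, p2, r1, r2 = x  # powers (continuous), rate indices (discrete)
    return (1.0 / (p1 * (1 + r1)) + 1.0 / (p2 * (1 + r2)) + 0.05 * p1 * p2)

bounds = [(0.1, 2.0), (0.1, 2.0), (0, 3), (0, 3)]
best, val = pso_minimize(distortion, bounds, [False, False, True, True])
```

Rounding inside the evaluation is one simple way to let a continuous swarm search a mixed-integer space; the paper's formulation is richer, with per-node source and channel rates drawn from discrete sets.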

  1. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Qing [Univ. of Colorado, Colorado Springs, CO (United States); Whaley, Richard Clint [Univ. of Texas, San Antonio, TX (United States); Qasem, Apan [Texas State Univ., San Marcos, TX (United States); Quinlan, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-23

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.
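The empirical-tuning loop described above can be caricatured in a few lines: a parameterized kernel (here a hypothetical blocked summation, not POET output) is timed over a set of candidate parameter values, and the fastest configuration is kept:

```python
import time

def blocked_sum(data, block):
    """A trivially parameterized kernel: sum a list in blocks of 'block'."""
    total = 0.0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(data, candidates, repeats=3):
    """Empirical search: time each candidate block size, keep the fastest."""
    best_block, best_time = None, float("inf")
    for block in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            blocked_sum(data, block)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_block, best_time = block, elapsed
    return best_block

data = [1.0] * 100000
best = autotune(data, [64, 256, 1024, 4096])
```

The ROSE/POET environment does the same thing at a far larger scale: the transformation parameters (tile sizes, unroll factors, etc.) are exposed by POET, and the search engine re-times the real application per configuration on each target machine.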

  2. Locally Minimum Storage Regenerating Codes in Distributed Cloud Storage Systems

    Institute of Scientific and Technical Information of China (English)

    Jing Wang; Wei Luo; Wei Liang; Xiangyang Liu; Xiaodai Dong

    2017-01-01

    In distributed cloud storage systems, multiple node failures inevitably occur at the same time. The existing methods of regenerating codes, including minimum storage regenerating (MSR) codes and minimum bandwidth regenerating (MBR) codes, are mainly designed to repair one single or several failed nodes, and are unable to meet the repair needs of distributed cloud storage systems. In this paper, we present locally minimum storage regenerating (LMSR) codes to recover multiple failed nodes at the same time. Specifically, the nodes in distributed cloud storage systems are divided into multiple local groups, and in each local group (4, 2) or (5, 3) MSR codes are constructed. Moreover, the grouping method of storage nodes and the repairing process of failed nodes in local groups are studied. Theoretical analysis shows that LMSR codes can achieve the same storage overhead as MSR codes. Furthermore, we verify by means of simulation that, compared with MSR codes, LMSR codes can reduce the repair bandwidth and disk I/O overhead effectively.
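The storage and repair-bandwidth comparison claimed above can be checked arithmetically with the standard MSR repair-bandwidth formula, gamma = d*M / (k*(d - k + 1)) for d helper nodes; the (8, 4) global layout versus two (4, 2) local groups below is an illustrative configuration, not necessarily the one simulated in the paper:

```python
def msr_repair_bandwidth(file_size, n, k, d=None):
    """Repair bandwidth for one failed node of an (n, k) MSR code contacting
    d helpers: gamma = d * file_size / (k * (d - k + 1)). Default d = n - 1."""
    if d is None:
        d = n - 1
    return d * file_size / (k * (d - k + 1))

M = 1.0  # normalized file size
# One global (8, 4) MSR code across all eight nodes.
global_gamma = msr_repair_bandwidth(M, 8, 4)        # 7/16 = 0.4375
# Two local groups, each a (4, 2) MSR code over half of the data.
local_gamma = msr_repair_bandwidth(M / 2, 4, 2)     # 3/8  = 0.375
# Per-node storage: an MSR code stores file_size/k on each node.
global_store = M / 4
local_store = (M / 2) / 2
```

With these numbers, per-node storage is M/4 in both layouts, while a local-group repair contacts fewer helpers and moves less data, which is the qualitative effect the paper's simulations report.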

  3. Optimization of maintenance periodicity of complex of NPP safety systems

    International Nuclear Information System (INIS)

    Kolykhanov, V.; Skalozubov, V.; Kovrigkin, Y.

    2006-01-01

    An analysis is performed of the positive and negative aspects connected with maintenance of safety system equipment, which is basically in a standby state. Tests of systems provide elimination of latent failures and raise their reliability, but poor quality in carrying out the tests can be a source of subsequent failures; therefore, excess frequency of tests can result in reduced reliability of safety systems. A method of optimization of the maintenance periodicity of the equipment is presented, taking into account its reliability factors and the quality of restoration procedures. The unavailability factor is used as the criterion for optimization of maintenance periodicity. It is proposed to use the reliability parameters of the equipment and of each of the safety systems of NPPs obtained in developing the PSA, and to harmonize the maintenance periodicity of systems within the NPP maintenance program taking into account a significance factor of the system derived from the contribution of the system to CDF. Based on the presented method, a small computer code was developed. This code allows calculation of the reliability factors of a separate safety system and determination of the optimum maintenance periodicity of its equipment. Optimization of the maintenance periodicity of a complex of safety systems is also provided for. As an example, results of optimization of maintenance periodicity at Zaporizhzhya NPP are presented. (author)
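The trade-off described above can be illustrated with the standard first-order approximation for a periodically tested standby component, U(T) ≈ λT/2 + τ/T (undetected-failure dead time plus test outage); the numbers below are illustrative, not taken from the Zaporizhzhya analysis:

```python
import math

def unavailability(T, lam, tau):
    """Mean fractional dead time of a standby component tested every T hours:
    undetected-failure term lam*T/2 plus test-outage term tau/T."""
    return lam * T / 2 + tau / T

def optimal_interval(lam, tau):
    # d/dT (lam*T/2 + tau/T) = 0  ->  T* = sqrt(2*tau/lam)
    return math.sqrt(2 * tau / lam)

lam = 1e-5   # standby failure rate (1/h), illustrative
tau = 4.0    # effective test/repair outage per test (h), illustrative
T_star = optimal_interval(lam, tau)
U_star = unavailability(T_star, lam, tau)
```

Balancing the two terms gives T* = sqrt(2τ/λ), roughly 894 hours for these inputs; testing more often than T* costs more outage time than it recovers through earlier failure detection, which is exactly why excess test frequency can reduce system reliability.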

  4. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970's. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As a part of this system, we have developed three code management programs for use on CYBER and PC systems, which are used for systematic management of computer codes at KAERI. The first program runs on the CYBER system to rapidly provide information on nuclear codes to users. The second and third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined each code, magnetic tape, manual, and abstract information data. In the conceptual design, we designed retrieval, update, and output functions. In the implementation design, we described the technical considerations for the database programs, utilities, and directions for the use of the databases. As a result of this research, we compiled the status of the nuclear computer codes which belonged to KAERI as of September 1988. Thus, by using these three database programs, we can provide nuclear computer code information to users more rapidly. (Author)

  5. SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code

    International Nuclear Information System (INIS)

    Bjerke, M.A.

    1983-02-01

    A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The package was developed around the analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and has provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can easily be automated to further simplify its use, and that conversion of the entire system to a base code other than RELAP is possible.

  6. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is to identify the most likely author of a given sample among a set of candidate known authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to settling authorship disputes and software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to the neural network for supervised learning, whose weights are produced by the hybrid PSO-BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experiment results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with an acceptable overhead.
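The PSO component can be sketched in a few lines. Below, a standard global-best PSO searches a 19-dimensional weight vector; the loss function is a simple quadratic stand-in for the BP network's training error, and the network itself, the dataset, and all hyperparameters here are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Stand-in for the network's classification loss on the 19 feature
    # metrics; the real system would evaluate BP-network error instead.
    return float(np.sum((w - 0.5) ** 2))

def pso(dim=19, n_particles=20, iters=200, w_in=0.7, c1=1.5, c2=1.5):
    """Global-best PSO over weight vectors (standard update rules)."""
    x = rng.uniform(-1, 1, (n_particles, dim))     # positions = weight vectors
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w_in * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([loss(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, loss(g)

g, best = pso()
```

In the hybrid scheme, the weights found by PSO would then seed gradient-based BP fine-tuning rather than be used directly.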

  7. Optimizing the IAEA safeguards system

    International Nuclear Information System (INIS)

    Drobysz, Sonia; Sitt, Bernard

    2011-09-01

    During the 2010 Non-Proliferation Treaty Review Conference, States parties recognized that the Additional Protocol (AP) provides increased confidence about the absence of undeclared nuclear material and activities in a State as a whole. They agreed in action 28 of the final document to encourage 'all States parties that have not yet done so to conclude and bring into force an AP as soon as possible and to implement them provisionally pending their entry into force'. Today, 109 out of 189 States parties to the NPT have brought an AP into force. The remaining outliers have not yet done so for three types of reasons: they do not clearly understand what the AP entails; when they do, they refuse to accept new non-proliferation obligations either on the ground of lack of progress in the realm of disarmament, or simply because they are not ready to bear the burden of additional safeguards measures. Strong incentives are thus needed in order to facilitate universalization of the AP. While external incentives would help make the AP a de facto norm and encourage its conclusion by reducing the deplored imbalanced implementation of non-proliferation and disarmament obligations, internal incentives developed by the Agency and its member States can also play an important role. In this respect, NPT States parties recommended in action 32 of the Review Conference final document 'that IAEA safeguards should be assessed and evaluated regularly. Decisions adopted by the IAEA policy bodies aimed at further strengthening the effectiveness and improving the efficiency of IAEA safeguards should be supported and implemented'. The safeguards system should therefore be optimized: the most effective use of safeguards measures as well as safeguards human, financial and technical resources would indeed help enhance the acceptability and even attractiveness of the AP. Optimization can be attractive for States committed to a stronger verification regime independently from other claims, but still

  8. WASA-BOSS. Development and application of Severe Accident Codes. Evaluation and optimization of accident management measures. Subproject D. Study on water film cooling for PWR's passive containment cooling system. Final report

    International Nuclear Information System (INIS)

    Huang, Xi

    2016-07-01

    In the present study, a new phenomenological model was developed to describe water film flow under the conditions of a passive containment cooling system (PCCS). The new model takes two different flow regimes into consideration, i.e. continuous water film and rivulets. For water film flow, the traditional Nusselt theory was modified to account for the orientation angle and surface shear stress. The transition from water film to rivulets, as well as the structure of the stable rivulet at its onset point, was modeled using the minimum energy principle (MEP) combined with conservation equations. In addition, two different contact angles, i.e. the advancing angle and the retreating angle, were applied to take the hysteresis effect into consideration. The models of the individual processes were validated as far as possible against experimental data selected from the open literature and from collaboration partners. With these models a new program module was developed and implemented into the COCOSYS program. The extended COCOSYS program was applied to analyze the containment behavior of the European generic containment and the performance of the passive containment cooling system of the AP1000. The results clearly indicate the importance of the new model and provide information for the optimization of the PCCS of the AP1000.

  9. WASA-BOSS. Development and application of Severe Accident Codes. Evaluation and optimization of accident management measures. Subproject D. Study on water film cooling for PWR's passive containment cooling system. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xi

    2016-07-15

    In the present study, a new phenomenological model was developed to describe water film flow under the conditions of a passive containment cooling system (PCCS). The new model takes two different flow regimes into consideration, i.e. continuous water film and rivulets. For water film flow, the traditional Nusselt theory was modified to account for the orientation angle and surface shear stress. The transition from water film to rivulets, as well as the structure of the stable rivulet at its onset point, was modeled using the minimum energy principle (MEP) combined with conservation equations. In addition, two different contact angles, i.e. the advancing angle and the retreating angle, were applied to take the hysteresis effect into consideration. The models of the individual processes were validated as far as possible against experimental data selected from the open literature and from collaboration partners. With these models a new program module was developed and implemented into the COCOSYS program. The extended COCOSYS program was applied to analyze the containment behavior of the European generic containment and the performance of the passive containment cooling system of the AP1000. The results clearly indicate the importance of the new model and provide information for the optimization of the PCCS of the AP1000.

  10. Reliability and optimization of structural systems

    International Nuclear Information System (INIS)

    Thoft-Christensen, P.

    1987-01-01

    The proceedings contain 28 papers presented at the 1st working conference, organized by the IFIP Working Group 7.5. The proceedings also include 4 papers which were submitted but, for various reasons, not presented at the working conference. The working conference was attended by 50 participants from 18 countries. The conference was the first scientific meeting of the new IFIP Working Group 7.5 on 'Reliability and Optimization of Structural Systems'. The purpose of Working Group 7.5 is to promote modern structural system optimization and reliability theory, to advance international cooperation in the field, to stimulate research, development and application of structural system optimization and reliability theory, to further the dissemination and exchange of information on the subject, and to encourage education in structural system optimization and reliability theory. (orig./HP)

  11. Code system to compute radiation dose in human phantoms

    International Nuclear Information System (INIS)

    Ryman, J.C.; Cristy, M.; Eckerman, K.F.; Davis, J.L.; Tang, J.S.; Kerr, G.D.

    1986-01-01

    A Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods.

  12. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    Full Text Available The intent of this paper is to study the performance of spectral-amplitude coding optical code-division multiple-access (OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. This code can be constructed algebraically, based on Euclidean vectors, for any positive integer number. One of the important properties of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, and taking into account the effects of intensity, shot and thermal noise sources, are demonstrated. The impact of fiber distance on the bit error rate (BER) is reported using a commercial optical systems simulator, Virtual Photonics Instrument (VPI™). The VC code is compared mathematically with reported codes which use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in a SAC-OCDMA system are reported. By comparing the theoretical and simulation results taken from VPI™, we have demonstrated that, for a high number of users, the effective source power is adequate even at higher data rates when the VC code is used. It is also found that as the channel spacing goes from very narrow to wider, the BER decreases, with the best performance occurring at a spacing bandwidth between 0.8 and 1 nm. We have shown that the SAC system utilizing the VC code significantly improves the performance compared with the reported codes.
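The cross-correlation property claimed for the VC code can be checked mechanically for any SAC code matrix. The code words below are an arbitrary weight-3 example chosen to satisfy the property, not the actual VC construction:

```python
import numpy as np

# Arbitrary illustrative SAC code matrix (3 users, weight 3) - NOT the VC code.
C = np.array([[1, 1, 1, 0, 0, 0, 0],
              [1, 0, 0, 1, 1, 0, 0],
              [0, 1, 0, 1, 0, 1, 0]])

corr = C @ C.T                                  # pairwise in-phase correlations
off_diag = corr[~np.eye(len(C), dtype=bool)]
max_cross = int(off_diag.max())                 # 1 => MUI and PIIN suppressed
weights = corr.diagonal()                       # code weight per user
```

A maximum cross-correlation of one means any two users' codes share at most one chip, which is exactly what bounds MUI in the detection analysis.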

  13. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components of a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes: for example, SETS and FTAP to determine cut sets; Importance for importance calculations; and SAMPLE, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run separately and their inputs and outputs are not linked, which can introduce errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously, without linking inputs and outputs to other codes. MODULE can also prepare input for SETS in the case of a large fault tree that cannot be handled by MODULE itself. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected, and the results and computation times were compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over the other codes, and that it can be used on personal computers.
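The cut-set and importance calculations such codes perform can be illustrated with a toy fault tree. This sketch uses the rare-event approximation and Fussell-Vesely importance on made-up minimal cut sets and event probabilities; none of the names or numbers come from the AFWS trees.

```python
# Given minimal cut sets and basic-event probabilities, estimate top-event
# probability (rare-event approximation) and Fussell-Vesely importance.
# Event names and probabilities are illustrative only.

def top_probability(cut_sets, p):
    """Rare-event approximation: sum of the cut-set probabilities."""
    total = 0.0
    for cs in cut_sets:
        prob = 1.0
        for ev in cs:
            prob *= p[ev]
        total += prob
    return total

def fussell_vesely(event, cut_sets, p):
    """Fraction of top probability carried by cut sets containing the event."""
    q_top = top_probability(cut_sets, p)
    q_ev = top_probability([cs for cs in cut_sets if event in cs], p)
    return q_ev / q_top

p = {"pump_a": 1e-3, "pump_b": 1e-3, "valve": 5e-4}
cut_sets = [{"pump_a", "pump_b"}, {"valve"}]
```

Here the single-event cut set dominates, so the valve carries almost all of the Fussell-Vesely importance despite its lower probability.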

  14. The JAERI code system for evaluation of BWR ECCS performance

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Akimoto, Masayuki; Asahi, Yoshiro; Abe, Kiyoharu; Muramatsu, Ken; Araya, Fumimasa; Sato, Kazuo

    1982-12-01

    Development of separate computer code systems for evaluation of the ECCS of BWRs and PWRs has been conducted since 1973, considering the differences in the reactor cooling system, core structure and ECCS. The first version of the BWR code system, whose development started earlier than that of the PWR, has been completed. The BWR code system is designed to provide computational tools to analyze all phases of LOCAs and to evaluate the performance of the ECCS, including an ''Evaluation Model (EM)'' feature in compliance with the requirements of the current Japanese Evaluation Guideline of ECCS. The BWR code system could be used for licensing purposes, i.e. for ECCS performance evaluation or audit calculations to cross-examine the methods and results of applicants or vendors. The BWR code system presented in this report comprises several computer codes, each of which analyzes a particular phase of a LOCA or a system blowdown depending on the range of LOCAs, i.e. large and small breaks at a variety of locations in the reactor system. The system includes ALARM-B1, HYDY-B1 and THYDE-B1 for analysis of the system blowdown for various break sizes, THYDE-B-REFLOOD for analysis of the reflood phase, and SCORCH-B2 for the calculation of the fuel assembly hot plane temperature. When multiple codes are used to analyze a broad range of LOCAs as stated above, it is very important to evaluate the adequacy of and consistency between the codes used to cover the entire break spectrum. The system consistency, together with the system performance, is discussed for a large commercial BWR. (author)

  15. Linear quadratic optimization for positive LTI system

    Science.gov (United States)

    Muhafzan, Yenti, Syafrida Wirma; Zulakmal

    2017-05-01

    Nowadays, linear quadratic optimization subject to a positive linear time invariant (LTI) system constitutes an interesting study, since it can serve as a mathematical model for a variety of real problems whose variables have to be nonnegative and whose trajectories must remain nonnegative. In this paper we propose a method to generate an optimal control for linear quadratic optimization subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.
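For readers who want to experiment, a generic (unconstrained) continuous-time LQR solver can be built from the Hamiltonian matrix; the positivity conditions the paper analyzes are additional requirements not enforced in this textbook sketch. The example A is a Metzler matrix with nonnegative B, i.e. a positive LTI system, with illustrative values:

```python
import numpy as np

def lqr(A, B, Q, R):
    """Continuous-time LQR via the Hamiltonian's stable invariant subspace.
    Standard textbook construction; positivity constraints not enforced."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # eigenvectors of stable eigenvalues
    X, Y = stable[:n, :], stable[n:, :]
    P = np.real(Y @ np.linalg.inv(X))    # solves the algebraic Riccati equation
    K = Rinv @ B.T @ P                   # optimal state feedback u = -K x
    return K, P

# A positive LTI system: Metzler A (nonnegative off-diagonals), nonnegative B.
A = np.array([[-1.0, 0.5], [0.2, -2.0]])
B = np.array([[1.0], [0.0]])
K, P = lqr(A, B, np.eye(2), np.eye(1))
```

With Q = R = I the returned P satisfies the Riccati equation and A - BK is Hurwitz; checking that the resulting closed-loop trajectories stay nonnegative is precisely where the paper's sufficient condition comes in.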

  16. A computerized energy systems code and information library at Soreq

    Energy Technology Data Exchange (ETDEWEB)

    Silverman, I; Shapira, M; Caner, D; Sapier, D [Israel Atomic Energy Commission, Yavne (Israel). Soreq Nuclear Research Center

    1996-12-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, Soreq-NRC and Ben-Gurion University agreed in 1991 to establish a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to ensure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest for nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation running the Linux operating system to give users of the library easy on-line access to the resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information, or to solve their problems, using codes from the library, on the computer at Soreq (authors).

  17. A computerized energy systems code and information library at Soreq

    International Nuclear Information System (INIS)

    Silverman, I.; Shapira, M.; Caner, D.; Sapier, D.

    1996-01-01

    In the framework of the contractual agreement between the Ministry of Energy and Infrastructure and the Division of Nuclear Engineering of the Israel Atomic Energy Commission, Soreq-NRC and Ben-Gurion University agreed in 1991 to establish a code center. This code center contains a library of computer codes and relevant data, with particular emphasis on nuclear power plant research and development support. The code center maintains existing computer codes and adapts them to the ever-changing computing environment, keeps track of new code developments in the field of nuclear engineering, and acquires the most recent revisions of computer codes of interest. An attempt is made to collect relevant codes developed in Israel and to ensure that proper documentation and application instructions are available. In addition to computer programs, the code center collects sample problems and international benchmarks to verify the codes and their applications to various areas of interest for nuclear power plant engineering and safety evaluation. Recently, the reactor simulation group at Soreq acquired, using funds provided by the Ministry of Energy and Infrastructure, a PC workstation running the Linux operating system to give users of the library easy on-line access to the resources available at the library. These resources include the computer codes and their documentation, reports published by the reactor simulation group, and other information databases available at Soreq. Registered users set up a communication line, through a modem, between their computer and the new workstation at Soreq and use it to download codes and/or information, or to solve their problems, using codes from the library, on the computer at Soreq (authors)

  18. ℓ2 Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. In the proposed new approach, this uniform scalar quantizer is replaced by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both the ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also achieves higher PSNR for relatively high bit rates.
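The baseline the paper modifies, a uniform scalar quantizer inside a DPCM loop, is easy to sketch. With bound δ, a quantizer step of 2δ+1 guarantees every reconstructed sample is within δ of the original. This is a minimal 1-D illustration with an invented signal; the paper's contribution replaces this quantizer with context-based ℓ2-optimized ones.

```python
import numpy as np

def dpcm_near_lossless(x, delta):
    """DPCM with a uniform residual quantizer of step 2*delta + 1: the
    classic near-lossless baseline. Every reconstructed sample lies
    within delta of the original (the l-infinity bound)."""
    step = 2 * delta + 1
    x_hat = np.empty_like(x)
    pred = 0                                # previous-sample predictor
    for i, xi in enumerate(x):
        e = xi - pred                       # prediction residual
        q = int(np.round(e / step))         # quantizer index (entropy-coded)
        x_hat[i] = pred + q * step          # decoder's reconstruction
        pred = x_hat[i]                     # closed loop: predict from x_hat
    return x_hat

x = np.array([10, 12, 15, 40, 41, 39, 100])
rec = dpcm_near_lossless(x, delta=2)        # max |x - rec| is at most 2
```

Because the loop predicts from the reconstructed values rather than the originals, quantization errors cannot accumulate, which is what makes the per-sample bound strict.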

  19. Strategies for Optimal Design of Structural Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1992-01-01

    Reliability-based design of structural systems is considered, especially systems where the reliability model is a series system of parallel systems. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization
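The reliability model named above, a series system of parallel subsystems, can be evaluated directly when component failures are independent. A minimal sketch with invented component failure probabilities:

```python
# Reliability of a series system of parallel subsystems. Component failure
# probabilities are illustrative; independence is assumed throughout.

def system_reliability(groups):
    """groups: list of lists of component failure probabilities.
    A parallel group fails only if all its components fail; the series
    system survives only if every group survives."""
    r = 1.0
    for g in groups:
        p_group_fail = 1.0
        for p in g:
            p_group_fail *= p
        r *= 1.0 - p_group_fail
    return r

groups = [[0.01, 0.02], [0.05], [0.1, 0.1, 0.1]]
R = system_reliability(groups)
```

The reliability-based optimization in the paper treats such an R as a constraint while minimizing structural cost; sensitivities of R with respect to the component probabilities drive the gradient-based procedures.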

  20. Concatenated coding systems employing a unit-memory convolutional code and a byte-oriented decoding algorithm

    Science.gov (United States)

    Lee, L.-N.

    1977-01-01

    Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
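The benefit of such concatenation can be gauged with a standard back-of-envelope model. In the sketch below, the RS(255, 223) parameters and the independence assumption are illustrative textbook choices, not values from the paper; real inner decoders emit bursty byte errors, which interleaving is used to break up.

```python
from math import comb

def rs_block_error(n, k, p_byte):
    """Probability an RS(n, k) outer block fails, assuming the inner
    (convolutional) decoder leaves independent byte errors at rate p_byte.
    The code corrects up to t = (n - k) // 2 byte errors per block."""
    t = (n - k) // 2
    ok = sum(comb(n, i) * p_byte**i * (1 - p_byte)**(n - i)
             for i in range(t + 1))
    return 1 - ok

# RS(255, 223), i.e. t = 16, a classic outer code for concatenated systems:
e_good = rs_block_error(255, 223, 0.01)   # inner decoder mostly clean
e_bad = rs_block_error(255, 223, 0.10)    # inner decoder overwhelmed
```

The sharp threshold between the two regimes is why feedback from the outer decoder to the inner decoder, as proposed above, pays off: even a small reduction in the inner byte-error rate moves the system across the cliff.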

  1. OPTIMIZATION OF ADVANCED FILTER SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    R.A. Newby; G.J. Bruck; M.A. Alvin; T.E. Lippert

    1998-04-30

    Reliable, maintainable and cost effective hot gas particulate filter technology is critical to the successful commercialization of advanced, coal-fired power generation technologies, such as IGCC and PFBC. In pilot plant testing, the operating reliability of hot gas particulate filters has been periodically compromised by process issues, such as process upsets and difficult ash cake behavior (ash bridging and sintering), and by design issues, such as cantilevered filter elements damaged by ash bridging, or excessively close packing of filtering surfaces resulting in unacceptable pressure drop or filtering surface plugging. This test experience has focused the issues and has helped to define advanced hot gas filter design concepts that offer higher reliability. Westinghouse has identified two advanced ceramic barrier filter concepts that are configured to minimize the possibility of ash bridge formation and to be robust against ash bridges should they occur. The ''inverted candle filter system'' uses arrays of thin-walled, ceramic candle-type filter elements with inside-surface filtering, and contains the filter elements in metal enclosures for complete separation from ash bridges. The ''sheet filter system'' uses ceramic, flat plate filter elements supported from vertical pipe-header arrays that provide a geometry that avoids the buildup of ash bridges and allows free fall of the back-pulse-released filter cake. The Optimization of Advanced Filter Systems program is being conducted to evaluate these two advanced designs and ultimately to demonstrate one of the concepts at pilot scale. In the Base Contract program, the subject of this report, Westinghouse has developed conceptual designs of the two advanced ceramic barrier filter systems to assess their performance, availability and cost potential, and to identify technical issues that may hinder the commercialization of the technologies. A plan for the Option I, bench

  2. The Impact of Diagnostic Code Misclassification on Optimizing the Experimental Design of Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Steven J. Schrodi

    2017-01-01

    Full Text Available Diagnostic codes within electronic health record systems can vary widely in accuracy. It has been noted that the number of instances of a particular diagnostic code monotonically increases with the accuracy of disease phenotype classification. As a growing number of health system databases become linked with genomic data, it is critically important to understand the effect of this misclassification on the power of genetic association studies. Here, I investigate the impact of diagnostic code misclassification on the power of genetic association studies, with the aim of better informing experimental designs that use health informatics data. The trade-off between (i) the reduced misclassification rate from utilizing additional instances of a diagnostic code per individual and (ii) the resulting smaller sample size is explored, and general rules are presented to improve experimental designs.
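That trade-off can be sketched numerically. In the toy model below, requiring more instances of a code raises the positive predictive value (PPV) of the case label but shrinks the eligible cohort, and misclassified cases attenuate the observed case-control frequency difference. All allele frequencies, PPVs, and cohort sizes are invented for illustration and are not the paper's values.

```python
from statistics import NormalDist

N = NormalDist()

def power(n_per_arm, ppv, p_case=0.30, p_ctrl=0.25, alpha=5e-8):
    """Approximate power of a two-proportion z-test at genome-wide alpha
    when a fraction (1 - ppv) of labelled cases are actually controls."""
    p_obs = ppv * p_case + (1 - ppv) * p_ctrl       # attenuated case frequency
    se = ((p_obs * (1 - p_obs) + p_ctrl * (1 - p_ctrl)) / n_per_arm) ** 0.5
    z_alpha = N.inv_cdf(1 - alpha / 2)
    return N.cdf((p_obs - p_ctrl) / se - z_alpha)

# Requiring m code instances: higher PPV, fewer eligible cases (all invented).
scenarios = {1: (0.70, 8000), 2: (0.85, 6000), 3: (0.95, 4000)}
powers = {m: power(n, ppv) for m, (ppv, n) in scenarios.items()}
```

In this particular toy configuration the intermediate requirement wins: neither the largest, noisiest cohort nor the smallest, cleanest one maximizes power, which is the kind of non-monotonic behavior that motivates the paper's design rules.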

  3. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies. In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  4. Performance Evaluation of a Novel Optimization Sequential Algorithm (SeQ) Code for FTTH Network

    Directory of Open Access Journals (Sweden)

    Fazlina C.A.S.

    2017-01-01

    Full Text Available The SeQ code has advantages such as a variable cross-correlation property at any given number of users and weights, and it effectively suppresses the impact of phase induced intensity noise (PIIN) and provides multiple access interference (MAI) cancellation. The results revealed that, at a system performance of BER = 10-9, the SeQ code is capable of achieving 1 Gbps at distances of up to 60 km.

  5. Dependability of self-optimizing mechatronic systems

    CERN Document Server

    Rammig, Franz; Schäfer, Wilhelm; Sextro, Walter

    2014-01-01

    Intelligent technical systems, which combine mechanical, electrical and software engineering with control engineering and advanced mathematics, go far beyond the state of the art in mechatronics and open up fascinating perspectives. Among these systems are so-called self-optimizing systems, which are able to adapt their behavior autonomously and flexibly to changing operating conditions. Self-optimizing systems create high value for example in terms of energy and resource efficiency as well as reliability. The Collaborative Research Center 614 "Self-optimizing Concepts and Structures in Mechanical Engineering" pursued the long-term aim to open up the active paradigm of self-optimization for mechanical engineering and to enable others to develop self-optimizing systems. This book is directed to researchers and practitioners alike. It provides a design methodology for the development of self-optimizing systems consisting of a reference process, methods, and tools. The reference process is divided into two phase...

  6. Joint Transmitter-Receiver Optimization in the Downlink CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mohammad Saquib

    2002-08-01

    Full Text Available To maximize the downlink code-division multiple access (CDMA) system capacity, we propose to minimize the total transmitted power of the system subject to users' signal-to-interference ratio (SIR) requirements, by designing optimum transmitter sequences and utilizing linear optimum receivers (minimum mean square error (MMSE) receivers). In our work on joint transmitter-receiver design for downlink CDMA systems with multiple antennas and multipath channels, we develop several optimization algorithms by considering various system constraints, and prove their convergence. We empirically observed that under the optimization algorithm with no constraint on the system, the optimum receiver structure matches the received transmitter sequences. A simulation study is performed to see how the different practical system constraints penalize the system with respect to the optimum algorithm with no constraint on the system.
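The linear MMSE receiver assumed throughout can be sketched directly. This toy uses a single-path, single-antenna synchronous model with random signatures, a simplification of the paper's multi-antenna multipath setting, and all amplitudes and noise values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 16, 4                               # spreading gain, number of users
S = rng.choice([-1.0, 1.0], (N, K)) / np.sqrt(N)   # signature sequences
amps = np.array([1.0, 2.0, 2.0, 2.0])      # received amplitudes per user
sigma2 = 0.1                               # noise variance

# Linear MMSE receiver for user 0: w = (S A^2 S^T + sigma^2 I)^{-1} s_0
R = S @ np.diag(amps**2) @ S.T + sigma2 * np.eye(N)
w_mmse = np.linalg.solve(R, S[:, 0])

def sir(w):
    """Output signal-to-interference-plus-noise ratio of receiver w, user 0."""
    sig = (amps[0] * (w @ S[:, 0])) ** 2
    interf = sum((amps[k] * (w @ S[:, k])) ** 2 for k in range(1, K))
    return sig / (interf + sigma2 * (w @ w))

sir_mf, sir_mmse = sir(S[:, 0]), sir(w_mmse)   # matched filter vs MMSE
```

Because the MMSE receiver maximizes output SINR over all linear receivers, it never does worse than the matched filter; the joint design in the paper additionally adapts the transmitter sequences S so that the total power needed to meet every user's SIR target is minimized.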

  7. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection, complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, of which some benchmark results are presented. The UNIPASC work environment greatly simplifies the usage of the code system: complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  8. Sequence Coding and Search System Backfit Quality Assurance Program Plan

    International Nuclear Information System (INIS)

    Lovell, C.J.; Stepina, P.L.

    1985-03-01

    The Sequence Coding and Search System (SCSS) is a computer-based encoding system for events described in Licensee Event Reports (LERs). This data system contains LERs from 1981 to the present. Backfit of the data system to include LERs prior to 1981 is required. This report documents the Quality Assurance Program Plan that EG&G Idaho, Inc. will follow while encoding 1980 LERs.

  9. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometer and focal-plane arrays.
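The code property the abstract relies on (the elementwise product of two orthogonal codes is a third orthogonal code) holds for Sylvester-Hadamard (Walsh) codes, and the square-then-demultiplex recovery of a correlation can be sketched numerically. This is a toy model with two idealized, noiseless signals, not the authors' hardware pipeline:

```python
import numpy as np

# Sylvester-Hadamard codes: row i times row j (elementwise) is row (i XOR j)
H = np.array([[1, 1, 1, 1],
              [1, -1, 1, -1],
              [1, 1, -1, -1],
              [1, -1, -1, 1]])
c1, c2 = H[1], H[2]
assert np.array_equal(c1 * c2, H[3])  # product code is itself a Walsh code

s1, s2 = 1 + 2j, 3 - 1j          # complex signals at two front-ends (invented)
y = c1 * s1 + c2 * s2            # code-modulated, then power-combined
power = np.abs(y) ** 2           # square-law step after shared digitization
# demultiplex the cross term by correlating against the product code
recovered = power @ (c1 * c2) / len(H)
expected = 2 * np.real(s1 * np.conj(s2))
```

The self-terms |s1|² and |s2|² ride on the all-ones code and average out against any other Walsh code, so only the desired correlation survives the demultiplexing.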

  10. Development of a nuclear power plant system analysis code

    International Nuclear Information System (INIS)

    Sim, Suk K.; Jeong, J. J.; Ha, K. S.; Moon, S. K.; Park, J. W.; Yang, S. K.; Song, C. H.; Chun, S. Y.; Kim, H. C.; Chung, B. D.; Lee, W. J.; Kwon, T. S.

    1997-07-01

    During the period of this study, the TASS 1.0 code has been prepared for the non-LOCA licensing and reload safety analyses of the Westinghouse and the Korean Standard Nuclear Power Plant (KSNPP) type reactors operating in Korea. TASS-NPA has also been developed for real-time simulation of the Kori-3/4 transients using on-line graphical interactions. The TASS 2.0 code has been further developed for timely application to the design certification of the KNGR. The COBRA/RELAP5 code, a multi-dimensional best-estimate system code, has been developed by integrating a realistic three-dimensional reactor vessel model with the RELAP5/MOD3.2 code, a one-dimensional system code. Also, a 3D turbulent two-phase flow analysis code, FEMOTH-TF, has been developed using the finite element technique to analyze local thermal hydraulic phenomena in support of the detailed design analysis for the development of the advanced reactors. (author). 84 refs., 27 tabs., 83 figs

  11. On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    KAUST Repository

    Sapenov, Yerzhan

    2017-07-06

    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends it to arbitrary turbulence statistics. This shows the redundancy of the time-dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughput.

  12. Dynamic detection technology of malicious code for Android system

    Directory of Open Access Journals (Sweden)

    Li Boya

    2017-02-01

    Full Text Available With the increasing popularity of mobile phones, people's dependence on them is rising and the associated security problems are becoming more prominent. Based on the APK file permissions and API function calls in the Android system, this paper proposes a dynamic detection method based on API interception technology to detect malicious code. The experimental results show that this method can effectively detect malicious code in the Android system.

  13. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems as the reliability model are formulated and solved. The technique is not only applicable to reliability-based optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general.

  14. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Science.gov (United States)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical display of engine motions, pressures, and temperatures is included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis, one with three adjustable inputs and one with four. Also, two optimization searches for calculated piston motion are presented, for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  15. Introduction of thermal-hydraulic analysis code and system analysis code for HTGR

    International Nuclear Information System (INIS)

    Tanaka, Mitsuhiro; Izaki, Makoto; Koike, Hiroyuki; Tokumitsu, Masashi

    1984-01-01

    Kawasaki Heavy Industries Ltd. has advanced the development and systematization of analysis codes for heat-transferring flow and control characteristics, with HTGR plants as the main object. To model the flow when shock waves propagate into heating tubes, SALE-3D, which can analyze a complex system, was developed; it is therefore reported in this paper. Concerning the analysis code for control characteristics, the method of sensitivity analysis in a topological space is reported, including an example of application. The flow analysis code SALE-3D analyzes the flow of compressible viscous fluid in a three-dimensional system over the velocity range from the incompressibility limit to supersonic velocity. The fundamental equations and fundamental algorithm of SALE-3D, the calculation of cell volume, the plotting of perspective drawings, and the analysis of the three-dimensional behavior of shock waves propagating in heating tubes after a rupture accident are described. The method of sensitivity analysis was added to the analysis code for control characteristics in a topological space, and blow-down phenomena were analyzed by its application. (Kako, I.)

  16. Nonterminals and codings in defining variations of OL-systems

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The use of nonterminals versus the use of codings in variations of OL-systems is studied. It is shown that the use of nonterminals produces a comparatively low generative capacity in deterministic systems while it produces a comparatively high generative capacity in nondeterministic systems. Finally it is proved that the family of context-free languages is contained in the family generated by codings on propagating OL-systems with a finite set of axioms, which was one of the open problems in [10]. All the results in this paper can be found in [71] and [72].

  17. Fuzzy logic control and optimization system

    Science.gov (United States)

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
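A minimal sketch of the fuzzy-rule-plus-defuzzification idea behind such a controller (a single flat controller with invented membership functions and rule consequents; the patented system is hierarchical and far more elaborate):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_correction(error):
    """Map an error signal to an input correction via three rules and
    weighted-average (Sugeno-style) defuzzification. Membership shapes
    and consequents are illustrative assumptions."""
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # rules: negative error -> raise the input, positive error -> lower it
    weights = [mu_neg, mu_zero, mu_pos]
    outputs = [1.0, 0.0, -1.0]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    return sum(w * o for w, o in zip(weights, outputs)) / total
```

A hierarchical system in the spirit of the patent would feed the outputs of low-level controllers like this one into higher-level fuzzy stages rather than directly into the plant.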

  18. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and
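The fit-plus-statistics workflow (optimize parameters against measurements, then attach Monte Carlo confidence intervals) can be sketched with a toy stand-in: fitting ln K(T) = a + b/T to noisy synthetic "measurements". GEMSFITS couples a GEM speciation solver with NLopt; here plain least squares and invented data only illustrate the workflow:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "experimental" data for ln K(T) = a + b / T (invented values)
a_true, b_true = 2.0, -1500.0
T = np.linspace(298.0, 373.0, 12)
lnK = a_true + b_true / T + rng.normal(0.0, 0.01, T.size)

# linear least-squares fit of the two model parameters
A = np.column_stack([np.ones_like(T), 1.0 / T])
fit, *_ = np.linalg.lstsq(A, lnK, rcond=None)

# Monte Carlo confidence intervals: refit on resampled (perturbed) data
samples = np.array([np.linalg.lstsq(A, lnK + rng.normal(0.0, 0.01, T.size),
                                    rcond=None)[0] for _ in range(500)])
ci_low, ci_high = np.percentile(samples, [2.5, 97.5], axis=0)
```

The spread of the Monte Carlo refits plays the role of the parameter statistics gemsfit2 reports, flagging which fitted parameters are actually well constrained by the data.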

  19. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  20. Optimized Evaluation System to Athletic Food Safety

    OpenAIRE

    Shanshan Li

    2015-01-01

    This study presented a new method of optimizing the evaluation function in athletic food safety information programming by particle swarm optimization. The parameters of the evaluation function are adjusted automatically by a self-optimizing method accomplished through competition, in which the food information system plays against itself with different evaluation functions. The results show that the particle swarm optimization is successfully app...
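The core particle-swarm loop such a self-optimizing scheme relies on can be sketched generically. The study's evaluation functions are not reproduced here, so a standard sphere test function stands in, and all swarm parameters are conventional defaults rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its personal best and the global best position found so far."""
    x = rng.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_x, best_val = pso(lambda z: float(np.sum(z ** 2)), dim=3)
```

In the paper's setting, f would score an evaluation-function parameter vector by self-play results rather than by a closed-form test function.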

  1. Lossless Coding Standards for Space Data Systems

    Science.gov (United States)

    Rice, R. F.

    1996-01-01

    The International Consultative Committee for Space Data Systems (CCSDS) is preparing to issue its first recommendation for a digital data compression standard. Because the space data systems of primary interest are employed to support scientific investigations requiring accurate representation, this initial standard will be restricted to lossless compression.
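The CCSDS lossless recommendation builds on the Rice algorithm, whose split-sample core can be sketched as a plain Golomb-Rice codec: each non-negative integer is split into a unary-coded quotient and a k-bit remainder. This is only the core idea; the actual Recommendation adds preprocessing and adaptive selection among code options:

```python
def rice_encode(values, k):
    """Golomb-Rice encode non-negative integers as a list of bits:
    unary quotient (q ones, then a zero) followed by a k-bit remainder."""
    bits = []
    for n in values:
        q, r = n >> k, n & ((1 << k) - 1)
        bits.extend([1] * q + [0])                             # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))  # remainder
    return bits

def rice_decode(bits, count, k):
    """Invert rice_encode: read unary quotient, then k remainder bits."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:          # count the unary ones
            q, i = q + 1, i + 1
        i += 1                       # skip the terminating zero
        r = 0
        for _ in range(k):           # read k-bit remainder
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

data = [0, 3, 7, 12, 5, 1]
code = rice_encode(data, k=2)
```

The roundtrip is exact, which is the lossless property the standard requires for science data.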

  2. Development of realistic thermal hydraulic system analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Chung, B.D.; Kim, K.D. [and others]

    2002-05-01

    The realistic safety analysis system is essential for nuclear safety research, advanced reactor development, safety analysis in nuclear industry and 'in-house' plant design capability development. In this project, we have developed a best-estimate multi-dimensional thermal-hydraulic system code, MARS, which is based on the integrated version of the RELAP5 and COBRA-TF codes. To improve the realistic analysis capability, we have improved the models for multi-dimensional two-phase flow phenomena and for advanced two-phase flow modeling. In addition, the GUI (Graphic User Interface) feature were developed to enhance the user's convenience. To develop the coupled analysis capability, the MARS code were linked with the three-dimensional reactor kinetics code (MASTER), the core thermal analysis code (COBRA-III/CP), and the best-estimate containment analysis code (CONTEMPT), resulting in MARS/MASTER/COBRA/CONTEMPT. Currently, the MARS code system has been distributed to 18 domestic organizations, including research, industrial, regulatory organizations and universities. The MARS has been being widely used for the safety research of existing PWRs, advanced PWR, CANDU and research reactor, the pre-test analysis of TH experiments, and others.

  3. Development of realistic thermal hydraulic system analysis code

    International Nuclear Information System (INIS)

    Lee, Won Jae; Chung, B. D; Kim, K. D.

    2002-05-01

    The realistic safety analysis system is essential for nuclear safety research, advanced reactor development, safety analysis in nuclear industry and 'in-house' plant design capability development. In this project, we have developed a best-estimate multi-dimensional thermal-hydraulic system code, MARS, which is based on the integrated version of the RELAP5 and COBRA-TF codes. To improve the realistic analysis capability, we have improved the models for multi-dimensional two-phase flow phenomena and for advanced two-phase flow modeling. In addition, the GUI (Graphic User Interface) feature were developed to enhance the user's convenience. To develop the coupled analysis capability, the MARS code were linked with the three-dimensional reactor kinetics code (MASTER), the core thermal analysis code (COBRA-III/CP), and the best-estimate containment analysis code (CONTEMPT), resulting in MARS/MASTER/COBRA/CONTEMPT. Currently, the MARS code system has been distributed to 18 domestic organizations, including research, industrial, regulatory organizations and universities. The MARS has been being widely used for the safety research of existing PWRs, advanced PWR, CANDU and research reactor, the pre-test analysis of TH experiments, and others

  4. A nuclear reload optimization approach using a real coded genetic algorithm with random keys

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The fuel reload of a pressurized water reactor is made whenever the burnup of the fuel assemblies in the core of the reactor reaches a certain value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the core in an optimized way, minimizing the ratio of fuel-assembly cost to maximum burnup while satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core. For decades the problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor core and tested them to verify whether the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded approach of the genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys that transforms the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload. Four recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the four recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the genetic algorithm with the real-number formulation are shown for the nuclear reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
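The random-keys mechanism can be sketched in a few lines: sorting the real-valued genes yields a permutation, so any real-valued recombination of two parents still decodes to a feasible loading order. The assembly count and blend weights below are invented for illustration, not the Angra 1 model:

```python
import numpy as np

rng = np.random.default_rng(42)

def decode_random_keys(chromosome):
    """Random-keys decoding: the ranks of the real genes define a
    permutation, read here as an ordering of fuel-assembly positions."""
    return np.argsort(chromosome)

parent_a = rng.random(8)
parent_b = rng.random(8)

# Intermediate recombination acts on the real genes; the blended child
# still decodes to a valid permutation -- the point of the encoding.
alpha = rng.random(8)
child = alpha * parent_a + (1.0 - alpha) * parent_b
perm = decode_random_keys(child)
```

A binary-coded crossover could produce duplicate or missing assemblies and would need repair; random keys sidestep that entirely.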

  5. Sequence Coding and Search System for licensee event reports: code listings. Volume 2

    International Nuclear Information System (INIS)

    Gallaher, R.B.; Guymon, R.H.; Mays, G.T.; Poore, W.P.; Cagle, R.J.; Harrington, K.H.; Johnson, M.P.

    1985-04-01

    Operating experience data from nuclear power plants are essential for safety and reliability analyses, especially analyses of trends and patterns. The licensee event reports (LERs) that are submitted to the Nuclear Regulatory Commission (NRC) by the nuclear power plant utilities contain much of this data. The NRC's Office for Analysis and Evaluation of Operational Data (AEOD) has developed, under contract with NSIC, a system for codifying the events reported in the LERs. The primary objective of the Sequence Coding and Search System (SCSS) is to reduce the descriptive text of the LERs to coded sequences that are both computer-readable and computer-searchable. This system provides a structured format for detailed coding of component, system, and unit effects as well as personnel errors. The database contains all current LERs submitted by nuclear power plant utilities for events occurring since 1981 and is updated on a continual basis. Volume 2 contains all valid and acceptable codes used for searching and encoding the LER data. This volume contains updated material through amendment 1 to revision 1 of the working version of ORNL/NSIC-223, Vol. 2

  6. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM over frequency-selective channels using irregular low-density parity-check (LDPC codes. Firstly, we introduce a new characterization of the LDPC code irregularity called “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  7. A dual-sided coded-aperture radiation detection system

    International Nuclear Information System (INIS)

    Penny, R.D.; Hood, W.E.; Polichar, R.M.; Cardone, F.H.; Chavez, L.G.; Grubbs, S.G.; Huntley, B.P.; Kuharski, R.A.; Shyffer, R.T.; Fabris, L.; Ziock, K.P.; Labov, S.E.; Nelson, K.

    2011-01-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5×5×50 cm³ cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.

  8. Perspectives on the development of next generation reactor systems safety analysis codes

    International Nuclear Information System (INIS)

    Zhang, H.

    2015-01-01

    tackle the deficiencies in the existing codes. 3) Software design of the next-generation codes needs to take into consideration of having the flexibility to add new models if necessary, as well as to allow for embedded uncertainty quantification, and capability of multi-physics coupling with other codes. 4) The next generation codes need proper verification and validation (V & V) before they can be used to plant applications. New approaches need to be developed to verify and validate complex multi-physics models with multiple time and length scales and advanced modeling techniques. 5) The next generation system analysis codes should be designed to be integrated into probabilistic evaluation to enable a risk-informed safety margin characterization (RISMC) process in order to optimize plant safety and performance by incorporating plant impacts, aging, and degradation processes into the safety analysis. (author)

  9. Perspectives on the development of next generation reactor systems safety analysis codes

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, H., E-mail: Hongbin.Zhang@inl.gov [Idaho National Laboratory, Idaho Falls, ID (United States)

    2015-07-01

    tackle the deficiencies in the existing codes. 3) Software design of the next-generation codes needs to take into consideration of having the flexibility to add new models if necessary, as well as to allow for embedded uncertainty quantification, and capability of multi-physics coupling with other codes. 4) The next generation codes need proper verification and validation (V & V) before they can be used to plant applications. New approaches need to be developed to verify and validate complex multi-physics models with multiple time and length scales and advanced modeling techniques. 5) The next generation system analysis codes should be designed to be integrated into probabilistic evaluation to enable a risk-informed safety margin characterization (RISMC) process in order to optimize plant safety and performance by incorporating plant impacts, aging, and degradation processes into the safety analysis. (author)

  10. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  11. Hydrogen detection systems leak response codes

    International Nuclear Information System (INIS)

    Desmas, T.; Kong, N.; Maupre, J.P.; Schindler, P.; Blanc, D.

    1990-01-01

    A loss of tightness of a water tube inside a steam generator unit (SGU) of a fast reactor is usually monitored by hydrogen detection systems. Such systems have demonstrated in the past their ability to detect a leak in an SGU. However, the increase in size of the SGU and the choice of ferritic material entail improvement of these systems in order to avoid secondary leaks or to limit damage to the tube bundle. The R and D undertaken in France on this subject is presented. (author). 11 refs, 10 figs

  12. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft. The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
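The static scan the study applies to each system can be approximated in miniature: count occurrences of known-risky C calls in source text. This is a toy pattern-match (real analyses parse the code rather than grep it), and the unsafe-function list below is illustrative:

```python
import re

# illustrative subset of the calls the study flags as risky
UNSAFE = ("strcpy", "strcat", "sprintf", "gets", "memcpy", "strlen", "strcmp")

def count_unsafe_calls(c_source):
    """Count textual occurrences of known-risky C calls in source text."""
    counts = {}
    for name in UNSAFE:
        # word boundary plus open paren, so e.g. 'strcpy_s(' is not counted
        counts[name] = len(re.findall(r"\b%s\s*\(" % name, c_source))
    return counts

sample = '''
    char buf[16];
    strcpy(buf, user_input);      /* flagged: no bounds check */
    if (strcmp(buf, "ok") == 0 && strlen(buf) < 16)
        memcpy(dst, buf, strlen(buf));
'''
report = count_unsafe_calls(sample)
```

Aggregating such counts per file and per release is enough to reproduce the kind of historical-trend view the study presents.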

  13. JEMs and incompatible occupational coding systems: Effect of manual and automatic recoding of job codes on exposure assignment

    NARCIS (Netherlands)

    Koeman, T.; Offermans, N.S.M.; Christopher-De Vries, Y.; Slottje, P.; Brandt, P.A. van den; Goldbohm, R.A.; Kromhout, H.; Vermeulen, R.

    2013-01-01

    Background: In epidemiological studies, occupational exposure estimates are often assigned through linkage of job histories to job-exposure matrices (JEMs). However, available JEMs may have a coding system incompatible with the coding system used to code the job histories, necessitating a

  14. Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map

    Directory of Open Access Journals (Sweden)

    Sen Han

    2016-06-01

    Full Text Available Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and the objects in the scene. Using the depth map combined with an existing multiview image, images at any virtual viewpoint position can be synthesized efficiently, allowing more realistic 3D scenes to be displayed. Unlike the conventional 2D texture image, the depth map contains a lot of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when MD coding is used for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of DCT coefficients, at the encoder, a Lagrange optimization approach is designed to determine the number of high-frequency coefficients in the DCT domain to be removed. To keep the computational complexity low, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding reference scheme, the proposed method achieves better central and side rate-distortion performance.

  15. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to offer significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the GATE, PHITS and FLUKA MC simulation codes, previously evaluated for uniform scanning proton beams, needs to be re-examined for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation

  16. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to offer significant advantages for high-precision particle therapy, especially for media containing inhomogeneities. However, the choice of computational parameters in the GATE, PHITS and FLUKA MC simulation codes, previously evaluated for uniform scanning proton beams, needs to be re-examined for spot scanning; that is, the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes, using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently, since the impact of the parameter settings depends on the proton irradiation technique

  17. System verification and validation report for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the Verification and Validation Report for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to facilitate the interpretation of moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, can also be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code will then interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data. The primary purpose of this document is to provide the results of the system testing and the conclusions based thereon. The results of the testing process are documented in the body of the report. Appendix A gives the test plan, including test procedures, used in conducting the tests. Appendix B lists the input data required to conduct the tests, and Appendices C and D list the numerical results of the tests.

  18. Simulation-based optimization of thermal systems

    International Nuclear Information System (INIS)

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results

  19. Integrated Validation System for a Thermal-hydraulic System Code, TASS/SMR-S

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hee-Kyung; Kim, Hyungjun; Kim, Soo Hyoung; Hwang, Young-Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyeon-Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-10-15

    Development, including enhancement and modification, of a thermal-hydraulic system computer code is indispensable for a new reactor, SMART. Usually, thermal-hydraulic system code validation is achieved by comparison with the results of corresponding physical effect tests; in the reactor safety field, a similar concept, referred to as separate effect tests, has been used for a long time. However, there are a great many test data for comparison, because a lot of separate effect tests and integral effect tests are required for code validation, and it is not easy for a code developer to validate a computer code whenever a code modification occurs. IVS automatically produces graphs showing the comparison of the code calculation results with the corresponding test results. IVS was developed for the validation of the TASS/SMR-S code. The code validation could be achieved by comparing code calculation results with corresponding test results, and this comparison is represented as a graph for convenience. IVS is useful before the release of a new code version: the code developer can validate code results easily using IVS. Even during code development, IVS can be used for the validation of code modifications, so the code developer can gain confidence in a modification easily and quickly and is freed from tedious and long validation work. The popular software introduced in IVS supplies good usability and portability.

  20. Optimization of Parameters of Asymptotically Stable Systems

    Directory of Open Access Journals (Sweden)

    Anna Guerman

    2011-01-01

    Full Text Available This work deals with numerical methods of parameter optimization for asymptotically stable systems. We formulate a special mathematical programming problem that allows us to determine the optimal parameters of a stabilizer. This problem involves solutions to a differential equation. We show how to choose the mesh in order to obtain a discrete problem that guarantees the necessary accuracy. The developed methodology is illustrated by an example concerning the optimization of parameters for a satellite stabilization system.

  1. Establishment of computer code system for nuclear reactor design - analysis

    International Nuclear Information System (INIS)

    Subki, I.R.; Santoso, B.; Syaukat, A.; Lee, S.M.

    1996-01-01

    The establishment of a computer code system for nuclear reactor design analysis is described in this paper. This establishment is an effort to provide the capability of running various codes, from nuclear data to reactor design, and to promote the capability for nuclear reactor design analysis, particularly from the neutronics and safety points of view. It is also an effort to enhance the coordination of nuclear code application and development among the various research centres in Indonesia. Very promising results have been obtained with the help of IAEA technical assistance. (author). 6 refs, 1 fig., 1 tab

  2. The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics

    Science.gov (United States)

    Ganander, Hans

    2003-10-01

    For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing, and the relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily, but their calculation times are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion and definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as needs have arisen, and interest in design optimization is growing.

  3. Grid-code of Croatian power system

    International Nuclear Information System (INIS)

    Toljan, I.; Mesic, M.; Kalea, M.; Koscak, Z.

    2003-01-01

    Grid Rules by the Croatian Electricity Utility deal with the control and usage of the Croatian power system's transmission and distribution grid. Furthermore, these rules include the obligations and permissions of power grid users and owners, with the aim of a reliable electricity supply. (author)

  4. Multivariate optimization of production systems

    International Nuclear Information System (INIS)

    Carroll, J.A.; Horne, R.N.

    1992-01-01

    This paper reports that, mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x₁, x₂, x₃, …, xₙ), an optimization scheme will find the combination of these variables that produces an extreme value of the function, whether it is a minimum or a maximum value. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that will yield the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc.

  5. Summary description of the scale modular code system

    International Nuclear Information System (INIS)

    Parks, C.V.

    1987-12-01

    SCALE - a modular code system for Standardized Computer Analyses for Licensing Evaluation - has been developed at Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission staff. The SCALE system utilizes well-established computer codes and methods within standard analytic sequences that allow simplified free-form input, automate the data processing and coupling between codes, and provide accurate and reliable results. System development has been directed at criticality safety, shielding, and heat transfer analysis of spent fuel transport and/or storage casks. However, only a few of the sequences (and none of the individual functional modules) are restricted to cask applications. This report will provide a background on the history of the SCALE development and review the components and their function within the system. The available data libraries are also discussed, together with the automated features that standardize the data processing and systems analysis. 83 refs., 32 figs., 11 tabs

  6. On the automated assessment of nuclear reactor systems code accuracy

    International Nuclear Information System (INIS)

    Kunz, Robert F.; Kasmala, Gerald F.; Mahaffy, John H.; Murray, Christopher J.

    2002-01-01

    An automated code assessment program (ACAP) has been developed to provide quantitative comparisons between nuclear reactor systems (NRS) code results and experimental measurements. The tool provides a suite of metrics for quality of fit to specific data sets, and the means to produce one or more figures of merit (FOM) for a code, based on weighted averages of results from the batch execution of a large number of code-experiment and code-code data comparisons. Accordingly, this tool has the potential to significantly streamline the verification and validation (V and V) processes in NRS code development environments which are characterized by rapidly evolving software, many contributing developers and a large and growing body of validation data. In this paper, a survey of data conditioning and analysis techniques is summarized which focuses on their relevance to NRS code accuracy assessment. A number of methods are considered for their applicability to the automated assessment of the accuracy of NRS code simulations. A variety of data types and computational modeling methods are considered from a spectrum of mathematical and engineering disciplines. The goal of the survey was to identify needs, issues and techniques to be considered in the development of an automated code assessment procedure, to be used in United States Nuclear Regulatory Commission (NRC) advanced thermal-hydraulic (T/H) code consolidation efforts. The ACAP software was designed based in large measure on the findings of this survey. An overview of this tool is summarized and several NRS data applications are provided. The paper is organized as follows: The motivation for this work is first provided by background discussion that summarizes the relevance of this subject matter to the nuclear reactor industry. Next, the spectrum of NRS data types are classified into categories, in order to provide a basis for assessing individual comparison methods. Then, a summary of the survey is provided, where each

  7. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    Science.gov (United States)

    1988-09-14

    AFOSR-87-0350 (1987-1988): Control and System Theory, Optimization, Inverse and Ill-Posed Problems. A considerable variety of research investigations was carried out within the grant areas (control and system theory, optimization, and ill-posed problems). The

  8. Analytical considerations in the code qualification of piping systems

    International Nuclear Information System (INIS)

    Antaki, G.A.

    1995-01-01

    The paper addresses several analytical topics in the design and qualification of piping systems which have a direct bearing on the prediction of stresses in the pipe and hence on the application of the equations of NB, NC and ND-3600 of the ASME Boiler and Pressure Vessel Code. For each of the analytical topics, the paper summarizes the current code requirements, if any, and the industry practice

  9. Trajectory Optimization for Differential Flat Systems

    OpenAIRE

    Kahina Louadj; Benjamas Panomruttanarug; Alexandre Carlos Brandao Ramos; Felix Mora-Camino

    2016-01-01

    The purpose of this communication is to investigate the applicability of variational calculus to the optimization of the operation of differentially flat systems. After introducing characteristic properties of differentially flat systems, the applicability of variational calculus to the optimization of flat output trajectories is demonstrated. Two illustrative examples are also presented.

  10. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal-window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.

  11. Modular ORIGEN-S for multi-physics code systems

    International Nuclear Information System (INIS)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack

    2011-01-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

  12. Modular ORIGEN-S for multi-physics code systems

    Energy Technology Data Exchange (ETDEWEB)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C., E-mail: yesilyurtg@ornl.gov, E-mail: clarnokt@ornl.gov, E-mail: gauldi@ornl.gov [Oak Ridge National Laboratory, TN (United States); Galloway, Jack, E-mail: jack@galloways.net [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2011-07-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

  13. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods thereof. A solution is obtained in the limit of large times as an asymptotically stable point of the dynamics. The obtained solutions are often not globally optimal but good approximations of it. Dynamical system and neural network approaches are appropriate methods for distributed and parallel processing, and because of the parallelization they can be used as models for many industrial problems like manufacturing planning and optimization of flexible manufacturing systems. This is illustrated for an example in distributed robotic systems.

  14. Electric power system applications of optimization

    CERN Document Server

    Momoh, James A

    2008-01-01

    Contents: Introduction; Structure of a Generic Electric Power System; Power System Models; Power System Control; Power System Security Assessment; Power System Optimization as a Function of Time; Review of Optimization Techniques Applicable to Power Systems; Electric Power System Models; Complex Power Concepts; Three-Phase Systems; Per Unit Representation; Synchronous Machine Modeling; Reactive Capability Limits; Prime Movers and Governing Systems; Automatic Gain Control; Transmission Subsystems; Y-Bus Incorporating the Transformer Effect; Load Models; Available Transfer Capability; Illustrative Examples; Power

  15. Optimizing Unmanned Aircraft System Scheduling

    Science.gov (United States)

    2008-06-01

    ASC-U uses a deterministic algorithm to optimize over a given finite time horizon to obtain near-optimal UAS mission area assignments. We set an upper bound on the total number of schedules that can be generated, so as not to create unsolvable ILPs.

  16. Analysis of Coded FHSS Systems with Multiple Access Interference over Generalized Fading Channels

    Directory of Open Access Journals (Sweden)

    Salam A. Zummo

    2009-02-01

    Full Text Available We study the effect of interference on the performance of coded FHSS systems. This is achieved by modeling the physical channel in these systems as a block fading channel. In the derivation of the bit error probability over Nakagami fading channels, we use the exact statistics of the multiple access interference (MAI) in FHSS systems. Because the corresponding expression for the Rician distribution is mathematically intractable, we use the Gaussian approximation to derive the error probability of coded FHSS over Rician fading channels. The effect of pilot-aided channel estimation is studied for Rician fading channels using the Gaussian approximation. From this, the optimal hopping rate in coded FHSS is approximated. Results show that the performance loss due to interference increases as the hopping rate decreases.

  17. [How can the coding quality in the DRG system be determined?].

    Science.gov (United States)

    Kahlmeyer, A; Volkmer, B

    2014-01-01

    The permanent adjustments made to the G-DRG system since 2003 have made the system even less understandable, so that many users have the feeling of feeding data into a black box that gives them a result without their being able to actively use the system itself. While chief physicians, senior physicians, and nursing managers are responsible to management for the results of the billing, they are in most cases not involved in the steps of DRG coding and billing. From this situation a common question arises: "How well does my department code?" This uncertainty is exploited by many commercial vendors, who offer a wide variety of approaches to DRG optimization. The goal of this work is to provide advice on how coding quality can be determined.

  18. Transient and fuel performance analysis with VTT's coupled code system

    International Nuclear Information System (INIS)

    Daavittila, A.; Hamalainen, A.; Raty, H.

    2005-01-01

    VTT (Technical Research Centre of Finland) maintains and further develops a comprehensive safety analysis code system ranging from the basic neutronic libraries to 3-dimensional transient analysis and fuel behaviour analysis codes. The code system is based on various types of couplings between the relevant physical phenomena. The main tools for analyses of reactor transients are presently the 3-dimensional reactor dynamics code HEXTRAN for cores with a hexagonal fuel assembly geometry and TRAB-3D for cores with a quadratic fuel assembly geometry. HEXTRAN has been applied to safety analyses of VVER type reactors since the early 1990s. TRAB-3D is the latest addition to the code system, and has been applied to BWR and PWR analyses in recent years. In this paper it is shown that TRAB-3D accurately calculated the power distribution during the Olkiluoto-1 load rejection test. The results from the 3-dimensional analysis can be used as boundary conditions for more detailed fuel rod analysis. For this purpose a general flow model GENFLO, developed at VTT, has been coupled with the USNRC's FRAPTRAN fuel accident behaviour model. The example case for FRAPTRAN-GENFLO is an ATWS at a BWR plant. The basis for the analysis is an oscillation incident in the Olkiluoto-1 BWR during reactor startup on February 22, 1987. It is shown that the new coupled code FRAPTRAN/GENFLO is quite a promising tool that can handle flow situations and give a detailed analysis of reactor transients.

  19. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability based optimization problem sequentially using quasi-analytical derivatives. Finally...
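
    The element reliability estimate referred to above can be illustrated for the simplest case, a linear limit state g = R - S with independent normal variables, where the Hasofer-Lind reliability index has a closed form. This is a minimal textbook sketch, not the paper's system-level formulation.

```python
from math import erfc, sqrt

def form_beta_linear(mu_r, sig_r, mu_s, sig_s):
    """Hasofer-Lind reliability index for the linear limit state
    g = R - S with independent normal resistance R and load S:
    beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2)."""
    return (mu_r - mu_s) / sqrt(sig_r ** 2 + sig_s ** 2)

def failure_prob(beta):
    """First-order failure probability estimate Pf = Phi(-beta)."""
    return 0.5 * erfc(beta / sqrt(2.0))
```

    A design optimizer of the kind described above would treat constraints such as failure_prob(beta) <= Pf_allowed for each element while minimizing the cost function.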

  20. Implantation of multigroup diffusion code 2DB in the IEAv CDC CYBER 170/750 system, and its preliminary evaluation

    International Nuclear Information System (INIS)

    Prati, A.; Anaf, J.

    1988-09-01

    The IBM version of the multigroup diffusion code 2DB was implemented on the IEAv CDC CYBER 170/750 system. It was optimized with respect to the use of the central memory, limited to 132 K-words, through the memory manager CMM and its partition into three source codes: rectangular and cylindrical geometries, triangular geometry and hexagonal geometry. The rectangular, triangular and hexagonal geometry nodal options were revised and optimized. Sample cases for a fast reactor and a PWR type thermal reactor were studied. The results are presented and analyzed. An updated 2DB code user's manual was written in Portuguese and published separately. (author) [pt

  1. Logistics systems optimization under competition

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Ma, Lijun

    2015-01-01

    environment, decision making for all these critical areas requires more sophisticated mathematical modeling and analysis. Since finding the optimal solution of the MCVRP is computationally expensive, they design a few guiding rules, which exploit the search history, to enhance the search. They conduct...

  2. FAST: An advanced code system for fast reactor transient analysis

    International Nuclear Information System (INIS)

    Mikityuk, Konstantin; Pelloni, Sandro; Coddington, Paul; Bubelis, Evaldas; Chawla, Rakesh

    2005-01-01

    One of the main goals of the FAST project at PSI is to establish a unique analytical code capability for the core and safety analysis of advanced critical (and sub-critical) fast-spectrum systems for a wide range of different coolants. Both static and transient core physics, as well as the behaviour and safety of the power plant as a whole, are studied. The paper discusses the structure of the code system, including the organisation of the interfaces and data exchange. Examples of validation and application of the individual programs, as well as of the complete code system, are provided using studies carried out within the context of designs for experimental accelerator-driven, fast-spectrum systems

  3. Two-dimensional core calculation research for fuel management optimization based on CPACT code

    International Nuclear Information System (INIS)

    Chen Xiaosong; Peng Lianghui; Gang Zhi

    2013-01-01

    Fuel management optimization process requires rapid assessment for the core layout program, and the commonly used methods include two-dimensional diffusion nodal method, perturbation method, neural network method and etc. A two-dimensional loading patterns evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. Axial buckling introduced to simulate the axial leakage was searched in sub-burnup sections to correct the two-dimensional core diffusion calculation results. Meanwhile, in order to get better accuracy, the weight equivalent volume method of the control rod assembly cross-section was improved. (authors)
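
    The axial-buckling search described above can be sketched as a one-dimensional root find: adjust B^2 until the 2D calculation reproduces a target k-eff. The one-group relation and the numbers below are illustrative assumptions, not CPACT's actual model.

```python
def k_eff(b2, k_inf=1.25, m2=60.0):
    """Toy one-group relation: k = k_inf / (1 + M^2 * B^2),
    so k decreases monotonically as axial leakage (B^2) grows."""
    return k_inf / (1.0 + m2 * b2)

def search_buckling(k_target, lo=0.0, hi=0.05, tol=1e-10):
    """Bisection search for the axial buckling B^2 that reproduces
    a target k-eff."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if k_eff(mid) > k_target:
            lo = mid      # still too reactive: needs more leakage
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

    In the code described above the same idea is applied per burnup section, with the 2D diffusion solve playing the role of k_eff.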

  4. Application bar-code system for solid radioactive waste management

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y. H.; Kim, T. K.; Kang, I. S.; Cho, H. S.; Son, J. S. [KAERI, Taejon (Korea, Republic of)

    2004-07-01

    Solid radioactive wastes are generated from the post-irradiated fuel examination facility, the irradiated material examination facility, the research reactor, and the laboratories at KAERI. A bar-code system for solid radioactive waste management at a research organization became necessary while developing RAWMIS (Radioactive Waste Management Integration System), which generates history records, documents, and all kinds of statistics for efficient waste management. This paper analyzes the current state of waste generation and its data, and introduces the design of an input and output application program that feeds the data gathered by the bar-code system into the database.

  5. Improvement of JRR-4 core management code system

    International Nuclear Information System (INIS)

    Izumo, H.; Watanabe, S.; Nagatomi, H.; Hori, N.

    2000-01-01

    In the modification of JRR-4, the fuel was changed from 93% high enrichment uranium aluminized fuel to 20% low enriched uranium silicide fuel in conformity with the framework of the reduced enrichment program for JAERI research reactors. Along with this change, the JRR-4 core management code system, which estimates core excess reactivity, fuel burn-up and so on, was improved as well. It had been difficult for users to operate the former code system because its input and output were in text form. In the new code system (COMMAS-JRR), users are able to operate the code system without writing difficult text-form input. The estimated excess reactivity of the JRR-4 LEU fuel core showed very good agreement with the measured values. The strong points of this new code system are that it is operated simply through window-form screens on a personal workstation equipped with a graphical user interface (GUI), and that it accurately estimates the specific characteristics of the LEU core. (author)

  6. Development of the vacuum system pressure response analysis code PRAC

    International Nuclear Information System (INIS)

    Horie, Tomoyoshi; Kawasaki, Kouzou; Noshiroya, Shyoji; Koizumi, Jun-ichi.

    1985-03-01

    In this report, we present the method and numerical results of the vacuum system pressure response analysis code. Since a fusion apparatus is made up of many vacuum components, it is necessary to analyze the pressure response at any point of the system when the vacuum system is designed or evaluated. For that purpose, evaluation by theoretical solutions alone is insufficient, and a numerical procedure such as the finite difference method is useful. In the PRAC code (Pressure Response Analysis Code), the pressure response is obtained by solving differential equations which are derived from the equilibrium relation of throughputs and contain the time derivative of pressure. As the code considers both molecular and viscous flows, the coefficients of the equations depend on the pressure and the equations become non-linear. This non-linearity is treated as piecewise linear within each time step. Verification of the code is performed for simple problems. The agreement between numerical and theoretical solutions is good. For comparison with measured results, a complicated model of a gas puffing system is analyzed. The agreement is good enough for practical use. This code will be a useful analytical tool for designing and evaluating vacuum systems such as fusion apparatus. (author)
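
    The throughput balance that such a code integrates can be illustrated, for a single pumped volume with a constant pumping speed (so the equation stays linear in this sketch), by a simple explicit time-stepping loop. Function and parameter names here are hypothetical; PRAC itself handles networks of components and pressure-dependent coefficients.

```python
def pump_down(p0, volume, speed, leak=0.0, dt=1e-3, t_end=5.0):
    """Explicit time stepping of the lumped pressure balance
    V * dp/dt = Q_leak - S * p for one pumped vessel, where S is
    the pumping speed and Q_leak a constant gas load."""
    p, t = p0, 0.0
    while t < t_end:
        p += dt * (leak - speed * p) / volume
        t += dt
    return p
```

    With no gas load the pressure decays exponentially toward zero; with a constant leak it settles at the equilibrium p = Q_leak / S, the two limiting behaviours against which such a solver is typically verified.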

  7. General structure and functions of the OPAL optimization system

    International Nuclear Information System (INIS)

    Mikolas, P.; Sustek, J.; Svarny, J.

    2005-01-01

    The presented version of OPAL, the in-core fuel management system, is also under development for core loading optimization of NPP Temelin (a WWER-1000 type reactor). A description of the algorithms of the separate modules was presented in several AER papers. The optimization of NPP Temelin loading patterns comprises problems like preparation of input data for the NPP software, loading-pattern search, fixing and splitting of fuel enrichments, BP assignment, FA rotation and fuel cycle economics. The NPP Temelin code system (a spectral code with a macrocode) has been used in the application. The objective of fuel management is to design a fuel-loading scheme that is capable of producing the required energy at minimum cost while satisfying the safety constraints. Usually the objectives are: a) to meet the energy production requirements (the loaded fuel should have sufficient reactivity to cover the reactivity defects associated with startup as well as the reactivity loss due to fuel depletion); b) to satisfy all safety-related limits (the loaded fuel should preserve adequate power peaking limits (given namely by LOCA) and shutdown margins, with no positive Moderator Temperature Coefficient (MTC)); c) to minimize the power generation cost ($/kWh(e)). The flow of the OPAL optimization process is described in detail and its application to the optimization of NPP Temelin cores is presented (Authors)
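
    Constrained objectives of the kind listed above are often folded into a single penalty-based score during a loading-pattern search. The sketch below is a generic illustration with made-up weights and limits, not OPAL's actual cost model.

```python
def loading_pattern_score(k_eff, peaking, mtc, peaking_limit=1.5):
    """Penalty-based score for one candidate loading pattern:
    reward cycle reactivity (k_eff), penalize violations of the
    power peaking limit and a positive moderator temperature
    coefficient (mtc). Weights (1e3) are illustrative."""
    penalty = 0.0
    if peaking > peaking_limit:
        penalty += 1e3 * (peaking - peaking_limit)
    if mtc > 0.0:
        penalty += 1e3 * mtc
    return k_eff - penalty
```

    A search procedure then simply prefers the candidate with the higher score, so any safety violation dominates the comparison.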

  8. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  9. An Order Coding Genetic Algorithm to Optimize Fuel Reloads in a Nuclear Boiling Water Reactor

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A genetic algorithm is used to optimize the nuclear fuel reload for a boiling water reactor, and an order coding is proposed for the chromosomes, with appropriate crossover and mutation operators. The fitness function was designed so that the genetic algorithm creates fuel reloads that, on one hand, satisfy the constraints for the radial power peaking factor, the minimum critical power ratio, and the maximum linear heat generation rate while optimizing the effective multiplication factor at the beginning and end of the cycle. To find the values of these variables, a neural network trained on the behavior of a reactor simulator was used to predict them. The computation time of the search process is therefore greatly decreased. We validated this method with data from five cycles of the Laguna Verde Nuclear Power Plant in Mexico
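
    An order coding represents each reload as a permutation of fuel assemblies, so the crossover and mutation operators must preserve permutation validity. A common choice, sketched here as an illustration (the paper's exact operators may differ), is order crossover (OX) plus swap mutation:

```python
import random

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a random slice from parent 1,
    then fill the remaining positions with parent 2's genes in
    their original relative order, so the child is a permutation."""
    n = len(p1)
    i, j = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]
    fill = [g for g in p2 if g not in child]
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

def swap_mutation(perm, rng):
    """Exchange two randomly chosen positions; the result is
    still a valid permutation."""
    a, b = rng.sample(range(len(perm)), 2)
    perm = list(perm)
    perm[a], perm[b] = perm[b], perm[a]
    return perm
```

    Both operators guarantee that every assembly appears exactly once in each child, which is the whole point of using an order coding instead of a plain bit string.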

  10. GLOBAL OPTIMIZATION METHODS FOR GRAVITATIONAL LENS SYSTEMS WITH REGULARIZED SOURCES

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2012-01-01

    Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters; the second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions, we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach by applying our code to a subset of the lens systems included in the SLACS survey.

  11. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, BWR and VVER reactors

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Velkov, K. [GRS, Garching (Germany); Lizorkin, M. [Kurchatov-Institute, Moscow (Russian Federation)] [and others

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  12. Application of neutron/gamma transport codes for the design of explosive detection systems

    International Nuclear Information System (INIS)

    Elias, E.; Shayer, Z.

    1994-01-01

    Applications of neutron and gamma transport codes to the design of nuclear techniques for detecting concealed explosive materials are discussed. The methodology of integrating radiation transport computations in the development, optimization and analysis phases of these new technologies is discussed. Transport and Monte Carlo codes are used for proof of concept, to guide system integration, reduce the extent of the experimental program and provide insight into the physical problems involved. The paper concentrates on detection techniques based on thermal and fast neutron interactions in the interrogated object. (authors). 6 refs., 1 tab., 5 figs

  13. Artificial intelligence in power system optimization

    CERN Document Server

    Ongsakul, Weerakorn

    2013-01-01

    With the considerable increase of AI applications, AI is being increasingly used to solve optimization problems in engineering. In the past two decades, the applications of artificial intelligence in power systems have attracted much research. This book covers the current level of applications of artificial intelligence to the optimization problems in power systems. This book serves as a textbook for graduate students in electric power system management and is also useful for those who are interested in using artificial intelligence in power system optimization.

  14. Optimization of some eco-energetic systems

    International Nuclear Information System (INIS)

    Purica, I.; Pavelescu, M.; Stoica, M.

    1976-01-01

    An optimization problem for two eco-energetic systems is described. The first one is close to the actual eco-energetic system in Romania, while the second is a new one, based on nuclear energy as the primary source and hydrogen energy as the secondary source. The optimization problem solved is to find the optimal structure of the systems so that the adopted objective functions, namely the unitary energy cost C and the total pollution P, are minimized at the same time. The problem can be modeled as a bimatrix cooperative mathematical game without side payments. We demonstrate the superiority of the new eco-energetic system. (author)
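
    Minimizing cost C and pollution P "at the same time" amounts to locating the non-dominated system structures. A minimal Pareto filter over candidate structures, with illustrative data, looks like this (it sketches the bi-objective idea, not the game-theoretic solution used in the paper):

```python
def pareto_front(options):
    """Keep the (name, cost C, pollution P) tuples that are not
    dominated by any other option; both objectives are minimized."""
    front = []
    for name, c, p in options:
        dominated = any(
            c2 <= c and p2 <= p and (c2 < c or p2 < p)
            for _, c2, p2 in options
        )
        if not dominated:
            front.append((name, c, p))
    return front
```

    The cooperative-game formulation then selects one structure from this front according to the bargaining solution.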

  15. A Coding System for Analysing a Spoken Text Database.

    Science.gov (United States)

    Cutting, Joan

    1994-01-01

    This paper describes a coding system devised to analyze conversations of graduate students in applied linguistics at Edinburgh University. The system was devised to test the hypothesis that as shared knowledge among conversation participants grows, the textual density of in-group members has more cues than that of strangers. The informal…

  16. Physical-layer network coding in coherent optical OFDM systems.

    Science.gov (United States)

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.

  17. Tritium module for ITER/Tiber system code

    International Nuclear Information System (INIS)

    Finn, P.A.; Willms, S.; Busigin, A.; Kalyanam, K.M.

    1988-01-01

    A tritium module was developed for the ITER/Tiber system code to provide information on capital costs, tritium inventory, power requirements and building volumes for these systems. In the tritium module, the main tritium subsystems (plasma processing, atmospheric cleanup, water cleanup, blanket processing) are each represented by simple scaleable algorithms. 6 refs., 2 tabs

  18. Progress on China nuclear data processing code system

    Science.gov (United States)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the whole energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers with Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained by the NJOY99 [3] processing code. Validation of Ruler has been performed using the WIMSD5B code.
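
    The group-averaging step such a processing code performs reduces, in its simplest form, to flux-weighted collapsing of fine-group cross sections into broad groups. The sketch below uses made-up numbers and ignores self-shielding and scattering matrices; it only illustrates the weighting.

```python
def collapse(sigma_fine, flux_fine, groups):
    """Flux-weighted collapse of fine-group cross sections into
    broad groups: sigma_G = sum(sigma_g * phi_g) / sum(phi_g)
    over the fine groups g belonging to broad group G. Each broad
    group is a half-open (lo, hi) range of fine-group indices."""
    out = []
    for lo, hi in groups:
        num = sum(s * f for s, f in zip(sigma_fine[lo:hi], flux_fine[lo:hi]))
        den = sum(flux_fine[lo:hi])
        out.append(num / den)
    return out
```

    The choice of weighting flux is what distinguishes the various module options (thermal, resonance, fast) mentioned in the abstract.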

  19. Code conversion for system design and safety analysis of NSSS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hae Cho; Kim, Young Tae; Choi, Young Gil; Kim, Hee Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-01-01

    This report describes the overall project work related to the conversion, installation and validation of computer codes used in NSSS design and safety analysis of nuclear power plants. Domain/OS computer codes for system safety analysis are installed and validated on the Apollo DN10000, and the Apollo versions are then converted and installed on the HP9000/700 series with appropriate validation. Also, COOLII and COAST, which are Cyber-version computer codes, are converted into versions for the Apollo DN10000 and HP9000/700 and installed with validation. This report details the whole process of the computer code conversion and installation, as well as the software verification and validation results, which are attached to this report. 12 refs., 8 figs. (author)

  20. Validation of the VTT's reactor physics code system

    International Nuclear Information System (INIS)

    Tanskanen, A.

    1998-01-01

    At VTT Energy several international reactor physics codes and nuclear data libraries are used in a variety of applications. The codes and libraries are under constant development, and every now and then new updated versions are released, which are taken into use as soon as they have been validated at VTT Energy. The primary aim of the validation is to ensure that the code works properly and that it can be used correctly. Moreover, the applicability of the codes and libraries is studied in order to establish their advantages and weak points. The capability of generating program-specific nuclear data for different reactor physics codes starting from the same evaluated data is sometimes of great benefit. VTT Energy has acquired a nuclear data processing system based on the NJOY-94.105 and TRANSX-2.15 processing codes. The validity of the processing system has been demonstrated by generating pointwise (MCNP) and groupwise (ANISN) temperature-dependent cross section sets for the benchmark calculations of the Doppler coefficient of reactivity. At VTT Energy the KENO-VI three-dimensional Monte Carlo code is used in criticality safety analyses. The KENO-VI code and the 44GROUPNDF5 data library have been validated at VTT Energy against the ZR-6 and LR-0 critical experiments. Burnup credit refers to the reduction in reactivity of burned nuclear fuel due to the change in composition during irradiation. VTT Energy has participated in the calculational VVER-440 burnup credit benchmark in order to validate criticality safety calculation tools. (orig.)

  1. General productivity code: productivity optimization of gaseous diffusion cascades. The programmer's guide

    International Nuclear Information System (INIS)

    Tunstall, J.N.

    1975-05-01

    The General Productivity Code is a FORTRAN IV computer program for the IBM System 360. With its model of the productivity of gaseous diffusion cascades, the program is used to determine optimum cascade performance based on specified operating conditions and to aid in the calculation of optimum operating conditions for a complex of diffusion cascades. This documentation of the program is directed primarily to programmers who will be responsible for updating the code as requested by the users. It is also intended to be an aid in training new Productivity Code users and to serve as a general reference manual. Elements of the mathematical model, the input data requirements, the definitions of the various tasks (Instructions) that can be performed, and a detailed description of most FORTRAN variables and program subroutines are presented. A sample problem is also included. (auth)

  2. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    Full Text Available In multiple-input multiple-output (MIMO transmission systems, the channel state information (CSI at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC codes. The q elements in the Galois field, GF(q, are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to fit perfectly with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin criterion. These results are theoretically supported by extrinsic information transfer (EXIT analysis and are confirmed by numerical simulations.
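
    The max-dmin criterion mentioned above ranks precoders by the minimum Euclidean distance between distinct received symbol vectors. A brute-force evaluation of that distance, tractable only for small constellations and antenna counts, can be sketched as follows (generic illustration, not the paper's precoder design):

```python
import itertools
import numpy as np

def min_distance(H, F, symbols):
    """Minimum Euclidean distance between distinct received
    vectors H @ F @ s over all transmit symbol vectors s whose
    entries are drawn from `symbols`. A max-dmin precoder F is
    one that maximizes this quantity for a given channel H."""
    n_t = F.shape[1]
    vecs = [H @ F @ np.array(s)
            for s in itertools.product(symbols, repeat=n_t)]
    return min(np.linalg.norm(a - b)
               for a, b in itertools.combinations(vecs, 2))
```

    Comparing candidate precoders F by this score reproduces, in miniature, the selection rule behind the max-dmin schemes studied in the paper.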

  3. Embedded Systems Design: Optimization Challenges

    DEFF Research Database (Denmark)

    Pop, Paul

    2005-01-01

    Summary form only given. Embedded systems are everywhere: from alarm clocks to PDAs, from mobile phones to cars, almost all the devices we use are controlled by embedded systems. Over 99% of the microprocessors produced today are used in embedded systems, and recently the number of embedded systems...

  4. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

    FLICA-4 is a computer code, developed at the CEA (France), devoted to steady-state and transient thermal-hydraulic analysis of nuclear reactor cores, for small problems (around 100 mesh cells) as well as for large ones (more than 100,000), on either standard workstations or vector supercomputers. As for all time-implicit codes, the most time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (the size of which is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate-size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
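
    The preconditioned conjugate gradient route mentioned for large problems can be sketched with the simplest (Jacobi, i.e. diagonal) preconditioner. This is a generic textbook illustration for symmetric positive definite systems, not FLICA-4's implementation.

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradient with Jacobi (diagonal) preconditioning
    for a symmetric positive definite matrix A."""
    m_inv = 1.0 / np.diag(A)       # inverse of the preconditioner M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                  # residual
    z = m_inv * r                  # preconditioned residual
    p = z.copy()                   # search direction
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = m_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

    The cost per iteration is one matrix-vector product plus vector updates, which is why such iterative methods scale to the 100,000-cell problems mentioned above while direct LU factorization does not.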

  5. Numerical optimization of the ramp-down phase with the RAPTOR code

    Science.gov (United States)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of the plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve the optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, increasing the physical applications of the code. Gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 2017 102011.

  6. Simulation of water hammer phenomena using the system code ATHLET

    Energy Technology Data Exchange (ETDEWEB)

    Bratfisch, Christoph; Koch, Marco K. [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2017-07-15

    Water Hammer Phenomena can endanger the integrity of structures leading to a possible failure of pipes in nuclear power plants as well as in many industrial applications. These phenomena can arise in nuclear power plants in the course of transients and accidents induced by the start-up of auxiliary feed water systems or emergency core cooling systems in combination with rapid acting valves and pumps. To contribute to further development and validation of the code ATHLET (Analysis of Thermalhydraulics of Leaks and Transients), an experiment performed in the test facility Pilot Plant Pipework (PPP) at Fraunhofer UMSICHT is simulated using the code version ATHLET 3.0A.
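
    A quick first estimate of the loads such experiments produce is the classical Joukowsky relation for an instantaneous velocity change in a liquid-filled pipe. ATHLET solves the full transient two-phase equations, so this one-liner is only an order-of-magnitude check with illustrative values.

```python
def joukowsky_surge(rho, wave_speed, dv):
    """Joukowsky estimate of the pressure rise caused by an
    instantaneous velocity change dv in a liquid-filled pipe:
    dp = rho * a * dv, with a the pressure wave propagation speed."""
    return rho * wave_speed * dv
```

    For water (rho = 1000 kg/m^3, a roughly 1200 m/s) an abrupt 2 m/s velocity change already gives a surge of about 24 bar, which illustrates why rapid-acting valves and pump start-ups are the triggering events named above.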

  7. Simulation of water hammer phenomena using the system code ATHLET

    International Nuclear Information System (INIS)

    Bratfisch, Christoph; Koch, Marco K.

    2017-01-01

    Water Hammer Phenomena can endanger the integrity of structures leading to a possible failure of pipes in nuclear power plants as well as in many industrial applications. These phenomena can arise in nuclear power plants in the course of transients and accidents induced by the start-up of auxiliary feed water systems or emergency core cooling systems in combination with rapid acting valves and pumps. To contribute to further development and validation of the code ATHLET (Analysis of Thermalhydraulics of Leaks and Transients), an experiment performed in the test facility Pilot Plant Pipework (PPP) at Fraunhofer UMSICHT is simulated using the code version ATHLET 3.0A.

  8. Development of the versatile reactor analysis code system, MARBLE2

    International Nuclear Information System (INIS)

    Yokoyama, Kenji; Jin, Tomoyuki; Hazama, Taira; Hirai, Yasushi

    2015-07-01

    The second version of the versatile reactor analysis code system, MARBLE2, has been developed. A lot of new functions have been added in MARBLE2 by using the base technology developed in the first version (MARBLE1). Introducing the remaining functions of the conventional code system (JOINT-FR and SAGEP-FR), MARBLE2 enables one to execute almost all analysis functions of the conventional code system with the unified user interfaces of its subsystem, SCHEME. In particular, the sensitivity analysis functionality is available in MARBLE2. On the other hand, new built-in solvers have been developed, and existing ones have been upgraded. Furthermore, some other analysis codes and libraries developed in JAEA have been consolidated and prepared in SCHEME. In addition, several analysis codes developed in the other institutes have been additionally introduced as plug-in solvers. Consequently, gamma-ray transport calculation and heating evaluation become available. As for another subsystem, ORPHEUS, various functionality updates and speed-up techniques have been applied based on user experience of MARBLE1 to enhance its usability. (author)

  9. OSCAR-4 Code System Application to the SAFARI-1 Reactor

    International Nuclear Information System (INIS)

    Stander, Gerhardt; Prinsloo, Rian H.; Tomasevic, Djordje I.; Mueller, Erwin

    2008-01-01

    The OSCAR reactor calculation code system consists of a two-dimensional lattice code, the three-dimensional nodal core simulator code MGRAC and related service codes. The major difference between the new version of the OSCAR system, OSCAR-4, and its predecessor, OSCAR-3, is the new version of MGRAC which contains many new features and model enhancements. In this work some of the major improvements in the nodal diffusion solution method, history tracking, nuclide transmutation and cross section models are described. As part of the validation process of the OSCAR-4 code system (specifically the new MGRAC version), some of the new models are tested by comparing computational results to SAFARI-1 reactor plant data for a number of operational cycles and for varying applications. A specific application of the new features allows correct modeling of, amongst others, the movement of fuel-follower type control rods and dynamic in-core irradiation schedules. It is found that the effect of the improved control rod model, applied over multiple cycles of the SAFARI-1 reactor operation history, has a significant effect on in-cycle reactivity prediction and fuel depletion. (authors)

  10. Development of FBR integrity system code. Basic concept

    International Nuclear Information System (INIS)

    Asayama, Tai

    2001-05-01

    For fast breeder reactors to be commercialized, they must be more reliable, safer, and at the same time economically competitive with future light water reactors. Innovation of the elevated temperature structural design standard is necessary to achieve this goal. The most powerful way is to enlarge the scope of the structural integrity code to cover items other than the design evaluation addressed in existing codes. Items that must be newly covered are prerequisites of design, fabrication, examination, operation and maintenance, etc. This allows designers to choose the most economical combination of design variations that achieves the specific reliability needed for a particular component. By designing components under this concept, a cost-minimum design of a whole plant can be realized. By determining the reliability that must be achieved for a component with risk technologies, further economic improvement can be expected by avoiding excessive quality. Recognizing the necessity for codes based on the new concept, the development of the 'FBR integrity system code' began in 2000. Research and development will last 10 years. For this development, the basic logic and system, as well as the technologies that materialize the concept, are necessary, and they must be developed originally because no existing research is available inside or outside Japan. This report presents the results of the work done in the first year regarding the basic idea, methodology, and structure of the code. (author)

  11. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source-code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  12. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend strongly on the particle number and on the optimization of the code. Once difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups were obtained.
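
A minimal sketch of the baseline sort-and-sweep neighborhood search that the paper's variant builds on (the paper's own variant is not specified here): particle bounds along one axis are sorted, then a sweep collects candidate overlapping pairs for the narrow-phase contact check.

```python
def sort_and_sweep(intervals):
    """intervals: list of (lo, hi) bounds per particle along one axis.
    Returns the set of index pairs whose intervals overlap (candidate contacts)."""
    # Build start/end events and sort them along the sweep axis.
    events = sorted(
        (x, kind, i)
        for i, (lo, hi) in enumerate(intervals)
        for x, kind in ((lo, 0), (hi, 1))  # kind 0 = interval start, 1 = end
    )
    active, pairs = set(), set()
    for _x, kind, i in events:
        if kind == 0:
            # A starting interval overlaps every currently active one.
            pairs.update((min(i, j), max(i, j)) for j in active)
            active.add(i)
        else:
            active.discard(i)
    return pairs
```

In a full broad phase this is run per coordinate axis and the candidate sets are intersected; sorting an almost-sorted list between time steps is nearly linear, which is what makes the method attractive for large particle numbers.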

  13. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast services is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.

  14. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Full Text Available Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast services is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm.
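
The layer-to-MCS assignment described above can be illustrated on a toy instance. The rates, user capabilities, time budget, and utility below are invented for illustration, and the tiny instance is solved by exhaustive search rather than an ILP solver; the paper's actual formulation and constraints may differ.

```python
from itertools import product

MCS_RATE = [1.0, 2.0, 4.0]        # bits/slot for MCS 0 (robust) .. 2 (fast); assumed
USERS_MAX_MCS = [0, 0, 1, 2, 2]   # highest MCS each user can decode; assumed
LAYER_BITS = [4.0, 4.0]           # SVC base layer, enhancement layer; assumed
TIME_BUDGET = 6.0                 # slots available for the multicast; assumed

def utility(assignment):
    # Each user's quality = number of contiguous SVC layers it can decode,
    # starting from the base layer.
    return sum(
        next((l for l, m in enumerate(assignment) if m > cap), len(assignment))
        for cap in USERS_MAX_MCS
    )

def best_assignment():
    """Enumerate every MCS-per-layer assignment that fits the time budget
    and keep the one with the highest total utility."""
    best, best_u = None, -1
    for asg in product(range(len(MCS_RATE)), repeat=len(LAYER_BITS)):
        t = sum(bits / MCS_RATE[m] for bits, m in zip(LAYER_BITS, asg))
        if t <= TIME_BUDGET and utility(asg) > best_u:
            best, best_u = asg, utility(asg)
    return best, best_u
```

On this instance the search picks the robust MCS for the base layer so every user receives basic quality, and a faster MCS for the enhancement layer, mirroring the trade-off the abstract describes.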

  15. Optimizing the Gating System for Steel Castings

    Directory of Open Access Journals (Sweden)

    Jan Jezierski

    2018-04-01

    Full Text Available The article presents an attempt to optimize a gating system for the production of cast steel castings. It is based on John Campbell’s theory and presents original results of computer modelling of typical and optimized gating systems for cast steel castings. The current state of the art in cast steel foundry practice was compared with several optimization proposals. The aim was to find a compromise between the best, theoretically proven gating system version and a version that would be affordable in industrial conditions. The results show that it is possible to achieve a uniform and slow pouring process, even for heavy castings, to preserve their internal quality.

  16. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    Full Text Available This paper describes a procedure for maximizing the RFID transponder read range. This is done by optimizing the magnetic field intensity at the transponder location and the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e. systems working in the near electromagnetic field.

  17. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    The wavefront coding athermal technique can effectively ensure stable imaging of an optical system over a large temperature range, while offering the advantages of a compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its behavior in non-ideal temperature environments and supports verification of the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional structural/thermal/optical integrated analysis process. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index, and lens spacing. We then import the deformed model into ZEMAX for ray tracing and obtain the resulting changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and its image restorability, which provides a basis and reference for the optimal design of wavefront coding athermal systems. The results show that a single-material infrared wavefront coding athermal system can tolerate axial temperature gradients with temperature fluctuations of up to 60°C, much higher than its tolerance to radial temperature gradients.

  18. Analysis of a 12-Finger Rod Drop using RETRAN/MASTER Code System for APR1400

    International Nuclear Information System (INIS)

    Yu, Keuk Jong; You, Sung Chang; Kim, Han Gon

    2009-01-01

    The Optimized Power Reactor 1000 (OPR1000) has 4-finger and 12-finger Control Element Assemblies (CEAs). When a 12-finger CEA is dropped, the Core Protection Calculator System (CPCS) shuts down the reactor to prevent fuel damage that could result from the sudden reactor power peaking. By contrast, the improved CPCS of the Advanced Power Reactor 1400 (APR1400), whose systems are similar to those of the OPR1000, rapidly decreases reactor power using the Reactor Power Cutback System (RPCS) to avoid unwanted reactor trips caused by the CPCS during a 12-finger CEA drop event. RETRAN is a best-estimate code for the analysis of non-LOCA transients. RETRAN control logic, which includes the function of reducing reactor power during a 12-finger CEA drop, has been developed for the APR1400. A MATRAN program has also been developed. MATRAN is an interface program for real-time processing that connects RETRAN with the MASTER code, a nuclear analysis and design code, and supplies the appropriate feedback reactivities from MASTER to RETRAN. The purpose of this study is to analyze the behavior of the reactor core and its primary system during a 12-finger CEA drop using both the conventional RETRAN analysis procedure and the MATRAN analysis procedure. In addition, the axial power distribution and the Axial Shape Index (ASI) are produced by the MATRAN program and confirmed to be within operating limits.

  19. Opacity calculations for extreme physical systems: code RACHEL

    Science.gov (United States)

    Drska, Ladislav; Sinor, Milan

    1996-08-01

    Computer simulations of physical systems under extreme conditions (high density, temperature, etc.) require the availability of extensive sets of atomic data. This paper presents basic information on a self-consistent approach to calculations of radiative opacity, one of the key characteristics of such systems. After a short explanation of general concepts of the atomic physics of extreme systems, the structure of the opacity code RACHEL is discussed and some of its applications are presented.

  20. Analysis of an XADS Target with the System Code TRACE

    International Nuclear Information System (INIS)

    Jaeger, Wadim; Sanchez Espinoza, Victor H.; Feng, Bo

    2008-01-01

    Accelerator-driven systems (ADS) present an option to reduce the radioactive waste of the nuclear industry. The experimental Accelerator-Driven System (XADS) has been designed to investigate the feasibility of using ADS on an industrial scale to burn minor actinides. The target section lies in the middle of the subcritical core and is bombarded by a proton beam to produce spallation neutrons. The thermal energy produced from this reaction requires a heat removal system for the target section. The target is cooled by liquid lead-bismuth-eutectics (LBE) in the primary system which in turn transfers the heat via a heat exchanger (HX) to the secondary coolant, Diphyl THT (DTHT), a synthetic diathermic fluid. Since this design is still in development, a detailed investigation of the system is necessary to evaluate the behavior during normal and transient operations. Due to the lack of experimental facilities and data for ADS, the analyses are mostly done using thermal hydraulic codes. In addition to evaluating the thermal hydraulics of the XADS, this paper also benchmarks a new code developed by the NRC, TRACE, against other established codes. The events used in this study are beam power switch-on/off transients and a loss of heat sink accident. The obtained results from TRACE were in good agreement with the results of various other codes. (authors)

  1. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    Science.gov (United States)

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  2. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  3. Optimization of a computer simulation code to analyse thermal systems for solar energy water heating; Aperfeicoamento de um programa de simulacao computacional para analise de sistemas termicos de aquecimento de agua por energia solar

    Energy Technology Data Exchange (ETDEWEB)

    Pozzebon, Felipe Barin

    2009-02-15

    The potential of solar water heating systems in Brazil is excellent due to the climatic features of the country. The performance of these systems is strongly influenced by the materials used to build them and by the dimensioning of their equipment and components. In the face of global warming, solar energy gains more attention, since it is one of the renewable energy sources that will be widely used to replace some of the existing polluting types of energy. This paper presents the improvement of a software package that simulates water heating systems using solar energy in thermosyphon regime or with forced circulation. TermoSim, as it is called, was initiated at the Solar Labs and is now in version 3.0. The current version is capable of simulating six different arrangement possibilities combined with auxiliary energy: systems with solar collectors and auxiliary heating by gas, electric energy, internal electric energy, electric energy in series with the consumption line, or no auxiliary energy. The software is a tool to aid studies and analyses of solar heating systems; it has a friendly interface that is easy to understand and produces results that are simple to use. This version also allows simulations that consider heat losses at night, a situation in which reverse circulation can occur and mean efficiency can be lost, depending on the simulated system type. Many simulations were run with the mathematical models used, and comparisons were made with climatic data for the city of Caxias do Sul, in Rio Grande do Sul state, Brazil, determining the most efficient system configuration for the simulated water consumption profile. The paper concludes with simple economic analyses intended to estimate the payback time of the investment, taking into account current electricity prices in the simulated area and the possible monthly savings provided by a solar energy heating system. (author)

  4. Revised SWAT. The integrated burnup calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Mochizuki, Hiroki [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kiyosumi, Takehide [The Japan Research Institute, Ltd., Tokyo (Japan)

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline of and a user's manual for the revised SWAT, which includes expanded functions, support for additional machines, and corrections of several bugs reported by users of the previous SWAT. (author)

  5. Revised SWAT. The integrated burnup calculation code system

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Kiyosumi, Takehide

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline of and a user's manual for the revised SWAT, which includes expanded functions, support for additional machines, and corrections of several bugs reported by users of the previous SWAT. (author)

  6. Coded aperture imaging system for nuclear fuel motion detection

    International Nuclear Information System (INIS)

    Stalker, K.T.; Kelly, J.G.

    1980-01-01

    A Coded Aperture Imaging System (CAIS) has been developed at Sandia National Laboratories to image the motion of nuclear fuel rods undergoing tests simulating accident conditions within a liquid metal fast breeder reactor. The tests require that the motion of the test fuel be monitored while it is immersed in a liquid sodium coolant, precluding the use of normal optical means of imaging. However, using the fission gamma rays emitted by the fuel itself and coded aperture techniques, images with 1.5 mm radial and 5 mm axial resolution have been attained. Using an electro-optical detection system coupled to a high speed motion picture camera, a time resolution of one millisecond can be achieved. This paper will discuss the application of coded aperture imaging to the problem, including the design of the one-dimensional Fresnel zone plate apertures used and the special problems arising from the reactor environment and the use of high energy gamma ray photons to form the coded image. Also to be discussed will be the reconstruction techniques employed and the effect of various noise sources on system performance. Finally, some experimental results obtained using the system will be presented.

  7. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  8. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  9. Network coding and its applications to satellite systems

    DEFF Research Database (Denmark)

    Vieira, Fausto; Roetter, Daniel Enrique Lucani

    2015-01-01

    Network coding has its roots in information theory where it was initially proposed as a way to improve a two-node communication using a (broadcasting) relay. For this theoretical construct, a satellite communications system was proposed as an illustrative example, where the relay node would be a ...
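
The classic two-node relay construct mentioned in this record can be sketched in a few lines: instead of forwarding each node's packet separately, the relay (e.g. a satellite) broadcasts their XOR, and each side recovers the other's packet from the copy it already holds. The packet contents below are invented; equal packet lengths are assumed for the toy example.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"hello from A"
pkt_b = b"hi back fm B"            # same length as pkt_a for this sketch
coded = xor_packets(pkt_a, pkt_b)  # relay sends one broadcast instead of two

assert xor_packets(coded, pkt_a) == pkt_b  # node A recovers B's packet
assert xor_packets(coded, pkt_b) == pkt_a  # node B recovers A's packet
```

The saving is one transmission per exchange, which is exactly the gain the information-theoretic example uses to motivate network coding over plain store-and-forward relaying.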

  10. PCS a code system for generating production cross section libraries

    International Nuclear Information System (INIS)

    Cox, L.J.

    1997-01-01

    This document outlines the use of the PCS Code System. It summarizes the execution process for generating FORMAT2000 production cross section files from FORMAT2000 reaction cross section files. It also describes the process of assembling the ASCII versions of the high energy production files made from ENDL and Mark Chadwick's calculations. Descriptions of the function of each code, along with its input, output, and use, are given. This document is under construction. Please submit entries, suggestions, questions, and corrections to ljc@llnl.gov. 3 tabs

  11. PHITS-a particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji; Sato, Tatsuhiko; Iwase, Hiroshi; Nose, Hiroyuki; Nakashima, Hiroshi; Sihver, Lembit

    2006-01-01

    The paper presents a summary of the recent development of the multi-purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS. In particular, we discuss in detail the development of two new models, JAM and JQMD, for high-energy particle interactions incorporated in PHITS, and show comparisons between model calculations and experiments to validate these models. The paper presents three applications of the code: spallation neutron sources, heavy ion therapy, and space radiation. The results and examples shown indicate that PHITS can carry out radiation transport analysis for almost all particles, including heavy ions, over a wide energy range.

  12. ERRORJ. Covariance processing code system for JENDL. Version 2

    International Nuclear Information System (INIS)

    Chiba, Gou

    2003-09-01

    ERRORJ is the covariance processing code system for Japanese Evaluated Nuclear Data Library (JENDL) that can produce group-averaged covariance data to apply it to the uncertainty analysis of nuclear characteristics. ERRORJ can treat the covariance data for cross sections including resonance parameters as well as angular distributions and energy distributions of secondary neutrons which could not be dealt with by former covariance processing codes. In addition, ERRORJ can treat various forms of multi-group cross section and produce multi-group covariance file with various formats. This document describes an outline of ERRORJ and how to use it. (author)

  13. Stochastic optimization of GeantV code by use of genetic algorithms

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter and a geometrical modeler library for describing the detector and locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in case of resource expensive or time consuming evaluations of fitness functions, in order to speed-up the convergence of the black-box optimization problem.
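
The black-box tuning loop described above can be sketched with a simple evolution strategy. The "simulation throughput" objective is replaced here by a toy quadratic cost, and all population sizes and step widths are invented for illustration; they are not the GeantV project's actual settings.

```python
import random

def cost(params):
    # Stand-in for one expensive point-wise evaluation (e.g. a full
    # simulation run at the given parameter set); lower is better.
    return sum((p - 3.0) ** 2 for p in params)

def evolve(dim=4, mu=5, lam=20, sigma=0.5, generations=50, seed=42):
    """(mu + lambda) evolution strategy: mutate parents with Gaussian
    noise, then keep the mu best of parents and offspring (elitism)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-10.0, 10.0) for _ in range(dim)] for _ in range(mu)]
    for _ in range(generations):
        offspring = [
            [x + rng.gauss(0.0, sigma) for x in rng.choice(pop)]
            for _ in range(lam)
        ]
        pop = sorted(pop + offspring, key=cost)[:mu]
    return pop[0]  # best parameter set found
```

Because only point-wise cost evaluations are used, the same loop applies when each evaluation is a full simulation run; the multivariate-analysis operator mentioned in the abstract would then replace or filter the plain Gaussian mutation to reduce the number of expensive evaluations.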

  14. Vehicle systems design optimization study

    Science.gov (United States)

    Gilmour, J. L.

    1980-01-01

    The optimum vehicle configuration and component locations are determined for an electric drive vehicle using the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criteria require the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  15. Vehicle systems design optimization study

    Energy Technology Data Exchange (ETDEWEB)

    Gilmour, J. L.

    1980-04-01

    The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current production internal combustion engine vehicles. It is possible to achieve this goal and also provide passenger and cargo space comparable to a selected current production sub-compact car, either in a unique new design or by utilizing the production vehicle as a base. Necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages - one at the front under the hood and a second at the rear under the cargo area - in order to achieve the desired weight distribution. The weight distribution criteria require the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.

  16. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    International Nuclear Information System (INIS)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin

    2013-01-01

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute wrong instructions or refer to wrong data. Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply Hamming code to the memory system of the processor module of an existing safety-grade PLC (POSAFE-Q) and inspect whether it is suitable. Inspection data are collected and will be referred to in improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation; this will include CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming-code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety-grade PLC. Hamming-code-based works can correct single-bit errors, but they are limited against multi-bit errors.

  17. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin [POSCO Nuclear Technology, Seoul (Korea, Republic of)

    2013-05-15

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute wrong instructions or refer to wrong data. Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply Hamming code to the memory system of the processor module of an existing safety-grade PLC (POSAFE-Q) and inspect whether it is suitable. Inspection data are collected and will be referred to in improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation; this will include CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming-code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety-grade PLC. Hamming-code-based works can correct single-bit errors, but they are limited against multi-bit errors.
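
    The single-error correction the abstract relies on can be illustrated with the classic Hamming(7,4) construction; this is a generic sketch, not the POSAFE-Q memory layout, whose word width and parity placement are not given here.

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits; any single flipped
# bit is located by the syndrome and corrected.  Illustrative sketch only.

def encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """c: 7-bit word read back -> (corrected data bits, error position or 0)."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit, 0 if clean
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error in place
    return [c[2], c[4], c[5], c[6]], syndrome

word = encode([1, 0, 1, 1])
word[5] ^= 1                          # simulate one inverted bit in memory
data, pos = decode(word)
print(data, pos)                      # -> [1, 0, 1, 1] 6
```

    As the abstract notes, this corrects any single-bit error per word but cannot correct multi-bit errors; extending each word with one extra overall parity bit (SECDED) at least detects double errors.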

  18. Results from the Coded Aperture Neutron Imaging System (CANIS)

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Hilton, Nathan R.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well-understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.

  19. Results from the coded aperture neutron imaging system

    International Nuclear Information System (INIS)

    Brubaker, Erik; Steele, John T.; Brennan, James S.; Marleau, Peter

    2010-01-01

    Because of their penetrating power, energetic neutrons and gamma rays (∼1 MeV) offer the best possibility of detecting highly shielded or distant special nuclear material (SNM). Of these, fast neutrons offer the greatest advantage due to their very low and well-understood natural background. We are investigating a new approach to fast-neutron imaging: a coded aperture neutron imaging system (CANIS). Coded aperture neutron imaging should offer a highly efficient solution for improved detection speed, range, and sensitivity. We have demonstrated fast neutron and gamma ray imaging with several different configurations of coded mask patterns and detectors, including an 'active' mask composed of neutron detectors. Here we describe our prototype detector and present some initial results from laboratory tests and demonstrations.
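
    The decoding principle behind a coded aperture imager can be sketched in one dimension: a point source casts a shifted copy of the mask onto the detector, and correlating the counts with a balanced decoding array produces a peak at the source position. The 7-element mask below (a cyclic difference set) is a toy stand-in; the actual CANIS mask patterns are not specified here.

```python
import numpy as np

# 1-D toy of coded-aperture imaging.  The mask comes from the (7, 3, 1) cyclic
# difference set {1, 2, 4} (quadratic residues mod 7), which gives flat sidelobes.
mask = np.array([0, 1, 1, 0, 1, 0, 0])

def detect(source_shift, counts=1000):
    # a point source displaces the mask shadow on the detector by its angle
    return counts * np.roll(mask, source_shift)

def reconstruct(detector):
    # correlate against the balanced decoding array G = 2*mask - 1
    g = 2 * mask - 1
    return np.array([np.dot(detector, np.roll(g, s)) for s in range(len(mask))])

image = reconstruct(detect(3))
print(np.argmax(image))   # -> 3, the true source position
```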

  20. Development of particle and heavy ion transport code system

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    The particle and heavy ion transport code system (PHITS) is a three-dimensional general-purpose Monte Carlo simulation code for describing the transport and reactions of particles and heavy ions in materials. It was developed on the basis of NMTC/JAM for the design and safety assessment of J-PARC. What PHITS is, its physical processes and models, and the development process of the PHITS code are described. As examples of application, the evaluation of neutron optics, cancer treatment by heavy-particle beams, and cosmic radiation are presented. The JAM and JQMD models are used as the physical models. Neutron motion in a six-pole magnetic field and gravitational field, PHITS simulation of C-12 beam tracks and secondary neutron tracks in a small model of the cancer treatment device at HIMAC, and the neutron flux in the Space Shuttle are explained. (S.Y.)

  1. Design of Thermal Systems Using Topology Optimization

    DEFF Research Database (Denmark)

    Haertel, Jan Hendrik Klaas

    ...printed dry-cooled power plant condensers using a simplified thermofluid topology optimization model is presented in another study. A benchmarking of the optimized geometries against a conventional heat exchanger design is conducted and the topology-optimized designs show a superior performance. A thermofluid... The goal of this thesis is to apply topology optimization to the design of different thermal systems such as heat sinks and heat exchangers in order to improve the thermal performance of these systems compared to conventional designs. The design of thermal systems is a complex task that has... of optimized designs are presented within this thesis. The main contribution of the thesis is the development of several numerical optimization models that are applied to different design challenges within thermal engineering. Topology optimization is applied in an industrial project to design the heat rejection...

  2. COSY 5.0 - the fifth order code for corpuscular optical systems

    International Nuclear Information System (INIS)

    Berz, M.; Hoffmann, H.C.; Wollnik, H.

    1987-01-01

    COSY 5.0 is a new computer code for the design of corpuscular optical systems based on the principle of transfer matrices. The particle optical calculations include all image aberrations through fifth order. COSY 5.0 uses canonical coordinates and exploits the symplectic condition to increase the speed of computation. COSY 5.0 contains a library for the computation of matrix elements of all commonly used corpuscular optical elements such as electric and magnetic multipoles and sector fields. The corresponding formulas were generated algebraically by the computer code HAMILTON. Care was taken that the optimization of optical elements is achieved with minimal numerical effort. Finally COSY 5.0 has a very general mnemonic input code resembling a higher programming language. (orig.)
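
    The first-order core of a transfer-matrix code like COSY can be sketched in a few lines: a beamline is the product of its element matrices, and the symplectic condition the abstract mentions is a simple matrix identity. COSY itself carries such maps to fifth order in canonical coordinates, which this 2×2 single-plane sketch with illustrative element values does not attempt.

```python
import numpy as np

# Linear (first-order) transfer-matrix optics in one transverse plane.
# State vector is (x, x'); the rightmost matrix acts on the ray first.

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# a 2f-2f point-to-point imaging system: the (0, 1) element vanishes,
# so every ray from a point object meets again at the image (magnification -1)
system = drift(2.0) @ thin_lens(1.0) @ drift(2.0)

# the symplectic condition M^T J M = J (equivalent to det M = 1 in 2x2)
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(np.allclose(system.T @ J @ system, J))  # -> True
```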

  3. The Vulnerability Assessment Code for Physical Protection System

    International Nuclear Information System (INIS)

    Jang, Sung Soon; Yoo, Ho Sik

    2007-01-01

    To counter increasing terror threats, nuclear facilities maintain a strong physical protection system (PPS). A PPS includes detectors, door locks, fences, regular guard patrols, and a hot line to the nearest military force. To design an efficient PPS and to operate it fully, a vulnerability assessment process is required. Evaluating the PPS of a nuclear facility is a complicated process and, hence, several assessment codes have been developed. The estimation of adversary sequence interruption (EASI) code analyzes vulnerability along a single intrusion path. To evaluate the many paths to a valuable asset in an actual facility, the systematic analysis of vulnerability to intrusion (SAVI) code was developed. KAERI improved SAVI and made the Korean analysis of vulnerability to intrusion (KAVI) code. Existing codes (SAVI and KAVI) have limitations in representing the distances within a facility because they use a simplified model of a PPS called an adversary sequence diagram, in which the positions of doors, sensors and fences are described only by their general area. Thus, the distance between elements is inaccurate and the range effect of sensors cannot be reflected. In this abstract, we suggest accurate and intuitive vulnerability assessment based on raster-map modeling of the PPS. The raster map of a PPS accurately represents the relative position of elements and, thus, the range effect of a sensor can easily be incorporated. Most importantly, the raster map is easy to understand.
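
    A minimal sketch of the raster-map idea, with invented per-cell costs: once the PPS is rasterized into a grid of cells, the most vulnerable intrusion path is a shortest path over cell costs (traversal delay, or detection likelihood from nearby sensors), found here with Dijkstra's algorithm.

```python
import heapq

# Each cell holds the cost of entering it (e.g. seconds of delay or a
# detection penalty contributed by sensors in range).  Values are invented.
delay = [[1, 1, 1, 1],
         [1, 5, 5, 1],
         [1, 5, 0, 1],   # target asset at (2, 2)
         [1, 1, 1, 1]]

def most_vulnerable_path_cost(grid, start, target):
    """Dijkstra over the raster: minimum cumulative cost from start to target."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == target:
            return d
        if d > dist.get((r, c), float("inf")):
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf")

print(most_vulnerable_path_cost(delay, (0, 0), (2, 2)))  # -> 5, skirting the barriers
```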

  4. Optimization of a wearable power system

    Energy Technology Data Exchange (ETDEWEB)

    Kovacevic, I.; Round, S. D.; Kolar, J. W.; Boulouchos, K.

    2008-07-01

    In this paper the optimization of a wearable power system comprising an internal combustion engine, motor/generator, inverter/rectifier, Li-battery pack, DC/DC converters, and controller is performed. The wearable power system must be capable of supplying an average of 20 W for 4 days with a peak power of 200 W at a system weight of less than 4 kg. The main objectives are to select the engine, fuel and battery type, to match the weight of fuel and the number of battery cells, to find the optimal working point of the engine, and to minimize the system weight. The minimization problem is defined in Matlab as a nonlinear constrained optimization task. The optimization procedure returns the optimal system design parameters: a Li-polymer battery with eight cells connected in series for a 28 V DC output voltage, the selection of a gasoline/oil fuel mixture, and an optimal engine working point of 12 krpm for a 4.5 cm{sup 3} 4-stroke engine. (author)
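
    The weight minimization has the shape of a small nonlinear constrained program. The sketch below reproduces that shape in Python rather than Matlab, with assumed round-number specific energies, conversion efficiency, and a one-hour peak-power reserve; none of these are the paper's component data.

```python
from scipy.optimize import minimize

# Choose fuel mass and battery mass so the system delivers 20 W average
# for 4 days (1920 Wh) while minimizing total carried weight.
E_REQ = 20.0 * 96.0               # Wh over the mission
FUEL_WH_PER_KG = 12000.0 * 0.15   # gasoline ~12 kWh/kg at an assumed 15% conversion
BATT_WH_PER_KG = 180.0            # assumed Li-polymer pack specific energy

def weight(x):                    # x = [fuel_kg, battery_kg]
    return x[0] + x[1]

cons = ({"type": "ineq",          # delivered energy must cover the mission
         "fun": lambda x: FUEL_WH_PER_KG * x[0] + BATT_WH_PER_KG * x[1] - E_REQ},
        {"type": "ineq",          # battery alone carries the 200 W peaks for ~1 h
         "fun": lambda x: BATT_WH_PER_KG * x[1] - 200.0})

res = minimize(weight, x0=[1.0, 1.0], constraints=cons,
               bounds=[(0, None), (0, None)])
print(res.x, res.fun)             # fuel supplies most energy; battery sized by peak power
```

    Under these assumed numbers the optimum sits around 2.1 kg, comfortably inside the 4 kg budget; the real trade-off in the paper additionally spans engine working point and cell count.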

  5. Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems

    Science.gov (United States)

    2010-12-01

    Testing and calibration laboratories that comply with ISO/IEC 17025 have demonstrated technical competence for the type of tests and calibrations SCALe undertakes [ISO/IEC 2005]. Successful conformance testing of a software system indicates that the SCALe analysis did not detect violations of a CERT secure coding standard; it does not guarantee that the system is more secure than non-assessed systems, and no study has yet been performed to prove this. Conformity assessment is defined in accordance with ISO/IEC 17000 as "a demonstration that specified requirements relating to a...

  6. Particle and heavy ion transport code system; PHITS

    International Nuclear Information System (INIS)

    Niita, Koji

    2004-01-01

    Intermediate- and high-energy nuclear data are strongly required in design studies of many facilities such as accelerator-driven systems and intense pulsed spallation neutron sources, and also in medical and space technology. There are, however, few evaluated nuclear data for intermediate- and high-energy nuclear reactions. Therefore, models or systematics must be used for the cross sections, which are essential ingredients of a high-energy particle and heavy ion transport code for estimating neutron yield, heat deposition and many other quantities of the transport phenomena in materials. We have developed a general-purpose particle and heavy ion transport Monte Carlo code system, PHITS (Particle and Heavy Ion Transport code System), based on the NMTC/JAM code, through the collaboration of Tohoku University, JAERI and RIST. PHITS has three important ingredients which enable it to calculate (1) high-energy nuclear reactions up to 200 GeV, (2) heavy ion collisions and their transport in materials, and (3) low-energy neutron transport based on evaluated nuclear data. In PHITS, the cross sections of high-energy nuclear reactions are obtained by the JAM model. JAM (Jet AA Microscopic Transport Model) is a hadronic cascade model which explicitly treats all established hadronic states, including resonances, with all hadron-hadron cross sections parametrized based on the resonance model and string model by fitting the available experimental data. PHITS describes the transport of heavy ions and their collisions by making use of the JQMD and SPAR codes. JQMD (JAERI Quantum Molecular Dynamics) is a simulation code for nucleus-nucleus collisions based on molecular dynamics. The SPAR code is widely used to calculate the stopping powers and ranges for charged particles and heavy ions. PHITS includes part of the MCNP4C code, by which the transport of low-energy neutrons, photons and electrons based on evaluated nuclear data can be described.
Furthermore, the high energy nuclear
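
    The elementary kernel every Monte Carlo transport code shares can be shown in a few lines: sample a free-flight distance from an exponential distribution set by the total cross section. This one-group, one-dimensional slab with illustrative numbers is of course a toy next to the physics models PHITS chains onto each collision.

```python
import math
import random

random.seed(0)
SIGMA_T = 1.0          # total macroscopic cross section (1/cm), illustrative
SLAB = 3.0             # slab thickness (cm)

def transmitted(n=100_000):
    """Fraction of source particles crossing the slab without a collision."""
    count = 0
    for _ in range(n):
        x = -math.log(random.random()) / SIGMA_T   # sampled distance to first collision
        if x > SLAB:
            count += 1
    return count / n

p = transmitted()
print(p)               # close to the analytic value exp(-SIGMA_T * SLAB) ~ 0.0498
```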

  7. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    Science.gov (United States)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new family of two-dimensional spectral/spatial codes, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are considered in analyzing the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that the proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of the 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
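
    The exact 2D-DCS construction is defined in the paper; the sketch below only illustrates the generic cyclic-shift idea it builds on: shifting one base sequence yields a code family whose cross-correlations stay well below the in-phase weight, which is what bounds the MAI between subscribers. The base sequence is an invented example.

```python
import numpy as np

# Weight-3 base sequence of length 8 whose set positions {0, 1, 3} have all
# pairwise differences distinct, so any two shifts overlap in at most one chip.
base = np.array([1, 1, 0, 1, 0, 0, 0, 0])

codes = [np.roll(base, k) for k in range(8)]      # one cyclic shift per user

auto = int(np.dot(codes[0], codes[0]))            # in-phase correlation = weight w
cross = max(int(np.dot(codes[0], c)) for c in codes[1:])
print(auto, cross)                                # -> 3 1
```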

  8. Optimization of biotechnological systems through geometric programming

    Directory of Open Access Journals (Sweden)

    Torres Nestor V

    2007-09-01

    Full Text Available Abstract Background In the past, tasks of model-based yield optimization in metabolic engineering were approached either with stoichiometric models or with structured nonlinear models such as S-systems or linear-logarithmic representations. These models stand out among most others because they allow the optimization task to be converted into a linear program, for which efficient solution methods are widely available. For pathway models not in one of these formats, an Indirect Optimization Method (IOM) was developed, where the original model is sequentially represented as an S-system model, optimized in this format with linear programming methods, reinterpreted in the initial model form, and further optimized as necessary. Results A new method is proposed for this task. We show here that the model format of a Generalized Mass Action (GMA) system may be optimized very efficiently with techniques of geometric programming. We briefly review the basics of GMA systems and of geometric programming, demonstrate how the latter may be applied to the former, and illustrate the combined method with a didactic problem and two examples based on models of real systems. The first is a relatively small yet representative model of the anaerobic fermentation pathway in S. cerevisiae, while the second describes the dynamics of the tryptophan operon in E. coli. Both models have previously been used for benchmarking purposes, thus facilitating comparisons with the proposed new method. In these comparisons, the geometric programming method was found to be equal to or better than the earlier methods in terms of successful identification of optima and efficiency. Conclusion GMA systems are of importance because they contain stoichiometric, mass action and S-systems as special cases, along with many other models.
Furthermore, it was previously shown that algebraic equivalence transformations of variables are sufficient to convert virtually any types of dynamical models into
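
    The conversion these methods exploit can be shown in miniature: every GMA rate term is a power law, so in logarithmic variables a yield maximization with bound constraints becomes a linear program. The two-variable "pathway" flux and its kinetic orders below are invented for illustration, not taken from the benchmark models.

```python
import numpy as np
from scipy.optimize import linprog

# Power-law flux V = a * X1^g1 * X2^g2; with y_i = log X_i,
# log V = log a + g . y is linear, so maximizing V is an LP in y.
a = 2.0
g = np.array([0.5, 0.8])                  # kinetic orders (invented)

# concentration bounds 0.5 <= X_i <= 5 become box bounds on y
bounds = [(np.log(0.5), np.log(5.0))] * 2
res = linprog(c=-g, bounds=bounds, method="highs")   # minimize -g.y  <=>  maximize V

y_opt = res.x
V_opt = a * np.exp(g @ y_opt)
print(np.exp(y_opt), V_opt)               # both metabolites at the upper bound 5.0
```

    With both kinetic orders positive the optimum naturally sits at the upper concentration bounds; in the real pathway models, steady-state and enzyme-capacity constraints (also log-linear) keep the solution in the interior.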

  9. Joint Power Allocation for Multicast Systems with Physical-Layer Network Coding

    Directory of Open Access Journals (Sweden)

    Zhu Wei-Ping

    2010-01-01

    Full Text Available This paper addresses the joint power allocation issue in physical-layer network coding (PLNC) of multicast systems with two sources and two destinations communicating via a large number of distributed relays. By maximizing the achievable system rate, a constrained optimization problem is first formulated to jointly allocate powers for the source and relay terminals. Due to the nonconvex nature of the cost function, an iterative algorithm with guaranteed convergence is developed to solve the joint power allocation problem. As an alternative, an upper bound of the achievable rate is also derived to modify the original cost function in order to obtain a convex optimization solution. This approximation is shown to be asymptotically optimal in the sense of maximizing the achievable rate. It is confirmed through Monte Carlo simulations that the proposed joint power allocation schemes are superior to the existing schemes in terms of achievable rate and cumulative distribution function (CDF).
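
    A toy version of the allocation problem, under textbook two-phase rate expressions and unit channel gains (both assumptions, not the paper's multi-relay model): split a total power budget between the two sources and the relay so that the worse of the multiple-access and broadcast phase rates is maximized.

```python
import numpy as np

P = 10.0               # total power budget (linear units, invented)
noise = 1.0

def rate(t):
    """Two-way relay PLNC over 2 phases: t*P split across sources, (1-t)*P to relay."""
    ps = t * P / 2                           # per-source power
    pr = (1 - t) * P                         # relay power
    mac = 0.5 * np.log2(1 + ps / noise)      # multiple-access phase (worse user)
    bc = 0.5 * np.log2(1 + pr / noise)       # broadcast phase
    return min(mac, bc)

ts = np.linspace(0.01, 0.99, 981)
best = ts[np.argmax([rate(t) for t in ts])]
print(best, rate(best))                      # balance point near t = 2/3
```

    The grid search stands in for the paper's iterative algorithm; at the optimum the two phase rates balance, i.e. each source power equals the relay power here.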

  10. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.

    1975-01-01

    OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)

  11. Optimal control applications in electric power systems

    CERN Document Server

    Christensen, G S; Soliman, S A

    1987-01-01

    Significant advances in the field of optimal control have been made over the past few decades. These advances have been well documented in numerous fine publications, and have motivated a number of innovations in electric power system engineering, but they have not yet been collected in book form. Our purpose in writing this book is to provide a description of some of the applications of optimal control techniques to practical power system problems. The book is designed for advanced undergraduate courses in electric power systems, as well as graduate courses in electrical engineering, applied mathematics, and industrial engineering. It is also intended as a self-study aid for practicing personnel involved in the planning and operation of electric power systems for utilities, manufacturers, and consulting and government regulatory agencies. The book consists of seven chapters. It begins with an introductory chapter that briefly reviews the history of optimal control and its power system applications and also p...

  12. Assessment of systems codes and their coupling with CFD codes in thermal–hydraulic applications to innovative reactors

    Energy Technology Data Exchange (ETDEWEB)

    Bandini, G., E-mail: giacomino.bandini@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) (Italy); Polidori, M. [Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA) (Italy); Gerschenfeld, A.; Pialla, D.; Li, S. [Commissariat à l’Energie Atomique (CEA) (France); Ma, W.M.; Kudinov, P.; Jeltsov, M.; Kööp, K. [Royal Institute of Technology (KTH) (Sweden); Huber, K.; Cheng, X.; Bruzzese, C.; Class, A.G.; Prill, D.P. [Karlsruhe Institute of Technology (KIT) (Germany); Papukchiev, A. [Gesellschaft für Anlagen- und Reaktorsicherheit (GRS) (Germany); Geffray, C.; Macian-Juan, R. [Technische Universität München (TUM) (Germany); Maas, L. [Institut de Radioprotection et de Sûreté Nucléaire (IRSN) (France)

    2015-01-15

    Highlights: • The assessment of RELAP5, TRACE and CATHARE system codes on integral experiments is presented. • Code benchmark of CATHARE, DYN2B, and ATHLET on PHENIX natural circulation experiment. • Grid-free pool modelling based on proper orthogonal decomposition for system codes is explained. • The code coupling methodologies are explained. • The coupling of several CFD/system codes is tested against integral experiments. - Abstract: The THINS project of the 7th Framework EU Program on nuclear fission safety is devoted to the investigation of crosscutting thermal–hydraulic issues for innovative nuclear systems. A significant effort in the project has been dedicated to the qualification and validation of system codes currently employed in thermal–hydraulic transient analysis for nuclear reactors. This assessment is based either on already available experimental data, or on the data provided by test campaigns carried out in the frame of THINS project activities. Data provided by TALL and CIRCE facilities were used in the assessment of system codes for HLM reactors, while the PHENIX ultimate natural circulation test was used as reference for a benchmark exercise among system codes for sodium-cooled reactor applications. In addition, a promising grid-free pool model based on proper orthogonal decomposition is proposed to overcome the limits shown by the thermal–hydraulic system codes in the simulation of pool-type systems. Furthermore, multi-scale system-CFD solutions have been developed and validated for innovative nuclear system applications. For this purpose, data from the PHENIX experiments have been used, and data are provided by the tests conducted with new configuration of the TALL-3D facility, which accommodates a 3D test section within the primary circuit. The TALL-3D measurements are currently used for the validation of the coupling between system and CFD codes.

  13. SRAC: JAERI thermal reactor standard code system for reactor design and analysis

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Takano, Hideki; Horikami, Kunihiko; Ishiguro, Yukio; Kaneko, Kunio; Hara, Toshiharu.

    1983-01-01

    The SRAC (Standard Reactor Analysis Code) is a code system for nuclear reactor analysis and design. It is composed of neutron cross section libraries and auxiliary processing codes, neutron spectrum routines, a variety of transport, 1-, 2- and 3-D diffusion routines, and dynamic parameter and cell burn-up routines. By making the best use of the individual code functions in the SRAC system, the user can select either the exact method for an accurate estimate of reactor characteristics or the economical method aiming at a shorter computing time, depending on the purpose of the study. The user can select cell or core calculation; fixed source or eigenvalue problem; transport (collision probability or Sn) theory or diffusion theory. Moreover, smearing and collapsing of macroscopic cross sections are done separately at the user's selection. Special attention is paid to double heterogeneity. Various techniques are employed to access the data storage and to optimize the internal data transfer. Benchmark calculations using the SRAC system have been made extensively for the Keff values of various types of critical assemblies (light water, heavy water and graphite moderated systems, and fast reactor systems). The calculated results show good prediction of the experimental Keff values. (author)

  14. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J M; Ahnert, C; Gomez Santamaria, J; Rodriguez Olabarria, I

    1985-07-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the one-group nodal core calculation system called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs.

  15. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the one-group nodal core calculation system called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3: the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs.

  16. Solution of optimization problems by means of the CASTEM 2000 computer code

    International Nuclear Information System (INIS)

    Charras, Th.; Millard, A.; Verpeaux, P.

    1991-01-01

    In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, positioning of some parts of the robot must be very accurate, which highly depends on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economical (material quantities, design and manufacturing cost) matters. This is precisely the aim of optimization techniques in the frame of structural analysis. A general statement of this problem could be as follows: find the set of parameters which leads to the minimum of a given function and satisfies some constraints. For example, in the case of a robot component, the parameters can be some geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria like a given stiffness and of some manufacturing technological constraints (minimum available thickness, etc.). For nuclear industry purposes, a robust method was chosen and implemented in the new-generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. According to the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example of application will be presented to illustrate the possibilities of the method. (author)
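
    The kind of optimum-design statement described above, in miniature: minimize weight over member thicknesses subject to a stiffness constraint. The two-member model and its numbers are textbook stand-ins, and a general-purpose solver is used here in place of the convex-linearization scheme the code implements.

```python
import numpy as np
from scipy.optimize import minimize

rho_l = np.array([1.0, 2.0])       # weight per unit thickness of each member (invented)

def weight(t):
    return rho_l @ t

def stiffness_margin(t):
    # compliance 1/t1 + 1/t2 must not exceed 2.0 (a stiffness requirement)
    return 2.0 - (1.0 / t[0] + 1.0 / t[1])

res = minimize(weight, x0=[1.0, 1.0],
               bounds=[(0.1, 10.0)] * 2,   # manufacturing limits on thickness
               constraints={"type": "ineq", "fun": stiffness_margin})
print(res.x, res.fun)              # optimum weight (3 + 2*sqrt(2))/2 ~ 2.914
```

    Note that the compliance constraint is already convex in the reciprocal variables 1/t_i, which is exactly why convex linearization works so well on sizing problems of this type.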

  17. Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure

    Energy Technology Data Exchange (ETDEWEB)

    Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J. [Delft University of Technology, NL-2629 JB Delft (Netherlands)

    1998-07-01

    An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, which is a 2 MWth swimming-pool type research reactor. As a black box evaluator, the 3-D nodal code SILWER, which up to now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is a part of PSI's ELCOS package and features optional additional thermal-hydraulic, control rod and xenon poisoning calculations. This allows for fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the in- and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)

  18. Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure

    International Nuclear Information System (INIS)

    Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J.

    1998-01-01

    An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, which is a 2 MWth swimming-pool type research reactor. As a black box evaluator, the 3-D nodal code SILWER, which up to now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is a part of PSI's ELCOS package and features optional additional thermal-hydraulic, control rod and xenon poisoning calculations. This allows for fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the in- and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)
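
    The black-box coupling can be sketched as a heuristic loop that calls the evaluator on each candidate loading pattern. In this sketch the "SILWER run" is a stand-in scoring function and the search is plain simulated annealing over assembly swaps; the abstract does not specify which heuristic the actual tool uses.

```python
import random

random.seed(1)
enrichments = [1.8, 2.4, 2.4, 3.1, 3.1, 3.6]      # fuel assemblies to place in 6 slots

def evaluate(pattern):
    # stand-in for a full SILWER evaluation: an invented score that rewards
    # placing high-enrichment assemblies near slot 0
    return -sum(e * i for i, e in enumerate(pattern))

def anneal(pattern, steps=2000, temp=1.0, cool=0.995):
    """Simulated annealing over loading patterns via random pairwise swaps."""
    best = cur = pattern[:]
    for _ in range(steps):
        cand = cur[:]
        i, j = random.sample(range(len(cand)), 2)   # swap two assemblies
        cand[i], cand[j] = cand[j], cand[i]
        if evaluate(cand) > evaluate(cur) or random.random() < temp:
            cur = cand                               # accept improving (or, early, worse) moves
        if evaluate(cur) > evaluate(best):
            best = cur[:]
        temp *= cool
    return best

layout = anneal(enrichments)
print(layout)          # high enrichment drifts toward slot 0 under this score
```

    The point of the file-handling work mentioned above is exactly this interface: `evaluate` only needs to write an input deck, run the code, and parse the output, with no change to the code itself.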

  19. Development of hydraulic analysis code for optimizing thermochemical IS process reactors

    International Nuclear Information System (INIS)

    Terada, Atsuhiko; Hino, Ryutaro; Hirayama, Toshio; Nakajima, Norihiro; Sugiyama, Hitoshi

    2007-01-01

    The Japan Atomic Energy Agency has been conducting studies on the thermochemical IS process for water-splitting hydrogen production. Based on the test results and know-how obtained through the bench-scale test, a pilot test plant with a hydrogen production performance of 30 Nm 3 /h is being designed conceptually as the next step of the IS process development. In the design of the IS pilot plant, it is important to make the chemical reactors compact with high performance from the viewpoint of plant cost reduction. A new hydraulic analysis code has been developed for optimizing the mixing performance of multi-phase flow involving chemical reactions, especially in the Bunsen reactor, where complex flow patterns with gas-liquid chemical interaction involving flow instability are expected. Preliminary analytical results obtained with the code, especially flow patterns induced by swirling flow, agreed well with those measured in water experiments, which showed a vortex breakdown pattern in a simplified Bunsen reactor. (author)

  20. Optimal Control Development System for Electrical Drives

    Directory of Open Access Journals (Sweden)

    Marian GAICEANU

    2008-08-01

    Full Text Available In this paper an optimal electrical drive development system is presented. It covers both electrical drive types: DC and AC. In order to implement the optimal control for the AC drive system, an Altivar 71 inverter, a Frato magnetic particle brake (as load), a three-phase induction machine, and a dSpace 1104 controller have been used. The on-line solution of the matrix Riccati differential equation (MRDE) is computed by the dSpace 1104 controller, based on the corresponding feedback signals, generating the optimal speed reference for the AC drive system. The optimal speed reference is tracked by the Altivar 71 inverter, leading to energy reduction in the AC drive. The classical control (consisting of rotor-field-oriented control with PI controllers) and the optimal one have been implemented by designing an adequate ControlDesk interface. The three-phase induction machine (IM) is controlled at constant flux. Therefore, the linear dynamic mathematical model of the IM has been obtained. The optimal control law provides transient regimes with minimal energy consumption. The solution obtained by integration of the MRDE is oriented towards numerical implementation by using a zero-order hold. The development system is very useful for researchers, doctoral students or experts training in electrical drives. The experimental results are shown.
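
    For a time-invariant linear model, the steady-state limit of the matrix Riccati differential equation is the algebraic Riccati equation, whose solution gives the optimal state-feedback gain. A sketch with an invented 2-state drive model, not the identified induction-machine model from the paper:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0],
              [0.0, -2.0]])        # toy speed/acceleration dynamics (invented)
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])           # state penalty: speed error weighted most
R = np.array([[1.0]])              # control-effort penalty

P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x

closed = A - B @ K
print(K, np.linalg.eigvals(closed))    # closed-loop poles lie in the left half-plane
```

    The on-line MRDE integration in the paper reduces to this constant gain once the transient of the Riccati solution has died out, which is why the algebraic form is the usual off-line benchmark.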

  1. An Expert System for the Development of Efficient Parallel Code

    Science.gov (United States)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  2. A guide to the AUS modular neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1987-04-01

A general description is given of the AUS modular neutronics code system, which may be used for calculations of a very wide range of fission reactors, fusion blankets and other neutron applications. The present system has cross-section libraries derived from ENDF/B-IV and includes modules which provide for lattice calculations, one-dimensional transport calculations, one-, two- and three-dimensional diffusion calculations, burnup calculations, and flexible editing of results. Details of all system aspects of AUS are provided, but the major individual modules are only outlined. Sufficient information is given to enable other modules to be added to the system.

  3. Distributed magnetic field positioning system using code division multiple access

    Science.gov (United States)

    Prigge, Eric A. (Inventor)

    2003-01-01

An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large 'building-sized' coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.

  4. Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services

    DEFF Research Database (Denmark)

    Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani

    2016-01-01

Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows us to efficiently derive the average number of coded packet...

  5. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k_eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k_eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k_eff values for individual generations in the computer simulation, not the standard deviation of the computed k_eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that all the k_eff values from the separate generations are not statistically independent, since the k_eff of a given generation is a function of the k_eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k_eff are needed.
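The generation-to-generation correlation described above can be illustrated with a toy model. The sketch below is not a Monte Carlo transport run: it stands in an AR(1) process for the per-generation k_eff values and compares the naive standard deviation of the mean (which assumes independent generations) with one computed from batch means, a standard remedy for correlated samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n_gen, rho = 20000, 0.8
noise = rng.normal(0.0, 0.003, n_gen)
k = np.empty(n_gen)
k[0] = 1.0
for i in range(1, n_gen):
    # each generation's k_eff depends on the previous one (correlated chain)
    k[i] = 1.0 + rho * (k[i - 1] - 1.0) + noise[i]

# naive estimate: treats every generation as statistically independent
naive_sd = k.std(ddof=1) / np.sqrt(n_gen)

# batch-means estimate: averages of long batches are nearly independent
batches = k.reshape(200, 100).mean(axis=1)
batch_sd = batches.std(ddof=1) / np.sqrt(len(batches))
```

With rho = 0.8 the true variance of the mean is inflated by roughly (1 + rho)/(1 - rho) = 9, so the batch-means estimate comes out about three times larger than the naive one, mirroring the underestimation the abstract describes.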

  6. Coded aperture material motion detection system for the ACPR

    International Nuclear Information System (INIS)

    McArthur, D.A.; Kelly, J.G.

    1975-01-01

Single LMFBR fuel pins are being irradiated in Sandia's Annular Core Pulsed Reactor (ACPR). In these experiments single fuel pins have been driven well into the melt and vaporization regions in transients with pulse widths of about 5 ms. The ACPR is being upgraded so that it can be used to irradiate bundles of seven LMFBR fuel pins. The coded aperture material motion detection system described is being developed for this upgraded ACPR, and has as its design goals 1 mm transverse resolution (i.e., in the axial and radial directions), depth resolution of a few cm, and time resolution of 0.1 ms. The target date for development of this system is fall 1977. The paper briefly reviews the properties of coded aperture imaging, describes one possible system for the ACPR upgrade, discusses experiments which have been performed to investigate the feasibility of such a system, and briefly describes the further work required to develop such a system. The type of coded aperture to be used has not yet been fixed, but a one-dimensional section of a Fresnel zone plate appears at this time to have significant advantages.

  7. HVAC system optimization - in-building section

    Energy Technology Data Exchange (ETDEWEB)

    Lu Lu; Wenjian Cai; Lihua Xie; Shujiang Li; Yeng Chai Soh [Nanyang Technological Univ., Singapore (Singapore). School of Electrical and Electronic Engineering

    2005-01-01

This paper presents a practical method to optimize the in-building section of centralized heating, ventilation and air-conditioning (HVAC) systems, which consists of indoor air loops and chilled water loops. First, through component characteristic analysis, mathematical models associated with cooling loads and energy consumption for heat exchangers and energy-consuming devices are established. To account for the variation in the cooling load of each end user, an adaptive neuro-fuzzy inference system (ANFIS) is employed to model the duct and pipe networks and to obtain optimal differential pressure (DP) set points from limited sensor information. A mixed-integer nonlinear constrained optimization of system energy is formulated and solved by a modified genetic algorithm. The main feature of our paper is a systematic approach that optimizes the overall system energy consumption rather than that of individual components. A simulation study for a typical centralized HVAC system is provided to compare the proposed optimization method with traditional ones. The results show that the proposed method indeed improves the system performance significantly. (author)

  8. Optimization of photovoltaic power systems

    CERN Document Server

    Rekioua, Djamila

    2012-01-01

Photovoltaic generation is one of the cleanest forms of energy conversion available. One of the advantages offered by solar energy is its potential to provide sustainable electricity in areas not served by the conventional power grid. Optimisation of Photovoltaic Power Systems details explicit modelling, control and optimisation of the most popular stand-alone applications such as pumping, power supply, and desalination. Each section is concluded by an example using the MATLAB(R) and Simulink(R) packages to help the reader understand and evaluate the performance of different photovoltaic systems.

  9. Collaborative Systems Driven Aircraft Configuration Design Optimization

    OpenAIRE

    Shiva Prakasha, Prajwal; Ciampa, Pier Davide; Nagel, Björn

    2016-01-01

A collaborative, inside-out aircraft design approach is presented in this paper. The approach uses physics-based analysis to evaluate the correlations between the airframe design and sub-systems integration from the early design process, and to exploit the synergies within a simultaneous optimization process. Further, the disciplinary analysis modules involved in the optimization task are located in different organizations. Hence, the airframe and subsystem design tools are integrated ...

  10. Adaptive stimulus optimization for sensory systems neuroscience

    OpenAIRE

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system...

  11. V.S.O.P.-computer code system for reactor physics and fuel cycle simulation

    International Nuclear Information System (INIS)

    Teuchert, E.; Hansen, U.; Haas, K.A.

    1980-03-01

V.S.O.P. (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross-section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shutdown features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. A limitation of the storage requirement to 360 K-bytes is achieved by an overlay structure. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. Besides its use in research and development work for the high temperature reactor, the system has been applied successfully to LWRs and Heavy Water Reactors. (orig.) [de]

  12. Genetic optimization of steam multi-turbines system

    International Nuclear Information System (INIS)

    Olszewski, Pawel

    2014-01-01

An optimization analysis of a partially loaded cogeneration, multiple-stage steam turbine system was numerically investigated using an own-developed code (C++). The system can be controlled by the following variables: fresh steam temperature, pressure, and flow rates through all stages of the steam turbines. Five strategies, four thermodynamic and one economical, which quantify system operation, were defined and discussed as optimization functions. The mathematical model of the steam turbines calculates steam properties according to the formulation proposed by the International Association for the Properties of Water and Steam. The genetic algorithm GENOCOP was implemented as a solving engine for the nonlinear problem with constraint handling. Using the formulated methodology, an example solution for a partially loaded system, composed of five steam turbines (30 input variables) with different characteristics, was obtained for the five strategies. The genetic algorithm found multiple solutions (various input parameter sets) giving similar overall results. In real applications this allows for appropriate scheduling of machine operation that would equalize the load over time on every system component. Based on these results, three strategies were chosen as the most comprehensive: first-law energy efficiency maximization, exergy efficiency maximization, and total equivalent energy minimization. These strategies can be successfully used in the optimization of real cogeneration applications. - Highlights: • Genetic optimization model for a set of five various steam turbines was presented. • Four various thermodynamic optimization strategies were proposed and discussed. • Operational parameters (steam pressure, temperature, flow) influence was examined. • Genetic algorithm generated optimal solutions giving the best estimator values. • It has been found that similar energy effects can be obtained for various inputs.
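The constrained genetic optimization can be illustrated schematically. The sketch below is not GENOCOP and not the turbine model: it is a generic real-coded genetic algorithm with a quadratic penalty for constraint handling, minimizing a placeholder objective f subject to a placeholder constraint g(x) <= 0, just to show the mechanics.

```python
import random

def f(x):                       # stand-in objective (e.g. energy use)
    return sum((xi - 0.7) ** 2 for xi in x)

def g(x):                       # stand-in operating constraint
    return sum(x) - 2.0         # feasible when sum(x) <= 2

def fitness(x, penalty=100.0):
    # penalized objective: infeasible points pay a quadratic price
    return f(x) + penalty * max(0.0, g(x)) ** 2

def ga(dim=5, pop_size=40, gens=200, seed=3):
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            w = rng.random()
            # arithmetic crossover plus small Gaussian mutation, clipped to [0, 1]
            child = [min(1.0, max(0.0, w * ai + (1 - w) * bi + rng.gauss(0.0, 0.02)))
                     for ai, bi in zip(a, b)]
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga()
```

The penalized optimum sits near x_i = 0.4 (sum just at the constraint boundary); GENOCOP itself handles linear constraints more directly, which is one reason it suits this kind of scheduling problem.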

  13. A new two dimensional spectral/spatial multi-diagonal code for noncoherent optical code division multiple access (OCDMA) systems

    Science.gov (United States)

    Kadhim, Rasim Azeez; Fadhil, Hilal Adnan; Aljunid, S. A.; Razalli, Mohamad Shahrazel

    2014-10-01

A new two-dimensional code family, namely the two-dimensional multi-diagonal (2D-MD) codes, is proposed for spectral/spatial non-coherent OCDMA systems based on the one-dimensional MD code. Since the MD code has the property of zero cross-correlation, the proposed 2D-MD code also has this property, so that multiple access interference (MAI) is fully eliminated and the phase-induced intensity noise (PIIN) is suppressed. Code performance is analyzed in terms of bit error rate (BER) while considering the effects of shot noise, PIIN, and thermal noise. The performance of the proposed code is compared with the related MD, modified quadratic congruence (MQC), two-dimensional perfect difference (2D-PD) and two-dimensional diluted perfect difference (2D-DPD) codes. The analytical and simulation results reveal that the proposed 2D-MD code outperforms the other codes. Moreover, a large number of simultaneous users can be accommodated at low BER and high data rates.
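The zero cross-correlation property that eliminates MAI can be demonstrated with a simplified stand-in (this is not the exact MD or 2D-MD construction): assigning each user a disjoint set of chip positions yields pairwise cross correlation of exactly zero.

```python
import numpy as np

def zcc_codes(K, w):
    """K codewords of weight w over K*w chips, pairwise cross correlation 0."""
    C = np.zeros((K, K * w), dtype=int)
    for u in range(K):
        C[u, u * w:(u + 1) * w] = 1   # user u's chips occupy a unique block
    return C

C = zcc_codes(K=4, w=3)
cross = C @ C.T   # Gram matrix: diagonal = code weight, off-diagonal = 0
```

An off-diagonal of zero means no spectral slot is shared between users, which is the structural reason MAI (and with it the dominant PIIN contribution) vanishes in zero cross-correlation code families.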

  14. An optimized cosine-modulated nonuniform filter bank design for subband coding of ECG signal

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2015-07-01

Full Text Available A simple iterative technique for the design of nonuniform cosine-modulated filter banks (CMFBs) is presented in this paper. The proposed technique employs a single parameter for optimization. The nonuniform cosine-modulated filter banks are derived by merging the adjacent filters of uniform cosine-modulated filter banks. The prototype filter is designed with the aid of different adjustable window functions, such as Kaiser, Cosh and Exponential, using the constrained equiripple finite impulse response (FIR) digital filter design technique. In this method, either the cut-off frequency or the passband edge frequency is varied in order to adjust the filter coefficients so that the reconstruction error is minimized toward zero. The performance and effectiveness of the proposed method in terms of peak reconstruction error (PRE), aliasing distortion (AD), computational (CPU) time, and number of iterations (NOI) are shown through numerical examples and comparative studies. Finally, the technique is exploited for the subband coding of electrocardiogram (ECG) and speech signals.
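The merging of adjacent filters of a uniform cosine-modulated bank can be sketched as follows. All parameters here are hypothetical (a Kaiser-windowed sinc prototype with arbitrary length and beta, not the paper's constrained-equiripple design): two adjacent channels of a uniform 8-channel bank are summed to form one wider nonuniform channel.

```python
import numpy as np

def cmfb(M, N, beta=8.0):
    """Analysis filters h_k of a uniform M-channel cosine-modulated bank."""
    n = np.arange(N)
    t = n - (N - 1) / 2
    wc = np.pi / (2 * M)                              # prototype cutoff
    h = (wc / np.pi) * np.sinc(wc * t / np.pi) * np.kaiser(N, beta)
    h /= h.sum()                                      # unit DC gain prototype
    return [2 * h * np.cos((2 * k + 1) * np.pi / (2 * M) * t + (-1) ** k * np.pi / 4)
            for k in range(M)]

def mag(h, w):
    """Magnitude of the DTFT of h at frequency w."""
    return abs(np.exp(-1j * w * np.arange(len(h))) @ h)

M, N = 8, 128
bank = cmfb(M, N)
merged = bank[2] + bank[3]                            # one wider nonuniform channel
centers = [(2 * k + 1) * np.pi / (2 * M) for k in range(M)]
```

The merged filter passes both constituent bands (high magnitude at both original band centers) while remaining in deep stopband elsewhere, which is the structural trick behind deriving a nonuniform bank from a uniform one.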

  15. MatLab script to C code converter for embedded processors of FLASH LLRF control system

    Science.gov (United States)

    Bujnowski, K.; Siemionczyk, A.; Pucyk, P.; Szewiński, J.; Pożniak, K. T.; Romaniuk, R. S.

    2008-01-01

The low-level RF control system (LLRF) of the FEL serves to stabilize the electromagnetic (EM) field in the superconducting niobium, resonant, microwave cavities and to control the high-power (MW) klystron. The LLRF system of the FLASH accelerator is based on FPGA technology and embedded microprocessors. Basic and auxiliary functions of the system are listed, as well as the algorithms used for superconducting cavity parameter identification. These algorithms were prepared originally in Matlab. The main part of the paper presents the implementation of the cavity parameter identification algorithm in a PowerPC processor embedded in the FPGA circuit VirtexIIPro. The construction of a very compact Matlab-script-to-C-code converter, referred to as M2C, is presented. The application is designed specifically for embedded systems with very confined resources, and the generated code is optimized for size. The code should be transferable between different hardware platforms. The converter generates code for Linux and for stand-alone applications. The functional structure of the program and its mode of operation are described. The Flex and Bison tools were used for the construction of the converter. The paper concludes with an example of the M2C application to convert a complex identification algorithm for superconducting cavities in the FLASH laser.

  16. Computer codes and methods for simulating accelerator driven systems

    International Nuclear Information System (INIS)

    Sartori, E.; Byung Chan Na

    2003-01-01

A large set of computer codes and associated data libraries has been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different information centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, the environmental impact of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADSs. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given of the effects of inappropriate or 'blind' use of existing tools for ADS. Reference is made to available experimental data that can be used for validating the methods. Finally, some international activities linked to the different computational aspects are described briefly. (author)

  17. Multiobjective Collaborative Optimization of Systems of Systems

    National Research Council Canada - National Science Library

    Wolf, Robert A

    2005-01-01

...; in other words, an inefficient design of the system of systems. This thesis examines the simultaneous design of several ships, using the sea base concept as an example application of a network of ships working together...

  18. Optimal reduction of flexible dynamic system

    International Nuclear Information System (INIS)

    Jankovic, J.

    1994-01-01

Dynamic system reduction is a basic procedure in various problems of active control synthesis for flexible structures. This paper presents a direct method for system reduction by explicit extraction of the modes included in the reduced model. A criterion for optimal discrete approximation of the system in synthesizing the reduced dynamic model is also presented. The proposed method of system decomposition is discussed in relation to the Schur method of solving the matrix algebraic Riccati equation as a condition for system reduction. The procedure for reducing a flexible system using this method is presented, together with a corresponding example. The procedure is powerful in problems of active control synthesis for flexible-system vibrations.
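Mode extraction for model reduction can be sketched in a generic form (this is plain modal truncation, not the paper's exact decomposition or its Schur-based criterion): project the state-space model onto a selected subset of eigenmodes.

```python
import numpy as np

def modal_reduce(A, B, C, keep):
    """Project (A, B, C) onto the `keep` slowest eigenmodes of A."""
    lam, V = np.linalg.eig(A)
    order = np.argsort(np.abs(lam.real))      # least-damped (dominant) modes first
    T = V[:, order[:keep]]                    # modal basis of the retained modes
    Ti = np.linalg.pinv(T)
    return Ti @ A @ T, Ti @ B, C @ T

# Hypothetical 4-state lightly damped flexible system: one slow and one fast
# oscillatory mode; keep only the slow conjugate pair.
A = np.array([[0.0,  1.0,   0.0,  0.0],
              [-1.0, -0.01,  0.0,  0.0],
              [0.0,  0.0,    0.0,  1.0],
              [0.0,  0.0, -100.0, -0.1]])
B = np.array([[0.0], [1.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 1.0, 0.0]])

Ar, Br, Cr = modal_reduce(A, B, C, keep=2)
```

Because the projection basis consists of exact eigenvectors, the reduced matrix Ar retains exactly the eigenvalues of the extracted modes, which is the defining property of reduction by explicit mode extraction.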

  19. The Optimization of power reactor control system

    International Nuclear Information System (INIS)

    Danupoyo, S.D.

    1997-01-01

A power reactor is an important part of a nuclear power plant's electrical systems. Success in controlling the power reactor establishes the safety of the whole power plant. Until now, the power reactor has been controlled by a classical control system designed with an output feedback method. To meet safety requirements that are now more restrictive, the power reactor control system currently in use should be modified. This paper describes a power reactor control system designed with a state feedback method, optimized with the LQG (linear quadratic Gaussian) method and equipped with a state estimator. A pressurized-water reactor, described by point kinetics with one group of delayed neutrons, has been used as the model. The results of simulation testing show that the optimized control system can control the power reactor more effectively and efficiently than the classical control system.

  20. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A mixture-of-Gaussians distribution is applied in the compressive image space to model the background image and to perform foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented, and an l1 optimization algorithm is used to solve for the sparse coefficients of the templates. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
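The l1 sparse-representation step can be sketched with ISTA, a standard iterative solver (the paper does not specify which l1 algorithm it uses, and the dictionary and observation below are synthetic): represent an observation y as a sparse combination of dictionary templates D by minimizing 0.5*||y - D a||^2 + lam*||a||_1.

```python
import numpy as np

def ista(D, y, lam=0.05, iters=500):
    """Iterative shrinkage-thresholding for the l1-penalized least squares."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(iters):
        g = D.T @ (D @ a - y)                  # gradient of the data-fit term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(1)
D = rng.normal(size=(60, 120))
D /= np.linalg.norm(D, axis=0)                 # unit-norm "templates"
a_true = np.zeros(120)
a_true[[5, 40, 77]] = [1.2, -0.8, 0.6]         # sparse ground-truth coefficients
y = D @ a_true
a_hat = ista(D, y)
```

In the paper's setting the columns of D would be target and noise templates and the support of a_hat indicates which templates explain the observed compressive measurements.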

  1. Channel estimation for physical layer network coding systems

    CERN Document Server

    Gao, Feifei; Wang, Gongpu

    2014-01-01

This SpringerBrief presents channel estimation strategies for physical layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions that are different from the traditional point-to-point systems and unidirectional relay systems. The authors discuss channel estimation strategies over typical fading scenarios, including frequency-flat fading, frequency-selective fading and time-selective fading, as well as future research directions. Chapters explore the performance...

  2. Photovoltaic power systems and the National Electrical Code: Suggested practices

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, J. [New Mexico State Univ., Las Cruces, NM (United States). Southwest Technology Development Inst.

    1996-12-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently. Application of this information and results obtained are the responsibility of the user.

  3. Photovoltaic Power Systems and the National Electrical Code: Suggested Practices

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-02-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently.

  4. Impact of Different Spreading Codes Using FEC on DWT Based MC-CDMA System

    OpenAIRE

    Masum, Saleh; Kabir, M. Hasnat; Islam, Md. Matiqul; Shams, Rifat Ara; Ullah, Shaikh Enayet

    2012-01-01

The effect of different spreading codes in a DWT-based MC-CDMA wireless communication system is investigated. In this paper, we present the bit error rate (BER) performance of different spreading codes (Walsh-Hadamard code, orthogonal Gold code and Golay complementary sequences) using forward error correction (FEC) in the proposed system. The data are analyzed and compared among the different spreading codes in both coded and uncoded cases. It is found via computer simulation that the performance...

  5. Structure and operation of the ITS code system

    International Nuclear Information System (INIS)

    Halbleib, J.

    1988-01-01

The TIGER series of time-independent coupled electron-photon Monte Carlo transport codes is a group of multimaterial and multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron-photon cascade by combining microscopic photon transport with a macroscopic random walk for electron transport. Major contributors to its evolution are listed. The author and his associates are primarily code users rather than code developers, and have borrowed freely from existing work wherever possible. Nevertheless, their efforts have resulted in various software packages for describing the production and transport of the electron-photon cascade that they found sufficiently useful to warrant dissemination through the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory. The ITS system (Integrated TIGER Series) represents the organization and integration of this combined software, along with much additional capability from previously unreleased work, into a single convenient package of exceptional user friendliness and portability. Emphasis is on simplicity and flexibility of application without sacrificing the rigor or sophistication of the physical model.

  6. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
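The decorrelate-quantize-reconstruct pipeline described above can be illustrated with a toy two-band Haar example (the article's filter banks and adaptive arithmetic coder are more elaborate; this only shows the structure): analyze into approximation and detail bands, coarsely quantize the detail band to gain compression, and synthesize.

```python
import numpy as np

def haar_analyze(x):
    """Split x into approximation and detail subbands (orthonormal Haar)."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def haar_synthesize(a, d):
    """Exact inverse of haar_analyze."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)               # stand-in for a seismic trace
a, d = haar_analyze(x)
step = 0.05
d_q = step * np.round(d / step)             # uniform quantization of detail band
x_hat = haar_synthesize(a, d_q)             # controlled, bounded distortion
```

Without quantization the reconstruction is exact (perfect reconstruction), and the quantization error in the detail band maps to a reconstruction error bounded by step/(2*sqrt(2)) per sample, which is the "controlled amount of distortion" the abstract refers to.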

  7. Optimizing a mobile robot control system using GPU acceleration

    Science.gov (United States)

    Tuck, Nat; McGuinness, Michael; Martin, Fred

    2012-01-01

    This paper describes our attempt to optimize a robot control program for the Intelligent Ground Vehicle Competition (IGVC) by running computationally intensive portions of the system on a commodity graphics processing unit (GPU). The IGVC Autonomous Challenge requires a control program that performs a number of different computationally intensive tasks ranging from computer vision to path planning. For the 2011 competition our Robot Operating System (ROS) based control system would not run comfortably on the multicore CPU on our custom robot platform. The process of profiling the ROS control program and selecting appropriate modules for porting to run on a GPU is described. A GPU-targeting compiler, Bacon, is used to speed up development and help optimize the ported modules. The impact of the ported modules on overall performance is discussed. We conclude that GPU optimization can free a significant amount of CPU resources with minimal effort for expensive user-written code, but that replacing heavily-optimized library functions is more difficult, and a much less efficient use of time.

  8. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    Full Text Available Purpose. To demonstrate feasibility of the proposed integrated optimization of various MTS parameters to reduce capital investments as well as decrease any operational and maintenance expense. This will make use of MTS reasonable. At present, the Maglev Transport Systems (MTS for High-Speed Ground Transportation (HSGT almost do not apply. Significant capital investments, high operational and maintenance costs are the main reasons why Maglev Transport Systems (MTS are hardly currently used for the High-Speed Ground Transportation (HSGT. Therefore, this article justifies use of Theory of Complex Optimization of Transport (TCOT, developed by one of the co-authors, to reduce MTS costs. Methodology. According to TCOT, authors developed an abstract model of the generalized transport system (AMSTG. This model mathematically determines the optimal balance between all components of the system and thus provides the ultimate adaptation of any transport systems to the conditions of its application. To identify areas for effective use of MTS, by TCOT, the authors developed a dynamic model of distribution and expansion of spheres of effective use of transport systems (DMRRSEPTS. Based on this model, the most efficient transport system was selected for each individual track. The main estimated criterion at determination of efficiency of application of MTS is the size of the specific transportation tariff received from calculation of payback of total given expenses to a standard payback period or term of granting the credit. Findings. The completed multiple calculations of four types of MTS: TRANSRAPID, MLX01, TRANSMAG and TRANSPROGRESS demonstrated efficiency of the integrated optimization of the parameters of such systems. This research made possible expending the scope of effective usage of MTS in about 2 times. The achieved results were presented at many international conferences in Germany, Switzerland, United States, China, Ukraine, etc. 
Using MTS as an

  9. Real-time validation of receiver state information in optical space-time block code systems.

    Science.gov (United States)

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
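    The CSI validation idea described above, a moving average of the maximum-likelihood decoder's metric, can be sketched as follows; the metric values, window size, and alarm threshold are illustrative assumptions, not the paper's parameters.

```python
from collections import deque

class CSIValidator:
    """Flag stale channel state information (CSI) by tracking a moving
    average of a maximum-likelihood decoder's error metric."""

    def __init__(self, window=4, threshold=0.5):
        self.window = deque(maxlen=window)   # sliding window of metrics
        self.threshold = threshold           # hypothetical alarm level

    def update(self, decoder_metric):
        # decoder_metric: per-block ML decoding residual; it grows as the
        # true optical channel drifts away from the CSI the decoder uses.
        self.window.append(decoder_metric)
        avg = sum(self.window) / len(self.window)
        return avg > self.threshold          # True -> re-estimate the CSI

v = CSIValidator(window=4, threshold=0.5)
flags = [v.update(m) for m in (0.1, 0.2, 0.2, 0.3, 0.9, 1.1, 1.2, 1.3)]
# The alarm stays quiet while the channel is static and trips once the
# moving average of the metric crosses the threshold.
```

    Because the average is taken over a window, single noisy decoder readings do not trigger a spurious CSI re-estimation.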

  10. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    Considers the problem of sensor configuration for complex systems. Our approach involves definition of an appropriate optimality criterion or performance measure, and description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs. The procedure for sensor configuration is based on simultaneous perturbation stochastic approximation (SPSA). SPSA avoids the need for detailed modeling of the sensor response by simply relying on observed responses as obtained by limited experimentation with test sensor configurations. We illustrate the approach with the optimal placement of acoustic sensors for signal detection in structures. This includes both a computer simulation study for an aluminum plate, and real...
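    The SPSA procedure described in the abstract can be sketched as follows; the quadratic "sensor response" surrogate and all gain constants are illustrative assumptions, not the paper's setup.

```python
import random

def spsa_minimize(loss, theta, a=0.1, c=0.1, iters=500, seed=0):
    """Simultaneous perturbation stochastic approximation (SPSA): the
    gradient is estimated from only two loss evaluations per iteration,
    regardless of the dimension of theta."""
    rng = random.Random(seed)
    theta = list(theta)
    for k in range(1, iters + 1):
        ak = a / k ** 0.602                  # standard SPSA gain decay
        ck = c / k ** 0.101
        delta = [rng.choice((-1.0, 1.0)) for _ in theta]   # Bernoulli +/-1
        plus = [t + ck * d for t, d in zip(theta, delta)]
        minus = [t - ck * d for t, d in zip(theta, delta)]
        g = (loss(plus) - loss(minus)) / (2.0 * ck)
        theta = [t - ak * g / d for t, d in zip(theta, delta)]
    return theta

# Toy surrogate for an overall sensor-response criterion: a quadratic
# with its optimum at (1, -2).
est = spsa_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                    [0.0, 0.0])
```

    The two-evaluation gradient estimate is what lets SPSA treat the sensor response as a black box, exactly as the abstract describes.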

  11. Complex energy system management using optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bridgeman, Stuart; Hurdowar-Castro, Diana; Allen, Rick; Olason, Tryggvi; Welt, Francois

    2010-09-15

    Modern energy systems are often very complex with respect to the mix of generation sources, energy storage, transmission, and avenues to market. Historically, power was provided by government organizations to load centers, and pricing was set in a regulatory manner. In recent years, this process has been displaced by the independent system operator (ISO). This complexity makes the operation of these systems very difficult, since the components of the system are interdependent. Consequently, computer-based large-scale simulation and optimization methods, such as Decision Support Systems (DSS), are now being used. This paper discusses the application of a DSS to operations and planning systems.

  12. Multiphysics simulation electromechanical system applications and optimization

    CERN Document Server

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  13. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling; previous studies were mainly related to code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation, practicability and so on. In this report, discussions are made about the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of a complementary cumulative distribution function can remain relatively stable across different trials of the probabilistic consequence assessment code. (author)
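    The "optimum allocation" principle named in the abstract can be illustrated with Neyman allocation, which samples each stratum in proportion to its size times its variability; the strata below are hypothetical, not OSCAAR data.

```python
def neyman_allocation(n, strata):
    """Optimum (Neyman) allocation: sample stratum h in proportion to
    N_h * sigma_h, its population size times its standard deviation.
    Illustrative of the principle, not the OSCAAR implementation."""
    weights = [N * sigma for N, sigma in strata]
    total = sum(weights)
    return [round(n * w / total) for w in weights]

# Hypothetical strata of meteorological sequences: a small but highly
# variable 'adverse' stratum receives a disproportionate sample share.
alloc = neyman_allocation(100, [(5000, 1.0), (1000, 2.0), (50, 10.0)])
```

    This is how a subset of only a few dozen adverse sequences can still dominate the tail of the consequence distribution: the allocation keeps sampling it despite its small population share.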

  14. Topology optimization of nano-photonic systems

    DEFF Research Database (Denmark)

    Elesin, Yuriy; Wang, Fengwen; Andkjær, Jacob Anders

    2012-01-01

    We describe recent developments within nano-photonic systems design based on topology optimization. Applications include linear and non-linear optical waveguides, slow-light waveguides, as well as all-dielectric cloaks that minimize scattering or back-scattering from hard obstacles.

  15. Reward optimization of a repairable system

    Energy Technology Data Exchange (ETDEWEB)

    Castro, I.T. [Departamento de Matematicas, Facultad de Veterinaria, Universidad de Extremadura, Avenida de la Universidad, s/n. 10071 Caceres (Spain)]. E-mail: inmatorres@unex.es; Perez-Ocon, R. [Departamento de Estadistica e Investigacion Operativa, Facultad de Ciencias, Universidad de Granada, Avenida de Severo Ochoa, s/n. 18071 Granada (Spain)]. E-mail: rperezo@ugr.es

    2006-03-15

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but they lead to replacement after a fixed number of repairs. Operating and repair times follow phase-type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs which maximizes the long-run average reward per unit time. This optimal number is obtained by efficient numerical procedures.
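    The optimization described above can be sketched with a renewal-reward argument; plain means, the geometric ratios, and all cost constants below are illustrative assumptions standing in for the paper's phase-type distributions.

```python
def average_reward(N, mu=100.0, a=1.2, nu=5.0, b=1.1,
                   reward_rate=1.0, repair_cost_rate=2.0, replace_cost=50.0):
    """Long-run average reward per unit time when the system is replaced
    after N repairs (renewal-reward argument). Operating times form a
    decreasing geometric process with ratio a and repair times an
    increasing one with ratio b; simple means stand in for the paper's
    phase-type laws."""
    up = sum(mu / a ** k for k in range(N + 1))    # N + 1 operating periods
    down = sum(nu * b ** k for k in range(N))      # N repair periods
    return (reward_rate * up - repair_cost_rate * down
            - replace_cost) / (up + down)

# Exhaustive search over replacement policies: the reward first rises
# with N (the replacement cost is amortized) and then falls as repairs
# get longer and uptime shrinks, so an interior optimum exists.
best_N = max(range(50), key=average_reward)
```

    With these constants the optimum is an interior policy (replace after 3 repairs), illustrating the trade-off the paper solves exactly.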

  16. Reward optimization of a repairable system

    International Nuclear Information System (INIS)

    Castro, I.T.; Perez-Ocon, R.

    2006-01-01

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but they lead to replacement after a fixed number of repairs. Operating and repair times follow phase-type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs which maximizes the long-run average reward per unit time. This optimal number is obtained by efficient numerical procedures.

  17. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  18. SWAT3.1 - the integrated burnup code system driving continuous energy Monte Carlo codes MVP and MCNP

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Takada, Tomoyuki; Ryufuku, Susumu; Okuno, Hiroshi; Murazaki, Minoru; Ohkubo, Kiyoshi

    2009-05-01

    Integrated burnup calculation code system SWAT is a system that combines the neutronics calculation code SRAC, which is widely used in Japan, and the point burnup calculation code ORIGEN2. It has been used to evaluate the composition of the uranium, plutonium, minor actinides and fission products in spent nuclear fuel. Based on this idea, the integrated burnup calculation code system SWAT3.1 was developed by combining the continuous energy Monte Carlo codes MVP and MCNP with ORIGEN2. This enables us to treat arbitrary fuel geometries and to generate the effective cross section data to be used in the burnup calculation with few approximations. This report describes the outline, input data instructions and several examples of the calculation. (author)

  19. Application of the OPTIMUS Code to the Neutral Beam Injection System of TJ-II

    International Nuclear Information System (INIS)

    Fuentes, C.; Liniers, M.; Guasp, J.

    1998-01-01

    The various loss processes affecting a neutral beam, from its birth in the ion source until its entry into the fusion machine, depend on the residual gas pressure distribution inside the injector. The OPTIMUS code analyzes those losses and calculates the pressure distribution inside an injector with a specific geometry. Adapting the injector to TJ-II has not required important design changes; only the operating range of the gas flow and the pumping speed have been modified. The calculations show that the gas flows required for optimal operation of the system need an independent pumping system for the calorimeter box with a pumping speed of 120000 l/s. The system efficiency is not affected by a hypothetical beaming effect, and it is also found that with proper conditioning of the injector walls, so that the absorption coefficients do not greatly exceed unity, the injector operation remains optimal. (Author) 8 refs

  20. Multiple Description Coding for Closed Loop Systems over Erasure Channels

    DEFF Research Database (Denmark)

    Østergaard, Jan; Quevedo, Daniel

    2013-01-01

    In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet... by showing that the system can be cast as a Markov jump linear system.
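    The combination of quantization with robust (multiple description) source coding can be illustrated by the textbook two-description scheme of odd/even sample splitting; this is a generic sketch under that assumption, not the paper's coder design.

```python
def mdc_encode(x):
    """Split a sample stream into two descriptions (even-index and
    odd-index samples), a textbook multiple-description scheme."""
    return x[0::2], x[1::2]

def mdc_decode(d0, d1):
    """Reconstruct from both descriptions, or interpolate the missing
    samples when one description was erased by the channel."""
    if d0 is not None and d1 is not None:
        out = []
        for a, b in zip(d0, d1):
            out += [a, b]                   # interleave both descriptions
        return out + list(d0[len(d1):])
    d = d0 if d0 is not None else d1
    out = []
    for i, v in enumerate(d):
        nxt = d[i + 1] if i + 1 < len(d) else v
        out += [v, (v + nxt) / 2.0]         # linear interpolation of gaps
    return out

x = [0, 1, 2, 3, 4, 5]
d0, d1 = mdc_encode(x)
```

    Each description alone yields a coarse but usable reconstruction, which is what keeps the control loop alive during single-channel erasures.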

  1. Direct Fuel Injector Power Drive System Optimization

    Science.gov (United States)

    2014-04-01

    solenoid coil to create a magnetic field in the stator. Then, the stator pulls the pintle to open the injector nozzle. This pintle movement occurs when the ... that typically deal with power strategies to the injector solenoid coil. Numerical simulation codes for diesel injection systems were developed by ... Laboratory) for providing the JP-8 test fuel. REFERENCES 1. Digesu, P. and Laforgia, D., "Diesel electro-injector: A numerical simulation code". Journal of

  2. Adaptive stimulus optimization for sensory systems neuroscience.

    Science.gov (United States)

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.
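    The classical optimal-stimulus paradigm reviewed above can be sketched as a greedy adaptive search that keeps any stimulus perturbation that raises the measured response; the Gaussian tuning curve and all constants are hypothetical stand-ins for a real neuron.

```python
import math
import random

def optimize_stimulus(response, s0, step=0.5, iters=200, seed=1):
    """Greedy adaptive stimulus search: perturb the current stimulus and
    keep the perturbation whenever the measured response increases.
    `response` stands in for an on-line firing-rate measurement."""
    rng = random.Random(seed)
    s, best = s0, response(s0)
    for _ in range(iters):
        cand = s + rng.uniform(-step, step)   # propose a nearby stimulus
        r = response(cand)
        if r > best:                          # keep only improvements
            s, best = cand, r
    return s

# Hypothetical Gaussian tuning curve peaking at stimulus value 3.0.
peak = optimize_stimulus(lambda s: math.exp(-(s - 3.0) ** 2), 0.0)
```

    Real adaptive-stimulus methods add the refinements the review discusses (stimulus-space constraints, firing-rate adaptation), but the closed measure-propose-accept loop is the same.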

  3. BER performance comparison of optical CDMA systems with/without turbo codes

    Science.gov (United States)

    Kulkarni, Muralidhar; Chauhan, Vijender S.; Dutta, Yashpal; Sinha, Ravindra K.

    2002-08-01

    In this paper, we have analyzed and simulated the BER performance of a turbo-coded optical code-division multiple-access (TC-OCDMA) system. A performance comparison has been made between uncoded OCDMA and TC-OCDMA systems employing various OCDMA address codes (optical orthogonal codes (OOCs), Generalized Multiwavelength Prime codes (GMWPCs), and Generalized Multiwavelength Reed-Solomon codes (GMWRSCs)). The BER performance of TC-OCDMA systems has been analyzed and simulated by varying the code weight of the address code employed by the system. From the simulation results, it is observed that TC-OCDMA systems can employ lower-weight address codes while achieving BER performance equivalent to uncoded systems employing higher-weight address codes, for a fixed number of active users.
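    For context, the dependence of uncoded OCDMA BER on code weight can be sketched with the classical chip-synchronous bound for optical orthogonal codes; the code length, weight, and user count below are illustrative, not the paper's values.

```python
from math import comb

def ooc_ber(w, F, K):
    """Classical chip-synchronous bound on uncoded OOC OCDMA BER: each of
    the K - 1 interferers hits the desired code with probability
    q = w**2 / (2F), and a bit error needs at least w hits on the
    threshold detector."""
    q = w * w / (2.0 * F)
    return 0.5 * sum(comb(K - 1, i) * q ** i * (1.0 - q) ** (K - 1 - i)
                     for i in range(w, K))

# Heavier-weight codes lower the uncoded BER; this is the margin that
# turbo coding recovers when lighter address codes are used instead.
b3, b5 = ooc_ber(3, 1000, 10), ooc_ber(5, 1000, 10)
```

    The monotone drop in BER with weight is exactly the effect the paper trades against turbo-code gain.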

  4. Space-Frequency Block Code with Matched Rotation for MIMO-OFDM System with Limited Feedback

    Directory of Open Access Journals (Sweden)

    Thushara D. Abhayapala

    2009-01-01

    Full Text Available This paper presents a novel matched rotation precoding (MRP) scheme to design a rate-one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate-one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for an arbitrary number of antennas, subcarrier intervals, and subcarrier groupings, with limited channel knowledge required by the transmit antennas. The optimization process of the rate-one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively for some cases. The multirate MRP has a more complex optimization process, but it has better spectral efficiency and provides a relatively smooth balance between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss for specific propagation scenarios, always improves the system performance, and demonstrates flexible performance with large performance gain. The proposed SFBCs with MRP therefore demonstrate the flexibility and feasibility that make them well suited to a practical MIMO-OFDM system with dynamic parameters.

  5. Optimization and Control of Electric Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard C. [Univ. of Wisconsin, Madison, WI (United States); Molzahn, Daniel K. [Univ. of Wisconsin, Madison, WI (United States)

    2014-10-17

    The analysis and optimization needs for planning and operation of the electric power system are challenging due to the scale and the form of model representations. The connected network spans the continent and the mathematical models are inherently nonlinear. Traditionally, computational limits have necessitated the use of very simplified models for grid analysis, and this has resulted in either less secure operation, or less efficient operation, or both. The research conducted in this project advances techniques for power system optimization problems that will enhance reliable and efficient operation. The results of this work appear in numerous publications and address different application problems include optimal power flow (OPF), unit commitment, demand response, reliability margins, planning, transmission expansion, as well as general tools and algorithms.
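    A minimal instance of the economics inside the OPF problems mentioned above is single-bus economic dispatch with quadratic generator costs, solvable by bisection on the system marginal price; the generator data here are illustrative, not from the project.

```python
def economic_dispatch(costs, demand, iters=60):
    """Single-bus economic dispatch with quadratic generator costs
    a*P**2 + b*P: bisect on the system marginal price until the
    equal-incremental-cost outputs meet demand (lossless toy model,
    a far cry from full nonlinear OPF)."""
    lo, hi = 0.0, 1000.0
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        # Each unit's optimum satisfies 2*a*P + b = lam -> P = (lam - b)/(2a)
        P = [max(0.0, (lam - b) / (2.0 * a)) for a, b in costs]
        if sum(P) < demand:
            lo = lam          # price too low: units under-produce
        else:
            hi = lam
    return P

# Two illustrative units; at the common marginal price of 16 $/MWh the
# dispatch splits a 500 MW demand as 300 MW and 200 MW.
P = economic_dispatch([(0.01, 10.0), (0.02, 8.0)], demand=500.0)
```

    The network constraints and nonlinear power-flow equations that make real OPF hard are precisely what this toy model omits.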

  6. Developing Optimal Procedure of Emergency Outside Cooling Water Injection for APR1400 Extended SBO Scenario Using MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Jong Rok; Oh, Seung Jong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    In this study, we examined optimum operator actions to mitigate an extended SBO using the MARS code. In particular, this paper focuses on analyzing the outside core cooling water injection scenario, with the aim of developing an optimal extended SBO procedure. Supplying outside emergency cooling water is the key feature of the flexible strategy in an extended SBO situation. An optimum strategy to maintain core cooling is developed for a typical extended SBO. The MARS APR1400 best-estimate model was used to find the optimal procedure, and the effect of RCP seal leakage was given particular consideration. The recent Fukushima accident shows the importance of mitigation capability against extended SBO scenarios. In Korea, all nuclear power plants have incorporated various measures against Fukushima-like events. For the APR1400 NPP, outside connectors are installed to inject cooling water using fire trucks or portable pumps. Using these connectors, outside cooling water can be provided to the reactor, steam generators (SG), containment spray system, and spent fuel pool. In the U.S., a similar approach is chosen to provide a diverse and flexible means to prevent fuel damage (core and SFP) in external event conditions resulting in extended loss of AC power and loss of the ultimate heat sink. Hence, the hardware necessary to cope with an extended SBO is already available for APR1400. However, considering the complex and stressful conditions encountered by operators during an extended SBO, it is important to develop guidelines/procedures to best cope with the event.

  7. Method of laser beam coding for control systems

    Science.gov (United States)

    Pałys, Tomasz; Arciuch, Artur; Walczak, Andrzej; Murawski, Krzysztof

    2017-08-01

    The article presents a method of encoding a laser beam for control systems. The experiments were performed using a red laser source with a wavelength of λ = 650 nm and a power of P ≈ 3 mW. The aim of the study was to develop methods of modulation and demodulation of the laser beam. The paper also reports results determining the effect of selected camera parameters, such as image resolution and the number of frames per second, on the result of demodulating the optical signal. The experiments showed that the adopted coding method provides sufficient information encoded in a single laser beam (36 codes with a decoding effectiveness of 99.9%).
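    A coding scheme of the kind described can be sketched as on/off pulse frames demodulated by majority voting across camera captures; the bit layout, thresholds, and noise values below are assumptions for illustration, not the authors' scheme.

```python
def modulate(code, bits=6):
    """Encode a code index as an on/off pulse frame, least significant
    bit first (the bit layout is an assumption, not the paper's)."""
    return [(code >> i) & 1 for i in range(bits)]

def demodulate(samples, frames=3, threshold=0.5):
    """Majority-vote each bit position over several captured camera
    frames to tolerate noisy intensity readings, then reassemble the
    code index from the winning bits."""
    bits = len(samples[0])
    votes = [sum(f[i] > threshold for f in samples) for i in range(bits)]
    return sum(1 << i for i, v in enumerate(votes) if v > frames / 2.0)

frame = modulate(36)
# Three noisy camera captures of the same frame (additive errors).
errors = ([0.1, 0, 0, 0, 0, 0], [0, 0, -0.2, 0, 0, 0], [0, 0, 0, 0, 0.3, 0])
noisy = [[b + e for b, e in zip(frame, err)] for err in errors]
decoded = demodulate(noisy)
```

    Averaging over frames is one way the camera's frame rate trades against decoding effectiveness, the dependence the experiments measured.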

  8. Security Concerns and Countermeasures in Network Coding Based Communications Systems

    DEFF Research Database (Denmark)

    Talooki, Vahid; Bassoli, Riccardo; Roetter, Daniel Enrique Lucani

    2015-01-01

    This survey paper shows the state of the art in security mechanisms, where a deep review of the current research and the status of this topic is carried out. We start by introducing network coding and its variety of applications in enhancing current traditional networks. In particular, we analyze two key protocol types, namely, state-aware and stateless protocols, specifying the benefits and disadvantages of each one of them. We also present the key security assumptions of network coding (NC) systems as well as a detailed analysis of the security goals and threats, both passive and active. This paper also presents a detailed taxonomy and a timeline of the different NC security mechanisms and schemes reported in the literature. Current proposed security mechanisms and schemes for NC in the literature are classified later. Finally a timeline of these mechanisms and schemes is presented.

  9. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. Seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing loop. The statistical treatment of coded data fed into the seal via the adapter box guarantees an extremely high degree of access reliability. The seal can store the data of two unauthorized seal openings together with the time and duration of each opening. 2. The adapter box can be used for input, or input and output, of data indicating the seal integrity. 3. The simulation programme is located in the computing center of the supervisory authority and makes it possible to determine the date and time of an opening by decoding the seal memory data. (orig./WB) [de

  10. Alignment effects on a neutron imaging system using coded apertures

    International Nuclear Information System (INIS)

    Thfoin, Isabelle; Landoas, Olivier; Caillaud, Tony; Vincent, Maxime; Bourgade, Jean-Luc; Rosse, Bertrand; Disdier, Laurent; Sangster, Thomas C.; Glebov, Vladimir Yu.; Pien, Greg; Armstrong, William

    2010-01-01

    A high resolution neutron imaging system is being developed and tested on the OMEGA laser facility for inertial confinement fusion experiments. This diagnostic uses a coded imaging technique with a penumbral or an annular aperture. The sensitivity of these techniques to misalignment was demonstrated with both experiments and simulations. Results obtained during OMEGA shots are in good agreement with calculations performed with the Monte Carlo code GEANT4. Both techniques are sensitive to the relative position of the source in the field of view, but the penumbral imaging technique proves to be less sensitive to misalignment than the annular one. These results show the necessity of developing a neutron imaging diagnostic for megajoule-class lasers that takes the alignment capabilities of such facilities into account.

  11. Nexus: A modular workflow management system for quantum simulation codes

    Science.gov (United States)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.
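    The dependency-ordered job management described above can be sketched as a tiny runner that executes each job only after its prerequisites; this imitates the idea only, and the names and API are hypothetical, not the real Nexus interface.

```python
class Workflow:
    """Tiny dependency-ordered job runner in the spirit of workflow
    managers such as Nexus (the real Nexus API differs)."""

    def __init__(self):
        self.jobs, self.deps = {}, {}

    def add(self, name, func, deps=()):
        self.jobs[name], self.deps[name] = func, list(deps)

    def run(self):
        done, order = set(), []

        def visit(name):
            if name in done:
                return
            for dep in self.deps[name]:
                visit(dep)              # run prerequisites first
            self.jobs[name]()
            done.add(name)
            order.append(name)

        for name in self.jobs:
            visit(name)
        return order

# Hypothetical three-step simulation chain: DFT -> QMC -> analysis.
log = []
wf = Workflow()
wf.add("dft", lambda: log.append("dft"))
wf.add("qmc", lambda: log.append("qmc"), deps=["dft"])
wf.add("analysis", lambda: log.append("analysis"), deps=["qmc"])
order = wf.run()
```

    Automating this ordering is exactly the source of human error, forgetting to rerun a downstream step, that the abstract says workflow systems remove.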

  12. Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes

    Science.gov (United States)

    Chan, V. S.; Costley, A. E.; Wan, B. N.; Garofalo, A. M.; Leuer, J. A.

    2015-02-01

    This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, General Atomics System Code (GASC) and Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance, however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher BT and Ip can result in a high gain Qfus ˜ 12, Pfus ˜ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with tritium breeding ratio (TBR) >1 as a target.

  13. Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes

    International Nuclear Information System (INIS)

    Chan, V.S.; Garofalo, A.M.; Leuer, J.A.; Costley, A.E.; Wan, B.N.

    2015-01-01

    This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, General Atomics System Code (GASC) and Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance, however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher BT and Ip can result in a high gain Qfus ∼ 12, Pfus ∼ 1 GW machine approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission and advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with tritium breeding ratio (TBR) >1 as a target. (paper)

  14. System and economic optimization problems of NPPs and its ideology

    International Nuclear Information System (INIS)

    Klimenko, A.V.; Mironovich, V.L.

    2016-01-01

    The iterative scheme for optimization of the system of links of the nuclear fuel and energy complex (NFEC) is presented in the paper. Problems of the system optimization of NFEC links, treated as a functional of NPP optimization, are formulated and investigated [ru

  15. SEJITS: embedded specializers to turn patterns-based designs into optimized parallel code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    All software should be parallel software. This is natural result of the transition to a many core world. For a small fraction of the world's programmers (efficiency programmers), this is not a problem. They enjoy mapping algorithms onto the details of a particular system and are well served by low level languages and OpenMP, MPI, or OpenCL. Most programmers, however, are "domain specialists" who write code. They are too busy working in their domain of choice (such as physics) to master the intricacies of each computer they use. How do we make these programmers productive without giving up performance? We have been working with a team at UC Berkeley's ParLab to address this problem. The key is a clear software architecture expressed in terms of design patterns that exposes the concurrency in a problem. The resulting code is written using a patterns-based framework within a high level, productivity language (such as Python). Then a separate system is used by a small group o...

  16. OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation

    Directory of Open Access Journals (Sweden)

    Mehdi Abolfazli

    2013-04-01

    Full Text Available In this paper a suitable mathematical model of the two-terminal HVDC system is provided for optimal power flow (OPF) and OPF-based optimal location, such as a power injection model. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based optimal location algorithm for two systems of two-terminal HVDC that minimizes the total fuel cost and active power losses as its objective function. The optimization framework is modeled as non-linear programming (NLP) and solved by the Matlab and GAMS software. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of the two systems of two-terminal HVDC to improve power system operation. Furthermore, the two systems of two-terminal HVDC are compared with the PST and OUPFC in power system operation from economic and technical aspects.

  17. Development of a domestically-made system code

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    According to lessons learned from the Fukushima-Daiichi NPP accidents, a new safety standard based on state-of-the-art findings has been established by the Japanese Nuclear Regulation Authority (NRA) and will soon come into force in Japan. To ensure a precise response to this movement from a technological point of view, safety regulation requires the development of a new system code with much smaller uncertainty and reinforced simulation capability, applicable even to beyond-DBAs (BDBAs), and capable of close coupling to a newly developed severe accident code. Accordingly, development of a new domestically-made system code that incorporates three-dimensional, three-or-more-fluid thermal-hydraulics in tandem with three-dimensional neutronics was started in 2012. In 2012, two branches of development, the 'main body' and the advanced features, were pursued in parallel for development efficiency. The main body was started from scratch, and the following activities were therefore performed: 1) development and determination of key principles and methodologies to realize a flexible, extensible and robust platform, 2) determination of the requirements definition, 3) start of basic program design and coding, and 4) start of development of a prototypical GUI-based pre/post processor. As for the advanced features, the following activities were performed: 1) development of Phenomena Identification and Ranking Tables (PIRTs) and a model capability matrix covering normal operation through BDBAs, to address the requirements definition for advanced modeling, 2) development of a detailed action plan for modification of field equations, numerical schemes and solvers, and 3) start of program development for field equations with an interfacial area concentration transport equation, a robust solver for condensation-induced water hammer phenomena, and a versatile Newton-Raphson solver. (author)
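    The abstract does not detail the "versatile Newton-Raphson solver"; as a generic sketch, a damped-free Newton iteration for a small nonlinear system looks like the following (the toy two-equation system is purely illustrative):

```python
import numpy as np

def newton_raphson(f, jac, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration for F(x) = 0 (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x
        # Solve J(x) dx = -F(x) for the Newton step, then update.
        dx = np.linalg.solve(jac(x), -r)
        x = x + dx
    raise RuntimeError("Newton-Raphson did not converge")

# Toy system: x**2 + y**2 = 4 and x * y = 1
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_raphson(f, jac, [2.0, 0.3])
```

    A production thermal-hydraulics solver wraps the same core loop with step damping, Jacobian reuse, and sparse linear algebra.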

  18. Optimal test intervals for shutdown systems for the Cernavoda nuclear power station

    International Nuclear Information System (INIS)

    Negut, Gh.; Laslau, F.

    1993-01-01

    The Cernavoda nuclear power station required a complete PSA study. As part of this study, an important goal for enhancing the effectiveness of plant operation is to establish optimal test intervals for the important engineered safety systems. The paper briefly presents the current methods for optimizing test intervals. Vesely's method was used to establish optimal test intervals, and the FRANTIC code was used to survey the influence of the test intervals on system availability. The applications were carried out on Shutdown System no. 1, a shutdown system provided with solid rods, and on Shutdown System no. 2, provided with poison injection. The shutdown systems receive nine fully independent scram signals that dictate the test interval. Fault trees for both safety systems were developed; for the fault tree solutions, an original code developed in our Institute was used. The results, intended to be implemented in the technical specifications for testing and operation of the Cernavoda NPS, are presented.
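    The trade-off behind Vesely-style test-interval optimization can be reproduced with the classic standby-system unavailability model: testing more often catches latent failures sooner but adds test downtime. The failure rate and test duration below are assumed values, not Cernavoda data:

```python
from math import sqrt

# Assumed figures for a standby safety system:
lambda_ = 1e-5   # standby failure rate, 1/h
tau = 1.5        # downtime per test, h

def mean_unavailability(T):
    """Classic standby model: q(T) = lambda*T/2 + tau/T for test interval T."""
    return lambda_ * T / 2.0 + tau / T

# Setting dq/dT = 0 gives the optimal interval T* = sqrt(2*tau/lambda).
T_opt = sqrt(2.0 * tau / lambda_)
print(T_opt, mean_unavailability(T_opt))
```

    At the optimum the two contributions are equal; codes like FRANTIC refine this picture with time-dependent unavailability profiles and multiple tested components.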

  19. Hybrid Techniques for Optimizing Complex Systems

    Science.gov (United States)

    2009-12-01

    relay placement problem, we modeled the network as a mechanical system with springs and a viscous damper, a widely used approach for solving optimization... fundamental mathematical tools in many branches of physics, such as fluid and solid mechanics, and general relativity [108]. More recently, several...
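    The spring-and-damper analogy in the snippet above can be sketched in one dimension: treat relays as nodes joined by springs to two fixed terminals and integrate damped dynamics until the chain settles into its minimum-energy (evenly spaced) configuration. This is an illustrative reconstruction under assumed parameters, not the report's actual model:

```python
import numpy as np

def relax_relays(n, left=0.0, right=10.0, k=1.0, damping=0.8,
                 dt=0.1, steps=2000):
    """Place n relays between two fixed terminals via a damped spring chain."""
    pos = np.random.default_rng(0).uniform(left, right, n)
    pos.sort()
    vel = np.zeros(n)
    for _ in range(steps):
        chain = np.concatenate(([left], pos, [right]))
        # Net spring force on each relay from its two neighbors.
        force = k * (chain[2:] - 2 * chain[1:-1] + chain[:-2])
        vel = damping * (vel + dt * force)   # viscous damping
        pos = pos + dt * vel
    return pos

print(relax_relays(3))  # settles toward [2.5, 5.0, 7.5]
```

    The viscous damping term is what turns the oscillating mechanical system into a convergent optimizer: energy is dissipated until only the equilibrium remains.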

  20. Application of the code SLAC to the study of Ion Extraction Systems in Neutral Injectors

    International Nuclear Information System (INIS)

    Garcia, M.; Liniers, M.; Guasp, J.

    1997-01-01

    In this study, different extraction geometries for intense ion beams have been analyzed with the code SLAC, in view of its possible application to the neutral injectors of TJ-II. With this aim, we have introduced several modifications in the code in order to correctly simulate the transition between the ion source plasma and the extraction region, which has a great impact on the beam optics. These modifications include the introduction of a population of Boltzmann electrons in the transition region, and the implementation of an option to simulate the thermal velocity of the ions in the source. We have found better agreement between the results obtained with the new version of the code and the experimental data in two well-known systems. With this new version of the code, two different studies have been carried out: first, an optimization of the ATF injectors' extraction system for use on TJ-II, leading to an optimum value of the gap in the energy range 30-40 keV, and second, a systematic study of extraction geometries at 40 keV. As a result of this second study, we have found the combinations of parameters that can be used under different working conditions (e.g. different pulse lengths), leading to acceptable values of the beam divergence. (Author)
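    The Boltzmann-electron modification rests on the standard relation n_e = n_0 exp(e*phi / (k_B * T_e)); with the potential in volts and the electron temperature in eV, the exponent reduces to phi/T_e. A minimal numeric sketch, with assumed density and temperature values (not TJ-II parameters), is:

```python
import numpy as np

def boltzmann_electron_density(phi_V, n0, Te_eV):
    """Electron density at potential phi_V (volts, negative below plasma
    potential) for electron temperature Te_eV (eV): n0 * exp(phi/Te)."""
    return n0 * np.exp(phi_V / Te_eV)

# Assumed illustrative values: source density 1e18 m^-3, T_e = 5 eV.
n0, Te = 1e18, 5.0
# One electron temperature below plasma potential -> density drops by 1/e.
print(boltzmann_electron_density(-Te, n0, Te) / n0)
```

    In a beam-extraction code this relation replaces tracking the fast source electrons explicitly, which is why it matters for the plasma-to-extraction transition region.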