WorldWideScience

Sample records for system optimization codes

  1. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this package provides...

  2. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    Science.gov (United States)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi-diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis of bit error rate (BER), signal-to-noise ratio (SNR), and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10⁻⁹, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both uplink and downlink transmission.
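
    The SNR-to-BER relation referred to above is typically evaluated with a Gaussian approximation. The sketch below (illustrative only; the SNR values and the exact noise model are assumptions, not results from this record) shows the commonly used form BER = 0.5·erfc(sqrt(SNR/8)).

```python
# Illustrative only: Gaussian-approximation BER vs. SNR relation commonly
# used in SAC-OCDMA performance analyses (not this paper's exact model).
from math import erfc, sqrt

def ber_from_snr(snr_linear):
    """BER = 0.5 * erfc(sqrt(SNR / 8)) under the usual Gaussian approximation."""
    return 0.5 * erfc(sqrt(snr_linear / 8.0))

# Assumed example SNR values (in dB) just to show the trend.
for snr_db in (15, 18, 21, 24):
    snr = 10 ** (snr_db / 10.0)
    print(f"SNR = {snr_db:2d} dB -> BER ~ {ber_from_snr(snr):.2e}")
```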

  3. Two-Layer Coding Rate Optimization in Relay-Aided Systems

    DEFF Research Database (Denmark)

    Sun, Fan

    2011-01-01

    A two-layer coding scheme is proposed, where physical layer channel coding is utilized within each packet for error correction and random network coding is applied on top of channel coding for network error control. There is a natural tradeoff between the physical layer coding rate and the network coding rate given ... requirement. Numerical results are also provided to show the optimized physical layer coding and network coding rate pairs in different system scenarios.

  4. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    Energy Technology Data Exchange (ETDEWEB)

    Gentry, S.M. [Sandia National Labs., Albuquerque, NM (United States). Optoelectronic Design Dept.]

    1994-09-01

    Minimum detectable irradiance levels for a diffraction grating based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture technique was investigated as a possible optimized solution but was not shown to produce a system with a better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double-slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While a concept was not found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for the analysis of a wide range of optoelectronic systems where the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.
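
    For context on why a Barker-coded aperture improves wavelength accuracy, the sketch below computes the aperiodic autocorrelation of the length-13 Barker sequence, whose sidelobes never exceed 1; the link to the sensor's diffraction pattern is the report's, and the code is only a generic illustration.

```python
# Generic illustration: aperiodic autocorrelation of the length-13 Barker code.
# Its 13:1 peak-to-sidelobe ratio is the property exploited in coded-aperture
# and pulse-compression designs.
barker13 = [+1, +1, +1, +1, +1, -1, -1, +1, +1, -1, +1, -1, +1]

def autocorrelation(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag)) for lag in range(n)]

print(autocorrelation(barker13))  # [13, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```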

  5. ETRANS: an energy transport system optimization code for distributed networks of solar collectors

    Energy Technology Data Exchange (ETDEWEB)

    Barnhart, J.S.

    1980-09-01

    The optimization code ETRANS was developed at the Pacific Northwest Laboratory to design and estimate the costs associated with energy transport systems for distributed fields of solar collectors. The code uses frequently cited layouts for dish and trough collectors and optimizes them on a section-by-section basis. The optimal section design is that combination of pipe diameter and insulation thickness that yields the minimum annualized system-resultant cost. Among the quantities included in the costing algorithm are (1) labor and materials costs associated with initial plant construction, (2) operating expenses due to daytime and nighttime heat losses, and (3) operating expenses due to pumping power requirements. Two preliminary series of simulations were conducted to exercise the code. The results indicate that transport system costs for both dish and trough collector fields increase with field size and receiver exit temperature. Furthermore, dish collector transport systems were found to be much more expensive to build and operate than trough transport systems. ETRANS itself is stable and fast-running and shows promise of being a highly effective tool for the analysis of distributed solar thermal systems.
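
    The section-by-section optimization described above amounts to a search over pipe diameter and insulation thickness for the lowest annualized cost. The sketch below mimics that structure with made-up candidate values and placeholder cost terms (capital, heat loss, pumping); it does not reproduce ETRANS's actual correlations.

```python
# Hypothetical sketch of ETRANS-style section optimization: pick the
# (pipe diameter, insulation thickness) pair with the lowest annualized cost.
# The candidate values and cost coefficients are placeholders.
import itertools

def annualized_cost(diameter_m, insulation_m):
    capital = 400.0 * diameter_m + 300.0 * insulation_m   # materials + labor
    heat_loss = 45.0 / (1.0 + 25.0 * insulation_m)        # day/night heat losses
    pumping = 1e-4 / diameter_m ** 5                      # friction head ~ 1/D^5
    return capital + heat_loss + pumping

diameters = [0.05, 0.08, 0.10, 0.15, 0.20]    # m
insulations = [0.02, 0.05, 0.08, 0.12]        # m

best = min(itertools.product(diameters, insulations),
           key=lambda pair: annualized_cost(*pair))
print("optimal (diameter, insulation):", best,
      "cost:", round(annualized_cost(*best), 2))
```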

  6. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  7. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    With the development of wireless technologies, mobile communication is applied more and more extensively in various walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access network technologies are widely used. Different users in this networked agent system may need multimedia data at different coding rates due to their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system in which various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data of different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, this algorithm can provide user nodes with data at differentiated coding rates and corresponding flow rates. On the other hand, it networks the data of different coding rates and the user nodes, which realizes the sharing of upload bandwidth among user nodes that require data at different coding rates. The study conducts mathematical modeling of the proposed algorithm and compares a system that adopts it with the existing system through simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload bandwidth utilization of user nodes and lower upload bandwidth consumption of the source node.

  8. Low-complexity BCH codes with optimized interleavers for DQPSK systems with laser phase noise

    DEFF Research Database (Denmark)

    Leong, Miu Yoong; Larsen, Knud J.; Jacobsen, Gunnar

    2017-01-01

    The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose–Chaudhuri–Hocquenghem (BCH) codes ... simulations. For a target post-FEC BER of 10⁻⁶, codes selected using our method result in BERs around 3× the target and achieve the target with around 0.2 dB extra signal-to-noise ratio.
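
    As a rough illustration of the kind of code-selection arithmetic involved (not the authors' method, which also models phase noise and interleaving), the sketch below estimates the post-FEC BER of a t-error-correcting BCH code of length n from a pre-FEC BER under an idealized i.i.d. error assumption.

```python
# Idealized sketch (i.i.d. channel errors, bounded-distance decoding):
# estimate the post-FEC BER of a t-error-correcting code of length n.
# Not the paper's method, which also accounts for laser phase noise.
from math import lgamma, exp, log

def log_binom_pmf(n, i, p):
    return (lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
            + i * log(p) + (n - i) * log(1 - p))

def post_fec_ber(n, t, p):
    """Approximate output BER: error patterns with more than t errors survive."""
    return sum(i * exp(log_binom_pmf(n, i, p)) for i in range(t + 1, n + 1)) / n

# Example: a hypothetical BCH(1023, t=8) code at an assumed pre-FEC BER of 1e-3.
print(f"{post_fec_ber(1023, 8, 1e-3):.2e}")
```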

  9. Manual and Fast C Code Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Fadle Abdulla

    2010-01-01

    Developing a high-performance application through code optimization places greater responsibility on the programmer. While most existing compilers attempt to optimize the program code automatically, manual techniques remain the predominant method for performing optimization. Deciding where to try to optimize code is difficult, especially for large, complex applications. For manual optimization, programmers can draw on their experience in writing the code and then use a software profiler to collect and analyze performance data from the code. In this work, we have gathered the practices that can be applied to improve the style of writing programs in the C language, and we present an implementation of manual code optimization using the Intel VTune profiler. The paper includes two case studies to illustrate our optimization of the Heap Sort and Factorial functions.

  10. Progress on DART code optimization

    International Nuclear Information System (INIS)

    Taboada, Horacio; Solis, Diego; Rest, Jeffrey

    1999-01-01

    This work describes the progress made on the design and development of a new optimized version of the DART code (DART-P), a mechanistic computer model for the performance calculation and assessment of aluminum dispersion fuel. It is part of a collaboration agreement between CNEA and ANL in the area of Low Enriched Uranium Advanced Fuels, carried out under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy, signed on October 16, 1997 between the US DOE and the National Atomic Energy Commission of the Argentine Republic. DART optimization is a biannual program; it has been operative since February 8, 1999 and has the following goals: 1. Design and develop a new DART calculation kernel for implementation within a parallel processing architecture. 2. Design and develop new user-friendly I/O routines to be resident on a Personal Computer (PC)/Workstation (WS) platform. 2.1. The new input interface will be designed and developed by means of a visual interface, able to guide the user in the construction of the problem to be analyzed with the aid of a new database (described in item 3, below). The new I/O interface will include input data checks in order to avoid corrupted input data. 2.2. The new output interface will be designed and developed by means of graphical tools, able to translate numeric data output into 'on line' graphic information. 3. Design and develop a new irradiated materials database, to be resident on the PC/WS platform, so as to facilitate the analysis of the behavior of different fuel and meat compositions with DART-P. Currently, a different version of DART is used for oxide, silicide, and advanced alloy fuels. 4. Develop rigorous general inspection algorithms in order to provide valuable DART-P benchmarks. 5. Design and develop new models, such as superplasticity, elastoplastic feedback, improved models for the calculation of fuel deformation and the evolution of the fuel microstructure for

  11. Optimal space communication techniques. [a discussion of delta modulation, pulse code modulation, and phase locked systems

    Science.gov (United States)

    Schilling, D. L.

    1975-01-01

    Encoding of video signals using adaptive delta modulation (DM) was investigated, along with the error correction of DM encoded signals corrupted by thermal noise. Conversion from pulse code modulation to delta modulation was studied; an expression for the signal-to-noise ratio of the DM signal was derived by employing linear, two-sample interpolation between sample points. A phase locked loop using a nonlinear processor in lieu of a loop filter is discussed.
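
    For readers unfamiliar with delta modulation, the sketch below shows a minimal non-adaptive DM encoder/decoder with a fixed step size; the adaptive step-size logic and the PCM-to-DM conversion studied in the report are not reproduced here.

```python
# Minimal non-adaptive delta modulation sketch: one bit per sample indicating
# whether the staircase approximation steps up or down. The adaptive step-size
# control investigated in the report is omitted.
import math

def dm_encode(samples, step=0.1):
    bits, approx = [], 0.0
    for x in samples:
        bit = 1 if x >= approx else 0
        approx += step if bit else -step
        bits.append(bit)
    return bits

def dm_decode(bits, step=0.1):
    out, approx = [], 0.0
    for bit in bits:
        approx += step if bit else -step
        out.append(approx)
    return out

signal = [math.sin(2 * math.pi * n / 40) for n in range(80)]
reconstructed = dm_decode(dm_encode(signal))
print("max error:", round(max(abs(a - b) for a, b in zip(signal, reconstructed)), 3))
```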

  12. NUSTRA - optimization code for nuclear reactor strategies

    International Nuclear Information System (INIS)

    Tusa, E.; Vira, J.

    1979-02-01

    A computer code is designed to determine the optimal reactor strategy corresponding to a given nuclear power program. As a novel feature, the code includes capabilities for explicit uncertainty resolution. After a short description of the calculation methods, this report gives the input instructions for the code. (author)

  13. Optimal coding-decoding for systems controlled via a communication channel

    Science.gov (United States)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. In contrast to previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for the different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  14. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization.

    Science.gov (United States)

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging, all in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously, at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. Another experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software code for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
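
    For orientation, back-projection (BP) reconstruction of the kind accelerated here sums, for each image pixel, the detector signals sampled at the time of flight from that pixel to each transducer element. The sketch below is a plain NumPy version of that idea with made-up geometry, not the authors' GPU-optimized implementation.

```python
# Plain delay-and-sum back-projection sketch (not the authors' optimized GPU
# code). signals[i, t] is the waveform of detector i; detector/pixel geometry
# and the sound speed are assumed inputs.
import numpy as np

def back_project(signals, det_xy, grid_xy, fs, c=1500.0):
    """Return one BP image value per grid point."""
    n_det, n_samp = signals.shape
    image = np.zeros(len(grid_xy))
    for i in range(n_det):
        dist = np.linalg.norm(grid_xy - det_xy[i], axis=1)       # pixel-detector distance
        idx = np.clip((dist / c * fs).astype(int), 0, n_samp - 1)
        image += signals[i, idx]                                 # sum at time of flight
    return image

# Tiny synthetic example: 4 detectors, 9 grid points, random data.
rng = np.random.default_rng(0)
img = back_project(rng.standard_normal((4, 1024)),
                   det_xy=np.array([[0.0, 0.0], [0.0, 0.02], [0.02, 0.0], [0.02, 0.02]]),
                   grid_xy=np.array([[x, y] for x in (0.005, 0.01, 0.015)
                                            for y in (0.005, 0.01, 0.015)]),
                   fs=40e6)
print(img.shape)
```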

  15. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling of each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  16. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when sufficient computing power is available. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast and parallel tracking code could be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.

  17. Development of a computer code system for selecting off-site protective action in radiological accidents based on the multiobjective optimization method

    International Nuclear Information System (INIS)

    Ishigami, Tsutomu; Oyama, Kazuo

    1989-09-01

    This report presents a new method to support the selection of off-site protective actions in nuclear reactor accidents, and provides a user's manual for a computer code system, PRASMA, developed using the method. The PRASMA code system gives several candidate sets of protective action zones for evacuation, sheltering and no action based on the multiobjective optimization method, which requires objective functions and decision variables. We have assigned the population risks of fatality, injury and cost as the objective functions, and the distances from the nuclear power plant characterizing the above three protective action zones as the decision variables. (author)
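
    As a generic illustration of the multiobjective selection step described above (not PRASMA's algorithm or data), the sketch below filters candidate protective-action plans, each scored by fatality risk, injury risk and cost, down to the non-dominated (Pareto-optimal) set.

```python
# Generic Pareto-filter sketch for multiobjective selection; the candidate
# plans and their (fatality, injury, cost) scores are made-up placeholders.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    return {name: obj for name, obj in candidates.items()
            if not any(dominates(other, obj)
                       for oname, other in candidates.items() if oname != name)}

plans = {  # (fatality risk, injury risk, cost) -- illustrative numbers only
    "evacuate 10 km": (0.01, 0.05, 9.0),
    "shelter 10 km":  (0.03, 0.04, 2.0),
    "no action":      (0.08, 0.10, 0.0),
    "evacuate 5 km":  (0.03, 0.06, 4.0),   # dominated by "shelter 10 km"
}
print(sorted(pareto_front(plans)))
```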

  18. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Based on coding theory, we introduce the jamming methods. We simulate the interference effect and the probability model in MATLAB. Based on the length of decoding time the adversary spends, we find the optimal formula and optimal coefficients using machine learning, and thereby obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the time taken over the decoding period of the laser seeker. Then, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve the performance of the interference, the model is simulated in MATLAB. We find the least number of pulse intervals that must be received in order to conclude the precise interval number of the laser pointer for m-sequence encoding. To find the shortest interval, we choose the greatest common divisor method. Then, combining this with the coding regularity found before, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and also increase the probability of interference.
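
    The greatest-common-divisor step mentioned above can be illustrated generically: if the observed gaps between received pulses are integer multiples of an unknown base interval, their GCD recovers that interval. The gap values below are placeholders, not data from the paper.

```python
# Generic illustration of the GCD method: recover the base pulse interval
# from observed inter-pulse gaps that are integer multiples of it.
from math import gcd
from functools import reduce

observed_gaps_us = [600, 900, 1500, 2100]   # microseconds, assumed multiples of the base
base_interval = reduce(gcd, observed_gaps_us)
print("estimated base pulse interval:", base_interval, "us")  # 300 us
```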

  19. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  20. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement efforts, and decoding robustness against noises from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.

  1. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    Energy Technology Data Exchange (ETDEWEB)

    Dellin, T.A.; Fish, M.J.; Yang, C.L.

    1981-08-01

    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.

  2. Statistical physics, optimization and source coding

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 64, Issue 6. Statistical physics, optimization and source coding. Riccardo Zecchina. Invited Talks: Topic 12 – Other applications of statistical physics (networks, traffic flows, algorithmic problems, econophysics, astrophysical applications, etc.).

  3. A code optimization package for REDUCE

    NARCIS (Netherlands)

    van Hulzen, J.A.; Hulshof, B.J.; Gates, B.L.; van Heerwaarden, M.C.

    1989-01-01

    A survey of the strategy behind and the facilities of a code optimization package for REDUCE is given. We avoid a detailed discussion of the different algorithms and concentrate on the user aspects of the package. Examples of straightforward and more advanced usage are shown.

  4. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  5. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals.

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation with a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented with synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the neuron population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of the ion channel random gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.
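
    The optimum described above can be pictured with a toy model (an illustration only, not the authors' bistable model): the reliability of AP generation saturates with the number of ion channels N while the metabolic cost grows roughly linearly, so the efficiency ratio peaks at a finite N.

```python
# Toy model only: reliability saturates with channel number N, energy cost
# grows linearly, so the efficiency ratio has an interior maximum. The
# functional forms and constants are illustrative assumptions.
import math

def reliability(n_channels, k=0.01):
    return 1.0 - math.exp(-k * n_channels)        # saturating detection probability

def energy_cost(n_channels, fixed=50.0, per_channel=1.0):
    return fixed + per_channel * n_channels       # resting + gating cost

def efficiency(n_channels):
    return reliability(n_channels) / energy_cost(n_channels)

best_n = max(range(1, 2001), key=efficiency)
print("optimal number of channels (toy model):", best_n)
```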

  6. Code aperture optimization for spectrally agile compressive imaging.

    Science.gov (United States)

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.

  7. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    Science.gov (United States)

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100 Gb/s WDM polarization-multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
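
    A simplified view of the control-plane adaptation loop is sketched below: a measured OSNR is mapped to one of the three LDPC code rates quoted in the abstract. The OSNR thresholds are assumed placeholders, not the values used in the experiment.

```python
# Simplified sketch of OSNR-driven code-rate adaptation. The code rates
# (0.8, 0.75, 0.7) come from the abstract; the OSNR thresholds are assumed
# placeholders, not the experiment's values.
def select_code_rate(osnr_db):
    if osnr_db >= 18.0:      # assumed threshold
        return 0.80
    if osnr_db >= 15.0:      # assumed threshold
        return 0.75
    return 0.70              # most robust code for the lowest OSNR

for measured in (20.1, 16.3, 13.8):
    print(f"OSNR {measured} dB -> LDPC rate {select_code_rate(measured)}")
```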

  8. Optimizing Extender Code for NCSX Analyses

    International Nuclear Information System (INIS)

    Richman, M.; Ethier, S.; Pomphrey, N.

    2008-01-01

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch
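
    The even-block work-distribution scheme mentioned above can be sketched generically as splitting N work items into contiguous, nearly equal blocks, one per rank; this is a plain illustration, not the Extender code itself.

```python
# Generic even-block work distribution: split n_items into contiguous blocks,
# one per MPI rank, with sizes differing by at most one. Illustration only.
def even_block(n_items, n_ranks, rank):
    base, extra = divmod(n_items, n_ranks)
    start = rank * base + min(rank, extra)
    stop = start + base + (1 if rank < extra else 0)
    return range(start, stop)

# Example: 10 items over 4 ranks -> blocks of sizes 3, 3, 2, 2.
for r in range(4):
    print(f"rank {r}: items {list(even_block(10, 4, r))}")
```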

  9. Optimizing Extender Code for NCSX Analyses

    Energy Technology Data Exchange (ETDEWEB)

    M. Richman, S. Ethier, and N. Pomphrey

    2008-01-22

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch

  10. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tab

  11. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text on algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...

  12. Efficient topology optimization in MATLAB using 88 lines of code

    DEFF Research Database (Denmark)

    Andreassen, Erik; Clausen, Anders; Schevenels, Mattias

    2011-01-01

    The paper presents an efficient 88 line MATLAB code for topology optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. The original code has been extended by a density filter, and a considerable improvement...

  13. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL]

    2016-04-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include: ENDF/B-VII.1 nuclear data libraries, CE and MG, with enhanced group structures; neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data; covariance data for fission product yields and decay constants; stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler; parallel calculations with KENO; problem-dependent temperature corrections for CE calculations; CE shielding and criticality accident alarm system analysis with MAVRIC; CE

  14. Design of Optimal Quincunx Filter Banks for Image Coding

    Directory of Open Access Journals (Sweden)

    Chen Yi

    2007-01-01

    Two new optimization-based methods are proposed for the design of high-performance quincunx filter banks for the application of image coding. These new techniques are used to build linear-phase finite-length-impulse-response (FIR) perfect-reconstruction (PR) systems with high coding gain, good frequency selectivity, and certain prescribed vanishing-moment properties. A parametrization of quincunx filter banks based on the lifting framework is employed to structurally impose the PR and linear-phase conditions. Then, the coding gain is maximized subject to a set of constraints on vanishing moments and frequency selectivity. Examples of filter banks designed using the newly proposed methods are presented and shown to be highly effective for image coding. In particular, our new optimal designs are shown to outperform three previously proposed quincunx filter banks in 72% to 95% of our experimental test cases. Moreover, in some limited cases, our optimal designs are even able to outperform the well-known (separable) 9/7 filter bank (from the JPEG-2000 standard).

  15. ESCADRE and ICARE code systems

    International Nuclear Information System (INIS)

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program is following two parallel approaches: the first one deals with "integral codes" which are designed to give immediate engineering answers; the second one follows a more mechanistic way in order to have the capability of detailed analysis of experiments, to get a better understanding of the scaling problem and to reach better confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermalhydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The "integral" ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment is given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  16. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on the double weight (DW) code family is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. A much better performance can be provided by using the EDW code compared to existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Theoretical analysis and simulation show that EDW gives much better performance than the Hadamard and MFH codes.

  17. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground based terrestrial power plants. Such plants would make a significant contribution toward a drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal the Next Generation Nuclear Plant Project (NGNP) has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas turbine driven electric power AC generators. However, the advantages of transmitting power in high voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternate heat source the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid-1960s, would offer much higher energy yields than current nuclear plants by using an inherently safe energy conversion scheme based on the Thorium → U233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and also to provide thermal energy for hydrogen (H2) production during "off peak" periods. This approach will both supply electric power by using environmentally clean nuclear heat which does not generate greenhouse gases, and also provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. This is

  18. System Code Models and Capabilities

    International Nuclear Information System (INIS)

    Bestion, D.

    2008-01-01

    System thermalhydraulic codes such as RELAP, TRACE, CATHARE or ATHLET are now commonly used for reactor transient simulations. The whole methodology of code development is described, including the derivation of the system of equations, the analysis of experimental data to obtain closure relations, and the validation process. The characteristics of the models are briefly presented, starting with the basic assumptions, the system of equations, and the derivation of closure relationships. Extensive work was devoted during the last three decades to the improvement and validation of these models, which resulted in some homogenisation of the different codes although they were developed separately. The so-called two-fluid model is the common basis of these codes, and it is shown how it can describe both thermal and mechanical nonequilibrium. A review of some important physical models illustrates the main capabilities and limitations of system codes. Attention is drawn to the role of flow regime maps, to the various methods for developing closure laws, and to the role of interfacial area and turbulence in interfacial and wall transfers. More details are given for interfacial friction laws and their relation to drift flux models. Prediction of choked flow and CCFL is also addressed. Based on some limitations of the present generation of codes, perspectives for the future are drawn.

  19. A Robust Cross Coding Scheme for OFDM Systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2010-01-01

    In wireless OFDM-based systems, coding jointly over all the sub-carriers simultaneously performs better than coding separately per sub-carrier. However, the joint coding is not always optimal because its achievable channel capacity (i.e. the maximum data rate) is inversely proportional to the

  20. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  1. Optimizing ATLAS code with different profilers

    Science.gov (United States)

    Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.

    2014-06-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts and can interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google and based on hardware performance monitoring unit events, is used to identify hot spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used in the improvement of the performance of the new magnetic field code and the identification of potential vectorization targets in several places, such as the Runge-Kutta propagation code.

  2. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
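
    For reference, the MLEM update used in this kind of reconstruction has the compact form x ← x · Aᵀ(y / Ax) / Aᵀ1, where A is the projection matrix (obtained here by Monte Carlo simulation). The sketch below applies that generic update to a small random system; it is not the authors' GATE-based matrix or code.

```python
# Generic MLEM iteration x <- x * A^T(y / Ax) / A^T(1); the projection matrix
# here is random, standing in for the Monte Carlo-simulated matrix of the paper.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # A^T 1, the sensitivity image
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)      # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

rng = np.random.default_rng(1)
A = rng.random((64, 16))                        # stand-in projection matrix
x_true = rng.random(16)
x_hat = mlem(A, A @ x_true)
print("relative error:", round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 3))
```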

  3. Optimization of KINETICS Chemical Computation Code

    Science.gov (United States)

    Donastorg, Cristina

    2012-01-01

    NASA JPL has been creating a code in FORTRAN called KINETICS to model the chemistry of planetary atmospheres. Recently there has been an effort to introduce the Message Passing Interface (MPI) into the code so as to cut down the run time of the program. There has been some implementation of MPI into KINETICS; however, the code could still be more efficient than it currently is. One way to increase efficiency is to send only certain variables to all the processes when an MPI subroutine is called and to gather only certain variables when the subroutine is finished. Therefore, all the variables that are used in three of the main subroutines needed to be investigated. Because of the sheer amount of code to comb through, this task was given as a ten-week project. I have been able to create flowcharts outlining the subroutines, common blocks, and functions used within the three main subroutines. From these flowcharts I created tables outlining the variables used in each block and important information about each. All this information will be used to determine how to run MPI in KINETICS in the most efficient way possible.

  4. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...

  5. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-03-13

    In this work we have developed a restructuring software tool (RT-CUDA) following the proposed optimization specifications to bridge the gap between high-level languages and the machine-dependent CUDA environment. RT-CUDA takes a C program and converts it into an optimized CUDA kernel with user directives in a configuration file for guiding the compiler. RT-CUDA also allows transparent invocation of the most optimized external math libraries like cuSparse and cuBLAS, enabling efficient design of linear algebra solvers. We expect RT-CUDA to be needed by many KSA industries dealing with science and engineering simulation on massively parallel computers like NVIDIA GPUs.

  6. Optimizing the ATLAS code with different profilers

    CERN Document Server

    Kama, S; The ATLAS collaboration

    2013-01-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...

  7. Optimized reversible binary-coded decimal adders

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert

    2008-01-01

    their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and less garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit...... in reversible logic design by drastically reducing the number of garbage bits. Specialized designs benefit from support by reversible logic synthesis. All circuit components required for optimizing the original design could also be synthesized successfully by an implementation of an existing synthesis algorithm...

  8. On Analyzing LDPC Codes over Multiantenna MC-CDMA System

    Directory of Open Access Journals (Sweden)

    S. Suresh Kumar

    2014-01-01

    Multiantenna multicarrier code-division multiple access (MC-CDMA) technique has been attracting much attention for designing future broadband wireless systems. In addition, low-density parity-check (LDPC) code, a promising near-optimal error correction code, is also being widely considered in next generation communication systems. In this paper, we propose a simple method to construct a regular quasicyclic low-density parity-check (QC-LDPC) code to improve the transmission performance over the precoded MC-CDMA system with limited feedback. Simulation results show that the coding gain of the proposed QC-LDPC codes is larger than that of the Reed-Solomon codes, and the performance of the multiantenna MC-CDMA system can be greatly improved by these QC-LDPC codes when the data rate is high.

  9. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  10. SU-E-T-590: Optimizing Magnetic Field Strengths with Matlab for An Ion-Optic System in Particle Therapy Consisting of Two Quadrupole Magnets for Subsequent Simulations with the Monte-Carlo Code FLUKA

    International Nuclear Information System (INIS)

    Baumann, K; Weber, U; Simeonov, Y; Zink, K

    2015-01-01

    Purpose: The aim of this study was to optimize the magnetic field strengths of two quadrupole magnets in a particle therapy facility in order to obtain a beam quality suitable for spot beam scanning. Methods: The particle transport through an ion-optic system of a particle therapy facility, consisting of the beam tube, two quadrupole magnets and a beam monitor system, was calculated with the help of Matlab by using matrices that solve the equation of motion of a charged particle in a magnetic field and in a field-free region, respectively. The magnetic field strengths were optimized in order to obtain a circular and thin beam spot at the iso-center of the therapy facility. These optimized field strengths were subsequently transferred to the Monte-Carlo code FLUKA and the transport of 80 MeV/u C12 ions through this ion-optic system was calculated by using a user routine to implement magnetic fields. The fluence along the beam axis and at the iso-center was evaluated. Results: The magnetic field strengths could be optimized by using Matlab and transferred to the Monte-Carlo code FLUKA. The implementation via a user routine was successful. By analyzing the fluence pattern along the beam axis, the characteristic focusing and defocusing effects of the quadrupole magnets could be reproduced. Furthermore, the beam spot at the iso-center was circular and significantly thinner compared to an unfocused beam. Conclusion: In this study a Matlab tool was developed to optimize magnetic field strengths for an ion-optic system consisting of two quadrupole magnets as part of a particle therapy facility. These magnetic field strengths could subsequently be transferred to and implemented in the Monte-Carlo code FLUKA to simulate the particle transport through this optimized ion-optic system
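
    The matrix formalism referred to above can be sketched compactly: a drift and a (thin-lens) quadrupole are 2×2 transfer matrices acting on the (position, angle) vector, and the beamline is their product. The lengths and focal lengths below are placeholders, not the facility's geometry, and the thin-lens form is a simplification of the thick-quadrupole matrices such a tool would use.

```python
# Thin-lens sketch of the transfer-matrix approach (one transverse plane):
# beamline matrix = product of drift and quadrupole matrices acting on the
# (position, angle) vector. Lengths and focal lengths are placeholders.
import numpy as np

def drift(length_m):
    return np.array([[1.0, length_m], [0.0, 1.0]])

def thin_quad(focal_m):          # focal_m > 0 focuses, < 0 defocuses this plane
    return np.array([[1.0, 0.0], [-1.0 / focal_m, 1.0]])

# Assumed doublet geometry: drift - quad1 - drift - quad2 - drift to the iso-center
# (matrices are applied right to left, i.e. the rightmost element is traversed first).
beamline = drift(2.0) @ thin_quad(-1.2) @ drift(0.5) @ thin_quad(0.8) @ drift(1.0)

x0 = np.array([0.002, 0.001])    # 2 mm offset, 1 mrad divergence at entry
print("(x, x') at iso-center:", beamline @ x0)
```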

  11. Optimal Alarm Systems

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system is simply an optimal level-crossing predictor that can be designed to elicit the fewest false alarms for a fixed detection probability. It...

  12. QR images: optimized image embedding in QR codes.

    Science.gov (United States)

    Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P

    2014-07-01

    This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as a function of the embedding parameters. A visual comparison between the proposed and existing methods is presented.

  13. System performance optimization

    International Nuclear Information System (INIS)

    Bednarz, R.J.

    1978-01-01

    The System Performance Optimization has become an important and difficult field for large scientific computer centres. Important because the centres must satisfy increasing user demands at the lowest possible cost. Difficult because the System Performance Optimization requires a deep understanding of hardware, software and workload. The optimization is a dynamic process depending on the changes in hardware configuration, the current level of the operating system and the user-generated workload. With the increasing complexity of computer systems and software, the field for optimization manoeuvres broadens. The hardware of two manufacturers, IBM and CDC, is discussed. Four IBM and two CDC operating systems are described. The description concentrates on the organization of the operating systems, the job scheduling and I/O handling. The performance definitions, workload specification and tools for system simulation are given. The measurement tools for the System Performance Optimization are described. The results of the measurements and the various methods used for operating system tuning are discussed. (Auth.)

  14. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal-hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays - Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up will be demonstrated by some examples, and finally an outlook on further activities concentrated on code improvements will be given. (orig.)

  15. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal-hydraulic subchannel codes together with time-dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and Boiling Water Reactors (BWR). It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays - Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code's numerics and informatics structure are presented and discussed. The gain in code speed-up will be demonstrated by some examples, and finally an outlook on further activities concentrated on code improvements will be given. (orig.)

  16. Software exorcism a handbook for debugging and optimizing legacy code

    CERN Document Server

    Blunden, Bill

    2013-01-01

    Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code takes an unflinching, no-bullshit look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that you are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our atten

  17. Control and optimization system

    Science.gov (United States)

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), and a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  18. The CALOR93 code system

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1993-01-01

    The purpose of this paper is to describe a program package, CALOR93, that has been developed to design and analyze different detector systems, in particular, calorimeters which are used in high energy physics experiments to determine the energy of particles. One's ability to design a calorimeter to perform a certain task can have a strong influence upon the validity of experimental results. The validity of the results obtained with CALOR93 has been verified many times by comparison with experimental data. The codes (HETC93, SPECT93, LIGHT, EGS4, MORSE, and MICAP) are quite generalized and detailed enough so that any experimental calorimeter setup can be studied. Due to this generalization, some software development is necessary because of the wide diversity of calorimeter designs

  19. Expansion of the CHR bone code system

    International Nuclear Information System (INIS)

    Farnham, J.E.; Schlenker, R.A.

    1976-01-01

    This report describes the coding system used in the Center for Human Radiobiology (CHR) to identify individual bones and portions of bones of a complete skeletal system. It includes illustrations of various bones and bone segments with their respective code numbers. Codes are also presented for bone groups and for nonbone materials

  20. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.
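
    To give a flavor of the kind of restructuring involved (illustrative only, not XGC1 code), a structure-of-arrays particle push in which every field access and update is a contiguous, stride-1 sweep might be sketched as:

        import numpy as np

        n = 1_000_000
        # Structure-of-arrays layout: each particle coordinate is a contiguous array,
        # so the push below is a set of unit-stride sweeps that vectorize well.
        x = np.random.rand(n)
        v = np.zeros(n)
        E = 0.1 * np.sin(2.0 * np.pi * x)        # toy electric field evaluated at the particles
        q_over_m, dt = 1.0, 1e-3

        for step in range(10):
            v += q_over_m * E * dt               # accelerate
            x += v * dt                          # push positions
            E = 0.1 * np.sin(2.0 * np.pi * x)    # re-evaluate the toy field

        print(x[:3], v[:3])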

  1. Source-optimized irregular repeat accumulate codes with inherent unequal error protection capabilities and their application to scalable image transmission.

    Science.gov (United States)

    Lan, Ching-Fu; Xiong, Zixiang; Narayanan, Krishna R

    2006-07-01

    The common practice for achieving unequal error protection (UEP) in scalable multimedia communication systems is to design rate-compatible punctured channel codes before computing the UEP rate assignments. This paper proposes a new approach to designing powerful irregular repeat accumulate (IRA) codes that are optimized for the multimedia source and to exploiting the inherent irregularity in IRA codes for UEP. Using the end-to-end distortion due to the first error bit in channel decoding as the cost function, which is readily given by the operational distortion-rate function of embedded source codes, we incorporate this cost function into the channel code design process via density evolution and obtain IRA codes that minimize the average cost function instead of the usual probability of error. Because the resulting IRA codes have inherent UEP capabilities due to irregularity, the new IRA code design effectively integrates channel code optimization and UEP rate assignments, resulting in source-optimized channel coding or joint source-channel coding. We simulate our source-optimized IRA codes for transporting SPIHT-coded images over a binary symmetric channel with crossover probability p. When p = 0.03 and the channel code length is long (e.g., with one codeword for the whole 512 x 512 image), we are able to operate at only 9.38% away from the channel capacity with code length 132380 bits, achieving the best published results in terms of average peak signal-to-noise ratio (PSNR). Compared to conventional IRA code design (that minimizes the probability of error) with the same code rate, the performance gain in average PSNR from using our proposed source-optimized IRA code design is 0.8759 dB when p = 0.1 and the code length is 12800 bits. As predicted by Shannon's separation principle, we observe that this performance gain diminishes as the code length increases.
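
    For orientation, the gap to capacity quoted above is measured against the binary symmetric channel capacity C = 1 - H2(p); a small sketch (the code rate R below is a placeholder, not the rate used in the paper):

        import math

        def h2(p):
            # Binary entropy function in bits.
            return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

        def bsc_capacity(p):
            return 1.0 - h2(p)

        p = 0.03          # crossover probability considered in the abstract
        R = 0.75          # hypothetical overall code rate, for illustration only
        C = bsc_capacity(p)
        print(f"C = {C:.4f} bits/use, gap to capacity = {100.0 * (C - R) / C:.2f}%")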

  2. Optimization in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Geraldo R.M. da [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia

    1994-12-31

    This paper partially discusses the advantages and disadvantages of the optimal power flow. It shows some of the difficulties of implementation and proposes solutions. An analysis is made comparing the power flow code BIGPOWER/CESP and the optimal power flow code FPO/SEL, developed by the author, when applied to the CEPEL-ELETRONORTE and CESP systems. (author) 8 refs., 5 tabs.

  3. An optimal dissipative encoder for the toric code

    Science.gov (United States)

    Dengis, John; König, Robert; Pastawski, Fernando

    2014-01-01

    We consider the problem of preparing specific encoded resource states for the toric code by local, time-independent interactions with a memoryless environment. We propose the construction of such a dissipative encoder which converts product states to topologically ordered ones while preserving logical information. The corresponding Liouvillian is made up of four local Lindblad operators. For a qubit lattice of size L × L, we show that this process prepares encoded states in time O(L), which is optimal. This scaling compares favorably with known local unitary encoders for the toric code which take time of order Ω(L^2) and require active time-dependent control.

  4. The octopus burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de.

    1996-01-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  5. The OCTOPUS burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.)

  6. Energy optimization system

    Science.gov (United States)

    Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat

    2013-01-22

    A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.

  7. Spectral-Amplitude-Coded OCDMA Optimized for a Realistic FBG Frequency Response

    Science.gov (United States)

    Penon, Julien; El-Sahn, Ziad A.; Rusch, Leslie A.; Larochelle, Sophie

    2007-05-01

    We develop a methodology for numerical optimization of fiber Bragg grating frequency response to maximize the achievable capacity of a spectral-amplitude-coded optical code-division multiple-access (SAC-OCDMA) system. The optimal encoders are realized, and we experimentally demonstrate an incoherent SAC-OCDMA system with seven simultaneous users. We report a bit error rate (BER) of 2.7 × 10^-8 at 622 Mb/s for a fully loaded network (seven users) using a 9.6-nm optical band. We achieve error-free transmission (BER < 1 × 10^-9) for up to five simultaneous users.

  8. Fundamentals of an Optimal Multirate Subband Coding of Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    D. Kula

    2000-06-01

    Full Text Available A consistent theory of optimal subband coding of zero-mean wide-sense cyclostationary signals, with N-periodic statistics, is presented in this article. An M-channel orthonormal uniform filter bank, employing N-periodic analysis and synthesis filters, is used, while an average variance condition is applied to evaluate the output distortion. In three lemmas and a final theorem, the necessity of decorrelation of blocked subband signals and the requirement of a specific ordering of power spectral densities are proven.

  9. Iterative optimization of performance libraries by hierarchical division of codes

    International Nuclear Information System (INIS)

    Donadio, S.

    2007-09-01

    The increasing complexity of hardware features incorporated in modern processors makes high performance code generation very challenging. Library generators such as ATLAS, FFTW and SPIRAL overcome this issue by empirically searching in the space of possible program versions for the one that performs the best. This thesis explores a fully automatic solution to adapt a compute-intensive application to the target architecture. By mimicking complex sequences of transformations useful to optimize real codes, we show that generative programming is a practical tool to implement a new hierarchical compilation approach for the generation of high performance code relying on the use of state-of-the-art compilers. As opposed to ATLAS, this approach is not application-dependent but can be applied to fairly generic loop structures. Our approach relies on the decomposition of the original loop nest into simpler kernels. These kernels are much simpler to optimize and, furthermore, using such codes makes the performance trade-off problem much simpler to express and to solve. Finally, we propose a new approach for the generation of performance libraries based on this decomposition method. We show that our method generates high-performance libraries, in particular for BLAS. (author)

  10. Coded aperture optimization in compressive X-ray tomography: a gradient descent approach.

    Science.gov (United States)

    Cuadros, Angela P; Arce, Gonzalo R

    2017-10-02

    Coded aperture X-ray computed tomography (CT) has the potential to revolutionize X-ray tomography systems in medical imaging and air and rail transit security - both areas of global importance. It allows either a reduced set of measurements in X-ray CT without degradation in image reconstruction, or the measurement of multiplexed X-rays to simplify the sensing geometry. Measurement reduction is of particular interest in medical imaging to reduce radiation, and airport security often imposes practical constraints leading to limited angle geometries. Coded aperture compressive X-ray CT places a coded aperture pattern in front of the X-ray source in order to obtain patterned projections onto a detector. Compressive sensing (CS) reconstruction algorithms are then used to recover the image. To date, the coded illumination patterns used in conventional CT systems have been random. This paper addresses the code optimization problem for general tomography imaging based on the point spread function (PSF) of the system, which is used as a measure of sensing matrix quality and connects to the restricted isometry property (RIP) and coherence of the sensing matrix. The methods presented are general, simple to use, and can be easily extended to other imaging systems. Simulations are presented where the peak signal-to-noise ratios (PSNR) of the reconstructed images using optimized coded apertures exhibit significant gain over those attained by random coded apertures. Additionally, results using real X-ray tomography projections are presented.
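
    As a toy illustration of sensing-matrix optimization by gradient descent (a continuous relaxation that minimizes a Gram-matrix coherence surrogate, not the paper's PSF-based cost or its binary aperture constraint), one might write:

        import numpy as np

        rng = np.random.default_rng(0)
        m, n = 32, 64
        A = rng.standard_normal((m, n))              # stand-in for the CT sensing matrix

        def normalize(A):
            return A / np.linalg.norm(A, axis=0, keepdims=True)

        step = 1e-3
        for it in range(2000):
            An = normalize(A)
            G = An.T @ An - np.eye(n)                # off-diagonal Gram entries ~ coherence
            # Gradient of f(A) = ||A^T A - I||_F^2 with respect to A is 4 A (A^T A - I);
            # applying it to the column-normalized matrix is a simple heuristic.
            A = An - step * 4.0 * (An @ G)

        An = normalize(A)
        print("final mutual coherence:", np.max(np.abs(An.T @ An - np.eye(n))))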

  11. Distributed Optimization System

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2004-11-30

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agents can be one or more physical agents, such as robots, or software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time-dependent sources, time-independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  12. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. It can be argued that although some of the important

  13. Random mask optimization for fast neutron coded aperture imaging

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, Kyle [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Univ. of California, Los Angeles, CA (United States); Marleau, Peter [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brubaker, Erik [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-05-01

    In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma-rays are being imaged, the optimum is seldom achieved. In this case more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
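
    A bare-bones numpy sketch of the MLEM update that such a mask search would wrap (the system matrix, source and counts below are synthetic placeholders, not the paper's detector model):

        import numpy as np

        rng = np.random.default_rng(1)
        n_det, n_pix = 200, 100
        A = rng.random((n_det, n_pix))          # synthetic mask/detector system matrix
        x_true = 50.0 * rng.random(n_pix)       # synthetic source image
        y = rng.poisson(A @ x_true)             # noisy projection data

        x = np.ones(n_pix)                      # MLEM needs a strictly positive start
        sens = A.T @ np.ones(n_det)             # sensitivity image, A^T 1
        for it in range(200):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)

        err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
        print("relative reconstruction error:", err)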

  14. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2012-12-01

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system. © 2012 IEEE.

  15. System floorplanning optimization

    KAUST Repository

    Browning, David W.

    2013-01-10

    Notebook and Laptop Original Equipment Manufacturers (OEMs) place great emphasis on creating unique system designs to differentiate themselves in the mobile market. These systems are developed from the 'outside in' with the focus on how the system is perceived by the end-user. As a consequence, very little consideration is given to the interconnections or power of the devices within the system with a mentality of 'just make it fit'. In this paper we discuss the challenges of Notebook system design and the steps by which system floor-planning tools and algorithms can be used to provide an automated method to optimize this process to ensure all required components most optimally fit inside the Notebook system.

  16. Teradata Database System Optimization

    OpenAIRE

    Krejčík, Jan

    2008-01-01

    The Teradata database system is specially designed for the data warehousing environment. This thesis explores the use of Teradata in this environment and describes its characteristics and potential areas for optimization. The theoretical part is intended to serve as user study material; it presents the main principles of Teradata system operation and describes factors significantly affecting system performance. The following sections are based on previously acquired information which is used for analysis and ...

  17. Evolutionary Optimization of Electric Power Distribution Using the Dandelion Code

    Directory of Open Access Journals (Sweden)

    Jorge Sabattin

    2012-01-01

    Full Text Available Planning primary electric power distribution involves solving an optimization problem with nonlinear components, which makes it difficult to obtain the optimum solution when the problem has dimensions that are found in reality, in terms of both the installation cost and the power loss cost. To tackle this problem, heuristic methods have been used, but even when sacrificing quality, finding the optimum solution still represents a computational challenge. In this paper, we study this problem using genetic algorithms. With the help of a coding scheme based on the dandelion code, these genetic algorithms allow larger instances of the problem to be solved. With the stated approach, we have solved instances of up to 40,000 consumer nodes when considering 20 substations; the total cost deviates by 3.1% from a lower bound that considers only the construction costs of the network.

  18. The EGS5 Code System

    Energy Technology Data Exchange (ETDEWEB)

    Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial invocation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version

  19. BWROPT: A multi-cycle BWR fuel cycle optimization code

    Energy Technology Data Exchange (ETDEWEB)

    Ottinger, Keith E.; Maldonado, G. Ivan, E-mail: Ivan.Maldonado@utk.edu

    2015-09-15

    Highlights: • A multi-cycle BWR fuel cycle optimization algorithm is presented. • New fuel inventory and core loading pattern determination. • The parallel simulated annealing algorithm was used for the optimization. • Variable sampling probabilities were compared to constant sampling probabilities. - Abstract: A new computer code for performing BWR in-core and out-of-core fuel cycle optimization for multiple cycles simultaneously has been developed. Parallel simulated annealing (PSA) is used to optimize the new fuel inventory and placement of new and reload fuel for each cycle considered. Several algorithm improvements were implemented and evaluated. The most significant of these are variable sampling probabilities and sampling new fuel types from an ordered array. A heuristic control rod pattern (CRP) search algorithm was also implemented, which is useful for single CRP determinations; however, this feature requires significant computational resources and is currently not practical for use in a full multi-cycle optimization. The PSA algorithm was demonstrated to be capable of significant objective function reduction and finding candidate loading patterns without constraint violations. The use of variable sampling probabilities was shown to reduce runtime while producing better results compared to using constant sampling probabilities. Sampling new fuel types from an ordered array was shown to have a mixed effect compared to random new fuel type sampling, whereby using both random and ordered sampling produced better results but required longer runtimes.
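
    A generic simulated-annealing skeleton of the kind that PSA parallelizes (the loading-pattern representation, move set and objective below are illustrative stand-ins, not BWROPT's):

        import math, random

        random.seed(0)
        fuel_types = list(range(6))                  # hypothetical new-fuel-type IDs

        def objective(pattern):
            # Stand-in objective: penalize adjacent identical fuel types.
            return sum(1 for a, b in zip(pattern, pattern[1:]) if a == b)

        def perturb(pattern, p_swap=0.7):
            # Either swap two core positions or resample one position from the
            # ordered list of fuel types (crudely mimicking the two move types).
            new = pattern[:]
            if random.random() < p_swap:
                i, j = random.sample(range(len(new)), 2)
                new[i], new[j] = new[j], new[i]
            else:
                new[random.randrange(len(new))] = random.choice(fuel_types)
            return new

        state = [random.choice(fuel_types) for _ in range(40)]
        cost, T = objective(state), 5.0
        while T > 1e-3:
            cand = perturb(state)
            c = objective(cand)
            if c < cost or random.random() < math.exp((cost - c) / T):
                state, cost = cand, c
            T *= 0.995                               # geometric cooling schedule

        print("best cost found:", cost)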

  20. EGS4 code system and its application

    International Nuclear Information System (INIS)

    Shin, Chang Ho; Kim, Jong Kyung

    1998-01-01

    The EGS4 code system is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of time-independent coupled electron/photon transport problems, with or without the presence of macroscopic electric and magnetic fields. The EGS4 code system consists of EGS4, PEGS4, and a USER code. The EGS4 code is designed to simulate electromagnetic cascades in various geometries and at energies up to a few thousand GeV and down to cut-off kinetic energies of 10 and 1 keV for electrons and photons, respectively. The radiation transport of electrons or photons can be simulated in any element, compound, or mixture. The PEGS4 code, the data preparation package, creates the data to be used by the EGS4 code, using cross section tables for elements 1 through 100. A USER code must be written; this consists of a MAIN program and the subroutines HOWFAR and AUSGAB, the latter two determining the geometry and output (scoring), respectively. The EGS4 code system is written in MORTRAN, an extended FORTRAN language. The EGS4 code system has been used in a wide range of applications, such as beam target design, accelerator shielding analysis, gas bremsstrahlung analysis, nuclear data evaluation, and so on. An example is the calculation of the photonuclear reaction (γ, n) yield and the energy distribution of the produced neutrons. In this work, a routine for calculating the photonuclear reaction yield and neutron energy distribution was developed using the EGS4 code system. The photonuclear reaction yield was obtained by the convolution of the photonuclear reaction cross section and the photon differential track length. The photonuclear reaction cross section was evaluated from the Lorentz formula. A benchmark calculation was performed to compare our results with those of Hansen. The results obtained from the EGS4 code system and those of Hansen are in good agreement.
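
    Written out, the yield convolution described above takes the standard form (notation assumed here for orientation, not taken from the paper):

        Y = N \int_{E_{th}}^{E_{0}} \sigma_{\gamma n}(E) \, \frac{dL}{dE}(E) \, dE

    where N is the atomic number density of the target, \sigma_{\gamma n}(E) is the photonuclear cross section, dL/dE is the differential track length of photons per unit energy, E_{th} is the reaction threshold, and E_{0} is the endpoint energy.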

  1. Advanced thermionic reactor systems design code

    International Nuclear Information System (INIS)

    Lewis, B.R.; Pawlowski, R.A.; Greek, K.J.; Klein, A.C.

    1991-01-01

    An overall systems design code is under development to model an advanced in-core thermionic nuclear reactor system for space applications at power levels of 10 to 50 kWe. The design code is written in an object-oriented programming environment that allows the use of a series of design modules, each of which is responsible for the determination of specific system parameters. The code modules include a neutronics and core criticality module, a core thermal hydraulics module, a thermionic fuel element performance module, a radiation shielding module, a module for waste heat transfer and rejection, and modules for power conditioning and control. The neutronics and core criticality module determines critical core size, core lifetime, and shutdown margins using the criticality calculation capability of the Monte Carlo Neutron and Photon Transport Code System (MCNP). The remaining modules utilize results of the MCNP analysis along with FORTRAN programming to predict the overall system performance

  2. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use

  3. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jia, E-mail: lijia@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Jiang, Kecheng; Zhang, Xiaokang [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China); Nie, Xingchen [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Zhu, Qinjun; Liu, Songlin [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China)

    2016-12-15

    Highlights: • A nuclear-thermal-coupled pre-design code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module aims at speeding up the design progress by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm aims at an optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as the material temperature limit and the available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and the heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of functional zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study aims to develop a pre-design code that optimizes the radial build of the blanket while considering nuclear and thermal-hydraulic performance simultaneously. The two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis progress; (2) a preliminary optimization algorithm using one or multiple specified objectives subject to the design criteria, in the form of constraints imposed on design variables and performance parameters within the possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeder blanket in the China Fusion Engineering Testing Reactor (CFETR) project.

  4. CRM System Optimization

    OpenAIRE

    Fučík, Ivan

    2015-01-01

    This thesis is focused on CRM solutions in small and medium-sized organizations with respect to the quality of their customer relationship. The main goal of this work is to design an optimal CRM solution in the environment of real organization. To achieve this goal it is necessary to understand the theoretical basis of several topics, such as organizations and their relationship with customers, CRM systems, their features and trends. On the basis of these theoretical topics it is possible to ...

  5. Power system optimization

    International Nuclear Information System (INIS)

    Bogdan, Zeljko; Cehil, Mislav

    2007-01-01

    Long-term gas purchase contracts usually determine delivery and payment for gas on a regular hourly basis, independently of demand side consumption. In order to use fuel gas in an economically viable way, optimization of gas distribution for covering consumption must be introduced. In this paper, a mathematical model of the electric utility system which is used for optimization of gas distribution over electric generators is presented. The utility system comprises installed capacity of 1500 MW of thermal power plants, 400 MW of combined heat and power plants, 330 MW of a nuclear power plant and 1600 MW of hydro power plants. Based on a known demand curve, the optimization model selects plants according to the prescribed criteria. It first engages run-of-river hydro plants, then the public cogeneration plants, the nuclear plant and the thermal power plants. Storage hydro plants are used for covering peak load consumption. In case of a shortage of installed capacity, cross-border purchase is allowed. Usage of dual-fuel equipment (gas-oil), which is available in some thermal plants, is also controlled by the optimization procedure. It is shown that by using such a model it is possible to properly plan the amount of fuel gas which will be contracted. The contracted amount can easily be distributed over generators efficiently and without losses (no breaks in delivery). The model helps in optimizing the gas-oil ratio for plants with combined burners and enables the planning of power plant overhauls over the year in a viable and efficient way. (author)
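
    A toy merit-order dispatch in the spirit of the selection rule described above (the split of the 1600 MW hydro capacity between run-of-river and storage plants, and the hourly demand, are assumed for illustration):

        # Plants are engaged in the prescribed order; storage hydro covers the peak,
        # and any remaining shortfall is met by cross-border purchase.
        merit_order = [
            ("run-of-river hydro", 400.0),   # MW available this hour (assumed split)
            ("public CHP",         400.0),
            ("nuclear",            330.0),
            ("gas/oil thermal",   1500.0),
        ]
        storage_hydro_mw = 1200.0            # assumed remainder of the 1600 MW hydro

        def dispatch(demand_mw):
            schedule, remaining = {}, demand_mw
            for name, cap in merit_order:
                schedule[name] = min(cap, remaining)
                remaining -= schedule[name]
            schedule["storage hydro"] = min(storage_hydro_mw, remaining)
            remaining -= schedule["storage hydro"]
            schedule["cross-border purchase"] = max(remaining, 0.0)
            return schedule

        print(dispatch(3100.0))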

  6. Improved decoding for a concatenated coding system

    DEFF Research Database (Denmark)

    Paaske, Erik

    1990-01-01

    The concatenated coding system recommended by CCSDS (Consultative Committee for Space Data Systems) uses an outer (255,223) Reed-Solomon (RS) code based on 8-b symbols, followed by the block interleaver and an inner rate 1/2 convolutional code with memory 6. Viterbi decoding is assumed. Two new decoding procedures based on repeated decoding trials and exchange of information between the two decoders and the deinterleaver are proposed. In the first one, where the improvement is 0.3-0.4 dB, only the RS decoder performs repeated trials. In the second one, where the improvement is 0.5-0.6 dB, both

  7. Low Spectral Efficiency Trellis Coded Modulation Systems

    Science.gov (United States)

    2006-09-01

    BW = 2R_b. The three alternative systems are all non-TCM systems and consist of QPSK with independent r = 1/2 error correction coding on the in-phase ... and quadrature components, with null-to-null bandwidth BW = 2R_b, 8-ary biorthogonal keying (8-BOK) with r = 2/3 error correction coding with bandwidth ... BW = (21/12)R_b and 16-BOK with r = 3/4 error correction coding and with bandwidth BW = (44/24)R_b. At the beginning of the analysis only the effect of

  8. High frequency coded imaging system with RF.

    Science.gov (United States)

    Lewandowski, Marcin; Nowicki, Andrzej

    2008-08-01

    Coded transmission is an approach to solve the inherent compromise between penetration and resolution required in ultrasound imaging. Our goal was to examine the applicability of coded excitation to HF (20-35 MHz) ultrasound imaging. A novel real-time imaging system for research and evaluation of coded transmission was developed. The digital programmable coder-digitizer module based on a field programmable gate array (FPGA) chip supports arbitrary waveform coded transmission and RF echo sampling up to 200 megasamples per second, as well as real-time streaming of digitized RF data via a high-speed USB interface to the PC. All RF and image data processing were implemented in software. A novel balanced software architecture supports real-time processing and display at rates up to 30 frames/sec. The system was used to acquire quantitative data for sine burst and 16-bit Golay code excitation at 20 MHz fundamental frequency. An SNR gain close to 14 dB was obtained. The example skin scan clearly shows the extended penetration and improved contrast when a 35-MHz Golay code is used. The system presented is a practical and low-cost implementation of a coded excitation technique in HF ultrasound imaging that can be used as a research tool as well as be introduced into production.
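
    The property that makes Golay codes attractive here is that the autocorrelations of a complementary pair sum to a delta function, so two-transmit pulse compression has no range sidelobes; a small sketch using the standard recursive pair construction (not the authors' implementation):

        import numpy as np

        def golay_pair(n_bits):
            # Recursive construction: (a, b) -> (a|b, a|-b); n_bits must be a power of two.
            a, b = np.array([1.0]), np.array([1.0])
            while len(a) < n_bits:
                a, b = np.concatenate([a, b]), np.concatenate([a, -b])
            return a, b

        a, b = golay_pair(16)                         # 16-bit pair as in the abstract
        acf_sum = np.correlate(a, a, "full") + np.correlate(b, b, "full")
        print(acf_sum)                                # 2*16 at zero lag, zero elsewhere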

  9. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, the applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating the population by accepting the new population unconditionally. The experimental results on 10 benchmark test functions with dimension N=30 have shown that IRPEO is competitive with or even better than various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.

  10. Design of Short Synchronization Codes for Use in Future GNSS System

    Directory of Open Access Journals (Sweden)

    Surendran K. Shanmugam

    2008-01-01

    The modernization efforts include numerous signal structure innovations to ensure better performance than legacy GNSS systems. The adoption of secondary short synchronization codes is one among these innovations, playing an important role in spectral separation, bit synchronization, and narrowband interference protection. In this paper, we present a short synchronization code design based on the optimization of judiciously selected performance criteria. The new synchronization codes were obtained for lengths up to 30 bits through exhaustive search and are characterized by optimal periodic correlation. More importantly, the existence of synchronization codes better than the standardized GPS and Galileo codes corroborates the benefits of, and the need for, short synchronization code design.
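
    A brute-force illustration of this kind of exhaustive search, here scoring candidate binary codes only by their peak periodic autocorrelation sidelobe (a single criterion chosen for simplicity; the paper optimizes several):

        import itertools
        import numpy as np

        def peak_periodic_sidelobe(code):
            c = np.array(code, dtype=float)
            # Periodic (circular) autocorrelation at all non-zero lags.
            return max(abs(np.dot(c, np.roll(c, k))) for k in range(1, len(c)))

        n = 10                                    # keep small: the search space is 2^n
        best = min(itertools.product([-1, 1], repeat=n), key=peak_periodic_sidelobe)
        print("best length-%d code:" % n, best,
              "peak sidelobe:", peak_periodic_sidelobe(best))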

  11. Fusion PIC code performance analysis on the Cori KNL system

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, Tuomas S. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Friesen, Brian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Raman, Karthic [INTEL Corp. (United States)

    2017-05-25

    We study the attainable performance of Particle-In-Cell codes on the Cori KNL system by analyzing a miniature particle push application based on the fusion PIC code XGC1. We start from the most basic building blocks of a PIC code and build up the complexity to identify the kernels that cost the most in performance and focus optimization efforts there. Particle push kernels operate at high arithmetic intensity (AI) and are not likely to be memory bandwidth or even cache bandwidth bound on KNL. Therefore, we see only minor benefits from the high bandwidth memory available on KNL, and achieving good vectorization is shown to be the most beneficial optimization path, with a theoretical yield of up to 8x speedup on KNL. In practice we are able to obtain up to a 4x gain from vectorization due to limitations set by the data layout and memory latency.
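
    The roofline bound referred to above is simply the lesser of the compute peak and AI times memory bandwidth; a sketch with placeholder machine numbers (not official KNL figures):

        def roofline_gflops(arithmetic_intensity, peak_gflops=2600.0, bandwidth_gbs=450.0):
            # Attainable performance is capped by both the compute roof and the memory roof.
            return min(peak_gflops, arithmetic_intensity * bandwidth_gbs)

        for ai in (0.5, 2.0, 8.0, 32.0):
            print(f"AI = {ai:5.1f} flop/byte -> bound = {roofline_gflops(ai):7.1f} GFLOP/s")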

  12. Optimization and optimal control in automotive systems

    CERN Document Server

    Kolmanovsky, Ilya; Steinbuch, Maarten; Re, Luigi

    2014-01-01

    This book demonstrates the use of the optimization techniques that are becoming essential to meet the increasing stringency and variety of requirements for automotive systems. It shows the reader how to move away from earlier approaches, based on some degree of heuristics, to the use of more and more common systematic methods. Even systematic methods can be developed and applied in a large number of forms, so the text collects contributions from across the theory, methods and real-world automotive applications of optimization. Greater fuel economy, significant reductions in permissible emissions, new drivability requirements and the generally increasing complexity of automotive systems are among the criteria that the contributing authors set themselves to meet. In many cases multiple and often conflicting requirements give rise to multi-objective constrained optimization problems which are also considered. Some of these problems fall into the domain of the traditional multi-disciplinary optimization applie...

  13. User effects on the transient system code calculations. Final report

    International Nuclear Information System (INIS)

    Aksan, S.N.; D'Auria, F.

    1995-01-01

    Large thermal-hydraulic system codes are widely used to perform safety and licensing analyses of nuclear power plants and to optimize operational procedures and the plant design itself. Evaluation of the capabilities of these codes is accomplished by comparing the code predictions with measured experimental data obtained from various types of separate effects and integral test facilities. In recent years, some attempts have been made to establish methodologies to evaluate the accuracy and the uncertainty of the code predictions and, consequently, to judge the acceptability of the codes. In none of these methodologies has the influence of the code user on the calculated results been directly addressed. In this paper, the results of the investigations on user effects for thermal-hydraulic transient system codes are presented and discussed on the basis of some case studies. The general findings of the investigations show that in addition to user effects, there are other factors that affect the results of the calculations and are hidden under user effects. Both the hidden factors and the direct user effects are discussed in detail, and general recommendations and conclusions are presented to control and limit them.

  14. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
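
    A bare-bones PSO loop for this kind of unfolding problem, fitting a spectrum phi so that R·phi reproduces the measured pulse-height distribution (the response matrix and measurement below are synthetic, and the PSO coefficients are the usual textbook choices, not necessarily the SDPSO settings):

        import numpy as np

        rng = np.random.default_rng(2)
        n_ch, n_bins = 30, 12
        R = rng.random((n_ch, n_bins))               # synthetic detector response matrix
        phi_true = rng.random(n_bins)
        m = R @ phi_true                             # synthetic pulse-height distribution

        def cost(phi):
            # Non-negativity of the spectrum is enforced by taking |phi|.
            return np.linalg.norm(R @ np.abs(phi) - m)

        n_part = 40
        pos = rng.random((n_part, n_bins))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_cost = np.array([cost(p) for p in pos])
        gbest = pbest[np.argmin(pbest_cost)].copy()

        w, c1, c2 = 0.7, 1.5, 1.5                    # inertia and acceleration coefficients
        for it in range(500):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            c = np.array([cost(p) for p in pos])
            improved = c < pbest_cost
            pbest[improved], pbest_cost[improved] = pos[improved], c[improved]
            gbest = pbest[np.argmin(pbest_cost)].copy()

        print("unfolded spectrum:", np.abs(gbest))
        print("residual:", cost(gbest))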

  15. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    International Nuclear Information System (INIS)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant

  16. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension

  17. Implementing a modular system of computer codes

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.

    1983-07-01

    A modular computation system has been developed for nuclear reactor core analysis. The codes can be applied repeatedly in blocks without extensive user input data, as needed for reactor history calculations. The primary control options over the calculational paths and task assignments within the codes are blocked separately from other instructions, admitting ready access by user input instruction or directions from automated procedures and promoting flexible and diverse applications at minimum application cost. Data interfacing is done under formal specifications with data files manipulated by an informed manager. This report emphasizes the system aspects and the development of useful capability; it is hopefully informative and useful to anyone developing a modular code system of much sophistication. Overall, this report summarizes in a general way the many factors and difficulties that are faced in making reactor core calculations, based on the experience of the authors. It provides the background on which work on HTGR reactor physics is being carried out

  18. A novel neutron energy spectrum unfolding code using particle swarm optimization

    Science.gov (United States)

    Shahabinejad, H.; Sohrabpour, M.

    2017-07-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code has previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate than those codes. The results of the SDPSO code have been demonstrated to match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO has been shown to be nearly two times faster than the TGASU code.

  19. A new Morse code scheme optimized according to the statistical properties of Turkish

    OpenAIRE

    ÇİÇEK, Emrah; YILMAZ, Asım Egemen

    2013-01-01

    Morse code has been in use for more than 180 years, even though its currently known form is slightly different from the form defined by Morse and Vail. The code book constructed by Vail was optimized according to the statistical properties of English. In this study, we propose a new code book optimized for Turkish and demonstrate that it is information-theoretically possible to achieve about a 10% improvement in the coding of Turkish texts by means of our proposal. The outco...

  20. Plotting system for the MINCS code

    International Nuclear Information System (INIS)

    Watanabe, Tadashi

    1990-08-01

    The plotting system for the MINCS code is described. The transient two-phase flow analysis code MINCS has been developed to provide a computational tool for analysing various two-phase flow phenomena in one-dimensional ducts. Two plotting systems, namely the SPLPLOT system and the SDPLOT system, can be used as the plotting functions. The SPLPLOT system is used for plotting time transients of variables, while the SDPLOT system is for spatial distributions. The SPLPLOT system is based on the SPLPACK system, which is used as a general tool for plotting results of transient analysis codes or experiments. The SDPLOT system is based on the GPLP program, which is also regarded as a general plotting program. In the SPLPLOT and the SDPLOT systems, the standardized data format called the SPL format is used in reading data to be plotted. The output data format of MINCS is translated into the SPL format by using the conversion system called the MINTOSPL system. In this report, how to use the plotting functions is described. (author)

  1. Investigation on natural frequency of an optimized elliptical container using real-coded genetic algorithm

    Directory of Open Access Journals (Sweden)

    M. H. Shojaeefard

    Full Text Available This study introduces a method based on a real-coded genetic algorithm to design an elliptically shaped fuel tank. The method enhances advantages of the system, such as roll stability, and reduces disadvantages, such as fluid c.g. height and overturning moment. These parameters, corresponding to elliptical tanks with different filling levels, are properly optimized. Moreover, the effects of the optimized shapes on the natural sloshing frequency are investigated. Comparison of the presented results with experimental ones indicates the reliability and accuracy of the present work. In addition, a new method based on a genetic algorithm, which enhances the tank rollover threshold, is presented. This optimization enhances roll stability, although it reduces the natural sloshing frequency in comparison to cylindrical tanks. In contrast, the sloshing frequency of the optimized elliptical tank is enhanced compared with conventional elliptical tanks, which is considered an advantage of the presented work.

  2. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    Science.gov (United States)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
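
    The quantity being minimized, the algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian. A small illustration on an arbitrary example graph (not one of the phenotypic graphs of the paper) is sketched below.

        import numpy as np

        def algebraic_connectivity(adjacency):
            """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
            A = np.asarray(adjacency, dtype=float)
            L = np.diag(A.sum(axis=1)) - A
            return np.sort(np.linalg.eigvalsh(L))[1]

        # Arbitrary 4-node example graph (a path), used only to illustrate the measure.
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]])
        print(algebraic_connectivity(A))   # ~0.586 for a 4-node path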

  3. A Realistic Model Under Which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, Harry; van der Gulik, Peter T. S.; Klau, Gunnar W.; Schaffner, Christian; Speijer, Dave; Stougie, Leen

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  4. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  5. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  6. An integrated radiation physics computer code system.

    Science.gov (United States)

    Steyn, J. J.; Harris, D. W.

    1972-01-01

    An integrated computer code system for the semi-automatic and rapid analysis of experimental and analytic problems in gamma photon and fast neutron radiation physics is presented. Such problems as the design of optimum radiation shields and radioisotope power source configurations may be studied. The system codes allow for the unfolding of complex neutron and gamma photon experimental spectra. Monte Carlo and analytic techniques are used for the theoretical prediction of radiation transport. The system includes a multichannel pulse-height analyzer scintillation and semiconductor spectrometer coupled to an on-line digital computer with appropriate peripheral equipment. The system is geometry generalized as well as self-contained with respect to material nuclear cross sections and the determination of the spectrometer response functions. Input data may be either analytic or experimental.

  7. MPEG-2/4 Low-Complexity Advanced Audio Coding Optimization and Implementation on DSP

    Science.gov (United States)

    Wu, Bing-Fei; Huang, Hao-Yu; Chen, Yen-Lin; Peng, Hsin-Yuan; Huang, Jia-Hsiung

    This study presents several optimization approaches for the MPEG-2/4 Advanced Audio Coding (AAC) Low Complexity (LC) encoding and decoding processes. Considering the power consumption and the peripherals required for consumer electronics, this study adopts the TI OMAP5912 platform for portable devices. An important optimization issue for implementing an AAC codec on embedded and mobile devices is to reduce computational complexity and memory consumption. Due to power saving issues, most embedded and mobile systems can only provide very limited computational power and memory resources for the coding process. As a result, modifying and simplifying only one or two blocks is insufficient for optimizing the AAC encoder and enabling it to work well on embedded systems. It is therefore necessary to enhance the computational efficiency of other important modules in the encoding algorithm. This study focuses on optimizing the Temporal Noise Shaping (TNS), Mid/Side (M/S) Stereo, Modified Discrete Cosine Transform (MDCT) and Inverse Quantization (IQ) modules in the encoder and decoder. Furthermore, we also propose an efficient memory reduction approach that provides a satisfactory balance between the reduction of memory usage and the expansion of the encoded files. In the proposed design, both the AAC encoder and decoder are built with fixed-point arithmetic operations and implemented on a DSP processor combined with an ARM-core for peripheral control. Experimental results demonstrate that the proposed AAC codec is computationally effective, has low memory consumption, and is suitable for low-cost embedded and mobile applications.
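
    The fixed-point style mentioned above can be illustrated with Q15 arithmetic, a common convention on 16-bit DSPs; this is a generic sketch, not code from the described AAC implementation.

        def q15(x):
            """Convert a float in [-1, 1) to Q15 fixed-point (16-bit signed integer)."""
            return max(-32768, min(32767, int(round(x * 32768))))

        def q15_mul(a, b):
            """Multiply two Q15 values: the 32-bit product is shifted back to Q15."""
            return (a * b) >> 15

        a, b = q15(0.5), q15(-0.25)
        print(q15_mul(a, b) / 32768.0)   # close to -0.125, within Q15 precision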

  8. CASKETSS: a computer code system for thermal and structural analysis of nuclear fuel shipping casks

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1989-02-01

    A computer program CASKETSS has been developed for the purpose of thermal and structural analysis of nuclear fuel shipping casks. CASKETSS means a modular code system for CASK Evaluation code system Thermal and Structural Safety. The main features of CASKETSS are as follows: (1) Thermal and structural analysis computer programs for one-, two- and three-dimensional geometries are contained in the code system. (2) Some of the computer programs in the code system have been programmed to provide near optimal speed on vector processing computers. (3) Data libraries for thermal and structural analysis are provided in the code system. (4) An input data generator is provided in the code system. (5) A graphic computer program is provided in the code system. In the paper, a brief illustration of the calculation method, input data and sample calculations is presented. (author)

  9. User Instructions for the Systems Assessment Capability, Rev. 1, Computer Codes Volume 3: Utility Codes

    Energy Technology Data Exchange (ETDEWEB)

    Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.; Miley, Terri B.; Nichols, William E.; Strenge, Dennis L.

    2004-09-14

    This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.

  10. Logistics systems optimization under competition

    DEFF Research Database (Denmark)

    Choi, Tsan Ming; Govindan, Kannan; Ma, Lijun

    2015-01-01

    Nowadays, optimization on logistics and supply chain systems is a crucial and critical issue in industrial and systems engineering. Important areas of logistics and supply chain systems include transportation control, inventory management, and facility location planning. Under a competitive market...

  11. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Today many optimization codes based on engineering experience or searching algorithms (such as SA, GA, ANN, ACO) have been developed, while how to improve their searching efficiency and engineering usability still needs further research. CSA (Characteristic Statistic Algorithm) is a global optimization algorithm with high efficiency developed by our team. The performance of CSA has been proved on many problems (such as Traveling Salesman Problems). The idea of CSA is to induce the searching direction from the statistical distribution of characteristic values. This algorithm is quite suitable for fuel management optimization. An optimization code with CSA has been developed and used on many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code, such as: 1. Considering the asymmetry of burn-up within one assembly, the rotation of each assembly is introduced as a new optimization variable. 2. The worth of control rods must satisfy the given constraint, so the corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered in this code. 4. To confirm the accuracy of the optimization results, many verifications of the physics calculation module in this code have been done, and the parameters of the optimization schemes are checked with the SCIENCE code. The improved optimization code with CSA has been used on the Qinshan nuclear plant of China. The reloading of cycles 7, 8 and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle (with burnable poisons) reloading are optimized. At last, many optimized schemes are found by the CSA code.

  12. Greedy vs. L1 Convex Optimization in Sparse Coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... and action recognition, a comparative study of codes in abnormal event detection is less studied and hence no conclusion is gained on the effect of codes in detecting abnormalities. We restrict our comparison to two types of the above L0-norm solutions: greedy algorithms and convex L1-norm solutions....... Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification applications may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate their performance...
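
    A minimal version of the comparison can be set up with off-the-shelf solvers: orthogonal matching pursuit as the greedy (L0-style) coder and the Lasso as the convex L1 coder, applied to a synthetic dictionary and signal. The dictionary size, sparsity level and regularization weight below are arbitrary illustrative choices.

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

        rng = np.random.default_rng(0)
        D = rng.standard_normal((100, 300))                  # random dictionary: 300 atoms of dimension 100
        D /= np.linalg.norm(D, axis=0)                       # normalize atoms
        true_code = np.zeros(300)
        true_code[rng.choice(300, 5, replace=False)] = rng.standard_normal(5)
        y = D @ true_code + 0.01 * rng.standard_normal(100)  # sparse synthetic signal plus noise

        omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5, fit_intercept=False).fit(D, y)   # greedy solution
        lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, y)            # convex L1 solution

        for name, code in [("OMP", omp.coef_), ("Lasso", lasso.coef_)]:
            err = np.linalg.norm(D @ code - y)
            print(name, "nonzeros:", np.count_nonzero(code), "reconstruction error:", round(err, 4))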

  13. Optimal Control of Mechanical Systems

    Directory of Open Access Journals (Sweden)

    Vadim Azhmyakov

    2007-01-01

    Full Text Available In the present work, we consider a class of nonlinear optimal control problems, which can be called “optimal control problems in mechanics.” We deal with control systems whose dynamics can be described by a system of Euler-Lagrange or Hamilton equations. Using the variational structure of the solution of the corresponding boundary-value problems, we reduce the initial optimal control problem to an auxiliary problem of multiobjective programming. This technique makes it possible to apply some consistent numerical approximations of a multiobjective optimization problem to the initial optimal control problem. For solving the auxiliary problem, we propose an implementable numerical algorithm.

  14. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    . Zulfikar

    2012-10-01

    Full Text Available A new method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding. This method exposes arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting suitable ranges for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and the synthesis results are compared. The VHDL code that limits the range of integer values occupies less area than the one that does not. This VHDL coding method is suitable for multi-stage circuits.

  15. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar Zulfikar

    2015-05-01

    Full Text Available A new method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding. This method exposes arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting suitable ranges for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and the synthesis results are compared. The VHDL code that limits the range of integer values occupies less area than the one that does not. This VHDL coding method is suitable for multi-stage circuits.
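
    The area saving described in the two records above comes from the synthesizer inferring narrower registers when integer ranges are constrained. The helper below only illustrates how the required bit width shrinks with a tighter range; it is not part of the papers' VHDL.

        from math import ceil, log2

        def bits_needed(num_values):
            """Bits a synthesizer needs to infer for a register holding this many distinct values."""
            return max(1, ceil(log2(num_values)))

        # An unconstrained VHDL 'integer' is typically synthesized as 32 bits; a constrained
        # range such as 'integer range 0 to 255' lets the tool infer a much narrower register.
        print(bits_needed(2 ** 32))   # 32 bits for the unconstrained case
        print(bits_needed(256))       # 8 bits when the designer limits the range to 0..255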

  16. A New Method Of Gene Coding For A Genetic Algorithm Designed For Parametric Optimization

    Directory of Open Access Journals (Sweden)

    Radu BELEA

    2003-12-01

    Full Text Available In a parametric optimization problem the genes code the real parameters of the fitness function. There are two coding techniques, known as binary-coded genes and real-coded genes. The comparison between these two has been a controversial subject since the first papers on parametric optimization appeared. An objective analysis of the advantages and disadvantages of the two coding techniques is difficult to carry out while information in different formats is being compared. The present paper suggests a gene coding technique that uses the same format for both binary-coded genes and real-coded genes. After unifying the representation of the real parameters, the following criterion is applied: the differences between the two techniques are statistically measured by the effect of the genetic operators on some randomly generated individuals.
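
    For reference, the two standard representations being compared can be sketched as follows: a binary-coded gene is a fixed-length bit string decoded into a real parameter, while a real-coded gene is operated on directly. The interval bounds, bit length and mutation width below are arbitrary illustrative choices.

        import random

        def decode_binary_gene(bits, lo, hi):
            """Map a bit string to a real parameter in [lo, hi] (classical binary-coded gene)."""
            value = int(''.join(map(str, bits)), 2)
            return lo + (hi - lo) * value / (2 ** len(bits) - 1)

        def mutate_real_gene(x, lo, hi, sigma=0.1):
            """Gaussian mutation of a real-coded gene, clipped to the feasible interval."""
            return min(hi, max(lo, x + random.gauss(0.0, sigma)))

        bits = [random.randint(0, 1) for _ in range(16)]      # 16-bit binary-coded gene
        print(decode_binary_gene(bits, -5.0, 5.0))            # decoded real parameter in [-5, 5]
        print(mutate_real_gene(1.234, -5.0, 5.0))             # real-coded gene after mutation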

  17. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    Science.gov (United States)

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…

  18. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...

  19. Burnup calculation code system COMRAD96

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering a change of neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  20. Burnup calculation code system COMRAD96

    International Nuclear Information System (INIS)

    Suyama, Kenya; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu.

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering a change of neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  1. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    Code library for Arabic (and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's "...Detection, Affix Labeling, POS Tagging, and Dependency Parsing", presented at the Fourth Workshop on Statistical Parsing of Morphologically Rich Languages (SPMRL).

  2. SRAC95; general purpose neutronics code system

    Energy Technology Data Exchange (ETDEWEB)

    Okumura, Keisuke; Tsuchihashi, Keichiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-03-01

    SRAC is a general purpose neutronics code system applicable to core analyses of various types of reactors. Since the publication of JAERI-1302 for the revised SRAC in 1986, a number of additions and modifications have been made for nuclear data libraries and programs. Thus, the new version SRAC95 has been completed. The system consists of six kinds of nuclear data libraries (ENDF/B-IV, -V, -VI, JENDL-2, -3.1, -3.2) and five modular codes integrated into SRAC95: a collision probability calculation module (PIJ) for 16 types of lattice geometries, Sn transport calculation modules (ANISN, TWOTRAN), diffusion calculation modules (TUD, CITATION), and two optional codes for fuel assembly and core burn-up calculations (the newly developed ASMBURN and the revised COREBN). In this version, many new functions and data are implemented to support nuclear design studies of advanced reactors, especially for burn-up calculations. SRAC95 is available not only on conventional IBM-compatible computers but also on scalar or vector computers with the UNIX operating system. This report is the SRAC95 users manual which contains a general description, contents of revisions, input data requirements, detailed information on usage, sample input data and a list of available libraries. (author).

  3. Hardware Abstraction and Protocol Optimization for Coded Sensor Networks

    DEFF Research Database (Denmark)

    Nistor, Maricica; Roetter, Daniel Enrique Lucani; Barros, João

    2015-01-01

    The design of the communication protocols in wireless sensor networks (WSNs) often neglects several key characteristics of the sensor's hardware, while assuming that the number of transmitted bits is the dominating factor behind the system's energy consumption. A closer look at the hardware...... specifications of common sensors reveals, however, that other equally important culprits exist, such as the reception and processing energy. Hence, there is a need for a more complete hardware abstraction of a sensor node to reduce effectively the total energy consumption of the network by designing energy......-efficient protocols that use such an abstraction, as well as mechanisms to optimize a communication protocol in terms of energy consumption. The problem is modeled for different feedback-based techniques, where sensors are connected to a base station, either directly or through relays. We show that for four example...

  4. Scaling of Thermal-Hydraulic Phenomena and System Code Assessment

    International Nuclear Information System (INIS)

    Wolfert, K.

    2008-01-01

    In the last five decades large efforts have been undertaken to provide reliable thermal-hydraulic system codes for the analyses of transients and accidents in nuclear power plants. Many separate effects tests and integral system tests were carried out to establish a data base for code development and code validation. In this context the question has to be answered to what extent the results of down-scaled test facilities represent the thermal-hydraulic behaviour expected in a full-scale nuclear reactor under accidental conditions. Scaling principles, developed by many scientists and engineers, provide a scientific and technical basis and give a valuable orientation for the design of test facilities. However, it is impossible for a down-scaled facility to reproduce all physical phenomena in the correct temporal sequence and in the kind and strength of their occurrence. The designer needs to optimize a down-scaled facility for the processes of primary interest. This inevitably leads to scaling distortions of other, less important processes. Taking into account these weak points, a goal-oriented code validation strategy is required, based on the analyses of separate effects tests and integral system tests as well as transients that have occurred in full-scale nuclear reactors. The CSNI validation matrices are an excellent basis for fulfilling this task. Full-scale separate effects tests play an important role here.

  5. Topology optimized permanent magnet systems

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Bahl, Christian; Insinga, Andrea Roberto

    2017-01-01

    Topology optimization of permanent magnet systems consisting of permanent magnets, high permeability iron and air is presented. An implementation of topology optimization for magnetostatics is discussed and three examples are considered. The Halbach cylinder is topology optimized with iron...... and an increase of 15% in magnetic efficiency is shown. A topology optimized structure to concentrate a homogeneous field is shown to increase the magnitude of the field by 111%. Finally, a permanent magnet with alternating high and low field regions is topology optimized and a Λcool figure of merit of 0...

  6. Optimization of power system operation

    CERN Document Server

    Zhu, Jizhong

    2015-01-01

    This book applies the latest applications of new technologies to power system operation and analysis, including new and important areas that are not covered in the previous edition. Optimization of Power System Operation covers both traditional and modern technologies, including power flow analysis, steady-state security region analysis, security constrained economic dispatch, multi-area system economic dispatch, unit commitment, optimal power flow, smart grid operation, optimal load shed, optimal reconfiguration of distribution network, power system uncertainty analysis, power system sensitivity analysis, analytic hierarchical process, neural network, fuzzy theory, genetic algorithm, evolutionary programming, and particle swarm optimization, among others. New topics such as the wheeling model, multi-area wheeling, and the total transfer capability computation in multiple areas are also addressed. The new edition of this book continues to provide engineers and academics with a complete picture of the optimization of techn...

  7. Integrated burnup calculation code system SWAT

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Hirakawa, Naohiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Iwasaki, Tomohiko

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examination, transmutation of radioactive waste, and burnup credit problems. It enables us to analyze a burnup problem using a neutron spectrum that depends on the irradiation environment, by combining SRAC, which is the Japanese standard thermal reactor analysis code system, and ORIGEN2, which is a burnup code widely used all over the world. SWAT generates an effective cross-section library based on the results of SRAC and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has an original cross-section library based on JENDL-3.2 and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, users can use the latest data in SWAT calculations besides the effective cross sections prepared by SRAC. Users can also make an original ORIGEN2 library using the output file of SWAT. This report presents the concept and user's manual of SWAT. (author)

  8. SALT [System Analysis Language Translater]: A steady state and dynamic systems code

    International Nuclear Information System (INIS)

    Berry, G.; Geyer, H.

    1983-01-01

    SALT (System Analysis Language Translater) is a lumped parameter approach to system analysis which is totally modular. The modules are all precompiled and only the main program, which is generated by SALT, needs to be compiled for each unique system configuration. This is a departure from other lumped parameter codes where all models are written by MACROS and then compiled for each unique configuration, usually after all of the models are lumped together and sorted to eliminate undetermined variables. The SALT code contains a robust and sophisticated steady-state finder (non-linear equation solver), optimization capability and an enhanced GEAR integration scheme which makes use of sparsity and algebraic constraints. The SALT systems code has been used for various technologies. The code was originally developed for open-cycle magnetohydrodynamic (MHD) systems. It was easily extended to liquid metal MHD systems by simply adding the appropriate models and property libraries. Similarly, the model and property libraries were expanded to handle fuel cell systems, flue gas desulfurization systems, combined cycle gasification systems, fluidized bed combustion systems, ocean thermal energy conversion systems, geothermal systems, nuclear systems, and conventional coal-fired power plants. Obviously, the SALT systems code is extremely flexible to be able to handle all of these diverse systems. At present, the dynamic option has only been used for LMFBR nuclear power plants and geothermal power plants. However, it can easily be extended to other systems and can be used for analyzing control problems. 12 refs

  9. Coded aperture imaging using imperfect detector systems

    International Nuclear Information System (INIS)

    Byard, K.; Ramsden, D.

    1994-01-01

    The imaging properties of a gamma-ray telescope which employs a coded aperture in conjunction with a modular detection plane have been investigated. Gaps in the detection plane, which arise as a consequence of the design of the position sensitive detector used, produce artifacts in the deconvolved images which reduce the signal to noise ratio for the detection of point sources. The application of an iterative image processing algorithm is shown to restore the image quality to that expected from an ideal detector. The efficiency of image processing has enabled its subsequent application to a general coded aperture system in order to gain a significant improvement in the field of view without compromising the angular resolution. (orig.)

  10. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Young Jin; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2004-07-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures.

  11. Coupling the severe accident code SCDAP with the system thermal hydraulic code MARS

    International Nuclear Information System (INIS)

    Lee, Young Jin; Chung, Bub Dong

    2004-01-01

    MARS is a best-estimate system thermal hydraulics code with multi-dimensional modeling capability. One of the aims in MARS code development is to make it a multi-functional code system with the analysis capability to cover the entire accident spectrum. For this purpose, MARS code has been coupled with a number of other specialized codes such as CONTEMPT for containment analysis, and MASTER for 3-dimensional kinetics. And in this study, the SCDAP code has been coupled with MARS to endow the MARS code system with severe accident analysis capability. With the SCDAP, MARS code system now has acquired the capability to simulate such severe accident related phenomena as cladding oxidation, melting and slumping of fuel and reactor structures

  12. Adaptation of the Specific Affect Coding System (SPAFF

    Directory of Open Access Journals (Sweden)

    Tomaž Erzar

    2013-06-01

    Full Text Available The article describes the Slovenian adaptation of the Specific Affect Coding System (SPAFF which was developed by Gottman and colleagues (Gottman and Coan, 2007 for the purpose of examining emotional expression. We present a short history and problems of coding emotions, codes of the system, coding procedure, training of coders, and rules of accurate observing. Also presented are the experiences with the new system, arguments for adaptation of codes to therapeutic processes and suggestions for further improvements.

  13. Embedded Systems Design: Optimization Challenges

    DEFF Research Database (Denmark)

    Pop, Paul

    2005-01-01

    -to-market, and reduce development and manufacturing costs. In this paper, the author introduces several embedded systems design problems, and shows how they can be formulated as optimization problems. Solving such challenging design optimization problems are the key to the success of the embedded systems design...... of designing such systems is becoming increasingly important and difficult at the same time. New automated design optimization techniques are needed, which are able to: successfully manage the complexity of embedded systems, meet the constraints imposed by the application domain, shorten the time...

  14. An Optimal Lower Eigenvalue System

    Directory of Open Access Journals (Sweden)

    Yingfan Liu

    2011-01-01

    Full Text Available An optimal lower eigenvalue system is studied, and main theorems including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability are obtained. As applications, solvability results for some von-Neumann-type input-output inequalities, growth and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, are also obtained.

  15. GPU Optimizations for a Production Molecular Docking Code*

    Science.gov (United States)

    Landaverde, Raphael; Herbordt, Martin C.

    2015-01-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users. PMID:26594667

  16. GPU Optimizations for a Production Molecular Docking Code.

    Science.gov (United States)

    Landaverde, Raphael; Herbordt, Martin C

    2014-09-01

    Modeling molecular docking is critical to both understanding life processes and designing new drugs. In previous work we created the first published GPU-accelerated docking code (PIPER) which achieved a roughly 5× speed-up over a contemporaneous 4-core CPU. Advances in GPU architecture and in the CPU code, however, have since reduced this relative performance by a factor of 10. In this paper we describe the upgrade of GPU PIPER. This required an entire rewrite, including algorithm changes and moving most remaining non-accelerated CPU code onto the GPU. The result is a 7× improvement in GPU performance and a 3.3× speedup over the CPU-only code. We find that this difference in time is almost entirely due to the difference in run times of the 3D FFT library functions on CPU (MKL) and GPU (cuFFT), respectively. The GPU code has been integrated into the ClusPro docking server which has over 4000 active users.

  17. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the

  18. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  19. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...
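
    The trade-off mentioned above can be explored with a small simulation that counts how many randomly coded packets are needed before a generation of size g becomes decodable; here the field size is fixed to 2 for simplicity, so the result only illustrates the kind of overhead the coding parameters control, not the paper's analysis.

        import random

        def receptions_to_decode(g, trials=2000, seed=1):
            """Average number of random GF(2) coded packets needed to reach full rank g."""
            rng = random.Random(seed)
            total = 0
            for _ in range(trials):
                basis = [0] * g                         # xor-basis indexed by leading bit position
                rank = count = 0
                while rank < g:
                    x = rng.getrandbits(g)              # random coding vector over GF(2)
                    count += 1
                    for i in reversed(range(g)):
                        if not (x >> i) & 1:
                            continue
                        if basis[i] == 0:               # innovative packet: extend the basis
                            basis[i] = x
                            rank += 1
                            break
                        x ^= basis[i]                   # reduce against an existing basis vector
                total += count
            return total / trials

        for g in (8, 16, 32):
            print(g, round(receptions_to_decode(g) - g, 3))   # expected overhead beyond g packets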

  20. Systemization of burnup sensitivity analysis code. 2

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2005-02-01

    Towards the practical use of fast reactors, improving the prediction accuracy of neutronic properties in LMFBR cores is a very important subject, from the viewpoint of improving plant efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of criticality experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss and breeding ratio. For this purpose, it is desirable to improve the prediction accuracy of burnup characteristics using data widely obtained in actual cores, such as the experimental fast reactor 'JOYO'. Analysis of burnup characteristics is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the large burden on users due to the complexity of burnup sensitivity theory and the limitations of the system. It is also desirable to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reasons: the computational sequence may be changed for each item being analyzed, or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized with component blocks of functionality that can be divided or constructed on occasion. For

  1. On Optimal Policies for Network-Coded Cooperation

    DEFF Research Database (Denmark)

    Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Pahlevani, Peyman

    2015-01-01

    's Raspberry Pi testbed and compared with random linear network coding (RLNC) broadcast in terms of completion time, total number of required transmissions, and percentage of delivered generations. Our measurements show that enabling cooperation only among pairs of devices can decrease the completion time...

  2. Optimizing electrical distribution systems

    International Nuclear Information System (INIS)

    Scott, W.G.

    1990-01-01

    Electrical utility distribution systems are in the middle of an unprecedented technological revolution in planning, design, maintenance and operation. The prime movers of the revolution are the major economic shifts that affect decision making. The major economic influence on the revolution is the cost of losses (technical and nontechnical). The vehicle of the revolution is the computer, which enables decision makers to examine alternatives in greater depth and detail than their predecessors could. The more important elements of the technological revolution are: system planning, computers, load forecasting, analytical systems (primary systems, transformers and secondary systems), system losses and coming technology. The paper is directed towards the rather unique problems encountered by engineers of utilities in developing countries - problems that are being solved through high technology, such as the recent World Bank-financed engineering computer system for Sri Lanka. This system includes a DEC computer, digitizer, plotter and engineering software to model the distribution system via a digitizer, analyse the system and plot single-line diagrams. (author). 1 ref., 4 tabs., 6 figs

  3. Transmission over UWB channels with OFDM system using LDPC coding

    Science.gov (United States)

    Dziwoki, Grzegorz; Kucharczyk, Marcin; Sulek, Wojciech

    2009-06-01

    A hostile wireless environment requires the use of sophisticated signal processing methods. The paper concerns Ultra Wideband (UWB) transmission over Personal Area Networks (PAN), including the MB-OFDM specification of the physical layer. In the presented work the transmission system with OFDM modulation was combined with an LDPC encoder/decoder. Additionally, the frame and bit error rates (FER and BER) of the system were decreased using results from the LDPC decoder in a kind of turbo equalization algorithm for better channel estimation. A computational block using an evolutionary strategy, from the genetic algorithms family, was also used in the presented system. It was placed after the SPA (Sum-Product Algorithm) decoder and is conditionally turned on in the decoding process. The result is increased effectiveness of the whole system, especially a lower FER. The system was tested with two types of LDPC codes, depending on the type of parity check matrices: randomly generated, and deterministically constructed matrices optimized for a practical decoder architecture implemented in an FPGA device.

  4. Code system BCG for gamma-ray skyshine calculation

    International Nuclear Information System (INIS)

    Ryufuku, Hiroshi; Numakunai, Takao; Miyasaka, Shun-ichi; Minami, Kazuyoshi.

    1979-03-01

    A code system BCG has been developed for conveniently and efficiently calculating gamma-ray skyshine doses using the transport calculation codes ANISN and DOT and the point-kernel calculation codes G-33 and SPAN. To simplify the input to the system, the input forms for these codes are unified, twelve geometric patterns are introduced to define material regions, and standard data are available as a library. To treat complex arrangements of source and shield, it is further possible to use the codes successively, so that the results from one code may be used as input data to the same or another code. (author)

  5. Optimization of nuclear safety systems

    International Nuclear Information System (INIS)

    Beninson, D.; Gonzalez, A.J.

    1981-01-01

    The paper presents an approach for selecting the level of ambition of nuclear safety by a process of optimization based on cost-benefit considerations. Optimization has been incorporated as a requirement for radiation protection, to keep doses ''as low as reasonably achievable''. In radiation protection, optimization takes account of the costs of protection and the costs of the detriment, minimizing the sum of both. Optimization of a nuclear safety system could conceptually treat similarly the cost of potential damages from nuclear accidents and the cost associated with achieving a given level of safety. Within the above framework a method of optimizing the design of nuclear safety systems is presented, and a simple case of redundancy by output voting techniques is given. (author)

  6. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, which are to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of sets of binary sequences, the target is to form them based on the minimax criterion, by which a sequence is considered optimal according to the intended application. In the course of the research, optimal sequences of order up to 52 were obtained and an analysis of their Run Length Encoding was carried out. The analysis showed regularities in the distribution of the number of runs of different lengths in the codes that are optimal on the chosen criterion, which would make it possible to optimize the search process for such codes in the future.
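
    The minimax criterion referred to above is the peak magnitude of the off-peak (aperiodic) autocorrelation of a ±1 sequence. A small scoring helper, shown on the length-13 Barker code as a familiar example, is sketched below; it illustrates the criterion only, not the search procedure of the article.

        import numpy as np

        def peak_sidelobe(seq):
            """Maximum |aperiodic autocorrelation| over all nonzero shifts of a +/-1 sequence."""
            s = np.asarray(seq, dtype=float)
            full = np.correlate(s, s, mode='full')        # aperiodic autocorrelation
            off_peak = np.delete(full, len(s) - 1)        # drop the zero-shift peak
            return np.abs(off_peak).max()

        barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
        print(peak_sidelobe(barker13))                    # 1.0 — the ideal minimax value for length 13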

  7. The Structural Optimization System CAOS

    DEFF Research Database (Denmark)

    Rasmussen, John

    1990-01-01

    CAOS is a system for structural shape optimization. It is closely integrated in a Computer Aided Design environment and controlled entirely from the CAD-system AutoCAD. The mathematical foundation of the system is briefly presented and a description of the CAD-integration strategy is given together...

  8. An Optimal Dissipative Encoder for the Toric Code

    Science.gov (United States)

    2014-01-16

    The corresponding Liouvillian is made up of four local Lindblad operators. For a qubit lattice of size L × L, we show that this process prepares encoded... ...associated correction operations. For the toric code, an encoding procedure of this form was given [7]. It involves active error correction operations

  9. Optimized Progressive Coding of Stereo Images Using Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Torsten Palfner

    2003-06-01

    Full Text Available In this paper, a compression algorithm is introduced which allows the efficient storage and transmission of stereo images. The coder uses a block-based disparity estimation/compensation technique to decorrelate the image pair. To code both images progressively, we have adapted the well-known SPIHT coder to stereo images. The results presented in this paper are better than any other results published so far.

  10. System Design Considerations In Bar-Code Laser Scanning

    Science.gov (United States)

    Barkan, Eric; Swartz, Jerome

    1984-08-01

    The unified transfer function approach to the design of laser barcode scanner signal acquisition hardware is considered. The treatment of seemingly disparate system areas such as the optical train, the scanning spot, the electrical filter circuits, the effects of noise, and printing errors is presented using linear systems theory. Such important issues as determination of depth of modulation, filter specification, tolerancing of optical components, and optimization of system performance in the presence of noise are discussed. The concept of effective spot size to allow for impact of optical system and analog processing circuitry upon depth of modulation is introduced. Considerations are limited primarily to Gaussian spot profiles, but also apply to more general cases. Attention is paid to realistic bar-code symbol models and to implications with respect to printing tolerances.

  11. Degaussing System Design Optimization

    NARCIS (Netherlands)

    Bekers, D.J.; Lepelaars, E.S.A.M.

    2013-01-01

    Steel ships with a magnetic signature requirement are equipped with a degaussing system to reduce their perceptibility for magnetic influence mines. To be able to reduce the magnetic signature accurately, a proper distribution of coils over the ship is essential. Finding the best distribution of

  12. A code system for ADS transmutation studies

    International Nuclear Information System (INIS)

    Brolly, A.; Vertes, P.

    2001-01-01

    An accelerator driven reactor physical system can be divided into two different subsystems. One is the neutron source, the other is the subcritical reactor. Similarly, the modelling of such a system is also split into two parts. The first step is the determination of the spatial distribution and angle-energy spectrum of the neutron source in the target region; the second is the calculation of the neutron flux, which is responsible for the transmutation process in the subcritical system. Accelerators can make neutrons from high energy protons by spallation, or photoneutrons from accelerated electrons by Bremsstrahlung (e-n converter). The Monte Carlo approach is the only way of modelling such processes, and it might be extended to the whole subcritical system as well. However, a subcritical reactor may be large, it may contain thermal regions, and the lifetime of neutrons may be long. Therefore a comprehensive Monte Carlo modelling of such a system is a very time consuming computational process. It is also unprofitable when applied to system optimization, which requires a comparative study of a large number of system variants. An appropriate method of deterministic transport calculation may adequately satisfy these requirements. Thus, we have built up a coupled calculational model for ADS to be used for transmutation of nuclear waste, which we further refer to as the M-c-T system. A flow chart is shown in the figure. (author)

  13. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    International Nuclear Information System (INIS)

    Mori, Takamasa; Nakagawa, Masayuki; Kaneko, Kunio.

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author)

  14. Neutron cross section library production code system for continuous energy Monte Carlo code MVP. LICEM

    Energy Technology Data Exchange (ETDEWEB)

    Mori, Takamasa; Nakagawa, Masayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kaneko, Kunio

    1996-05-01

    A code system has been developed to produce neutron cross section libraries for the MVP continuous energy Monte Carlo code from an evaluated nuclear data library in the ENDF format. The code system consists of 9 computer codes, and can process nuclear data in the latest ENDF-6 format. By using the present system, MVP neutron cross section libraries for important nuclides in reactor core analyses, shielding and fusion neutronics calculations have been prepared from JENDL-3.1, JENDL-3.2, JENDL-FUSION file and ENDF/B-VI data bases. This report describes the format of MVP neutron cross section library, the details of each code in the code system and how to use them. (author).

  15. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    Science.gov (United States)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage and difference from the existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient visualization system for data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from the spectrum extraction for several stars which are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. In the current version, the optimal extraction of 2D spectra with an aperture and the Gaussian function as PSF (point spread function) is proposed. In the future, the package will be supplemented with the possibility to build a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show results of the extraction of several interesting spectra of objects from different telescopes.
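
    A stripped-down illustration of profile-weighted ("optimal") extraction of a single detector column with a Gaussian spatial PSF, in the spirit of the description above, is given below. The noise model, profile parameters and synthetic data are placeholders, not SPEXTRA's algorithm.

        import numpy as np

        def optimal_extract_column(column, center, sigma, variance):
            """Profile-weighted extraction of one detector column with a Gaussian spatial PSF."""
            y = np.arange(column.size)
            profile = np.exp(-0.5 * ((y - center) / sigma) ** 2)
            profile /= profile.sum()                              # normalized spatial profile
            weights = profile / variance
            return np.sum(weights * column) / np.sum(weights * profile)   # Horne-style optimal estimate

        # Synthetic column: a Gaussian source plus noise (placeholder values).
        rng = np.random.default_rng(0)
        y = np.arange(50)
        column = 100 * np.exp(-0.5 * ((y - 25) / 2.0) ** 2) + rng.normal(0, 1, 50)
        print(optimal_extract_column(column, center=25, sigma=2.0, variance=np.ones(50)))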

  16. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    channel transmission. The optimization of linear predictive coding for such lossy channel behaviour is not well understood in the literature. We review basics of source and channel coding, differential pulse code modulation (DPCM), state-space models, minimum mean squared error (MMSE) estimation...... of the source signal. The source process and source encoder are formulated as a state-space model, enabling the use of Kalman filtering for decoding the source signal. The optimization algorithm is a greedy approach that designs the filter coefficients of a generalized DPCM encoder. The objective...
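
    A bare-bones first-order DPCM loop makes the coding structure discussed above concrete; the predictor coefficient and uniform quantizer step below are arbitrary choices, not the optimized design of the thesis.

        import numpy as np

        def dpcm_encode(x, a=0.9, step=0.1):
            """First-order DPCM: quantize the prediction error e[n] = x[n] - a*xhat[n-1]."""
            xhat_prev, indices = 0.0, []
            for sample in x:
                e = sample - a * xhat_prev
                q = int(round(e / step))                 # uniform quantizer index
                indices.append(q)
                xhat_prev = a * xhat_prev + q * step     # encoder tracks the decoder's reconstruction
            return indices

        def dpcm_decode(indices, a=0.9, step=0.1):
            xhat, out = 0.0, []
            for q in indices:
                xhat = a * xhat + q * step
                out.append(xhat)
            return np.array(out)

        x = np.sin(np.linspace(0, 4 * np.pi, 200))              # smooth AR-like test source
        rec = dpcm_decode(dpcm_encode(x))
        print(round(float(np.max(np.abs(rec - x))), 3))         # small reconstruction error (quantization noise shaped by the loop)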

  17. Truss systems and shape optimization

    Science.gov (United States)

    Pricop, Mihai Victor; Bunea, Marian; Nedelcu, Roxana

    2017-07-01

Structure optimization is an important topic because of its benefits and wide range of applicability, from civil engineering to the aerospace and automotive industries, contributing to a greener industry and life. Truss finite elements are still in use in many research/industrial codes for their simple stiffness matrix, and they naturally match the requirements of cellular materials, especially considering various 3D printing technologies. Optimality Criteria combined with Solid Isotropic Material with Penalization is the optimization method of choice, particularized for truss systems. Globally locked structures are obtained using locally locked lattice organization, corresponding to structured or unstructured meshes. Post-processing is important for downstream application of the method, to make a faster link to CAD systems. To export the optimal structure to CATIA, a CATScript file is automatically generated. Results, findings and conclusions are given for two- and three-dimensional cases.
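
    The record names Optimality Criteria (OC) with SIMP-style penalization but gives no formulas. As a hedged sketch of the classical OC design update used by this family of methods (not the authors' implementation), the following Python function rescales sizing variables from compliance and volume sensitivities with a bisection on the Lagrange multiplier; the move limit, bounds and toy sensitivities are conventional illustrative choices.

        import numpy as np

        def oc_update(x, dc, dv, vol_frac, move=0.2, floor=1e-9):
            """One Optimality Criteria update for SIMP-type sizing variables x.

            x        : current design variables (e.g., relative member areas)
            dc       : compliance sensitivities (expected <= 0)
            dv       : volume sensitivities (> 0)
            vol_frac : target volume fraction
            """
            l1, l2 = 0.0, 1e9
            while (l2 - l1) / (l1 + l2 + 1e-12) > 1e-4:        # bisection on the Lagrange multiplier
                lmid = 0.5 * (l1 + l2)
                factor = np.sqrt(np.maximum(-dc / (dv * lmid), floor))
                x_new = np.clip(x * factor, np.maximum(0.0, x - move), np.minimum(1.0, x + move))
                x_new = np.maximum(x_new, 1e-3)                # lower bound keeps the stiffness non-singular
                if x_new.mean() > vol_frac:
                    l1 = lmid
                else:
                    l2 = lmid
            return x_new

        # Illustrative usage with made-up sensitivities (a real run would take them from a truss FE solve)
        x = np.full(200, 0.5)
        dc = -np.linspace(1.0, 2.0, 200)
        dv = np.ones(200)
        x = oc_update(x, dc, dv, vol_frac=0.4)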

  18. Optical code division multiple access secure communications systems with rapid reconfigurable polarization shift key user code

    Science.gov (United States)

    Gao, Kaiqiang; Wu, Chongqing; Sheng, Xinzhi; Shang, Chao; Liu, Lanlan; Wang, Jian

    2015-09-01

An optical code division multiple access (OCDMA) secure communications system scheme with a rapidly reconfigurable polarization shift keying (Pol-SK) bipolar user code is proposed and demonstrated. Compared to fixed-code OCDMA, constantly changing the user code greatly improves the anti-eavesdropping performance. A Pol-SK OCDMA experiment with a 10 Gchip/s user code and a 1.25 Gb/s user payload has been realized, which indicates that this scheme has good tolerance and could be easily implemented.

  19. Optimization theory for large systems

    CERN Document Server

    Lasdon, Leon S

    2002-01-01

    Important text examines most significant algorithms for optimizing large systems and clarifying relations between optimization procedures. Much data appear as charts and graphs and will be highly valuable to readers in selecting a method and estimating computer time and cost in problem-solving. Initial chapter on linear and nonlinear programming presents all necessary background for subjects covered in rest of book. Second chapter illustrates how large-scale mathematical programs arise from real-world problems. Appendixes. List of Symbols.

  20. Distributed optimization system and method

    Science.gov (United States)

    Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.

    2003-06-10

    A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.

  1. A simple numerical coding system for clinical electrocardiography

    NARCIS (Netherlands)

    Robles de Medina, E.O.; Meijler, F.L.

    1974-01-01

    A simple numerical coding system for clinical electrocardiography has been developed. This system enables the storage in coded form of the ECG analysis. The code stored on a digital magnetic tape can be used for a computer print-out of the analysis, while the information can be retrieved at any time

  2. Development and verification of a coupled code system RETRAN-MASTER-TORC

    International Nuclear Information System (INIS)

    Cho, J.Y.; Song, J.S.; Joo, H.G.; Zee, S.Q.

    2004-01-01

Recently, coupled thermal-hydraulics (T-H) and three-dimensional kinetics codes have been widely used for best-estimate simulations such as the main steam line break (MSLB) and locked rotor problems. This work develops and verifies one such code by coupling the system T-H code RETRAN, the 3-D kinetics code MASTER and the sub-channel analysis code TORC. The MASTER code has already been applied to such simulations after coupling with the MARS or RETRAN-3D multi-dimensional system T-H codes. The MASTER code contains the sub-channel analysis code COBRA-III C/P, and the coupled systems MARS-MASTER-COBRA and RETRAN-MASTER-COBRA had already been developed and verified. Building on these previous studies, a new coupled system, RETRAN-MASTER-TORC, is developed and verified as the standard best-estimate simulation code package in Korea. The TORC code has already been applied to the thermal-hydraulic design of several ABB/CE type plants and the Korean Standard Nuclear Power Plants (KSNP), which justifies the choice of TORC rather than COBRA. Because the coupling between the RETRAN and MASTER codes is already established and verified, this work reduces to coupling the TORC sub-channel T-H code with the MASTER neutronics code. TORC is a standalone code that solves the T-H equations for a given core problem, reading the input file and finally printing the converged solutions. In the coupled system, however, because TORC receives the pin power distributions from the neutronics code MASTER and transfers the T-H results back to MASTER iteratively, TORC needs to be controlled by the MASTER code and does not need to solve the given problem completely at each iteration step. For this reason, the coupling of the TORC code with the MASTER code requires several modifications in the I/O treatment, flow iteration and calculation logic. The next section of this paper describes the modifications in the TORC code, followed by the TORC control logic of the MASTER code.

  3. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto

    2017-01-01

The nuclear reload of a Pressurized Water Reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, it can no longer keep the nuclear power plant operating at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation at minimum cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy constraints of symmetry and safety. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem: a core of 121 fuel elements has approximately 10^13 possible combinations and around 10^11 local optima (about 1% of the search space). With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed for the optimization of the nuclear reactor reload problem. ACO has been used successfully in combinatorial problems, and it is expected that ACO-GENES will provide a robust optimization system, since in addition to the ACO optimization it allows the use of important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and compared with the actual results of the cycle. (author)

  4. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto, E-mail: alan@lmp.ufrj.br, E-mail: andressa@lmp.ufrj.br, E-mail: schirru@lmp.ufrj.br, E-mail: ffreire@eletronuclear.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Eletrobras Termonuclear S.A. (ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil)

    2017-11-01

The nuclear reload of a Pressurized Water Reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, it can no longer keep the nuclear power plant operating at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation at minimum cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy constraints of symmetry and safety. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem: a core of 121 fuel elements has approximately 10^13 possible combinations and around 10^11 local optima (about 1% of the search space). With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed for the optimization of the nuclear reactor reload problem. ACO has been used successfully in combinatorial problems, and it is expected that ACO-GENES will provide a robust optimization system, since in addition to the ACO optimization it allows the use of important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and compared with the actual results of the cycle. (author)
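
    ACO-GENES itself is not published in this record. As a generic, hedged sketch of how an ant-colony construction of a loading pattern (assigning fuel assemblies to core positions) with pheromone evaporation and reinforcement can look, consider the following; the evaluate placeholder stands in for the reactor-physics figure of merit that GENES or a licensed code would provide, and all parameters and the dummy objective are illustrative.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pos = 121                        # fuel assemblies / core positions, as in the Angra 1 example above
        n_ants, n_iter, rho = 20, 30, 0.1
        tau = np.ones((n_pos, n_pos))      # pheromone for "assembly i placed at position j"

        def evaluate(pattern):
            """Placeholder figure of merit; a real run would call the core physics code."""
            return -float(np.abs(pattern - np.arange(n_pos)).sum())   # dummy objective only

        best, best_score = None, -np.inf
        for _ in range(n_iter):
            for _ in range(n_ants):
                free = list(range(n_pos))
                pattern = np.empty(n_pos, dtype=int)
                for pos in range(n_pos):
                    p = tau[free, pos]
                    p = p / p.sum()                        # selection probability from pheromone
                    choice = rng.choice(len(free), p=p)
                    pattern[pos] = free.pop(choice)
                score = evaluate(pattern)
                if score > best_score:
                    best, best_score = pattern.copy(), score
            tau *= (1.0 - rho)                             # evaporation
            tau[best, np.arange(n_pos)] += rho             # reinforce the best pattern found so far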

  5. Cat Codes with Optimal Decoherence Suppression for a Lossy Bosonic Channel

    Science.gov (United States)

    Li, Linshu; Zou, Chang-Ling; Albert, Victor V.; Muralidharan, Sreraman; Girvin, S. M.; Jiang, Liang

    2017-07-01

    We investigate cat codes that can correct multiple excitation losses and identify two types of logical errors: bit-flip errors due to excessive excitation loss and dephasing errors due to quantum backaction from the environment. We show that selected choices of logical subspace and coherent amplitude significantly reduce dephasing errors. The trade-off between the two major errors enables optimized performance of cat codes in terms of minimized decoherence. With high coupling efficiency, we show that one-way quantum repeaters with cat codes feature a boosted secure communication rate per mode when compared to conventional encoding schemes, showcasing the promising potential of quantum information processing with continuous variable quantum codes.
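
    For reference only (taken from the broader cat-code literature rather than from this record), a common four-component cat encoding against single excitation loss uses logical states built from superpositions of four coherent states:

        \[
          |0_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle + |i\alpha\rangle + |{-i\alpha}\rangle ,
          \qquad
          |1_L\rangle \propto |\alpha\rangle + |{-\alpha}\rangle - |i\alpha\rangle - |{-i\alpha}\rangle ,
        \]

    with photon numbers congruent to 0 and 2 (mod 4) respectively, so that a single photon loss maps the code space onto a distinguishable error space; the coherent amplitude alpha controls the trade-off between loss-induced bit flips and backaction dephasing discussed in the abstract.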

  6. Systems' optimization: Achieving the balance

    Science.gov (United States)

    Kraus, Peter

    1994-04-01

Fuel cells for stationary power generation applications are being pursued on a large scale worldwide in an effort to achieve commercialization before the turn of the century. Some aspects of system optimization are discussed, illustrating the influence of basic system design possibilities. Design variants investigated include alternatives for anode and cathode gas supply and gas recycling, methods to achieve self-sufficiency in water for the reforming of natural gas, and recovery of unspent fuel from the anode exhaust. Especially in small systems for decentralized applications, e.g., industrial cogeneration, system simplification is decisive to bring down the capital cost of the balance-of-plant. Trade-offs between system complexity and efficiency are possible to optimize the economy. In large plants, high-temperature fuel cells can be supplemented with bottoming cycles for best fuel utilization. Both gas turbines and steam turbines can be evaluated; the choice has a strong influence on the system design pressures and, therefore, on system cost.

  7. SCALE Code System 6.2.1

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  8. SCALE Code System 6.2.2

    Energy Technology Data Exchange (ETDEWEB)

    Rearden, Bradley T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jessee, Matthew Anderson [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-05-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  9. Next generation Zero-Code control system UI

    CERN Multimedia

    CERN. Geneva

    2017-01-01

Developing ergonomic user interfaces for control systems is challenging, especially during machine upgrade and commissioning where several small changes may suddenly be required. Zero-code systems, such as *Inspector*, provide agile features for creating and maintaining control system interfaces. Moreover, these next generation Zero-code systems bring simplicity and uniformity and break the boundaries between Users and Developers. In this talk we present *Inspector*, a CERN made Zero-code application development system, and we introduce the major differences and advantages of using Zero-code control systems to develop operational UIs.

  10. Optimal coding of vectorcardiographic sequences using spatial prediction.

    Science.gov (United States)

    Augustyniak, Piotr

    2007-05-01

This paper discusses the principles, implementation details, and advantages of a sequence coding algorithm applied to the compression of vectorcardiograms (VCG). The main novelty of the proposed method is the automatic management of distortion distribution controlled by the local signal contents in both technical and medical aspects. As in clinical practice, the VCG loops representing P, QRS, and T waves in three-dimensional (3-D) space are considered here as three simultaneous sequences of objects. Because of the similarity of neighboring loops, encoding the values of the prediction error significantly reduces the data set volume. The residual values are de-correlated with the discrete cosine transform (DCT) and truncated at a certain energy threshold. The presented method is based on the irregular temporal distribution of medical data in the signal and takes advantage of a variable sampling frequency for automatically detected VCG loops. The features of the proposed algorithm are confirmed by the results of a numerical experiment carried out for a wide range of real records. The average data reduction ratio reaches a value of 8.15, while the percent root-mean-square difference (PRD) distortion ratio for the most important sections of the signal does not exceed 1.1%.
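
    The two ingredients the abstract relies on, a loop-to-loop prediction residual compressed by a DCT with an energy threshold and the PRD distortion measure, can be sketched as follows. This is a generic illustration with made-up loop data and thresholds, not the author's algorithm or parameter choices.

        import numpy as np
        from scipy.fft import dct, idct

        def compress_residual(prev_loop, curr_loop, energy_keep=0.999):
            """DCT-code the prediction residual between consecutive VCG loops."""
            residual = curr_loop - prev_loop                  # loop-to-loop prediction error
            c = dct(residual, norm='ortho')
            order = np.argsort(np.abs(c))[::-1]
            energy = np.cumsum(c[order] ** 2) / np.sum(c ** 2)
            keep = order[: np.searchsorted(energy, energy_keep) + 1]
            c_trunc = np.zeros_like(c)
            c_trunc[keep] = c[keep]                           # retain coefficients up to the energy threshold
            return c_trunc

        def prd(original, reconstructed):
            """Percent root-mean-square difference."""
            return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2) / np.sum(original ** 2))

        # Toy loops: two similar synthetic waveforms standing in for consecutive VCG loops
        t = np.linspace(0, 1, 256)
        prev_loop = np.sin(2 * np.pi * 3 * t)
        curr_loop = prev_loop + 0.05 * np.sin(2 * np.pi * 17 * t)
        coeffs = compress_residual(prev_loop, curr_loop)
        recon = prev_loop + idct(coeffs, norm='ortho')
        print(prd(curr_loop, recon))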

  11. Variable-length code construction for incoherent optical CDMA systems

    Science.gov (United States)

    Lin, Jen-Yung; Jhou, Jhih-Syue; Wen, Jyh-Horng

    2007-04-01

The purpose of this study is to investigate multirate transmission in fiber-optic code-division multiple-access (CDMA) networks. In this article, we propose a variable-length code construction for any existing optical orthogonal code to implement a multirate optical CDMA system (called the multirate code system). For comparison, a multirate system where the lower-rate user sends each symbol twice is implemented and is called the repeat code system. The repetition as an error-detection code in an ARQ scheme in the repeat code system is also investigated. Moreover, a parallel approach for optical CDMA systems, which was proposed by Marić et al., is also compared with the other systems proposed in this study. Theoretical analysis shows that the bit error probability of the proposed multirate code system is smaller than that of the other systems, especially when the number of lower-rate users is large. Moreover, if there is at least one lower-rate user in the system, the multirate code system accommodates more users than the other systems when the error probability of the system is set below 10^-9.

  12. The role of stochasticity in an information-optimal neural population code

    International Nuclear Information System (INIS)

    Stocks, N G; Nikitin, A P; McDonnell, M D; Morse, R P

    2009-01-01

In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each display Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
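
    A hedged numerical sketch of the quantity being optimized, the Shannon mutual information between a common input and the pooled output of N noisy McCulloch-Pitts threshold units, estimated with a simple histogram method, is given below; the thresholds, noise model and sample sizes are illustrative and not taken from the paper.

        import numpy as np

        def mutual_information_bits(x, y, bins=32):
            """Histogram estimate of I(X;Y) in bits."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

        rng = np.random.default_rng(0)
        n_neurons, n_samples = 8, 200_000
        signal = rng.normal(size=n_samples)                       # common input to the population
        thresholds = np.linspace(-1.0, 1.0, n_neurons)[:, None]   # heterogeneous thresholds (illustrative)

        for noise_sd in (0.1, 0.5, 1.0, 2.0):
            noise = rng.normal(scale=noise_sd, size=(n_neurons, n_samples))
            spikes = (signal + noise > thresholds).astype(int)    # McCulloch-Pitts threshold units
            pooled = spikes.sum(axis=0)                           # pooled (summed) population output
            print(noise_sd, mutual_information_bits(signal, pooled))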

  13. Communication Systems Simulator with Error Correcting Codes Using MATLAB

    Science.gov (United States)

    Gomez, C.; Gonzalez, J. E.; Pardo, J. M.

    2003-01-01

    In this work, the characteristics of a simulator for channel coding techniques used in communication systems, are described. This software has been designed for engineering students in order to facilitate the understanding of how the error correcting codes work. To help students understand easily the concepts related to these kinds of codes, a…

  14. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  15. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... is applied on receive, contrast or resolution can be traded in for range sidelobe levels down to -86 dB. The digital filter is designed to efficiently use the available bandwidth and at the same time to be insensitive to the transducer's impulse response. For evaluation of the method, simulations were...... performed with the program Field II. A commercial scanner (B-K Medical 3535) was modified and interfaced to an arbitrary function generator along with an RF power amplifier (Ritec). Hydrophone measurements in water were done to establish excitation voltage and corresponding intensity levels (I-sptp and I...
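
    The record's predistorted excitation and custom compression filter are not reproduced here. As a generic illustration of the underlying idea, a linear FM chirp compressed on receive by a time-reversed replica weighted with a Dolph-Chebyshev window to suppress range sidelobes, the following sketch uses SciPy; the sampling rate, pulse parameters and 80 dB design attenuation are assumptions made for the example only.

        import numpy as np
        from scipy.signal.windows import chebwin
        from scipy.signal import fftconvolve

        fs = 100e6                                   # sampling rate (illustrative)
        T, f0, B = 20e-6, 4e6, 4e6                   # pulse length, centre frequency, bandwidth (illustrative)
        t = np.arange(0, T, 1 / fs)
        chirp = np.sin(2 * np.pi * (f0 - B / 2) * t + np.pi * (B / T) * t ** 2)   # linear FM excitation

        # Mismatched compression filter: time-reversed chirp weighted by a Chebyshev window,
        # trading a little axial resolution for lower range sidelobes.
        weights = chebwin(len(chirp), at=80)         # 80 dB design sidelobe attenuation
        comp_filter = weights * chirp[::-1]

        echo = np.concatenate([np.zeros(500), chirp, np.zeros(500)])   # toy point-target echo
        compressed = fftconvolve(echo, comp_filter, mode='same')
        envelope_db = 20 * np.log10(np.abs(compressed) / np.abs(compressed).max() + 1e-12)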

  16. Low Complexity Receiver Structures for Space-Time Coded Multiple-Access Systems

    Directory of Open Access Journals (Sweden)

    Sudharman K. Jayaweera

    2002-03-01

Multiuser detection for space-time coded synchronous multiple-access systems in the presence of independent Rayleigh fading is considered. Under the assumption of quasi-static fading, it is shown that optimal (full diversity achieving) space-time codes designed for single-user channels can still provide full diversity in the multiuser channel. The joint optimal maximum likelihood multiuser detector, which can be implemented with a Viterbi-type algorithm, is derived for such space-time coded systems. Low complexity, partitioned detector structures that separate the multiuser detection and space-time decoding into two stages are also developed. Both linear and nonlinear multiuser detection schemes are considered for the first stage of these partitioned space-time multiuser receivers. Simulation results show that these latter methods achieve performance competitive with the single-user bound for space-time coded systems.

  17. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T.; Rollstin, J.A.; Chanin, D.I.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs

  18. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  19. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Rollstin, J.A.; Chanin, D.I.; Jow, H.N.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management

  20. Nuclear modules of ITER tokamak systems code

    International Nuclear Information System (INIS)

    Gohar, Y.; Baker, C.; Brooks, J.

    1987-10-01

Nuclear modules were developed to model various reactor components in the ITER systems code. Several design options and cost algorithms are included for each component. The first wall, blanket and shield modules calculate the beryllium zone thickness, the disruption results, and the nuclear responses in different components including the toroidal field coils. Tungsten shield/water coolant/steel structure and steel shield/water coolant are the shield options for the inboard and outboard sections of the reactor. Lithium nitrate dissolved in the water coolant with a variable beryllium zone thickness in the outboard section of the reactor provides the tritium breeding capability. The reactor vault module defines the thickness of the reactor wall and the roof based on the dose equivalent during operation including the skyshine contribution. The impurity control module provides the design parameters for the divertor including plate design, heat load, erosion rate, tritium permeation through the plate material to the coolant, plasma contamination by sputtered impurities, and plate lifetime. Several materials (Be, C, V, Mo, and W) can be used for the divertor plate to cover a range of plasma edge temperatures. The tritium module calculates tritium and deuterium flow rates for the reactor plant. The tritium inventory in the fuelers, neutral beams, vacuum pumps, impurity control, first wall, and blanket is calculated. Tritium requirements are provided for different operating conditions. The nuclear models are summarized in this paper including the different design options and key analyses of each module. 39 refs., 3 tabs

  1. Recent developments in the Los Alamos radiation transport code system

    International Nuclear Information System (INIS)

    Forster, R.A.; Parsons, K.

    1997-01-01

A brief progress report on updates to the Los Alamos Radiation Transport Code System (LARTCS) for solving criticality and fixed-source problems is provided. LARTCS integrates the Diffusion Accelerated Neutral Transport (DANT) discrete ordinates codes with the Monte Carlo N-Particle (MCNP) code. The LARTCS code is being developed with a graphical user interface for problem setup and analysis. Progress in the DANT system for criticality applications includes a two-dimensional module which can be linked to a mesh-generation code and a faster iteration scheme. Updates to MCNP Version 4A allow statistical checks of calculated Monte Carlo results.

  2. Coding Conversation between Intimates: A Validation Study of the Intimate Negotiation Coding System (INCS).

    Science.gov (United States)

    Ting-Toomey, Stella

A study was conducted to test the reliability and validity of the Intimate Negotiation Coding System (INCS)--an instrument designed to code verbal conversation in intimate relationships. Subjects, 34 married couples, completed Spanier's Dyadic Adjustment Scale, which elicited information about relational adjustment and satisfaction in intimate couples in…

  3. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-05-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  4. Morse Monte Carlo Radiation Transport Code System

    Energy Technology Data Exchange (ETDEWEB)

    Emmett, M.B.

    1975-02-01

The report contains sections containing descriptions of the MORSE and PICTURE codes, input descriptions, sample problems, derivations of the physical equations and explanations of the various error messages. The MORSE code is a multipurpose neutron and gamma-ray transport Monte Carlo code. Time dependence for both shielding and criticality problems is provided. General three-dimensional geometry may be used with an albedo option available at any material surface. The PICTURE code provides aid in preparing correct input data for the combinatorial geometry package CG. It provides a printed view of arbitrary two-dimensional slices through the geometry. By inspecting these pictures one may determine if the geometry specified by the input cards is indeed the desired geometry. 23 refs. (WRF)

  5. Secure Cooperative Regenerating Codes for Distributed Storage Systems

    OpenAIRE

    Koyluoglu, O. Ozan; Rawat, Ankit Singh; Vishwanath, Sriram

    2012-01-01

    Regenerating codes enable trading off repair bandwidth for storage in distributed storage systems (DSS). Due to their distributed nature, these systems are intrinsically susceptible to attacks, and they may also be subject to multiple simultaneous node failures. Cooperative regenerating codes allow bandwidth efficient repair of multiple simultaneous node failures. This paper analyzes storage systems that employ cooperative regenerating codes that are robust to (passive) eavesdroppers. The ana...

  6. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity...... profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...

  7. THE OPTIMAL CONTROL IN THE MODELOF NETWORK SECURITY FROM MALICIOUS CODE

    Directory of Open Access Journals (Sweden)

    2016-01-01

The paper deals with a mathematical model of network security. The model is described in terms of nonlinear optimal control. As the quality criterion of the control problem, the total cost of the damage inflicted by the malicious code is chosen, under the additional restriction that the number of recovered nodes is maximized. The Pontryagin maximum principle for the construction of the optimal decisions is formulated. The number of switching points of the optimal control is found. The explicit form of the optimal control is given using the method of Lagrange multipliers.
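
    The record states the Pontryagin maximum principle without equations. In one common sign convention, for dynamics \dot{x} = f(x,u,t), a running cost f_0(x,u,t) to be minimized and admissible controls u in a set U, the principle reads:

        \[
          H(x,u,\psi,t) = \psi^{\top} f(x,u,t) - f_0(x,u,t),
          \qquad
          \dot{\psi} = -\frac{\partial H}{\partial x},
          \qquad
          u^{*}(t) = \arg\max_{u \in U} H\bigl(x^{*}(t),u,\psi(t),t\bigr),
        \]

    where the switching points of u^{*} follow from how the maximizing control changes along the optimal trajectory; the paper's specific node-infection and recovery dynamics are not reproduced here.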

  8. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  9. The application of LDPC code in MIMO-OFDM system

    Science.gov (United States)

    Liu, Ruian; Zeng, Beibei; Chen, Tingting; Liu, Nan; Yin, Ninghao

    2018-03-01

The combination of MIMO and OFDM technology has become one of the key technologies of fourth-generation mobile communication: it can overcome the frequency-selective fading of the wireless channel, increase the system capacity and improve frequency utilization. Error-correcting coding introduced into the system can further improve its performance. The LDPC (low density parity check) code is a kind of error-correcting code which can improve system reliability and anti-interference ability, and its decoding is simple and easy to implement. This paper mainly discusses the application of LDPC codes in a MIMO-OFDM system.

  10. SRAC2006: A comprehensive neutronics calculation code system

    International Nuclear Information System (INIS)

    Okumura, Keisuke; Kugo, Teruhiko; Kaneko, Kunio; Tsuchihashi, Keichiro

    2007-02-01

The SRAC is a code system applicable to neutronics analysis of a variety of reactor types. Since the publication of the second version of the users manual (JAERI-1302) in 1986 for the SRAC system, a number of additions and modifications to the functions and the library data have been made to establish a comprehensive neutronics code system. The current system includes major neutron data libraries (JENDL-3.3, JENDL-3.2, ENDF/B-VII, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, etc.), and integrates five elementary codes for neutron transport and diffusion calculation: PIJ, based on the collision probability method and applicable to 16 kinds of lattice models; the SN transport codes ANISN(1D) and TWOTRN(2D); and the diffusion codes TUD(1D) and CITATION(multi-D). The system also includes an auxiliary code COREBN for multi-dimensional core burn-up calculation. (author)

  11. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal-hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the equation of state (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides a complete overview of the code structure and the major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model. Therefore, this report should be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such the layout is very similar to that of RELAP. This similarity to the RELAP5 input is intentional, as the input scheme allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible

  12. Embedded Systems Design: Optimization Challenges

    DEFF Research Database (Denmark)

    Pop, Paul

    2005-01-01

    Summary form only given. Embedded systems are everywhere: from alarm clocks to PDAs, from mobile phones to cars, almost all the devices we use are controlled by embedded systems. Over 99% of the microprocessors produced today are used in embedded systems, and recently the number of embedded systems...... in use has become larger than the number of humans on the planet. The complexity of embedded systems is growing at a very high pace and the constraints in terms of functionality, performance, low energy consumption, reliability, cost and time-to-market are getting tighter. Therefore, the task...... of designing such systems is becoming increasingly important and difficult at the same time. New automated design optimization techniques are needed, which are able to: successfully manage the complexity of embedded systems, meet the constraints imposed by the application domain, shorten the time...

  13. Optimization of space manufacturing systems

    Science.gov (United States)

    Akin, D. L.

    1979-01-01

    Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of a MDRE. A parametric model of a space manufacturing facility is proposed, and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen, and implemented in a linear program with external constraints in order to arrive at an optimum blend of SPS production strategies in order to maximize returns.
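
    The program-option model itself is not given in the record, but the final step it describes, a linear program over SPS production strategies with external constraints, can be illustrated with a small, entirely hypothetical example using SciPy; the strategy mix, coefficients and constraint limits below are invented for the sketch and carry no data from the study.

        import numpy as np
        from scipy.optimize import linprog

        # Decision variables: fraction of SPS capacity built via three hypothetical strategies
        # (all-terrestrial materials, mixed, mostly lunar materials).  All numbers are illustrative.
        returns = np.array([1.0, 1.3, 1.5])          # relative return per unit capacity
        launch_mass = np.array([1.0, 0.6, 0.3])      # Earth-launch mass per unit capacity
        lunar_infra = np.array([0.0, 0.4, 1.0])      # lunar-infrastructure demand per unit capacity

        res = linprog(
            c=-returns,                              # linprog minimizes, so negate to maximize return
            A_ub=[launch_mass, lunar_infra],
            b_ub=[0.7, 0.5],                         # available launch mass and lunar infrastructure
            A_eq=[[1.0, 1.0, 1.0]],
            b_eq=[1.0],                              # strategies must cover the full programme
            bounds=[(0, 1)] * 3,
            method="highs",
        )
        print(res.x, -res.fun)                       # optimal strategy mix and its total return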

  14. Basic concept of common reactor physics code systems. Final report of working party on common reactor physics code systems (CCS)

    International Nuclear Information System (INIS)

    2004-03-01

A working party on common reactor physics code systems was organized for two years (2001-2002) under the Research Committee on Reactor Physics of JAERI. This final report is a compilation of the activity of the working party during those two years. The objective of the working party was to clarify the basic concept of common reactor physics code systems and thereby improve the convenience of reactor physics code systems for reactor physics researchers in Japan in their various fields of research and development activities. We held four meetings during the two years, investigated the status of reactor physics code systems and innovative software technologies, and discussed the basic concept of common reactor physics code systems. (author)

  15. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Chanin, D.I.; Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the Users's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems

  16. Comparison of criticality benchmark evaluations for U+Pu system. JACS code system and the other Monte Carlo codes

    International Nuclear Information System (INIS)

    Takada, Tomoyuki; Yoshiyama, Hiroshi; Miyoshi, Yoshinori; Katakura, Jun-ichi

    2003-01-01

The criticality safety evaluation code system JACS was developed by JAERI, and its accuracy evaluation was performed in the 1980s. Although the evaluation of JACS covered various critical systems, comparisons with continuous energy Monte Carlo codes were not performed because such codes had not yet been developed at that time. This paper presents such comparisons for heterogeneous and homogeneous systems containing U+Pu nitrate solutions. (author)

  17. Performance enhancement of optical code-division multiple-access systems using transposed modified Walsh code

    Science.gov (United States)

    Sikder, Somali; Ghosh, Shila

    2018-02-01

    This paper presents the construction of unipolar transposed modified Walsh code (TMWC) and analysis of its performance in optical code-division multiple-access (OCDMA) systems. Specifically, the signal-to-noise ratio, bit error rate (BER), cardinality, and spectral efficiency were investigated. The theoretical analysis demonstrated that the wavelength-hopping time-spreading system using TMWC was robust against multiple-access interference and more spectrally efficient than systems using other existing OCDMA codes. In particular, the spectral efficiency was calculated to be 1.0370 when TMWC of weight 3 was employed. The BER and eye pattern for the designed TMWC were also successfully obtained using OptiSystem simulation software. The results indicate that the proposed code design is promising for enhancing network capacity.

  18. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that allows the direction of an X-ray beam to be deviated, which can considerably increase the implementation costs. Hence, this paper describes a low cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR lower than that of the phase coded aperture reconstructions.

  19. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) in spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interference from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the most important noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can achieve −10 dBm, which is lower than that of the other systems.

  20. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

The Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. The Modified Double Weight (MDW) code for even weights and the Enhanced Double Weight (EDW) code for odd weights are two algorithms extending the use of the DW code to SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping for any weight greater than 2 is proposed. A single code construction algorithm gives the same length increment, Bit Error Rate (BER) calculation and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  1. User's manual for the BNW-I optimization code for dry-cooled power plants. [AMCIRC

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.; Faletti, D.W.; Wiles, L.E.

    1977-01-01

    This appendix provides a listing, called Program AMCIRC, of the BNW-1 optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using ammonia flow in the heat exchanger tubes. The optimum design is determined by repeating the design of the cooling system over a range of design conditions in order to find the cooling system with the smallest incremental cost. This is accomplished by varying five parameters of the plant and cooling system over ranges of values. These parameters are varied systematically according to techniques that perform pattern and gradient searches. The dry cooling system optimized by program AMCIRC is composed of a condenser/reboiler (condensation of steam and boiling of ammonia), piping system (transports ammonia vapor out and ammonia liquid from the dry cooling towers), and circular tower system (vertical one-pass heat exchangers situated in circular configurations with cocurrent ammonia flow in the tubes of the heat exchanger). (LCL)

  2. NUFCOS - nuclear fuel cycle optimization system

    International Nuclear Information System (INIS)

    Kaikkonen, H.; Salo, J.-P.; Vieno, T.; Vira, J.

    1979-05-01

    NUFCOS is a multigoal nuclear fuel cycle optimization code with an arbitrary number of decision objectives. The multigoal decision-making is based on the evolving techniques of fuzzy optimization. After a short description of the fuel cycle model and the calculation methods this report gives the input instructions in the case of three optimization criteria: minimization of fuel cycle costs, economical risk and nuclear weapons proliferation risk. (author)

  3. The in-core fuel management code system for VVER reactors

    International Nuclear Information System (INIS)

    Cada, R.; Krysl, V.; Mikolas, P.; Sustek, J.; Svarny, J.

    2004-01-01

The structure and methodology of a fuel management system for NPP VVER-1000 (NPP Temelin) and VVER-440 (NPP Dukovany) is described. It is under development in SKODA JS a.s. and is accompanied by practical applications. The general objectives of the system are maximization of the end-of-cycle reactivity, minimization of the fresh fuel inventory through minimization of the feed enrichment, and minimization of the burnable poisons (BPs) inventory. There are also safety-related constraints, in which minimization of power peaking plays a dominant role. The general structure of the system consists of the preparation of input data for macrocode calculation, and algorithms (codes) for optimization of the fuel loading, calculation of the fuel enrichment and BPs assignment. At present the core loading can be calculated (optimized) by a Tabu search algorithm (code ATHENA), a genetic algorithm (code Gen1) and a hybrid algorithm - a simplex procedure with application of a Tabu search algorithm on binary shuffling (code OPAL_B). The enrichment search is realized by the application of the simplex algorithm (OPAL_B code), and the BPs assignment by the module BPASS and the simplex algorithm in the OPAL_B code. Calculations of real core loadings are presented and a comparison of different optimization methods is provided. (author)

  4. Performance Analysis of Optical Code Division Multiplex System

    Science.gov (United States)

    Kaur, Sandeep; Bhatia, Kamaljit Singh

    2013-12-01

This paper presents the Pseudo-Orthogonal Code generator for an Optical Code Division Multiple Access (OCDMA) system, which helps to reduce the need for bandwidth expansion and improves spectral efficiency. In this paper we investigate the performance of a multi-user OCDMA system to achieve data rates of more than 1 Tbit/s.

  5. The Effect of Slot-Code Optimization in Warehouse Order Picking

    Directory of Open Access Journals (Sweden)

    Andrea Fumi

    2013-07-01

    most appropriate material handling resource configuration. Building on previous work on the effect of slot-code optimization on travel times in single/dual command cycles, the authors broaden the scope to include the most general picking case, thus widening the range of applicability and realising former suggestions for future research.

  6. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng

    2016-12-24

    In computer storage, RAID 6 is a level of RAID that can tolerate two failed drives. When RAID-6 is implemented by Reed-Solomon (RS) codes, the write-performance penalty lies in the field multiplications required for the second parity. In this paper, we present a configuration of the factors of the second-parity formula such that the arithmetic complexity can reach the optimal complexity bound as the code length approaches infinity. In the proposed approach, the intermediate data used for the first parity is also utilized to calculate the second parity. To the best of our knowledge, this is the first approach that allows RAID-6 RS codes to approach the optimal arithmetic complexity.
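
    To make the parity structure referred to above concrete, the sketch below computes the standard RAID-6 parities over GF(2^8): P as the XOR of the data blocks and Q as the sum of g^i times the i-th block with generator g = 2. It shows where the field multiplications in Q arise; it is not the optimized factorization proposed in the paper.

      # GF(2^8) with the conventional RAID-6 polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11d)
      def gf_mul(a, b):
          r = 0
          while b:
              if b & 1:
                  r ^= a
              a <<= 1
              if a & 0x100:
                  a ^= 0x11d
              b >>= 1
          return r

      def raid6_parity(data_blocks):
          """P is the XOR of all data blocks; Q accumulates g^i * d_i over GF(2^8), g = 2."""
          p = bytearray(len(data_blocks[0]))
          q = bytearray(len(data_blocks[0]))
          g_i = 1                              # g^0
          for block in data_blocks:
              for k, byte in enumerate(block):
                  p[k] ^= byte                 # first parity: plain XOR
                  q[k] ^= gf_mul(g_i, byte)    # second parity: the costly field multiplications
              g_i = gf_mul(g_i, 2)             # advance the generator power for the next drive
          return bytes(p), bytes(q)

      blocks = [bytes([i * 17 % 256] * 16) for i in range(4)]   # four toy "data drives"
      P, Q = raid6_parity(blocks)
      print(P.hex(), Q.hex())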

  7. Modern Nuclear Data Evaluation with the TALYS Code System

    Science.gov (United States)

    Koning, A. J.; Rochman, D.

    2012-12-01

    This paper presents a general overview of nuclear data evaluation and its applications as developed at NRG, Petten. Based on concepts such as robustness, reproducibility and automation, modern calculation tools are exploited to produce original nuclear data libraries that meet the current demands on quality and completeness. This requires a system which comprises differential measurements, theory development, nuclear model codes, resonance analysis, evaluation, ENDF formatting, data processing and integral validation in one integrated approach. Software, built around the TALYS code, will be presented in which all these essential nuclear data components are seamlessly integrated. Besides the quality of the basic data and its extensive format testing, a second goal lies in the diversity of processing for different types of users. The implications of this scheme are unprecedented. The most important are: 1. Complete ENDF-6 nuclear data files, in the form of the TENDL library, including covariance matrices, for many isotopes, particles, energies, reaction channels and derived quantities. All isotopic data files are mutually consistent and are supposed to rival those of the major world libraries. 2. More exact uncertainty propagation from basic nuclear physics to applied (reactor) calculations based on a Monte Carlo approach: "Total" Monte Carlo (TMC), using random nuclear data libraries. 3. Automatic optimization in the form of systematic feedback from integral measurements back to the basic data. This method of work also opens a new way of approaching the analysis of nuclear applications, with consequences in both applied nuclear physics and safety of nuclear installations, and several examples are given here. This applied experience and feedback is integrated in a final step to improve the quality of the nuclear data, to change the users' vision and finally to orchestrate their integration into simulation codes.
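
    The "Total" Monte Carlo approach mentioned under point 2 can be summarized in a few lines: sample many random nuclear data libraries, run the application calculation once per library, and read the nuclear-data uncertainty off the spread of the results. The sketch below is a toy stand-in with an invented two-parameter "library" and response function, purely to show the structure of the loop; a real TMC study would generate full random TENDL files and run a transport code for each.

      import numpy as np

      rng = np.random.default_rng(7)

      def run_application(library):
          # Stand-in for a full transport calculation (e.g. a k_eff run) driven by one
          # random nuclear data library; the response function here is invented.
          return 1.0 + 0.02 * library["capture_bias"] - 0.015 * library["fission_bias"]

      n_libraries = 500
      keff = np.empty(n_libraries)
      for i in range(n_libraries):
          # In real TMC each iteration uses a complete random TENDL-style library sampled
          # from nuclear-model parameter distributions; here it is just two scalar "biases".
          library = {"capture_bias": rng.normal(), "fission_bias": rng.normal()}
          keff[i] = run_application(library)

      print(f"k_eff = {keff.mean():.5f} +/- {keff.std(ddof=1):.5f} (nuclear-data spread)")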

  8. Modern Nuclear Data Evaluation with the TALYS Code System

    International Nuclear Information System (INIS)

    Koning, A.J.; Rochman, D.

    2012-01-01

    This paper presents a general overview of nuclear data evaluation and its applications as developed at NRG, Petten. Based on concepts such as robustness, reproducibility and automation, modern calculation tools are exploited to produce original nuclear data libraries that meet the current demands on quality and completeness. This requires a system which comprises differential measurements, theory development, nuclear model codes, resonance analysis, evaluation, ENDF formatting, data processing and integral validation in one integrated approach. Software, built around the TALYS code, will be presented in which all these essential nuclear data components are seamlessly integrated. Besides the quality of the basic data and its extensive format testing, a second goal lies in the diversity of processing for different types of users. The implications of this scheme are unprecedented. The most important are: 1. Complete ENDF-6 nuclear data files, in the form of the TENDL library, including covariance matrices, for many isotopes, particles, energies, reaction channels and derived quantities. All isotopic data files are mutually consistent and are supposed to rival those of the major world libraries. 2. More exact uncertainty propagation from basic nuclear physics to applied (reactor) calculations based on a Monte Carlo approach: “Total” Monte Carlo (TMC), using random nuclear data libraries. 3. Automatic optimization in the form of systematic feedback from integral measurements back to the basic data. This method of work also opens a new way of approaching the analysis of nuclear applications, with consequences in both applied nuclear physics and safety of nuclear installations, and several examples are given here. This applied experience and feedback is integrated in a final step to improve the quality of the nuclear data, to change the users' vision and finally to orchestrate their integration into simulation codes.

  9. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    International Nuclear Information System (INIS)

    Kurosu, K; Takashina, M; Koizumi, M; Das, I; Moskvin, V

    2014-01-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) of the GATE and PHITS codes have not been reported; they are studied here for PDD and proton range, compared with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physical and transport model. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters by using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physical model, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in-Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health
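
    The proton range quoted in the Results is the R90 depth, i.e. the depth on the distal side of the Bragg peak where the dose has fallen to 90% of its maximum. A minimal way to extract it from a sampled PDD curve is shown below; the curve used here is synthetic and for illustration only, not the measured or simulated data from the study.

      import numpy as np

      def r90(depth_mm, dose):
          """Depth where the dose falls to 90% of its maximum on the distal side of the peak,
          found by linear interpolation between the two bracketing samples."""
          dose = np.asarray(dose, dtype=float)
          i_max = int(np.argmax(dose))
          target = 0.9 * dose[i_max]
          for i in range(i_max, len(dose) - 1):
              if dose[i] >= target >= dose[i + 1]:
                  frac = (dose[i] - target) / (dose[i] - dose[i + 1])
                  return depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i])
          raise ValueError("distal 90% level not reached within the scanned depths")

      # Synthetic PDD-like curve (plateau plus a Bragg-peak-shaped bump), illustration only.
      z = np.linspace(0.0, 300.0, 601)
      pdd = 0.3 + 0.7 * np.exp(-((z - 260.0) / 8.0) ** 2)
      print(f"R90 = {r90(z, pdd):.2f} mm")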

  10. RELAP5/MOD3 code manual: Code structure, system models, and solution methods. Volume 1

    International Nuclear Information System (INIS)

    1995-08-01

    The RELAP5 code has been developed for best estimate transient simulation of light water reactor coolant systems during postulated accidents. The code models the coupled behavior of the reactor coolant system and the core for loss-of-coolant accidents and operational transients such as anticipated transient without scram, loss of offsite power, loss of feedwater, and loss of flow. A generic modeling approach is used that permits simulating a variety of thermal hydraulic systems. Control system and secondary system components are included to permit modeling of plant controls, turbines, condensers, and secondary feedwater systems. RELAP5/MOD3 code documentation is divided into seven volumes: Volume I provides modeling theory and associated numerical schemes

  11. Cogeneration system simulation/optimization

    International Nuclear Information System (INIS)

    Puppa, B.A.; Chandrashekar, M.

    1992-01-01

    Companies are increasingly turning to computer software programs to improve and streamline the analysis of cogeneration systems. This paper introduces a computer program which originated with research at the University of Waterloo. The program can simulate and optimize any type of cogeneration plant layout. An application of the program to a cogeneration feasibility study for a university campus is described. The Steam and Power Plant Optimization System (SAPPOS) is a PC software package which allows users to model any type of steam/power plant on a component-by-component basis. Individual energy/steam balances can be done quickly to model any scenario. A typical-days-per-month cogeneration simulation can also be carried out to provide a detailed monthly cash flow and energy forecast. This paper reports that SAPPOS can be used for scoping, feasibility, and preliminary design work, along with financial studies, gas contract studies, and optimizing the operation of completed plants. In the feasibility study presented, SAPPOS is used to evaluate both diesel engine and gas turbine combined cycle options

  12. Biometric iris image acquisition system with wavefront coding technology

    Science.gov (United States)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired under challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation on working distance when the camera lens and CCD sensor are known. The working distance is set to 3 m in our system, with a pupil diameter of 86 mm and a CCD irradiance of 0.3 mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over a 3 m working distance with 400 mm focal length and aperture F/6.3 optics. The simulation result as well as experiment validates the proposed code

  13. LOLA SYSTEM: A code block for nodal PWR simulation. Part. I - Simula-3 Code

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-07-01

    Description of the theory and user's manual of the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3 is the main module of the system; it uses a modified nodal theory, with interface leakages equivalent to diffusion theory. (Author) 4 refs.

  14. LOLA SYSTEM: A code block for nodal PWR simulation. Part. I - Simula-3 Code

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and user's manual of the SIMULA-3 code, which is part of LOLA SYSTEM, a core calculation system based on one-group nodal theory. SIMULA-3 is the main module of the system; it uses a modified nodal theory, with interface leakages equivalent to diffusion theory. (Author) 4 refs

  15. SWAT2: The improved SWAT code system by incorporating the continuous energy Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Mochizuki, Hiroki; Suyama, Kenya; Okuno, Hiroshi

    2003-01-01

    SWAT is a code system which performs burnup calculations by combining the neutronics calculation code SRAC95 with the one-group burnup calculation code ORIGEN2.1. The SWAT code system can deal with the cell geometries available in SRAC95. However, a precise treatment of resonance absorption by the SRAC95 code using the ultra-fine group cross section library is not directly applicable to two- or three-dimensional geometry models because of restrictions in SRAC95. To overcome this problem, SWAT2, which newly incorporates the continuous energy Monte Carlo code MVP into SWAT, was developed. This makes continuous-energy burnup calculations possible in any geometry. Moreover, using the 147-group cross section library called the SWAT library, reactions which are not dealt with by SRAC95 and MVP can be treated. The OECD/NEA burnup credit criticality safety benchmark problems Phase-IB (PWR, a single pin cell model) and Phase-IIIB (BWR, fuel assembly model) were calculated as a verification of SWAT2, and the results were compared with the average values of the burnup calculation results reported by each participating organization. Through the two benchmark problems, it was confirmed that SWAT2 is applicable to burnup calculations of complicated geometries. (author)

  16. Interval Coded Scoring: a toolbox for interpretable scoring systems

    Directory of Open Access Journals (Sweden)

    Lieven Billiet

    2018-04-01

    Full Text Available Over the last decades, clinical decision support systems have been gaining importance. They help clinicians to make effective use of the overload of available information to obtain correct diagnoses and appropriate treatments. However, their power often comes at the cost of a black box model which cannot be interpreted easily. This interpretability is of paramount importance in a medical setting with regard to trust and (legal) responsibility. In contrast, existing medical scoring systems are easy to understand and use, but they are often a simplified rule-of-thumb summary of previous medical experience rather than a well-founded system based on available data. Interval Coded Scoring (ICS) connects these two approaches, exploiting the power of sparse optimization to derive scoring systems from training data. The presented toolbox interface makes this theory easily applicable to both small and large datasets. It contains two possible problem formulations based on linear programming or elastic net. Both allow the construction of a model for a binary classification problem and establish risk profiles that can be used for future diagnosis. All of this requires only a few lines of code. ICS differs from standard machine learning through its model consisting of interpretable main effects and interactions. Furthermore, insertion of expert knowledge is possible because the training can be semi-automatic. This allows end users to make a trade-off between complexity and performance based on cross-validation results and expert knowledge. Additionally, the toolbox offers an accessible way to assess classification performance via accuracy and the ROC curve, whereas the calibration of the risk profile can be evaluated via a calibration curve. Finally, the colour-coded model visualization has particular appeal if one wants to apply ICS manually on new observations, as well as for validation by experts in the specific application domains. The validity and applicability
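
    The core idea described above, deriving an integer-point scoring system from sparse optimization, can be illustrated outside the toolbox itself. The sketch below is not ICS (which uses the linear programming and elastic net formulations described above); it merely fits an L1-regularized logistic regression on a public dataset and rounds the surviving weights to small integer points, assuming scikit-learn and scipy are available.

      import numpy as np
      from scipy.stats import spearmanr
      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.preprocessing import StandardScaler

      # Sparse (L1-penalized) logistic regression, then rounding to small integer points.
      X, y = load_breast_cancer(return_X_y=True)
      Xs = StandardScaler().fit_transform(X)
      clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(Xs, y)

      scale = 2.0 / np.abs(clf.coef_).max()          # map the largest weight to +/-2 points
      points = np.round(clf.coef_[0] * scale).astype(int)
      kept = np.flatnonzero(points)                  # features surviving sparsification
      score = Xs[:, kept] @ points[kept]             # integer-weighted risk score

      print("features kept:", len(kept), "of", X.shape[1])
      print("rank agreement with the full model:",
            round(spearmanr(score, Xs @ clf.coef_[0])[0], 3))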

  17. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    International Nuclear Information System (INIS)

    Baratta, A.J.

    1997-01-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together

  18. Interface requirements for coupling a containment code to a reactor system thermal hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Baratta, A.J.

    1997-07-01

    To perform a complete analysis of a reactor transient, not only the primary system response but also the containment response must be accounted for. Such transients and accidents as a loss of coolant accident in both pressurized water and boiling water reactors and inadvertent operation of safety relief valves all challenge the containment and may influence flows because of containment feedback. More recently, the advanced reactor designs put forth by General Electric and Westinghouse in the US and by Framatome and Siemens in Europe rely on the containment to act as the ultimate heat sink. Techniques used by analysts and engineers to analyze the interaction of the containment and the primary system were usually iterative in nature. Codes such as RELAP or RETRAN were used to analyze the primary system response and CONTAIN or CONTEMPT the containment response. The analysis was performed by first running the system code and representing the containment as a fixed pressure boundary condition. The flows were usually from the primary system to the containment initially and generally under choked conditions. Once the mass flows and timing were determined from the system codes, these conditions were input into the containment code. The resulting pressures and temperatures were then calculated and the containment performance analyzed. The disadvantage of this approach becomes evident when one performs an analysis of a rapid depressurization or a long-term accident sequence in which feedback from the containment can occur. For example, in a BWR main steam line break transient, the containment heats up and becomes a source of energy for the primary system. Recent advances in programming and computer technology are available to provide an alternative approach. The author and other researchers have developed linkage codes capable of transferring data between codes at each time step allowing discrete codes to be coupled together.

  19. Performance Analysis of Spectral Amplitude Coding Based OCDMA System with Gain and Splitter Mismatch

    Science.gov (United States)

    Umrani, Fahim A.; Umrani, A. Waheed; Umrani, Naveed A.; Memon, Kehkashan A.; Kalwar, Imtiaz Hussain

    2013-09-01

    This paper presents a practical analysis of optical code-division multiple-access (O-CDMA) systems based on perfect difference codes. The work uses an SNR criterion to select the optimal value of the avalanche photodiode (APD) gain and shows how mismatch in the splitters and in the gains of the APDs used in the transmitters and receivers of the network can degrade the BER performance of the system. The investigations also reveal that higher APD gains are not suitable for such systems, even at higher powers. The system performance, with consideration of shot noise, thermal noise, and bulk and surface leakage currents, is also investigated.

  20. Study of nuclear computer code maintenance and management system

    International Nuclear Information System (INIS)

    Ryu, Chang Mo; Kim, Yeon Seung; Eom, Heung Seop; Lee, Jong Bok; Kim, Ho Joon; Choi, Young Gil; Kim, Ko Ryeo

    1989-01-01

    Software maintenance has been one of the most important problems since the late 1970s. We wish to develop a nuclear computer code system to maintain and manage KAERI's nuclear software. As a part of this system, we have developed three code management programs for use on CYBER and PC systems. They are used for the systematic management of computer codes at KAERI. The first program is implemented on the CYBER system to rapidly provide information on nuclear codes to users. The second and third programs were implemented on the PC system for the code manager and for the management of data in the Korean language, respectively. In the requirement analysis, we defined each code, magnetic tape, manual and abstract information data. In the conceptual design, we designed retrieval, update, and output functions. In the implementation design, we described the technical considerations of database programs, utilities, and directions for the use of databases. As a result of this research, we compiled the status of nuclear computer codes which belonged to KAERI as of September 1988. Thus, by using these three database programs, we could provide nuclear computer code information to users more rapidly. (Author)

  1. Optimal concentrations in transport systems

    Science.gov (United States)

    Jensen, Kaare H.; Kim, Wonjung; Holbrook, N. Michele; Bush, John W. M.

    2013-01-01

    Many biological and man-made systems rely on transport systems for the distribution of material, for example matter and energy. Material transfer in these systems is determined by the flow rate and the concentration of material. While the most concentrated solutions offer the greatest potential in terms of material transfer, impedance typically increases with concentration, thus making them the most difficult to transport. We develop a general framework for describing systems for which impedance increases with concentration, and consider material flow in four different natural systems: blood flow in vertebrates, sugar transport in vascular plants and two modes of nectar drinking in birds and insects. The model provides a simple method for determining the optimum concentration c_opt in these systems. The model further suggests that the impedance at the optimum concentration μ_opt may be expressed in terms of the impedance of the pure (c = 0) carrier medium μ_0 as μ_opt ∼ 2^α μ_0, where the power α is prescribed by the specific flow constraints, for example constant pressure for blood flow (α = 1) or constant work rate for certain nectar-drinking insects (α = 6). Comparing the model predictions with experimental data from more than 100 animal and plant species, we find that the simple model rationalizes the observed concentrations and impedances. The model provides a universal framework for studying flows impeded by concentration, and yields insight into optimization in engineered systems, such as traffic flow. PMID:23594815
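
    The optimum described here can also be located numerically for any assumed impedance law. The sketch below uses a purely illustrative exponential law μ(c) = μ_0 exp(c/c*), which is not one of the empirical relations used in the study, together with the constant-pressure case where material flux scales as c/μ(c); it simply finds the concentration that maximizes that flux.

      import numpy as np
      from scipy.optimize import minimize_scalar

      mu0, c_star = 1.0, 0.25                 # illustrative constants only

      def mu(c):
          return mu0 * np.exp(c / c_star)     # assumed impedance law (grows with concentration)

      def neg_flux(c):
          # Constant driving pressure: flow rate ~ 1/mu(c), material flux ~ c * flow rate.
          return -(c / mu(c))

      res = minimize_scalar(neg_flux, bounds=(1e-6, 1.0), method="bounded")
      c_opt = res.x
      print(f"c_opt = {c_opt:.3f}, mu(c_opt)/mu_0 = {mu(c_opt) / mu0:.2f}")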

  2. Code system to compute radiation dose in human phantoms

    International Nuclear Information System (INIS)

    Ryman, J.C.; Cristy, M.; Eckerman, K.F.; Davis, J.L.; Tang, J.S.; Kerr, G.D.

    1986-01-01

    A Monte Carlo photon transport code and a code using Monte Carlo integration of a point kernel have been revised to incorporate human phantom models for an adult female, juveniles of various ages, and a pregnant female at the end of the first trimester of pregnancy, in addition to the adult male used earlier. An analysis code has been developed for deriving recommended values of specific absorbed fractions of photon energy. The computer code system and calculational method are described, emphasizing recent improvements in methods

  3. PlayNCool: Opportunistic Network Coding for Local Optimization of Routing in Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces PlayNCool, an opportunistic protocol with local optimization based on network coding to increase the throughput of a wireless mesh network (WMN). PlayNCool aims to enhance current routing protocols by (i) allowing random linear network coding transmissions end-to-end, (ii...... in large scale mesh networks. We show that PlayNCool can provide gains of more than 3x in individual links, which translates into a large end-to-end throughput improvement, and that it provides higher gains when more nodes in the network contend for the channel at the MAC layer, making it particularly...... relevant for dense mesh networks....

  4. PERFORMANCE ANALYSIS OF OPTICAL CDMA SYSTEM USING VC CODE FAMILY UNDER VARIOUS OPTICAL PARAMETERS

    Directory of Open Access Journals (Sweden)

    HASSAN YOUSIF AHMED

    2012-06-01

    Full Text Available The intent of this paper is to study the performance of spectral-amplitude coding optical code-division multiple-access (OCDMA) systems using the Vector Combinatorial (VC) code under various optical parameters. This code can be constructed in an algebraic way based on Euclidean vectors for any positive integer number. One of the important properties of this code is that the maximum cross-correlation is always one, which means that multi-user interference (MUI) and phase induced intensity noise are reduced. Transmitter and receiver structures based on unchirped fiber Bragg gratings (FBGs) using the VC code, and taking into account the effects of intensity, shot and thermal noise sources, are demonstrated. The impact of fiber distance effects on bit error rate (BER) is reported using a commercial optical systems simulator, Virtual Photonics Instrument (VPI(TM)). The VC code is compared mathematically with reported codes which use similar techniques. We analyzed and characterized the fiber link, received power, BER and channel spacing. The performance and optimization of the VC code in a SAC-OCDMA system is reported. By comparing the theoretical and simulation results taken from VPI(TM), we have demonstrated that, for a high number of users, even if the data rate is higher, the effective power source is adequate when the VC code is used. Also it is found that as the channel spacing width goes from very narrow to wider, the BER decreases; the best performance occurs at a spacing bandwidth between 0.8 and 1 nm. We have shown that the SAC system utilizing the VC code significantly improves the performance compared with the reported codes.

  5. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components of a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes. For example, SETS and FTAP are used to determine cut sets; Importance for importance calculations; and Sample, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run one after another and their inputs and outputs are not linked, which can result in errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously, without the need to link inputs and outputs to other codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE itself. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples are selected and the results and computation times are compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes, and that it can be used on personal computers
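
    As a concrete illustration of the first of those components, the sketch below determines the minimal cut sets of a small made-up AND/OR fault tree by recursive gate expansion followed by removal of non-minimal sets. It mimics, at toy scale, what codes such as SETS or FTAP do; importance and uncertainty calculations would then operate on these cut sets.

      from itertools import product

      # Toy fault tree: gates map to ("AND" | "OR", children); anything else is a basic event.
      gates = {
          "TOP": ("AND", ["G1", "G2"]),
          "G1":  ("OR",  ["pump_A_fails", "valve_A_stuck"]),
          "G2":  ("OR",  ["pump_B_fails", "power_bus_lost"]),
      }

      def cut_sets(node):
          """Return the cut sets of `node` as frozensets of basic events."""
          if node not in gates:                       # basic event
              return [frozenset([node])]
          kind, children = gates[node]
          child_sets = [cut_sets(c) for c in children]
          if kind == "OR":                            # OR gate: union of the children's cut sets
              return [cs for sets in child_sets for cs in sets]
          combined = []                               # AND gate: one cut set from every child
          for combo in product(*child_sets):
              combined.append(frozenset().union(*combo))
          return combined

      def minimal(sets):
          """Drop duplicates and any cut set that strictly contains another."""
          sets = list(set(sets))
          return [s for s in sets if not any(t < s for t in sets)]

      for s in sorted(minimal(cut_sets("TOP")), key=len):
          print(sorted(s))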

  6. Temporally Dependent Rate-Distortion Optimization for Low-Delay Hierarchical Video Coding.

    Science.gov (United States)

    Gao, Yanbo; Zhu, Ce; Li, Shuai; Yang, Tianwu

    2017-09-01

    Low-delay hierarchical coding structure (LD-HCS), as one of the most important components in the latest High Efficiency Video Coding (HEVC) standard, greatly improves coding performance. It groups consecutive P/B frames into different layers and encodes them with different quantization parameters (QPs) and reference mechanisms in such a way that temporal dependency among frames can be exploited. However, due to the varying characteristics of video contents, temporal dependency differs significantly among coding units in the same or different layers, and a fixed LD-HCS scheme cannot take full advantage of the dependency, leading to a substantial loss in coding performance. This paper addresses the temporally dependent rate distortion optimization (RDO) problem by attempting to exploit the varying temporal dependency of different units. First, the temporal relationship of different frames under the LD-HCS is examined, and hierarchical temporal propagation chains are constructed to represent the temporal dependency among coding units in different frames. Then, a hierarchical temporally dependent RDO scheme is developed specifically for the LD-HCS based on a source distortion propagation model. Experimental results show that our proposed scheme can achieve 2.5% and 2.3% BD-rate gain on average compared with the HEVC codec under the same configuration of P and B frames, respectively, with a negligible increase in encoding time. Furthermore, coupled with QP adaptation, our proposed method can achieve higher coding gains, e.g., with multi-QP optimization, about 5.4% and 5.0% BD-rate saving on average over the HEVC codec under the same setting of P and B frames, respectively.

  7. The JAERI code system for evaluation of BWR ECCS performance

    International Nuclear Information System (INIS)

    Kohsaka, Atsuo; Akimoto, Masayuki; Asahi, Yoshiro; Abe, Kiyoharu; Muramatsu, Ken; Araya, Fumimasa; Sato, Kazuo

    1982-12-01

    Development of separate BWR and PWR computer code systems for evaluation of the ECCS has been conducted since 1973, considering the differences in the reactor cooling system, core structure and ECCS. The first version of the BWR code system, whose development started earlier than that of the PWR, has been completed. The BWR code system is designed to provide computational tools to analyze all phases of LOCAs and to evaluate the performance of the ECCS, including an ''Evaluation Model (EM)'' feature in compliance with the requirements of the current Japanese Evaluation Guideline of ECCS. The BWR code system could be used for licensing purposes, i.e. for ECCS performance evaluation or audit calculations to cross-examine the methods and results of applicants or vendors. The BWR code system presented in this report comprises several computer codes, each of which analyzes a particular phase of a LOCA or a system blowdown depending on a range of LOCAs, i.e. large and small breaks in a variety of locations in the reactor system. The system includes ALARM-B1, HYDY-B1 and THYDE-B1 for analysis of the system blowdown for various break sizes, THYDE-B-REFLOOD for analysis of the reflood phase and SCORCH-B2 for the calculation of the fuel assembly hot plane temperature. When multiple codes are used to analyze a broad range of LOCAs as stated above, it is very important to evaluate the adequacy and consistency of the codes used to cover the entire break spectrum. The system consistency, together with the system performance, is discussed for a large commercial BWR. (author)

  8. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Qing [Univ. of Colorado, Colorado Springs, CO (United States); Whaley, Richard Clint [Univ. of Texas, San Antonio, TX (United States); Qasem, Apan [Texas State Univ., San Marcos, TX (United States); Quinlan, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-23

    This report summarizes our effort and the results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.
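
    The empirical-tuning step described above, re-running a parameterized code variant on the target machine until the best configuration is found, boils down to a simple search loop. The sketch below is not POET or the ROSE engine; it is a generic exhaustive tuner, assuming a toy tiled matrix-multiply kernel whose tile size stands in for a parameterized transformation.

      import itertools
      import time
      import numpy as np

      def tile_matmul(a, b, tile):
          # Toy tiled kernel standing in for a POET-parameterized code variant.
          n = a.shape[0]
          c = np.zeros((n, n))
          for i in range(0, n, tile):
              for j in range(0, n, tile):
                  for k in range(0, n, tile):
                      c[i:i+tile, j:j+tile] += a[i:i+tile, k:k+tile] @ b[k:k+tile, j:j+tile]
          return c

      def time_once(run, cfg):
          t0 = time.perf_counter()
          run(**cfg)
          return time.perf_counter() - t0

      def empirical_tune(param_space, run, repeats=3):
          best_cfg, best_t = None, float("inf")
          for values in itertools.product(*param_space.values()):
              cfg = dict(zip(param_space.keys(), values))
              t = min(time_once(run, cfg) for _ in range(repeats))   # best of a few timings
              if t < best_t:
                  best_cfg, best_t = cfg, t
          return best_cfg, best_t

      n = 256
      a, b = np.random.rand(n, n), np.random.rand(n, n)
      print(empirical_tune({"tile": [16, 32, 64, 128]}, lambda tile: tile_matmul(a, b, tile)))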

  9. Network coding and its applications to satellite systems

    DEFF Research Database (Denmark)

    Vieira, Fausto; Roetter, Daniel Enrique Lucani

    2015-01-01

    Network coding has its roots in information theory where it was initially proposed as a way to improve a two-node communication using a (broadcasting) relay. For this theoretical construct, a satellite communications system was proposed as an illustrative example, where the relay node would...... be a satellite covering the two nodes. The benefits in terms of throughput, resilience, and flexibility of network coding are quite relevant for wireless networks in general, and for satellite systems in particular. This chapter presents some of the basics in network coding, as well as an overview of specific...... scenarios where network coding provides a significant improvement compared to existing solutions, for example, in broadcast and multicast satellite networks, hybrid satellite-terrestrial networks, and broadband multibeam satellites. The chapter also compares coding perspectives and revisits the layered...

  10. Optimization of energy saving device combined with a propeller using real-coded genetic algorithm

    Directory of Open Access Journals (Sweden)

    Ryu Tomohiro

    2014-06-01

    Full Text Available This paper presents a numerical optimization method to improve the performance of a propeller with Turbo-Ring using a real-coded genetic algorithm. In the presented method, Unimodal Normal Distribution Crossover (UNDX) and the Minimal Generation Gap (MGG) model are used as the crossover operator and generation-alternation model, respectively. Propeller characteristics are evaluated by a simple surface panel method, “SQCM”, in the optimization process. Blade sections of the original Turbo-Ring and propeller are replaced by the NACA66 a = 0.8 section. However, the original chord, skew, rake and maximum blade thickness distributions in the radial direction are unchanged. Pitch and maximum camber distributions in the radial direction are selected as the design variables. Optimization is conducted to maximize the efficiency of the propeller with Turbo-Ring. The experimental result shows that the efficiency of the optimized propeller with Turbo-Ring is higher than that of the original propeller with Turbo-Ring.
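
    The optimization loop described above can be sketched compactly. The code below is a simplified real-coded GA: it keeps the MGG-style generation alternation (two parents picked per step, the family reduced back to two survivors, here simply the best two) but substitutes a blend crossover (BLX-alpha) for UNDX to stay short, and a made-up analytic function stands in for the SQCM panel-method evaluation of propeller efficiency.

      import numpy as np

      rng = np.random.default_rng(42)

      def efficiency(x):
          # Made-up smooth objective; the real code would call the SQCM panel method here.
          return -np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.cos(5 * np.pi * x))

      def blx_alpha(p1, p2, alpha=0.5):
          # Blend crossover, used in place of UNDX purely to keep the sketch short.
          lo, hi = np.minimum(p1, p2), np.maximum(p1, p2)
          d = hi - lo
          return rng.uniform(lo - alpha * d, hi + alpha * d)

      dim, pop_size, n_children, generations = 10, 30, 10, 300
      pop = rng.uniform(-1.0, 1.0, (pop_size, dim))    # e.g. pitch/camber control points
      fit = np.array([efficiency(x) for x in pop])

      for _ in range(generations):
          i, j = rng.choice(pop_size, 2, replace=False)          # MGG: two parents per step
          family = [pop[i], pop[j]] + [blx_alpha(pop[i], pop[j]) for _ in range(n_children)]
          ffit = np.array([efficiency(x) for x in family])
          best = np.argsort(ffit)[::-1][:2]                      # survivors: best two of the family
          pop[i], pop[j] = family[best[0]], family[best[1]]
          fit[i], fit[j] = ffit[best[0]], ffit[best[1]]

      print("best surrogate efficiency:", fit.max())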

  11. ℓ2-optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most of the previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric and it offers fine granularity in rate control, but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach of near-lossless image coding is to embed into a DPCM prediction loop a uniform scalar quantizer of residual errors. The said uniform scalar quantizer is replaced, in the proposed new approach, by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also it achieves higher PSNR for relatively high bit rates.
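
    The baseline that the paper improves upon, a DPCM loop with a uniform scalar quantizer of the prediction residuals, is easy to write down. The sketch below uses a simple previous-pixel predictor (not the context-based ℓ2-optimized quantizers proposed in the paper) and a quantizer step of 2δ+1, which guarantees the ℓ∞ bound |reconstruction − original| ≤ δ.

      import numpy as np

      def near_lossless_dpcm(img, delta):
          """DPCM with a uniform residual quantizer of step 2*delta+1; guarantees
          |reconstruction - original| <= delta at every pixel."""
          img = img.astype(np.int64)
          rec = np.zeros_like(img)
          q_idx = np.zeros_like(img)
          step = 2 * delta + 1
          h, w = img.shape
          for i in range(h):
              for j in range(w):
                  # previous-pixel predictor: left neighbour, top neighbour in the first column
                  pred = rec[i, j - 1] if j > 0 else (rec[i - 1, j] if i > 0 else 128)
                  resid = img[i, j] - pred
                  q = int(np.round(resid / step))   # uniform quantization of the residual
                  q_idx[i, j] = q                   # these indices would be entropy coded
                  rec[i, j] = pred + q * step       # closed-loop (decoder-side) reconstruction
          return q_idx, rec

      img = np.random.default_rng(1).integers(0, 256, (32, 32))
      delta = 2
      _, rec = near_lossless_dpcm(img, delta)
      print("max abs error:", int(np.max(np.abs(rec - img))), "<=", delta)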

  12. The PASC-3 code system and the UNIPASC environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.

    1991-08-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and its associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection, complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219-group cross section library derived from JEF-1, for which some benchmark results are presented. By the addition of the UNIPASC work environment, the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values, which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  13. Code-modulated interferometric imaging system using phased arrays

    Science.gov (United States)

    Chauhan, Vikas; Greene, Kevin; Floyd, Brian

    2016-05-01

    Millimeter-wave (mm-wave) imaging provides compelling capabilities for security screening, navigation, and biomedical applications. Traditional scanned or focal-plane mm-wave imagers are bulky and costly. In contrast, phased-array hardware developed for mass-market wireless communications and automotive radar promises to be extremely low cost. In this work, we present techniques which can allow low-cost phased-array receivers to be reconfigured or re-purposed as interferometric imagers, removing the need for custom hardware and thereby reducing cost. Since traditional phased arrays power combine incoming signals prior to digitization, orthogonal code-modulation is applied to each incoming signal using phase shifters within each front-end and two-bit codes. These code-modulated signals can then be combined and processed coherently through a shared hardware path. Once digitized, visibility functions can be recovered through squaring and code-demultiplexing operations. Provided that codes are selected such that the product of two orthogonal codes is a third unique and orthogonal code, it is possible to demultiplex complex visibility functions directly. As such, the proposed system modulates incoming signals but demodulates desired correlations. In this work, we present the operation of the system, a validation of its operation using behavioral models of a traditional phased array, and a benchmarking of the code-modulated interferometer against traditional interferometers and focal-plane arrays.
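
    The key property used above, that the product of two orthogonal codes is a third orthogonal code so that correlations can be demultiplexed after square-law detection, can be demonstrated with Walsh-Hadamard codes in a few lines. The sketch below is a behavioral toy with two static signal amplitudes, not a model of the actual mm-wave front-end hardware.

      import numpy as np
      from scipy.linalg import hadamard

      N = 8                          # code length in chips
      H = hadamard(N)                # rows are mutually orthogonal +/-1 Walsh codes
      c1, c2 = H[1], H[2]            # codes applied to two receiver front-ends
      c12 = c1 * c2                  # their element-wise product is another Walsh row

      x1, x2 = 0.7, -1.3             # assumed static baseband amplitudes, illustration only

      combined = x1 * c1 + x2 * c2   # power combining after per-element code modulation
      squared = combined ** 2        # square-law detection
      # squared = x1^2 + x2^2 + 2*x1*x2*c12, so correlating with c12 isolates the cross term,
      # i.e. the visibility between the two elements.
      visibility = np.dot(squared, c12) / (2 * N)
      print(visibility, x1 * x2)     # both should match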

  14. The Impact of Diagnostic Code Misclassification on Optimizing the Experimental Design of Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Steven J. Schrodi

    2017-01-01

    Full Text Available Diagnostic codes within electronic health record systems can vary widely in accuracy. It has been noted that the accuracy of disease phenotype classification increases monotonically with the number of instances of a particular diagnostic code. As a growing number of health system databases become linked with genomic data, it is critically important to understand the effect of this misclassification on the power of genetic association studies. Here, I investigate the impact of diagnostic code misclassification on the power of genetic association studies with the aim of better informing experimental designs that use health informatics data. The trade-off between (i) the reduced misclassification rates obtained by utilizing additional instances of a diagnostic code per individual and (ii) the resulting smaller sample size is explored, and general rules are presented to improve experimental designs.
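
    The trade-off in (i) and (ii) can be made concrete with a toy power calculation for a case-control allele-frequency test: contaminating the case group with a fraction f of true controls shrinks the observable frequency difference, while demanding more code instances shrinks the sample. All of the numbers below (sample sizes, misclassification rates, allele frequencies) are invented for illustration and are not taken from the study.

      import numpy as np
      from scipy.stats import norm

      def gwas_power(n_cases, n_controls, p_case, p_control, f_mis, alpha=5e-8):
          """Approximate power of a two-proportion allele-frequency test when a fraction
          f_mis of the labelled cases are actually unaffected (non-differential error)."""
          p_case_obs = (1 - f_mis) * p_case + f_mis * p_control   # diluted case frequency
          se = np.sqrt(p_case_obs * (1 - p_case_obs) / (2 * n_cases)
                       + p_control * (1 - p_control) / (2 * n_controls))
          z = abs(p_case_obs - p_control) / se
          return norm.cdf(z - norm.ppf(1 - alpha / 2))

      # Hypothetical trade-off: more required code instances -> lower f_mis but fewer cases.
      scenarios = [(1, 5000, 0.30), (2, 3500, 0.15), (3, 2200, 0.05), (4, 1200, 0.02)]
      for k, n_cases, f_mis in scenarios:
          pw = gwas_power(n_cases, 20000, p_case=0.28, p_control=0.25, f_mis=f_mis)
          print(f">= {k} instances: n_cases={n_cases:5d}, f_mis={f_mis:.2f}, power={pw:.2f}")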

  15. Optimizing the IAEA safeguards system

    International Nuclear Information System (INIS)

    Drobysz, Sonia; Sitt, Bernard

    2011-09-01

    During the 2010 Non-Proliferation Treaty Review Conference, States parties recognized that the Additional Protocol (AP) provides increased confidence about the absence of undeclared nuclear material and activities in a State as a whole. They agreed in action 28 of the final document to encourage 'all States parties that have not yet done so to conclude and bring into force an AP as soon as possible and to implement them provisionally pending their entry into force'. Today, 109 out of 189 States parties to the NPT have brought an AP into force. The remaining outliers have not yet done so for three types of reasons: they do not clearly understand what the AP entails; when they do, they refuse to accept new non-proliferation obligations either on the grounds of a lack of progress in the realm of disarmament, or simply because they are not ready to bear the burden of additional safeguards measures. Strong incentives are thus needed in order to facilitate universalization of the AP. While external incentives would help make the AP a de facto norm and encourage its conclusion by reducing the deplored imbalanced implementation of non-proliferation and disarmament obligations, internal incentives developed by the Agency and its member States can also play an important role. In this respect, NPT States parties recommended in action 32 of the Review Conference final document 'that IAEA safeguards should be assessed and evaluated regularly. Decisions adopted by the IAEA policy bodies aimed at further strengthening the effectiveness and improving the efficiency of IAEA safeguards should be supported and implemented'. The safeguards system should therefore be optimized: the most effective use of safeguards measures as well as of safeguards human, financial and technical resources would indeed help enhance the acceptability and even attractiveness of the AP. Optimization can be attractive for States committed to a stronger verification regime independently from other

  16. Nonterminals and codings in defining variations of OL-systems

    DEFF Research Database (Denmark)

    Skyum, Sven

    1974-01-01

    The use of nonterminals versus the use of codings in variations of OL-systems is studied. It is shown that the use of nonterminals produces a comparatively low generative capacity in deterministic systems while it produces a comparatively high generative capacity in nondeterministic systems. Finally it is proved that the family of context-free languages is contained in the family generated by codings on propagating OL-systems with a finite set of axioms, which was one of the open problems in [10]. All the results in this paper can be found in [71] and [72].

  17. ATHENA code manual. Volume 1. Code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Carlson, K.E.; Roth, P.A.; Ransom, V.H.

    1986-09-01

    The ATHENA (Advanced Thermal Hydraulic Energy Network Analyzer) code has been developed to perform transient simulation of the thermal hydraulic systems which may be found in fusion reactors, space reactors, and other advanced systems. A generic modeling approach is utilized which permits as much of a particular system to be modeled as necessary. Control system and secondary system components are included to permit modeling of a complete facility. Several working fluids are available to be used in one or more interacting loops. Different loops may have different fluids with thermal connections between loops. The modeling theory and associated numerical schemes are documented in Volume I in order to acquaint the user with the modeling base and thus aid effective use of the code. The second volume contains detailed instructions for input data preparation

  18. Performance Evaluation of a Novel Optimization Sequential Algorithm (SeQ) Code for FTTH Network

    Directory of Open Access Journals (Sweden)

    Fazlina C.A.S.

    2017-01-01

    Full Text Available The SeQ code has advantages such as a variable cross-correlation property at any given number of users and weights, as well as effective suppression of the impacts of phase-induced intensity noise (PIIN) and a multiple access interference (MAI) cancellation property. The results reveal that, at a system performance analysis of BER = 10^-9, the SeQ code is capable of achieving 1 Gbps at distances of up to 60 km.

  19. Implementing a mainframe coding/abstracting system.

    Science.gov (United States)

    Paige, L

    1992-08-01

    In conclusion, the successful implementation of a medical record abstracting system was realized due to the following factors: extensive planning, thorough organization of tasks, controlled implementation, and ongoing controls. While thorough planning and organization will result in an efficient implementation, ongoing controls will ensure continued success and produce high quality results for any medical record system.

  20. Development of realistic thermal hydraulic system analysis code

    International Nuclear Information System (INIS)

    Lee, Won Jae; Chung, B. D; Kim, K. D.

    2002-05-01

    The realistic safety analysis system is essential for nuclear safety research, advanced reactor development, safety analysis in the nuclear industry and 'in-house' plant design capability development. In this project, we have developed a best-estimate multi-dimensional thermal-hydraulic system code, MARS, which is based on the integrated version of the RELAP5 and COBRA-TF codes. To improve the realistic analysis capability, we have improved the models for multi-dimensional two-phase flow phenomena and for advanced two-phase flow modeling. In addition, a GUI (Graphical User Interface) feature was developed to enhance the user's convenience. To develop the coupled analysis capability, the MARS code was linked with the three-dimensional reactor kinetics code (MASTER), the core thermal analysis code (COBRA-III/CP), and the best-estimate containment analysis code (CONTEMPT), resulting in MARS/MASTER/COBRA/CONTEMPT. Currently, the MARS code system has been distributed to 18 domestic organizations, including research, industrial and regulatory organizations and universities. MARS has been widely used for safety research on existing PWRs, advanced PWRs, CANDU and research reactors, for the pre-test analysis of TH experiments, and for other purposes

  1. Development of realistic thermal hydraulic system analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Won Jae; Chung, B. D; Kim, K. D. [and others

    2002-05-01

    The realistic safety analysis system is essential for nuclear safety research, advanced reactor development, safety analysis in the nuclear industry and 'in-house' plant design capability development. In this project, we have developed a best-estimate multi-dimensional thermal-hydraulic system code, MARS, which is based on the integrated version of the RELAP5 and COBRA-TF codes. To improve the realistic analysis capability, we have improved the models for multi-dimensional two-phase flow phenomena and for advanced two-phase flow modeling. In addition, a GUI (Graphical User Interface) feature was developed to enhance the user's convenience. To develop the coupled analysis capability, the MARS code was linked with the three-dimensional reactor kinetics code (MASTER), the core thermal analysis code (COBRA-III/CP), and the best-estimate containment analysis code (CONTEMPT), resulting in MARS/MASTER/COBRA/CONTEMPT. Currently, the MARS code system has been distributed to 18 domestic organizations, including research, industrial and regulatory organizations and universities. MARS has been widely used for safety research on existing PWRs, advanced PWRs, CANDU and research reactors, for the pre-test analysis of TH experiments, and for other purposes.

  2. Sequence Coding and Search System for licensee event reports: code listings. Volume 2

    International Nuclear Information System (INIS)

    Gallaher, R.B.; Guymon, R.H.; Mays, G.T.; Poore, W.P.; Cagle, R.J.; Harrington, K.H.; Johnson, M.P.

    1985-04-01

    Operating experience data from nuclear power plants are essential for safety and reliability analyses, especially analyses of trends and patterns. The licensee event reports (LERs) that are submitted to the Nuclear Regulatory Commission (NRC) by the nuclear power plant utilities contain much of this data. The NRC's Office for Analysis and Evaluation of Operational Data (AEOD) has developed, under contract with NSIC, a system for codifying the events reported in the LERs. The primary objective of the Sequence Coding and Search System (SCSS) is to reduce the descriptive text of the LERs to coded sequences that are both computer-readable and computer-searchable. This system provides a structured format for detailed coding of component, system, and unit effects as well as personnel errors. The database contains all current LERs submitted by nuclear power plant utilities for events occurring since 1981 and is updated on a continual basis. Volume 2 contains all valid and acceptable codes used for searching and encoding the LER data. This volume contains updated material through amendment 1 to revision 1 of the working version of ORNL/NSIC-223, Vol. 2

  3. Optimization Program for Drinking Water Systems

    Science.gov (United States)

    The Area-Wide Optimization Program (AWOP) provides tools and approaches for drinking water systems to meet water quality optimization goals and provide an increased – and sustainable – level of public health protection to their consumers.

  4. Hydrogen detection systems leak response codes

    International Nuclear Information System (INIS)

    Desmas, T.; Kong, N.; Maupre, J.P.; Schindler, P.; Blanc, D.

    1990-01-01

    A loss of tightness of a water tube inside a Steam Generator Unit (SGU) of a Fast Reactor is usually monitored by hydrogen detection systems. Such systems have demonstrated in the past their ability to detect a leak in an SGU. However, the increase in size of the SGU or the choice of ferritic material calls for improvement of these systems in order to avoid secondary leaks or to limit damage to the tube bundle. The R and D undertaken in France on this subject is presented. (author). 11 refs, 10 figs

  5. Automatic code generation for distributed robotic systems

    International Nuclear Information System (INIS)

    Jones, J.P.

    1993-01-01

    Hetero Helix is a software environment which supports relatively large robotic system development projects. The environment supports a heterogeneous set of message-passing LAN-connected common-bus multiprocessors, but the programming model seen by software developers is a simple shared memory. The conceptual simplicity of shared memory makes it an extremely attractive programming model, especially in large projects where coordinating a large number of people can itself become a significant source of complexity. We present results from three system development efforts conducted at Oak Ridge National Laboratory over the past several years. Each of these efforts used automatic software generation to create 10 to 20 percent of the system

  6. On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    KAUST Repository

    Sapenov, Yerzhan

    2017-07-06

    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends it to arbitrary turbulence statistics. This shows the redundancy of the time-dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughput.

  7. Source Code Vulnerabilities in IoT Software Systems

    Directory of Open Access Journals (Sweden)

    Saleh Mohamed Alnaeli

    2017-08-01

    Full Text Available An empirical study that examines the usage of known vulnerable statements in software systems developed in C/C++ and used for IoT is presented. The study is conducted on 18 open source systems comprised of millions of lines of code and containing thousands of files. Static analysis methods are applied to each system to determine the number of unsafe commands (e.g., strcpy, strcmp, and strlen) that are well-known among research communities to cause potential risks and security concerns, thereby decreasing a system’s robustness and quality. These unsafe statements are banned by many companies (e.g., Microsoft). The use of these commands should be avoided from the start when writing code and should be removed from legacy code over time as recommended by new C/C++ language standards. Each system is analyzed and the distribution of the known unsafe commands is presented. Historical trends in the usage of the unsafe commands of 7 of the systems are presented to show how the studied systems evolved over time with respect to the vulnerable code. The results show that the most prevalent unsafe command used for most systems is memcpy, followed by strlen. These results can be used to help train software developers on secure coding practices so that they can write higher quality software systems.
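
    As an illustration of the kind of static counting described above (not the authors' actual tool), the sketch below tallies occurrences of well-known unsafe C/C++ library calls in a source tree; the function list and the "src" path are placeholders.

```python
# Hedged sketch: count known unsafe C/C++ library calls under a source tree.
import re
from collections import Counter
from pathlib import Path

UNSAFE = ["strcpy", "strcat", "sprintf", "gets", "memcpy", "strlen", "strcmp"]
CALL_RE = re.compile(r"\b(" + "|".join(UNSAFE) + r")\s*\(")

def scan_tree(root: str) -> Counter:
    """Tally unsafe calls over all C/C++ source and header files under `root`."""
    counts = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix in {".c", ".cpp", ".h", ".hpp"}:
            text = path.read_text(errors="ignore")
            counts.update(m.group(1) for m in CALL_RE.finditer(text))
    return counts

if __name__ == "__main__":
    for name, n in scan_tree("src").most_common():   # "src" is a placeholder path
        print(f"{name}: {n}")
```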

  8. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Science.gov (United States)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that could be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal analysis or adiabatic analysis. Adiabatic analysis may be done using the Martini moving gas node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included. Graphical displays of engine motions, pressures, and temperatures are included. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions; these results are shown as generated by each of the two Martini analyses. Two sample optimization searches are shown using specified piston motion isothermal analysis. One is for three adjustable inputs and one is for four. Also, two optimization searches for calculated piston motion are presented for three and for four adjustable inputs. The effect of leakage is evaluated. Suggestions for further work are given.

  9. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical–thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity- and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based), and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and
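
    As a hedged sketch of the kind of weighted parameter fit gemsfit2 automates, the example below uses SciPy's least_squares as a stand-in for the NLopt solvers and a toy two-parameter model in place of a GEM model; all data, bounds and parameter names are invented.

```python
# Minimal weighted parameter fit against "measured" data (toy model, not GEMSFITS).
import numpy as np
from scipy.optimize import least_squares

T = np.array([298.15, 323.15, 348.15, 373.15])       # measured temperatures, K
y_meas = np.array([-1.20, -1.05, -0.93, -0.84])       # measured property (toy data)
sigma = 0.02 * np.ones_like(y_meas)                   # measurement uncertainties

def model(params, T):
    a, b = params                                     # parameters to be fitted
    return a + b * np.log(T / 298.15)

def residuals(params):
    return (model(params, T) - y_meas) / sigma        # weighted residuals

fit = least_squares(residuals, x0=[-1.0, 0.5], bounds=([-5, 0], [0, 5]))
print("fitted parameters:", fit.x)
print("reduced chi-square:", np.sum(fit.fun**2) / (len(T) - len(fit.x)))
```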

  10. SRAC2006; A Comprehensive neutronics calculation code system

    OpenAIRE

    奥村 啓介; 久語 輝彦; 金子 邦男; 土橋 敬一郎

    2007-01-01

    The SRAC is a code system applicable to neutronics analysis of a variety of reactor types. Since the publication of the second version of the users manual (JAERI-1302) in 1986 for the SRAC system, a number of additions and modifications to the functions and the library data have been made to establish a comprehensive neutronics code system. The current system includes major neutron data libraries (JENDL-3.3, JENDL-3.2, ENDF/B-VII, ENDF/B-VI.8, JEFF-3.1, JEF-2.2, etc.), and integrates five ele...

  11. Grid-code of Croatian power system

    International Nuclear Information System (INIS)

    Toljan, I.; Mesic, M.; Kalea, M.; Koscak, Z.

    2003-01-01

    Grid Rules by the Croatian Electricity Utility deal with the control and usage of the Croatian power system's transmission and distribution grid. Furthermore, these rules include obligations and permissions of power grid users and owners, with the aim of a reliable electricity supply. (author)

  12. Joint Transmitter-Receiver Optimization in the Downlink CDMA Systems

    Directory of Open Access Journals (Sweden)

    Mohammad Saquib

    2002-08-01

    Full Text Available To maximize the downlink code-division multiple access (CDMA) system capacity, we propose to minimize the total transmitted power of the system subject to users' signal-to-interference ratio (SIR) requirements via designing optimum transmitter sequences and utilizing linear optimum receivers (minimum mean square error (MMSE) receivers). In our work on joint transmitter-receiver design for the downlink CDMA systems with multiple antennas and multipath channels, we develop several optimization algorithms by considering various system constraints and prove their convergence. We empirically observed that under the optimization algorithm with no constraint on the system, the optimum receiver structure matches the received transmitter sequences. A simulation study is performed to see how the different practical system constraints penalize the system with respect to the optimum algorithm with no constraint on the system.
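
    A minimal sketch of the linear MMSE receiver referred to above, for a single-antenna, single-path synchronous downlink; the spreading length, user powers and noise variance are arbitrary illustrative values, and the multipath/multi-antenna aspects of the paper are not modeled.

```python
# Linear MMSE receiver for one user in a toy synchronous CDMA downlink.
import numpy as np

rng = np.random.default_rng(0)
N, K = 16, 4                               # spreading length, number of users
S = rng.standard_normal((N, K))
S /= np.linalg.norm(S, axis=0)             # unit-energy transmitter sequences
p = np.array([1.0, 0.8, 0.6, 0.5])         # transmit powers
sigma2 = 0.1                               # noise variance

# Covariance of the received chip vector and the MMSE filter for user 0.
R = (S * p) @ S.T + sigma2 * np.eye(N)
w0 = np.linalg.solve(R, S[:, 0])

b = rng.choice([-1.0, 1.0], size=K)        # one BPSK symbol per user
r = S @ (np.sqrt(p) * b) + np.sqrt(sigma2) * rng.standard_normal(N)
print("detected bit for user 0:", np.sign(w0 @ r), "sent:", b[0])
```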

  13. Optimal Control and Optimization of Stochastic Supply Chain Systems

    CERN Document Server

    Song, Dong-Ping

    2013-01-01

    Optimal Control and Optimization of Stochastic Supply Chain Systems examines its subject in the context of the presence of a variety of uncertainties. Numerous examples with intuitive illustrations and tables are provided, to demonstrate the structural characteristics of the optimal control policies in various stochastic supply chains and to show how to make use of these characteristics to construct easy-to-operate sub-optimal policies. In Part I, a general introduction to stochastic supply chain systems is provided. Analytical models for various stochastic supply chain systems are formulated and analysed in Part II. In Part III the structural knowledge of the optimal control policies obtained in Part II is utilized to construct easy-to-operate sub-optimal control policies for various stochastic supply chain systems accordingly. Finally, Part IV discusses the optimisation of threshold-type control policies and their robustness. A key feature of the book is its tying together of ...

  14. Modular ORIGEN-S for multi-physics code systems

    International Nuclear Information System (INIS)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack

    2011-01-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including
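
    The sketch below is not the ORIGEN-S interface; it only illustrates the depletion/decay balance dN/dt = A·N that such a solver advances in time, using a made-up three-nuclide chain and a matrix exponential.

```python
# Toy depletion/decay step: advance dN/dt = A*N with a matrix exponential.
# Rates and nuclides are invented; this is not the ORIGEN-S API or data.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-5, 3e-6        # decay constants of nuclides 1 and 2 (1/s)
phi_sigma = 2e-7               # flux * capture cross section feeding 1 -> 2 (1/s)

# Rows/columns: nuclide 1, nuclide 2, nuclide 3 (stable end product).
A = np.array([
    [-(lam1 + phi_sigma), 0.0,   0.0],
    [ phi_sigma,          -lam2, 0.0],
    [ lam1,                lam2, 0.0],
])

N0 = np.array([1.0e20, 0.0, 0.0])          # initial concentrations (atoms)
dt = 30 * 24 * 3600.0                      # 30-day step
N = expm(A * dt) @ N0                      # atoms are conserved (columns sum to zero)
print("concentrations after 30 days:", N)
```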

  15. User's manual for the BNW-I optimization code for dry-cooled power plants. Volume I

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.; Faletti, D.W.; Wiles, L.E.

    1977-01-01

    This User's Manual provides information on the use and operation of three versions of BNW-I, a computer code developed by Battelle, Pacific Northwest Laboratory (PNL) as a part of its activities under the ERDA Dry Cooling Tower Program. These three versions of BNW-I were used as reported elsewhere to obtain comparative incremental costs of electrical power production by two advanced concepts (one using plastic heat exchangers and one using ammonia as an intermediate heat transfer fluid) and a state-of-the-art system. The computer program offers a comprehensive method of evaluating the cost savings potential of dry-cooled heat rejection systems and components for power plants. This method goes beyond simple "figure-of-merit" optimization of the cooling tower and includes such items as the cost of replacement capacity needed on an annual basis and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence, the BNW-I code is a useful tool for determining potential cost savings of new heat transfer surfaces, new piping or other components as part of an optimized system for a dry-cooled power plant.

  16. Dependability of self-optimizing mechatronic systems

    CERN Document Server

    Rammig, Franz; Schäfer, Wilhelm; Sextro, Walter

    2014-01-01

    Intelligent technical systems, which combine mechanical, electrical and software engineering with control engineering and advanced mathematics, go far beyond the state of the art in mechatronics and open up fascinating perspectives. Among these systems are so-called self-optimizing systems, which are able to adapt their behavior autonomously and flexibly to changing operating conditions. Self-optimizing systems create high value for example in terms of energy and resource efficiency as well as reliability. The Collaborative Research Center 614 "Self-optimizing Concepts and Structures in Mechanical Engineering" pursued the long-term aim to open up the active paradigm of self-optimization for mechanical engineering and to enable others to develop self-optimizing systems. This book is directed to researchers and practitioners alike. It provides a design methodology for the development of self-optimizing systems consisting of a reference process, methods, and tools. The reference process is divided into two phase...

  17. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for highly precise particle therapy, especially for media containing inhomogeneities. However, the inherent choice of computational parameters in the MC simulation codes GATE, PHITS and FLUKA, as established for uniform scanning proton beams, needs to be evaluated. This means that the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that a gold standard for setting computational parameters cannot be fixed for all proton therapy applications, since the impact of these parameters depends on the proton irradiation technique.
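
    As a hedged illustration of the post-processing involved, the sketch below builds a percentage depth dose from a synthetic depth-dose array and extracts a distal-80% range value of the kind that can be compared between codes; none of the GATE, PHITS or FLUKA data are used.

```python
# Build a percentage depth dose (PDD) and extract a distal range metric (R80).
import numpy as np

depth = np.linspace(0.0, 300.0, 601)                     # mm, 0.5 mm steps
dose = 0.25 + np.exp(-((depth - 220.0) / 12.0) ** 2)     # toy plateau + Bragg peak

pdd = 100.0 * dose / dose.max()                          # percentage depth dose

def r80(depth, pdd):
    """Depth where the distal falloff first drops below 80% of the maximum dose."""
    i_max = int(np.argmax(pdd))
    below = np.nonzero(pdd[i_max:] < 80.0)[0]
    j = i_max + below[0]                                 # first sample below 80% past the peak
    f = (pdd[j - 1] - 80.0) / (pdd[j - 1] - pdd[j])      # linear interpolation factor
    return float(depth[j - 1] + f * (depth[j] - depth[j - 1]))

print(f"R80 = {r80(depth, pdd):.1f} mm")
```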

  18. System Data Model (SDM) Source Code

    Science.gov (United States)

    2012-08-23


  19. Analysis of Coded FHSS Systems with Multiple Access Interference over Generalized Fading Channels

    Directory of Open Access Journals (Sweden)

    Salam A. Zummo

    2009-02-01

    Full Text Available We study the effect of interference on the performance of coded FHSS systems. This is achieved by modeling the physical channel in these systems as a block fading channel. In the derivation of the bit error probability over Nakagami fading channels, we use the exact statistics of the multiple access interference (MAI) in FHSS systems. Due to the mathematically intractable expression of the Rician distribution, we use the Gaussian approximation to derive the error probability of coded FHSS over Rician fading channel. The effect of pilot-aided channel estimation is studied for Rician fading channels using the Gaussian approximation. From this, the optimal hopping rate in coded FHSS is approximated. Results show that the performance loss due to interference increases as the hopping rate decreases.

  20. Modification of BINX code for HP9000 system

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. C.; Kim, Y. J.; Kim, Y. G.; Chung, H. T.

    1997-12-01

    As part of the effort to construct an integrated computation system, the K-CORE system for LMR core design and analysis, the BINX code, which converts the format of CCCC standard input/output files, has been modified so that it works on HP 9000 workstations. The BINX code was improved to manipulate input/output files in the newer CCCC version IV format, and some bugs in the former code were eliminated. This gives BINX compatibility of input/output files among the calculation modules. Hence, a cross-section library processing system that can convert and produce standard input/output files satisfying the user's functional requirements has been established in the K-CORE system. (author). 10 refs.

  1. FAST: An advanced code system for fast reactor transient analysis

    International Nuclear Information System (INIS)

    Mikityuk, Konstantin; Pelloni, Sandro; Coddington, Paul; Bubelis, Evaldas; Chawla, Rakesh

    2005-01-01

    One of the main goals of the FAST project at PSI is to establish a unique analytical code capability for the core and safety analysis of advanced critical (and sub-critical) fast-spectrum systems for a wide range of different coolants. Both static and transient core physics, as well as the behaviour and safety of the power plant as a whole, are studied. The paper discusses the structure of the code system, including the organisation of the interfaces and data exchange. Examples of validation and application of the individual programs, as well as of the complete code system, are provided using studies carried out within the context of designs for experimental accelerator-driven, fast-spectrum systems

  2. Fuzzy logic control and optimization system

    Science.gov (United States)

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  3. An Optimization Model for Design of Asphalt Pavements Based on IHAP Code Number 234

    Directory of Open Access Journals (Sweden)

    Ali Reza Ghanizadeh

    2016-01-01

    Full Text Available Pavement construction is one of the most costly parts of transportation infrastructures. Incommensurate design and construction of pavements, in addition to the loss of the initial investment, would impose indirect costs to the road users and reduce road safety. This paper aims to propose an optimization model to determine the optimal configuration as well as the optimum thickness of different pavement layers based on the Iran Highway Asphalt Paving Code Number 234 (IHAP Code 234). After developing the optimization model, the optimum thickness of pavement layers for secondary rural roads, major rural roads, and freeways was determined based on the recommended prices in “Basic Price List for Road, Runway and Railway” of Iran in 2015 and several charts were developed to determine the optimum thickness of pavement layers including asphalt concrete, granular base, and granular subbase with respect to road classification, design traffic, and resilient modulus of subgrade. Design charts confirm that in the current situation (material prices in 2015), application of asphalt treated layer in pavement structure is not cost effective. Also it was shown that, with increasing the strength of subgrade soil, the subbase layer may be removed from the optimum structure of pavement.
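
    The sketch below shows the general shape of such a layer-thickness optimization as a linear program (minimize layer cost subject to a structural-capacity requirement). The layer coefficients, unit costs, thickness bounds and required capacity are invented values for illustration, not quantities taken from IHAP Code 234.

```python
# Toy layer-thickness optimization: minimize cost subject to a structural requirement.
from scipy.optimize import linprog

cost = [9.0, 3.0, 1.5]            # cost per cm of asphalt, base, subbase (arbitrary units)
a = [0.17, 0.055, 0.04]           # structural layer coefficients per cm (assumed)
SN_req = 4.2                      # required structural capacity (assumed)

# minimize cost @ d   subject to  a @ d >= SN_req  (written as -a @ d <= -SN_req)
res = linprog(c=cost,
              A_ub=[[-a[0], -a[1], -a[2]]], b_ub=[-SN_req],
              bounds=[(5, 30), (15, 40), (15, 50)])     # min/max thickness per layer, cm

print("optimal thicknesses (cm):", res.x, "cost:", res.fun)
```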

  4. European coding system for tissues and cells: a challenge unmet?

    Science.gov (United States)

    Reynolds, Melvin; Warwick, Ruth M; Poniatowski, Stefan; Trias, Esteve

    2010-11-01

    The Comité Européen de Normalisation (European Committee for Standardization, CEN) Workshop on Coding of Information and Traceability of Human Tissues and Cells was established by the Expert Working Group of the Directorate General for Health and Consumer Affairs of the European Commission (DG SANCO) to identify requirements concerning the coding of information and the traceability of human tissues and cells, and propose guidelines and recommendations to permit the implementation of the European Coding system required by the European Tissues and Cells Directive 2004/23/EC (ED). The Workshop included over 70 voluntary participants from tissue, blood and eye banks, national ministries for healthcare, transplant organisations, universities and coding organisations; mainly from Europe with a small number of representatives from professionals in Canada, Australia, USA and Japan. The Workshop commenced in April 2007 and held its final meeting in February 2008. The draft Workshop Agreement went through a public comment phase from 15 December 2007 until 15 January 2008 and the endorsement period ran from 9 April 2008 until 2 May 2008. The endorsed CEN Workshop Agreement (CWA) set out the issues regarding a common coding system, qualitatively assessed what the industry felt was required of a coding system, reviewed coding systems that were put forward as potential European coding systems and established a basic specification for a proposed European coding system for human tissues and cells, based on ISBT 128, and which is compatible with existing systems of donation identification, traceability and nomenclatures, indicating how implementation of that system could be approached. The CWA, and the associated Workshop proposals with recommendations, were finally submitted to the European Commission and to the Committee of Member States that assists its management process under article 29 of the Directive 2004/23/EC on May 25 2008. In 2009 the European Commission initiated an

  5. 3D neutronic codes coupled with thermal-hydraulic system codes for PWR, and BWR and VVER reactors

    Energy Technology Data Exchange (ETDEWEB)

    Langenbuch, S.; Velkov, K. [GRS, Garching (Germany); Lizorkin, M. [Kurchatov-Institute, Moscow (Russian Federation)] [and others

    1997-07-01

    This paper describes the objectives of code development for coupling 3D neutronics codes with thermal-hydraulic system codes. The present status of coupling ATHLET with three 3D neutronics codes for VVER- and LWR-reactors is presented. After describing the basic features of the 3D neutronic codes BIPR-8 from Kurchatov-Institute, DYN3D from Research Center Rossendorf and QUABOX/CUBBOX from GRS, first applications of coupled codes for different transient and accident scenarios are presented. The need of further investigations is discussed.

  6. Design of Thermal Systems Using Topology Optimization

    DEFF Research Database (Denmark)

    Haertel, Jan Hendrik Klaas

    The goal of this thesis is to apply topology optimization to the design of different thermal systems such as heat sinks and heat exchangers in order to improve the thermal performance of these systems compared to conventional designs. The design of thermal systems is a complex task that has... of optimized designs are presented within this thesis. The main contribution of the thesis is the development of several numerical optimization models that are applied to different design challenges within thermal engineering. Topology optimization is applied in an industrial project to design the heat... The design of 3D printed dry-cooled power plant condensers using a simplified thermofluid topology optimization model is presented in another study. A benchmarking of the optimized geometries against a conventional heat exchanger design is conducted and the topology optimized designs show a superior

  7. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

    We demonstrate the feasibility of three dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
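
    For orientation, the sketch below runs the classic two-plane Gerchberg-Saxton iteration on which modified GS schemes such as the MSW-MP algorithm build; it is scalar and single-plane, with no polarization handling, and the source and target amplitudes are synthetic.

```python
# Classic two-plane Gerchberg-Saxton phase retrieval (scalar, no polarization).
import numpy as np

rng = np.random.default_rng(1)
n = 64
source_amp = np.ones((n, n))                               # uniform illumination amplitude
target_amp = np.zeros((n, n)); target_amp[24:40, 24:40] = 1.0   # desired far-field pattern

phase = rng.uniform(0, 2 * np.pi, (n, n))                  # random initial phase
for _ in range(200):
    field = source_amp * np.exp(1j * phase)
    far = np.fft.fft2(field)
    far = target_amp * np.exp(1j * np.angle(far))          # impose target amplitude, keep phase
    near = np.fft.ifft2(far)
    phase = np.angle(near)                                  # impose source amplitude, keep phase

print("retrieved phase mask shape:", phase.shape)
```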

  8. Tritium module for ITER/Tiber system code

    International Nuclear Information System (INIS)

    Finn, P.A.; Willms, S.; Busigin, A.; Kalyanam, K.M.

    1988-01-01

    A tritium module was developed for the ITER/Tiber system code to provide information on capital costs, tritium inventory, power requirements and building volumes for these systems. In the tritium module, the main tritium subsystems (plasma processing, atmospheric cleanup, water cleanup, blanket processing) are each represented by simple scaleable algorithms. 6 refs., 2 tabs

  9. Physical-layer network coding in coherent optical OFDM systems.

    Science.gov (United States)

    Guan, Xun; Chan, Chun-Kit

    2015-04-20

    We present the first experimental demonstration and characterization of the application of optical physical-layer network coding in coherent optical OFDM systems. It combines two optical OFDM frames to share the same link so as to enhance system throughput, while individual OFDM frames can be recovered with digital signal processing at the destined node.

  10. Progress on China nuclear data processing code system

    Science.gov (United States)

    Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu

    2017-09-01

    China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. The programming language of Ruler is Fortran-90. Ruler has been tested on 32-bit computers with Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with calculation results obtained with the NJOY99 [3] processing code, and validation by using the WIMSD5B code.
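
    The sketch below illustrates the flux-weighted group collapse a processing code of this kind performs, sigma_g = ∫ sigma(E) phi(E) dE / ∫ phi(E) dE over each group; the pointwise cross section, weighting flux and group boundaries are placeholders, not Ruler data.

```python
# Flux-weighted collapse of a pointwise cross section into coarse groups.
import numpy as np

E = np.logspace(-5, 7, 2000)                 # energy grid, eV
sigma = 10.0 / np.sqrt(E) + 2.0              # toy 1/v cross section + constant (barns)
phi = 1.0 / E                                # toy 1/E weighting flux

group_bounds = [1e-5, 0.625, 1e3, 1e5, 1e7]  # eV, coarse 4-group structure (assumed)
for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
    m = (E >= lo) & (E <= hi)
    sg = np.trapz(sigma[m] * phi[m], E[m]) / np.trapz(phi[m], E[m])
    print(f"group {lo:.3g}-{hi:.3g} eV: sigma_g = {sg:.3f} b")
```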

  11. Adaptive Morse code communication system for severely disabled individuals.

    Science.gov (United States)

    Yang, C H

    2000-01-01

    Morse code with an easy-to-operate, single switch input system has been shown to be an excellent communication adaptive device. Because maintaining a stable typing rate is not easy for the disabled, the automatic recognition of Morse code is difficult. Therefore, a suitable adaptive automatic recognition method is needed. This paper presents the application of a Least-Mean-Square algorithm to adaptive Morse code recognition for persons with impaired hand coordination and dexterity. Four processes are involved in this adaptive Morse code recognition method: space recognition, tone recognition, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method results in a better recognition rate for the participants tested in comparison to other methods from the literature.
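
    As a hedged illustration (not the paper's exact algorithm), the sketch below labels key-down durations as dots or dashes while adapting its duration estimates with an LMS-style correction, so the recognizer can follow a drifting typing rate; the step size and sample durations are invented.

```python
# Adaptive dot/dash classification with an LMS-style running update.
MU = 0.2                      # adaptation step size (assumed)

def make_classifier(dot=0.1, dash=0.3):
    """Return a closure that labels a tone duration and adapts its estimates."""
    est = {"dot": dot, "dash": dash}          # running duration estimates, seconds
    def classify(duration):
        label = "dot" if abs(duration - est["dot"]) < abs(duration - est["dash"]) else "dash"
        est[label] += MU * (duration - est[label])     # LMS-style correction
        return label, dict(est)
    return classify

classify = make_classifier()
for d in [0.09, 0.31, 0.12, 0.38, 0.15, 0.45]:   # durations from a gradually slowing sender
    print(d, *classify(d))
```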

  12. Code conversion for system design and safety analysis of NSSS

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hae Cho; Kim, Young Tae; Choi, Young Gil; Kim, Hee Kyung [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-01-01

    This report describes the overall project work related to the conversion, installation and validation of computer codes used in NSSS design and safety analysis of nuclear power plants. Domain/OS computer codes for system safety analysis are installed and validated on the Apollo DN10000, and the Apollo versions are then converted and installed on the HP9000/700 series with appropriate validation. Also, COOLII and COAST, which are Cyber-version computer codes, are converted into Apollo DN10000 and HP9000/700 versions and installed with validation. The report details the whole process of code conversion and installation, as well as the software verification and validation results, which are attached to this report. 12 refs., 8 figs. (author)

  13. Application of neutron/gamma transport codes for the design of explosive detection systems

    International Nuclear Information System (INIS)

    Elias, E.; Shayer, Z.

    1994-01-01

    Applications of neutron and gamma transport codes to the design of nuclear techniques for detecting concealed explosive materials are discussed. The methodology of integrating radiation transport computations into the development, optimization and analysis phases of these new technologies is discussed. Transport and Monte Carlo codes are used for proof of concept, to guide system integration, to reduce the extent of the experimental program and to provide insight into the physical problems involved. The paper concentrates on detection techniques based on thermal and fast neutron interactions in the interrogated object. (authors). 6 refs., 1 tab., 5 figs

  14. Noncooperatively optimized tolerance: decentralized strategic optimization in complex systems.

    Science.gov (United States)

    Vorobeychik, Yevgeniy; Mayo, Jackson R; Armstrong, Robert C; Ruthruff, Joseph R

    2011-09-02

    We introduce noncooperatively optimized tolerance (NOT), a game theoretic generalization of highly optimized tolerance (HOT), which we illustrate in the forest fire framework. As the number of players increases, NOT retains features of HOT, such as robustness and self-dissimilar landscapes, but also develops features of self-organized criticality. The system retains considerable robustness even as it becomes fractured, due in part to emergent cooperation between players, and at the same time exhibits increasing resilience against changes in the environment, giving rise to intermediate regimes where the system is robust to a particular distribution of adverse events, yet not very fragile to changes.

  15. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  16. Simulation of water hammer phenomena using the system code ATHLET

    Energy Technology Data Exchange (ETDEWEB)

    Bratfisch, Christoph; Koch, Marco K. [Bochum Univ. (Germany). Reactor Simulation and Safety Group

    2017-07-15

    Water Hammer Phenomena can endanger the integrity of structures leading to a possible failure of pipes in nuclear power plants as well as in many industrial applications. These phenomena can arise in nuclear power plants in the course of transients and accidents induced by the start-up of auxiliary feed water systems or emergency core cooling systems in combination with rapid acting valves and pumps. To contribute to further development and validation of the code ATHLET (Analysis of Thermalhydraulics of Leaks and Transients), an experiment performed in the test facility Pilot Plant Pipework (PPP) at Fraunhofer UMSICHT is simulated using the code version ATHLET 3.0A.

  17. Simulation of water hammer phenomena using the system code ATHLET

    International Nuclear Information System (INIS)

    Bratfisch, Christoph; Koch, Marco K.

    2017-01-01

    Water Hammer Phenomena can endanger the integrity of structures leading to a possible failure of pipes in nuclear power plants as well as in many industrial applications. These phenomena can arise in nuclear power plants in the course of transients and accidents induced by the start-up of auxiliary feed water systems or emergency core cooling systems in combination with rapid acting valves and pumps. To contribute to further development and validation of the code ATHLET (Analysis of Thermalhydraulics of Leaks and Transients), an experiment performed in the test facility Pilot Plant Pipework (PPP) at Fraunhofer UMSICHT is simulated using the code version ATHLET 3.0A.

  18. Java Source Code Analysis for API Migration to Embedded Systems

    Energy Technology Data Exchange (ETDEWEB)

    Winter, Victor [Univ. of Nebraska, Omaha, NE (United States); McCoy, James A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guerrero, Jonathan [Univ. of Nebraska, Omaha, NE (United States); Reinke, Carl Werner [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Perry, James Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java’s APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.

  19. OSCAR-4 Code System Application to the SAFARI-1 Reactor

    International Nuclear Information System (INIS)

    Stander, Gerhardt; Prinsloo, Rian H.; Tomasevic, Djordje I.; Mueller, Erwin

    2008-01-01

    The OSCAR reactor calculation code system consists of a two-dimensional lattice code, the three-dimensional nodal core simulator code MGRAC and related service codes. The major difference between the new version of the OSCAR system, OSCAR-4, and its predecessor, OSCAR-3, is the new version of MGRAC which contains many new features and model enhancements. In this work some of the major improvements in the nodal diffusion solution method, history tracking, nuclide transmutation and cross section models are described. As part of the validation process of the OSCAR-4 code system (specifically the new MGRAC version), some of the new models are tested by comparing computational results to SAFARI-1 reactor plant data for a number of operational cycles and for varying applications. A specific application of the new features allows correct modeling of, amongst others, the movement of fuel-follower type control rods and dynamic in-core irradiation schedules. It is found that the effect of the improved control rod model, applied over multiple cycles of the SAFARI-1 reactor operation history, has a significant effect on in-cycle reactivity prediction and fuel depletion. (authors)

  20. Solving linear systems in FLICA-4, thermohydraulic code for 3-D transient computations

    International Nuclear Information System (INIS)

    Allaire, G.

    1995-01-01

    FLICA-4 is a computer code, developed at the CEA (France), devoted to steady state and transient thermal-hydraulic analysis of nuclear reactor cores, for small-size problems (around 100 mesh cells) as well as for large ones (more than 100000), on either standard workstations or vector supercomputers. As in time-implicit codes, the largest time- and memory-consuming part of FLICA-4 is the routine dedicated to solving the linear system (the size of which is of the order of the number of cells). Therefore, the efficiency of the code is crucially influenced by the optimization of the algorithms used in assembling and solving linear systems: direct methods such as the Gauss (or LU) decomposition for moderate-size problems, and iterative methods such as the preconditioned conjugate gradient for large problems. 6 figs., 13 refs
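
    The two solution strategies mentioned above can be sketched as follows with generic symmetric positive-definite test matrices (not FLICA-4 data): a direct LU factorization for a small dense system and an ILU-preconditioned conjugate gradient for a large sparse one.

```python
# Direct LU solve for a small system; preconditioned CG for a large sparse one.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator
from scipy.linalg import lu_factor, lu_solve

# Small dense problem: direct LU decomposition.
n_small = 100
A_small = (np.diag(4.0 * np.ones(n_small))
           + np.diag(-1.0 * np.ones(n_small - 1), 1)
           + np.diag(-1.0 * np.ones(n_small - 1), -1))
b_small = np.ones(n_small)
x_small = lu_solve(lu_factor(A_small), b_small)

# Large sparse problem: ILU-preconditioned conjugate gradient.
n = 100_000
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)
M = LinearOperator((n, n), matvec=spilu(A).solve)
x, info = cg(A, b, M=M)

print("direct residual:", np.linalg.norm(A_small @ x_small - b_small))
print("CG converged:", info == 0)
```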

  1. Multivariate optimization of production systems

    International Nuclear Information System (INIS)

    Carroll, J.A.; Horne, R.N.

    1992-01-01

    This paper reports that mathematically, optimization involves finding the extreme values of a function. Given a function of several variables, Z = f(x_1, x_2, x_3, ..., x_n), an optimization scheme will find the combination of these variables that produces an extreme value in the function, whether it is a minimum or a maximum value. Many examples of optimization exist. For instance, if a function gives an investor's expected return on the basis of different investments, numerical optimization of the function will determine the mix of investments that will yield the maximum expected return. This is the basis of modern portfolio theory. If a function gives the difference between a set of data and a model of the data, numerical optimization of the function will produce the best fit of the model to the data. This is the basis for nonlinear parameter estimation. Similar examples can be given for network analysis, queuing theory, decision analysis, etc.
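
    A tiny illustration of the multivariate optimization described above: numerically locating the minimum of a function of several variables Z = f(x_1, ..., x_n), here a standard four-variable Rosenbrock test function rather than a production-system model.

```python
# Numerical minimization of a multivariate test function.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # 4-variable Rosenbrock function; its minimum is at x = (1, 1, 1, 1).
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

result = minimize(f, x0=np.zeros(4), method="Nelder-Mead",
                  options={"maxiter": 10000, "xatol": 1e-10, "fatol": 1e-10})
print("optimum at:", np.round(result.x, 3), "value:", result.fun)
```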

  2. libreta: Computerized Optimization and Code Synthesis for Electron Repulsion Integral Evaluation.

    Science.gov (United States)

    Zhang, Jun

    2018-02-13

    A new library called libreta for the evaluation of electron repulsion integrals (ERIs) over segmented and contracted Gaussian functions is developed. Our libreta is optimized from three aspects: (1) The Obara-Saika, Dupuis-Rys-King, and McMurchie-Davidson method are all employed. The recurrence relations involved are optimized by tree-search for each combination of angular momenta, and in the best case, 50% of the intermediates can be eliminated to reduce the computational cost. (2) The optimized codes for recurrence relations are combined with different contraction orders, each of which is suitable for ERIs of different angular momenta and contraction patterns. In practice, libreta will determine and use the best scheme to evaluate each ERI. (3) libreta is also optimized at the coding level. For example, with common subexpression elimination and local memory access, the performance can be increased by about 6% and 20%, respectively. The performance was compared with libint2. For both popular segmented and contracted basis sets, libreta can be faster than libint2 by 7.2-912.0%. For basis sets of heavy elements that contain Gaussian basis functions of large contraction degrees, the performance can be increased 20-30 times. We also tested the performance of libreta in direct self-consistent field (SCF) calculations and compared it with NWChem. In most cases, the average time for one SCF iteration by libreta is less than NWChem by 144.2-495.9%. Finally, we discuss the origin of redundancies occurring in the recurrence relations and derive an upper bound of the least number of intermediates required to be calculated in a McMurchie-Davidson recurrence, which is confirmed by ours as well as previous authors' results. We expect that libreta can become a useful tool for theoretical and computational chemists to develop their own algorithms rapidly.

  3. User's guide for the GSMP/OCMHD system code

    Energy Technology Data Exchange (ETDEWEB)

    Dennis, C. B.; Berry, G. F.

    1980-12-01

    The Systems Analysis group of the ANL Engineering Division conducts overall system studies for various power plant concepts, utilizing a computer simulation code. Analytical investigations explore a range of possible performance variables, in order to determine the sensitivity of a specific plant design to variation in key system parameters and, ultimately, to establish probable system performance limits. To accomplish this task, a Generalized System Modeling Program (GSMP) has been developed that will analyze and simulate the particular system of interest for any number of different configurations, automatically holding constraints while conducting either sensitivity studies or optimizations. One system investigated, while developing the ANL/GSMP code, is an open-cycle magneto-hydrodynamic (OCMHD) power plant. By linking mathematical models representing these OCMHD power plant components to the executive level GSMP driver the resulting system code, GSMP/OCMHD, can be used to simulate any OCMHD power plant configuration. This report, a user's guide for GSMP/OCMHD, describes the process for setting up an OCMHD configuration, preparing the input defining that configuration, running the computer code and interpreting the results generated.

  4. Electric power system applications of optimization

    CERN Document Server

    Momoh, James A

    2008-01-01

    Contents: Introduction; Structure of a Generic Electric Power System; Power System Models; Power System Control; Power System Security Assessment; Power System Optimization as a Function of Time; Review of Optimization Techniques Applicable to Power Systems; Electric Power System Models; Complex Power Concepts; Three-Phase Systems; Per Unit Representation; Synchronous Machine Modeling; Reactive Capability Limits; Prime Movers and Governing Systems; Automatic Gain Control; Transmission Subsystems; Y-Bus Incorporating the Transformer Effect; Load Models; Available Transfer Capability; Illustrative Examples; Power

  5. Satellite link protocols design for the CODE system

    Science.gov (United States)

    Fernandez, A.; Vidaller, L.; Miguel, C.; Briones, D.

    1989-05-01

    The design of satellite link protocols for Very Small Aperture Terminal (VSAT) systems is outlined. The CODE system (Cooperative Olympus Data Experiment) is a VSAT system with two main characteristics: very low bit error rate, and multiple access over FDM channels in the inbound link. The design of the link protocols for this system covers two main aspects: error control procedures and medium access control procedures. In order to analyze both aspects, a profile of the average user of the CODE system is defined in terms of the types of traffic and of the message arrival and service rates for every type of traffic. An analysis of the mean time between failures is made, and the average delay and throughput for different access methods are computed, including a stability analysis for Aloha-based systems.
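
    For reference, the classical throughput curves that a stability analysis of Aloha-based access rests on are S = G·exp(-2G) for pure Aloha and S = G·exp(-G) for slotted Aloha, where G is the offered load; the short sketch below simply tabulates them.

```python
# Tabulate classical pure and slotted Aloha throughput versus offered load.
import numpy as np

G = np.linspace(0.0, 3.0, 7)
pure = G * np.exp(-2.0 * G)
slotted = G * np.exp(-G)
for g, s_p, s_s in zip(G, pure, slotted):
    print(f"G={g:.1f}  S_pure={s_p:.3f}  S_slotted={s_s:.3f}")
print("max pure Aloha throughput ~", round(1 / (2 * np.e), 3),
      "at G=0.5; max slotted ~", round(1 / np.e, 3), "at G=1")
```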

  6. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems ...) a sequential formulation based on optimality criteria; and (4) a sequential formulation including a new so-called bounds iteration method (BIM). Numerical tests indicate that the sequential technique including the BIM is particularly fast and stable. The BIM is not only effective in reliability-based optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general....

  7. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC) codes. The q elements in the Galois field, GF(q), are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  8. Study on the properties of infrared wavefront coding athermal system under several typical temperature gradient distributions

    Science.gov (United States)

    Cai, Huai-yu; Dong, Xiao-tong; Zhu, Meng; Huang, Zhan-hua

    2018-01-01

    The wavefront coding athermalization technique can effectively ensure stable imaging of an optical system over a large temperature range, with the additional advantages of a compact structure and low cost. Simulating properties such as the PSF and MTF of a wavefront coding athermal system under several typical temperature gradient distributions helps characterize its behavior in non-ideal temperature environments and supports meeting the system design targets. In this paper, we utilize the interoperability of data between SolidWorks and ZEMAX to simplify the traditional process of structural/thermal/optical integrated analysis. We design and build the optical model and the corresponding mechanical model of an infrared imaging wavefront coding athermal system. Axial and radial temperature gradients of different magnitudes are applied to the whole system in SolidWorks, yielding the changes in curvature, refractive index and lens spacing. We then import the deformed model into ZEMAX for ray tracing and obtain the resulting changes in the PSF and MTF of the optical system. Finally, we discuss and evaluate the consistency of the PSF (MTF) of the wavefront coding athermal system and the image restorability, which provides a basis and reference for the optimal design of such systems. The results show that the tolerance of the single-material infrared wavefront coding athermal system to an axial temperature gradient reaches temperature fluctuations of up to 60°C, much higher than its tolerance to a radial temperature gradient.

  9. Dynamical System Approaches to Combinatorial Optimization

    DEFF Research Database (Denmark)

    Starke, Jens

    2013-01-01

    Several dynamical system approaches to combinatorial optimization problems are described and compared. These include dynamical systems derived from penalty methods; the approach of Hopfield and Tank; self-organizing maps, that is, Kohonen networks; coupled selection equations; and hybrid methods...

  10. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement.... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...

  11. Methodology for coding the energy emergency management information system. [Facility ID's and energy codes

    Energy Technology Data Exchange (ETDEWEB)

    D' Acierno, J.; Hermelee, A.; Fredrickson, C.P.; Van Valkenburg, K.

    1979-11-01

    The coding methodology for creating facility ID's and energy codes from information existing in EIA data systems currently being mapped into the EEMIS data structure is presented. A comprehensive approach is taken to facilitate implementation of EEMIS. A summary of EIA data sources which will be a part of the final system is presented in a table showing the intersection of 19 EIA data systems with the EEMIS data structure. The methodology for establishing ID codes for EIA sources and the corresponding EEMIS facilities in this table is presented. Detailed energy code translations from EIA source systems to the EEMIS energy codes are provided in order to clarify the transfer of energy data from many EIA systems which use different coding schemes. 28 tables.

  12. Opacity calculations for extreme physical systems: code RACHEL

    Science.gov (United States)

    Drska, Ladislav; Sinor, Milan

    1996-08-01

    Computer simulations of physical systems under extreme conditions (high density, temperature, etc.) require the availability of extensive sets of atomic data. This paper presents basic information on a self-consistent approach to calculations of radiative opacity, one of the key characteristics of such systems. After a short explanation of general concepts of the atomic physics of extreme systems, the structure of the opacity code RACHEL is discussed and some of its applications are presented.

  13. Structural Optimization of a Distributed Actuation System in a Flexible In-Plane Morphing Wing

    Science.gov (United States)

    2007-06-01

    DMAP Code: the DMAP (Direct Matrix Abstraction Program) statements to include in the NASTRAN input file so that the stiffness matrix is an output of the analysis (Appendix A; A.1 DMAP Stiffness Output).

  14. General structure and functions of the OPAL optimization system

    International Nuclear Information System (INIS)

    Mikolas, P.; Sustek, J.; Svarny, J.

    2005-01-01

    The presented version of OPAL, the in-core fuel management system, is also under development for core loading optimization of NPP Temelin (a WWER-1000 type reactor). The algorithms of its separate modules were described in several AER papers. The optimization of NPP Temelin loading patterns comprises problems such as preparation of input data for the NPP software, loading-pattern search, fixing and splitting of fuel enrichments, BP assignment, FA rotation and fuel cycle economics. For the NPP Temelin application, the NPP Temelin code system (a spectral code with a macrocode) has been used. The objective of fuel management is to design a fuel-loading scheme that is capable of producing the required energy at the minimum cost while satisfying the safety constraints. Usually the objectives are: a) to meet the energy production requirements (the loaded fuel should have sufficient reactivity to cover the reactivity defects associated with startup as well as the reactivity loss due to fuel depletion); b) to satisfy all safety-related limits (the loaded fuel should preserve adequate power peaking limits (given namely by LOCA), shutdown margins and no positive Moderator Temperature Coefficient (MTC)); c) to minimize the power generation cost ($/kWh(e)). The flow of the optimization process in the OPAL management system is described in detail and its application to NPP Temelin core optimization is presented. (Authors)

  15. [Quality management and strategic consequences of assessing documentation and coding under the German Diagnostic Related Groups system].

    Science.gov (United States)

    Schnabel, M; Mann, D; Efe, T; Schrappe, M; V Garrel, T; Gotzen, L; Schaeg, M

    2004-10-01

    The introduction of the German Diagnostic Related Groups (D-DRG) system requires redesigning administrative patient management strategies. Wrong coding leads to inaccurate grouping and endangers the reimbursement of treatment costs. This situation emphasizes the roles of documentation and coding as factors of economical success. The aims of this study were to assess the quantity and quality of initial documentation and coding (ICD-10 and OPS-301) and find operative strategies to improve efficiency and strategic means to ensure optimal documentation and coding quality. In a prospective study, documentation and coding quality were evaluated in a standardized way by weekly assessment. Clinical data from 1385 inpatients were processed for initial correctness and quality of documentation and coding. Principal diagnoses were found to be accurate in 82.7% of cases, inexact in 7.1%, and wrong in 10.1%. Effects on financial returns occurred in 16%. Based on these findings, an optimized, interdisciplinary, and multiprofessional workflow on medical documentation, coding, and data control was developed. Workflow incorporating regular assessment of documentation and coding quality is required by the DRG system to ensure efficient accounting of hospital services. Interdisciplinary and multiprofessional cooperation is recognized to be an important factor in establishing an efficient workflow in medical documentation and coding.

  16. Numerical optimization of the ramp-down phase with the RAPTOR code

    Science.gov (United States)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of the plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve the optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, widening the range of physical applications of the code. The gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 (2017) 102011.

  17. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups could be obtained.
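    For readers unfamiliar with the baseline technique, the sketch below shows a plain sort-and-sweep (sweep-and-prune) broad-phase search over axis-aligned bounding boxes; it illustrates the idea only and is not the authors' novel variant, whose details are not given in the abstract.

```python
# Broad-phase neighborhood search by sorting bounding boxes along the x-axis and
# sweeping over them; only boxes overlapping in x get the full 3-axis overlap test.
def sweep_and_prune(boxes):
    """boxes[i] = (xmin, xmax, ymin, ymax, zmin, zmax); returns overlapping index pairs."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])   # sort by lower x bound
    pairs, active = [], []
    for i in order:
        xmin_i = boxes[i][0]
        active = [j for j in active if boxes[j][1] >= xmin_i]      # drop boxes ending before i starts
        for j in active:                                           # x-overlap holds by construction
            if (boxes[i][2] <= boxes[j][3] and boxes[j][2] <= boxes[i][3] and
                    boxes[i][4] <= boxes[j][5] and boxes[j][4] <= boxes[i][5]):
                pairs.append((j, i))
        active.append(i)
    return pairs

# Two overlapping unit cubes and one far away.
print(sweep_and_prune([(0, 1, 0, 1, 0, 1), (0.5, 1.5, 0.5, 1.5, 0.5, 1.5), (5, 6, 5, 6, 5, 6)]))
```

    Sorting along one axis limits the number of candidate pairs that need the full overlap test, which is why the method scales well with particle number.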

  18. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
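    The abstract does not reproduce the full ILP, but the flavor of the assignment problem can be sketched as follows; the per-layer bit counts, MCS rates and user-support fractions are invented numbers, and for this toy size an exhaustive search stands in for an ILP solver.

```python
# Toy version of the MCS-assignment problem in layered multicast with AMC: pick one
# MCS per SVC layer to maximize total utility (layer weight times the fraction of
# users whose channel supports that MCS) within a frame-time budget.
from itertools import product

def assign_mcs(layer_bits, layer_weights, mcs_rates, user_support, frame_time=1.0):
    best_choice, best_utility = None, -1.0
    for choice in product(range(len(mcs_rates)), repeat=len(layer_bits)):
        airtime = sum(layer_bits[l] / mcs_rates[m] for l, m in enumerate(choice))
        if airtime > frame_time:                       # time-resource constraint
            continue
        utility = sum(layer_weights[l] * user_support[m] for l, m in enumerate(choice))
        if utility > best_utility:
            best_choice, best_utility = choice, utility
    return best_choice, best_utility

# Two SVC layers, three MCS levels (all numbers invented).
print(assign_mcs(layer_bits=[4000, 6000], layer_weights=[1.0, 0.5],
                 mcs_rates=[6000, 12000, 24000], user_support=[1.0, 0.7, 0.4]))
```

    In this toy instance the base layer ends up on a robust MCS that every user can decode, while the enhancement layer is pushed to a faster MCS to fit the frame, which mirrors the intuition described in the abstract.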

  19. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  20. Development of BERMUDA: a radiation transport code system, 1

    International Nuclear Information System (INIS)

    Suzuki, Tomoo; Hasegawa, Akira; Tanaka, Shun-ichi; Nakashima, Hiroshi

    1992-05-01

    A radiation transport code system BERMUDA has been developed for one-, two- and three-dimensional geometries. The time-independent transport equation is numerically solved using a direct integration method in a multigroup model, to obtain spatial, angular and energy distributions of the neutron, gamma-ray or adjoint neutron flux. As for group constants, a library with an arbitrary energy group structure can be produced from the data base JSSTDL, or by the processing code PROF-GROUCH-G/B, selecting the relevant nuclear data through the retrieval system EDFSRS. The validity of the present code system has been tested by analyzing shielding benchmark experiments. The tests have shown that accurate results are obtainable with this system, especially in deep penetration calculations. Described are the devised calculation method and the results of the validity tests. Input data specifications, job control languages and output data are also described as a user's manual for the following four neutron transport codes: BERMUDA-1DN: sphere, slab (S20); BERMUDA-2DN: cylinder (S8); BERMUDA-2DN-S16: cylinder (S16); and BERMUDA-3DN: rectangular parallelepiped (S8). (J.P.N.)

  1. Revised SWAT. The integrated burnup calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Mochizuki, Hiroki [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kiyosumi, Takehide [The Japan Research Institute, Ltd., Tokyo (Japan)

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline and a user's manual of the revised SWAT. The revised SWAT includes expanded functions, support for additional machines, and corrections of several bugs reported by users of the previous SWAT. (author)

  2. Two-Factor Authentication System based on QR-Codes

    Directory of Open Access Journals (Sweden)

    Andrey Yunusovich Iskhakov

    2014-09-01

    The possibility of using two-factor authentication, based on Quick Response (QR) codes carrying one-time passwords, in access control and management systems is analyzed in this work. A mobile application is proposed for use as a software token.
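    The abstract does not specify the one-time-password algorithm; as one common possibility, the sketch below generates RFC 6238-style time-based one-time passwords from a shared secret of the kind that is typically provisioned to a mobile software token by scanning a QR code. Only Python standard-library calls are used.

```python
# Time-based one-time password (TOTP) generation, the usual building block of
# QR-code-provisioned software tokens. The shared secret would normally be
# transferred to the mobile application by scanning a QR code.
import hmac, hashlib, struct, time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, t: float = None) -> str:
    counter = int((time.time() if t is None else t) // timestep)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp(b"shared-secret-from-qr-code"))
```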

  3. Revised SWAT. The integrated burnup calculation code system

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Kiyosumi, Takehide

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline and a user's manual of the revised SWAT. The revised SWAT includes expanded functions, support for additional machines, and corrections of several bugs reported by users of the previous SWAT. (author)

  4. Adaptive Wavelet Coding Applied in a Wireless Control System

    Science.gov (United States)

    Gama, Felipe O. S.; O. Salazar, Andrés

    2017-01-01

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop. PMID:29236048

  5. Coded aperture imaging system for nuclear fuel motion detection

    International Nuclear Information System (INIS)

    Stalker, K.T.; Kelly, J.G.

    1980-01-01

    A Coded Aperture Imaging System (CAIS) has been developed at Sandia National Laboratories to image the motion of nuclear fuel rods undergoing tests simulating accident conditions within a liquid metal fast breeder reactor. The tests require that the motion of the test fuel be monitored while it is immersed in a liquid sodium coolant, precluding the use of normal optical means of imaging. However, using the fission gamma rays emitted by the fuel itself and coded aperture techniques, images with 1.5 mm radial and 5 mm axial resolution have been attained. Using an electro-optical detection system coupled to a high-speed motion picture camera, a time resolution of one millisecond can be achieved. This paper will discuss the application of coded aperture imaging to the problem, including the design of the one-dimensional Fresnel zone plate apertures used and the special problems arising from the reactor environment and the use of high energy gamma-ray photons to form the coded image. Also to be discussed are the reconstruction techniques employed and the effect of various noise sources on system performance. Finally, some experimental results obtained using the system will be presented

  6. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  7. Artificial intelligence in power system optimization

    CERN Document Server

    Ongsakul, Weerakorn

    2013-01-01

    With the considerable increase of AI applications, AI is being increasingly used to solve optimization problems in engineering. In the past two decades, the application of artificial intelligence in power systems has attracted much research. This book covers the current level of application of artificial intelligence to optimization problems in power systems. It serves as a textbook for graduate students in electric power system management and is also useful for those who are interested in using artificial intelligence in power system optimization.

  8. Optimization of some eco-energetic systems

    International Nuclear Information System (INIS)

    Purica, I.; Pavelescu, M.; Stoica, M.

    1976-01-01

    An optimization problem for two eco-energetic systems is described. The first one is close to the actual eco-energetic system in Romania, while the second is a new one, based on nuclear energy as the primary source and hydrogen energy as the secondary source. The optimization problem solved is to find the optimal structure of the systems such that the adopted objective functions, namely the unit energy cost C and the total pollution P, are minimized simultaneously. The problem can be modeled as a bimatrix cooperative mathematical game without side payments. We demonstrate the superiority of the new eco-energetic system. (author)

  9. PCS a code system for generating production cross section libraries

    International Nuclear Information System (INIS)

    Cox, L.J.

    1997-01-01

    This document outlines the use of the PCS Code System. It summarizes the execution process for generating FORMAT2000 production cross section files from FORMAT2000 reaction cross section files. It also describes the process of assembling the ASCII versions of the high energy production files made from ENDL and Mark Chadwick's calculations. Descriptions of the function of each code along with its input and output and use are given. This document is under construction. Please submit entries, suggestions, questions, and corrections to (ljc at sign llnl.gov) 3 tabs

  10. Energetic Optimal Control Of Adjustable Drive Systems

    Directory of Open Access Journals (Sweden)

    Ion BIVOL

    2002-12-01

    In this paper a new control strategy for adjustable speed drives is developed. The strategy consists in the energy-optimal control of dynamic regimes such as starting, stopping and reversing. The main problems addressed are the formulation of the energy-optimal control problem, its solution, experimental results obtained via simulation, and some considerations concerning the use of the control. The developed optimal solution can be applied to both AC and DC drives, but only to linear systems.

  11. Color image coding based on recurrent iterated function systems

    Science.gov (United States)

    Kim, Kwon; Park, Rae-Hong

    1998-02-01

    This paper proposes a color image coding method based on recurrent iterated function systems (RIFSs). To encode a set of multispectral images, we apply an RIFS to multiset data consisting of three images. In the proposed method, mappings not only between blocks within an individual spectral image but also between blocks of different spectral images are performed under a contraction constraint. Simulation results show that fractal coding based on the RIFS is useful for concurrently encoding a set of images by exploiting the similarity between pairs of images. In addition, the proposed color coding method can be applied to subband images and to moving image sequences consisting of a set of images having similar gray patterns.

  12. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin [POSCO Nuclear Technology, Seoul (Korea, Republic of)

    2013-05-15

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute wrong instructions or refer to wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code and then inspect whether it is suitable for the memory system of the processor module. The Hamming code was applied to the existing safety-grade PLC (POSAFE-Q). Inspection data were collected and will be used for improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation. This will include CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming-code-based works, we will explore methodologies such as mirroring for the soundness of the safety-grade PLC. Hamming-code-based approaches can correct bit errors, but they are limited when multiple bits are in error.
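    As background for the approach described above, a minimal Hamming(7,4) encoder/decoder is sketched below. It is a textbook illustration of single-bit error correction, not the memory-word layout actually used in the POSAFE-Q processor module.

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits, correct any single-bit error.
def hamming74_encode(d):            # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # parity check over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # parity check over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # parity check over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # position of the flipped bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the single-bit error
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                          # inject a single bit error
assert hamming74_decode(word) == [1, 0, 1, 1]
```

    The syndrome directly gives the position of a single flipped bit; a double-bit error, however, produces a nonzero syndrome that points at the wrong position, which is the multi-bit limitation mentioned in the abstract.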

  13. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    International Nuclear Information System (INIS)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin

    2013-01-01

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute wrong instructions or refer to wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code and then inspect whether it is suitable for the memory system of the processor module. The Hamming code was applied to the existing safety-grade PLC (POSAFE-Q). Inspection data were collected and will be used for improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation. This will include CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming-code-based works, we will explore methodologies such as mirroring for the soundness of the safety-grade PLC. Hamming-code-based approaches can correct bit errors, but they are limited when multiple bits are in error.

  14. Testing geochemical modeling codes using New Zealand hydrothermal systems

    International Nuclear Information System (INIS)

    Bruton, C.J.; Glassley, W.E.; Bourcier, W.L.

    1993-12-01

    Hydrothermal systems in the Taupo Volcanic Zone, North Island, New Zealand are being used as field-based modeling exercises for the EQ3/6 geochemical modeling code package. Comparisons of the observed state and evolution of selected portions of the hydrothermal systems with predictions of fluid-solid equilibria made using geochemical modeling codes will: (1) ensure that we are providing adequately for all significant processes occurring in natural systems; (2) determine the adequacy of the mathematical descriptions of the processes; (3) check the adequacy and completeness of thermodynamic data as a function of temperature for solids, aqueous species and gases; and (4) determine the sensitivity of model results to the manner in which the problem is conceptualized by the user and then translated into constraints in the code input. Preliminary predictions of mineral assemblages in equilibrium with fluids sampled from wells in the Wairakei geothermal field suggest that affinity-temperature diagrams must be used in conjunction with EQ6 to minimize the effect of uncertainties in thermodynamic and kinetic data on code predictions. The kinetics of silica precipitation in EQ6 will be tested using field data from silica-lined drain channels carrying hot water away from the Wairakei borefield

  15. Development of new two-dimensional spectral/spatial code based on dynamic cyclic shift code for OCDMA system

    Science.gov (United States)

    Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria

    2017-07-01

    In this paper, a new family of two-dimensional spectral/spatial codes, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for the spectral and spatial coding. The proposed system can fully eliminate the multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are considered in analyzing the code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that our proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of our 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.

  16. Optimization of a computer simulation code to analyse thermal systems for solar energy water heating; Aperfeicoamento de um programa de simulacao computacional para analise de sistemas termicos de aquecimento de agua por energia solar

    Energy Technology Data Exchange (ETDEWEB)

    Pozzebon, Felipe Barin

    2009-02-15

    The potential for solar water heating in Brazil is excellent due to the climatic features of the country. The performance of these systems is strongly influenced by the materials used to build them and by the sizing of their equipment and components. In the face of global warming, solar energy gains more attention, since it is one of the renewable energies that will be largely used to replace some of the existing polluting energy sources. This work presents the improvement of a software tool that simulates water heating systems using solar energy in thermosyphon regime or with forced circulation. TermoSim, as it is called, was started at the Solar Labs and is now in its version 3.0. The current version is capable of simulating six different arrangements combined with auxiliary energy: systems with solar collectors and auxiliary heating by gas, by electric energy, by internal electric energy, by electric energy in series with the consumption line, and with no auxiliary energy. The software is a tool to aid studies and analyses of solar heating systems; it has a friendly interface that is easy to understand, and its results are simple to use. In addition, this version also allows simulations that consider heat losses at night, a situation in which reverse circulation can occur and reduce the mean efficiency, depending on the simulated system type. Many simulations were carried out with the mathematical models used, and comparisons were made with the climatic data of the city of Caxias do Sul, in the state of Rio Grande do Sul, Brazil, determining the most efficient system configuration for the simulated water consumption profile. The work concludes with simple economic analyses intended to estimate the payback time of the investment, taking into account the current prices for electric energy in the simulated area and the possible monthly savings provided by the use of a solar water heating system. (author)

  17. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-01-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the core calculation system based on nodal theory in one group, called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3, namely the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs

  18. LOLA SYSTEM: A code block for nodal PWR simulation. Part. II - MELON-3, CONCON and CONAXI Codes

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J. M.; Ahnert, C.; Gomez Santamaria, J.; Rodriguez Olabarria, I.

    1985-07-01

    Description of the theory and users manual of the MELON-3, CONCON and CONAXI codes, which are part of the core calculation system based on nodal theory in one group, called LOLA SYSTEM. These auxiliary codes provide some of the input data for the main module SIMULA-3, namely the reactivity correlation constants, the albedos and the transport factors. (Author) 7 refs.

  19. Optimalization of selected RFID systems Parameters

    Directory of Open Access Journals (Sweden)

    Peter Vestenicky

    2004-01-01

    This paper describes a procedure for maximizing the read range of an RFID transponder. This is done by optimizing the magnetic field intensity at the transponder location and by optimizing the coupling factor between the antenna and transponder coils. The results of this paper can be used for RFID systems with an inductive loop, i.e. systems working in the near electromagnetic field.

  20. SRAC: JAERI thermal reactor standard code system for reactor design and analysis

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Takano, Hideki; Horikami, Kunihiko; Ishiguro, Yukio; Kaneko, Kunio; Hara, Toshiharu.

    1983-01-01

    The SRAC (Standard Reactor Analysis Code) is a code system for nuclear reactor analysis and design. It is composed of neutron cross section libraries and auxiliary processing codes, neutron spectrum routines, a variety of transport and 1-, 2- and 3-D diffusion routines, and dynamic parameter and cell burn-up routines. By making the best use of the individual code functions in the SRAC system, the user can select either the exact method for an accurate estimate of reactor characteristics or the economical method aiming at a shorter computing time, depending on the purpose of the study. The user can select cell or core calculation; fixed source or eigenvalue problem; transport (collision probability or Sn) theory or diffusion theory. Moreover, smearing and collapsing of macroscopic cross sections are done separately at the user's selection, and special attention is paid to double heterogeneity. Various techniques are employed to access the data storage and to optimize the internal data transfer. Benchmark calculations using the SRAC system have been made extensively for the keff values of various types of critical assemblies (light water, heavy water and graphite moderated systems, and fast reactor systems). The calculated results show good prediction of the experimental keff values. (author)

  1. Joint Power Allocation for Multicast Systems with Physical-Layer Network Coding

    Directory of Open Access Journals (Sweden)

    Chunguo Li

    2010-01-01

    This paper addresses the joint power allocation issue in physical-layer network coding (PLNC) for multicast systems with two sources and two destinations communicating via a large number of distributed relays. By maximizing the achievable system rate, a constrained optimization problem is first formulated to jointly allocate powers for the source and relay terminals. Due to the nonconvex nature of the cost function, an iterative algorithm with guaranteed convergence is developed to solve the joint power allocation problem. As an alternative, an upper bound of the achievable rate is also derived to modify the original cost function in order to obtain a convex optimization solution. This approximation is shown to be asymptotically optimal in the sense of maximizing the achievable rate. It is confirmed through Monte Carlo simulations that the proposed joint power allocation schemes are superior to the existing schemes in terms of achievable rate and cumulative distribution function (CDF).

  2. Distributed magnetic field positioning system using code division multiple access

    Science.gov (United States)

    Prigge, Eric A. (Inventor)

    2003-01-01

    An apparatus and methods for a magnetic field positioning system use a fundamentally different, and advantageous, signal structure and multiple access method, known as Code Division Multiple Access (CDMA). This signal architecture, when combined with processing methods, leads to advantages over the existing technologies, especially when applied to a system with a large number of magnetic field generators (beacons). Beacons at known positions generate coded magnetic fields, and a magnetic sensor measures a sum field and decomposes it into component fields to determine the sensor position and orientation. The apparatus and methods can have a large `building-sized` coverage area. The system allows for numerous beacons to be distributed throughout an area at a number of different locations. A method to estimate position and attitude, with no prior knowledge, uses dipole fields produced by these beacons in different locations.
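    The decomposition step described above, separating a summed field measurement into per-beacon components using their spreading codes, can be illustrated with the simplified correlation sketch below; it assumes ideal chip synchronization and orthogonal codes, which is a simplification of the actual signal processing, and the code and amplitude values are invented.

```python
# Decompose a summed, CDMA-coded field measurement into per-beacon amplitudes.
# Assumes ideal chip synchronization and (near-)orthogonal spreading codes.
import numpy as np

def beacon_amplitudes(measured, codes):
    """measured: length-N samples of the summed field; codes: B x N matrix of +/-1 chips."""
    codes = np.asarray(codes, dtype=float)
    # Correlate the measurement with each beacon's code and normalize by the code energy.
    return codes @ np.asarray(measured, dtype=float) / (codes ** 2).sum(axis=1)

# Example with two orthogonal length-8 codes and made-up field strengths 2.0 and -0.5.
codes = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                  [1, -1, 1, -1, 1, -1, 1, -1]])
measured = 2.0 * codes[0] + (-0.5) * codes[1]
print(beacon_amplitudes(measured, codes))   # approximately [ 2.0, -0.5]
```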

  3. A guide to the AUS modular neutronics code system

    International Nuclear Information System (INIS)

    Robinson, G.S.

    1987-04-01

    A general description is given of the AUS modular neutronics code system, which may be used for calculations of a very wide range of fission reactors, fusion blankets and other neutron applications. The present system has cross-section libraries derived from ENDF/B-IV and includes modules which provide for lattice calculations, one-dimensional transport calculations, and one, two, and three-dimensional diffusion calculations, burnup calculations and the flexible editing of results. Details of all system aspects of AUS are provided but the major individual modules are only outlined. Sufficient information is given to enable other modules to be added to the system

  4. An Expert System for the Development of Efficient Parallel Code

    Science.gov (United States)

    Jost, Gabriele; Chun, Robert; Jin, Hao-Qiang; Labarta, Jesus; Gimenez, Judit

    2004-01-01

    We have built the prototype of an expert system to assist the user in the development of efficient parallel code. The system was integrated into the parallel programming environment that is currently being developed at NASA Ames. The expert system interfaces to tools for automatic parallelization and performance analysis. It uses static program structure information and performance data in order to automatically determine causes of poor performance and to make suggestions for improvements. In this paper we give an overview of our programming environment, describe the prototype implementation of our expert system, and demonstrate its usefulness with several case studies.

  5. Multiple Description Coding for Closed Loop Systems over Erasure Channels

    DEFF Research Database (Denmark)

    Østergaard, Jan; Quevedo, Daniel

    2013-01-01

    In this paper, we consider robust source coding in closed-loop systems. In particular, we consider a (possibly) unstable LTI system, which is to be stabilized via a network. The network has random delays and erasures on the data-rate limited (digital) forward channel between the encoder (controller) and the decoder (plant). The feedback channel from the decoder to the encoder is assumed noiseless. Since the forward channel is digital, we need to employ quantization. We combine two techniques to enhance the reliability of the system. First, in order to guarantee that the system remains stable during packet...

  6. Stochastic optimization of GeantV code by use of genetic algorithms

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter, and a geometrical modeler library for describing the detector, locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and highly massive parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of the cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching for the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massive parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that could be used in the case of resource-expensive or time-consuming evaluations of fitness functions, in order to speed up the convergence of the black-box optimization problem.
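    The tuning procedure described above can be pictured with a bare-bones evolution-strategy loop over a parameter vector. The sketch below is a generic black-box optimizer, not the GeantV tuning code, and its fitness function is a placeholder for an expensive simulation-throughput measurement.

```python
# Bare-bones (mu + lambda) evolution strategy for black-box parameter tuning.
# In a real setting, fitness() would run the simulation and report its throughput.
import random

def evolve(fitness, dim, pop_size=20, parents=5, sigma=0.1, generations=50):
    pop = [[random.uniform(0.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[:parents]   # select the best
        children = [
            [min(1.0, max(0.0, gene + random.gauss(0.0, sigma)))   # mutate within [0, 1]
             for gene in random.choice(elite)]
            for _ in range(pop_size - parents)
        ]
        pop = elite + children                                     # elitism
    return max(pop, key=fitness)

# Placeholder fitness with a peak at (0.3, 0.7), standing in for measured throughput.
print(evolve(lambda x: -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2), dim=2))
```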

  7. Improving system modeling accuracy with Monte Carlo codes

    International Nuclear Information System (INIS)

    Johnson, A.S.

    1996-01-01

    The use of computer codes based on Monte Carlo methods to perform criticality calculations has become commonplace. Although results frequently published in the literature report calculated k-eff values to four decimal places, people who use the codes in their everyday work say that they only believe the first two decimal places of any result. The lack of confidence in the computed k-eff values may be due to the tendency of the reported standard deviation to underestimate errors associated with the Monte Carlo process. The standard deviation as reported by the codes is the standard deviation of the mean of the k-eff values for individual generations in the computer simulation, not the standard deviation of the computed k-eff value compared with the physical system. A more subtle problem with the standard deviation of the mean as reported by the codes is that the k-eff values from the separate generations are not statistically independent, since the k-eff of a given generation is a function of the k-eff of the previous generation, which is ultimately based on the starting source. To produce a standard deviation that is more representative of the physical system, statistically independent values of k-eff are needed
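    The correlation issue can be made concrete with a short numerical sketch that compares the naive standard deviation of the mean against a batch-means estimate, which partially absorbs generation-to-generation correlation. The AR(1)-like sequence below is synthetic and purely illustrative.

```python
# Compare the naive standard deviation of the mean with a batch-means estimate
# for correlated per-generation k-eff values (synthetic AR(1) data, for illustration).
import random, statistics

random.seed(1)
rho, keff, x = 0.8, [], 0.0
for _ in range(5000):                        # correlated fluctuations around 1.0
    x = rho * x + random.gauss(0.0, 0.002)
    keff.append(1.0 + x)

n = len(keff)
naive = statistics.stdev(keff) / n ** 0.5    # treats generations as independent

batch = 50                                   # group correlated generations into batches
batch_means = [statistics.mean(keff[i:i + batch]) for i in range(0, n, batch)]
batched = statistics.stdev(batch_means) / len(batch_means) ** 0.5

print(f"naive sigma of the mean : {naive:.2e}")
print(f"batch-means sigma       : {batched:.2e}")  # typically larger for correlated data
```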

  8. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. In particular, systems whose reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization problems are described. Numerical tests indicate that a sequential technique called the bounds iteration method (BIM) is particularly fast and stable....
  9. Implementation, of the superfish computer code in the CYBER computer system of IEAV (Instituto de Estudos Avancados) in Brazil

    International Nuclear Information System (INIS)

    Silva, R. da.

    1982-10-01

    The SUPERFISH computer code has been implemented on the CYBER computer system of IEAv. This code locates electromagnetic modes in RF resonant cavities. The handling of the boundary conditions and of the driving point was optimized. A computer program (ARRUELA) was developed to facilitate the SUPERFISH analysis of the RF properties of disc-and-washer cavities. This version of SUPERFISH showed satisfactory performance under tests. (Author) [pt

  10. Coded aperture material motion detection system for the ACPR

    International Nuclear Information System (INIS)

    McArthur, D.A.; Kelly, J.G.

    1975-01-01

    Single LMFBR fuel pins are being irradiated in Sandia's Annular Core Pulsed Reactor (ACPR). In these experiments single fuel pins have been driven well into the melt and vaporization regions in transients with pulse widths of about 5 ms. The ACPR is being upgraded so that it can be used to irradiate bundles of seven LMFBR fuel pins. The coded aperture material motion detection system described is being developed for this upgraded ACPR, and has for its design goals 1 mm transverse resolution (i.e., in the axial and radial directions), depth resolution of a few cm, and time resolution of 0.1 ms. The target date for development of this system is fall 1977. The paper briefly reviews the properties of coded aperture imaging, describes one possible system for the ACPR upgrade, discusses experiments which have been performed to investigate the feasibility of such a system, and describes briefly the further work required to develop such a system. The type of coded aperture to be used has not yet been fixed, but a one-dimensional section of a Fresnel zone plate appears at this time to have significant advantages

  11. A new two dimensional spectral/spatial multi-diagonal code for noncoherent optical code division multiple access (OCDMA) systems

    Science.gov (United States)

    Kadhim, Rasim Azeez; Fadhil, Hilal Adnan; Aljunid, S. A.; Razalli, Mohamad Shahrazel

    2014-10-01

    A new two-dimensional code family, namely two-dimensional multi-diagonal (2D-MD) codes, is proposed for spectral/spatial non-coherent OCDMA systems, based on the one-dimensional MD code. Since the MD code has the property of zero cross-correlation, the proposed 2D-MD code also has this property, so that the multiple-access interference (MAI) is fully eliminated and the phase-induced intensity noise (PIIN) is suppressed with the proposed code. Code performance is analyzed in terms of bit error rate (BER) while considering the effects of shot noise, PIIN, and thermal noise. The performance of the proposed code is compared with the related MD, modified quadratic congruence (MQC), two-dimensional perfect difference (2D-PD) and two-dimensional diluted perfect difference (2D-DPD) codes. The analytical and simulation results reveal that the proposed 2D-MD code outperforms the other codes. Moreover, a large number of simultaneous users can be accommodated at low BER and high data rate.

  12. Optimization problems in the Bulgarian electoral system

    Science.gov (United States)

    Konstantinov, Mihail; Yanev, Kostadin; Pelova, Galina; Boneva, Juliana

    2013-12-01

    In this paper we consider several optimization problems for the Bulgarian bi-proportional electoral system. Experiments with data from real elections are presented, further developing a series of previous investigations by the authors.

  13. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.

    1975-01-01

    OPT13B is a FORTRAN computer code for optical model calculations with automatic search. A summary of the different formulae used for the computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of the different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)

  14. Solution of optimization problems by means of the CASTEM 2000 computer code

    International Nuclear Information System (INIS)

    Charras, Th.; Millard, A.; Verpeaux, P.

    1991-01-01

    In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, the positioning of some parts of the robot must be very accurate, which depends strongly on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economical (material quantities, design and manufacturing cost) considerations. This is precisely the aim of optimization techniques in the frame of structural analysis. A general statement of this problem could be as follows: find the set of parameters which leads to the minimum of a given function and satisfies some constraints. For example, in the case of a robot component, the parameters can be some geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria such as a given stiffness and of some manufacturing technological constraints (minimum available thickness, etc.). For nuclear industry purposes, a robust method was chosen and implemented in the new generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. According to the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex-linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example of application will be presented to illustrate the possibilities of the method. (author)

  15. Development of hydraulic analysis code for optimizing thermo-chemical is process reactors

    International Nuclear Information System (INIS)

    Terada, Atsuhiko; Hino, Ryutaro; Hirayama, Toshio; Nakajima, Norihiro; Sugiyama, Hitoshi

    2007-01-01

    The Japan Atomic Energy Agency has been conducting studies on the thermochemical IS process for hydrogen production by water splitting. Based on the test results and know-how obtained through the bench-scale test, a pilot test plant with a hydrogen production capacity of 30 Nm3/h is being designed conceptually as the next step of the IS process development. In the design of the IS pilot plant, it is important to make the chemical reactors compact and high-performing from the viewpoint of plant cost reduction. A new hydraulic analysis code has been developed for optimizing the mixing performance of multi-phase flow involving chemical reactions, especially in the Bunsen reactor. The Bunsen reactor is characterized by a complex flow pattern with gas-liquid chemical interaction involving flow instability. Preliminary analytical results obtained with the above-mentioned code, especially the flow patterns induced by swirling flow, agreed well with those measured in water experiments, which showed a vortex breakdown pattern in a simplified Bunsen reactor. (author)

  16. Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services

    DEFF Research Database (Denmark)

    Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani

    2016-01-01

    Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations is, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling makes it possible to efficiently derive the average number of coded packet
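    For readers unfamiliar with RLNC, the sketch below shows sparse random linear encoding and Gaussian-elimination decoding over GF(2). The sparsity parameter controls how many source packets each coded packet combines, which is the knob that trades decoding complexity against the number of coded packets needed; the sketch is a generic illustration, not the paper's analytical model.

```python
# Sparse random linear network coding over GF(2): each coded packet XORs a
# random subset of the K source packets; decoding is Gaussian elimination.
import random

def encode(sources, sparsity=0.5):
    """Return (coefficient_vector, coded_payload) for one coded packet."""
    k = len(sources)
    coeffs = [1 if random.random() < sparsity else 0 for _ in range(k)]
    if not any(coeffs):                        # avoid the useless all-zero packet
        coeffs[random.randrange(k)] = 1
    payload = 0
    for c, s in zip(coeffs, sources):
        if c:
            payload ^= s                       # packets modelled as integers; XOR = GF(2) add
    return coeffs, payload

def decode(k, coded):
    """Recover the k source packets from (coeffs, payload) pairs, or None if rank < k."""
    basis = {}                                 # pivot column -> (reduced coeffs, payload)
    for coeffs, payload in coded:
        coeffs = coeffs[:]
        for col, (row_c, row_p) in basis.items():
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, row_c)]
                payload ^= row_p
        pivot = next((i for i, c in enumerate(coeffs) if c), None)
        if pivot is not None:
            basis[pivot] = (coeffs, payload)
    if len(basis) < k:
        return None                            # not yet full rank: need more packets
    recovered = [None] * k
    for pivot in sorted(basis, reverse=True):  # back-substitution
        row_c, row_p = basis[pivot]
        for j in range(pivot + 1, k):
            if row_c[j]:
                row_p ^= recovered[j]
        recovered[pivot] = row_p
    return recovered

sources = [0x11, 0x22, 0x33, 0x44]
decoded = decode(len(sources), [encode(sources) for _ in range(16)])
print(decoded == sources if decoded is not None else "need more coded packets")
```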

  17. Channel estimation for physical layer network coding systems

    CERN Document Server

    Gao, Feifei; Wang, Gongpu

    2014-01-01

    This SpringerBrief presents channel estimation strategies for physical-layer network coding (PLNC) systems. Along with a review of PLNC architectures, this brief examines new challenges brought by the special structure of bi-directional two-hop transmissions that are different from traditional point-to-point systems and unidirectional relay systems. The authors discuss channel estimation strategies over typical fading scenarios, including frequency-flat fading, frequency-selective fading and time-selective fading, as well as future research directions. Chapters explore the performa

  18. Photovoltaic power systems and the National Electrical Code: Suggested practices

    Energy Technology Data Exchange (ETDEWEB)

    Wiles, J. [New Mexico State Univ., Las Cruces, NM (United States). Southwest Technology Development Inst.

    1996-12-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently. Application of this information and results obtained are the responsibility of the user.

  19. Photovoltaic Power Systems and the National Electrical Code: Suggested Practices

    Energy Technology Data Exchange (ETDEWEB)

    None

    2002-02-01

    This guide provides information on how the National Electrical Code (NEC) applies to photovoltaic systems. The guide is not intended to supplant or replace the NEC; it paraphrases the NEC where it pertains to photovoltaic systems and should be used with the full text of the NEC. Users of this guide should be thoroughly familiar with the NEC and know the engineering principles and hazards associated with electrical and photovoltaic power systems. The information in this guide is the best available at the time of publication and is believed to be technically accurate; it will be updated frequently.

  20. Structure and operation of the ITS code system

    International Nuclear Information System (INIS)

    Halbleib, J.

    1988-01-01

    The TIGER series of time-independent coupled electron-photon Monte Carlo transport codes is a group of multimaterial and multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron-photon cascade by combining microscopic photon transport with a macroscopic random walk for electron transport. Major contributors to its evolution are listed. The author and his associates are primarily code users rather than code developers, and have borrowed freely from existing work wherever possible. Nevertheless, their efforts have resulted in various software packages for describing the production and transport of the electron-photon cascade that they found sufficiently useful to warrant dissemination through the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory. The ITS system (Integrated TIGER Series) represents the organization and integration of this combined software, along with much additional capability from previously unreleased work, into a single convenient package of exceptional user friendliness and portability. Emphasis is on simplicity and flexibility of application without sacrificing the rigor or sophistication of the physical model

  1. Neural map formation and sensory coding in the vomeronasal system.

    Science.gov (United States)

    Brignall, Alexandra C; Cloutier, Jean-François

    2015-12-01

    Sensory systems enable us to encode a clear representation of our environment in the nervous system by spatially organizing sensory stimuli being received. The organization of neural circuitry to form a map of sensory activation is critical for the interpretation of these sensory stimuli. In rodents, social communication relies strongly on the detection of chemosignals by the vomeronasal system, which regulates a wide array of behaviours, including mate recognition, reproduction, and aggression. The binding of these chemosignals to receptors on vomeronasal sensory neurons leads to activation of second-order neurons within glomeruli of the accessory olfactory bulb. Here, vomeronasal receptor activation by a stimulus is organized into maps of glomerular activation that represent phenotypic qualities of the stimuli detected. Genetic, electrophysiological and imaging studies have shed light on the principles underlying cell connectivity and sensory map formation in the vomeronasal system, and have revealed important differences in sensory coding between the vomeronasal and main olfactory system. In this review, we summarize the key factors and mechanisms that dictate circuit formation and sensory coding logic in the vomeronasal system, emphasizing differences with the main olfactory system. Furthermore, we discuss how detection of chemosignals by the vomeronasal system regulates social behaviour in mice, specifically aggression.

  2. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to relax the requirement for a high-resolution coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, which operate directly on the compressively sampled images, are developed. A mixture-of-Gaussians model is applied in the compressive image space to model the background and detect the foreground. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is used for sparse representation. An l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressive imaging representation is sufficient to detect moving targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm can achieve a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.

  3. Optimization of a wearable power system

    Energy Technology Data Exchange (ETDEWEB)

    Kovacevic, I.; Round, S. D.; Kolar, J. W.; Boulouchos, K.

    2008-07-01

    In this paper the optimization of a wearable power system comprising an internal combustion engine, motor/generator, inverter/rectifier, Li-battery pack, DC/DC converters, and controller is performed. The wearable power system must be able to supply an average of 20 W for 4 days with a peak power of 200 W, at a system weight of less than 4 kg. The main objectives are to select the engine, fuel and battery type, to match the weight of fuel and the number of battery cells, to find the optimal working point of the engine, and to minimize the system weight. The minimization problem is defined in Matlab as a nonlinear constrained optimization task. The optimization procedure returns the optimal system design parameters: a Li-polymer battery with eight cells connected in series for a 28 V DC output voltage, the selection of a gasoline/oil fuel mixture, and an optimal engine working point of 12 krpm for a 4.5 cm3 4-stroke engine. (author)
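    A toy version of such a sizing problem, minimizing the mass of fuel plus battery cells subject to covering the mission energy, can be written with a standard constrained optimizer; all energy-density and mass numbers below are rough placeholders, not values from the paper, and the cell count is treated as continuous for simplicity.

```python
# Toy wearable-power sizing: choose fuel mass and battery cell count to minimize
# total mass while covering the mission energy. All numbers are placeholders.
from scipy.optimize import minimize

E_MISSION_WH = 20.0 * 24 * 4           # 20 W average over 4 days
FUEL_WH_PER_KG = 2500.0                # assumed usable electrical energy per kg of fuel
CELL_WH, CELL_KG = 10.0, 0.05          # assumed energy and mass per battery cell
ENGINE_KG = 0.9                        # assumed fixed mass of engine, generator, electronics

def total_mass(x):
    fuel_kg, n_cells = x
    return ENGINE_KG + fuel_kg + CELL_KG * n_cells

def energy_margin(x):                  # inequality constraint: must be >= 0
    fuel_kg, n_cells = x
    return FUEL_WH_PER_KG * fuel_kg + CELL_WH * n_cells - E_MISSION_WH

res = minimize(total_mass, x0=[1.0, 10.0], method="SLSQP",
               bounds=[(0.0, 3.0), (4.0, 40.0)],   # fuel mass [kg], cell count (continuous here)
               constraints=[{"type": "ineq", "fun": energy_margin}])
print(res.x, total_mass(res.x))
```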

  4. An optimized cosine-modulated nonuniform filter bank design for subband coding of ECG signal

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2015-07-01

    A simple iterative technique for the design of nonuniform cosine-modulated filter banks (CMFBs) is presented in this paper. The proposed technique employs a single parameter for optimization. The nonuniform cosine-modulated filter banks are derived by merging the adjacent filters of a uniform cosine-modulated filter bank. The prototype filter is designed with the aid of different adjustable window functions, such as Kaiser, Cosh and Exponential, using the constrained equiripple finite impulse response (FIR) digital filter design technique. In this method, either the cutoff frequency or the passband edge frequency is varied in order to adjust the filter coefficients so that the reconstruction error is minimized towards zero. The performance and effectiveness of the proposed method in terms of peak reconstruction error (PRE), aliasing distortion (AD), computational (CPU) time, and number of iterations (NOI) are shown through numerical examples and comparative studies. Finally, the technique is applied to the subband coding of electrocardiogram (ECG) and speech signals.
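    To illustrate the cosine-modulation step for the uniform case (the paper's merging into nonuniform bands and its single-parameter prototype optimization are not reproduced here), the sketch below derives M analysis filters from a Kaiser-window prototype; the modulation formula is the standard textbook one and the parameter values are arbitrary.

```python
# Uniform cosine-modulated filter bank: all M analysis filters are obtained by
# cosine-modulating a single lowpass prototype (here a Kaiser-window FIR design).
import numpy as np
from scipy.signal import firwin

def cmfb_analysis_filters(M=8, taps_per_band=12, beta=8.0):
    N = M * taps_per_band                                   # prototype length
    p = firwin(N, 1.0 / (2 * M), window=("kaiser", beta))   # prototype with cutoff pi/(2M)
    n = np.arange(N)
    filters = []
    for k in range(M):
        phase = (-1) ** k * np.pi / 4                       # alternating phase term
        h = 2 * p * np.cos(np.pi / M * (k + 0.5) * (n - (N - 1) / 2) + phase)
        filters.append(h)
    return np.array(filters)

bank = cmfb_analysis_filters()
print(bank.shape)        # (8, 96): 8 band filters of length 96
```

    Merging adjacent bands of such a uniform bank is then what yields the nonuniform bank described in the abstract.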

  5. SWAT3.1 - the integrated burnup code system driving continuous energy Monte Carlo codes MVP and MCNP

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Takada, Tomoyuki; Ryufuku, Susumu; Okuno, Hiroshi; Murazaki, Minoru; Ohkubo, Kiyoshi

    2009-05-01

    SWAT is an integrated burnup calculation code system that combines the neutronics calculation code SRAC, which is widely used in Japan, with the point burnup calculation code ORIGEN2. It has been used to evaluate the composition of uranium, plutonium, minor actinides and fission products in spent nuclear fuel. Based on this idea, the integrated burnup calculation code system SWAT3.1 was developed by combining the continuous energy Monte Carlo codes MVP and MCNP with ORIGEN2. This enables us to treat arbitrary fuel geometries and to generate the effective cross section data to be used in the burnup calculation with few approximations. This report describes the outline, input data instructions and several calculation examples. (author)

  6. INTERVALS OPTIMIZATION OF SYSTEMS INFORMATION SECURITY INSPECTION

    Directory of Open Access Journals (Sweden)

    V. A. Bogatyrev

    2014-09-01

    A Markov model is suggested for secure information systems functioning under destructive impacts whose aftereffects are detected by on-line and test control. It is assumed that on-line control, in contrast to test control, is characterized by limited control completeness but does not require stopping the computational process. The aim of the research is to create models that optimize the intervals at which test control is initiated, by the criterion of maximizing the probability that the system stays in a state ready for the secure fulfillment of functional requests and of minimizing the dangerous system states, in view of the uncertainty and variability of the intensity of the destructive impacts. Variants of testing-interval optimization are considered, depending on the intensity of destructive impacts, by the criterion of maximum system availability for the safe execution of queries. The optimization is carried out with and without adaptation to the actual changes in the intensity of destructive impacts. The efficiency of adaptively changing the testing periods is shown as a function of the observed activity of destructive impacts. The solution of the optimization problem is obtained with the built-in tools of the computer mathematics system Mathcad 15, including symbolic mathematics for solving systems of algebraic equations. The proposed models and methods of determining the optimal testing intervals can find application in the design of computer systems and networks for critical applications, working under destabilizing actions with increased safety requirements.

  7. Optimal control applications in electric power systems

    CERN Document Server

    Christensen, G S; Soliman, S A

    1987-01-01

    Significant advances in the field of optimal control have been made over the past few decades. These advances have been well documented in numerous fine publications, and have motivated a number of innovations in electric power system engineering, but they have not yet been collected in book form. Our purpose in writing this book is to provide a description of some of the applications of optimal control techniques to practical power system problems. The book is designed for advanced undergraduate courses in electric power systems, as well as graduate courses in electrical engineering, applied mathematics, and industrial engineering. It is also intended as a self-study aid for practicing personnel involved in the planning and operation of electric power systems for utilities, manufacturers, and consulting and government regulatory agencies. The book consists of seven chapters. It begins with an introductory chapter that briefly reviews the history of optimal control and its power system applications and also p...

  8. Study on a new meteorological sampling scheme developed for the OSCAAR code system

    International Nuclear Information System (INIS)

    Liu Xinhe; Tomita, Kenichi; Homma, Toshimitsu

    2002-03-01

    One important step in Level-3 Probabilistic Safety Assessment is meteorological sequence sampling; previous studies mainly addressed code systems using the straight-line plume model, and more effort is needed for those using the trajectory puff model, such as the OSCAAR code system. This report describes the development of a new meteorological sampling scheme for the OSCAAR code system that explicitly considers population distribution. The principles set for the development of this new sampling scheme include completeness, appropriate stratification, optimum allocation, practicability and so on. This report discusses the procedures of the new sampling scheme and its application. The calculation results illustrate that, although it is quite difficult to optimize the stratification of meteorological sequences based on a few environmental parameters, the new scheme does gather the most adverse conditions in a single subset of meteorological sequences. The size of this subset may be as small as a few dozen, so that the tail of the complementary cumulative distribution function can remain relatively stable across different trials of the probabilistic consequence assessment code. (author)
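
    A hedged sketch of stratified sampling with optimum (Neyman) allocation, one of the principles the report lists: strata with larger size and larger within-stratum spread receive more samples. The stratum sizes and spreads below are invented placeholders, not OSCAAR data.

# Illustrative only: stratified sampling of meteorological sequences with optimum
# (Neyman) allocation.  Stratum sizes and within-stratum spreads are invented numbers
# standing in for, e.g., stability/rain classes weighted by a consequence proxy.
import numpy as np

rng = np.random.default_rng(0)
N_h = np.array([4000, 2500, 1200, 300])   # sequences per stratum (hypothetical)
s_h = np.array([0.2, 0.5, 1.0, 3.0])      # within-stratum std of the consequence proxy
n_total = 144                             # total sequences the assessment can afford

# Neyman allocation: n_h proportional to N_h * s_h
weights = N_h * s_h
n_h = np.maximum(1, np.round(n_total * weights / weights.sum()).astype(int))

# Draw the sample indices stratum by stratum (without replacement)
sample = {h: rng.choice(N_h[h], size=min(n_h[h], N_h[h]), replace=False)
          for h in range(len(N_h))}
for h, idx in sample.items():
    print(f"stratum {h}: allocated {len(idx)} of {N_h[h]} sequences")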

  9. Force optimized recoil control system

    Science.gov (United States)

    Townsend, P. E.; Radkiewicz, R. J.; Gartner, R. F.

    1982-05-01

    Reduction of the recoil force of high-rate-of-fire automatic guns was proven effective. This system will allow consideration of more powerful guns for use in both helicopter and armored personnel carrier applications. By replacing the large shock loads of firing guns with a nearly constant force, both the vibration and fatigue problems that prevent the mounting of powerful automatic guns are eliminated.

  10. Application of the OPTIMUS Code to the Neutral Beam Injection System of TJ-II

    International Nuclear Information System (INIS)

    Fuentes, C.; Liniers, M.; Guasp, J.

    1998-01-01

    The various loss processes affecting a neutral beam, from its birth in the ion source until it enters the fusion machine, depend on the residual gas pressure distribution inside the injector. The OPTIMUS code analyzes these losses and calculates the pressure distribution inside an injector with a specific geometry. The adaptation of the injector to TJ-II has not required major design changes; only the operating range of the gas flow and the pumping speed have been modified. The calculations show that the gas flows required for optimal operation of the system need an independent pumping system for the calorimeter box with a pumping speed of 120000 l/s. The system efficiency is not affected by a hypothetical beaming effect, and it is also found that, with proper conditioning of the injector walls so that the absorption coefficients do not greatly exceed unity, the injector operation remains optimal. (Author) 8 refs

  11. Receiver System Analysis and Optimization

    Science.gov (United States)

    2013-01-01

    for several devices from the IBM SiGe 8HP process design kit (the manufacturing process used for the MDREX project): bipolar transistor (BJT), spiral...of the project. Most significantly, a transistor-level simulation algorithm compatible with the system-level simulation algorithm was developed. This... transistor-level simulation program simultaneously and synchronizing them at time intervals. Since the new capability allows the simulation of the entire

  12. Optimization of high-definition video coding and hybrid fiber-wireless transmission in the 60 GHz band

    DEFF Research Database (Denmark)

    Lebedev, Alexander; Pham, Tien Thang; Beltrán, Marta

    2011-01-01

    We demonstrate that, by jointly optimizing video coding and radio-over-fibre transmission, we extend the reach of 60-GHz wireless distribution of high-quality high-definition video satisfying low complexity and low delay constraints, while preserving superb video quality.

  13. BER performance comparison of optical CDMA systems with/without turbo codes

    Science.gov (United States)

    Kulkarni, Muralidhar; Chauhan, Vijender S.; Dutta, Yashpal; Sinha, Ravindra K.

    2002-08-01

    In this paper, we have analyzed and simulated the BER performance of a turbo-coded optical code-division multiple-access (TC-OCDMA) system. A performance comparison has been made between uncoded OCDMA and TC-OCDMA systems employing various OCDMA address codes (optical orthogonal codes (OOCs), generalized multiwavelength prime codes (GMWPCs), and generalized multiwavelength Reed-Solomon codes (GMWRSCs)). The BER performance of TC-OCDMA systems has been analyzed and simulated by varying the code weight of the address code employed by the system. From the simulation results, it is observed that lower-weight address codes can be employed in TC-OCDMA systems while achieving BER performance equivalent to that of uncoded systems employing higher-weight address codes, for a fixed number of active users.

  14. VACOSS - variable coding seal system for nuclear material control

    International Nuclear Information System (INIS)

    Kennepohl, K.; Stein, G.

    1977-12-01

    VACOSS - Variable Coding Seal System - is intended to seal rooms and containers with nuclear material, nuclear instrumentation and equipment of the operator, and instrumentation and equipment of the supervisory authority. It is easy to handle, reusable and transportable, and consists of three components: 1. Seal. A fibre-optic light guide with an infrared light emitter and receiver serves as the sealing loop. The statistical treatment of coded data entered into the seal via the adapter box guarantees an extremely high degree of access reliability. It is possible to store the data of two unauthorized seal openings together with data concerning the time and duration of the opening. 2. The adapter box can be used for input, or input and output, of data indicating seal integrity. 3. The simulation programme is located in the computing center of the supervisory authority and permits determination of the date and time of opening by decoding the seal memory data. (orig./WB) [de

  15. Security Concerns and Countermeasures in Network Coding Based Communications Systems

    DEFF Research Database (Denmark)

    Talooki, Vahid; Bassoli, Riccardo; Roetter, Daniel Enrique Lucani

    2015-01-01

    This survey paper shows the state of the art in security mechanisms, where a deep review of the current research and the status of this topic is carried out. We start by introducing network coding and its variety of applications in enhancing current traditional networks. In particular, we analyze two key protocol types, namely state-aware and stateless protocols, specifying the benefits and disadvantages of each of them. We also present the key security assumptions of network coding (NC) systems as well as a detailed analysis of the security goals and threats, both passive and active. This paper also presents a detailed taxonomy and a timeline of the different NC security mechanisms and schemes reported in the literature. Current proposed security mechanisms and schemes for NC in the literature are then classified. Finally, a timeline of these mechanisms and schemes is presented.

  16. Nexus: A modular workflow management system for quantum simulation codes

    Science.gov (United States)

    Krogel, Jaron T.

    2016-01-01

    The management of simulation workflows represents a significant task for the individual computational researcher. Automation of the required tasks involved in simulation work can decrease the overall time to solution and reduce sources of human error. A new simulation workflow management system, Nexus, is presented to address these issues. Nexus is capable of automated job management on workstations and resources at several major supercomputing centers. Its modular design allows many quantum simulation codes to be supported within the same framework. Current support includes quantum Monte Carlo calculations with QMCPACK, density functional theory calculations with Quantum Espresso or VASP, and quantum chemical calculations with GAMESS. Users can compose workflows through a transparent, text-based interface, resembling the input file of a typical simulation code. A usage example is provided to illustrate the process.

  17. DESIGN OPTIMIZATION OF ROTOR-BEARING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Hamit SARUHAN

    2003-03-01

    Full Text Available This paper presents a brief study of the information from the published literature and the author's works regarding rotor-bearing system analysis with respect to optimization. The main goal of this work is to motivate and give ideas to designers who are willing to deal with the optimization of rotor-bearing systems. The results obtained and presented in this study provide a comparison with numerical optimum design methods, such as gradient-based methods, and show the potential of genetic algorithms in the optimization of rotor-bearing systems. Genetic algorithms have been used as the optimization problem-solving technique. They are parameter search procedures based on the idea of natural selection and genetics. These robust methods have been increasingly recognized and applied in many applications.
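
    A compact real-coded genetic algorithm of the general kind referred to (tournament selection, blend crossover, Gaussian mutation) is sketched below on a stand-in two-variable objective; the objective, bounds and GA settings are illustrative assumptions, not a rotor-bearing model.

# Sketch of a real-coded genetic algorithm (tournament selection, blend crossover,
# Gaussian mutation).  The two-variable objective is a placeholder; in the paper's
# setting it would be, e.g., an unbalance response as a function of bearing parameters.
import numpy as np

rng = np.random.default_rng(1)
lo, hi = np.array([0.0, 0.0]), np.array([10.0, 10.0])   # parameter bounds (assumed)

def objective(x):                        # placeholder cost to minimize
    return (x[0] - 3.0) ** 2 + 0.5 * (x[1] - 7.0) ** 2 + np.sin(3 * x[0])

def tournament(pop, fit, k=3):
    idx = rng.integers(len(pop), size=k)
    return pop[idx[np.argmin(fit[idx])]]

pop = rng.uniform(lo, hi, size=(40, 2))
for gen in range(100):
    fit = np.array([objective(x) for x in pop])
    children = []
    for _ in range(len(pop)):
        p1, p2 = tournament(pop, fit), tournament(pop, fit)
        alpha = rng.uniform(-0.5, 1.5, size=2)           # BLX-style blend crossover
        child = p1 + alpha * (p2 - p1)
        child += rng.normal(0.0, 0.1, size=2)            # Gaussian mutation
        children.append(np.clip(child, lo, hi))
    pop = np.array(children)
best = pop[np.argmin([objective(x) for x in pop])]
print("best parameters:", best, "cost:", objective(best))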

  18. Genetic optimization of steam multi-turbines system

    International Nuclear Information System (INIS)

    Olszewski, Pawel

    2014-01-01

    Optimization analysis of a partially loaded cogeneration, multiple-stage steam turbine system was numerically investigated using a custom-developed C++ code. The system can be controlled by the following variables: fresh steam temperature, pressure, and the flow rates through all stages of the steam turbines. Five strategies that quantify system operation, four thermodynamic and one economic, were defined and discussed as optimization functions. The mathematical model of the steam turbines calculates steam properties according to the formulation proposed by the International Association for the Properties of Water and Steam. The genetic algorithm GENOCOP was implemented as the solution engine for the non-linear problem with constraint handling. Using the formulated methodology, an example solution for a partially loaded system composed of five steam turbines (30 input variables) with different characteristics was obtained for the five strategies. The genetic algorithm found multiple solutions (various sets of input parameters) giving similar overall results. In a real application this allows appropriate scheduling of machine operation that would equalize the time loading of all system components. Based on these results, three strategies were chosen as the most complex: maximization of first-law energy and exergy efficiency and minimization of total equivalent energy. These strategies can be successfully used in the optimization of real cogeneration applications. - Highlights: • Genetic optimization model for a set of five various steam turbines was presented. • Four various thermodynamic optimization strategies were proposed and discussed. • Operational parameters (steam pressure, temperature, flow) influence was examined. • Genetic algorithm generated optimal solutions giving the best estimator values. • It has been found that a similar energy effect can be obtained for various inputs

  19. Distributed-Computer System Optimizes SRB Joints

    Science.gov (United States)

    Rogers, James L., Jr.; Young, Katherine C.; Barthelemy, Jean-Francois M.

    1991-01-01

    Initial calculations of the redesign of the joint on the solid rocket booster (SRB) that failed during the Space Shuttle tragedy showed the redesign increased weight. Optimization techniques applied to determine whether weight could be reduced while keeping joint closed and limiting stresses. Analysis system developed by use of existing software coupling structural analysis with optimization computations. Software designed to be executable on network of computer workstations. Took advantage of parallelism offered by finite-difference technique of computing gradients to enable several workstations to contribute simultaneously to solution of problem. Key features are effective use of redundancies in hardware and flexible software, enabling optimization to proceed with minimal delay and decreased overall time to completion.
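
    The record notes that the finite-difference gradient computation was parallelized across workstations; a minimal single-machine analogue of the same idea is sketched below with Python's multiprocessing, where the "structural analysis" is a cheap placeholder function and the step size and dimension are assumed.

# Sketch of the parallel finite-difference idea: each perturbed design is an
# independent analysis, so gradient components can be evaluated concurrently.
import numpy as np
from multiprocessing import Pool

def analysis(x):
    """Placeholder for an expensive structural analysis returning a scalar response."""
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.cos(5 * x))

def fd_gradient(x, h=1e-4, workers=4):
    points = [x.copy() for _ in range(len(x))]
    for i, p in enumerate(points):
        p[i] += h
    with Pool(workers) as pool:
        f_plus = pool.map(analysis, points)     # perturbed analyses run in parallel
    f0 = analysis(x)
    return (np.array(f_plus) - f0) / h

if __name__ == "__main__":
    x0 = np.zeros(8)
    print("finite-difference gradient:", fd_gradient(x0))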

  20. Optimization of photovoltaic power systems

    CERN Document Server

    Rekioua, Djamila

    2012-01-01

    Photovoltaic generation is one of the cleanest forms of energy conversion available. One of the advantages offered by solar energy is its potential to provide sustainable electricity in areas not served by the conventional power grid. Optimisation of Photovoltaic Power Systems details explicit modelling, control and optimisation of the most popular stand-alone applications such as pumping, power supply, and desalination. Each section is concluded by an example using the MATLAB(R) and Simulink(R) packages to help the reader understand and evaluate the performance of different photovoltaic systems

  1. Evaluation of CFETR as a Fusion Nuclear Science Facility using multiple system codes

    International Nuclear Information System (INIS)

    Chan, V.S.; Garofalo, A.M.; Leuer, J.A.; Costley, A.E.; Wan, B.N.

    2015-01-01

    This paper presents the results of a multi-system codes benchmarking study of the recently published China Fusion Engineering Test Reactor (CFETR) pre-conceptual design (Wan et al 2014 IEEE Trans. Plasma Sci. 42 495). Two system codes, General Atomics System Code (GASC) and Tokamak Energy System Code (TESC), using different methodologies to arrive at CFETR performance parameters under the same CFETR constraints, show that the correlation between the physics performance and the fusion performance is consistent, and the computed parameters are in good agreement. Optimization of the first wall surface for tritium breeding and the minimization of the machine size are highly compatible. Variations of the plasma currents and profiles lead to changes in the required normalized physics performance; however, they do not significantly affect the optimized size of the machine. GASC and TESC have also been used to explore a lower aspect ratio, larger volume plasma taking advantage of the engineering flexibility in the CFETR design. Assuming the ITER steady-state scenario physics, the larger plasma together with a moderately higher B_T and I_p can result in a high-gain machine with Q_fus ∼ 12 and P_fus ∼ 1 GW, approaching DEMO-like performance. It is concluded that the CFETR baseline mode can meet the minimum goal of the Fusion Nuclear Science Facility (FNSF) mission, and advanced physics will enable it to address comprehensively the outstanding critical technology gaps on the path to a demonstration reactor (DEMO). Before proceeding with CFETR construction, steady-state operation has to be demonstrated, further development is needed to solve the divertor heat load issue, and blankets have to be designed with a tritium breeding ratio (TBR) >1 as a target. (paper)

  2. Multi-agent for manufacturing systems optimization

    Science.gov (United States)

    Ciortea, E. M.; Tulbure, A.; Huţanu, C.-tin

    2016-08-01

    The paper is meant to be a dynamic approach to optimizing manufacturing systems based on multi-agent systems. Multi-agent systems are semiautonomous decision makers that cooperate to optimize the manufacturing process. Increasing production capacity is achieved by developing and implementing efficient and effective control systems based on the current manufacturing process. The multi-agent model proposed in this paper is based on communication between agents whose mechanisms drive autonomous decision making. Methods based on multi-agent programming are applied to flexible manufacturing processes in cooperation with agents. Multi-agent technology and an intelligent manufacturing architecture can lead to the development of strategies for control and optimization of scheduled production resulting from the simulation.

  3. Collaborative Systems Driven Aircraft Configuration Design Optimization

    OpenAIRE

    Shiva Prakasha, Prajwal; Ciampa, Pier Davide; Nagel, Björn

    2016-01-01

    A Collaborative, Inside-Out Aircraft Design approach is presented in this paper. The approach uses physics-based analysis to evaluate the correlations between the airframe design and sub-systems integration from the early design process, and to exploit the synergies within a simultaneous optimization process. Further, the disciplinary analysis modules involved in the optimization task are located in different organizations. Hence, the Airframe and Subsystem design tools are integrated ...

  4. Adaptive stimulus optimization for sensory systems neuroscience

    OpenAIRE

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system...

  5. Space-Frequency Block Code with Matched Rotation for MIMO-OFDM System with Limited Feedback

    Directory of Open Access Journals (Sweden)

    Thushara D. Abhayapala

    2009-01-01

    Full Text Available This paper presents a novel matched rotation precoding (MRP) scheme to design a rate-one space-frequency block code (SFBC) and a multirate SFBC for MIMO-OFDM systems with limited feedback. The proposed rate-one MRP and multirate MRP can always achieve full transmit diversity and optimal system performance for an arbitrary number of antennas, subcarrier intervals, and subcarrier groupings, with limited channel knowledge required by the transmit antennas. The optimization process of the rate-one MRP is simple and easily visualized, so that the optimal rotation angle can be derived explicitly, or even intuitively in some cases. The multirate MRP has a more complex optimization process, but it has a better spectral efficiency and provides a relatively smooth balance between system performance and transmission rate. Simulations show that the proposed SFBC with MRP can overcome the diversity loss for specific propagation scenarios, always improves the system performance, and demonstrates flexible performance with large performance gains. Therefore, the proposed SFBCs with MRP demonstrate flexibility and feasibility, so that they are more suitable for a practical MIMO-OFDM system with dynamic parameters.

  6. Hierarchical sparse coding in the sensory system of Caenorhabditis elegans.

    Science.gov (United States)

    Zaslaver, Alon; Liani, Idan; Shtangel, Oshrat; Ginzburg, Shira; Yee, Lisa; Sternberg, Paul W

    2015-01-27

    Animals with compact sensory systems face an encoding problem where a small number of sensory neurons are required to encode information about its surrounding complex environment. Using Caenorhabditis elegans worms as a model, we ask how chemical stimuli are encoded by a small and highly connected sensory system. We first generated a comprehensive library of transgenic worms where each animal expresses a genetically encoded calcium indicator in individual sensory neurons. This library includes the vast majority of the sensory system in C. elegans. Imaging from individual sensory neurons while subjecting the worms to various stimuli allowed us to compile a comprehensive functional map of the sensory system at single neuron resolution. The functional map reveals that despite the dense wiring, chemosensory neurons represent the environment using sparse codes. Moreover, although anatomically closely connected, chemo- and mechano-sensory neurons are functionally segregated. In addition, the code is hierarchical, where few neurons participate in encoding multiple cues, whereas other sensory neurons are stimulus specific. This encoding strategy may have evolved to mitigate the constraints of a compact sensory system.

  7. Writing systems: not optimal, but good enough.

    Science.gov (United States)

    Seidenberg, Mark S

    2012-10-01

    Languages and writing systems result from satisfying multiple constraints related to learning, comprehension, production, and their biological bases. Orthographies are not optimal because these constraints often conflict, with further deviations due to accidents of history and geography. Things tend to even out because writing systems and the languages they represent exhibit systematic trade-offs between orthographic depth and morphological complexity.

  8. OPTIMIZATION OF COMBINED SEWER OVERFLOW CONTROL SYSTEMS

    Science.gov (United States)

    The highly variable and intermittent pollutant concentrations and flowrates associated with wet-weather events in combined sewersheds necessitate the use of storage-treatment systems to control pollution. An optimized combined-sewer-overflow (CSO) control system requires a manage...

  9. Performance enhancement of successive interference cancellation scheme based on spectral amplitude coding for optical code-division multiple-access systems using Hadamard codes

    Science.gov (United States)

    Eltaif, Tawfig; Shalaby, Hossam M. H.; Shaari, Sahbudin; Hamarsheh, Mohammad M. N.

    2009-04-01

    A successive interference cancellation scheme is applied to optical code-division multiple-access (OCDMA) systems with spectral amplitude coding (SAC). A detailed analysis of this system, with Hadamard codes used as signature sequences, is presented. The system can easily remove the effect of the strongest signal at each stage of the cancellation process. In addition, a simulation of the proposed system is performed in order to validate the theoretical results. The system shows a small bit error rate at a large number of active users compared to the conventional SAC-OCDMA system. Our results reveal that the proposed system is efficient in eliminating the effect of multiple-user interference and in enhancing the overall performance.
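
    The sketch below is a toy illustration of the successive-cancellation mechanics only, not the authors' receiver: the paper uses Hadamard signatures with balanced spectral amplitude detection, whereas this sketch uses generic sparse unipolar signatures so that multiple-access interference is visible and its subtraction actually matters. All code lengths, weights, powers and the noise level are invented.

# Toy successive interference cancellation: detect the strongest remaining user by
# correlation, reconstruct its contribution and subtract it, then repeat.
import numpy as np

rng = np.random.default_rng(2)
n_chips, n_users = 32, 4
codes = np.zeros((n_users, n_chips))
for k in range(n_users):
    codes[k, rng.choice(n_chips, size=4, replace=False)] = 1.0   # weight-4 signatures
energy = codes.sum(axis=1)                                       # code weights
true_bits = rng.integers(0, 2, size=n_users)
powers = np.array([1.0, 0.7, 0.5, 0.3])                          # unequal received powers
received = (powers * true_bits) @ codes + rng.normal(0, 0.05, size=n_chips)

residual = received.copy()
decided = {}
for _ in range(n_users):
    corr = {k: residual @ codes[k] / energy[k]
            for k in range(n_users) if k not in decided}
    k_star = max(corr, key=lambda k: abs(corr[k]))       # strongest remaining user
    bit = 1 if corr[k_star] > powers[k_star] / 2 else 0
    decided[k_star] = bit
    residual -= powers[k_star] * bit * codes[k_star]     # cancel detected contribution
print("true bits   :", true_bits.tolist())
print("decided bits:", [decided[k] for k in range(n_users)])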

  10. Application of the code Slac to the study of Ion Extraction Systems in Neutral Injectors

    International Nuclear Information System (INIS)

    Garcia, M.; Liniers, M.; Guasp, J.

    1997-01-01

    In this study, different extraction geometries for intense ion beams have been analyzed with the code SLAC, in view of its possible application to the neutral injectors of TJ-II. With this aim, we have introduced several modifications in the code in order to correctly simulate the transition between the ion source plasma and the extraction region, which has a great impact on the beam optics. These modifications include the introduction of a population of Boltzmann electrons in the transition region, and the implementation of an option to simulate the thermal velocity of the ions in the source. We have found better agreement between the results obtained with the new version of the code and the experimental data in two well-known systems. With this new version of the code two different studies have been carried out: first, an optimization of the ATF injector extraction system for its use on TJ-II, leading to an optimum value of the gap in the energy range 30-40 keV, and second, a systematic study of extraction geometries at 40 keV. As a result of this second study we have found the combinations of parameters that can be used under different working conditions (e.g. different pulse lengths), leading to acceptable values of the beam divergence. (Author)

  11. Simplified modeling and code usage in the PASC-3 code system by the introduction of a programming environment

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Oppe, J.; Oudshoorn, H.L.; Slobben, J.

    1991-06-01

    A brief description is given of the PASC-3 (Petten-AMPX-SCALE) Reactor Physics code system and associated UNIPASC work environment. The PASC-3 code system is used for criticality and reactor calculations and consists of a selection from the Oak Ridge National Laboratory AMPX-SCALE-3 code collection complemented with a number of additional codes and nuclear data bases. The original codes have been adapted to run under the UNIX operating system. The recommended nuclear data base is a complete 219 group cross section library derived from JEF-1 of which some benchmark results are presented. By the addition of the UNIPASC work environment the usage of the code system is greatly simplified. Complex chains of programs can easily be coupled together to form a single job. In addition, the model parameters can be represented by variables instead of literal values which enhances the readability and may improve the integrity of the code inputs. (author). 8 refs.; 6 figs.; 1 tab

  12. SEJITS: embedded specializers to turn patterns-based designs into optimized parallel code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    All software should be parallel software. This is a natural result of the transition to a many-core world. For a small fraction of the world's programmers (efficiency programmers), this is not a problem. They enjoy mapping algorithms onto the details of a particular system and are well served by low-level languages and OpenMP, MPI, or OpenCL. Most programmers, however, are "domain specialists" who write code. They are too busy working in their domain of choice (such as physics) to master the intricacies of each computer they use. How do we make these programmers productive without giving up performance? We have been working with a team at UC Berkeley's ParLab to address this problem. The key is a clear software architecture expressed in terms of design patterns that exposes the concurrency in a problem. The resulting code is written using a patterns-based framework within a high-level, productivity language (such as Python). Then a separate system is used by a small group o...

  13. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    Full Text Available Purpose. To demonstrate the feasibility of the proposed integrated optimization of various MTS parameters to reduce capital investments as well as operational and maintenance expenses, which would make the use of MTS reasonable. At present, Maglev Transport Systems (MTS) for High-Speed Ground Transportation (HSGT) are hardly applied; significant capital investments and high operational and maintenance costs are the main reasons why. Therefore, this article justifies the use of the Theory of Complex Optimization of Transport (TCOT), developed by one of the co-authors, to reduce MTS costs. Methodology. According to TCOT, the authors developed an abstract model of the generalized transport system (AMSTG). This model mathematically determines the optimal balance between all components of the system and thus provides the ultimate adaptation of any transport system to the conditions of its application. To identify areas for effective use of MTS, the authors developed, on the basis of TCOT, a dynamic model of the distribution and expansion of spheres of effective use of transport systems (DMRRSEPTS). Based on this model, the most efficient transport system was selected for each individual track. The main criterion for estimating the efficiency of MTS application is the specific transportation tariff obtained from the payback calculation of the total given expenses over a standard payback period or the term of a credit. Findings. The completed calculations for four types of MTS (TRANSRAPID, MLX01, TRANSMAG and TRANSPROGRESS) demonstrated the efficiency of the integrated optimization of the parameters of such systems. This research made it possible to expand the scope of effective use of MTS by about a factor of two. The achieved results were presented at many international conferences in Germany, Switzerland, United States, China, Ukraine, etc. Using MTS as an

  14. Thermoelectric power generation system optimization studies

    Science.gov (United States)

    Karri, Madhav A.

    A significant amount of the energy we consume each year is rejected as waste heat to the ambient. Conservative estimates place the quantity of energy wasted at about 70%. Converting the waste heat into electrical power would be convenient and effective for a number of primary and secondary applications. A viable solution for converting waste heat into electrical energy is to use thermoelectric power conversion. Thermoelectric power generation is based on solid state technology with no moving parts and works on the principle of the Seebeck effect. In this work a thermoelectric generator (TEG) system simulator was developed to perform various parametric and system optimization studies. Optimization studies were performed to determine the effect of system size, exhaust and coolant flow conditions, and thermoelectric material on the net gains produced by the TEG system and on the optimum TEG system design. A sports utility vehicle was used as a case study for the application of TEG in mobile systems.

  15. Complex energy system management using optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bridgeman, Stuart; Hurdowar-Castro, Diana; Allen, Rick; Olason, Tryggvi; Welt, Francois

    2010-09-15

    Modern energy systems are often very complex with respect to the mix of generation sources, energy storage, transmission, and avenues to market. Historically, power was provided by government organizations to load centers, and pricing was provided in a regulatory manner. In recent years, this process has been displaced by the independent system operator (ISO). This complexity makes the operation of these systems very difficult, since the components of the system are interdependent. Consequently, computer-based large-scale simulation and optimization methods like Decision Support Systems are now being used. This paper discusses the application of a DSS to operations and planning systems.

  16. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    Considers the problem of sensor configuration for complex systems. Our approach involves definition of an appropriate optimality criterion or performance measure, and description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs. The procedure for sensor configuration is based on simultaneous perturbation stochastic approximation (SPSA). SPSA avoids the need for detailed modeling of the sensor response by simply relying on observed responses as obtained by limited experimentation with test sensor configurations. We illustrate the approach with the optimal placement of acoustic sensors for signal detection in structures. This includes both a computer simulation study for an aluminum plate, and real...
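
    A bare-bones SPSA iteration of the type the abstract refers to is sketched below, applied to a stand-in sensor-placement score (six sensor coordinates on a line with a made-up coverage/crowding trade-off); the objective function and gain sequences are illustrative assumptions, not those of the paper.

# Minimal SPSA: two loss evaluations per iteration give a gradient estimate from a
# random simultaneous perturbation of all parameters.
import numpy as np

rng = np.random.default_rng(3)

def loss(theta):
    """Placeholder score: reward coverage of [0, 1], penalize clustered sensors."""
    theta = np.clip(theta, 0.0, 1.0)
    grid = np.linspace(0, 1, 50)
    coverage = -np.sum(np.exp(-10 * np.abs(theta[:, None] - grid)).max(axis=0))
    crowding = np.sum(np.exp(-50 * (theta[:, None] - theta[None, :]) ** 2)) - len(theta)
    return coverage + 0.5 * crowding

theta = rng.uniform(0, 1, size=6)
a, c, A, alpha, gamma = 0.05, 0.1, 20.0, 0.602, 0.101   # common SPSA gain choices
for k in range(500):
    ak = a / (k + 1 + A) ** alpha
    ck = c / (k + 1) ** gamma
    delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Rademacher perturbation
    g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck * delta)
    theta = np.clip(theta - ak * g_hat, 0.0, 1.0)
print("sensor positions:", np.sort(theta).round(3))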

  17. EquiFACS: The Equine Facial Action Coding System.

    Directory of Open Access Journals (Sweden)

    Jen Wathan

    Full Text Available Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS and consistently code behavioural sequences was high--and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats. EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

  18. Multiphysics simulation electromechanical system applications and optimization

    CERN Document Server

    Dede, Ercan M; Nomura, Tsuyoshi

    2014-01-01

    This book highlights a unique combination of numerical tools and strategies for handling the challenges of multiphysics simulation, with a specific focus on electromechanical systems as the target application. Features: introduces the concept of design via simulation, along with the role of multiphysics simulation in today's engineering environment; discusses the importance of structural optimization techniques in the design and development of electromechanical systems; provides an overview of the physics commonly involved with electromechanical systems for applications such as electronics, ma

  19. Reward optimization of a repairable system

    International Nuclear Information System (INIS)

    Castro, I.T.; Perez-Ocon, R.

    2006-01-01

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but they lead to replacement after a fixed number of repairs. Operating and repair times follow phase-type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs which maximizes the long-run average reward per unit time. To this end, the optimal number is determined and it is obtained by efficient numerical procedures

  20. Reward optimization of a repairable system

    Energy Technology Data Exchange (ETDEWEB)

    Castro, I.T. [Departamento de Matematicas, Facultad de Veterinaria, Universidad de Extremadura, Avenida de la Universidad, s/n. 10071 Caceres (Spain)]. E-mail: inmatorres@unex.es; Perez-Ocon, R. [Departamento de Estadistica e Investigacion Operativa, Facultad de Ciencias, Universidad de Granada, Avenida de Severo Ochoa, s/n. 18071 Granada (Spain)]. E-mail: rperezo@ugr.es

    2006-03-15

    This paper analyzes a system subject to repairable and non-repairable failures. Non-repairable failures lead to replacement of the system. Repairable failures first lead to repair, but they lead to replacement after a fixed number of repairs. Operating and repair times follow phase-type distributions (PH-distributions) and the pattern of the operating times is modelled by a geometric process. In this context, the problem is to find the optimal number of repairs which maximizes the long-run average reward per unit time. To this end, the optimal number is determined and it is obtained by efficient numerical procedures.
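
    A numerical sketch of the optimization target is given below, under simplifying assumptions that are not the paper's: expected uptime after k-1 repairs follows a geometric process mu*a^(k-1) with 0 < a < 1, repairs and replacement have fixed expected durations and costs, non-repairable failures are ignored, and the reward is r per operating hour; the long-run average reward of a "replace at the N-th failure" policy is then the renewal-reward ratio, maximized by enumeration. All parameter values are invented.

# Renewal-reward evaluation of "replace at the N-th failure" policies.
import numpy as np

mu, a = 500.0, 0.85            # first expected uptime (h) and geometric ratio (assumed)
d_rep, d_new = 24.0, 120.0     # expected repair / replacement durations (h, assumed)
c_rep, c_new = 2e3, 5e4        # repair / replacement costs (assumed)
r = 40.0                       # reward per operating hour (assumed)

def avg_reward(N):
    """Long-run average reward per unit time of replacing at the N-th failure."""
    uptimes = mu * a ** np.arange(N)                     # N operating periods
    reward = r * uptimes.sum() - c_rep * (N - 1) - c_new
    length = uptimes.sum() + d_rep * (N - 1) + d_new
    return reward / length

candidates = np.arange(1, 41)
values = np.array([avg_reward(N) for N in candidates])
best = candidates[np.argmax(values)]
print(f"optimal number of repairs before replacement: {best - 1} "
      f"(average reward {values.max():.2f} per hour)")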

  1. Topology optimization of nano-photonic systems

    DEFF Research Database (Denmark)

    Elesin, Yuriy; Wang, Fengwen; Andkjær, Jacob Anders

    2012-01-01

    We describe recent developments within nano-photonic systems design based on topology optimization. Applications include linear and non-linear optical waveguides, slow-light waveguides, as well as all-dielectric cloaks that minimize scattering or back-scattering from hard obstacles.

  2. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  3. New constructions of MDS codes with complementary duals

    OpenAIRE

    Chen, Bocong; Liu, Hongwei

    2017-01-01

    Linear complementary-dual (LCD for short) codes are linear codes that intersect with their duals trivially. LCD codes have been used in certain communication systems. It is recently found that LCD codes can be applied in cryptography. This application of LCD codes renewed the interest in the construction of LCD codes having a large minimum distance. MDS codes are optimal in the sense that the minimum distance cannot be improved for given length and code size. Constructing LCD MDS codes is thu...
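
    A hedged sketch of a classical criterion (due to Massey): a linear code with generator matrix G is LCD exactly when G·G^T is nonsingular. The snippet below checks this over GF(2) for compactness with an arbitrary small example matrix; the MDS constructions discussed in the paper live over larger fields, where the same rank test applies.

# Check the LCD property of a binary linear code via the rank of G*G^T over GF(2).
import numpy as np

def rank_gf2(M):
    """Row-reduce a 0/1 matrix over GF(2) and return its rank."""
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def is_lcd(G):
    GGt = (G @ G.T) % 2
    return rank_gf2(GGt) == G.shape[0]

# Example generator matrix of a small [6, 3] binary code (illustrative choice only)
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.int64)
print("LCD over GF(2)?", is_lcd(G))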

  4. Electronic health record standards, coding systems, frameworks, and infrastructures

    CERN Document Server

    Sinha, Pradeep K; Bendale, Prashant; Mantri, Manisha; Dande, Atreya

    2013-01-01

    Discover How Electronic Health Records Are Built to Drive the Next Generation of Healthcare Delivery The increased role of IT in the healthcare sector has led to the coining of a new phrase "health informatics," which deals with the use of IT for better healthcare services. Health informatics applications often involve maintaining the health records of individuals, in digital form, which is referred to as an Electronic Health Record (EHR). Building and implementing an EHR infrastructure requires an understanding of healthcare standards, coding systems, and frameworks. This book provides an

  5. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    Science.gov (United States)

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of the theories explaining the present structure of the canonical genetic code assumes that it was optimized to minimize the harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to test this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EAs) seem to be one such promising approach. This class of methods is founded on both mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective search for potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position-based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes the costs of amino acid replacement regarding their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, the simulations with the crossover operator optimize the fitness function in a smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
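
    A sketch of a position-based crossover for permutation-style encodings, the operator type mentioned for the most restricted model, is given below: a random subset of positions is inherited from the first parent and the remaining labels are filled in the order they appear in the second parent. Representing a candidate code simply as a permutation of 20 amino-acid labels over codon blocks is an assumption for illustration; the paper's representation and fitness are richer.

# Position-based crossover (POS) on permutations of amino-acid labels.
import random

random.seed(4)
AMINO_ACIDS = list("ARNDCQEGHILKMFPSTWYV")   # the 20 standard amino-acid letters

def position_based_crossover(parent1, parent2, p_keep=0.5):
    """Keep a random subset of positions from parent1; fill the rest in parent2's order."""
    n = len(parent1)
    keep = [i for i in range(n) if random.random() < p_keep]
    child = [None] * n
    for i in keep:
        child[i] = parent1[i]
    placed = set(child[i] for i in keep)
    fill = iter(g for g in parent2 if g not in placed)
    for i in range(n):
        if child[i] is None:
            child[i] = next(fill)
    return child

p1 = random.sample(AMINO_ACIDS, len(AMINO_ACIDS))
p2 = random.sample(AMINO_ACIDS, len(AMINO_ACIDS))
child = position_based_crossover(p1, p2)
print("parent 1:", "".join(p1))
print("parent 2:", "".join(p2))
print("child   :", "".join(child))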

  6. Nonterminals, homomorphisms and codings in different variations of OL-systems. II. Nondeterministic systems

    DEFF Research Database (Denmark)

    Nielsen, Mogens; Rozenberg, Grzegorz; Salomaa, Arto

    1974-01-01

    Continuing the work begun in Part I of this paper, we consider now variations of nondeterministic OL-systems. The present Part II of the paper contains a systematic classification of the effect of nonterminals, codings, weak codings, nonerasing homomorphisms and homomorphisms for all basic variations...

  7. An asynchronous writing method for restart files in the gysela code in prevision of exascale systems*

    Directory of Open Access Journals (Sweden)

    Thomine O.

    2013-12-01

    Full Text Available The present work deals with an optimization procedure developed in the full-f global GYrokinetic SEmi-LAgrangian code (GYSELA). Optimizing the writing of the restart files is necessary to reduce the computational impact of crashes. These files require a very large memory space, particularly so for very large mesh sizes. The limited bandwidth of the data pipe between the computing nodes and the storage system induces a non-scalable part in the GYSELA code, which increases with the mesh size. Indeed, the transfer time from RAM to disk depends linearly on the file size. A non-synchronized file-writing procedure is therefore crucial. A new GYSELA module has been developed. This asynchronous procedure allows frequent writing of the restart files, whilst preventing a severe slowdown due to the limited writing bandwidth. This method has been improved to generate a checksum control of the restart files, and to automatically rerun the code in case of a crash for any cause.
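
    A minimal single-process analogue of the idea is sketched below (not GYSELA's MPI implementation): a background thread writes a snapshot of the state to disk together with a SHA-256 checksum while the main loop keeps computing, and on restart the checksum is verified before the data are used. File names and the checkpoint frequency are assumptions.

# Asynchronous checkpoint writing with checksum verification.
import hashlib
import threading
import numpy as np

def write_checkpoint(state, path):
    np.save(path, state)                              # creates path + ".npy"
    digest = hashlib.sha256(open(path + ".npy", "rb").read()).hexdigest()
    with open(path + ".sha256", "w") as f:
        f.write(digest)

def load_checkpoint(path):
    data = open(path + ".npy", "rb").read()
    expected = open(path + ".sha256").read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError("corrupted restart file: " + path)
    return np.load(path + ".npy")

state = np.zeros(1_000_000)
writer = None
for step in range(1, 11):
    state += 1.0                                       # stand-in for the physics step
    if step % 5 == 0:                                  # checkpoint frequency (assumed)
        if writer is not None:
            writer.join()                              # previous write must be finished
        snapshot = state.copy()                        # freeze the data to be written
        writer = threading.Thread(target=write_checkpoint, args=(snapshot, "restart"))
        writer.start()
if writer is not None:
    writer.join()
print("restart mean value:", load_checkpoint("restart").mean())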

  8. The Optimization of Dispersion Properties of Photonic Crystal Fibers Using a Real-Coded Genetic Algorithm

    International Nuclear Information System (INIS)

    Yin Guo-Bing; Li Shu-Guang; Liu Shuo; Wang Xiao-Yan

    2011-01-01

    A real-coded genetic algorithm (GA) combined with a fully vectorial effective index method (FVEIM) is employed to theoretically design structures of photonic crystal fibers (PCFs) with user-defined dispersion properties. The structures of PCFs whose solid cores are doped with GeO2, with zero dispersion at 0.7-3.9 μm, are optimized; the flat dispersion ranges through the R+L+C band, and the negative dispersion is -1576.26 ps·km-1·nm-1 at 1.55 μm. Analyses show that the zero-dispersion wavelength (ZDW) could be one of many ZDWs for the same fiber structure; PCFs could flatten the dispersion through the R+L+C band with a single air-hole diameter; and negative dispersion requires a high air-filling fraction at 1.55 μm. The method is proved to be elegant for solving this inverse problem. (fundamental areas of phenomenology (including applications))

  9. The Optimization of Dispersion Properties of Photonic Crystal Fibers Using a Real-Coded Genetic Algorithm

    Science.gov (United States)

    Yin, Guo-Bing; Li, Shu-Guang; Liu, Shuo; Wang, Xiao-Yan

    2011-06-01

    A real-coded genetic algorithm (GA) combined with a fully vectorial effective index method (FVEIM) is employed to theoretically design structures of photonic crystal fibers (PCFs) with user-defined dispersion properties. The structures of PCFs whose solid cores are doped with GeO2, with zero dispersion at 0.7-3.9 μm, are optimized; the flat dispersion ranges through the R+L+C band, and the negative dispersion is -1576.26 ps·km-1·nm-1 at 1.55 μm. Analyses show that the zero-dispersion wavelength (ZDW) could be one of many ZDWs for the same fiber structure; PCFs could flatten the dispersion through the R+L+C band with a single air-hole diameter; and negative dispersion requires a high air-filling fraction at 1.55 μm. The method is proved to be elegant for solving this inverse problem.

  10. Adaptive stimulus optimization for sensory systems neuroscience.

    Science.gov (United States)

    DiMattina, Christopher; Zhang, Kechen

    2013-01-01

    In this paper, we review several lines of recent work aimed at developing practical methods for adaptive on-line stimulus generation for sensory neurophysiology. We consider various experimental paradigms where on-line stimulus optimization is utilized, including the classical optimal stimulus paradigm where the goal of experiments is to identify a stimulus which maximizes neural responses, the iso-response paradigm which finds sets of stimuli giving rise to constant responses, and the system identification paradigm where the experimental goal is to estimate and possibly compare sensory processing models. We discuss various theoretical and practical aspects of adaptive firing rate optimization, including optimization with stimulus space constraints, firing rate adaptation, and possible network constraints on the optimal stimulus. We consider the problem of system identification, and show how accurate estimation of non-linear models can be highly dependent on the stimulus set used to probe the network. We suggest that optimizing stimuli for accurate model estimation may make it possible to successfully identify non-linear models which are otherwise intractable, and summarize several recent studies of this type. Finally, we present a two-stage stimulus design procedure which combines the dual goals of model estimation and model comparison and may be especially useful for system identification experiments where the appropriate model is unknown beforehand. We propose that fast, on-line stimulus optimization enabled by increasing computer power can make it practical to move sensory neuroscience away from a descriptive paradigm and toward a new paradigm of real-time model estimation and comparison.

  11. Systemization of burnup sensitivity analysis code (2) (Contract research)

    International Nuclear Information System (INIS)

    Tatsumi, Masahiro; Hyoudou, Hideaki

    2008-08-01

    Towards the practical use of fast reactors, it is very important to improve the prediction accuracy of neutronic properties in LMFBR cores, from the viewpoint of improving plant economic efficiency with rationally high-performance cores and of improving reliability and safety margins. A distinct improvement in the accuracy of nuclear core design has been accomplished by the development of an adjusted nuclear library using the cross-section adjustment method, in which the results of critical experiments such as JUPITER are reflected. In the design of large LMFBR cores, however, it is important to accurately estimate not only neutronic characteristics, for example reaction rate distribution and control rod worth, but also burnup characteristics, for example burnup reactivity loss, breeding ratio and so on. For this purpose, it is desired to improve the prediction accuracy of burnup characteristics using the data widely obtained in actual cores such as the experimental fast reactor 'JOYO'. Burnup sensitivity analysis is needed to effectively use burnup characteristics data from actual cores based on the cross-section adjustment method. So far, a burnup sensitivity analysis code, SAGEP-BURN, has been developed and its effectiveness confirmed. However, the analysis sequence becomes inefficient because of the heavy burden on users due to the complexity of burnup sensitivity theory and the limitations of the system. It is also desired to rearrange the system for future revision, since it is becoming difficult to implement new functions in the existing large system. It is not sufficient to unify each computational component, for the following reason: the computational sequence may be changed for each item being analyzed, or for purposes such as interpretation of physical meaning. Therefore, the current burnup sensitivity analysis code needs to be systemized with component blocks of functionality that can be divided or constructed on occasion

  12. Hydroelectric Optimized System Sensitivity to Climate

    Science.gov (United States)

    Howard, J. C.; Howard, C. D.

    2009-12-01

    This paper compares the response of a large hydro system, globally optimized for daily operations, under a range of reservoir system inflows. The modeled system consists of Projects and hydro operating constraints on the South Saskatchewan River, Lake Winnipeg, Southern Indian Lake, Churchill Diversion, Red River, Winnipeg River, and the Nelson River. The river system is continental in scale, stretching from the Rocky Mountains to Hudson Bay. The hydro storage is large enough to operate with a two year cycle, which includes freezeup conditions. The objective is to maximize seasonal value of energy generation over a two year time horizon. Linear and quadratic constraints represent reservoir stage-storage curves, tailwater stage-discharge curves, transient river routing, and seasonally dependent environmental constraints on operations. This paper describes the optimization modeling approaches used to represent an actual physical system and to accommodate uncertainties in the historical datasets used for calibration. The results are hypothetical, not a forecast.

  13. Improved differential pulse code modulation-block truncation coding method adopting two-level mean squared error near-optimal quantizers

    Science.gov (United States)

    Choi, Kang-Sun; Ko, Sung-Jea

    2011-04-01

    The conventional hybrid method of block truncation coding (BTC) and differential pulse code modulation (DPCM), namely the DPCM-BTC method, offers better rate-distortion performance than the standard BTC. However, the quantization error in the hybrid method is easily increased for large block sizes due to the use of two representative levels in BTC. In this paper, we first derive a bivariate quadratic function representing the mean squared error (MSE) between the original block and the block reconstructed in the DPCM framework. The near-optimal representatives obtained by quantizing the minimum of the derived function can prevent the rapid increase of the quantization error. Experimental results show that the proposed method improves peak signal-to-noise ratio performance by up to 2 dB at 1.5 bit/pixel (bpp) and by 1.2 dB even at a low bit rate of 1.1 bpp as compared with the DPCM-BTC method without optimization. Even with the additional computation for the quantizer optimization, the computational complexity of the proposed method is still much lower than those of transform-based compression techniques.
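
    A simplified numerical illustration of the underlying idea (plain BTC on one block, without the DPCM loop of the paper): for a fixed high/low partition, the block MSE is quadratic in the two representative levels and is minimized by the group means, whereas standard BTC preserves the block mean and variance. The 4x4 block values below are arbitrary.

# Compare MSE-optimal two-level representatives with classical moment-preserving BTC.
import numpy as np

block = np.array([[12, 15, 80, 84],
                  [14, 16, 82, 90],
                  [10, 13, 77, 85],
                  [11, 18, 79, 88]], dtype=float)

m = block.mean()
high = block >= m                      # bit plane: True = "high" pixel, False = "low"

# MSE-optimal two-level representatives: the means of each group
a_opt, b_opt = block[~high].mean(), block[high].mean()

# Classical BTC representatives: preserve first and second sample moments
q = high.sum()
sigma = block.std()
a_btc = m - sigma * np.sqrt(q / (block.size - q))
b_btc = m + sigma * np.sqrt((block.size - q) / q)

def mse(a, b):
    recon = np.where(high, b, a)
    return np.mean((block - recon) ** 2)

print(f"MSE with group means      : {mse(a_opt, b_opt):.3f}")
print(f"MSE with moment-preserving: {mse(a_btc, b_btc):.3f}")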

  14. Quality assurance and verification of the MACCS [MELCOR Accident Consequence Code System] code, Version 1.5

    International Nuclear Information System (INIS)

    Dobbe, C.A.; Carlson, E.R.; Marshall, N.H.; Marwil, E.S.; Tolli, J.E.

    1990-02-01

    An independent quality assurance (QA) and verification of Version 1.5 of the MELCOR Accident Consequence Code System (MACCS) was performed. The QA and verification involved examination of the code and associated documentation for consistent and correct implementation of the models in an error-free FORTRAN computer code. The QA and verification was not intended to determine either the adequacy or appropriateness of the models that are used in MACCS 1.5. The reviews uncovered errors which were fixed by the SNL MACCS code development staff prior to the release of MACCS 1.5. Some difficulties related to documentation improvement and code restructuring are also presented. The QA and verification process concluded that Version 1.5 of the MACCS code, within the scope and limitations of the models implemented in the code, is essentially error free and ready for widespread use. 15 refs., 11 tabs

  15. Optimized FIR filters for digital pulse compression of biphase codes with low sidelobes

    Science.gov (United States)

    Sanal, M.; Kuloor, R.; Sagayaraj, M. J.

    In miniaturized radars, where power, real estate, speed and low cost are tight constraints and Doppler tolerance is not a major concern, biphase codes are popular and an FIR filter is used for the digital pulse compression (DPC) implementation to achieve the required range resolution. The disadvantage of the low peak-to-sidelobe ratio (PSR) of biphase codes can be overcome by linear programming, either for a single-stage mismatched filter or for a two-stage approach, i.e. a matched filter followed by a sidelobe suppression filter (SSF). Linear programming (LP) calls for longer filter lengths to obtain a desirable PSR. The longer the filter, the greater the number of multipliers, and hence the greater the logic resource requirement in the FPGA, which often becomes a design challenge for system-on-chip (SoC) requirements. This requirement for multipliers can be brought down by clustering the tap weights of the filter with the k-means clustering algorithm, at the cost of a few dB deterioration in PSR. Using the cluster centroids as tap weights greatly reduces the FPGA logic used for FIR filters by reducing the number of distinct weight multipliers. Since k-means clustering is an iterative algorithm, the centroids differ between runs and produce different clusterings of the weights; sometimes it may even happen that a smaller number of multipliers and a shorter filter provide a better PSR.
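
    A sketch of the tap-clustering idea follows. The mismatched filter here is a simple least-squares sidelobe-suppression design for the Barker-13 code rather than the authors' LP design; its real-valued taps are clustered with a small hand-written k-means, and each tap is replaced by its cluster centroid so that only k distinct multiplier values are needed. The filter length and cluster count are arbitrary choices.

# Least-squares mismatched filter for Barker-13, then k-means clustering of its taps.
import numpy as np

rng = np.random.default_rng(5)
code = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)  # Barker-13
n_taps = 39
A = np.zeros((n_taps + len(code) - 1, n_taps))
for j in range(n_taps):
    A[j:j + len(code), j] = code                 # convolution matrix of the code
d = np.zeros(A.shape[0]); d[A.shape[0] // 2] = len(code)   # desired: single mainlobe
w, *_ = np.linalg.lstsq(A, d, rcond=None)        # least-squares mismatched filter

def psr_db(weights):
    y = A @ weights
    peak = np.abs(y).max()
    sidelobe = np.sort(np.abs(y))[-2]            # largest value apart from the peak
    return 20 * np.log10(peak / sidelobe)

def kmeans_1d(x, k, iters=50):
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == c].mean() if np.any(labels == c) else centers[c]
                            for c in range(k)])
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return centers[labels]

w_clustered = kmeans_1d(w, k=8)
print(f"PSR original taps : {psr_db(w):5.2f} dB ({len(np.unique(w))} distinct weights)")
print(f"PSR clustered taps: {psr_db(w_clustered):5.2f} dB (8 distinct weights)")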

  16. Evaluation of system codes for analyzing naturally circulating gas loop

    International Nuclear Information System (INIS)

    Lee, Jeong Ik; No, Hee Cheon; Hejzlar, Pavel

    2009-01-01

    Steady-state natural circulation data obtained in a 7 m-tall experimental loop with carbon dioxide and nitrogen are presented in this paper. The loop was originally designed to encompass operating range of a prototype gas-cooled fast reactor passive decay heat removal system, but the results and conclusions are applicable to any natural circulation loop operating in regimes having buoyancy and acceleration parameters within the ranges validated in this loop. Natural circulation steady-state data are compared to numerical predictions by two system analysis codes: GAMMA and RELAP5-3D. GAMMA is a computational tool for predicting various transients which can potentially occur in a gas-cooled reactor. The code has a capability of analyzing multi-dimensional multi-component mixtures and includes models for friction, heat transfer, chemical reaction, and multi-component molecular diffusion. Natural circulation data with two gases show that the loop operates in the deteriorated turbulent heat transfer (DTHT) regime which exhibits substantially reduced heat transfer coefficients compared to the forced turbulent flow. The GAMMA code with an original heat transfer package predicted conservative results in terms of peak wall temperature. However, the estimated peak location did not successfully match the data. Even though GAMMA's original heat transfer package included mixed-convection regime, which is a part of the DTHT regime, the results showed that the original heat transfer package could not reproduce the data with sufficient accuracy. After implementing a recently developed correlation and corresponding heat transfer regime map into GAMMA to cover the whole range of the DTHT regime, we obtained better agreement with the data. RELAP5-3D results are discussed in parallel.

  17. Optimization and Control of Electric Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lesieutre, Bernard C. [Univ. of Wisconsin, Madison, WI (United States); Molzahn, Daniel K. [Univ. of Wisconsin, Madison, WI (United States)

    2014-10-17

    The analysis and optimization needs for planning and operation of the electric power system are challenging due to the scale and the form of model representations. The connected network spans the continent and the mathematical models are inherently nonlinear. Traditionally, computational limits have necessitated the use of very simplified models for grid analysis, and this has resulted in either less secure operation, or less efficient operation, or both. The research conducted in this project advances techniques for power system optimization problems that will enhance reliable and efficient operation. The results of this work appear in numerous publications and address different application problems including optimal power flow (OPF), unit commitment, demand response, reliability margins, planning, transmission expansion, as well as general tools and algorithms.

  18. OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation

    Directory of Open Access Journals (Sweden)

    Mehdi Abolfazli

    2013-04-01

    Full Text Available In this paper a suitable mathematical model of the two-terminal HVDC system is provided for optimal power flow (OPF) and for OPF-based optimal location, using a power injection model. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based algorithm for the optimal location of two two-terminal HVDC systems, with the total fuel cost and active power losses minimized as the objective function. The optimization framework is modeled as non-linear programming (NLP) and solved with the Matlab and GAMS software. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of the two two-terminal HVDC systems to improve power system operation. Furthermore, the two-terminal HVDC systems are compared with the PST and the OUPFC in terms of the economical and technical aspects of power system operation.
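
    As a loose illustration of the kind of NLP such a framework solves, here is a toy three-generator economic dispatch in SciPy; the quadratic cost coefficients and the 300 MW demand are invented, and none of the paper's HVDC power-injection modelling is included:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    a = np.array([0.010, 0.012, 0.008])   # quadratic fuel cost, $/MW^2 (made up)
    b = np.array([12.0, 10.0, 14.0])      # linear fuel cost, $/MW (made up)
    demand = 300.0                        # MW
    bounds = [(20.0, 150.0)] * 3          # generator limits

    cost = lambda p: np.sum(a * p**2 + b * p)
    balance = {"type": "eq", "fun": lambda p: np.sum(p) - demand}

    res = minimize(cost, x0=[100.0, 100.0, 100.0], bounds=bounds, constraints=[balance])
    print("dispatch [MW]:", np.round(res.x, 1), " total cost: $%.0f" % res.fun)
    ```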

  19. Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions

    Science.gov (United States)

    Gilland, James H.

    1991-01-01

    The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system designs without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfers and Mars missions have been derived. The analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
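
    For flavour, a back-of-the-envelope sweep of the classical power-limited specific-impulse trade; this assumes a constant thruster efficiency (the paper additionally lets efficiency vary with specific impulse), and every number below is illustrative:

    ```python
    import numpy as np

    g0    = 9.81           # m/s^2
    dv    = 8.0e3          # mission delta-v, m/s (illustrative)
    alpha = 0.030          # power-plant specific mass, kg/W (illustrative)
    eta   = 0.6            # thruster efficiency, assumed constant
    t     = 200 * 86400.0  # thrust time, s

    v_ch = np.sqrt(2.0 * eta * t / alpha)        # characteristic velocity
    isp  = np.linspace(1000.0, 10000.0, 2000)    # candidate specific impulses, s
    ve   = g0 * isp

    # Payload fraction = exp(-dv/ve) - (ve/v_ch)^2 * (1 - exp(-dv/ve))
    f_pl = np.exp(-dv / ve) - (ve / v_ch) ** 2 * (1.0 - np.exp(-dv / ve))

    best = np.argmax(f_pl)
    print("optimum Isp ~ %.0f s, payload fraction ~ %.2f" % (isp[best], f_pl[best]))
    ```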

  20. Neutron kinetics for system thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Diamond, D.J.

    1996-01-01

    There is general agreement that for many light water reactor (LWR) calculations for licensing safety analysis, probabilistic risk assessment, operational support, and training, it is necessary to use a multidimensional neutron kinetics model coupled to a thermal-hydraulics model in order to obtain satisfactory results. This need coincides with the fact that in recent years there has been considerable research and development in this field, with modelers taking advantage of the increase in computing power that has become available. This progress has now led to coupling multidimensional neutron kinetics models to the nuclear steam supply system thermal hydraulics. This is not new since some coupled codes have always been available. What is new is that the coupling can now be done with very sophisticated models, and the planning of this coupling and the requisite modeling can take advantage of the experience of many code developers in many countries. The U.S. Nuclear Regulatory Commission and other organizations are in the process of reviewing the state of the art and making recommendations for future development. This paper summarizes one contribution to this review process: a review of the multidimensional neutron kinetics modeling, and ancillary modeling, which would be used in conjunction with system thermal-hydraulic models to perform core dynamics calculations

  1. Development of GUI systems for the MIDAS code

    International Nuclear Information System (INIS)

    Kim, K.R.; Park, S.H.; Kim, D.H.

    2004-01-01

    MIDAS is being developed at KAERI, based on MELCOR, as an integrated severe accident analysis code with modifications of existing models and additions of new models. MIDAS was restructured to avoid the pointer-based variable referencing style of MELCOR and to improve memory efficiency using the dynamic allocation features of Fortran 90. This paper describes recent activities in developing GUI environments for the MIDAS code at KAERI. Up to now, four PC-based subsystems have been developed: IEDIT, IPLOT, SATS and HyperKAMG. IEDIT is an input management system that can read MELCOR input files and display their information in window panels. Users can modify each item in the panel, and the input file is updated according to those changes. IPLOT is a simple plotting system that can draw trend graphs of MIDAS plot variables. SATS is a severe accident training simulator that displays nuclear plant behavior graphically. Moreover, SATS provides several controllable pumps and valves that appear in severe accident sequences. With SATS and the online severe accident guidance HyperKAMG combined properly, severe accident mitigation scenarios can be presented graphically without any change to the MELCOR inputs. This GUI development is part of the severe accident management program package built around MIDAS. (author)

  2. Optimizing use of course management systems.

    Science.gov (United States)

    Wink, Diane M

    2011-01-01

    In this bimonthly series, the author examines how nurse educators can use Internet and Web-based computer technologies such as search, communication, and collaborative writing tools; social networking and social bookmarking sites; virtual worlds; and Web-based teaching and learning programs. The focus of this article is optimizing the use of a course management system.

  3. Optimization of energy storage in power systems

    International Nuclear Information System (INIS)

    Abou Chacra, F.

    2005-07-01

    For more than a century, electric transmission and distribution systems have been developed assuming that electric energy was almost impossible to store. Technical progress, new environmental requirements and electrical industry reforms now lead us to believe that storage in the future will be one of the main challenges in the development of power systems. Storage would have potential applications to deal with current technical constraints such as the system load, peak-load value, faults in parts of the system, control issues, etc. and economic ones such as upgrades deferral, renewable energy deployment, etc. In this study, energy storage is considered in two strategic locations in the French power system: HT/MT substations and wind farms. Possible applications and economic flags are formulated and appropriate optimization methods (genetic algorithms, Pareto) are used to maximize the project net present value. This optimization results in defining optimal capacities and control strategies for the energy storage system, taken from a set of storage technologies suitable for this problem, and in assessing the technical-economic impact of energy storage as a solution in power systems. (author)

  4. OPAL- the in-core fuel management code system for WWER reactors

    International Nuclear Information System (INIS)

    Krysl, V.; Mikolas, P.; Sustek, J.; Svarny, J.; Vlachovsky, K.

    2002-01-01

    Fuel management optimization is a complex problem, particularly for WWER reactors, which at present utilize burnable poisons (BP) to a great extent. In this paper, the concept and methodologies of a fuel management system for WWER-440 (NPP Dukovany) and WWER-1000 (NPP Temelin) under development at Skoda JS a.s. are first described, followed by some practical applications. The objective of this advanced system is to minimize fuel cost while preserving all safety constraints and margins. Future enhancements of the system will allow it to perform fuel management optimization in multi-cycle mode. The general objective functions of the system are the maximization of EOC reactivity, the maximization of discharge burnup, the minimization of fresh fuel inventory (or the minimization of feed enrichment), and the minimization of the BP inventory. There are also safety-related constraints, in which the minimization of power peaking plays a dominant role. The core part of the system addresses the major objective, maximizing the EOC Keff for a given fuel cycle length, and consists of four coupled calculation steps. The first is the calculation of a Loading Priority Scheme (LPS), which is used to rank the core positions in terms of assembly Kinf values. In the second step the Haling power distribution is calculated, and the core pattern is modified to meet core constraints using fuel shuffle and/or enrichment splitting algorithms and heuristic rules; in this step an optimization code combining a directive/evolutionary algorithm with expert rules is used. The optimal BP assignment is alternatively considered a separate third step of the procedure. In the fourth step the core is depleted normally, up to the 3D pin-wise level, using the BP distribution developed in step three, and compliance with all constraints is checked. One of the options of this optimization system is an expert-friendly interactive mode. (Authors)

  5. User's guide for the BNW-III optimization code for modular dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Faletti, D.W.

    1984-09-01

    This user's guide describes BNW-III, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of the Dry Cooling Enhancement Program sponsored by the US Department of Energy (DOE). The BNW-III code models a modular dry/wet cooling system for a nuclear or fossil fuel power plant. The purpose of this guide is to give the code user a brief description of what the BNW-III code is and how to use it. It describes the cooling system being modeled and the various models used. A detailed description of code input and code output is also included. The BNW-III code was developed to analyze a specific cooling system layout. However, there is a large degree of freedom in the type of cooling modules that can be selected and in the performance of those modules. The costs of the modules are input to the code, giving the user a great deal of flexibility.

  6. Hybrid Intelligent Systems in Manufacturing Optimization

    OpenAIRE

    Gelgele, Hirpa Lemu

    2002-01-01

    The main objective of the work reported in this thesis has been to study and develop methodologies that can bridge the communication gap between design and manufacturing systems. The emphasis has been on searching for possible means of modeling and optimizing processes in an integrated design and manufacturing system environment using the combined capabilities (hybrids) of computational intelligence tools, particularly artificial neural networks and genetic algorithms. Within...

  7. Optimal Quality Assurance Systems for Agricultural Outputs

    OpenAIRE

    Miguel Carriquiry; Bruce A. Babcock; Roxana Carbone

    2003-01-01

    New quality assurance systems (QASs) are being put in place to facilitate the flow of information about agricultural and food products. But what constitutes a proper mix of public and private efforts in setting up QASs is an unsettled question. A better understanding of private sector incentives for setting up such systems will help clarify what role the public sector might have in establishing standards. We contribute to this understanding by modeling the optimal degree of "stringency" or as...

  8. MIND. Optimization method for industrial energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Nilsson, Katarina.

    1990-04-01

    It is of great importance to raise awareness of energy demand and energy conservation issues in industrial applications, as the potential for savings is in many cases very good. The MIND optimization method is a tool for life cycle cost minimization of a flexible range of industrial energy systems. It can be used to analyze energy systems in response to changes within the systems and changes of the boundary conditions, and for the synthesis of industrial energy systems. The aim is to find an optimal structure in the energy system where several alternative process routes and kinds of energy are available. Equipment alternatives may concern choices of reconditioning, replacement, new technology, time of investment and size considerations. Energy can be supplied to the industrial energy system as electricity, steam and various kinds of fuel. Energy and material flows are represented in the optimization, as are non-linearities in energy demand functions and investment cost functions. Boundary conditions and process variations can be represented with a time division where the length of each time step and the number of time steps can be chosen. Two applications are presented to show the flexibility of the MIND method: heat treating processes in the engineering industry and milk processing in a dairy. (36 refs.).
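
    A deliberately stripped-down sketch of the structure-choice idea (a single time step, linear running costs, and no investment decisions or non-linearities, all of which the real MIND formulation does handle; every coefficient is invented):

    ```python
    from scipy.optimize import linprog

    # Decision variables: x = [electric_heat_MWh, fuel_fired_heat_MWh]
    cost   = [60.0, 35.0]                    # running cost per MWh of useful heat
    bounds = [(0.0, 400.0), (0.0, 700.0)]    # capacity of each supply route
    A_eq   = [[1.0, 1.0]]                    # total heat demand must be met
    b_eq   = [900.0]                         # MWh of heat required

    res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    print("optimal mix [MWh]:", res.x, " running cost:", res.fun)
    ```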

  9. Tank Waste Remediation System optimized processing strategy

    International Nuclear Information System (INIS)

    Slaathaug, E.J.; Boldt, A.L.; Boomer, K.D.; Galbraith, J.D.; Leach, C.E.; Waldo, T.L.

    1996-03-01

    This report provides an alternative strategy evolved from the current Hanford Site Tank Waste Remediation System (TWRS) programmatic baseline for accomplishing the treatment and disposal of the Hanford Site tank wastes. This optimized processing strategy performs the major elements of the TWRS Program, but modifies the deployment of selected treatment technologies to reduce the program cost. The present program for development of waste retrieval, pretreatment, and vitrification technologies continues, but the optimized processing strategy reuses a single facility to accomplish the separations/low-activity waste (LAW) vitrification and the high-level waste (HLW) vitrification processes sequentially, thereby eliminating the need for a separate HLW vitrification facility

  10. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Full Text Available Rate-distortion optimization (RDO) plays an essential role in substantially enhancing coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, its underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available, and it takes no consideration of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
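
    To make the role of the multiplier concrete, here is a generic rate-distortion mode decision of the form J = D + lambda * R; the modes, rates and distortions are invented numbers, and the paper's actual contribution (deriving lambda from subband content features) is not reproduced:

    ```python
    # Candidate coding modes: name -> (distortion in MSE, rate in bits). Invented values.
    candidate_modes = {
        "skip":  (42.0,   8),
        "inter": (12.5, 160),
        "intra": (9.0,  420),
    }

    def best_mode(lam):
        """Pick the mode minimizing the Lagrangian cost J = D + lam * R."""
        return min(candidate_modes,
                   key=lambda m: candidate_modes[m][0] + lam * candidate_modes[m][1])

    for lam in (0.01, 0.1, 1.0):
        print("lambda = %-5g -> chosen mode: %s" % (lam, best_mode(lam)))
    ```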

  11. Implications of Sepedi/English code switching for ASR systems

    CSIR Research Space (South Africa)

    Modipa, TI

    2013-12-01

    Full Text Available Code switching (the process of switching from one language to another during a conversation) is a common phenomenon in multilingual environments. Where a minority and dominant language coincide, code switching from the minority language...

  12. Linear systems optimal and robust control

    CERN Document Server

    Sinha, Alok

    2007-01-01

    Introduction Overview Contents of the Book State Space Description of a Linear System Transfer Function of a Single Input/Single Output (SISO) System State Space Realizations of a SISO System SISO Transfer Function from a State Space Realization Solution of State Space Equations Observability and Controllability of a SISO System Some Important Similarity Transformations Simultaneous Controllability and Observability Multiinput/Multioutput (MIMO) Systems State Space Realizations of a Transfer Function Matrix Controllability and Observability of a MIMO System Matrix-Fraction Description (MFD) MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization Controller Form Realization from a Right MFD Poles and Zeros of a MIMO Transfer Function Matrix Stability Analysis State Feedback Control and Optimization State Variable Feedback for a Single Input System Computation of State Feedback Gain Matrix for a Multiinput System State Feedback Gain Matrix for a Multi...

  13. The neural code for auditory space depends on sound frequency and head size in an optimal manner.

    Directory of Open Access Journals (Sweden)

    Nicol S Harper

    Full Text Available A major cue to the location of a sound source is the interaural time difference (ITD), the difference in sound arrival time at the two ears. The neural representation of this auditory cue is unresolved. The classic model of ITD coding, dominant for a half-century, posits that the distribution of best ITDs (the ITD evoking a neuron's maximal response) is unimodal and largely within the range of ITDs permitted by head size. This is often interpreted as a place code for source location. An alternative model, based on neurophysiology in small mammals, posits a bimodal distribution of best ITDs with exquisite sensitivity to ITDs generated by means of relative firing rates between the distributions. Recently, an optimal-coding model was proposed, unifying the disparate features of these two models under the framework of efficient coding by neural populations. The optimal-coding model predicts that distributions of best ITDs depend on head size and sound frequency: for high frequencies and large heads it resembles the classic model, for low frequencies and small head sizes it resembles the bimodal model. The optimal-coding model makes key, yet unobserved, predictions: for many species, including humans, both forms of neural representation are employed, depending on sound frequency. Furthermore, novel representations are predicted for intermediate frequencies. Here, we examine these predictions in neurophysiological data from five mammalian species: macaque, guinea pig, cat, gerbil and kangaroo rat. We present the first evidence supporting these untested predictions, and demonstrate that different representations appear to be employed at different sound frequencies in the same species.

  14. Optimal sensor configuration for complex systems

    DEFF Research Database (Denmark)

    Sadegh, Payman; Spall, J. C.

    1998-01-01

    The paper considers the problem of sensor configuration for complex systems with the aim of maximizing the useful information about certain quantities of interest. Our approach involves: 1) definition of an appropriate optimality criterion or performance measure; and 2) description of an efficient and practical algorithm for achieving the optimality objective. The criterion for optimal sensor configuration is based on maximizing the overall sensor response while minimizing the correlation among the sensor outputs, so as to minimize the redundant information being provided by the multiple sensors. The procedure for sensor configuration is based on the simultaneous perturbation stochastic approximation (SPSA) algorithm. SPSA avoids the need for detailed modeling of the sensor response by simply relying on the observed responses obtained by limited experimentation with test sensor configurations. We ...
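
    A minimal SPSA sketch in the same spirit: the two-sided random-perturbation gradient estimate is the standard algorithm, but the quadratic objective below is a stand-in, not the sensor-response criterion from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def loss(theta):
        # Placeholder objective: squared distance to an "unknown" optimum.
        return np.sum((theta - np.array([1.0, -2.0, 0.5])) ** 2)

    theta = np.zeros(3)
    for k in range(1, 501):
        a_k = 0.2 / k ** 0.602                               # standard SPSA gain sequences
        c_k = 0.1 / k ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Bernoulli +/-1 perturbation
        # Gradient estimate from only two loss evaluations, regardless of dimension.
        g_hat = (loss(theta + c_k * delta) - loss(theta - c_k * delta)) / (2.0 * c_k * delta)
        theta = theta - a_k * g_hat

    print("SPSA estimate:", np.round(theta, 3))
    ```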

  15. Overview of Particle and Heavy Ion Transport Code System PHITS

    Science.gov (United States)

    Sato, Tatsuhiko; Niita, Koji; Matsuda, Norihiro; Hashimoto, Shintaro; Iwamoto, Yosuke; Furuta, Takuya; Noda, Shusaku; Ogawa, Tatsuhiko; Iwase, Hiroshi; Nakashima, Hiroshi; Fukahori, Tokio; Okumura, Keisuke; Kai, Tetsuya; Chiba, Satoshi; Sihver, Lembit

    2014-06-01

    A general purpose Monte Carlo Particle and Heavy Ion Transport code System, PHITS, is being developed through the collaboration of several institutes in Japan and Europe. The Japan Atomic Energy Agency is responsible for managing the entire project. PHITS can deal with the transport of nearly all particles, including neutrons, protons, heavy ions, photons, and electrons, over wide energy ranges using various nuclear reaction models and data libraries. It is written in Fortran language and can be executed on almost all computers. All components of PHITS such as its source, executable and data-library files are assembled in one package and then distributed to many countries via the Research organization for Information Science and Technology, the Data Bank of the Organization for Economic Co-operation and Development's Nuclear Energy Agency, and the Radiation Safety Information Computational Center. More than 1,000 researchers have been registered as PHITS users, and they apply the code to various research and development fields such as nuclear technology, accelerator design, medical physics, and cosmic-ray research. This paper briefly summarizes the physics models implemented in PHITS, and introduces some important functions useful for specific applications, such as an event generator mode and beam transport functions.

  16. SURE: a system of computer codes for performing sensitivity/uncertainty analyses with the RELAP code. [PWR

    Energy Technology Data Exchange (ETDEWEB)

    Bjerke, M.A.

    1983-02-01

    A package of computer codes has been developed to perform a nonlinear uncertainty analysis on transient thermal-hydraulic systems which are modeled with the RELAP computer code. The package has been applied to uncertainty analyses of experiments in the PWR-BDHT Separate Effects Program at Oak Ridge National Laboratory. The use of FORTRAN programs running interactively on the PDP-10 computer has made the system very easy to use and provided great flexibility in the choice of processing paths. Several experiments simulating a loss-of-coolant accident in a nuclear reactor have been successfully analyzed. It has been shown that the system can be automated easily to further simplify its use and that the conversion of the entire system to a base code other than RELAP is possible.

  17. Discrete optimization in architecture extremely modular systems

    CERN Document Server

    Zawidzki, Machi

    2017-01-01

    This book is comprised of two parts, both of which explore modular systems: Pipe-Z (PZ) and Truss-Z (TZ), respectively. It presents several methods of creating PZ and TZ structures subjected to discrete optimization. The algorithms presented employ graph-theoretic and heuristic methods. The underlying idea of both systems is to create free-form structures using the minimal number of types of modular elements. PZ is more conceptual, as it forms single-branch mathematical knots with a single type of module. Conversely, TZ is a skeletal system for creating free-form pedestrian ramps and ramp networks among any number of terminals in space. In physical space, TZ uses two types of modules that are mirror reflections of each other. The optimization criteria discussed include: the minimal number of units, maximal adherence to the given guide paths, etc.

  18. An Architectural Style for Optimizing System Qualities in Adaptive Embedded Systems using Multi-Objective Optimization

    NARCIS (Netherlands)

    de Roo, Arjan; Sözer, Hasan; Aksit, Mehmet

    Customers of today's complex embedded systems demand the optimization of multiple system qualities under varying operational conditions. To be able to influence the system qualities, the system must have parameters that can be adapted. Constraints may be defined on the value of these parameters.

  19. Integrated design by optimization of electrical energy systems

    CERN Document Server

    Roboam, Xavier

    2013-01-01

    This book proposes systemic design methodologies applied to electrical energy systems, in particular integrated optimal design with modeling and optimization methods and tools. It is made up of six chapters dedicated to integrated optimal design. First, the signal processing of mission profiles and system environment variables is discussed. Then, optimization-oriented analytical models, methods and tools (design frameworks) are proposed. A "multi-level optimization" smartly coupling several optimization processes is the subject of one chapter. Finally, a technico-economic optimization...

  20. User's manual for the BNW-I optimization code for dry-cooled power plants. Volume III. [PLCIRI

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Daniel, D.J.; De Mier, W.V.; Faletti, D.W.; Wiles, L.E.

    1977-01-01

    This appendix to User's Manual for the BNW-1 Optimization Code for Dry-Cooled Power Plants provides a listing of the BNW-I optimization code for determining, for a particular size power plant, the optimum dry cooling tower design using a plastic tube cooling surface and circular tower arrangement of the tube bundles. (LCL)

  1. Optimal allocation of resources in systems

    International Nuclear Information System (INIS)

    Derman, C.; Lieberman, G.J.; Ross, S.M.

    1975-01-01

    In the design of a new system, or the maintenance of an old system, allocation of resources is of prime consideration. In allocating resources it is often beneficial to develop a solution that yields an optimal value of the system measure of desirability. In the context of the problems considered in this paper the resources to be allocated are components already produced (assembly problems) and money (allocation in the construction or repair of systems). The measure of desirability for system assembly will usually be maximizing the expected number of systems that perform satisfactorily and the measure in the allocation context will be maximizing the system reliability. Results are presented for these two types of general problems in both a sequential (when appropriate) and non-sequential context

  2. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    Directory of Open Access Journals (Sweden)

    Azura M. S. A.

    2017-01-01

    Full Text Available This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW (APD) system showed a minimum effective received power (Psr) of -71 dBm at the receiver side, whereas the 2-D MDW (PIN) system received only -61 dBm. The results show that 2-D MDW (APD) has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also show that the MDW code has the capability to suppress PIIN and MAI.

  3. Simulation realization of 2-D wavelength/time system utilizing MDW code for OCDMA system

    Science.gov (United States)

    Azura, M. S. A.; Rashidi, C. B. M.; Aljunid, S. A.; Endut, R.; Ali, N.

    2017-11-01

    This paper presents a realization of a Wavelength/Time (W/T) Two-Dimensional Modified Double Weight (2-D MDW) code for an Optical Code Division Multiple Access (OCDMA) system based on the Spectral Amplitude Coding (SAC) approach. The MDW code has the capability to suppress Phase-Induced Intensity Noise (PIIN) and to minimize Multiple Access Interference (MAI) noise. At the permissible BER of 10^-9, the 2-D MDW (APD) system showed a minimum effective received power (Psr) of -71 dBm at the receiver side, whereas the 2-D MDW (PIN) system received only -61 dBm. The results show that 2-D MDW (APD) has better performance, achieving the same BER over a longer optical fiber length and with less received power (Psr). The BER results also show that the MDW code has the capability to suppress PIIN and MAI.

  4. Software coding for reliable data communication in a reactor safety system

    International Nuclear Information System (INIS)

    Maghsoodi, R.

    1978-01-01

    A software coding method is proposed to improve the communication reliability of a microprocessor-based fast-reactor safety system. This method, which replaces the conventional coding circuitry, applies a program to encode the data communicated between the processors via their data memories. The system requirements are studied and suitable codes are suggested. The problems associated with hardware coders and the advantages of software coding methods are discussed. The product code, which provides a faster coding time than the cyclic code, is chosen as the final code. The improvement in communication reliability is then derived for a processor and its data memory. The result is used to calculate the reliability improvement of the processing channel as the basic unit of the safety system. (author)
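
    For illustration only, a textbook row/column-parity product code that detects and corrects a single bit error in a transferred block; this is a generic scheme, not the specific product code designed in the paper:

    ```python
    import numpy as np

    def parity_bits(block):
        """Even-parity bit for each row and each column of a bit matrix."""
        return block.sum(axis=1) % 2, block.sum(axis=0) % 2

    def correct_single_error(block, row_par, col_par):
        """Locate and flip a single corrupted bit using the parity checks."""
        bad_rows = np.where(block.sum(axis=1) % 2 != row_par)[0]
        bad_cols = np.where(block.sum(axis=0) % 2 != col_par)[0]
        fixed = block.copy()
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            fixed[bad_rows[0], bad_cols[0]] ^= 1
        return fixed

    data = np.array([[1, 0, 1, 1],
                     [0, 1, 1, 0],
                     [1, 1, 0, 0]], dtype=np.uint8)
    row_par, col_par = parity_bits(data)      # sent along with the data block

    received = data.copy()
    received[1, 2] ^= 1                       # single-bit error during transfer
    recovered = correct_single_error(received, row_par, col_par)
    print("error corrected:", np.array_equal(recovered, data))
    ```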

  5. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)

    2011-07-15

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4 only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant, as well as the fuel behavior, are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)

  6. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich; Schuetze, Jochen; Frank, Thomas; Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo

    2011-01-01

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4 only the neutron kinetics module of DYN3D is used. Fluid dynamics and related transport phenomena in the reactor's coolant, as well as the fuel behavior, are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)

  7. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey

    OpenAIRE

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospi...

  8. Optimally Controlled Flexible Fuel Powertrain System

    Energy Technology Data Exchange (ETDEWEB)

    Hakan Yilmaz; Mark Christie; Anna Stefanopoulou

    2010-12-31

    The primary objective of this project was to develop a true Flex Fuel Vehicle capable of running on any blend of ethanol from 0 to 85% with reduced penalty in usable vehicle range. A research and development program, targeting 10% improvement in fuel economy using a direct injection (DI) turbocharged spark ignition engine was conducted. In this project a gasoline-optimized high-technology engine was considered and the hardware and configuration modifications were defined for the engine, fueling system, and air path. Combined with a novel engine control strategy, control software, and calibration this resulted in a highly efficient and clean FFV concept. It was also intended to develop robust detection schemes of the ethanol content in the fuel integrated with adaptive control algorithms for optimized turbocharged direct injection engine combustion. The approach relies heavily on software-based adaptation and optimization striving for minimal modifications to the gasoline-optimized engine hardware system. Our ultimate objective was to develop a compact control methodology that takes advantage of any ethanol-based fuel mixture and not compromise the engine performance under gasoline operation.

  9. Maximal imaginary eigenvalues in optimal systems

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1991-07-01

    Full Text Available In this note we present equations that uniquely determine the maximum possible imaginary value of the closed loop eigenvalues in an LQ-optimal system, irrespective of how the state weight matrix is chosen, provided a real symmetric solution of the algebraic Riccati equation exists. In addition, the corresponding state weight matrix and the solution to the algebraic Riccati equation are derived for a class of linear systems. A fundamental lemma for the existence of a real symmetric solution to the algebraic Riccati equation is derived for this class of linear systems.
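
    As a small numerical companion (a generic LQR computation for a made-up second-order plant, not the analytical equations derived in the note), the imaginary parts of the closed-loop eigenvalues can be inspected with SciPy's Riccati solver:

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
    B = np.array([[0.0], [1.0]])
    Q = np.diag([10.0, 1.0])                   # state weight matrix
    R = np.array([[1.0]])                      # input weight

    P = solve_continuous_are(A, B, Q, R)       # real symmetric ARE solution
    K = np.linalg.solve(R, B.T @ P)            # optimal state-feedback gain
    eig = np.linalg.eigvals(A - B @ K)         # closed-loop eigenvalues

    print("closed-loop eigenvalues:", np.round(eig, 3))
    print("max |imaginary part|:    %.3f" % np.abs(eig.imag).max())
    ```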

  10. Verification of the CONPAS (CONtainment Performance Analysis System) code package

    International Nuclear Information System (INIS)

    Kim, See Darl; Ahn, Kwang Il; Song, Yong Man; Choi, Young; Park, Soo Yong; Kim, Dong Ha; Jin, Young Ho.

    1997-09-01

    CONPAS is a computer code package that integrates the numerical, graphical, and results-oriented aspects of Level 2 probabilistic safety assessment (PSA) for nuclear power plants automatically under a PC Windows environment. For the integrated analysis of Level 2 PSA, the code utilizes four distinct, but closely related modules: (1) ET Editor, (2) Computer, (3) Text Editor, and (4) Mechanistic Code Plotter. Compared with other existing computer codes for Level 2 PSA, the CONPAS code provides several advanced features: computational aspects including systematic uncertainty analysis, importance analysis, sensitivity analysis and data interpretation, and reporting aspects including tabulation and graphics, as well as a user-friendly interface. The computational performance of CONPAS has been verified through a Level 2 PSA of a reference plant. The results of the CONPAS code were compared with an existing Level 2 PSA code (NUCAP+), and the comparison proves that CONPAS is appropriate for Level 2 PSA. (author). 9 refs., 8 tabs., 14 figs

  11. The APR1400 Core Design by Using APA Code System

    International Nuclear Information System (INIS)

    Choi, Yu Sun; Koh, Byung Marn

    2008-01-01

    The nuclear design for APR1400 has been performed to prepare the core model for automatic load-follow operation simulation. The APA (ALPHA/PHOENIX-P/ANC) code system is the tool used for the multi-cycle depletion calculations for APR1400; the versions of ALPHA, PHOENIX-P and ANC are 8.9.3, 8.6.1 and 8.10.5, respectively. The first and equilibrium core depletion calculations for APR1400 have been performed to assure the target cycle length and confirm the safety parameters. The parameters satisfy the limits of the nuclear design criteria. These APR1400 core models will serve as the basis of the design parameters for the APR1400 simulator

  12. HPLWR equilibrium core design with the KARATE code system

    Energy Technology Data Exchange (ETDEWEB)

    Maraczy, Cs.; Hegyi, Gy.; Hordosy, G.; Temesvari, E. [KFKI Atomic Energy Research Inst., Hungarian Academy of Sciences, Budapest (Hungary)

    2011-07-01

    The High Performance Light Water Reactor (HPLWR) is the European version of the various supercritical water cooled reactor proposals. The paper presents the activity of KFKI-AEKI in the field of neutronic core design within the framework of the 'HPLWR Phase 2' FP-6 and the Hungarian 'NUKENERG' projects. As the coolant density along the axial direction shows remarkable change, coupled neutronic- thermohydraulic calculations are essential which take into account the heating of moderator in the special water rods of the assemblies. A parametrized diffusion cross section library was prepared for the HPLWR assembly with the MULTICELL neutronic transport code. The parametrized cross sections are used by the KARATE program system, which was verified for supercritical conditions by comparative Monte Carlo calculations. To design the HPLWR equilibrium core preliminary loadings were assessed, which contain insulated assemblies with Gd burnable absorbers. The fuel assemblies have radial and axial enrichment zoning to reduce hot spots. (author)

  13. Control code for laboratory adaptive optics teaching system

    Science.gov (United States)

    Jin, Moonseob; Luder, Ryan; Sanchez, Lucas; Hart, Michael

    2017-09-01

    By sensing and compensating wavefront aberration, adaptive optics (AO) systems have proven themselves crucial in large astronomical telescopes, retinal imaging, and holographic coherent imaging. Commercial AO systems for laboratory use are now available in the market. One such is the ThorLabs AO kit built around a Boston Micromachines deformable mirror. However, there are limitations in applying these systems to research and pedagogical projects since the software is written with limited flexibility. In this paper, we describe a MATLAB-based software suite to interface with the ThorLabs AO kit by using the MATLAB Engine API and Visual Studio. The software is designed to offer complete access to the wavefront sensor data, through the various levels of processing, to the command signals to the deformable mirror and fast steering mirror. In this way, through a MATLAB GUI, an operator can experiment with every aspect of the AO system's functioning. This is particularly valuable for tests of new control algorithms as well as to support student engagement in an academic environment. We plan to make the code freely available to the community.

  14. Performance analysis of multiple interference suppression over asynchronous/synchronous optical code-division multiple-access system based on complementary/prime/shifted coding scheme

    Science.gov (United States)

    Nieh, Ta-Chun; Yang, Chao-Chin; Huang, Jen-Fa

    2011-08-01

    A complete complementary/prime/shifted prime (CPS) code family for the optical code-division multiple-access (OCDMA) system is proposed. Based on the ability of complete complementary (CC) code, the multiple-access interference (MAI) can be suppressed and eliminated via spectral amplitude coding (SAC) OCDMA system under asynchronous/synchronous transmission. By utilizing the shifted prime (SP) code in the SAC scheme, the hardware implementation of encoder/decoder can be simplified with a reduced number of optical components, such as arrayed waveguide grating (AWG) and fiber Bragg grating (FBG). This system has a superior performance as compared to previous bipolar-bipolar coding OCDMA systems.

  15. Optimal Control of Switching Linear Systems

    Directory of Open Access Journals (Sweden)

    Ali Benmerzouga

    2004-06-01

    Full Text Available A solution to the control of switching linear systems with input constraints was given in Benmerzouga (1997) for both the conventional enumeration approach and the new approach. The solution given there turned out to be not unique. The main objective of this work is to determine the optimal control sequences {U_i(k), i = 1, ..., M; k = 0, 1, ..., N-1} which transfer the system from a given initial state X_0 to a specific target state X_T (or as close to it as possible), by using the same discrete-time solution obtained in Benmerzouga (1997) and minimizing a running cost-to-go function. By using the dynamic programming technique, the optimal solution is found for both approaches given in Benmerzouga (1997). The computational complexity of the modified algorithm is also given.
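
    To give the flavour of the enumeration approach only (the continuous control inputs, the cost-to-go function and the paper's dynamic-programming machinery are omitted; the matrices, horizon and target state are made up):

    ```python
    import itertools
    import numpy as np

    A = [np.array([[0.9, 0.2], [0.0, 0.8]]),
         np.array([[1.0, 0.0], [-0.3, 0.9]])]   # M = 2 candidate modes
    x0 = np.array([1.0, -1.0])
    xT = np.array([0.2, 0.1])
    N  = 5                                      # horizon length

    # Enumerate every switching sequence and keep the one ending closest to xT.
    best_seq, best_err = None, np.inf
    for seq in itertools.product(range(len(A)), repeat=N):
        x = x0.copy()
        for mode in seq:
            x = A[mode] @ x
        err = np.linalg.norm(x - xT)
        if err < best_err:
            best_seq, best_err = seq, err

    print("best switching sequence:", best_seq, " terminal error: %.3f" % best_err)
    ```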

  16. Development of Attendance Database System Using Bar-coded Student Card

    Directory of Open Access Journals (Sweden)

    Abdul Fadlil

    2008-04-01

    Full Text Available The calculation of the level of attendance is very important, because one indicator of a person's credibility can be seen from their level of attendance. For example, at a university, data about the level of attendance of a student in lectures is very important as one of the components in the assessment. A manual attendance system is considered less effective. This research presents the design of an attendance system using bar codes (barcodes) as the input data representing attendance. The attendance system is supported by three main components: a bar code printed on the student card (KTM), a CCD barcode scanner (series CD-108E) and a computer. Management of the attendance list using this system allows the functions of the KTM to be optimized. The attendance system has been tested with several KTM cards over a variety of distances and positions of the barcode relative to the barcode scanner. The test results show that the ideal position for reading a barcode is with the scanner 2 cm from the object at an angle of 90 degrees; at this position the accuracy reaches 100%.

  17. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    Reducible codes for the rank metric were introduced for cryptographic purposes. They have fast encoding and decoding algorithms, include maximum rank distance (MRD) codes, and can correct many rank errors beyond half of their minimum rank distance, which makes them suitable for error correction i...

  18. Optimized trajectory planning for Cybernetic Transportation Systems

    OpenAIRE

    Garrido, Fernando; Gonzalez Bautista, David; Milanés, Vicente; Pérez, Joshué; Nashashibi, Fawzi

    2016-01-01

    International audience; This paper describes the development of an optimized path planning algorithm for automated vehicles in urban environments. This path planning is developed on the basis of the urban environments where Cybernetic Transportation Systems (CTS) will operate. Our approach is mainly driven by the vehicle's kinematics and physical road constraints. Based on these assumptions, the computational time for path planning can be significantly reduced by creating an off-line database that alre...

  19. OPTIMAL PORTFOLIOS IN DEFINED CONTRIBUTION PENSION SYSTEMS

    OpenAIRE

    EDUARDO WALKER

    2006-01-01

    We study optimal portfolios for defined contribution (possibly mandatory) pension systems, which maximize expected pensions subject to a risk level. By explicitly considering the present value of future individual contributions and changing the risk-return numeraire to future pension units we obtain interesting insights, consistent with the literature, in a simpler context. Results naturally imply that the local indexed (inflation-adjusted) currency is the benchmark and that the investment ho...

  20. Optimization of an Electromagnetics Code with Multicore Wavefront Diamond Blocking and Multi-dimensional Intra-Tile Parallelization

    KAUST Repository

    Malas, Tareq M.

    2016-07-21

    Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU.

  1. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    International Nuclear Information System (INIS)

    Ratnam, Challa; Rao, Vadlamudi Lakshmana; Goud, Sivagouni Lachaa

    2006-01-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper

  2. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    Science.gov (United States)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  3. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    Energy Technology Data Exchange (ETDEWEB)

    Ratnam, Challa [Physics Department, New Science College, Ameerpet, Hyderabad (India); Rao, Vadlamudi Lakshmana [Physics Department, New Science College, Ameerpet, Hyderabad (India); Goud, Sivagouni Lachaa [Department of Physics, Osmania University, Hyderabad (India)

    2006-10-07

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  4. A study on the nuclear computer codes installation and management system

    International Nuclear Information System (INIS)

    Kim, Yeon Seung; Huh, Young Hwan; Kim, Hee Kyung; Kang, Byung Heon; Kim, Ko Ryeo; Suh, Soong Hyok; Choi, Young Gil; Lee, Jong Bok

    1990-12-01

    Since 1987 a number of technology transfers related to nuclear power plants have been carried out from C-E for the YGN 3 and 4 construction. Among them, the installation and management of the computer codes for the YGN 3 and 4 fuel and nuclear steam supply system was one of the most important projects. The main objectives of this project are to establish the nuclear computer code management system, to develop QA procedures for nuclear codes, to secure nuclear code reliability, and to extend technical applicability, including user-oriented utility programs for the nuclear codes. The work performed on the project this year produced 215 transmittal packages for nuclear code installation, including backup magnetic tapes and microfiche for software quality assurance. Lastly, for easy reference to the nuclear code information, we present a list of code names and information on the codes which were introduced from C-E. (Author)

  5. Performance Analysis of Wavelength Multiplexed Sac Ocdma Codes in Beat Noise Mitigation in Sac Ocdma Systems

    Science.gov (United States)

    Alhassan, A. M.; Badruddin, N.; Saad, N. M.; Aljunid, S. A.

    2013-07-01

    In this paper we investigate the use of wavelength multiplexed spectral amplitude coding (WM SAC) codes for beat noise mitigation in coherent source SAC OCDMA systems. A WM SAC code is a low weight SAC code, where the whole code structure is repeated diagonally (once or more) in the wavelength domain to achieve the same cardinality as a higher weight SAC code. Results show that for highly populated networks, the WM SAC codes provide better performance than SAC codes. However, for a small number of active users the situation is reversed. Apart from their promising improvement in performance, these codes are more flexible and impose less complexity on the system design than their SAC counterparts.

  6. An Optimized Version of a New Absolute Linear Encoder Dedicated to Intelligent Transportation Systems

    DEFF Research Database (Denmark)

    Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika

    2009-01-01

    This paper proposes an optimized version of a new absolute linear encoder (ALE). The innovative ALE can be used for long-distance applications (more than 150 m) and the accuracy of the measurements is 0.5 mm. To obtain this performance the ALE uses a new coding algorithm. This new coding algorithm is the core of the ALE and it allows an economical device solution. The optimized version is able to measure double the distance (more than 300 m) with better accuracy (0.25 mm). This performance is obtained using the same device, the same number of sensors and the same ALE structure. The only changes were made in the coding algorithm, in the ruler topology and in the dedicated software. The optimized ALE is a robust device able to work in an industrial environment with a high level of vibrations. For this reason it is ideal for transport system control in automated manufacturing processes, intelligent...

  7. Novel BCH Code Design for Mitigation of Phase Noise Induced Cycle Slips in DQPSK Systems

    DEFF Research Database (Denmark)

    Leong, M. Y.; Larsen, Knud J.; Jacobsen, G.

    2014-01-01

    We show that by proper code design, phase noise induced cycle slips causing an error floor can be mitigated for 28 Gbaud DQPSK systems. The performance of BCH codes is investigated in terms of the required overhead.

  8. Flow analysis and port optimization of geRotor pump using commercial CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byung Jo; Seong, Seung Hak; Yoon, Soon Hyun [Pusan National Univ., Pusan (Korea, Republic of)

    2005-07-01

    The GeRotor pump is widely used in the automotive industry for fuel lift, injection, and engine oil lubrication, and also in transmission systems. A CFD study of the pump, which is characterized by transient flow with moving rotor boundaries, has been performed to obtain the optimum shape of the inlet/outlet port of the pump. Various shapes of the port have been tested to investigate how they affect flow rates and fluctuations. Based on the parametric study, an optimum shape has been determined for maximum flow rate and minimum fluctuations. The result has been confirmed by experiments. For the optimization, the Taguchi method has been adopted. The groove shape has been found to be the most important factor among the several selected parameters related to flow rate and fluctuations.

  9. Variable weight Khazani-Syed code using hybrid fixed-dynamic technique for optical code division multiple access system

    Science.gov (United States)

    Anas, Siti Barirah Ahmad; Seyedzadeh, Saleh; Mokhtar, Makhfudzah; Sahbudin, Ratna Kalos Zakiah

    2016-10-01

    Future Internet consists of a wide spectrum of applications with different bit rates and quality of service (QoS) requirements. Prioritizing the services is essential to ensure that the delivery of information is at its best. Existing technologies have demonstrated how service differentiation techniques can be implemented in optical networks using data link and network layer operations. However, a physical layer approach can further improve system performance at a prescribed received signal quality by applying control at the bit level. This paper proposes a coding algorithm to support optical domain service differentiation using spectral amplitude coding techniques within an optical code division multiple access (OCDMA) scenario. A particular user or service has a varying weight applied to obtain the desired signal quality. The properties of the new code are compared with other OCDMA codes proposed for service differentiation. In addition, a mathematical model is developed for performance evaluation of the proposed code using two different detection techniques, namely direct decoding and complementary subtraction.

  10. Moment Tensor code for the Antelope Environmental Monitoring System

    Science.gov (United States)

    Reyes, J.; Newman, R.; Vernon, F.; van den Hazel, G.

    2012-04-01

    The time domain seismic moment tensor inversion software package written by Dreger (2003) and updated by Minson & Dreger (2008) has been rewritten for inclusion into the open-source contributed code repository for the Boulder Real Time Technology (BRTT) Antelope Environmental Monitoring System. The new code-base was written natively in the Python language and utilizes the Python interface to Antelope (Lindquist et al., 2008) for data access, Scientific Tools for Python library (Eric Jones et al., 2001) for computation and analysis, and the ObsPy library (Beyreuther et al., 2010) for graphical representation. The new code archives all data products into a Center for Seismic Studies (CSS) 3.0 schema table for easy access and distribution of solutions. Stability of the analysis, verification of results and correlation of solutions with similar methods are discussed in this presentation. Analysis is focused on regional earthquakes recorded by Earthscope's USArray network and event parameters are taken from real time and post-event processed data analysis at the Array Network Facility (ANF). A calibrated velocity model representative of the south-west continental United States is used for the analysis. Beyreuther, M., Barsch, R., Krischer, L., Megies, T., Behr, Y. and Wassermann, J. (2010) ObsPy: A Python Toolbox for Seismology, Seismic Research Letters, 81(3), 530-533. Dreger, D. (2003) TDMT_INV: Time Domain Seismic Moment Tensor INVersion, International Handbook of Earthquake and Engineering Seismology, Volume 81B, p 1627. Eric Jones, Travis Oliphant, Pearu Peterson (2001) SciPy: Open Source Scientific Tools for Python, "http://www.scipy.org/" Lindquist, K.G., Clemesha, A., Newman, R.L. and Vernon, F.L. (2008) The Python Interface to Antelope and Applications. Eos Trans. AGU 89(53), Fall Meet. Suppl., Abstract G43A-0671 Minson, S. & Dreger, D. (2008) Stable inversions for complete moment tensors. Geophys. J. Int., 174, 585-592 Saikia, C. (1994) Modified frequency

  11. Distributed Robust Optimization in Networked System.

    Science.gov (United States)

    Wang, Shengnan; Li, Chunguang

    2016-10-11

    In this paper, we consider a distributed robust optimization (DRO) problem, where multiple agents in a networked system cooperatively minimize a global convex objective function with respect to a global variable under global constraints. The objective function can be represented by a sum of local objective functions. The global constraints contain some uncertain parameters which are partially known, and can be characterized by inequality constraints. After problem transformation, we adopt the Lagrangian primal-dual method to solve this problem. We prove that the primal and dual optimal solutions of the problem are restricted to specific sets, and we give a method to construct these sets. Then, we propose a DRO algorithm to find the primal-dual optimal solutions of the Lagrangian function, which consists of a subgradient step, a projection step, and a diffusion step; in the projection step of the algorithm, the optimized variables are projected onto the specific sets to guarantee the boundedness of the subgradients. Convergence analysis and numerical simulations verifying the performance of the proposed algorithm are then provided. Further, for the nonconvex DRO problem, the corresponding approach and algorithm framework are also provided.
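
    As a rough illustration of the primal-dual structure described above (subgradient step, projection step, diffusion step), the following sketch runs three agents on a toy shared-constraint problem; the mixing matrix, step size, projection bounds, and local objectives are all assumed for illustration and are not taken from the paper.

      # Illustrative sketch (not the authors' code): three agents cooperatively
      # minimize f(x) = sum_i (x - a_i)^2 subject to an assumed shared constraint
      # g(x) = x - 1 <= 0, using a primal-dual subgradient step, a projection
      # step onto assumed bounded sets, and a diffusion (consensus) step.
      import numpy as np

      a = np.array([0.0, 2.0, 4.0])            # local data (assumed)
      W = np.full((3, 3), 1.0 / 3.0)           # doubly stochastic mixing matrix (assumed)
      x = np.zeros(3)                          # local copies of the global variable
      lam = np.zeros(3)                        # local copies of the dual variable
      alpha, X_RADIUS, LAM_MAX = 0.05, 10.0, 10.0

      for k in range(2000):
          # subgradient step on the local Lagrangian L_i(x, lam) = (x - a_i)^2 + lam*(x - 1)
          grad_x = 2.0 * (x - a) + lam
          grad_lam = x - 1.0
          x_half = x - alpha * grad_x
          lam_half = lam + alpha * grad_lam
          # projection step onto bounded sets to keep the subgradients bounded
          x_half = np.clip(x_half, -X_RADIUS, X_RADIUS)
          lam_half = np.clip(lam_half, 0.0, LAM_MAX)
          # diffusion step: average with neighbours
          x, lam = W @ x_half, W @ lam_half

      print("primal estimates:", x)            # all agents settle near the constrained optimum x* = 1
      print("dual estimates:", lam)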

  12. Applied optimal control theory of distributed systems

    CERN Document Server

    Lurie, K A

    1993-01-01

    This book represents an extended and substantially revised version of my earlier book, Optimal Control in Problems of Mathematical Physics, originally published in Russian in 1975. About 60% of the text has been completely revised and major additions have been included which have produced a practically new text. My aim was to modernize the presentation but also to preserve the original results, some of which are little known to a Western reader. The idea of composites, which is the core of the modern theory of optimization, was initiated in the early seventies. The reader will find here its implementation in the problem of optimal conductivity distribution in an MHD-generator channel flow. Since then it has emerged into an extensive theory which is undergoing a continuous development. The book does not pretend to be a textbook, neither does it offer a systematic presentation of the theory. Rather, it reflects a concept which I consider as fundamental in the modern approach to optimization of distributed systems. ...

  13. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the throughput of a WSN dramatically owing to its broadcast nature. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristic inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. Moreover, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristic inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. With reference to social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of the social characteristic, complement each other and can correct propagated errors even when the error fraction reaches 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
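
    The generic L1-decoding idea invoked above (correcting sparse propagated errors by l1 minimization) can be sketched as a small linear program; the random coding matrix and error pattern below are assumptions for illustration, and the secret channel, trap matrix, and trust mechanism of the paper are not modeled.

      # Illustrative L1-decoding sketch (generic technique, not the authors' full scheme):
      # recover a message x from y = G x + e, where e is a sparse error vector,
      # by solving min_x ||y - G x||_1 as a linear program.
      import numpy as np
      from scipy.optimize import linprog

      rng = np.random.default_rng(0)
      n, m, n_err = 20, 4, 3                   # received symbols, message length, corrupted links
      G = rng.normal(size=(n, m))              # assumed (random) coding matrix
      x_true = rng.normal(size=m)
      e = np.zeros(n)
      e[rng.choice(n, n_err, replace=False)] = rng.normal(scale=10.0, size=n_err)
      y = G @ x_true + e

      # LP: minimize sum(t) subject to -t <= y - G x <= t, with variables z = [x, t]
      c = np.concatenate([np.zeros(m), np.ones(n)])
      A_ub = np.block([[G, -np.eye(n)], [-G, -np.eye(n)]])
      b_ub = np.concatenate([y, -y])
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (m + n))
      x_hat = res.x[:m]
      print("recovery error:", np.linalg.norm(x_hat - x_true))   # near zero when e is sparse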

  14. Multilevel LDPC Codes Design for Multimedia Communication CDMA System

    Directory of Open Access Journals (Sweden)

    Hou Jia

    2004-01-01

    Full Text Available We design multilevel coding (MLC) with a semi-bit interleaved coded modulation (BICM) scheme based on low density parity check (LDPC) codes. Different from traditional designs, we join the MLC and BICM together by using Gray mapping, which is suitable for transmitting data over several equivalent channels with different code rates. To perform well at a signal-to-noise ratio (SNR) very close to the capacity of the additive white Gaussian noise (AWGN) channel, a random regular LDPC code and a simple semialgebra LDPC (SA-LDPC) code are discussed in MLC with parallel independent decoding (PID). The numerical results demonstrate that the proposed scheme achieves both power and bandwidth efficiency.

  15. Optimal economic and environment operation of micro-grid power systems

    International Nuclear Information System (INIS)

    Elsied, Moataz; Oukaour, Amrane; Gualous, Hamid; Lo Brutto, Ottavio A.

    2016-01-01

    Highlights: • Real-time energy management system for Micro-Grid power systems is introduced. • The management system considers a cost objective function and emission constraints. • The optimization problem is solved using a Binary Particle Swarm Algorithm. • Advanced real-time interface libraries are used to run the optimization code. - Abstract: In this paper, an advanced real-time energy management system is proposed in order to optimize micro-grid performance in real-time operation. The proposed management strategy capitalizes on the binary particle swarm optimization algorithm to minimize the energy cost and carbon dioxide and pollutant emissions while maximizing the power of the available renewable energy resources. Advanced real-time interface libraries are used to run the optimization code. Simulation results are presented for three different scenarios that reflect the complexity of the proposed problem. The proposed management system, along with its control system, is experimentally tested to validate the simulation results obtained from the optimization algorithm. The experimental results highlight the effectiveness of the proposed management system for micro-grid operation.
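
    A minimal binary particle swarm optimization sketch in the spirit of the management system described above is shown below; the unit costs, emission factors, demand, weighting, and sigmoid transfer function are illustrative assumptions, not the paper's micro-grid model.

      # Minimal binary PSO sketch (illustrative, not the paper's management system):
      # choose which of five generating units to switch on so that demand is met
      # while an assumed weighted cost-plus-emission objective is minimized.
      import numpy as np

      rng = np.random.default_rng(1)
      cost = np.array([12.0, 9.0, 15.0, 7.0, 11.0])      # $/h per unit (assumed)
      emis = np.array([4.0, 8.0, 2.0, 9.0, 5.0])         # kg CO2/h per unit (assumed)
      power = np.array([30.0, 40.0, 25.0, 50.0, 35.0])   # kW per unit (assumed)
      demand, w_emis, penalty = 90.0, 0.5, 1e3

      def objective(u):                                   # u is a 0/1 vector of unit states
          shortfall = max(0.0, demand - power @ u)
          return cost @ u + w_emis * (emis @ u) + penalty * shortfall

      n_part, dim, n_iter = 20, 5, 100
      x = rng.integers(0, 2, size=(n_part, dim)).astype(float)
      v = np.zeros((n_part, dim))
      pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
      gbest = pbest[np.argmin(pbest_f)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((n_part, dim)), rng.random((n_part, dim))
          v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
          prob = 1.0 / (1.0 + np.exp(-v))                 # sigmoid transfer to bit probabilities
          x = (rng.random((n_part, dim)) < prob).astype(float)
          f = np.array([objective(p) for p in x])
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          gbest = pbest[np.argmin(pbest_f)].copy()

      print("best on/off schedule:", gbest, "objective:", objective(gbest))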

  16. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.

  17. Optimization of Regenerators for AMRR Systems

    Energy Technology Data Exchange (ETDEWEB)

    Nellis, Gregory [University of Wisconsin, Madison, WI (United States); Klein, Sanford [University of Wisconsin, Madison, WI (United States); Brey, William [University of Wisconsin, Madison, WI (United States); Moine, Alexandra [University of Wisconsin, Madison, WI (United States); Nielson, Kaspar [University of Wisconsin, Madison, WI (United States)

    2015-06-18

    Active Magnetic Regenerative Refrigeration (AMRR) systems have no direct global warming potential or ozone depletion potential and hold the potential for providing refrigeration with efficiencies that are equal to or greater than the vapor compression systems used today. The work carried out in this project has developed and improved modeling tools that can be used to optimize and evaluate the magnetocaloric materials and geometric structure of the regenerator beds required for AMRR systems. There has been an explosion in the development of magnetocaloric materials for AMRR systems over the past few decades. The most attractive materials, based on the magnitude of the measured magnetocaloric effect, tend to also have large amounts of hysteresis. This project has provided, for the first time, a thermodynamically consistent method for evaluating these hysteretic materials in the context of an AMRR cycle. An additional, practical challenge that has been identified for AMRR systems is related to the participation of the regenerator wall in the cyclic process. The impact of housing heat capacity on both passive and active regenerative systems has been studied and clarified within this project. This report is divided into two parts corresponding to these two efforts. Part 1 describes the work related to modeling magnetic hysteresis while Part 2 discusses the modeling of the heat capacity of the housing. A key outcome of this project is the development of a publicly available modeling tool that allows researchers to identify a truly optimal magnetocaloric refrigerant. Typically, the refrigeration potential of a magnetocaloric material is judged entirely on the magnitude of the magnetocaloric effect, while other properties of the material are deemed unimportant. This project has shown that a material with a large magnetocaloric effect (as evidenced, for example, by a large adiabatic temperature change) may not be optimal when it is accompanied by a large hysteresis

  18. Prototype demonstration of radiation therapy planning code system

    International Nuclear Information System (INIS)

    Little, R.C.; Adams, K.J.; Estes, G.P.; Hughes, L.S. III; Waters, L.S.

    1996-01-01

    This is the final report of a one-year, Laboratory-Directed Research and Development project at the Los Alamos National Laboratory (LANL). Radiation therapy planning is the process by which a radiation oncologist plans a treatment protocol for a patient preparing to undergo radiation therapy. The objective is to develop a protocol that delivers sufficient radiation dose to the entire tumor volume, while minimizing dose to healthy tissue. Radiation therapy planning, as currently practiced in the field, suffers from inaccuracies made in modeling patient anatomy and radiation transport. This project investigated the ability to automatically model patient-specific, three-dimensional (3-D) geometries in advanced Los Alamos radiation transport codes (such as MCNP), and to efficiently generate accurate radiation dose profiles in these geometries via sophisticated physics modeling. Modern scientific visualization techniques were utilized. The long-term goal is that such a system could be used by a non-expert in a distributed computing environment to help plan the treatment protocol for any candidate radiation source. The improved accuracy offered by such a system promises increased efficacy and reduced costs for this important aspect of health care

  19. Prototype demonstration of radiation therapy planning code system

    Energy Technology Data Exchange (ETDEWEB)

    Little, R.C.; Adams, K.J.; Estes, G.P.; Hughes, L.S. III; Waters, L.S. [and others

    1996-09-01

    This is the final report of a one-year, Laboratory-Directed Research and Development project at the Los Alamos National Laboratory (LANL). Radiation therapy planning is the process by which a radiation oncologist plans a treatment protocol for a patient preparing to undergo radiation therapy. The objective is to develop a protocol that delivers sufficient radiation dose to the entire tumor volume, while minimizing dose to healthy tissue. Radiation therapy planning, as currently practiced in the field, suffers from inaccuracies made in modeling patient anatomy and radiation transport. This project investigated the ability to automatically model patient-specific, three-dimensional (3-D) geometries in advanced Los Alamos radiation transport codes (such as MCNP), and to efficiently generate accurate radiation dose profiles in these geometries via sophisticated physics modeling. Modern scientific visualization techniques were utilized. The long-term goal is that such a system could be used by a non-expert in a distributed computing environment to help plan the treatment protocol for any candidate radiation source. The improved accuracy offered by such a system promises increased efficacy and reduced costs for this important aspect of health care.

  20. Optimization of Hierarchically Scheduled Heterogeneous Embedded Systems

    DEFF Research Database (Denmark)

    Pop, Traian; Pop, Paul; Eles, Petru

    2005-01-01

    We present an approach to the analysis and optimization of heterogeneous distributed embedded systems. The systems are heterogeneous not only in terms of hardware components, but also in terms of communication protocols and scheduling policies. When several scheduling policies share a resource......, they are organized in a hierarchy. In this paper, we address design problems that are characteristic to such hierarchically scheduled systems: assignment of scheduling policies to tasks, mapping of tasks to hardware components, and the scheduling of the activities. We present algorithms for solving these problems....... Our heuristics are able to find schedulable implementations under limited resources, achieving an efficient utilization of the system. The developed algorithms are evaluated using extensive experiments and a real-life example....

  1. Optimization of Neuro-Fuzzy System

    Directory of Open Access Journals (Sweden)

    M. Sarosa

    2007-05-01

    Full Text Available Neuro-fuzzy systems have been shown to provide good performance on chromosome classification but do not offer a simple method to obtain the accurate parameter values required to yield the best recognition rate. This paper presents a neuro-fuzzy system whose parameters can be automatically adjusted using genetic algorithms. The approach combines the advantages of fuzzy logic theory, neural networks, and genetic algorithms. The structure consists of a four-layer feed-forward neural network that uses a GBell membership function as the output function. The proposed methodology has been applied and tested on banded chromosome classification from the Copenhagen Chromosome Database. Simulation results showed that the proposed neuro-fuzzy system optimized by genetic algorithms offers advantages in setting the parameter values, improves the recognition rate significantly and decreases the training/testing time, which makes the genetic neuro-fuzzy system suitable for chromosome classification.
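
    The generalized bell (GBell) membership function mentioned above, together with a toy genetic search over its three parameters, can be sketched as follows; the target curve and GA settings are assumptions for illustration and do not reproduce the paper's four-layer classifier.

      # Sketch of a generalized bell (GBell) membership function and a tiny genetic
      # search over its parameters (a, b, c); illustrative only, not the paper's
      # neuro-fuzzy chromosome classifier.
      import numpy as np

      def gbell(x, a, b, c):
          # GBell membership: 1 / (1 + |(x - c) / a|^(2b))
          return 1.0 / (1.0 + np.abs((x - c) / a) ** (2.0 * b))

      rng = np.random.default_rng(2)
      x = np.linspace(-5, 5, 200)
      target = gbell(x, a=1.5, b=2.0, c=0.5)              # assumed "true" membership to recover

      def fitness(p):                                      # mean squared error to the target
          return np.mean((gbell(x, *p) - target) ** 2)

      pop = rng.uniform([0.1, 0.5, -3.0], [4.0, 4.0, 3.0], size=(40, 3))
      for _ in range(100):
          f = np.array([fitness(p) for p in pop])
          parents = pop[np.argsort(f)[:20]]                # truncation selection
          kids = (parents[rng.integers(0, 20, 20)] + parents[rng.integers(0, 20, 20)]) / 2.0
          kids += rng.normal(scale=0.1, size=kids.shape)   # mutation
          pop = np.vstack([parents, kids])

      best = pop[np.argmin([fitness(p) for p in pop])]
      print("recovered (a, b, c):", best)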

  2. Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.

    Science.gov (United States)

    Uzun, Vassilya; Bilgin, Sami

    2016-01-01

    For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
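
    Generating such a tag can be sketched with the widely used Python qrcode package; the URL, identifier, and error-correction choice below are hypothetical and are not the study's actual service.

      # Minimal sketch of generating a patient QR Code Identity Tag (illustrative;
      # the URL and identifier below are hypothetical, not the study's real system).
      import qrcode   # third-party package: pip install qrcode[pil]

      patient_id = "TR-000174"                              # hypothetical identifier
      url = f"https://example-qrid.example.org/p/{patient_id}"

      # higher error correction keeps the code readable on a worn bracelet
      img = qrcode.make(url, error_correction=qrcode.constants.ERROR_CORRECT_H)
      img.save("identity_tag.png")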

  3. Effective multi-objective optimization of Stirling engine systems

    International Nuclear Information System (INIS)

    Punnathanam, Varun; Kotecha, Prakash

    2016-01-01

    Highlights: • Multi-objective optimization of three recent Stirling engine models. • Use of efficient crossover and mutation operators for real coded Genetic Algorithm. • Demonstrated supremacy of the strategy over the conventionally used algorithm. • Improvements of up to 29% in comparison to literature results. - Abstract: In this article we demonstrate the supremacy of the Non-dominated Sorting Genetic Algorithm-II with Simulated Binary Crossover and Polynomial Mutation operators for the multi-objective optimization of Stirling engine systems by providing three examples, viz., (i) a finite time thermodynamic model, (ii) a Stirling engine thermal model with associated irreversibility and (iii) polytropic finite speed based thermodynamics. The finite time thermodynamic model involves seven decision variables and consists of three objectives: output power, thermal efficiency and rate of entropy generation. In comparison to the literature, it was observed that the strategy used provides a better Pareto front and leads to improvements of up to 29%. The performance is also evaluated on a Stirling engine thermal model which considers the associated irreversibility of the cycle and consists of three objectives involving eleven decision variables. The supremacy of the suggested strategy is also demonstrated on the experimentally validated polytropic finite speed thermodynamics based Stirling engine model for an optimization involving two objectives and ten decision variables.
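
    A minimal sketch of NSGA-II with Simulated Binary Crossover and Polynomial Mutation, assuming the pymoo library (version 0.6 or later), is given below on a toy two-objective problem; the Stirling engine thermodynamic models themselves are not reproduced.

      # NSGA-II with Simulated Binary Crossover and Polynomial Mutation on a toy
      # two-objective problem (pymoo >= 0.6 assumed); the Stirling engine models
      # from the paper are not reproduced here.
      import numpy as np
      from pymoo.core.problem import ElementwiseProblem
      from pymoo.algorithms.moo.nsga2 import NSGA2
      from pymoo.operators.crossover.sbx import SBX
      from pymoo.operators.mutation.pm import PM
      from pymoo.optimize import minimize

      class ToyEngine(ElementwiseProblem):
          def __init__(self):
              super().__init__(n_var=2, n_obj=2, xl=np.array([0.0, 0.0]), xu=np.array([1.0, 1.0]))
          def _evaluate(self, x, out, *args, **kwargs):
              # stand-ins for "-power" and "entropy generation rate"
              out["F"] = [-(x[0] + x[1]), x[0] ** 2 + 0.5 * x[1] ** 2]

      algorithm = NSGA2(pop_size=100, crossover=SBX(eta=15, prob=0.9), mutation=PM(eta=20))
      res = minimize(ToyEngine(), algorithm, ("n_gen", 100), seed=1, verbose=False)
      print(res.F[:5])     # a sample of the resulting Pareto front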

  4. 14 CFR Sec. 1-4 - System of accounts coding.

    Science.gov (United States)

    2010-01-01

    ...) A fifth digit, appended as a decimal, has been assigned for internal control by the BTS of... different fifth digit code number from that assigned by the BTS may be adopted for internal recordkeeping by... the code number assigned by the BTS is employed in reporting to the BTS on Form 41 Reports. [ER-755...

  5. Simultaneous Propulsion System and Trajectory Optimization

    Science.gov (United States)

    Hendricks, Eric S.; Falck, Robert D.; Gray, Justin S.

    2017-01-01

    A number of new aircraft concepts have recently been proposed which tightly couple the propulsion system design and operation with the overall vehicle design and performance characteristics. These concepts include propulsion technologies such as boundary layer ingestion, hybrid electric propulsion systems, distributed propulsion systems and variable cycle engines. Initial studies examining these concepts have typically used a traditional decoupled approach to aircraft design where the aerodynamic and propulsion designs are done a priori and tabular data is used to provide inexpensive look-ups to the trajectory analysis. However, the cost of generating the tabular data begins to grow exponentially when newer aircraft concepts require consideration of additional operational parameters such as multiple throttle settings, angle-of-attack effects on the propulsion system, or propulsion throttle setting effects on aerodynamics. This paper proposes a new modeling approach that eliminates the need to generate tabular data, instead allowing an expensive propulsion or aerodynamic analysis to be directly integrated into the trajectory analysis model and the entire design problem optimized in a fully coupled manner. The new method is demonstrated by implementing a canonical optimal control problem, the F-4 minimum time-to-climb trajectory optimization, using three relatively new analysis tools: OpenMDAO, PyCycle and Pointer. PyCycle and Pointer both provide analytic derivatives, and OpenMDAO enables the two tools to be combined into a coupled model that can be run in an efficient parallel manner that helps to offset the increased cost of the more expensive propulsion analysis. Results generated with this model serve as a validation of the tightly coupled design method and guide future studies to examine aircraft concepts with more complex operational dependencies for the aerodynamic and propulsion models.
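
    The coupled pattern described above can be sketched in a few lines of OpenMDAO, with a trivial placeholder component standing in for the expensive propulsion analysis; the component equations and bounds are assumptions and do not represent PyCycle or Pointer.

      # Tiny OpenMDAO sketch of the coupled pattern: a (here trivial) propulsion
      # component is wired directly into the optimization instead of being tabulated.
      # The component model and numbers are placeholders, not PyCycle or Pointer.
      import openmdao.api as om

      class Propulsion(om.ExplicitComponent):
          def setup(self):
              self.add_input("throttle", val=1.0)
              self.add_output("fuel_rate", val=0.0)     # placeholder "expensive" model output
              self.declare_partials("fuel_rate", "throttle", method="fd")
          def compute(self, inputs, outputs):
              t = inputs["throttle"]
              outputs["fuel_rate"] = 0.2 * t ** 2 + 0.05 / t   # penalizes very low throttle too

      prob = om.Problem()
      prob.model.add_subsystem("prop", Propulsion(), promotes=["*"])
      prob.driver = om.ScipyOptimizeDriver()
      prob.driver.options["optimizer"] = "SLSQP"
      prob.model.add_design_var("throttle", lower=0.2, upper=1.0)
      prob.model.add_objective("fuel_rate")
      prob.setup()
      prob.run_driver()
      print("optimal throttle:", prob.get_val("throttle"))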

  6. Exploring a QoS Driven Scheduling Approach for Peer-to-Peer Live Streaming Systems with Network Coding

    Science.gov (United States)

    Cui, Laizhong; Lu, Nan; Chen, Fu

    2014-01-01

    Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets for providing robustness in dynamic environments. Pull scheduling brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves the efficiency. However, it may also introduce some extra delay and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS driven push scheduling approach in this paper. The main contributions of this paper are as follows: (i) we introduce a new network coding method to increase the content diversity and reduce the complexity of scheduling; (ii) we formulate the push scheduling as an optimization problem and transform it to a min-cost flow problem so that it can be solved in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead and do extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
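
    The min-cost flow reformulation mentioned in contribution (ii) can be illustrated with a toy NetworkX graph; the nodes, capacities, and delay weights below are made up and do not correspond to the paper's exact construction.

      # Illustrative min-cost-flow toy (NetworkX), echoing the idea of casting a
      # push-scheduling decision as a flow problem; the graph below is invented.
      import networkx as nx

      G = nx.DiGraph()
      # two coded segments at the source must be pushed to two receiving peers via two senders
      G.add_node("src", demand=-2)                       # supply of two segments
      G.add_node("peerA", demand=1)
      G.add_node("peerB", demand=1)
      for sender, delay in [("s1", 1), ("s2", 3)]:       # edge weight models delay cost
          G.add_edge("src", sender, capacity=2, weight=0)
          G.add_edge(sender, "peerA", capacity=1, weight=delay)
          G.add_edge(sender, "peerB", capacity=1, weight=delay)

      flow = nx.min_cost_flow(G)                          # polynomial-time assignment
      print(flow)   # e.g. routes both segments through the low-delay sender s1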

  7. Optimization of power system voltage stability

    Science.gov (United States)

    Stamp, Jason Edwin

    Contemporary power systems exist under heavy stress, caused by higher asset utilization in electric power transmission. As networks are operated nearer to their limits, new stability issues have arisen. One of the more destructive problems is voltage instability, where large areas of an electrical network may experience reduced voltages or collapse because of high reactive power demand. Voltage stability margins may be improved through the adjustment of the system operating position, which alters the power flow profile of the transmission network. Furthermore, the margins may be optimized through the application of nonlinear programming, if they are quantified using an index of voltage collapse proximity. This dissertation details the maximization of the eigenvalues of the reduced reactive power-voltage matrix, in an effort to increase voltage security. The nonlinear optimization was solved using four different techniques. First, a conventional optimal power flow was applied to the problem, which solved linear approximations of the original problem and maintained feasibility for the intermediate points. This method was augmented to include a quadratic model of the objective function. In addition, the feasibility requirement was relaxed to produce a third solution technique. Finally, the stability optimization problem was solved using a quadratic model without the feasibility requirement. Tests of all four methods were performed on three sample power systems. The systems included six, 14, and 118 bus examples. In all three cases, each of the four methods effected improvement in the stability margin, as measured by a variety of indicators. The infeasible linear solution provided the best results, based on runtime and the relative stability improvement. Also, the results showed that the additional quadratic approximation did not provide any measurable benefit to the procedure. Moreover, the methods that specified feasibility at each step were inferior compared to the

  8. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    Science.gov (United States)

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year-to-year - codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places a missing code in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the codes correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer
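
    A toy version of the numerical-similarity placement idea is sketched below: a retired code is assigned to the grouper whose numeric range lies closest to it; the grouper ranges and retired codes are invented for illustration and are not the SCILHS tables.

      # Toy sketch of numerical-similarity placement for retired CPT codes: a missing
      # code is assigned to the grouper whose numeric range lies closest to it.
      # The grouper ranges and the retired codes below are invented for illustration.
      GROUPERS = {
          "Surgery / musculoskeletal": (20000, 29999),
          "Radiology / diagnostic":    (70000, 79999),
          "Medicine / cardiovascular": (92920, 93799),
      }

      def distance_to_range(code, lo, hi):
          if lo <= code <= hi:
              return 0
          return min(abs(code - lo), abs(code - hi))

      def place(retired_code):
          return min(GROUPERS, key=lambda g: distance_to_range(retired_code, *GROUPERS[g]))

      for retired in (21100, 93545, 76003):               # hypothetical retired codes
          print(retired, "->", place(retired))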

  9. Optimal design of FIR high pass filter based on L1 error approximation using real coded genetic algorithm

    Directory of Open Access Journals (Sweden)

    Apoorva Aggarwal

    2015-12-01

    Full Text Available In this paper, an optimal design of a linear phase digital finite impulse response (FIR) highpass (HP) filter using the L1-norm based real-coded genetic algorithm (RCGA) is investigated. A novel fitness function based on the L1 norm is adopted to enhance the design accuracy. Optimized filter coefficients are obtained by defining the filter objective function in the L1 sense using the RCGA. Simulation analysis reveals that the performance of the RCGA adopting this fitness function is better in terms of the signal attenuation ability of the filter, a flatter passband and the convergence rate. The algorithm improves by a large amount over the gradient-based L1 optimization approach on various factors. It is concluded that the RCGA leads to the best solution under the specified parameters for the FIR filter design, at the cost of a slightly higher, barely noticeable transition width.
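
    An L1-sense fitness function for a linear-phase FIR highpass candidate can be sketched as follows; the band edge, filter order, and random candidate are assumptions, and the real-coded GA that would evolve the coefficients is not reproduced.

      # Sketch of an L1-norm fitness function for a linear-phase FIR highpass design
      # (band edge and filter order are assumed; the real-coded GA that would search
      # over the coefficients is not reproduced here).
      import numpy as np
      from scipy.signal import freqz

      N_TAPS = 21
      w = np.linspace(0, np.pi, 512)
      desired = (w >= 0.6 * np.pi).astype(float)          # ideal highpass, cutoff at 0.6*pi

      def l1_fitness(half_coeffs):
          # build a symmetric (linear-phase) impulse response from the free half
          h = np.concatenate([half_coeffs, half_coeffs[-2::-1]])
          _, H = freqz(h, worN=w)
          return np.sum(np.abs(np.abs(H) - desired))      # L1 error, to be minimized by the GA

      # single evaluation of a candidate chromosome (random here; a GA would evolve it)
      candidate = np.random.default_rng(3).normal(scale=0.1, size=(N_TAPS + 1) // 2)
      print("L1 fitness of random candidate:", l1_fitness(candidate))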

  10. A Spanish version for the new ERA-EDTA coding system for primary renal disease

    Directory of Open Access Journals (Sweden)

    Óscar Zurriaga

    2015-07-01

    Conclusions: Translation and adaptation into Spanish represent an improvement that will help to introduce and use the new coding system for primary renal disease, as it can help reduce the time devoted to coding and shorten the period of adaptation of health workers to the new codes.

  11. Performance analysis of wavelength/spatial coding system with fixed in-phase code matrices in OCDMA network

    Science.gov (United States)

    Tsai, Cheng-Mu; Liang, Tsair-Chun

    2011-12-01

    This paper proposes a wavelength/spatial (W/S) coding system with fixed in-phase code (FIPC) matrices for the optical code-division multiple-access (OCDMA) network. A scheme is presented to form the FIPC matrix, which is applied to construct the W/S OCDMA network. The encoder/decoder in the W/S OCDMA network is fully able to eliminate the multiple-access interference (MAI) at the balanced photo-detectors (PDs), owing to the fixed in-phase cross correlation. The phase-induced intensity noise (PIIN), which scales with the square of the received power, is markedly suppressed in the receiver by spreading the received power over the PDs while the net signal power is kept the same. Simulation results show that the W/S OCDMA network based on the FIPC matrices can not only completely remove the MAI but also effectively suppress the PIIN, upgrading the network performance.

  12. ELCOS: the PSI code system for LWR core analysis. Part II: user's manual for the fuel assembly code BOXER

    International Nuclear Information System (INIS)

    Paratte, J.M.; Grimm, P.; Hollard, J.M.

    1996-02-01

    ELCOS is a flexible code system for the stationary simulation of light water reactor cores. It consists of the four computer codes ETOBOX, BOXER, CORCOD and SILWER. The user's manual of the second one is presented here. BOXER calculates the neutronics in cartesian geometry. The code can roughly be divided into four stages: - organisation: choice of the modules, file manipulations, reading and checking of input data, - fine group fluxes and condensation: one-dimensional calculation of fluxes and computation of the group constants of homogeneous materials and cells, - two-dimensional calculations: geometrically detailed simulation of the configuration in few energy groups, - burnup: evolution of the nuclide densities as a function of time. This manual shows all input commands which can be used while running the different modules of BOXER. (author) figs., tabs., refs

  13. Cost Optimal System Identification Experiment Design

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    A structural system identification experiment design method is formulated in the light of decision theory, structural reliability theory and optimization theory. The experiment design is based on a preposterior analysis, well-known from the classical decision theory. I.e. the decisions concerning...... the experiment design are not based on obtained experimental data. Instead the decisions are based on the expected experimental data assumed to be obtained from the measurements, estimated based on prior information and engineering judgement. The design method provides a system identification experiment design...... reflecting the cost of the experiment and the value of obtained additional information. An example concerning design of an experiment for parametric identification of a single degree of freedom structural system shows the applicability of the experiment design method....

  14. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    International Nuclear Information System (INIS)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl

    2008-10-01

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents, including non-LOCA (loss of coolant accident) and LOCA events, of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for the numerical analysis, and the search method for the state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained

  15. TASS/SMR Code Topical Report for SMART Plant, Vol. I: Code Structure, System Models, and Solution Methods

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Young Jong; Kim, Soo Hyoung; Kim, See Darl (and others)

    2008-10-15

    The TASS/SMR code has been developed with domestic technologies for the safety analysis of the SMART plant, which is an integral type pressurized water reactor. It can be applied to the analysis of design basis accidents, including non-LOCA (loss of coolant accident) and LOCA events, of the SMART plant. The TASS/SMR code can be applied to any plant regardless of the structural characteristics of the reactor, since the code solves the same governing equations for both the primary and secondary systems. The code has been developed to meet the requirements of a safety analysis code. This report describes the overall structure of TASS/SMR, the input processing, and the processes of steady state and transient calculations. In addition, the basic differential equations, finite difference equations, state relationships, and constitutive models are described in the report. First, the conservation equations, the discretization process for the numerical analysis, and the search method for the state relationships are described. Then, a core power model, heat transfer models, physical models for various components, and control and trip models are explained.

  16. Environmental performance of green building code and certification systems.

    Science.gov (United States)

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database of NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoule (TJ) of primary energy and 328 million liter of water over its life-cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models measured in life-cycle impact reduction were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to the behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).

  17. Computer codes for ventilation in nuclear facilities

    International Nuclear Information System (INIS)

    Mulcey, P.

    1987-01-01

    In this paper the authors present some computer codes, developed in recent years, for ventilation and radioprotection. These codes are used for safety analysis in the conception, operation and dismantlement of nuclear facilities. The authors present in particular: the DACC1 code, used for aerosol deposition in the sampling circuits of radiation monitors; the PIAF code, used for modeling complex ventilation systems; and the CLIMAT 6 code, used for optimization of air conditioning systems [fr

  18. Optimization of Hybrid Renewable Energy Systems

    Science.gov (United States)

    Contreras Cordero, Francisco Jose

    Use of diesel generators in remote communities is economically and environmentally unsustainable. Consequently, researchers have focussed on designing hybrid renewable energy systems (HRES) for distributed electricity generation in remote communities. However, the cost-effectiveness of interconnecting multiple remote communities (microgrids) has not been explored. The main objective of this thesis is to develop a methodology for optimal design of HRES and microgrids for remote communities. A set of case studies was developed to test this methodology and it was determined that a combination of stand-alone decentralized HRES and microgrids is the most cost-effective power generation scheme when studying a group of remote communities.

  19. Metaheuristics progress in complex systems optimization

    CERN Document Server

    Doerner, Karl F; Greistorfer, Peter; Gutjahr, Walter; Hartl, Richard F; Reimann, Marc

    2007-01-01

    The aim of ""Metaheuristics: Progress in Complex Systems Optimization"" is to provide several different kinds of information: a delineation of general metaheuristics methods, a number of state-of-the-art articles from a variety of well-known classical application areas as well as an outlook to modern computational methods in promising new areas. Therefore, this book may equally serve as a textbook in graduate courses for students, as a reference book for people interested in engineering or social sciences, and as a collection of new and promising avenues for researchers working in this field.

  20. Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System

    Science.gov (United States)

    Taft, James R.

    2000-01-01

    The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles and large simulations of highly resolved full

  1. Process-optimizing Multivariable Control of a Boiler System

    DEFF Research Database (Denmark)

    Pedersen, Tom Søndergaard; Hansen, T.; Hangstrup, M.

    1996-01-01

    This paper presents a method to apply multivariable controllers as optional process optimizing extensions to existing conventional control systems.

  2. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for a noisy channel. For transmissions over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder based on the split vector quantizer. We then apply the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 kbit/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields a significant improvement in LSF encoding performance by ensuring reliable transmissions over a noisy channel.

  3. About the coding system of rivers, catchment basins and their characteristics in the Republic of Armenia

    International Nuclear Information System (INIS)

    Avagyan, A.A.; Arakelyan, A.A.

    2011-01-01

    The coding of rivers, catchments, lakes and seas is one of the most important requirements of the Water Framework Directive of the European Union. This coding provides solutions to current problems in the planning and management of the water resources of the Republic of Armenia. The coding system captures the hierarchy of water bodies and watersheds with their typology, as well as their geographic and natural conditions, anthropogenic pressures and ecological status. This approach is a fundamentally new, comprehensive solution to the coding of water resources. The coding technique makes it possible to automate the assessment and mapping of environmental risks and of areas of water bodies which are subjected to significant pressure, and also helps to solve other problems concerning the planning and management of water resources. The complex code of each water body consists of the following groups of codes: the hydrographic code - an identifier of the water body in the hydrographic system of the country; codes of static attributes in the system of requirements of the Water Framework Directive of the European Union; codes of static attributes of the qualifiers of the RA National Water Program; codes of dynamic attributes that define the quality of water and the characteristics of water use; and codes of dynamic attributes describing the human impact and determining the ecological status of the water body

  4. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminance correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, displaying more information requires supporting technologies such as digital compression to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done by the design of an efficient transform that reduces the existing redundancy in the stereo image pair. This approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step has been replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
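
    A one-dimensional toy of the lifting-style prediction step (disparity compensation plus luminance correction) is sketched below; the disparity, gain, and offset are assumed known, whereas the paper estimates and optimizes them on real stereo images.

      # 1D toy of the lifting-style joint coding idea: the right view is predicted
      # from a disparity-compensated, luminance-corrected left view, and only the
      # prediction residual (detail) is kept. Disparity, gain and offset are assumed
      # known here; the paper's optimized prediction on real images is not reproduced.
      import numpy as np

      left = np.array([10, 12, 15, 20, 26, 30, 28, 25], dtype=float)
      disparity, gain, offset = 2, 1.1, 3.0               # assumed parameters
      right = gain * np.roll(left, disparity) + offset + np.array([0, 0, 1, -1, 0, 2, 0, -1])

      # "prediction" step of the lifting scheme (hybrid: compensation + correction)
      predicted = gain * np.roll(left, disparity) + offset
      detail = right - predicted                           # small residual -> cheap to code

      # decoder side: perfect reconstruction from (left, detail) and the parameters
      right_rec = gain * np.roll(left, disparity) + offset + detail
      print("max reconstruction error:", np.max(np.abs(right_rec - right)))   # 0.0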

  5. Evaluation of the analysis models in the ASTRA nuclear design code system

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Nam Jin; Park, Chang Jea; Kim, Do Sam; Lee, Kyeong Taek; Kim, Jong Woon [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)

    2000-11-15

    In the field of nuclear reactor design, the main practice has been the application of improved design code systems. In the process, much experience and knowledge has been accumulated in processing input data, nuclear fuel reload design, and the production and analysis of design data. However, less effort has been devoted to analyzing the methodology and to developing or improving those code systems. Recently, the KEPCO Nuclear Fuel Company (KNFC) developed the ASTRA (Advanced Static and Transient Reactor Analyzer) code system for the purpose of nuclear reactor design and analysis. In this code system, two-group constants are generated by the CASMO-3 code system. The objective of this research is to analyze the analysis models used in the ASTRA/CASMO-3 code system. This evaluation requires in-depth comprehension of the models, which is as important as the development of the code system itself. Currently, most of the code systems used in domestic nuclear power plants are imported, so it is very difficult to maintain them and to adapt them to changing situations. Therefore, the evaluation of the analysis models in the ASTRA nuclear reactor design code system is very important.

  6. Morse code recognition system with fuzzy algorithm for disabled persons.

    Science.gov (United States)

    Wu, C-M; Luo, C-H

    2002-01-01

    It is generally known that Morse code is an efficient input method for one or two switches; it is made from long and short sounds separated by silence between the sounds. The long-to-short ratio in the definition is always 3 to 1, but the long-to-short ratio variation for a disabled person is so large that it is difficult to recognize. In the last few years, several Morse code recognition methods have been successfully built on LMS adaptive algorithms and neural network algorithms. However, LMS-related adaptive algorithms need massive computation to infer the characteristics of the controller, and the neural network must first be trained on input data before it can be used to recognize a Morse code sequence. In this study, two fuzzy algorithms are used to recognize unstable Morse code sequences, and the results demonstrate a significant improvement in recognition for real-time signal processing on a single-chip microprocessor.
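
    A small fuzzy dot/dash classifier in the same spirit can be sketched as follows; the triangular membership functions, adaptation rule, and tone durations are illustrative assumptions and do not reproduce the paper's two fuzzy algorithms.

      # Small fuzzy dot/dash classifier for unstable keying (illustrative only).
      # Each tone duration gets a membership in "dot" and "dash" fuzzy sets
      # centred on an adaptive reference duration.
      import numpy as np

      def memberships(duration, ref):
          # triangular memberships: "dot" peaks at ref, "dash" peaks at 3*ref
          dot = max(0.0, 1.0 - abs(duration - ref) / (1.5 * ref))
          dash = max(0.0, 1.0 - abs(duration - 3.0 * ref) / (2.0 * ref))
          return dot, dash

      durations = [0.11, 0.35, 0.09, 0.42, 0.15, 0.30]    # seconds, wide long/short variation
      ref = 0.1                                            # initial estimate of a "dot" length
      decoded = []
      for d in durations:
          dot, dash = memberships(d, ref)
          symbol = "." if dot >= dash else "-"
          decoded.append(symbol)
          # slowly adapt the reference so the classifier tracks the user's keying speed
          ref = 0.9 * ref + 0.1 * (d if symbol == "." else d / 3.0)

      print("".join(decoded))                              # ".-.-.-" for the durations above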

  7. Structure and Operation of the ITS Code System

    Science.gov (United States)

    Halbleib, J.

    The TIGER series of time-independent coupled electron-photon Monte Carlo transport codes is a group of multimaterial and multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron-photon cascade by combining microscopic photon transport with a macroscopic random walk [1] for electron transport. Major contributors to its evolution are listed in Table 10.1.

  8. On optimal designs of transparent WDM networks with 1 + 1 protection leveraged by all-optical XOR network coding schemes

    Science.gov (United States)

    Dao, Thanh Hai

    2018-01-01

    Network coding techniques are seen as a new dimension for improving network performance thanks to their capability of utilizing network resources more efficiently. Indeed, the application of network coding to the realm of failure recovery in optical networks has marked a major departure from traditional protection schemes, as it can potentially achieve both rapid recovery and capacity improvement, challenging the prevailing wisdom of trading capacity efficiency for recovery speed and vice versa. In this context, the maturing of all-optical XOR technologies appears as a good match for the need for more efficient protection in transparent optical networks. In addressing this opportunity, we propose to use practical all-optical XOR network coding to leverage the conventional 1 + 1 optical path protection in transparent WDM optical networks. The network coding-assisted protection solution combines the protection flows of two demands sharing the same destination node under supportive conditions, paving the way for reducing the backup capacity. A novel mathematical model taking into account the operation of the new protection scheme for optimal network design is formulated as an integer linear program. Numerical results based on extensive simulations on the realistic COST239 and NSFNET topologies are presented to highlight the benefits of our proposal compared to the conventional approach in terms of wavelength resource efficiency and network throughput.
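
    The coding gain can be illustrated bitwise: the protection copies of two demands sharing a destination are XOR-combined on the common backup segment, and after a single failure the lost signal is recovered by XOR-ing the coded backup with the surviving working copy; the payloads below are arbitrary, and the all-optical implementation and the ILP design are not shown.

      # Bitwise sketch of the XOR-coded 1+1 idea: one shared backup carries the XOR
      # of two demands' protection copies, and either working signal can be rebuilt
      # from it after a single failure. (Illustrative; not the optical implementation.)
      d1 = bytes([0x3A, 0x7F, 0x00, 0xC5])                 # demand 1 payload (arbitrary)
      d2 = bytes([0x11, 0x22, 0x33, 0x44])                 # demand 2 payload (arbitrary)

      coded_backup = bytes(a ^ b for a, b in zip(d1, d2))  # sent on the shared backup path

      # failure on demand 1's working path: destination still has d2 (working) + backup
      d1_recovered = bytes(a ^ b for a, b in zip(coded_backup, d2))
      assert d1_recovered == d1
      print("demand 1 recovered from XOR backup:", d1_recovered.hex())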

  9. Fuel management and core design code systems for pressurized water reactor neutronic calculations

    International Nuclear Information System (INIS)

    Ahnert, C.; Arayones, J.M.

    1985-01-01

    A package of connected code systems for the neutronic calculations relevant to fuel management and core design has been developed and applied, for validation, to the startup tests and first operating cycle of a 900 MW (electric) PWR. The package includes the MARIA code system for the modeling of the different types of PWR fuel assemblies, the CARMEN code system for detailed few-group diffusion calculations for PWR cores at operating and burnup conditions, and the LOLA code system for core simulation using one-group nodal theory parameters explicitly calculated from the detailed solutions

  10. Optimal Control of Solar Heating System

    KAUST Repository

    Huang, Bin-Juine

    2017-02-21

    Forced-circulation solar heating systems have been widely used in process and domestic heating applications. Additional pumping power is required to circulate the water through the collectors to absorb the solar energy. The present study develops a maximum-power-point tracking (MPPT) control to obtain the minimum pumping power consumption at an optimal heat collection. The net heat energy gain Qnet (= Qs − Wp/ηe) was found to be the cost function for MPPT. A step-up/step-down controller was used in the feedback design of the MPPT. The field test results show that the pumping power is 89 W at Qs = 13.7 kW and IT = 892 W/m2. A very high electrical COP of the solar heating system (Qs/Wp = 153.8) is obtained.
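
    A step-up/step-down (perturb-and-observe style) tracker maximizing Qnet = Qs − Wp/ηe can be sketched as follows; the collector and pump models and the efficiency value are invented placeholders rather than the field system.

      # Sketch of a step-up/step-down tracker that maximizes Qnet = Qs - Wp/eta_e
      # over pump speed. The collector and pump models below are placeholders.
      ETA_E = 0.35                                          # assumed electric conversion efficiency

      def Qs(speed):                                        # collected heat [W], saturating with flow
          return 14000.0 * speed / (speed + 0.3)

      def Wp(speed):                                        # pumping power [W], roughly cubic in speed
          return 600.0 * speed ** 3

      def Qnet(speed):
          return Qs(speed) - Wp(speed) / ETA_E

      speed, step = 0.5, 0.05                               # normalized pump speed and perturbation
      last = Qnet(speed)
      for _ in range(60):
          speed = min(1.0, max(0.1, speed + step))
          now = Qnet(speed)
          if now < last:                                    # wrong direction: step the other way next
              step = -step
          last = now

      # the operating point oscillates by one step around the maximum of Qnet
      print(f"operating speed ~ {speed:.2f}, Qnet ~ {last:.0f} W")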

  11. Optimized systems for energy efficient optical tweezing

    Science.gov (United States)

    Kampmann, R.; Kleindienst, R.; Grewe, A.; Bürger, Elisabeth; Oeder, A.; Sinzinger, S.

    2013-03-01

    Compared to conventional optics like singlet lenses or even microscope objectives, advanced optical designs help to develop properties specifically useful for efficient optical tweezers. We present an optical setup providing a customized intensity distribution optimized with respect to large trapping forces. The optical design concept combines a refractive double axicon with a reflective parabolic focusing mirror. The axicon arrangement creates an annular field distribution and thus clears space for additional integrated observation optics in the center of the system. Finally, the beam is focused to the desired intensity distribution by a parabolic ring mirror. The compact realization of the system potentially opens new fields of application for optical tweezers, such as in production industries and micro-nano assembly.

  12. Optimizing Resource Utilization in Grid Batch Systems

    International Nuclear Information System (INIS)

    Gellrich, Andreas

    2012-01-01

    On Grid sites, the requirements of the computing tasks (jobs) for computing, storage, and network resources differ widely. For instance, Monte Carlo production jobs are almost purely CPU-bound, whereas physics analysis jobs demand high data rates. In order to optimize the utilization of the compute node resources, jobs must be distributed intelligently over the nodes. Although the job resource requirements cannot be deduced directly, jobs are mapped to POSIX UID/GID according to the VO, VOMS group and role information contained in the VOMS proxy. The UID/GID then allows jobs to be distinguished, provided users are using VOMS proxies as planned by the VO management, e.g. 'role=production' for Monte Carlo jobs. It is possible to set up and configure batch systems (queuing system and scheduler) at Grid sites based on these considerations, although scaling limits were observed with the MAUI scheduler. In tests these limitations could be overcome with a home-made scheduler.

  13. Optimization of catalyst system reaps economic benefits

    International Nuclear Information System (INIS)

    Le Roy, C.F.; Hanshaw, M.J.; Fischer, S.M.; Malik, T.; Kooiman, R.R.

    1991-01-01

    Champlin Refining and Chemicals Inc. is learning to optimize its catalyst systems for hydrotreating Venezuelan gas oils through a program of research, pilot plant testing, and commercial unit operation. The economic results of this project have been evaluated, and the benefits are most evident in improvements in product yields and qualities. The project has involved six commercial test runs to date (Runs 10-15), with a seventh run planned. A summary is given of the different types of catalyst systems used in the test runs and of the catalyst philosophy that developed. Runs 10 and 11 used standard CoMo and NiMo catalysts for heavy gas oil hydrotreating. These catalysts had small pore sizes and suffered high deactivation rates because of metals contamination. When it was discovered that metals contamination was a problem, catalyst options were reviewed.

  14. Trajectory of Sewerage System Development Optimization

    Science.gov (United States)

    Chupin, R. V.; Mayzel, I. V.; Chupin, V. R.

    2017-11-01

    The transition to market relations has brought a new approach to managing the development of urban engineering systems. This approach has shifted to the municipal level and can, broadly, be presented in two stages. The first is the development of a scheme for the water supply and sanitation system; the second is the implementation of this scheme through the investment programs of utilities. In these investment programs, the development and reconstruction of water disposal systems are financed through the investment component of the tariff, connection fees for newly commissioned capital construction projects, targeted financing under selected state and municipal programs, and loans and credits. Funding for the development of sewerage systems is therefore limited, and the problem arises of distributing it rationally between the construction of new water disposal facilities and the reconstruction of existing ones. The paper proposes a methodology for generating sewerage system development options and selecting the best of them by the life-cycle cost criterion under limited construction investment, together with models and methods for analyzing and optimizing their reconstruction and development that take reliability and seismic resistance into account.
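
    As a minimal illustration of such a budget-constrained selection (an assumed toy formulation, not the paper's model), the choice of development options by life-cycle cost saving under a limited investment budget can be cast as a 0/1 knapsack problem:

    def select_options(options, budget):
        """options: list of (name, investment, life_cycle_saving) with integer costs;
        returns (best_total_saving, chosen_names) for the budget (0/1 knapsack DP)."""
        best = [(0.0, [])] * (budget + 1)
        for name, cost, saving in options:
            for b in range(budget, cost - 1, -1):
                cand = best[b - cost][0] + saving
                if cand > best[b][0]:
                    best[b] = (cand, best[b - cost][1] + [name])
        return best[budget]

    options = [("new collector A", 40, 55.0),
               ("rehabilitate trunk sewer B", 25, 38.0),
               ("pump station upgrade C", 30, 33.0)]
    print(select_options(options, budget=70))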

  15. Optimizing Hydronic System Performance in Residential Applications

    Energy Technology Data Exchange (ETDEWEB)

    Arena, L.; Faakye, O.

    2013-10-01

    Although new homes constructed with hydronic heat comprise only 3% of the market (US Census Bureau 2009), almost 14 million (11%) of the 115 million existing homes in the United States are heated with steam or hot water systems, according to 2009 US Census data. Improvements in hydronic system performance could therefore result in significant energy savings in the US. When operating properly, the combination of a gas-fired condensing boiler with baseboard convectors and an indirect water heater is a viable option for high-efficiency residential space heating in cold climates. Based on previous research efforts, however, it is apparent that these types of systems are typically not designed and installed to achieve maximum efficiency. Furthermore, guidance on proper design and commissioning for heating contractors and energy consultants is hard to find and is not comprehensive. Through modeling and monitoring, CARB sought to determine the optimal combination(s) of components - pumps, high-efficiency heat sources, plumbing configurations, and controls - that result in the highest overall efficiency for a hydronic system when baseboard convectors are used as the heat emitter. The impact of variable-speed pumps on energy use and system performance was also investigated, along with the effects of various control strategies and the introduction of thermal mass.

  16. Optimal Tax Depreciation under a Progressive Tax System

    NARCIS (Netherlands)

    Wielhouwer, J.L.; De Waegenaere, A.M.B.; Kort, P.M.

    2000-01-01

    The focus of this paper is on the effect of a progressive tax system on optimal tax depreciation. Using dynamic optimization, we show that an optimal strategy exists, and we provide an analytical expression for the optimal depreciation charges. Depreciation charges initially decrease over time,
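
    As a rough illustration of the kind of problem studied (a toy formulation with an assumed piecewise-linear tax schedule, not the paper's analytical model), a depreciation schedule that minimizes the present value of taxes can be computed numerically:

    import numpy as np
    from scipy.optimize import minimize

    T, total_depr, beta = 5, 100.0, 0.95               # periods, depreciable base, discount
    profit = np.array([60.0, 55.0, 50.0, 45.0, 40.0])  # assumed profit before depreciation

    def tax(income):
        """Assumed progressive (convex, piecewise-linear) tax schedule."""
        income = max(income, 0.0)
        return 0.20 * min(income, 30.0) + 0.40 * max(income - 30.0, 0.0)

    def pv_taxes(d):
        return sum(beta**t * tax(profit[t] - d[t]) for t in range(T))

    res = minimize(pv_taxes, x0=np.full(T, total_depr / T),
                   bounds=[(0.0, None)] * T,
                   constraints=[{"type": "eq",
                                 "fun": lambda d: d.sum() - total_depr}])
    print(np.round(res.x, 2))   # depreciation charges for this toy setup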

  17. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Son, Han Seong; Song, Deok Yong [ENESYS, Taejon (Korea, Republic of); Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon [Korea Institute of Nuclear Safety, Taejon (Korea, Republic of)

    2006-07-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will help to promote a safety culture as well as to gain public acceptance of the nuclear power industry. FADAS (Following Accident Dose Assessment System), which is part of the Computerized Advisory System for a Radiological Emergency (CARE) at KINS, is used for protection against nuclear accidents. In order to make FADAS more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface system between FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.

  18. Development of Coupled Interface System between the FADAS Code and a Source-term Evaluation Code XSOR for CANDU Reactors

    International Nuclear Information System (INIS)

    Son, Han Seong; Song, Deok Yong; Kim, Ma Woong; Shin, Hyeong Ki; Lee, Sang Kyu; Kim, Hyun Koon

    2006-01-01

    An accident prevention system is essential to the industrial security of the nuclear industry. Thus, a more effective accident prevention system will help to promote a safety culture as well as to gain public acceptance of the nuclear power industry. FADAS (Following Accident Dose Assessment System), which is part of the Computerized Advisory System for a Radiological Emergency (CARE) at KINS, is used for protection against nuclear accidents. In order to make FADAS more effective for CANDU reactors, it is necessary to develop various accident scenarios and a reliable database of source terms. This study introduces the construction of a coupled interface system between FADAS and the source-term evaluation code, aimed at improving the applicability of the CANDU Integrated Safety Analysis System (CISAS) for CANDU reactors.

  19. Analysis of the KUCA MEU experiments using the ANL code system

    Energy Technology Data Exchange (ETDEWEB)

    Shiroya, S.; Hayashi, M.; Kanda, K.; Shibata, T.; Woodruff, W.L.; Matos, J.E.

    1982-01-01

    This paper provides some preliminary results of the analysis of the KUCA critical experiments using the ANL code system. Since this system was employed in the earlier neutronics calculations for the KUHFR, it is important to assess its capabilities for the KUHFR, which has a unique core configuration that is difficult to model precisely with current diffusion theory codes. This paper also provides some results from a finite-element diffusion code (2D-FEM-KUR), developed in a cooperative research program between KURRI and JAERI, which provides the capability to model a complex core configuration such as that of the KUHFR. Using the same group constants generated by the EPRI-CELL code, the results of the 2D-FEM-KUR code are compared with those of the finite-difference diffusion code DIF3D (2-D), which is mainly employed in this analysis.
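
    For readers unfamiliar with such codes, a minimal sketch of the kind of calculation they perform, a one-group, one-dimensional finite-difference diffusion eigenvalue solve with power iteration, is given below; the cross sections are arbitrary illustrative values, not KUHFR data, and the sketch is unrelated to the actual ANL or KUR code systems.

    import numpy as np

    N, L = 50, 100.0                       # interior mesh points, slab width [cm]
    dx = L / (N + 1)
    D, sig_a, nu_sig_f = 1.2, 0.03, 0.035  # illustrative one-group constants

    # Loss operator: -D d^2/dx^2 + sig_a with zero-flux boundaries, finite differences.
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = 2.0 * D / dx**2 + sig_a
        if i > 0:
            A[i, i - 1] = -D / dx**2
        if i < N - 1:
            A[i, i + 1] = -D / dx**2

    phi, k = np.ones(N), 1.0
    for _ in range(200):                   # power iteration for the eigenvalue k-eff
        phi_new = np.linalg.solve(A, nu_sig_f * phi / k)
        k *= phi_new.sum() / phi.sum()
        phi = phi_new
    print(f"k-eff ~ {k:.4f}")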

  20. Construction and performance analysis of variable-weight optical orthogonal codes for asynchronous OCDMA systems

    Science.gov (United States)

    Li, Chuan-qi; Yang, Meng-jie; Zhang, Xiu-rong; Chen, Mei-juan; He, Dong-dong; Fan, Qing-bin

    2014-07-01

    A construction scheme for variable-weight optical orthogonal codes (VW-OOCs) for asynchronous optical code division multiple access (OCDMA) systems is proposed. The code family can be obtained by programming in Matlab, given the code weights and the corresponding capacity. The bit error rate (BER) formula is derived by taking into account the effects of shot noise, avalanche photodiode (APD) bulk and surface leakage currents, and thermal noise. The OCDMA system with the VW-OOCs is designed and improved. The study shows that the VW-OOCs have excellent BER performance: whether or not they come from the same code family, codes with larger weight have lower BER than the other codes under the same conditions. Simulation results are consistent with the theoretical BER analysis, and ideal eye diagrams are obtained with the optical hard limiter.
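
    The correlation constraints that such a construction must satisfy can be illustrated with a small greedy search (a simplified sketch in place of the paper's Matlab construction; the code length, weights, and correlation bound below are assumed example values):

    from itertools import combinations

    def correlation_ok(a, b, n, lam):
        """Cyclic correlation of chip-position sets a, b stays <= lam for every
        nonzero shift (autocorrelation) or every shift (cross-correlation)."""
        shifts = range(1, n) if a is b else range(n)
        return all(len(a & {(p + s) % n for p in b}) <= lam for s in shifts)

    def build_vw_ooc(n, weights, lam=1):
        """Greedy search: one codeword per requested weight, code length n."""
        code = []
        for w in weights:
            for cand in combinations(range(n), w):
                cand = set(cand)
                if correlation_ok(cand, cand, n, lam) and \
                   all(correlation_ok(cand, c, n, lam) for c in code):
                    code.append(cand)
                    break
        return code

    print(build_vw_ooc(n=31, weights=[4, 3, 3]))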