WorldWideScience

Sample records for integration optimization code

  1. Numerical computation of molecular integrals via optimized (vectorized) FORTRAN code

    International Nuclear Information System (INIS)

    Scott, T.C.; Grant, I.P.; Saunders, V.R.

    1997-01-01

    The calculation of molecular properties based on quantum mechanics is an area of fundamental research whose horizons have always been determined by the power of state-of-the-art computers. A computational bottleneck is the numerical calculation of the required molecular integrals to sufficient precision. Herein, we present a method for the rapid numerical evaluation of molecular integrals using optimized FORTRAN code generated by Maple. The method is based on the exploitation of common intermediates and the optimization can be adjusted to both serial and vectorized computations. (orig.)
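
    The common-intermediate idea generalizes beyond Maple-generated FORTRAN. Below is a minimal Python sketch with invented quantities (not the authors' actual molecular integrals): two expressions sharing expensive subexpressions are evaluated with each intermediate computed once.

```python
import math

# Hypothetical example: two integral-like quantities that share expensive
# intermediates (not the authors' actual molecular integrals).
def naive(a, b):
    i1 = math.exp(-a * b) * math.sqrt(a + b)
    i2 = math.exp(-a * b) / math.sqrt(a + b)
    return i1, i2

def with_common_intermediates(a, b):
    e = math.exp(-a * b)   # computed once instead of twice
    s = math.sqrt(a + b)   # computed once instead of twice
    return e * s, e / s

# Same floating-point operations in the same order, so results agree exactly.
```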

  2. Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure

    Energy Technology Data Exchange (ETDEWEB)

    Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J. [Delft University of Technology, NL-2629 JB Delft (Netherlands)

    1998-07-01

    An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, a 2 MWth swimming-pool-type research reactor. As a black-box evaluator, the 3-D nodal code SILWER, which until now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is part of PSI's ELCOS package and features optional additional thermal-hydraulic, control-rod and xenon-poisoning calculations. This allows fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the input and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)
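
    The black-box integration pattern described above can be sketched as a heuristic search loop that treats the evaluator as an opaque function. Everything below is a stand-in: `evaluate` is a toy objective, not SILWER, and the design encoding is invented.

```python
import random

def evaluate(design):
    """Stand-in for the black-box evaluator (SILWER in the paper): maps a
    candidate core loading to an objective value. Toy objective, maximal
    when every position holds fuel type 2."""
    return -sum((x - 2) ** 2 for x in design)

def heuristic_search(n_positions=6, n_types=5, steps=200, seed=0):
    rng = random.Random(seed)
    best = [rng.randrange(n_types) for _ in range(n_positions)]
    best_val = evaluate(best)
    for _ in range(steps):
        cand = list(best)
        cand[rng.randrange(n_positions)] = rng.randrange(n_types)  # local move
        val = evaluate(cand)            # one black-box call per candidate
        if val > best_val:              # keep improvements only
            best, best_val = cand, val
    return best, best_val
```

In the paper the evaluator call is a run of SILWER driven through automatically generated input files; here it is a direct function call.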

  4. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
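
    The two-design-choice cost scan can be sketched as follows; the cost terms and numbers are invented stand-ins for the code's actual cost factors.

```python
# Invented cost model over the two design choices named in the abstract:
# injector voltage rise time and ferrite-core aspect ratio.
def system_cost(rise_time_ns, aspect_ratio):
    pulsed_power = 50.0 / rise_time_ns   # faster rise -> costlier pulsed power
    ferrite = 10.0 * aspect_ratio ** 2   # larger cores -> more raw material
    mechanical = 5.0 * rise_time_ns      # slower rise -> longer accelerator
    return pulsed_power + ferrite + mechanical

# Scan the two choices and pick the cheapest combination, as the code's
# cost-surface plots allow one to do by eye.
grid = [(rt, ar) for rt in (5, 10, 20, 40) for ar in (1.0, 1.5, 2.0)]
best = min(grid, key=lambda p: system_cost(*p))
```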

  5. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes whose cyclic component codes are realized as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  6. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  7. Progress on DART code optimization

    International Nuclear Information System (INIS)

    Taboada, Horacio; Solis, Diego; Rest, Jeffrey

    1999-01-01

    This work describes the progress made on the design and development of a new optimized version of the DART code (DART-P), a mechanistic computer model for the performance calculation and assessment of aluminum dispersion fuel. It is part of a collaboration agreement between CNEA and ANL in the area of Low Enriched Uranium Advanced Fuels, carried out under the Implementation Arrangement for Technical Exchange and Cooperation in the Area of Peaceful Uses of Nuclear Energy, signed on October 16, 1997 between the US DOE and the National Atomic Energy Commission of the Argentine Republic. DART optimization is a biannual program, operative since February 8, 1999, with the following goals: 1. Design and develop a new DART calculation kernel for implementation within a parallel processing architecture. 2. Design and develop new user-friendly I/O routines to be resident on a Personal Computer (PC)/Workstation (WS) platform. 2.1. The new input interface will be designed and developed as a visual interface, able to guide the user in constructing the problem to be analyzed with the aid of a new database (described in item 3, below); it will include input-data checks in order to avoid corrupted input data. 2.2. The new output interface will be designed and developed with graphical tools able to translate numeric output into 'on line' graphic information. 3. Design and develop a new irradiated-materials database, resident on the PC/WS platform, to facilitate the analysis of the behavior of different fuel and meat compositions with DART-P. Currently, a different version of DART is used for oxide, silicide, and advanced alloy fuels. 4. Develop rigorous general inspection algorithms in order to provide valuable DART-P benchmarks. 5. Design and develop new models, such as superplasticity, elastoplastic feedback, and improved models for the calculation of fuel deformation and the evolution of the fuel microstructure for

  8. Some optimizations of the animal code

    International Nuclear Information System (INIS)

    Fletcher, W.T.

    1975-01-01

    Optimizing techniques were performed on a version of the ANIMAL code (MALAD1B) at the source-code (FORTRAN) level. Sample optimizing techniques and operations used in MALADOP--the optimized version of the code--are presented, along with a critique of some standard CDC 7600 optimizing techniques. The statistical analysis of total CPU time required for MALADOP and MALAD1B shows a run-time saving of 174 msec (almost 3 percent) in the code MALADOP during one time step

  9. Scaling Optimization of the SIESTA MHD Code

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan

    2013-10-01

    SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  10. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  11. Space and Terrestrial Power System Integration Optimization Code BRMAPS for Gas Turbine Space Power Plants With Nuclear Reactor Heat Sources

    Science.gov (United States)

    Juhasz, Albert J.

    2007-01-01

    In view of the difficult times the US and global economies are experiencing today, funds for the development of advanced fission reactor nuclear power systems for space propulsion and planetary surface applications are currently not available. However, according to the Energy Policy Act of 2005, the U.S. needs to invest in developing fission reactor technology for ground-based terrestrial power plants. Such plants would make a significant contribution toward a drastic reduction of worldwide greenhouse gas emissions and associated global warming. To accomplish this goal, the Next Generation Nuclear Plant (NGNP) Project has been established by DOE under the Generation IV Nuclear Systems Initiative. Idaho National Laboratory (INL) was designated as the lead in the development of VHTR (Very High Temperature Reactor) and HTGR (High Temperature Gas Reactor) technology to be integrated with MMW (multi-megawatt) helium gas-turbine-driven AC electric generators. The advantages of transmitting power in high-voltage DC form over large distances are also explored in the seminar lecture series. As an attractive alternate heat source, the Liquid Fluoride Reactor (LFR), pioneered at ORNL (Oak Ridge National Laboratory) in the mid-1960s, would offer much higher energy yields than current nuclear plants through an inherently safe energy conversion scheme based on the Thorium --> U233 fuel cycle and a fission process with a negative temperature coefficient of reactivity. The power plants are to be sized to meet electric power demand during peak periods and to provide thermal energy for hydrogen (H2) production during "off peak" periods. This approach will both supply electric power from environmentally clean nuclear heat, which does not generate greenhouse gases, and provide a clean fuel, H2, for the future, when, due to increased global demand and the decline in discovering new deposits, our supply of liquid fossil fuels will have been used up. 
This is

  12. CBP Phase I Code Integration

    International Nuclear Information System (INIS)

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-01-01

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown and Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown and Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface
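
    The DLL-linkage idea above can be sketched with Python's ctypes, using the standard C math library as a stand-in for a wrapped partner code; the actual CBP interface is GoldSim-specific, so everything here is illustrative only.

```python
import ctypes
import ctypes.util

# Load a shared library and call into it across the language boundary,
# as a DLL interface does; libm stands in for a wrapped partner code.
libname = ctypes.util.find_library("m") or "libm.so.6"
libm = ctypes.CDLL(libname)
libm.sqrt.restype = ctypes.c_double     # declare the C signature
libm.sqrt.argtypes = [ctypes.c_double]

result = libm.sqrt(2.0)   # call the external code like a local function
```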

  14. Optimal Codes for the Burst Erasure Channel

    Science.gov (United States)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
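
    A minimal sketch of block interleaving with a Single Parity Check component code (the simplest MDS example named above): interleaving depth I spreads a contiguous burst of up to I erasures so each codeword sees at most one, which the parity constraint can repair. Bit values and sizes below are invented.

```python
def spc_encode(data_bits):
    """Single Parity Check codeword: data bits plus one even-parity bit."""
    return data_bits + [sum(data_bits) % 2]

def interleave(codewords):
    """Transmit column-wise so a contiguous burst spreads across codewords."""
    n = len(codewords[0])
    return [cw[i] for i in range(n) for cw in codewords]

def recover(received, n_codewords, n):
    """De-interleave (None marks an erasure), then repair at most one
    erasure per codeword using the even-parity constraint."""
    cws = [[received[i * n_codewords + j] for i in range(n)]
           for j in range(n_codewords)]
    for cw in cws:
        erased = [i for i, b in enumerate(cw) if b is None]
        if len(erased) == 1:
            cw[erased[0]] = sum(b for b in cw if b is not None) % 2
    return cws
```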
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure

  15. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m-sequence as a case study. Drawing on coding theory, we introduce jamming methods and use MATLAB simulations of the interference effect and its probability model to consolidate them. Based on the length of decoding time the adversary must spend, we derive an optimal formula and optimal coefficients via machine learning, yielding a new optimal interference code. First, in the recognition phase, the study judges the effect of interference by simulating how long the laser seeker needs to decode. Next, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve interference performance, the model is simulated in MATLAB. We determine the least number of pulse intervals that must be received, from which the precise interval structure of the laser pointer's m-sequence encoding can be concluded. The shortest spacing is found with the greatest-common-divisor method. Combining this with the coding regularity identified earlier, the pulse intervals of the received pseudo-random code are reconstructed. Finally, the time period of the laser interference can be controlled, the optimal interference code obtained, and the probability of successful interference increased.
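
    The greatest-common-divisor step mentioned above can be sketched directly; the pulse arrival times below are invented for illustration.

```python
from functools import reduce
from math import gcd

# Invented arrival times of received pulses; the differences are multiples
# of an underlying chip interval that the jammer wants to recover.
arrivals = [0, 12, 30, 72, 114]
intervals = [b - a for a, b in zip(arrivals, arrivals[1:])]  # 12, 18, 42, 42
base_interval = reduce(gcd, intervals)   # shortest common spacing
```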

  16. The Fireball integrated code package

    Energy Technology Data Exchange (ETDEWEB)

    Dobranich, D.; Powers, D.A.; Harper, F.T.

    1997-07-01

    Many deep-space satellites contain a plutonium heat source. An explosion, during launch, of a rocket carrying such a satellite offers the potential for the release of some of the plutonium. The fireball following such an explosion exposes any released plutonium to a high-temperature chemically-reactive environment. Vaporization, condensation, and agglomeration processes can alter the distribution of plutonium-bearing particles. The Fireball code package simulates the integrated response of the physical and chemical processes occurring in a fireball and the effect these processes have on the plutonium-bearing particle distribution. This integrated treatment of multiple phenomena represents a significant improvement in the state of the art for fireball simulations. Preliminary simulations of launch-second scenarios indicate: (1) most plutonium vaporization occurs within the first second of the fireball; (2) large non-aerosol-sized particles contribute very little to plutonium vapor production; (3) vaporization and both homogeneous and heterogeneous condensation occur simultaneously; (4) homogeneous condensation transports plutonium down to the smallest-particle sizes; (5) heterogeneous condensation precludes homogeneous condensation if sufficient condensation sites are available; and (6) agglomeration produces larger-sized particles but slows rapidly as the fireball grows.

  17. Optimal patch code design via device characterization

    Science.gov (United States)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch code" has been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort and decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
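
    The idea of distributing code levels in measurement space (L*) rather than in device space can be sketched as follows; the device response curve is an invented monotone stand-in, not a real printer profile.

```python
# Invented monotone device model: digital input level -> measured L*.
def device_L(level):
    return 100.0 * (level / 255.0) ** 0.6

def pick_levels(n):
    """Place n code levels evenly in L* (the space the spectrophotometer
    measures), then map them back to digital values, instead of spacing
    them evenly in device (CMYK-like) space."""
    targets = [i * 100.0 / (n - 1) for i in range(n)]
    # invert the monotone curve by a coarse scan over the 8-bit range
    return [min(range(256), key=lambda v: abs(device_L(v) - t))
            for t in targets]
```

Evenly spaced digital levels would cluster in L* where the curve is flat; spacing the targets in L* keeps the measured separations, and hence the decoding margins, roughly uniform.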

  18. Optimized reversible binary-coded decimal adders

    DEFF Research Database (Denmark)

    Thomsen, Michael Kirkedal; Glück, Robert

    2008-01-01

    Babu and Chowdhury [H.M.H. Babu, A.R. Chowdhury, Design of a compact reversible binary coded decimal adder circuit, Journal of Systems Architecture 52 (5) (2006) 272-282] recently proposed, in this journal, a reversible adder for binary-coded decimals. This paper corrects and optimizes their design. The optimized 1-decimal BCD full-adder, a 13 × 13 reversible logic circuit, is faster, and has lower circuit cost and fewer garbage bits. It can be used to build a fast reversible m-decimal BCD full-adder that has a delay of only m + 17 low-power reversible CMOS gates. For a 32-decimal (128-bit) ... Keywords: Reversible logic circuit; Full-adder; Half-adder; Parallel adder; Binary-coded decimal; Application of reversible logic synthesis

  19. Grid Code Requirements for Wind Power Integration

    DEFF Research Database (Denmark)

    Wu, Qiuwei

    2018-01-01

    This chapter reviews the grid code requirements for integration of wind power plants (WPPs). The grid codes reviewed are from the UK, Ireland, Germany, Denmark, Spain, Sweden, the USA, and Canada. Transmission system operators (TSOs) around the world have specified requirements for WPPs under...

  20. Overview of Grid Codes for Photovoltaic Integration

    DEFF Research Database (Denmark)

    Zheng, Qianwei; Li, Jiaming; Ai, Xiaomeng

    2017-01-01

    The increasing number of grid-connected photovoltaic (PV) power stations might threaten the safety and stability of the power system. Therefore, grid codes are developed for PV power stations to ensure the security of PV-integrated power systems. In this paper, the requirements for PV power integration in different grid codes are first investigated. On this basis, directions for future grid codes are summarized. Finally, several evaluation indices are proposed to quantify grid code compliance so that system operators can validate all these requirements by simulation.

  2. MINET [momentum integral network] code documentation

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.; Nepsee, T.C.; Guppy, J.G.

    1989-12-01

    The MINET computer code, developed for the transient analysis of fluid flow and heat transfer, is documented in this four-part reference. In Part 1, the MINET models, which are based on a momentum integral network method, are described. The various aspects of utilizing the MINET code are discussed in Part 2, The User's Manual. The third part is a code description, detailing the basic code structure and the various subroutines and functions that make up MINET. In Part 4, example input decks, as well as recent validation studies and applications of MINET are summarized. 32 refs., 36 figs., 47 tabs

  3. Optimizing Extender Code for NCSX Analyses

    International Nuclear Information System (INIS)

    Richman, M.; Ethier, S.; Pomphrey, N.

    2008-01-01

    Extender is a parallel C++ code for calculating the magnetic field in the vacuum region of a stellarator. The code was optimized for speed and augmented with tools to maintain a specialized NetCDF database. Two parallel algorithms were examined. An even-block work-distribution scheme was comparable in performance to a master-slave scheme. Large speedup factors were achieved by representing the plasma surface with a spline rather than Fourier series. The accuracy of this representation and the resulting calculations relied on the density of the spline mesh. The Fortran 90 module db access was written to make it easy to store Extender output in a manageable database. New or updated data can be added to existing databases. A generalized PBS job script handles the generation of a database from scratch
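
    The representation trade-off above can be sketched with a simpler stand-in: a truncated Fourier series evaluated directly versus a dense precomputed table with linear interpolation, whose accuracy depends on the mesh density (the real code uses splines over a plasma surface, so this is illustrative only).

```python
import math

# Truncated Fourier series standing in for a field on a surface.
def fourier_surface(theta, n_modes=50):
    return sum(math.cos(m * theta) / (m * m) for m in range(1, n_modes + 1))

def make_table(n_mesh):
    """Precompute the series on a uniform mesh over [0, 2*pi]."""
    return [fourier_surface(2 * math.pi * i / n_mesh)
            for i in range(n_mesh + 1)]

def interp(table, theta):
    """Linear interpolation in the table; error shrinks with mesh density,
    while each lookup avoids re-summing all the modes."""
    n = len(table) - 1
    x = (theta % (2 * math.pi)) / (2 * math.pi) * n
    i = min(int(x), n - 1)
    frac = x - i
    return (1 - frac) * table[i] + frac * table[i + 1]
```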

  4. Code Differentiation for Hydrodynamic Model Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Henninger, R.J.; Maudlin, P.J.

    1999-06-27

    Use of a hydrodynamics code for experimental data fitting purposes (an optimization problem) requires information about how a computed result changes when the model parameters change. These so-called sensitivities provide the gradient that determines the search direction for modifying the parameters to find an optimal result. Here, the authors apply code-based automatic differentiation (AD) techniques in the forward and adjoint modes to two problems with 12 parameters to obtain these gradients, and compare the computational efficiency and accuracy of the various methods. They fit the pressure trace from a one-dimensional flyer-plate experiment and examine the accuracy for a two-dimensional jet-formation problem. For the flyer-plate experiment, the adjoint mode requires similar or less computer time than the forward methods. Additional parameters will not change the adjoint-mode run time appreciably, which is a distinct advantage for this method. Obtaining "accurate" sensitivities for the jet problem parameters remains problematic.
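
    A minimal forward-mode AD sketch (dual numbers) illustrates the sensitivity idea: one seeded evaluation per parameter yields one gradient component, which is why the adjoint (reverse) mode, whose cost is roughly independent of the parameter count, wins as parameters grow. The model below is a toy, not the hydrodynamics code.

```python
class Dual:
    """Forward-mode AD value: holds f and its derivative df along one
    seed direction."""
    def __init__(self, f, df=0.0):
        self.f, self.df = f, df
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f + o.f, self.df + o.df)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.f * o.f, self.df * o.f + self.f * o.df)  # product rule
    __rmul__ = __mul__

def model(p):
    """Toy stand-in for the simulation output, not the hydrodynamics code."""
    return p[0] * p[1] + 3.0 * p[0]

def grad_forward(p):
    """Forward mode: one seeded model evaluation per parameter, so the
    cost grows with the parameter count (the adjoint mode avoids this)."""
    grads = []
    for i in range(len(p)):
        seeded = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(p)]
        grads.append(model(seeded).df)
    return grads
```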

  5. Parallel processing of structural integrity analysis codes

    International Nuclear Information System (INIS)

    Swami Prasad, P.; Dutta, B.K.; Kushwaha, H.S.

    1996-01-01

    Structural integrity analysis plays an important role in assessing and demonstrating the safety of nuclear reactor components. This analysis is performed using analytical tools such as the Finite Element Method (FEM) with the help of digital computers. The complexity of the problems involved in nuclear engineering demands high-speed computation facilities to obtain solutions in a reasonable amount of time. Parallel processing systems such as ANUPAM provide an efficient platform for realising such high-speed computation. The development and implementation of software on parallel processing systems is an interesting and challenging task. The data and algorithm structure of the codes plays an important role in exploiting the capabilities of the parallel processing system. Structural analysis codes based on FEM can be divided into two categories with respect to their implementation on parallel processing systems. Codes in the first category, such as those used for harmonic analysis and mechanistic fuel performance codes, do not require parallelisation of individual modules. Codes in the second category, such as conventional FEM codes, require parallelisation of individual modules; here, parallelisation of the equation solution module poses major difficulties. Different solution schemes such as the domain decomposition method (DDM), a parallel active column solver and a substructuring method are currently used on parallel processing systems. Two codes, FAIR and TABS, belonging to each of these categories, have been implemented on ANUPAM. The implementation details of these codes and the performance of different equation solvers are highlighted. (author). 5 refs., 12 figs., 1 tab

  6. Development of ADINA-J-integral code

    International Nuclear Information System (INIS)

    Kurihara, Ryoichi

    1988-07-01

    A general purpose finite element program ADINA (Automatic Dynamic Incremental Nonlinear Analysis), which was developed by Bathe et al., was revised to be able to calculate the J- and J-integral. This report introduced the numerical method to add this capability to the code, and the evaluation of the revised ADINA-J code by using a few of examples of the J estimation model, i.e. a compact tension specimen, a center cracked panel subjected to dynamic load, and a thick shell cylinder having inner axial crack subjected to thermal load. The evaluation testified the function of the revised code. (author)

  7. Status of the ASTEC integral code

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Jacq, F.; Allelein, H.J.

    2000-01-01

    The ASTEC (Accident Source Term Evaluation Code) integrated code has been developed since 1997 in close collaboration between IPSN and GRS to predict an entire LWR severe accident sequence from the initiating event up to Fission Product (FP) release out of the containment. The applications of such a code are source term determination studies, scenario evaluations, accident management studies and Probabilistic Safety Assessment level 2 (PSA-2) studies. The version V0 of ASTEC is based on the RCS modules of the ESCADRE integrated code (IPSN) and on the upgraded RALOC and FIPLOC codes (GRS) for containment thermalhydraulics and aerosol behaviour. The latest version V0.2 includes the general feedback from the overall validation performed in 1998 (25 separate-effect experiments, PHEBUS.FP FPT1 integrated experiment), some modelling improvements (e.g. silver-iodine reactions in the containment sump), and the implementation of the main safety systems for Severe Accident Management. Several reactor applications are under way on French and German PWRs, and on VVER-1000, all with a multi-compartment configuration of the containment. The total IPSN-GRS manpower involved in the ASTEC project is currently about 20 person-years per year. The main evolution of the next version V1, foreseen for the end of 2001, concerns the integration of the front-end phase and the improvement of the in-vessel late-phase degradation modelling. (author)

  8. Sensitivity Analysis for Design Optimization Integrated Software Tools, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of this proposed project is to provide a new set of sensitivity analysis theory and codes, the Sensitivity Analysis for Design Optimization Integrated...

  9. Integrated Multidisciplinary Optimization Objects

    Science.gov (United States)

    Alston, Katherine

    2014-01-01

    OpenMDAO is an open-source MDAO framework. It is used to develop an integrated analysis and design environment for engineering challenges. This Phase II project integrated additional modules and design tools into OpenMDAO to perform discipline-specific analysis across multiple flight regimes at varying levels of fidelity. It also showcased a refined system architecture that allows the system to be less customized to a specific configuration (i.e., system and configuration separation). By delivering a capable and validated MDAO system along with a set of example applications to be used as a template for future users, this work greatly expands NASA's high-fidelity, physics-based MDAO capabilities and enables the design of revolutionary vehicles in a cost-effective manner. This proposed work complements M4 Engineering's expertise in developing modeling and simulation toolsets that solve relevant subsonic, supersonic, and hypersonic demonstration applications.

  10. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  11. Shadowfax: Moving mesh hydrodynamical integration code

    Science.gov (United States)

    Vandenbroucke, Bert

    2016-05-01

    Shadowfax simulates galaxy evolution. Written in object-oriented modular C++, it evolves a mixture of gas, subject to the laws of hydrodynamics and gravity, and any collisionless fluid only subject to gravity, such as cold dark matter or stars. For the hydrodynamical integration, it makes use of a (co-) moving Lagrangian mesh. The code has a 2D and 3D version, contains utility programs to generate initial conditions and visualize simulation snapshots, and its input/output is compatible with a number of other simulation codes, e.g. Gadget2 (ascl:0003.001) and GIZMO (ascl:1410.003).

  12. Data exchange between zero dimensional code and physics platform in the CFETR integrated system code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Guoliang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Shi, Nan [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Zhou, Yifu; Mao, Shifeng [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Jian, Xiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronics Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Jiale [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Liu, Li; Chan, Vincent [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Ye, Minyou, E-mail: yemy@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China)

    2016-11-01

    Highlights: • The workflow of the zero dimensional code and the multi-dimension physics platform of the CFETR integrated system code is introduced. • The iteration process among the codes in the physics platform is described. • The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and the justification of performance parameters, is presented. - Abstract: The China Fusion Engineering Test Reactor (CFETR) integrated system code contains three parts: a zero dimensional code, a physics platform and an engineering platform. We use the zero dimensional code to identify a set of preliminary physics and engineering parameters for CFETR, which is used as input to initiate multi-dimension studies using the physics and engineering platforms for design, verification and validation. Effective data exchange between the zero dimensional code and the physics platform is critical for the optimization of the CFETR design. For example, in evaluating the impact of impurity radiation on core performance, an open field line code is used to calculate the impurity transport from the first-wall boundary to the pedestal. The impurity densities at the pedestal are used as boundary conditions in a transport code for calculating impurity transport in the core plasma and the impact of core radiation on core performance. Comparison of the results from the multi-dimensional study with those from the zero dimensional code is used to further refine the controlled radiation model. The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and the justification of performance parameters, is presented in this paper.

  13. Non-binary Hybrid LDPC Codes: Structure, Decoding and Optimization

    OpenAIRE

    Sassatelli, Lucile; Declercq, David

    2007-01-01

    In this paper, we propose to study and optimize a very general class of LDPC codes whose variable nodes belong to finite sets with different orders. We named this class of codes Hybrid LDPC codes. Although efficient optimization techniques exist for binary LDPC codes and more recently for non-binary LDPC codes, they both exhibit drawbacks due to different reasons. Our goal is to capitalize on the advantages of both families by building codes with binary (or small finite set order) and non-bin...

  14. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability...... level is minimized. Code calibration on a decision theoretical basis is also considered and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown....

  15. Optimal super dense coding over memory channels

    OpenAIRE

    Shadman, Zahra; Kampermann, Hermann; Macchiavello, Chiara; Bruß, Dagmar

    2011-01-01

    We study the super dense coding capacity in the presence of quantum channels with correlated noise. We investigate both the cases of unitary and non-unitary encoding. Pauli channels for arbitrary dimensions are treated explicitly. The super dense coding capacity for some special channels and resource states is derived for unitary encoding. We also provide an example of a memory channel where non-unitary encoding leads to an improvement in the super dense coding capacity.

  16. Efficient topology optimization in MATLAB using 88 lines of code

    DEFF Research Database (Denmark)

    Andreassen, Erik; Clausen, Anders; Schevenels, Mattias

    2011-01-01

    The paper presents an efficient 88 line MATLAB code for topology optimization. It has been developed using the 99 line code presented by Sigmund (Struct Multidisc Optim 21(2):120–127, 2001) as a starting point. The original code has been extended by a density filter, and a considerable improvemen...... of the basic code to include recent PDE-based and black-and-white projection filtering methods. The complete 88 line code is included as an appendix and can be downloaded from the web site www.topopt.dtu.dk....
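
The density filter mentioned in the abstract admits a compact sketch. The standalone Python version below is illustrative only (the paper's MATLAB code instead precomputes a sparse weight matrix H once and reuses it every iteration); it averages each element density over its neighbours within radius rmin, with linearly decaying weights:

```python
import math

def density_filter(x, nelx, nely, rmin):
    """Return filtered densities: each element becomes a weighted average
    of neighbouring elements within radius rmin, the weight decaying
    linearly with centre-to-centre distance."""
    xf = [[0.0] * nely for _ in range(nelx)]
    r = int(math.ceil(rmin)) - 1
    for i in range(nelx):
        for j in range(nely):
            num, den = 0.0, 0.0
            for k in range(max(i - r, 0), min(i + r + 1, nelx)):
                for l in range(max(j - r, 0), min(j + r + 1, nely)):
                    w = max(0.0, rmin - math.hypot(i - k, j - l))
                    num += w * x[k][l]
                    den += w
            xf[i][j] = num / den
    return xf
```

A uniform density field passes through unchanged, which is the basic sanity check for any such filter.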

  17. Using Peephole Optimization on Intermediate Code

    NARCIS (Netherlands)

    Tanenbaum, A.S.; van Staveren, H.; Stevenson, J.W.

    1982-01-01

    Many portable compilers generate an intermediate code that is subsequently translated into the target machine's assembly language. In this paper a stack-machine-based intermediate code suitable for algebraic languages (e.g., PASCAL, C, FORTRAN) and most byte-addressed mini- and microcomputers is
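
The idea of peephole optimization on a stack-machine intermediate code can be shown with a toy pass. The opcode names and patterns below are invented for illustration; they are not the actual EM instruction set the paper describes:

```python
# Toy peephole optimizer over a stack-machine intermediate code.
# Each pattern maps a short instruction window to a cheaper replacement.
PATTERNS = [
    (("PUSH 0", "ADD"), ()),              # x + 0 == x
    (("PUSH 1", "MUL"), ()),              # x * 1 == x
    (("NEG", "NEG"), ()),                 # double negation cancels
    (("PUSH 2", "MUL"), ("DUP", "ADD")),  # strength reduction: x*2 -> x+x
]

def peephole(code):
    """Repeatedly slide a small window over the code and rewrite any
    window that matches a known pattern, until a fixed point is reached."""
    changed = True
    while changed:
        changed = False
        for pat, repl in PATTERNS:
            n = len(pat)
            for i in range(len(code) - n + 1):
                if tuple(code[i:i + n]) == pat:
                    code[i:i + n] = list(repl)
                    changed = True
                    break
            if changed:
                break
    return code
```

Running the loop to a fixed point lets one rewrite expose another, e.g. removing a no-op can bring two `NEG`s together.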

  18. Integrated solar energy system optimization

    Science.gov (United States)

    Young, S. K.

    1982-11-01

    The computer program SYSOPT, intended as a tool for optimizing the subsystem sizing, performance, and economics of integrated wind and solar energy systems, is presented. The modular structure of the methodology additionally allows simulations when the solar subsystems are combined with conventional technologies, e.g., a utility grid. Hourly energy/mass flow balances are computed for interconnection points, yielding optimized sizing and time-dependent operation of various subsystems. The program requires meteorological data, such as insolation, diurnal and seasonal variations, and wind speed at the hub height of a wind turbine, all of which can be taken from simulations like the TRNSYS program. Examples are provided for optimization of a solar-powered (wind turbine and parabolic trough-Rankine generator) desalinization plant, and a design analysis for a solar powered greenhouse.

  19. Integrated burnup calculation code system SWAT

    International Nuclear Information System (INIS)

    Suyama, Kenya; Hirakawa, Naohiro; Iwasaki, Tomohiko.

    1997-11-01

    SWAT is an integrated burnup code system developed for the analysis of post irradiation examinations, transmutation of radioactive waste, and burnup credit problems. It enables analysis of burnup problems using a neutron spectrum that depends on the irradiation environment, by combining SRAC, the Japanese standard thermal reactor analysis code system, with ORIGEN2, a burnup code widely used all over the world. SWAT generates an effective cross section library based on the results of SRAC, and performs the burnup analysis with ORIGEN2 using that library. SRAC and ORIGEN2 can be called as external modules. SWAT has an original cross section library based on JENDL-3.2, and libraries of fission yield and decay data prepared from the JNDC FP Library, second version. Using these libraries, the user can use the latest data in SWAT calculations besides the effective cross sections prepared by SRAC. Users can also make an original ORIGEN2 library from the output file of SWAT. This report presents the concept and user's manual of SWAT. (author)
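
At its core, a burnup code such as ORIGEN2 integrates depletion equations of the form dN/dt = -(λ + σφ)N plus production terms. The single-nuclide sketch below is illustrative only, not SWAT's actual numerics; it compares a simple explicit-Euler integration against the analytic solution for pure loss by decay and neutron absorption:

```python
import math

def deplete_euler(n0, lam, sigma_phi, t, steps=100000):
    """Explicit-Euler integration of dN/dt = -(lambda + sigma*phi) * N,
    the simplest form of the depletion equation a burnup code solves."""
    n, dt = n0, t / steps
    k = lam + sigma_phi
    for _ in range(steps):
        n -= k * n * dt
    return n

def deplete_exact(n0, lam, sigma_phi, t):
    """Analytic solution N(t) = N0 * exp(-(lambda + sigma*phi) * t)."""
    return n0 * math.exp(-(lam + sigma_phi) * t)
```

Real codes solve a coupled system over hundreds of nuclides, which is why an accurate effective cross-section library (SWAT's contribution) matters more than the time integrator itself.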

  20. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with the conventional correlation method. The results indicate that the artifacts are reduced and the three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.
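
The MLEM iteration underlying such reconstructions has a compact form: x ← (x / Aᵀ1) ⊙ Aᵀ(y / Ax). The pure-Python sketch below uses a tiny hand-made system matrix for illustration; in the study the matrix comes from GATE Monte Carlo simulations:

```python
def mlem(A, y, iters=500):
    """Maximum-Likelihood Expectation-Maximization reconstruction:
    x <- x / sens * A^T (y / (A x)), where sens holds the column sums
    of the system matrix A (the per-voxel sensitivities)."""
    m, n = len(A), len(A[0])
    x = [1.0] * n  # flat positive initial estimate
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] * back[j] / sens[j] for j in range(n)]
    return x
```

For consistent noise-free data the iteration converges to the exact source intensities, which is easy to check on a toy two-voxel problem.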

  1. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-01-01

    and convert it into an optimized CUDA kernel with user directives in a configuration file for guiding the compiler. RTCUDA also allows transparent invocation of the most optimized external math libraries like cuSparse and cuBLAS enabling efficient design

  2. Optimization Specifications for CUDA Code Restructuring Tool

    KAUST Repository

    Khan, Ayaz

    2017-03-13

    In this work we have developed a restructuring software tool (RT-CUDA) following the proposed optimization specifications to bridge the gap between high-level languages and the machine-dependent CUDA environment. RT-CUDA takes a C program and converts it into an optimized CUDA kernel, with user directives in a configuration file guiding the compiler. RT-CUDA also allows transparent invocation of the most optimized external math libraries, such as cuSparse and cuBLAS, enabling efficient design of linear algebra solvers. We expect RT-CUDA to be needed by many KSA industries dealing with science and engineering simulation on massively parallel computers like NVIDIA GPUs.

  3. ARC Code TI: Optimal Alarm System Design and Implementation

    Data.gov (United States)

    National Aeronautics and Space Administration — An optimal alarm system can robustly predict a level-crossing event that is specified over a fixed prediction horizon. The code contained in this packages provides...

  4. VVER-440 loading patterns optimization using ATHENA code

    International Nuclear Information System (INIS)

    Katovsky, K.; Sustek, J.; Bajgl, J.; Cada, R.

    2009-01-01

    In this paper the Czech optimization state-of-the-art, new code system development goals and the OPAL optimization system are briefly mentioned. The algorithms, mathematics, present status and future developments of the ATHENA code are described. A calculation exercise for the Dukovany NPP cycles at increased power using ATHENA is presented, starting with the upcoming 24th cycle (303 FPD) and continuing with the 25th (322 FPD) and 26th (336 FPD); for all cycles K_R ≤ 1.54.

  5. Optimizing the ATLAS code with different profilers

    CERN Document Server

    Kama, S; The ATLAS collaboration

    2013-01-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...

  6. Optimization of Particle-in-Cell Codes on RISC Processors

    Science.gov (United States)

    Decyk, Viktor K.; Karmesin, Steve Roy; Boer, Aeint de; Liewer, Paulette C.

    1996-01-01

    General strategies are developed to optimize particle-in-cell codes written in Fortran for the RISC processors which are commonly used on massively parallel computers. These strategies include data reorganization to improve cache utilization and code reorganization to improve the efficiency of arithmetic pipelines.
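
One common form of such data reorganization is to periodically sort the particle arrays by cell index, so that successive gather and deposition operations touch contiguous grid memory. The sketch below is a minimal Python illustration of the idea; the details of the paper's Fortran implementation differ:

```python
def reorder_particles(positions, cell_size):
    """Sort particles by their grid-cell index so that neighbouring array
    accesses during field gather / charge deposition touch neighbouring
    grid cells, improving cache reuse (a standard PIC optimization)."""
    return sorted(positions, key=lambda x: int(x / cell_size))
```

In a production code the sort is done infrequently (particles drift slowly out of order), so its cost is amortized over many push steps.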

  7. Adaptive RD Optimized Hybrid Sound Coding

    NARCIS (Netherlands)

    Schijndel, N.H. van; Bensa, J.; Christensen, M.G.; Colomes, C.; Edler, B.; Heusdens, R.; Jensen, J.; Jensen, S.H.; Kleijn, W.B.; Kot, V.; Kövesi, B.; Lindblom, J.; Massaloux, D.; Niamut, O.A.; Nordén, F.; Plasberg, J.H.; Vafin, R.; Virette, D.; Wübbolt, O.

    2008-01-01

    Traditionally, sound codecs have been developed with a particular application in mind, their performance being optimized for specific types of input signals, such as speech or audio (music), and application constraints, such as low bit rate, high quality, or low delay. There is, however, an

  8. Italian electricity supply contracts optimization: ECO computer code

    International Nuclear Information System (INIS)

    Napoli, G.; Savelli, D.

    1993-01-01

    The ECO (Electrical Contract Optimization) code, written for Microsoft WINDOWS 3.1, can be run on a 286 PC with a minimum of RAM. It consists of four modules: one for the calculation of ENEL (Italian National Electricity Board) tariffs, one for contractual time-of-use tariff optimization, a table of tariff coefficients, and a module for monthly power consumption calculations based on annual load diagrams. The optimization code was developed by ENEA (Italian Agency for New Technology, Energy and the Environment) to help Italian industrial firms comply with new and complex national electricity supply contractual regulations and tariffs. In addition to helping industrial firms determine optimum contractual arrangements, the code also assists them in optimizing their choice of equipment and production cycles

  9. Recent developments in KTF. Code optimization and improved numerics

    International Nuclear Information System (INIS)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin

    2012-01-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and BWRs. It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. Some examples demonstrate the gain in code speed-up, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  10. Recent developments in KTF. Code optimization and improved numerics

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Javier; Avramova, Maria; Sanchez, Victor Hugo; Ivanov, Kostadin [Karlsruhe Institute of Technology (KIT) (Germany). Inst. for Neutron Physics and Reactor Technology (INR)

    2012-11-01

    The rapid increase of computer power in the last decade facilitated the development of high fidelity simulations in nuclear engineering, allowing a more realistic and accurate optimization as well as safety assessment of reactor cores and power plants compared to the legacy codes. Thermal hydraulic subchannel codes together with time dependent neutron transport codes are the options of choice for an accurate prediction of local safety parameters. Moreover, fast running codes with the best physical models are needed for high fidelity coupled thermal hydraulic / neutron kinetic solutions. Hence at KIT, different subchannel codes such as SUBCHANFLOW and KTF are being improved, validated and coupled with different neutron kinetics solutions. KTF is a subchannel code developed for best-estimate analysis of both Pressurized Water Reactors (PWR) and BWRs. It is based on the Pennsylvania State University (PSU) version of COBRA-TF (Coolant Boiling in Rod Arrays Two Fluids), named CTF. In this paper, the investigations devoted to the enhancement of the code numerics and informatics structure are presented and discussed. Some examples demonstrate the gain in code speed-up, and finally an outlook of further activities concentrating on code improvements is given. (orig.)

  11. Optimized iterative decoding method for TPC coded CPM

    Science.gov (United States)

    Ma, Yanmin; Lai, Penghui; Wang, Shilian; Xie, Shunqin; Zhang, Wei

    2018-05-01

    The Turbo Product Code (TPC) coded Continuous Phase Modulation (CPM) system (TPC-CPM) has been widely used in aeronautical telemetry and satellite communication. This paper investigates improvements and optimization of the TPC-CPM system. We first add an interleaver and deinterleaver to the TPC-CPM system, and then establish an iterative decoding scheme. However, the improved system has poor convergence. To overcome this issue, we use Extrinsic Information Transfer (EXIT) analysis to find the optimal factors for the system. Experiments show that our method effectively improves convergence performance.

  12. Software exorcism a handbook for debugging and optimizing legacy code

    CERN Document Server

    Blunden, Bill

    2013-01-01

    Software Exorcism: A Handbook for Debugging and Optimizing Legacy Code takes an unflinching, no-nonsense look at behavioral problems in the software engineering industry, shedding much-needed light on the social forces that make it difficult for programmers to do their job. Do you have a co-worker who perpetually writes bad code that you are forced to clean up? This is your book. While there are plenty of books on the market that cover debugging and short-term workarounds for bad code, Reverend Bill Blunden takes a revolutionary step beyond them by bringing our atten

  13. A Fast Optimization Method for General Binary Code Learning.

    Science.gov (United States)

    Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng

    2016-09-22

    Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interests in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely-used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both a supervised and an unsupervised hashing losses, together with the bits uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
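
The key ingredient of methods like DPLM is that each iteration admits an analytical discrete solution. In the simplest instance, minimizing ||b − v||² over b ∈ {−1, +1}ⁿ is solved exactly by taking elementwise signs. The toy function below shows only this closed-form projection step, not the full DPLM algorithm:

```python
def nearest_binary(v):
    """Analytical discrete solution of min ||b - v||^2 over b in {-1,+1}^n:
    the elementwise sign of v (ties broken towards +1). A closed-form
    projection of this kind is what makes each discrete iteration cheap."""
    return [1.0 if vi >= 0 else -1.0 for vi in v]
```

Because the per-iteration subproblem is solved exactly rather than by continuous relaxation, no quantization error accumulates across iterations.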

  14. IM (Integrity Management) software must show flexibility to local codes

    Energy Technology Data Exchange (ETDEWEB)

    Brors, Markus [ROSEN Technology and Research Center GmbH (Germany); Diggory, Ian [Macaw Engineering Ltd., Northumberland (United Kingdom)

    2009-07-01

    There are many internationally recognized codes and standards, such as API 1160 and ASME B31.8S, which help pipeline operators to manage and maintain the integrity of their pipeline networks. However, operators in many countries still use local codes that often reflect the history of pipeline developments in their region and are based on direct experience and research on their pipelines. As pipeline companies come under increasing regulatory and financial pressures to maintain the integrity of their networks, it is important that operators using regional codes are able to benchmark their integrity management schemes against these international standards. Any comprehensive Pipeline Integrity Management System (PIMS) software package should therefore not only incorporate industry standards for pipeline integrity assessment but also be capable of implementing regional codes for comparison purposes. This paper describes the challenges and benefits of incorporating one such set of regional pipeline standards into ROSEN Asset Integrity Management Software (ROAIMS). (author)

  15. Optimal and efficient decoding of concatenated quantum block codes

    International Nuclear Information System (INIS)

    Poulin, David

    2006-01-01

    We consider the problem of optimally decoding a quantum error correction code--that is, to find the optimal recovery procedure given the outcomes of partial ''check'' measurements on the system. In general, this problem is NP hard. However, we demonstrate that for concatenated block codes, the optimal decoding can be efficiently computed using a message-passing algorithm. We compare the performance of the message-passing algorithm to that of the widespread blockwise hard decoding technique. Our Monte Carlo results using the five-qubit and Steane's code on a depolarizing channel demonstrate significant advantages of the message-passing algorithms in two respects: (i) Optimal decoding increases by as much as 94% the error threshold below which the error correction procedure can be used to reliably send information over a noisy channel; and (ii) for noise levels below these thresholds, the probability of error after optimal decoding is suppressed at a significantly higher rate, leading to a substantial reduction of the error correction overhead

  16. Optimized Method for Generating and Acquiring GPS Gold Codes

    Directory of Open Access Journals (Sweden)

    Khaled Rouabah

    2015-01-01

    Full Text Available We propose a simpler and faster Gold codes generator, which can be efficiently initialized to any desired code, with a minimum delay. Its principle consists of generating only one sequence (code number 1 from which we can produce all the other different signal codes. This is realized by simply shifting this sequence by different delays that are judiciously determined by using the bicorrelation function characteristics. This is in contrast to the classical Linear Feedback Shift Register (LFSR based Gold codes generator that requires, in addition to the shift process, a significant number of logic XOR gates and a phase selector to change the code. The presence of all these logic XOR gates in the classical LFSR based Gold codes generator introduces additional delay into the generation and acquisition processes. In addition to its simplicity and its rapidity, the proposed architecture, due to the total absence of XOR gates, has fewer resources than the conventional Gold generator and can thus be produced at lower cost. The Digital Signal Processing (DSP implementations have shown that the proposed architecture presents a solution for acquiring Global Positioning System (GPS satellites signals optimally and in a parallel way.
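
For reference, the classical LFSR-based generator that the paper improves upon can be sketched as follows for the GPS C/A (Gold) codes: two degree-10 shift registers, G1 and G2, are clocked together, and each chip is G1's output XORed with two phase-selected taps of G2; the chosen tap pair selects the PRN (the sketch uses taps (2, 6), which select PRN 1):

```python
def ca_code(prn_taps=(2, 6), length=1023):
    """Classical LFSR-based GPS C/A (Gold) code generator.
    G1 feedback taps: stages 3, 10; G2 feedback taps: stages 2, 3, 6,
    8, 9, 10. Each output chip is G1's last stage XORed with the two
    G2 stages given by prn_taps (the phase selector)."""
    g1 = [1] * 10
    g2 = [1] * 10
    chips = []
    for _ in range(length):
        out = g1[9] ^ g2[prn_taps[0] - 1] ^ g2[prn_taps[1] - 1]
        chips.append(out)
        fb1 = g1[2] ^ g1[9]                                   # taps 3, 10
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]   # taps 2,3,6,8,9,10
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips
```

Changing the code requires changing the phase-selector taps (or equivalently many XOR gates in hardware), which is exactly the overhead the proposed shift-based generator avoids.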

  17. Optimization of the particle pusher in a diode simulation code

    International Nuclear Information System (INIS)

    Theimer, M.M.; Quintenz, J.P.

    1979-09-01

    The particle pusher in Sandia's particle-in-cell diode simulation code has been rewritten to reduce the required run time of a typical simulation. The resulting new version of the code has been found to run up to three times as fast as the original with comparable accuracy. The cost of this optimization was an increase in storage requirements of about 15%. The new version has also been written to run efficiently on a CRAY-1 computing system. Steps taken to effect this reduced run time are described. Various test cases are detailed

  18. Cooperative optimization and their application in LDPC codes

    Science.gov (United States)

    Chen, Ke; Rong, Jian; Zhong, Xiaochun

    2008-10-01

    Cooperative optimization is a new way of finding the global optima of complicated functions of many variables. The proposed algorithm is a class of message passing algorithms with solid theoretical foundations. It can achieve good coding gains over the sum-product algorithm for LDPC codes. For (6561, 4096) LDPC codes, the proposed algorithm achieves 2.0 dB gains over the sum-product algorithm at a BER of 4×10⁻⁷. The decoding complexity of the proposed algorithm is lower than that of the sum-product algorithm; furthermore, it can achieve a much lower error floor than the sum-product algorithm once Eb/N0 is higher than 1.8 dB.

  19. Integrated code development for studying laser driven plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Takabe, Hideaki; Nagatomo, Hideo; Sunahara, Atsusi; Ohnishi, Naofumi; Naruo, Syuji; Mima, Kunioki [Osaka Univ., Suita (Japan). Inst. of Laser Engineering

    1998-03-01

    Present status and plan for developing an integrated implosion code are briefly explained by focusing on motivation, numerical scheme and issues to be developed more. Highly nonlinear stage of Rayleigh-Taylor instability of ablation front by laser irradiation has been simulated so as to be compared with model experiments. Improvement in transport and rezoning/remapping algorithms in ILESTA code is described. (author)

  20. Fundamentals of an Optimal Multirate Subband Coding of Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    D. Kula

    2000-06-01

    A consistent theory of optimal subband coding of zero-mean wide-sense cyclostationary signals with N-periodic statistics is presented in this article. An M-channel orthonormal uniform filter bank employing N-periodic analysis and synthesis filters is used, and an average variance condition is applied to evaluate the output distortion. In three lemmas and a final theorem, the necessity of decorrelating the blocked subband signals and the requirement of a specific ordering of power spectral densities are proven.
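The record above concerns M-channel orthonormal filter banks with N-periodic filters; the sketch below illustrates only the simplest orthonormal special case, a two-channel Haar bank, and the perfect-reconstruction property that the distortion analysis starts from. It is an illustration of the concept, not the paper's construction.

```python
S = 2 ** -0.5  # 1/sqrt(2); makes the Haar filter pair orthonormal

def analysis(x):
    """Two-channel orthonormal (Haar) analysis: split x (even length)
    into decimated lowpass and highpass subbands."""
    lo = [(x[i] + x[i + 1]) * S for i in range(0, len(x), 2)]
    hi = [(x[i] - x[i + 1]) * S for i in range(0, len(x), 2)]
    return lo, hi

def synthesis(lo, hi):
    """Inverse transform: orthonormality gives perfect reconstruction."""
    x = []
    for l, h in zip(lo, hi):
        x += [(l + h) * S, (l - h) * S]
    return x

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
lo, hi = analysis(x)
y = synthesis(lo, hi)
print(max(abs(a - b) for a, b in zip(x, y)))  # ~0: perfect reconstruction
```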

  1. Iterative optimization of performance libraries by hierarchical division of codes

    International Nuclear Information System (INIS)

    Donadio, S.

    2007-09-01

    The increasing complexity of the hardware features incorporated in modern processors makes high-performance code generation very challenging. Library generators such as ATLAS, FFTW and SPIRAL overcome this issue by empirically searching the space of possible program versions for the one that performs best. This thesis explores a fully automatic solution for adapting a compute-intensive application to the target architecture. By mimicking the complex sequences of transformations useful for optimizing real codes, we show that generative programming is a practical tool for implementing a new hierarchical compilation approach to the generation of high-performance code, relying on the use of state-of-the-art compilers. As opposed to ATLAS, this approach is not application-dependent but can be applied to fairly generic loop structures. Our approach relies on the decomposition of the original loop nest into simpler kernels. These kernels are much simpler to optimize; furthermore, using such codes makes the performance trade-off problem much simpler to express and to solve. Finally, we propose a new approach to the generation of performance libraries based on this decomposition method. We show that our method generates high-performance libraries, in particular for BLAS. (author)
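The "empirically searching the space of program versions" idea that ATLAS-style generators and this thesis share can be sketched minimally: generate a blocked matrix-multiply kernel parameterized by tile size, time each candidate on the target machine, and keep the fastest. The kernel decomposition in the thesis is far more elaborate; this only illustrates the search loop.

```python
import time

def matmul_tiled(A, B, tile):
    """Blocked (tiled) matrix multiply; tile size is the tunable parameter."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C

def empirical_best_tile(n=64, candidates=(4, 8, 16, 32)):
    """Time each candidate tile size on this machine and keep the fastest."""
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i - j) for j in range(n)] for i in range(n)]
    timings = {}
    for t in candidates:
        t0 = time.perf_counter()
        matmul_tiled(A, B, t)
        timings[t] = time.perf_counter() - t0
    return min(timings, key=timings.get)

print("best tile size on this machine:", empirical_best_tile())
```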

  2. Integrated Multidisciplinary Optimization Objects, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to implement physics-based, multidisciplinary analysis and optimization objects that will be integrated into a Python, open-source framework...

  3. European Validation of the Integral Code ASTEC (EVITA)

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Neu, K.; Dorsselaere, J.P. Van

    2005-01-01

    The main objective of the European Validation of the Integral Code ASTEC (EVITA) project is to distribute the severe accident integral code ASTEC to European partners in order to apply the validation strategy issued from the VASA project (4th EC FWP). Partners evaluate the code's capabilities through validation on reference experiments and plant applications accounting for severe accident management measures, and compare the results with reference codes. The basis version V0 of ASTEC (Accident Source Term Evaluation Code), commonly developed and basically validated by GRS and IRSN, was made available in late 2000 for the EVITA partners on their individual platforms. Users' training was performed by IRSN and GRS. Code portability across different computers was verified. A continuously available 'hot line' was set up to assist EVITA code users. The current version, V1, was released to the EVITA partners at the end of June 2002. It allows simulation of the front-end phase through two new modules: - for the reactor coolant system, 2-phase simplified thermal hydraulics (5-equation approach) during both the front-end and core degradation phases; - for core degradation, based on the structure and main models of ICARE2 (IRSN's reference mechanistic code for core degradation) and on other simplified models. The next priorities are clearly identified: code consolidation to increase robustness, extension of all plant applications beyond vessel lower head failure with coupling to the fission product modules, and continuous improvement of users' tools. As EVITA has very successfully made the first step toward providing end-users (such as utilities, vendors and licensing authorities) with a well-validated European integral code for the simulation of severe accidents in NPPs, the EVITA partners strongly recommend continuing the validation, benchmarking and application of ASTEC. This work will continue in the Severe Accident Research Network (SARNET) in the 6th Framework Programme.

  4. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    Science.gov (United States)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties between intended and interfering subscribers significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase-induced intensity noise (PIIN). The performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytical and simulation analysis, with reference to the bit error rate (BER), signal-to-noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and provides better performance than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10⁻⁹, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.
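BER figures such as those quoted above are typically derived from the SNR via a Gaussian approximation; a form commonly used in SAC-OCDMA analyses is BER = ½ erfc(√(SNR/8)). The helper below sketches that relation as an assumption for illustration; the record's own noise model (MAI, PIIN) is what actually determines its SNR.

```python
import math

def ber_from_snr(snr_linear):
    """Gaussian-approximation BER often used in SAC-OCDMA analyses:
    BER = 0.5 * erfc(sqrt(SNR / 8)). SNR is linear, not dB."""
    return 0.5 * math.erfc(math.sqrt(snr_linear / 8.0))

def db_to_linear(db):
    return 10.0 ** (db / 10.0)

# BER falls steeply with SNR; under this model, BER ~ 1e-9 needs
# an SNR somewhere between 21 and 22 dB.
for snr_db in (15, 18, 21, 24):
    print(f"SNR = {snr_db} dB -> BER = {ber_from_snr(db_to_linear(snr_db)):.2e}")
```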

  5. Constellation labeling optimization for bit-interleaved coded APSK

    Science.gov (United States)

    Xiang, Xingyu; Mo, Zijian; Wang, Zhonghai; Pham, Khanh; Blasch, Erik; Chen, Genshe

    2016-05-01

    This paper investigates constellation and mapping optimization for amplitude phase shift keying (APSK) modulation, which is deployed in the Digital Video Broadcasting - Satellite - Second Generation (DVB-S2) and Digital Video Broadcasting - Satellite services to Handhelds (DVB-SH) broadcasting standards due to its power and spectral efficiency together with its robustness against nonlinear distortion. The mapping optimization is performed for 32-APSK according to combined cost functions related to Euclidean distance and mutual information. A binary switching algorithm and a modified version of it are used to minimize the cost function and the estimated error between the original and received data. The optimized constellation mapping is tested by combining it with DVB-S2 standard Low-Density Parity-Check (LDPC) codes in both Bit-Interleaved Coded Modulation (BICM) and BICM with iterative decoding (BICM-ID) systems. The simulation results validate the proposed constellation labeling optimization scheme, which yields better performance than the conventional 32-APSK constellation defined in the DVB-S2 standard.
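The binary switching idea (swap label pairs whenever a swap lowers a pairwise cost) can be sketched on a toy scale. The 4-QAM constellation and the surrogate cost below, Hamming distance between labels weighted by a Gaussian of the Euclidean distance, are illustrative stand-ins; the record's real cost for 32-APSK combines Euclidean distance and mutual information.

```python
import itertools, math

def labeling_cost(points, labels, sigma2=0.5):
    """Toy BICM-style cost: label pairs differing in many bits are
    penalized more the closer their constellation points are."""
    cost = 0.0
    for i, j in itertools.combinations(range(len(points)), 2):
        hamming = bin(labels[i] ^ labels[j]).count("1")
        d2 = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
        cost += hamming * math.exp(-d2 / (4 * sigma2))
    return cost

def binary_switching(points, labels):
    """Greedily swap label pairs while any swap lowers the cost."""
    labels = list(labels)
    current = labeling_cost(points, labels)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(labels)), 2):
            labels[i], labels[j] = labels[j], labels[i]
            trial = labeling_cost(points, labels)
            if trial < current - 1e-12:
                current, improved = trial, True
            else:
                labels[i], labels[j] = labels[j], labels[i]  # undo swap
    return labels, current

# 4-QAM corners with a natural (non-Gray) labeling; BSA finds a better map
qam = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
start = [0, 1, 2, 3]
best, cost = binary_switching(qam, start)
print(best, cost)
```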

  6. Development of code SFINEL (Spent fuel integrity evaluator)

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Soo; Min, Chin Young; Ohk, Young Kil; Yang, Yong Sik; Kim, Dong Ju; Kim, Nam Ku [Hanyang University, Seoul (Korea)

    1999-01-01

    The SFINEL code, an integrated computer program for predicting spent fuel rod integrity based on burn-up history and the major degradation mechanisms, has been developed through this project. The code can simulate the power history of a fuel rod during reactor operation and estimate the degree of deterioration of spent fuel cladding using recently developed models of the degradation mechanisms. The SFINEL code has been thoroughly benchmarked against collected in-pile data and operating experience: deformation and rupture, cladding oxidation, rod internal pressure, creep, and finally the comprehensive overall degradation process. (author). 75 refs., 51 figs., 5 tabs.

  7. On Optimal Policies for Network-Coded Cooperation

    DEFF Research Database (Denmark)

    Khamfroush, Hana; Roetter, Daniel Enrique Lucani; Pahlevani, Peyman

    2015-01-01

    Network-coded cooperative communication (NC-CC) has been proposed and evaluated as a powerful technology that can provide a better quality of service in next-generation wireless systems, e.g., D2D communications. Previous contributions have focused on performance evaluation of NC-CC scenarios rather than searching for optimal policies that can minimize the total cost of reliable packet transmission. We break from this trend by initially analyzing the optimal design of NC-CC for a wireless network with one source, two receivers, and half-duplex erasure channels. The problem is modeled as a special case of a Markov decision process (MDP), called a stochastic shortest path (SSP) problem, and is solved for any field size, arbitrary number of packets, and arbitrary erasure probabilities of the channels. The proposed MDP solution results in an optimal transmission policy per time slot...
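A stochastic shortest path problem of the kind this record formulates is solved by value iteration over an MDP with an absorbing goal state. The sketch below is a generic solver on a deliberately tiny stand-in instance (one packet, one receiver, erasure probability 0.5, unit cost per slot), where the expected cost is known in closed form: 1/(1-e) = 2 slots. The record's actual MDP over two receivers and field sizes is much richer.

```python
def ssp_value_iteration(states, actions, goal, sweeps=200):
    """Value iteration for a stochastic shortest path problem.
    actions[s] is a list of (cost, {next_state: prob}) pairs."""
    V = {s: 0.0 for s in states}
    for _ in range(sweeps):
        for s in states:
            if s == goal:
                continue
            V[s] = min(cost + sum(p * V[t] for t, p in trans.items())
                       for cost, trans in actions[s])
    return V

# Toy instance: one packet, one receiver, erasure prob 0.5, unit slot cost.
# Bellman equation V = 1 + 0.5*V gives the known answer V = 2 slots.
erasure = 0.5
states = ["pending", "delivered"]
actions = {"pending": [(1.0, {"pending": erasure, "delivered": 1 - erasure})]}
V = ssp_value_iteration(states, actions, "delivered")
print(V["pending"])  # -> ~2.0
```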

  8. Random mask optimization for fast neutron coded aperture imaging

    Energy Technology Data Exchange (ETDEWEB)

    McMillan, Kyle [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Univ. of California, Los Angeles, CA (United States); Marleau, Peter [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Brubaker, Erik [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2015-05-01

    In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin and highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma rays are being imaged, the optimum is seldom achieved. In such cases, more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources a random mask pattern can be optimized to provide better image quality than that of a URA.
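The MLEM reconstruction step used above has a compact multiplicative update, x ← x · (Aᵀ(y / Ax)) / (Aᵀ1), which preserves non-negativity. The sketch below runs it on a random stand-in system matrix rather than an instrument's actual mask response, purely to show the update converging on noise-free data.

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """MLEM multiplicative update: x <- x * (A^T (y / Ax)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)           # A^T 1
    for _ in range(n_iter):
        x *= A.T @ (y / (A @ x)) / sensitivity
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(40, 5))  # stand-in system (mask) matrix
x_true = np.array([1.0, 2.0, 3.0, 0.5, 1.5])
y = A @ x_true                            # noise-free detector counts
x_hat = mlem(A, y)
print(np.round(x_hat, 3))                 # close to x_true
```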

  9. Comparative evaluation of various optimization methods and the development of an optimization code system SCOOP

    International Nuclear Information System (INIS)

    Suzuki, Tadakazu

    1979-11-01

    Thirty-two programs for linear and nonlinear optimization problems, with or without constraints, have been developed or incorporated, and their stability, convergence and efficiency have been examined. On the basis of these evaluations, the first version of the optimization code system, SCOOP-I, has been completed. SCOOP-I is designed to be an efficient, reliable, useful and flexible system for general applications. The system enables one to find the global optimum for a wide class of problems by selecting the most appropriate of the optimization methods built into it. (author)

  10. Foundational development of an advanced nuclear reactor integrated safety code

    International Nuclear Information System (INIS)

    Clarno, Kevin; Lorber, Alfred Abraham; Pryor, Richard J.; Spotz, William F.; Schmidt, Rodney Cannon; Belcourt, Kenneth; Hooper, Russell Warren; Humphries, Larry LaRon

    2010-01-01

    This report describes the activities and results of a Sandia LDRD project whose objective was to develop and demonstrate foundational aspects of a next-generation nuclear reactor safety code that leverages advanced computational technology. The project scope was directed towards the systems-level modeling and simulation of an advanced, sodium cooled fast reactor, but the approach developed has a more general applicability. The major accomplishments of the LDRD are centered around the following two activities. (1) The development and testing of LIME, a Lightweight Integrating Multi-physics Environment for coupling codes that is designed to enable both 'legacy' and 'new' physics codes to be combined and strongly coupled using advanced nonlinear solution methods. (2) The development and initial demonstration of BRISC, a prototype next-generation nuclear reactor integrated safety code. BRISC leverages LIME to tightly couple the physics models in several different codes (written in a variety of languages) into one integrated package for simulating accident scenarios in a liquid sodium cooled 'burner' nuclear reactor. Other activities and accomplishments of the LDRD include (a) further development, application and demonstration of the 'non-linear elimination' strategy to enable physics codes that do not provide residuals to be incorporated into LIME, (b) significant extensions of the RIO CFD code capabilities, (c) complex 3D solid modeling and meshing of major fast reactor components and regions, and (d) an approach for multi-physics coupling across non-conformal mesh interfaces.

  12. BWROPT: A multi-cycle BWR fuel cycle optimization code

    Energy Technology Data Exchange (ETDEWEB)

    Ottinger, Keith E.; Maldonado, G. Ivan, E-mail: Ivan.Maldonado@utk.edu

    2015-09-15

    Highlights: • A multi-cycle BWR fuel cycle optimization algorithm is presented. • New fuel inventory and core loading pattern determination. • The parallel simulated annealing algorithm was used for the optimization. • Variable sampling probabilities were compared to constant sampling probabilities. - Abstract: A new computer code for performing BWR in-core and out-of-core fuel cycle optimization for multiple cycles simultaneously has been developed. Parallel simulated annealing (PSA) is used to optimize the new fuel inventory and the placement of new and reload fuel for each cycle considered. Several algorithm improvements were implemented and evaluated. The most significant of these are variable sampling probabilities and sampling new fuel types from an ordered array. A heuristic control rod pattern (CRP) search algorithm was also implemented, which is useful for single CRP determinations; however, this feature requires significant computational resources and is currently not practical for use in a full multi-cycle optimization. The PSA algorithm was demonstrated to be capable of significant objective function reduction and of finding candidate loading patterns without constraint violations. The use of variable sampling probabilities was shown to reduce runtime while producing better results than constant sampling probabilities. Sampling new fuel types from an ordered array had a mixed effect compared to random new fuel type sampling: using both random and ordered sampling produced better results but required longer runtimes.
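The core of any simulated annealing optimizer, including the PSA used here, is the acceptance rule: always take improving moves, and take worsening moves with probability exp(-Δ/T) under a cooling schedule. The sketch below applies that rule to a hypothetical stand-in objective (an 8-position pattern over 4 "fuel types"); BWROPT's parallelism, variable sampling probabilities, and real core physics are beyond this illustration.

```python
import math, random

def simulated_annealing(objective, x0, neighbor, t0=5.0, alpha=0.995, steps=4000):
    """Minimize objective: accept worse moves with prob exp(-delta/T)."""
    random.seed(42)
    x, fx = x0, objective(x0)
    best, fbest = list(x), fx
    temperature = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = objective(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / temperature):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        temperature *= alpha               # geometric cooling schedule
    return best, fbest

# Hypothetical stand-in objective: squared deviation from a target pattern
TARGET = [1, 2, 3, 0, 1, 2, 3, 0]          # 8 positions, 4 "fuel types"
objective = lambda x: sum((a - b) ** 2 for a, b in zip(x, TARGET))

def neighbor(x):                            # resample one random position
    y = list(x)
    y[random.randrange(len(y))] = random.randrange(4)
    return y

best, fbest = simulated_annealing(objective, [0] * 8, neighbor)
print(best, fbest)
```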

  13. Compiler design handbook optimizations and machine code generation

    CERN Document Server

    Srikant, YN

    2003-01-01

    The widespread use of object-oriented languages and Internet security concerns are just the beginning. Add embedded systems, multiple memory banks, highly pipelined units operating in parallel, and a host of other advances and it becomes clear that current and future computer architectures pose immense challenges to compiler designers-challenges that already exceed the capabilities of traditional compilation techniques. The Compiler Design Handbook: Optimizations and Machine Code Generation is designed to help you meet those challenges. Written by top researchers and designers from around the

  14. CBP TOOLBOX VERSION 2.0: CODE INTEGRATION ENHANCEMENTS

    Energy Technology Data Exchange (ETDEWEB)

    Smith, F.; Flach, G.; BROWN, K.

    2013-06-01

    This report describes enhancements made to code integration aspects of the Cementitious Barriers Project (CBP) Toolbox as a result of development work performed at the Savannah River National Laboratory (SRNL) in collaboration with Vanderbilt University (VU) in the first half of fiscal year 2013. Code integration refers to the interfacing to standalone CBP partner codes, used to analyze the performance of cementitious materials, with the CBP Software Toolbox. The most significant enhancements are: 1) Improved graphical display of model results. 2) Improved error analysis and reporting. 3) Increase in the default maximum model mesh size from 301 to 501 nodes. 4) The ability to set the LeachXS/Orchestra simulation times through the GoldSim interface. These code interface enhancements have been included in a new release (Version 2.0) of the CBP Toolbox.

  15. Investigation of Navier-Stokes Code Verification and Design Optimization

    Science.gov (United States)

    Vaidyanathan, Rajkumar

    2004-01-01

    With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is the focus. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single-element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas, and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature).
A preliminary multi-objective optimization
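The response-surface step in such a study can be sketched as fitting a quadratic polynomial to sampled objective values and minimizing the fit in closed form. The two-variable toy objective below, with a known optimum at (1, -0.5), is a stand-in for the expensive CFD evaluations, not the injector model itself.

```python
import numpy as np

def fit_quadratic_surrogate(samples, values):
    """Least-squares fit of f ~ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    x, y = samples[:, 0], samples[:, 1]
    design = np.column_stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    coeffs, *_ = np.linalg.lstsq(design, values, rcond=None)
    return coeffs

def surrogate_minimum(c):
    """Stationary point of the fitted quadratic: solve grad = 0."""
    hessian = np.array([[2 * c[3], c[4]], [c[4], 2 * c[5]]])
    return np.linalg.solve(hessian, -np.array([c[1], c[2]]))

# Toy objective standing in for a CFD response; true optimum at (1, -0.5)
def objective(x, y):
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2 + 3.0

grid = np.array([(x, y) for x in np.linspace(-2, 2, 9)
                        for y in np.linspace(-2, 2, 9)])
vals = np.array([objective(x, y) for x, y in grid])
c = fit_quadratic_surrogate(grid, vals)
print(surrogate_minimum(c))   # -> approximately [1.0, -0.5]
```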

  16. Single integrated device for optical CDMA code processing in dual-code environment.

    Science.gov (United States)

    Huang, Yue-Kai; Glesk, Ivan; Greiner, Christoph M; Iazkov, Dmitri; Mossberg, Thomas W; Wang, Ting; Prucnal, Paul R

    2007-06-11

    We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code length.

  17. Optimization and Validation of the Developed Uranium Isotopic Analysis Code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, J. H.; Kang, M. Y.; Kim, Jinhyeong; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-10-15

    γ-ray spectroscopy is a representative non-destructive assay (NDA) technique for nuclear material, and is less time-consuming and less expensive than destructive analysis. Although destructive techniques are more precise, correction algorithms can improve the performance of γ-spectroscopy. For this reason, an analysis code for uranium isotopic analysis has been developed by the Applied Nuclear Physics Group at Seoul National University. Overlapped γ- and x-ray peaks in the 89-101 keV Xα region are fitted with Gaussian and Lorentzian peak functions together with tail and background functions. In this study, optimizations of the full-energy peak efficiency calibration and of the fitting parameters for peak tail and background are performed, and validated with 24-hour acquisitions of CRM uranium samples. The analysis performance is improved for HEU samples, but further optimization of the fitting parameters is required for LEU sample analysis. In the future, optimization of the fitting parameters with various types of uranium samples will be performed, and ²³⁴U isotopic analysis algorithms and correction algorithms (coincidence effect, self-attenuation effect) will be developed.
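The record's code fits overlapped Gaussian and Lorentzian shapes with tail and background terms; as a much-reduced illustration of the Gaussian part only, a single background-free Gaussian peak can be recovered with the log-parabola trick (ln of a Gaussian is a parabola in energy). The peak position used below is a made-up stand-in, not a real uranium line energy.

```python
import numpy as np

def fit_gaussian_peak(energy, counts):
    """Recover (centroid, sigma, amplitude) of a single background-free
    Gaussian peak: ln(counts) is a parabola in energy, so use polyfit."""
    a, b, c = np.polyfit(energy, np.log(counts), 2)
    sigma = np.sqrt(-1.0 / (2.0 * a))
    mu = -b / (2.0 * a)
    amplitude = np.exp(c - a * mu * mu)
    return mu, sigma, amplitude

# Synthetic peak; 94.7 keV is a hypothetical stand-in position
energy = np.linspace(92.0, 97.0, 60)
mu0, sigma0, amp0 = 94.7, 0.45, 1.0e4
counts = amp0 * np.exp(-((energy - mu0) ** 2) / (2 * sigma0 ** 2))
mu, sigma, amp = fit_gaussian_peak(energy, counts)
print(round(mu, 3), round(sigma, 3))
```

Real spectra need the tail, background, and Lorentzian components, plus weighting for counting statistics, which is exactly where the fitting-parameter optimization described above comes in.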

  18. Turbine Airfoil Optimization Using Quasi-3D Analysis Codes

    Directory of Open Access Journals (Sweden)

    Sanjay Goel

    2009-01-01

    A new approach to optimizing the geometry of a turbine airfoil by simultaneously designing multiple 2D sections of the airfoil is presented in this paper. The complexity of 3D geometry modeling is circumvented by generating multiple 2D airfoil sections and constraining their geometry in the radial direction using first- and second-order polynomials that ensure smoothness in the radial direction. The flow fields of candidate geometries obtained during optimization are evaluated using a quasi-3D, inviscid CFD analysis code. An inviscid flow solver is used to reduce the execution time of the analysis. Multiple evaluation criteria, based on the Mach number profile obtained from the analysis of each airfoil cross-section, are used to compute a quality metric. A key contribution of the paper is the development of metrics that emulate the perception of the human designer in visually evaluating the Mach number distribution. A mathematical representation of the evaluation criteria coupled with a parametric geometry generator enables the use of formal optimization techniques in the design. The proposed approach is implemented in the optimal design of a low-pressure turbine nozzle.

  19. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    Science.gov (United States)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including graphical user interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization inputs and responses. This paper provides an overview of RCOTOOLS and its use.
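The wrapper pattern described above (rewrite tokens in a text input deck, run the code, regex-extract responses from its output) can be sketched generically. The parameter names below (rotor_radius, GrossWeight) are hypothetical stand-ins, not actual NDARC or CAMRAD II keywords, and files are replaced by in-memory strings to keep the sketch self-contained.

```python
import re

def set_design_variable(deck_text, name, value):
    """Rewrite 'name = <anything>' lines in a text input deck."""
    pattern = rf"(?m)^(\s*{re.escape(name)}\s*=\s*).*$"
    return re.sub(pattern, lambda m: f"{m.group(1)}{value}", deck_text)

def get_response(output_text, name):
    """Extract the numeric value after 'name =' from a text output file."""
    m = re.search(rf"{re.escape(name)}\s*=\s*([-+0-9.eE]+)", output_text)
    if m is None:
        raise KeyError(f"{name} not found in output")
    return float(m.group(1))

# Hypothetical input deck and solver output (in-memory stand-ins for files)
deck = "title = demo rotor\nrotor_radius = 5.0\nblade_count  = 4\n"
deck = set_design_variable(deck, "rotor_radius", 6.25)
print(deck)

output = "iteration 12 converged\n  GrossWeight = 21500.0 lb\n"
print(get_response(output, "GrossWeight"))   # -> 21500.0
```

In an actual optimization loop, the deck would be written to disk, the external code launched with `subprocess`, and the extracted response handed back to the optimizer as the objective or constraint value.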

  20. Integrated design optimization research and development in an industrial environment

    Science.gov (United States)

    Kumar, V.; German, Marjorie D.; Lee, S.-J.

    1989-01-01

    An overview is given of a design optimization project that has been in progress at the GE Research and Development Center for the past few years. The objective of the project is to develop a methodology and a software system for design automation and optimization of structural/mechanical components and systems. The effort focuses on research and development issues as well as on optimization applications that can be related to real-life industrial design problems. The overall technical approach is based on the integration of numerical optimization techniques, finite element methods, CAE and software engineering, and artificial intelligence/expert systems (AI/ES) concepts. The role of each of these engineering technologies in the development of a unified design methodology is illustrated. A software system, DESIGN-OPT, has been developed for both size and shape optimization of structural components subjected to static as well as dynamic loadings. By integrating this software with an automatic mesh generator, a geometric modeler and an attribute specification computer code, a software module, SHAPE-OPT, has been developed for shape optimization. Details of these software packages, together with their applications to some 2- and 3-dimensional design problems, are described.

  1. Integrated severe accident containment analysis with the CONTAIN computer code

    International Nuclear Information System (INIS)

    Bergeron, K.D.; Williams, D.C.; Rexroth, P.E.; Tills, J.L.

    1985-12-01

    Analysis of physical and radiological conditions inside the containment building during a severe (core-melt) nuclear reactor accident requires quantitative evaluation of numerous highly disparate yet coupled phenomena. These include two-phase thermodynamics and thermal-hydraulics, aerosol physics, fission product phenomena, core-concrete interactions, the formation and combustion of flammable gases, and the performance of engineered safety features. In the past, this complexity has meant that a complete containment analysis required application of suites of separate computer codes, each of which treated only a narrow subset of these phenomena, e.g., a thermal-hydraulics code, an aerosol code, a core-concrete interaction code, etc. In this paper, we describe the development and some recent applications of the CONTAIN code, which offers an integrated treatment of the dominant containment phenomena and the interactions among them. We describe the results of a series of containment phenomenology studies based upon realistic accident sequence analyses in actual plants. These calculations highlight various phenomenological effects that have potentially important implications for source term and/or containment loading issues, and which are difficult or impossible to treat using a less integrated code suite.

  2. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. Experimental results on 10 benchmark test functions of dimension N=30 show that IRPEO is competitive with, or even better than, various recently reported genetic algorithm (GA) versions with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by experimental results on several benchmark functions.
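The defining EO operations named above (rank components by their fitness contribution, pick one via a power-law distribution biased toward the worst, mutate unconditionally, remember the best solution seen) can be sketched on the sphere function f(x) = Σxᵢ². This is a minimal single-solution illustration of the idea, not the IRPEO algorithm itself.

```python
import random

def eo_sphere(n=5, tau=1.5, iterations=3000, seed=0):
    """Minimal extremal-optimization sketch on f(x) = sum(x_i^2):
    mutate one component chosen by power-law rank selection (worst-biased),
    accept unconditionally, and track the best solution seen."""
    rng = random.Random(seed)
    x = [rng.uniform(-10, 10) for _ in range(n)]
    best, f_best = list(x), sum(v * v for v in x)
    weights = [(k + 1) ** -tau for k in range(n)]      # rank 0 is the worst
    total = sum(weights)
    for _ in range(iterations):
        worst_first = sorted(range(n), key=lambda i: -x[i] * x[i])
        r, acc = rng.random() * total, 0.0
        for k in range(n):                             # power-law rank pick
            acc += weights[k]
            if acc >= r:
                chosen = worst_first[k]
                break
        x[chosen] = rng.uniform(-10, 10)               # unconditional mutation
        f = sum(v * v for v in x)
        if f < f_best:
            best, f_best = list(x), f
    return best, f_best

best, f_best = eo_sphere()
print(f_best)   # far below a random start's typical value of ~170
```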

  3. Integrating Renewable Energy Requirements Into Building Energy Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kaufmann, John R.; Hand, James R.; Halverson, Mark A.

    2011-07-01

    This report evaluates how and when to best integrate renewable energy requirements into building energy codes. The basic goals were to: (1) provide a rough guide of where we’re going and how to get there; (2) identify key issues that need to be considered, including a discussion of various options with pros and cons, to help inform code deliberations; and (3) to help foster alignment among energy code-development organizations. The authors researched current approaches nationally and internationally, conducted a survey of key stakeholders to solicit input on various approaches, and evaluated the key issues related to integration of renewable energy requirements and various options to address those issues. The report concludes with recommendations and a plan to engage stakeholders. This report does not evaluate whether the use of renewable energy should be required on buildings; that question involves a political decision that is beyond the scope of this report.

  4. Final Report: A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    Energy Technology Data Exchange (ETDEWEB)

    Yi, Qing [Univ. of Colorado, Colorado Springs, CO (United States); Whaley, Richard Clint [Univ. of Texas, San Antonio, TX (United States); Qasem, Apan [Texas State Univ., San Marcos, TX (United States); Quinlan, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-11-23

    This report summarizes our effort and results in building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully automated tuning to semi-automated development to manual programmable control.
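
The empirical-tuning idea described above (parameterize a transformation, then search over its configurations by timing them on the target machine) can be shown with a toy block-size autotuner. The blocked kernel and the candidate set are illustrative stand-ins, not part of ROSE or POET:

```python
import time

def matmul_blocked(A, B, block):
    """Blocked (tiled) square matrix multiply; `block` is the tunable parameter."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, block):
        for kk in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for k in range(kk, min(kk + block, n)):
                    a, row, Ci = A[i][k], B[k], C[i]
                    for j in range(n):
                        Ci[j] += a * row[j]
    return C

def autotune(n=64, candidates=(4, 8, 16, 32, 64)):
    """Empirical tuning in miniature: time each parameterization of the
    transformed kernel on this machine and keep the fastest."""
    A = [[float(i + j) for j in range(n)] for i in range(n)]
    B = [[float(i - j) for j in range(n)] for i in range(n)]
    timings = {}
    for b in candidates:
        t0 = time.perf_counter()
        matmul_blocked(A, B, b)
        timings[b] = time.perf_counter() - t0
    return min(timings, key=timings.get), timings

best_block, timings = autotune()
```

A production tuner would search a much larger, multi-dimensional configuration space and average repeated timings, but the feedback loop is the same: generate a variant, run it, keep the best.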

  5. Revised SWAT. The integrated burnup calculation code system

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya; Mochizuki, Hiroki [Department of Fuel Cycle Safety Research, Nuclear Safety Research Center, Tokai Research Establishment, Japan Atomic Energy Research Institute, Tokai, Ibaraki (Japan); Kiyosumi, Takehide [The Japan Research Institute, Ltd., Tokyo (Japan)

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline and a user's manual of the revised SWAT. The revised SWAT expands functionality, supports more machines, and corrects several bugs reported by users of the previous SWAT. (author)

  6. Revised SWAT. The integrated burnup calculation code system

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Kiyosumi, Takehide

    2000-07-01

    SWAT is an integrated burnup code system developed for the analysis of post-irradiation examinations, transmutation of radioactive waste, and burnup credit problems. This report gives an outline and a user's manual of the revised SWAT. The revised SWAT expands functionality, supports more machines, and corrects several bugs reported by users of the previous SWAT. (author)

  7. Development of FBR integrity system code. Basic concept

    International Nuclear Information System (INIS)

    Asayama, Tai

    2001-05-01

    For fast breeder reactors to be commercialized, they must be more reliable, safer, and, at the same time, economically competitive with future light water reactors. Innovation of the elevated-temperature structural design standard is necessary to achieve this goal. The most powerful way is to enlarge the scope of the structural integrity code to cover items other than the design evaluation addressed in existing codes. Items that must be newly covered are prerequisites of design, fabrication, examination, operation and maintenance, etc. This allows designers to choose the most economical combination of design variations that achieves the specific reliability needed for a particular component. Designing components by this concept, a cost-minimum design of a whole plant can be realized. By determining the reliability that must be achieved for a component with risk technologies, further economic improvement can be expected by avoiding excessive quality. Recognizing the necessity for codes based on the new concept, the development of the 'FBR integrity system code' began in 2000. Research and development will last 10 years. This development requires the basic logic and system, as well as the technologies that materialize the concept. An original logic and system must be developed, because no existing research is available inside or outside Japan. This report presents the results of the work done in the first year regarding the basic idea, methodology, and structure of the code. (author)

  8. Vertical integration and optimal reimbursement policy.

    Science.gov (United States)

    Afendulis, Christopher C; Kessler, Daniel P

    2011-09-01

    Health care providers may vertically integrate not only to facilitate coordination of care, but also for strategic reasons that may not be in patients' best interests. Optimal Medicare reimbursement policy depends upon the extent to which each of these explanations is correct. To investigate, we compare the consequences of the 1997 adoption of prospective payment for skilled nursing facilities (SNF PPS) in geographic areas with high versus low levels of hospital/SNF integration. We find that SNF PPS decreased spending more in high integration areas, with no measurable consequences for patient health outcomes. Our findings suggest that integrated providers should face higher-powered reimbursement incentives, i.e., less cost-sharing. More generally, we conclude that purchasers of health services (and other services subject to agency problems) should consider the organizational form of their suppliers when choosing a reimbursement mechanism.

  9. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. Particle Swarm Optimization (PSO) imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with standard spectra and with the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems. In addition, the SDPSO code has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
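
A minimal sketch of the general approach (not the SDPSO code itself): a plain PSO searching for a nonnegative spectrum x that minimizes ||Rx − y||² for a small, made-up response matrix. The swarm parameters and the 3×2 system are illustrative assumptions:

```python
import random

def pso_unfold(R, y, iters=300, swarm=30, seed=1):
    """Toy PSO: find a nonnegative spectrum x minimizing ||R x - y||^2."""
    rng = random.Random(seed)
    n = len(R[0])
    def cost(x):
        return sum((sum(R[i][j] * x[j] for j in range(n)) - y[i]) ** 2
                   for i in range(len(R)))
    X = [[rng.uniform(0.0, 2.0) for _ in range(n)] for _ in range(swarm)]
    V = [[0.0] * n for _ in range(swarm)]
    P = [list(x) for x in X]                     # personal bests
    pc = [cost(x) for x in X]
    g = list(P[min(range(swarm), key=lambda i: pc[i])])  # global best
    gc = min(pc)
    w, c1, c2 = 0.7, 1.5, 1.5                    # standard convergent settings
    for _ in range(iters):
        for i in range(swarm):
            for j in range(n):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (g[j] - X[i][j]))
                X[i][j] = max(0.0, X[i][j] + V[i][j])  # keep spectrum nonnegative
            c = cost(X[i])
            if c < pc[i]:
                P[i], pc[i] = list(X[i]), c
                if c < gc:
                    g, gc = list(X[i]), c
    return g, gc

# 3-channel detector response, 2-bin spectrum with a known answer.
R = [[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]]
true_x = [1.0, 0.5]
y = [sum(R[i][j] * true_x[j] for j in range(2)) for i in range(3)]
x_hat, err = pso_unfold(R, y)
```

Real unfolding problems add regularization and physics constraints, but the fitness function above captures the residual the abstract's comparison is based on.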

  10. Optimization of the FAST ICRF antenna using TOPICA code

    International Nuclear Information System (INIS)

    Sorba, M.; Milanesio, D.; Maggiora, R.; Tuccillo, A.

    2010-01-01

    Ion Cyclotron Resonance Heating is one of the most important auxiliary heating systems in most plasma confinement experiments. Because of this, the need for very accurate design of ion cyclotron (IC) launchers has grown dramatically in recent years. Furthermore, a reliable simulation tool is crucial to the successful design of these antennas, since full testing is impossible outside experiments. One of the most advanced and validated simulation codes is TOPICA, which can handle the geometrical level of detail of a real antenna in front of an accurately described plasma scenario. Adopting this essential tool made it possible to reach a refined design of the ion cyclotron radio frequency antenna for the FAST (Fusion Advanced Studies Torus) experiment. Starting from a streamlined antenna model and then following well-defined refinement procedures, an optimized launcher design in terms of power delivered to the plasma was finally achieved. The computer-assisted geometry refinements increased the performance of the antenna, notably its power handling: improvements of this extent were not achieved in the past, essentially because no predictive tools could analyze the detailed effects of antenna geometry in plasma-facing conditions. Thus, with the help of the TOPICA code, it has been possible to comply with the FAST experiment requirements in terms of vacuum chamber constraints and power delivered to the plasma. Once the antenna geometry was optimized for a reference plasma profile, the analysis of the launcher's performance was extended to two plasma scenarios. Exploiting all TOPICA features, it has been possible to predict the behavior of the launcher in real operating conditions, for instance when varying the position of the separatrix surface.
In order to fulfil the analysis of the FAST IC antenna, the study of the RF potentials, which depend on the parallel electric field computation

  11. Integrated Fuel-Coolant Interaction (IFCI 6.0) code

    International Nuclear Information System (INIS)

    Davis, F.J.; Young, M.F.

    1994-04-01

    The Integrated Fuel-Coolant Interaction (IFCI) computer code is being developed at Sandia National Laboratories to investigate the fuel-coolant interaction (FCI) problem at large scale using a two-dimensional, four-field hydrodynamic framework and physically based models. IFCI will be capable of treating all major FCI processes in an integrated manner. This document is a product of the effort to generate a stand-alone version of IFCI, IFCI 6.0. The User's Manual describes in detail the hydrodynamic method and physical models used in IFCI 6.0. Appendix A is an input manual, provided for the creation of working decks.

  12. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field at long code lengths. However, the performance of binary LDPC codes degrades when the codeword length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor is introduced in both the check-node and bit-node updates of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam...
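
A hedged sketch of the kind of update the abstract describes: a min-sum check-node rule with a multiplicative optimization (normalization) factor, plus a bit-node update with an optional factor. The record is truncated, so the exact form of the paper's factors is unknown here; `alpha` and `beta` below are illustrative placeholders.

```python
def check_node_update(msgs, alpha=0.8):
    """Normalized min-sum check-node update: the message out on each edge is
    the minimum magnitude of the *other* incoming messages, scaled by the
    optimization factor `alpha`, with the product of their signs."""
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            if m < 0:
                sign = -sign
        out.append(alpha * sign * min(abs(m) for m in others))
    return out

def bit_node_update(channel_llr, incoming, beta=1.0):
    """Bit-node update: channel LLR plus (optionally scaled) extrinsic
    check-node messages, excluding each edge's own contribution."""
    total = channel_llr + beta * sum(incoming)
    return [total - beta * m for m in incoming]

# One check node with three connected edges, LLR messages in:
extrinsic = check_node_update([2.0, -1.0, 3.0], alpha=1.0)  # plain min-sum
```

With `alpha = 1.0` this is plain min-sum, which overestimates reliability relative to sum-product; scaling by `alpha < 1` is the standard way such a factor recovers most of the lost performance.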

  13. METHODS OF INTEGRATED OPTIMIZATION MAGLEV TRANSPORT SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. Lasher

    2013-09-01

    Full Text Available Purpose. To demonstrate the feasibility of the proposed integrated optimization of various MTS parameters to reduce capital investment as well as operational and maintenance expenses, which would make the use of MTS reasonable. Significant capital investment and high operational and maintenance costs are the main reasons why Maglev Transport Systems (MTS) are currently hardly used for High-Speed Ground Transportation (HSGT). Therefore, this article justifies the use of the Theory of Complex Optimization of Transport (TCOT), developed by one of the co-authors, to reduce MTS costs. Methodology. Following TCOT, the authors developed an abstract model of the generalized transport system (AMSTG). This model mathematically determines the optimal balance between all components of the system and thus provides the ultimate adaptation of any transport system to the conditions of its application. To identify areas for effective use of MTS, the authors applied TCOT to develop a dynamic model of the distribution and expansion of spheres of effective use of transport systems (DMRRSEPTS). Based on this model, the most efficient transport system was selected for each individual track. The main criterion for the efficiency of MTS is the specific transportation tariff, calculated from the payback of the total given expenses over a standard payback period or the term of a credit. Findings. Multiple calculations for four types of MTS (TRANSRAPID, MLX01, TRANSMAG and TRANSPROGRESS) demonstrated the efficiency of integrated optimization of the parameters of such systems. This research made it possible to expand the scope of effective use of MTS roughly twofold. The achieved results were presented at many international conferences in Germany, Switzerland, the United States, China, Ukraine, etc.
Using MTS as an

  14. Phenotypic Graphs and Evolution Unfold the Standard Genetic Code as the Optimal

    Science.gov (United States)

    Zamudio, Gabriel S.; José, Marco V.

    2018-03-01

    In this work, we explicitly consider the evolution of the Standard Genetic Code (SGC) by assuming two evolutionary stages, to wit, the primeval RNY code and two intermediate codes in between. We used network theory and graph theory to measure the connectivity of each phenotypic graph. The connectivity values are compared to the values of the codes under different randomization scenarios. An error-correcting optimal code is one in which the algebraic connectivity is minimized. We show that the SGC is optimal in regard to its robustness and error-tolerance when compared to all random codes under different assumptions.
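
The graph measure used above, algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian (the Fiedler value). A small self-contained illustration on a 4-node path graph (not the paper's phenotypic graphs) computes it with cyclic Jacobi eigenvalue sweeps:

```python
import math

def laplacian(n, edges):
    """Graph Laplacian L = D - A of an undirected graph on n nodes."""
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    return L

def sym_eigvals(A, sweeps=50):
    """Cyclic Jacobi sweeps: similarity rotations drive the off-diagonal
    entries of a small symmetric matrix to zero, so the diagonal converges
    to the eigenvalues."""
    n = len(A)
    A = [row[:] for row in A]
    for _ in range(sweeps):
        for p in range(n):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-12:
                    continue
                theta = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                for k in range(n):          # A <- G A
                    apk, aqk = A[p][k], A[q][k]
                    A[p][k] = c * apk - s * aqk
                    A[q][k] = s * apk + c * aqk
                for k in range(n):          # A <- A G^T
                    akp, akq = A[k][p], A[k][q]
                    A[k][p] = c * akp - s * akq
                    A[k][q] = s * akp + c * akq
    return sorted(A[i][i] for i in range(n))

# Algebraic connectivity = second-smallest Laplacian eigenvalue.
eigs = sym_eigvals(laplacian(4, [(0, 1), (1, 2), (2, 3)]))  # 4-node path
fiedler = eigs[1]
```

For the path graph the value is 2 − √2 ≈ 0.586; the smallest eigenvalue of any connected graph's Laplacian is 0. Comparing such values across randomized codes is the kind of test the abstract describes.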

  15. A Realistic Model under which the Genetic Code is Optimal

    NARCIS (Netherlands)

    Buhrman, H.; van der Gulik, P.T.S.; Klau, G.W.; Schaffner, C.; Speijer, D.; Stougie, L.

    2013-01-01

    The genetic code has a high level of error robustness. Using values of hydrophobicity scales as a proxy for amino acid character, and the mean square measure as a function quantifying error robustness, a value can be obtained for a genetic code which reflects the error robustness of that code. By

  16. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However this method can prove to be overwhelmingly time consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal

  17. Development and assessment of best estimate integrated safety analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bub Dong; Lee, Young Jin; Hwang, Moon Kyu (and others)

    2007-03-15

    Improvement of the integrated safety analysis code MARS3.0 has been carried out and a multi-D safety analysis application system has been established. An iterative matrix solver and a parallel processing algorithm have been introduced, and a LINUX version has been generated to enable MARS to run on PC clusters. MARS variables and subroutines have been reformed and modularised to simplify code maintenance. Model uncertainty analyses have been performed for the THTF, FLECHT, NEPTUN, and LOFT experiments as well as the APR1400 plant. Participation in international cooperative research projects such as OECD BEMUSE, SETH, PKL, BFBT, and TMI-2 has been actively pursued as part of code assessment efforts. The assessment, evaluation and experimental data obtained through these projects have been registered and maintained in the T/H Databank. Multi-D analyses of APR1400 LBLOCA, DVI Break, SLB, and SGTR have been carried out as part of the application efforts in multi-D safety analysis. A GUI-based 3D input generator has been developed for user convenience. Operation of the MARS Users Group (MUG) has continued, and through MUG the technology has been transferred to 24 organisations. A set of 4 volumes of user manuals has been compiled, and correction reports for the code errors reported during MARS development have been published.

  18. Development and assessment of best estimate integrated safety analysis code

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Young Jin; Hwang, Moon Kyu

    2007-03-01

    Improvement of the integrated safety analysis code MARS3.0 has been carried out and a multi-D safety analysis application system has been established. An iterative matrix solver and a parallel processing algorithm have been introduced, and a LINUX version has been generated to enable MARS to run on PC clusters. MARS variables and subroutines have been reformed and modularised to simplify code maintenance. Model uncertainty analyses have been performed for the THTF, FLECHT, NEPTUN, and LOFT experiments as well as the APR1400 plant. Participation in international cooperative research projects such as OECD BEMUSE, SETH, PKL, BFBT, and TMI-2 has been actively pursued as part of code assessment efforts. The assessment, evaluation and experimental data obtained through these projects have been registered and maintained in the T/H Databank. Multi-D analyses of APR1400 LBLOCA, DVI Break, SLB, and SGTR have been carried out as part of the application efforts in multi-D safety analysis. A GUI-based 3D input generator has been developed for user convenience. Operation of the MARS Users Group (MUG) has continued, and through MUG the technology has been transferred to 24 organisations. A set of 4 volumes of user manuals has been compiled, and correction reports for the code errors reported during MARS development have been published.

  19. Validating the BISON fuel performance code to integral LWR experiments

    Energy Technology Data Exchange (ETDEWEB)

    Williamson, R.L., E-mail: Richard.Williamson@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gamble, K.A., E-mail: Kyle.Gamble@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Perez, D.M., E-mail: Danielle.Perez@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Novascone, S.R., E-mail: Stephen.Novascone@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Pastore, G., E-mail: Giovanni.Pastore@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Gardner, R.J., E-mail: Russell.Gardner@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Hales, J.D., E-mail: Jason.Hales@inl.gov [Fuel Modeling and Simulation, Idaho National Laboratory, P.O. Box 1625, Idaho Falls, ID 83415-3840 (United States); Liu, W., E-mail: Wenfeng.Liu@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States); Mai, A., E-mail: Anh.Mai@anatech.com [ANATECH Corporation, 5435 Oberlin Dr., San Diego, CA 92121 (United States)

    2016-05-15

    Highlights: • The BISON multidimensional fuel performance code is being validated to integral LWR experiments. • Code and solution verification are necessary prerequisites to validation. • Fuel centerline temperature comparisons through all phases of fuel life are very reasonable. • Accuracy in predicting fission gas release is consistent with state-of-the-art modeling and the involved uncertainties. • Rod diameter comparisons are not satisfactory and further investigation is underway. - Abstract: BISON is a modern finite element-based nuclear fuel performance code that has been under development at Idaho National Laboratory (INL) since 2009. The code is applicable to both steady and transient fuel behavior and has been used to analyze a variety of fuel forms in 1D spherical, 2D axisymmetric, or 3D geometries. Code validation is underway and is the subject of this study. A brief overview of BISON's computational framework, governing equations, and general material and behavioral models is provided. BISON code and solution verification procedures are described, followed by a summary of the experimental data used to date for validation of Light Water Reactor (LWR) fuel. Validation comparisons focus on fuel centerline temperature, fission gas release, and rod diameter both before and following fuel-clad mechanical contact. Comparisons for 35 LWR rods are consolidated to provide an overall view of how the code is predicting physical behavior, with a few select validation cases discussed in greater detail. Results demonstrate that (1) fuel centerline temperature comparisons through all phases of fuel life are very reasonable with deviations between predictions and experimental data within ±10% for early life through high burnup fuel and only slightly out of these bounds for power ramp experiments, (2) accuracy in predicting fission gas release appears to be consistent with state-of-the-art modeling and with the involved uncertainties and (3) comparison

  20. Integrating Multibody Simulation and CFD: toward Complex Multidisciplinary Design Optimization

    Science.gov (United States)

    Pieri, Stefano; Poloni, Carlo; Mühlmeier, Martin

    This paper describes the use of integrated multidisciplinary analysis and optimization of a race car model on a predefined circuit. The objective is the definition of the most efficient geometric configuration that can guarantee the lowest lap time. To carry out this study it was necessary to interface the design optimization software modeFRONTIER with the following packages: CATIA v5, a three-dimensional CAD package, for the definition of the parametric geometry; A.D.A.M.S./Motorsport, a multibody dynamic simulation package; IcemCFD, a mesh generator, for the automatic generation of the CFD grid; and CFX, a Navier-Stokes code, for prediction of the fluid-dynamic forces. The process integration makes it possible to compute, for each geometric configuration, a set of aerodynamic coefficients that are then used in the multibody simulation to compute the lap time. Finally, an automatic optimization procedure is started and the lap time is minimized. The whole process is executed on a Linux cluster running CFD simulations in parallel.
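
The coupling pattern described here (geometry parameter → aerodynamic coefficients → lap-time model → optimizer) can be caricatured in a few lines. The coefficient and lap-time formulas below are invented stand-ins for the CFD and multibody solvers, chosen only to exhibit the downforce/drag trade-off the loop optimizes:

```python
def aero_coefficients(wing_angle):
    """Stand-in for the CAD + mesh + CFD chain: map one geometry parameter to
    invented lift (downforce) and drag coefficients."""
    cl = 0.8 + 0.12 * wing_angle         # more wing angle -> more downforce...
    cd = 0.3 + 0.02 * wing_angle ** 2    # ...at a quadratic drag cost
    return cl, cd

def lap_time(cl, cd):
    """Stand-in for the multibody lap simulation: downforce helps the corners,
    drag hurts the straights (toy model, seconds)."""
    return 60.0 - 8.0 * cl / (1.0 + cl) + 5.0 * cd

def optimize(angles):
    """The outer loop the optimizer runs: push each candidate geometry through
    the whole chain and keep the lowest lap time."""
    best = min(angles, key=lambda a: lap_time(*aero_coefficients(a)))
    return best, lap_time(*aero_coefficients(best))

best_angle, best_lap = optimize([i * 0.25 for i in range(21)])  # sweep 0..5
```

In the real workflow each `aero_coefficients` call costs a full meshing and CFD run, which is why the paper distributes those evaluations across a Linux cluster and lets a search strategy, rather than an exhaustive sweep, pick the candidates.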

  1. Engineering application of in-core fuel management optimization code with CSA algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhihong; Hu, Yongming [INET, Tsinghua university, Beijing 100084 (China)

    2009-06-15

    PWR in-core loading (reloading) pattern optimization is a complex combinatorial problem. An excellent fuel management optimization code can greatly improve the efficiency of core reloading design and bring economic and safety benefits. Many optimization codes based on experience or on search algorithms (such as SA, GA, ANN, ACO) have been developed, but improving their search efficiency and engineering usability still requires further research. CSA (Characteristic Statistic Algorithm) is a highly efficient global optimization algorithm developed by our team, whose performance has been proven on many problems (such as Traveling Salesman Problems). The idea of CSA is to guide the search direction by the statistical distribution of characteristic values, which makes the algorithm well suited to fuel management optimization. An optimization code with CSA has been developed and used on many core models. The research in this paper improves the engineering usability of the CSA code according to actual engineering requirements. Many new improvements have been completed in this code: 1. Considering the asymmetry of burnup within one assembly, the rotation of each assembly is treated as a new optimization variable. 2. The worth of control rods must satisfy the given constraint, so corresponding modifications are added to the optimization code. 3. To deal with the combination of alternate cycles, multi-cycle optimization is considered. 4. To confirm the accuracy of the optimization results, the physics calculation module in this code has been extensively verified, and the parameters of the optimized schemes are checked with the SCIENCE code. The improved optimization code with CSA has been used at the Qinshan nuclear plant in China. The reloadings of cycles 7, 8, and 9 (12 months, no burnable poisons) and the 18-month equilibrium cycle reloading (with burnable poisons) were optimized. Many optimized schemes were found by the CSA code

  2. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved … solutions. Considering the property of abnormal event detection, i.e., only normal videos are used as training data due to practical reasons, effective codes in classification applications may not perform well in abnormality detection. Therefore, we compare the sparse codes and comprehensively evaluate … their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...

  3. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar

    2012-10-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculation. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. VHDL code that limits the range of integer values occupies less area than code that does not. This VHDL coding method is suitable for multi-stage circuits.

  4. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    Directory of Open Access Journals (Sweden)

    Zulfikar

    2015-05-01

    Full Text Available A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and applies to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for every value involved in each step of the calculation. Conventional and efficient VHDL coding methods are presented and their synthesis results are compared. VHDL code that limits the range of integer values occupies less area than code that does not. This VHDL coding method is suitable for multi-stage circuits.

  5. Hydrodynamic Instability, Integrated Code, Laboratory Astrophysics, and Astrophysics

    Science.gov (United States)

    Takabe, Hideaki

    2016-10-01

    This article accompanies the Edward Teller Medal memorial lecture, presented at the IFSA03 conference held on September 12th, 2003, in Monterey, CA. The author focuses on his main contributions to fusion science and its extension to astrophysics in the field of theory and computation by picking out five topics. The first is the anomalous resistivity to hot electrons penetrating the over-dense region through the ion wave turbulence driven by the return current compensating the current flow of the hot electrons. It is concluded that a potential almost equal to the average kinetic energy of the hot electrons is established to prevent their penetration. The second is the ablative stabilization of the Rayleigh-Taylor instability at the ablation front and its dispersion relation, the so-called Takabe formula. This formula gave a principal guideline for stable target design. The author has developed an integrated code ILESTA (1D & 2D) for analyses and design of laser-produced plasmas including implosion dynamics; it has also been applied to the design of high-gain targets. The third is the development of the integrated code ILESTA. The fourth is on laboratory astrophysics with intense lasers. This consists of two parts: one is a review of its historical background, and the other is on how laser plasma relates to wide-ranging astrophysics and the purposes of promoting such research. In relation to one purpose, I give a comment on anomalous transport of relativistic electrons in the Fast Ignition laser fusion scheme. Finally, I briefly summarize recent activity in relation to applying the author's experience to the development of an integrated code for studying extreme phenomena in astrophysics.

  6. Generalized rank weights of reducible codes, optimal cases and related properties

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto

    2018-01-01

    in network coding. In this paper, we study their security behavior against information leakage on networks when applied as coset coding schemes, giving the following main results: 1) we give lower and upper bounds on their generalized rank weights (GRWs), which measure worst-case information leakage … to the wire tapper; 2) we find new parameters for which these codes are MRD (meaning that their first GRW is optimal) and use the previous bounds to estimate their higher GRWs; 3) we show that all linear (over the extension field) codes, whose GRWs are all optimal for fixed packet and code sizes but varying … length are reducible codes up to rank equivalence; and 4) we show that the information leaked to a wire tapper when using reducible codes is often much less than the worst case given by their (optimal in some cases) GRWs. We conclude with some secondary related properties: conditions to be rank

  7. Optimal nonimaging integrated evacuated solar collector

    Science.gov (United States)

    Garrison, John D.; Duff, W. S.; O'Gallagher, Joseph J.; Winston, Roland

    1993-11-01

    A nonimaging integrated evacuated solar collector for solar thermal energy collection is discussed, in which the lower portion of the tubular glass vacuum envelope is shaped, and its inside surface mirrored, to optimally concentrate sunlight onto an absorber tube in the vacuum. This design uses a vacuum to eliminate heat loss from the absorber surface by conduction and convection of air, soda lime glass for the vacuum envelope material to lower cost, optimal nonimaging concentration integrated with the glass vacuum envelope to lower cost and improve solar energy collection, and a selective absorber for the absorbing surface with high absorptance and low emittance to lower heat loss by radiation and improve energy collection efficiency. This leads to a very low heat loss collector with high optical collection efficiency, which can operate at temperatures up to the order of 250 °C with good efficiency while being lower in cost than current evacuated solar collectors. Cost estimates are presented which indicate that this solar collector system can be cost-competitive with fossil fuel heat energy sources when the collector system is produced in sufficient volume. Nonimaging concentration, which reduces cost while improving performance, and which allows efficient solar energy collection without tracking the sun, is a key element in this solar collector design.

  8. Optimal Near-Hitless Network Failure Recovery Using Diversity Coding

    Science.gov (United States)

    Avci, Serhat Nazim

    2013-01-01

    Link failures in wide area networks are common and cause significant data losses. Mesh-based protection schemes offer high capacity efficiency but they are slow, require complex signaling, and are unstable. Diversity coding is a proactive coding-based recovery technique which offers near-hitless (sub-ms) restoration with a competitive spare capacity…
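The near-hitless recovery described above rests on a simple erasure-coding idea: a parity stream sent proactively over a disjoint path lets the receiver rebuild any single failed link's data without retransmission or signaling. A minimal XOR-parity sketch (illustrative payloads; not the paper's full scheme):

```python
# 1-for-N protection with a single XOR parity stream: the receiver can
# rebuild any one lost data stream from the survivors plus the parity.

def make_parity(streams):
    """XOR corresponding bytes of all data streams into one parity stream."""
    parity = bytes(len(streams[0]))
    for s in streams:
        parity = bytes(a ^ b for a, b in zip(parity, s))
    return parity

def recover(streams, parity, lost_index):
    """Rebuild the stream at lost_index; the other streams must be intact."""
    rebuilt = parity
    for i, s in enumerate(streams):
        if i != lost_index:
            rebuilt = bytes(a ^ b for a, b in zip(rebuilt, s))
    return rebuilt

data = [b"link", b"fail", b"test"]          # payloads on three disjoint links
parity = make_parity(data)
assert recover(data, parity, 1) == b"fail"  # link 1 lost, rebuilt at receiver
```

Because the parity is computed before any failure occurs, recovery is purely local to the receiver, which is what makes sub-millisecond restoration plausible.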

  9. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Jing Li (Tiffany)

    2008-04-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detection. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT) analysis and a modified “convergence-constraint” density evolution (DE) method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general), and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional “threshold-constraint” method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  10. Efficacy of Code Optimization on Cache-based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    The current common wisdom in the U.S. is that the powerful, cost-effective supercomputers of tomorrow will be based on commodity (RISC) micro-processors with cache memories. Already, most distributed systems in the world use such hardware as building blocks. This shift away from vector supercomputers and towards cache-based systems has brought about a change in programming paradigm, even when ignoring issues of parallelism. Vector machines require inner-loop independence and regular, non-pathological memory strides (usually this means: non-power-of-two strides) to allow efficient vectorization of array operations. Cache-based systems require spatial and temporal locality of data, so that data once read from main memory and stored in high-speed cache memory is used optimally before being written back to main memory. This means that the most cache-friendly array operations are those that feature zero or unit stride, so that each unit of data read from main memory (a cache line) contains information for the next iteration in the loop. Moreover, loops ought to be 'fat', meaning that as many operations as possible are performed on cache data, provided instruction caches do not overflow and enough registers are available. If unit stride is not possible, for example because of some data dependency, then care must be taken to avoid pathological strides, just as on vector computers. For cache-based systems the issues are more complex, due to the effects of associativity and of non-unit block (cache line) size. But there is more to the story. Most modern micro-processors are superscalar, which means that they can issue several (arithmetic) instructions per clock cycle, provided that there are enough independent instructions in the loop body. This is another argument for providing fat loop bodies. With these restrictions, it appears fairly straightforward to produce code that will run efficiently on any cache-based system. 
It can be argued that although some of the important
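The stride argument in this abstract can be made concrete. Below, the same reduction is written with unit stride (row by row through a row-major array, so each cache line is consumed fully) and with a stride of N (column by column, touching a new cache line on every access). The sketch is in Python, which shows only the access pattern; the actual speed gap appears in compiled code on real hardware.

```python
# Two traversals of a conceptually N x N row-major array stored flat.
N = 256
a = list(range(N * N))

def sum_unit_stride():
    """Row-major order: consecutive addresses, cache-friendly."""
    total = 0
    for i in range(N):
        for j in range(N):
            total += a[i * N + j]
    return total

def sum_strided():
    """Column-major order: stride of N elements, cache-hostile."""
    total = 0
    for j in range(N):
        for i in range(N):
            total += a[i * N + j]
    return total

# Both compute the same sum of 0 .. N*N-1; only the access order differs.
assert sum_unit_stride() == sum_strided() == N * N * (N * N - 1) // 2
```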

  11. ANL/CANTIA code for steam generator tube integrity assessment

    International Nuclear Information System (INIS)

    Revankar, S.T.; Wolf, B.; Majumdar, S.; Riznic, J.R.

    2009-01-01

    Steam generator (SG) tubes have an important safety role in CANDU type reactors and Pressurized Water Reactors (PWRs) because they constitute one of the primary barriers between the radioactive and non-radioactive sides of the nuclear plant. The SG tubes are susceptible to corrosion and damage. A failure of a single steam generator tube, or even a few tubes, would not be a serious safety-related event in a CANDU reactor. The leakage from a ruptured tube is within the makeup capacity of the primary heat transport system, so that as long as the operator takes the correct actions, the off-site consequences will be negligible. A sufficient safety margin against tube rupture used to be the basis for a variety of maintenance strategies developed to maintain a suitable level of plant safety and reliability. Several through-wall flaws may remain in operation and potentially contribute to the total primary-to-secondary leak rate. Assessment of the conditional probabilities of tube failures, leak rates, and ultimately the risk of exceeding licensing dose limits has been used for steam generator tube fitness-for-service assessment. The advantage of this type of analysis is that it avoids the excessive conservatism typically present in deterministic methodologies. However, it requires considerable effort and expense to develop all of the failure, leakage, probability of detection, and flaw growth distributions and models necessary to obtain meaningful results from a probabilistic model. The Canadian Nuclear Safety Commission (CNSC) recently developed the CANTIA methodology for probabilistic assessment of inspection strategies for steam generator tubes and their direct effect on the probability of tube failure and the primary-to-secondary leak rate. Recently, Argonne National Laboratory has developed tube integrity and leak rate models under the Integrated Steam Generator Tube Integrity Program (ISGTIP-2). These models have been incorporated in the ANL/CANTIA code. This paper presents the ANL

  12. Novel Area Optimization in FPGA Implementation Using Efficient VHDL Code

    OpenAIRE

    Zulfikar, Z

    2012-01-01

    A novel method for area efficiency in FPGA implementation is presented. The method is realized through the flexibility and wide capability of VHDL coding, and is applied to arithmetic operations such as addition, subtraction and others. The design technique aims to reduce the occupied area of multi-stage circuits by selecting a suitable range for all values involved in every step of the calculations. Conventional and efficient VHDL coding methods are presented and the synthesis results are compared....

  13. Integrated use of Primavera and ORAM codes in outage 1999 at NPP Krsko

    International Nuclear Information System (INIS)

    Krajnc, J.; Skaler, F.; Basic, I.; Kocnar, R.

    1999-01-01

    The paper deals with the following postulated main goals of outage scheduling with the Primavera tool at Krsko NPP: planning and controlling of resources (people, equipment, locations, sources); controlling the safety aspects of an outage and assuring the defense-in-depth philosophy (through integrated safety assessment by the ORAM code); diverse use of the plan during the preparation period and outage progress (MCB, work leaders, management, planning dept., subcontractors, support, etc.); and allowing for optimization of outage duration. A snapshot in Primavera of what actually happened in outage 1999, lessons learned, and a new work template for the next year's outage round out the scope of the paper. (author)

  14. Multi-Objective Climb Path Optimization for Aircraft/Engine Integration Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Aristeidis Antonakis

    2017-04-01

    Full Text Available In this article, a new multi-objective approach to the aircraft climb path optimization problem, based on the Particle Swarm Optimization algorithm, is introduced for use in aircraft–engine integration studies. It combines a simulation with a traditional Energy approach, which incorporates, among others, a proposed path-tracking scheme for guidance in the Altitude–Mach plane. The adoption of a population-based solver serves to simplify case setup, allowing for direct interfaces between the optimizer and aircraft/engine performance codes. A two-level optimization scheme is employed and is shown to improve search performance compared to the basic PSO algorithm. The effectiveness of the proposed methodology is demonstrated in a hypothetical engine upgrade scenario for the F-4 aircraft considering the replacement of the aircraft’s J79 engine with the EJ200; a clear advantage of the EJ200-equipped configuration is unveiled, resulting, on average, in 15% faster climbs with 20% less fuel.
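The basic particle swarm update underlying this approach can be sketched generically: each particle carries a velocity pulled toward its own best point and the swarm's best point. The sketch below minimizes a 2-D quadratic as a stand-in for the fuel/time climb objectives; all coefficients are illustrative and this is the plain single-level PSO, not the paper's two-level scheme.

```python
import random

def pso(f, dim=2, n_particles=30, iters=200, seed=1):
    """Minimal particle swarm optimizer minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in x]                 # each particle's best position
    gbest = min(pbest, key=f)[:]              # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull (illustrative weights)
                v[i][d] = (0.7 * v[i][d]
                           + 1.5 * r1 * (pbest[i][d] - x[i][d])
                           + 1.5 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i][:]
                if f(x[i]) < f(gbest):
                    gbest = x[i][:]
    return gbest

best = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2)
assert abs(best[0] - 3) < 0.1 and abs(best[1] + 1) < 0.1
```

The appeal noted in the abstract is visible here: the optimizer needs only function evaluations of `f`, so any aircraft/engine performance code can be plugged in as a black box.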

  15. Near-optimal integration of facial form and motion.

    Science.gov (United States)

    Dobs, Katharina; Ma, Wei Ji; Reddy, Leila

    2017-09-08

    Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been fairly well established that humans use an optimal strategy when integrating low-level cues, weighting them in proportion to their relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
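The optimal strategy referred to here is the standard maximum-likelihood rule: weight each cue by its inverse variance, so the fused estimate lies closer to the more reliable cue and is itself more reliable than either cue alone. A sketch with illustrative numbers:

```python
def integrate(estimates, variances):
    """Inverse-variance (maximum likelihood) cue combination."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, estimates)) / total
    combined_variance = 1.0 / total  # always below the smallest input variance
    return mean, combined_variance

# form cue says 0.6 (variance 0.04); motion cue says 0.9 (variance 0.08)
mean, var = integrate([0.6, 0.9], [0.04, 0.08])
assert abs(mean - 0.7) < 1e-9   # pulled toward the more reliable form cue
assert var < min(0.04, 0.08)    # fused estimate is more reliable than either
```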

  16. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization, or level of adaptation, of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic approaches to measuring the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the

  17. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid by another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a way to gauge the difficulty of finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment, and the model most used in the two approaches, which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  18. Modeling of fission product release in integral codes

    International Nuclear Information System (INIS)

    Obaidurrahman, K.; Raman, Rupak K.; Gaikwad, Avinash J.

    2014-01-01

    The Great Tohoku earthquake and tsunami that struck the Fukushima-Daiichi nuclear power station on March 11, 2011 intensified the need for detailed nuclear safety research, and with this objective all streams associated with severe accident phenomenology are being revisited thoroughly. The present paper covers an overview of the state-of-the-art FP release models in use, the important phenomena considered in semi-mechanistic models, and knowledge gaps in present FP release modeling. The capability of the FP release module ELSA of the ASTEC integral code to appropriately predict FP release under several diversified core-degradation conditions is also demonstrated. The use of semi-mechanistic fission product release models at AERB in source-term estimation is briefed. (author)

  19. Ground Vehicle System Integration (GVSI) and Design Optimization Model

    National Research Council Canada - National Science Library

    Horton, William

    1996-01-01

    This report documents the Ground Vehicle System Integration (GVSI) and Design Optimization Model GVSI is a top-level analysis tool designed to support engineering tradeoff studies and vehicle design optimization efforts...

  20. Committed to the Honor Code: An Investment Model Analysis of Academic Integrity

    Science.gov (United States)

    Dix, Emily L.; Emery, Lydia F.; Le, Benjamin

    2014-01-01

    Educators worldwide face challenges surrounding academic integrity. The development of honor codes can promote academic integrity, but understanding how and why honor codes affect behavior is critical to their successful implementation. To date, research has not examined how students' "relationship" to an honor code predicts…

  1. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng; Alloum, Amira; Al-Naffouri, Tareq Y.

    2016-01-01

    present a configuration of the factors of the second-parity formula, such that the arithmetic complexity can reach the optimal complexity bound when the code length approaches infinity. In the proposed approach, the intermediate data used for the first

  2. Integration of Rotor Aerodynamic Optimization with the Conceptual Design of a Large Civil Tiltrotor

    Science.gov (United States)

    Acree, C. W., Jr.

    2010-01-01

    Coupling of aeromechanics analysis with vehicle sizing is demonstrated with the CAMRAD II aeromechanics code and NDARC sizing code. The example is optimization of cruise tip speed with rotor/wing interference for the Large Civil Tiltrotor (LCTR2) concept design. Free-wake models were used for both rotors and the wing. This report is part of a NASA effort to develop an integrated analytical capability combining rotorcraft aeromechanics, structures, propulsion, mission analysis, and vehicle sizing. The present paper extends previous efforts by including rotor/wing interference explicitly in the rotor performance optimization and implicitly in the sizing.

  3. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  4. PlayNCool: Opportunistic Network Coding for Local Optimization of Routing in Wireless Mesh Networks

    DEFF Research Database (Denmark)

    Pahlevani, Peyman; Roetter, Daniel Enrique Lucani; Pedersen, Morten Videbæk

    2013-01-01

    This paper introduces PlayNCool, an opportunistic protocol with local optimization based on network coding to increase the throughput of a wireless mesh network (WMN). PlayNCool aims to enhance current routing protocols by (i) allowing random linear network coding transmissions end-to-end, (ii) r...

  5. Asset management: integrated software optimizes production performance

    Energy Technology Data Exchange (ETDEWEB)

    Polczer, S.

    1998-06-01

    Two new multi-dimensional databases, which expand the 'row and column' concept of spreadsheets into multiple categories of data called dimensions, are described. These integrated software packages provide the foundation for industry players such as Poco Petroleum Ltd and Numac Energy Inc to gain a competitive advantage, by overhauling their respective data collection and retrieval systems to allow for timely cost analysis and financial reporting. Energy Warehouse, an on-line analytical processing product marketed by SysGold Ltd, is one of the software products described. It gathers various sources of information, allows advanced searches and generates reports previously unavailable in other conventional financial accounting systems. The second product discussed, the Canadian Upstream Energy System (CUES), is an on-line analytical processing system developed by Oracle Corporation and Calgary-based Applied Terravision Systems (ATS) Inc. CUES combines Oracle's universal data server and software development tools with ATS's upstream financial, land, geotechnical and production applications. The software also allows for optimization of facilities, analysis of production efficiencies and comparison of performance against industry standards.

  6. SWAT3.1 - the integrated burnup code system driving continuous energy Monte Carlo codes MVP and MCNP

    International Nuclear Information System (INIS)

    Suyama, Kenya; Mochizuki, Hiroki; Takada, Tomoyuki; Ryufuku, Susumu; Okuno, Hiroshi; Murazaki, Minoru; Ohkubo, Kiyoshi

    2009-05-01

    The integrated burnup calculation code system SWAT combines the neutronics calculation code SRAC, which is widely used in Japan, with the point burnup calculation code ORIGEN2. It has been used to evaluate the composition of uranium, plutonium, minor actinides and fission products in spent nuclear fuel. Based on this idea, the integrated burnup calculation code system SWAT3.1 was developed by combining the continuous energy Monte Carlo codes MVP and MCNP with ORIGEN2. This enables us to treat arbitrary fuel geometries and to generate the effective cross section data used in the burnup calculation with few approximations. This report describes the outline, input data instructions and several calculation examples. (author)

  7. Applications of ASTEC integral code on a generic CANDU 6

    Energy Technology Data Exchange (ETDEWEB)

    Radu, Gabriela, E-mail: gabriela.radu@nuclear.ro [Institute for Nuclear Research, Campului 1, 115400 Mioveni, Arges (Romania); Prisecaru, Ilie [Power Engineering Department, University “Politehnica” of Bucharest, 313 Splaiul Independentei, Bucharest (Romania)

    2015-05-15

    Highlights: • Short overview of the models included in the ASTEC MCCI module. • MEDICIS/CPA coupled calculations for a generic CANDU6 reactor. • Two cases taking into account different pool/concrete interface models. - Abstract: In case of a hypothetical severe accident in a nuclear power plant, the corium consisting of the molten reactor core and internal structures may flow onto the concrete floor of containment building. This would cause an interaction between the molten corium and the concrete (MCCI), in which the heat transfer from the hot melt to the concrete would cause the decomposition and the ablation of the concrete. The potential hazard of this interaction is the loss of integrity of the containment building and the release of fission products into the environment due to the possibility of a concrete foundation melt-through or containment over-pressurization by the gases produced from the decomposition of the concrete or by the inflammation of combustible gases. In the safety assessment of nuclear power plants, it is necessary to know the consequences of such a phenomenon. The paper presents an example of application of the ASTECv2 code to a generic CANDU6 reactor. This concerns the thermal-hydraulic behaviour of the containment during molten core–concrete interaction in the reactor vault. The calculations were carried out with the help of the MEDICIS MCCI module and the CPA containment module of ASTEC code coupled through a specific prediction–correction method, which consists in describing the heat exchanges with the vault walls and partially absorbent gases. Moreover, the heat conduction inside the vault walls is described. Two cases are presented in this paper taking into account two different heat transfer models at the pool/concrete interface and siliceous concrete. The corium pool configuration corresponds to a homogeneous configuration with a detailed description of the upper crust.

  8. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension

  9. Transoptr - a second order beam transport design code with automatic internal optimization and general constraints

    International Nuclear Information System (INIS)

    Heighway, E.A.

    1980-07-01

    A second order beam transport design code with parametric optimization is described. The code analyzes the transport of charged particle beams through a user defined magnet system. The magnet system parameters are varied (within user defined limits) until the properties of the transported beam and/or the system transport matrix match those properties requested by the user. The code uses matrix formalism to represent the transport elements and optimization is achieved using the variable metric method. Any constraints that can be expressed algebraically may be included by the user as part of his design. Instruction in the use of the program is given. (auth)
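The matrix formalism mentioned in this abstract works, at first order, by representing each beamline element as a transfer matrix acting on the phase-space vector (x, x′) and multiplying the matrices along the line. A sketch with illustrative element values (thin-lens optics only; not TRANSOPTR's actual second-order implementation):

```python
def drift(L):
    """Transfer matrix of a field-free drift of length L."""
    return [[1.0, L], [0.0, 1.0]]

def thin_lens(f):
    """Transfer matrix of a thin lens of focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def matmul(A, B):
    """2x2 matrix product A @ B."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(M, ray):
    """Transport the ray (x, x') through the system matrix M."""
    return [M[0][0] * ray[0] + M[0][1] * ray[1],
            M[1][0] * ray[0] + M[1][1] * ray[1]]

# A lens of focal length 2 followed by a drift of length 2: the system
# matrix is the product, applied right-to-left along the beamline.
M = matmul(drift(2.0), thin_lens(2.0))
x, xp = apply(M, [1.0, 0.0])  # ray parallel to the axis at x = 1
assert abs(x) < 1e-12         # it crosses the axis at the focal plane
```

An optimizer in the spirit of the abstract varies element parameters (here, `f` and `L`) until entries of the system matrix, or properties of the transported beam, match the requested values.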

  10. Characterization and Optimization of LDPC Codes for the 2-User Gaussian Multiple Access Channel

    Directory of Open Access Journals (Sweden)

    Declercq David

    2007-01-01

    Full Text Available We address the problem of designing good LDPC codes for the Gaussian multiple access channel (MAC. The framework we choose is to design multiuser LDPC codes with joint belief propagation decoding on the joint graph of the 2-user case. Our main result compared to existing work is to express analytically EXIT functions of the multiuser decoder with two different approximations of the density evolution. This allows us to propose a very simple linear programming optimization for the complicated problem of LDPC code design with joint multiuser decoding. The stability condition for our case is derived and used in the optimization constraints. The codes that we obtain for the 2-user case are quite good for various rates, especially if we consider the very simple optimization procedure.

  11. Optimizing fusion PIC code performance at scale on Cori Phase 2

    Energy Technology Data Exchange (ETDEWEB)

    Koskela, T. S.; Deslippe, J.

    2017-07-23

    In this paper we present the results of optimizing the performance of the gyrokinetic full-f fusion PIC code XGC1 on the Cori Phase Two Knights Landing system. The code has undergone substantial development to enable the use of vector instructions in its most expensive kernels within the NERSC Exascale Science Applications Program. We study the single-node performance of the code on an absolute scale using the roofline methodology to guide optimization efforts. We have obtained 2x speedups in single node performance due to enabling vectorization and performing memory layout optimizations. On multiple nodes, the code is shown to scale well up to 4000 nodes, near half the size of the machine. We discuss some communication bottlenecks that were identified and resolved during the work.

  12. Development of a graphical interface computer code for reactor fuel reloading optimization

    International Nuclear Information System (INIS)

    Do Quang Binh; Nguyen Phuoc Lan; Bui Xuan Huy

    2007-01-01

    This report represents the results of the project performed in 2007. The aim of this project is to develop a graphical interface computer code that allows refueling engineers to design fuel reloading patterns for research reactor using simulated graphical model of reactor core. Besides, this code can perform refueling optimization calculations based on genetic algorithms as well as simulated annealing. The computer code was verified based on a sample problem, which relies on operational and experimental data of Dalat research reactor. This code can play a significant role in in-core fuel management practice at nuclear research reactor centers and in training. (author)
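Of the two search methods named, simulated annealing is the easier to sketch generically: propose a swap of two fuel positions, always accept improvements, and accept worsening moves with a probability that shrinks as the temperature cools. The toy cost below merely stands in for the real core-physics objective, which the report does not reduce to a formula here.

```python
import math
import random

def anneal(cost, state, rng, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing over permutations via pairwise swaps."""
    cur = state[:]
    best = cur[:]
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(cur)), 2)
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]     # swap two assemblies
        d = cost(cand) - cost(cur)
        # accept improvements always, worsenings with Boltzmann probability
        if d < 0 or rng.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur[:]
        t *= cooling                             # geometric cooling schedule
    return best

# toy objective: assembly k belongs at position k (minimum cost is 0)
cost = lambda s: sum(abs(p - k) for k, p in enumerate(s))
rng = random.Random(0)
start = list(range(8))
rng.shuffle(start)
result = anneal(cost, start, rng)
assert cost(result) == 0
```

A genetic-algorithm variant would replace the single-state swap loop with a population, crossover, and mutation, but the black-box use of `cost` is the same, which is why both methods suit reloading-pattern design.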

  13. Product code optimization for determinate state LDPC decoding in robust image transmission.

    Science.gov (United States)

    Thomos, Nikolaos; Boulgouris, Nikolaos V; Strintzis, Michael G

    2006-08-01

    We propose a novel scheme for error-resilient image transmission. The proposed scheme employs a product coder consisting of low-density parity check (LDPC) codes and Reed-Solomon codes in order to deal effectively with bit errors. The efficiency of the proposed scheme is based on the exploitation of determinate symbols in Tanner graph decoding of LDPC codes and a novel product code optimization technique based on error estimation. Experimental evaluation demonstrates the superiority of the proposed system in comparison to recent state-of-the-art techniques for image transmission.

  14. On CAD-integrated Structural Topology and Design Optimization

    DEFF Research Database (Denmark)

    Olhoff, Niels; Bendsøe, M.P.; Rasmussen, John

    1991-01-01

    Concepts underlying an interactive CAD-based engineering design optimization system are developed, and methods of optimizing the topology, shape and sizing of mechanical components are presented. These methods are integrated in the system, and the method for determining the optimal topology is used...

  15. HCPCS Coding: An Integral Part of Your Reimbursement Strategy.

    Science.gov (United States)

    Nusgart, Marcia

    2013-12-01

    The first step to a successful reimbursement strategy is to ensure that your wound care product has the most appropriate Healthcare Common Procedure Coding System (HCPCS), or billing, code for your product. The correct HCPCS code plays an essential role in patient access to new and existing technologies. When devising a strategy to obtain a HCPCS code for its product, companies must consider a number of factors as follows: (1) Has the product gone through the Food and Drug Administration (FDA) regulatory process or does it need to do so? Will the FDA code designation impact which HCPCS code will be assigned to your product? (2) In what "site of service" do you intend to market your product? Where will your customers use the product? Which coding system (CPT® or HCPCS) applies to your product? (3) Does a HCPCS code for a similar product already exist? Does your product fit under the existing HCPCS code? (4) Does your product need a new HCPCS code? What is the linkage, if any, between coding, payment, and coverage for the product? Researchers and companies need to start early and place the same emphasis on a reimbursement strategy as they do on a regulatory strategy. Your reimbursement strategy staff should be involved early in the process, preferably during product research and development and clinical trial discussions.

  16. Technology for Building Systems Integration and Optimization – Landscape Report

    Energy Technology Data Exchange (ETDEWEB)

    William Goetzler, Matt Guernsey, Youssef Bargach

    2018-01-31

    BTO's Commercial Building Integration (CBI) program helps advance a range of innovative building integration and optimization technologies and solutions, paving the way for high-performing buildings that could use 50-70% less energy than typical buildings. CBI’s work focuses on early stage technology innovation, with an emphasis on how components and systems work together and how whole buildings are integrated and optimized. This landscape study outlines the current body of knowledge, capabilities, and the broader array of solutions supporting integration and optimization in commercial buildings. CBI seeks to support solutions for both existing buildings and new construction, which often present very different challenges.

  17. Quo vadis code optimization in high energy physics

    International Nuclear Information System (INIS)

    Jarp, S.

    1994-01-01

    Although performance tuning and optimization can be considered less critical than in the past, there are still many High Energy Physics (HEP) applications and application domains that can profit from such an undertaking. In CERN's CORE (Centrally Operated RISC Environment), where all major RISC vendors are present, this implies an understanding of the various computer architectures, instruction sets and performance analysis tools from each of these vendors. This paper discusses some initial observations after having evaluated the situation and makes some recommendations for further progress.

  18. Computational analysis of battery optimized reactor integral system

    International Nuclear Information System (INIS)

    Hwang, J. S.; Son, H. M.; Jeong, W. S.; Kim, T. W.; Suh, K. Y.

    2007-01-01

    Battery Optimized Reactor Integral System (BORIS) is being developed as a multi-purpose fast spectrum reactor cooled by lead (Pb). BORIS is an integral optimized reactor with an ultra-long life core. BORIS aims to satisfy various energy demands while maintaining inherent safety with the primary coolant Pb and improving economics. BORIS is being designed to generate 23 MWth with 10 MWe for at least twenty consecutive years without refueling and to meet the Generation IV Nuclear Energy System goals of sustainability, safety, reliability, and economics. BORIS is conceptualized to be used as the main power and heat source for remote areas and barren lands, and is also considered for deployment for desalination purposes. BORIS, based on modular components to be viable for rapid construction and easy maintenance, adopts an integrated heat exchanger system operated by natural circulation of Pb without pumps to realize a small-sized reactor. The BORIS primary system is designed through an optimization study. Thermal-hydraulic characteristics during a reactor steady state, with the heat source and sink provided by the core and heat exchanger, respectively, have been analyzed by utilizing a computational fluid dynamics code and hand calculations based on first principles. This paper analyzes a transient condition of the BORIS primary system. The Pb coolant was selected for its lower chemical activity with air or water than sodium (Na) and its good thermal characteristics. Reactor transient conditions such as core blockage, heat exchanger failure, and loss of heat sink were selected for this study. Blockage in the core or its inlet structure causes localized flow starvation in one or several fuel assemblies. Coolant loop blockages cause a more or less uniform flow reduction across the core, which may trigger a coolant temperature transient. General conservation equations were applied to model the primary system transients. Numerical approaches were adopted to discretize the governing
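    The transient analysis described in this abstract rests on lumped conservation equations for the primary loop. As a hedged illustration only (a toy single-node energy balance with invented power, coolant-mass, and heat-transfer values; not the BORIS model or its CFD analysis), explicit Euler integration shows how a loss-of-heat-sink scenario drives the coolant temperature:

```python
def coolant_transient(power_w, mass_kg, cp, ua, t_sink, t0, dt, steps):
    """Single-node coolant energy balance,
        m * cp * dT/dt = P_core - UA * (T - T_sink),
    integrated with explicit Euler. Illustrative first-principles toy."""
    t = t0
    for _ in range(steps):
        t += dt * (power_w - ua * (t - t_sink)) / (mass_kg * cp)
    return t

# Nominal operation settles at T_sink + P/UA; with UA = 0 (loss of
# heat sink) the node heats up linearly at P / (m * cp).
t_nominal = coolant_transient(1.0e3, 100.0, 100.0, 50.0, 400.0, 400.0, 1.0, 5000)
t_no_sink = coolant_transient(1.0e3, 100.0, 100.0, 0.0, 400.0, 400.0, 1.0, 5000)
```

The explicit scheme is stable here because the step (1 s) is far below the thermal time constant m·cp/UA (200 s); a real system code would use an implicit or adaptive integrator.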

  19. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
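    The trade-off discussed in this record is easier to see with the mechanics of RLNC in hand. The sketch below is an illustration over GF(2), the cheapest field for constrained sensor nodes (not the authors' Tmote Sky implementation): a generation of packets is combined with random coefficient vectors, and the sink decodes by Gaussian elimination once it has collected a full-rank set.

```python
import random

def rlnc_encode(packets, rng):
    # One coded packet: random GF(2) coefficients, XOR-combined payload.
    coeffs = [rng.randint(0, 1) for _ in packets]
    payload = bytes(len(packets[0]))
    for c, p in zip(coeffs, packets):
        if c:
            payload = bytes(a ^ b for a, b in zip(payload, p))
    return coeffs, payload

def rlnc_decode(generation_size, received):
    # Incremental Gaussian elimination over GF(2) on [coeffs | payload].
    basis = {}
    for coeffs, payload in received:
        coeffs, payload = list(coeffs), bytearray(payload)
        for col in range(generation_size):
            if not coeffs[col]:
                continue
            if col in basis:
                bc, bp = basis[col]
                for i in range(generation_size):
                    coeffs[i] ^= bc[i]
                for i in range(len(payload)):
                    payload[i] ^= bp[i]
            else:
                basis[col] = (coeffs, payload)
                break
    if len(basis) < generation_size:
        return None  # not yet full rank: keep collecting packets
    # Back-substitute from the last pivot so each row becomes a unit vector.
    for col in range(generation_size - 1, -1, -1):
        coeffs, payload = basis[col]
        for c2 in range(col + 1, generation_size):
            if coeffs[c2]:
                bc, bp = basis[c2]
                for i in range(generation_size):
                    coeffs[i] ^= bc[i]
                for i in range(len(payload)):
                    payload[i] ^= bp[i]
    return [bytes(basis[c][1]) for c in range(generation_size)]
```

The coding parameters the paper optimizes (field size, generation size) trade exactly these costs: a larger field lowers the chance of linearly dependent (wasted) packets, while GF(2) keeps the per-packet arithmetic to XORs.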

  20. Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization

    Science.gov (United States)

    Green, Lawrence; Carle, Alan; Fagan, Mike

    1999-01-01

    Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop
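    The forward-versus-reverse distinction is the heart of the result above: reverse mode yields the sensitivities with respect to all inputs in a single backward sweep, which is why the adjoint gradient costs only a few function evaluations regardless of the number of design variables. A minimal hand-rolled sketch of the idea (Python rather than FORTRAN, and in no way the ADJIFOR tool itself):

```python
class Var:
    # Minimal reverse-mode AD node: a value plus links to parents with
    # local partial derivatives.
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # tuples of (parent Var, local partial)
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # One reverse sweep propagates d(output)/d(node) to every input.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for parent, _ in v.parents:
                    visit(parent)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for parent, local in v.parents:
                parent.grad += local * v.grad

# All three sensitivities of f(x) = (x0*x1 + x2) * x0 from ONE sweep.
xs = [Var(2.0), Var(3.0), Var(4.0)]
f = (xs[0] * xs[1] + xs[2]) * xs[0]
f.backward()
```

Forward mode would need one sweep per input; with hundreds to thousands of shape parameters, as in the CFL3D application above, the reverse sweep's fixed cost is the decisive advantage.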

  1. Efficacy of Code Optimization on Cache-Based Processors

    Science.gov (United States)

    VanderWijngaart, Rob F.; Saphir, William C.; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    In this paper a number of techniques for improving the cache performance of a representative piece of numerical software are presented. Target machines are popular processors from several vendors: MIPS R5000 (SGI Indy), MIPS R8000 (SGI PowerChallenge), MIPS R10000 (SGI Origin), DEC Alpha EV4 + EV5 (Cray T3D & T3E), IBM RS6000 (SP Wide-node), Intel PentiumPro (Ames' Whitney), Sun UltraSparc (NERSC's NOW). The optimizations all attempt to increase the locality of memory accesses. But they meet with rather varied and often counterintuitive success on the different computing platforms. We conclude that it may be genuinely impossible to obtain portable performance on the current generation of cache-based machines. At the least, it appears that the performance of modern commodity processors cannot be described with parameters defining the cache alone.
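    One of the standard locality techniques in this family is cache blocking (tiling). The sketch below shows the transformation on a matrix transpose; in pure Python the point is only the access pattern, but the same loop structure in Fortran or C determines whether the working set fits in cache (the block size 16 is an arbitrary illustrative choice, and exactly the kind of parameter whose best value varies per platform):

```python
def transpose_blocked(a, n, block=16):
    # Transpose an n x n row-major matrix in block x block tiles so the
    # working set of reads and writes stays cache-sized. Here we only
    # show that the transformation preserves the result.
    out = [0.0] * (n * n)
    for ii in range(0, n, block):
        for jj in range(0, n, block):
            for i in range(ii, min(ii + block, n)):
                for j in range(jj, min(jj + block, n)):
                    out[j * n + i] = a[i * n + j]
    return out
```
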

  2. Development of the integrated system reliability analysis code MODULE

    International Nuclear Information System (INIS)

    Han, S.H.; Yoo, K.J.; Kim, T.W.

    1987-01-01

    The major components in a system reliability analysis are the determination of cut sets, importance measures, and uncertainty analysis. Various computer codes have been used for these purposes; for example, SETS and FTAP to determine cut sets, Importance for importance calculations, and Sample, CONINT, and MOCUP for uncertainty analysis. Problems arise when these codes are run one after another with inputs and outputs that are not linked, which can result in errors when preparing the input for each code. The code MODULE was developed to carry out the above calculations simultaneously without manually linking inputs and outputs between codes. MODULE can also prepare input for SETS for the case of a large fault tree that cannot be handled by MODULE itself. The flow diagram of the MODULE code is shown. To verify the MODULE code, two examples were selected and the results and computation times compared with those of SETS, FTAP, CONINT, and MOCUP on both a Cyber 170-875 and an IBM PC/AT. The two examples are fault trees of the auxiliary feedwater system (AFWS) of Korea Nuclear Units (KNU)-1 and -2, which have 54 gates and 115 events, and 39 gates and 92 events, respectively. The MODULE code has the advantage that it can calculate the cut sets, importances, and uncertainties in a single run with little increase in computing time over other codes, and that it can be used on personal computers
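    For readers unfamiliar with the first of these computations, the top-down (MOCUS-style) expansion of a fault tree into its minimal cut sets can be sketched as follows. This is an illustration of the standard algorithm on a toy tree, not the MODULE code itself:

```python
def minimal_cut_sets(gates, top):
    # gates: name -> ('AND' | 'OR', [children]); anything not in `gates`
    # is a basic event. Top-down expansion: an AND gate enlarges a cut
    # set, an OR gate splits it into alternatives; finally any set that
    # is a superset of another is pruned as non-minimal.
    sets = [frozenset([top])]
    expanded = True
    while expanded:
        expanded = False
        next_sets = []
        for s in sets:
            gate = next((g for g in s if g in gates), None)
            if gate is None:
                next_sets.append(s)   # only basic events left
                continue
            expanded = True
            op, children = gates[gate]
            rest = s - {gate}
            if op == 'AND':
                next_sets.append(rest | set(children))
            else:
                next_sets.extend(rest | {c} for c in children)
        sets = next_sets
    return {s for s in sets if not any(t < s for t in sets)}
```

On real trees of the size quoted above, the number of intermediate sets can explode, which is why production codes add truncation and more careful minimization.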

  3. Integrated Validation System for a Thermal-hydraulic System Code, TASS/SMR-S

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hee-Kyung; Kim, Hyungjun; Kim, Soo Hyoung; Hwang, Young-Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kim, Hyeon-Soo [Chungnam National University, Daejeon (Korea, Republic of)

    2015-10-15

    Development, including enhancement and modification, of a thermal-hydraulic system computer code is indispensable for a new reactor such as SMART. Usually, a thermal-hydraulic system code is validated by comparison with the results of corresponding physical effect tests; in the reactor safety field, a similar concept, referred to as separate effect tests, has been used for a long time. However, there are a great many test data for comparison, because many separate effect tests and integral effect tests are required for code validation, and it is not easy for a code developer to validate a computer code whenever a code modification occurs. IVS automatically produces graphs comparing the code calculation results with the corresponding test results. IVS was developed for the validation of the TASS/SMR-S code. Code validation is achieved by comparing code calculation results with the corresponding test results, and this comparison is presented as a graph for convenience. IVS is useful before the release of a new code version: the code developer can validate code results easily using IVS. Even during code development, IVS can be used to validate code modifications, so the developer can gain confidence in a modification easily and quickly and is freed from tedious and lengthy validation work. The popular software introduced in IVS provides good usability and portability.

  4. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
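    The dynamic-programming idea can be illustrated on the simplest version of the event-localization problem: choosing segment boundaries that minimize the within-segment approximation error. The sketch below uses piecewise-constant segments and an O(k·n²) recurrence; it is a toy analogue of globally optimal boundary placement, not the paper's TD model or the MELP coder integration:

```python
def optimal_segments(x, k):
    # Partition sequence x into k contiguous segments minimizing the
    # total within-segment squared deviation from the segment mean.
    n = len(x)
    def cost(i, j):  # cost of a single segment x[i:j]
        seg = x[i:j]
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    INF = float('inf')
    best = [[INF] * (n + 1) for _ in range(k + 1)]
    back = [[0] * (n + 1) for _ in range(k + 1)]
    best[0][0] = 0.0
    for seg in range(1, k + 1):
        for j in range(seg, n + 1):
            for i in range(seg - 1, j):
                c = best[seg - 1][i] + cost(i, j)
                if c < best[seg][j]:
                    best[seg][j] = c
                    back[seg][j] = i
    # Walk the back-pointers to recover the segment boundaries.
    bounds, j = [], n
    for seg in range(k, 0, -1):
        i = back[seg][j]
        bounds.append((i, j))
        j = i
    return list(reversed(bounds)), best[k][n]
```

A greedy or stability-criterion placement (as in the SBEL algorithm discussed above) can miss the global optimum; the DP guarantees it for the chosen segment cost.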

  5. Multi-Objective Optimization in Physical Synthesis of Integrated Circuits

    CERN Document Server

    Papa, David A.

    2013-01-01

    This book introduces techniques that advance the capabilities and strength of modern software tools for physical synthesis, with the ultimate goal to improve the quality of leading-edge semiconductor products.  It provides a comprehensive introduction to physical synthesis and takes the reader methodically from first principles through state-of-the-art optimizations used in cutting edge industrial tools. It explains how to integrate chip optimizations in novel ways to create powerful circuit transformations that help satisfy performance requirements. Broadens the scope of physical synthesis optimization to include accurate transformations operating between the global and local scales; Integrates groups of related transformations to break circular dependencies and increase the number of circuit elements that can be jointly optimized to escape local minima;  Derives several multi-objective optimizations from first observations through complete algorithms and experiments; Describes integrated optimization te...

  6. A new 3-D integral code for computation of accelerator magnets

    International Nuclear Information System (INIS)

    Turner, L.R.; Kettunen, L.

    1991-01-01

    For computing accelerator magnets, integral codes have several advantages over finite element codes; far-field boundaries are treated automatically, and the computed fields in the bore region satisfy Maxwell's equations exactly. A new integral code employing edge elements rather than nodal elements has overcome the difficulties associated with earlier integral codes. By the use of field integrals (potential differences) as solution variables, the number of unknowns is reduced to one less than the number of nodes. Two examples, a hollow iron sphere and the dipole magnet of the Advanced Photon Source injector synchrotron, show the capability of the code. The CPU time requirements are comparable to those of three-dimensional (3-D) finite-element codes. Experiments show that in practice it can realize much of the potential CPU time saving that parallel processing makes possible. 8 refs., 4 figs., 1 tab

  7. Pre-Service Teachers' Perception of Quick Response (QR) Code Integration in Classroom Activities

    Science.gov (United States)

    Ali, Nagla; Santos, Ieda M.; Areepattamannil, Shaljan

    2017-01-01

    Quick Response (QR) codes have been discussed in the literature as adding value to teaching and learning. Despite their potential in education, more research is needed to inform practice and advance knowledge in this field. This paper investigated the integration of the QR code in classroom activities and the perceptions of the integration by…

  8. An Auto sequence Code to Integrate a Neutron Unfolding Code with thePC-MCA Accuspec

    International Nuclear Information System (INIS)

    Darsono

    2000-01-01

    In neutron spectrometry using the proton recoil method, a neutron unfolding code is needed to unfold the measured proton spectrum into the neutron spectrum. In the existing neutron spectrometry system, which was successfully installed last year, the unfolding was done separately. This manuscript reports that an auto sequence code to integrate the neutron unfolding code UNFSPEC.EXE with the software facility of the PC-MCA Accuspec has been written and run successfully, so that the new neutron spectrometry system has become compact. The auto sequence code was written based on the rules of the application program facility of the PC-MCA Accuspec and was then compiled using AC-EXE. Tests of the auto sequence code showed that binning widths of 20, 30, and 40 give slightly different spectrum shapes. A binning width of around 30 gives a better spectrum, in the sense of a smaller error compared to the others. (author)

  9. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement......
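    Underlying any rate-distortion-complexity controller of this kind is a generalized Lagrangian cost. The fragment below shows only that common core, with made-up candidate modes and weights (the paper's game-theoretic allocation is not reproduced): each candidate quantization/partition choice is scored as J = D + λ·R + γ·C and the minimizer is selected.

```python
def choose_mode(candidates, lam, gamma):
    # Generalized Lagrangian mode decision: J = D + lambda*R + gamma*C,
    # trading distortion against rate and computational complexity.
    return min(candidates,
               key=lambda m: m['D'] + lam * m['R'] + gamma * m['C'])

# Hypothetical candidates: distortion, bits, and complexity units.
modes = [
    {'name': 'qp22_deep',    'D': 1.0, 'R': 100.0, 'C': 10.0},
    {'name': 'qp32_shallow', 'D': 4.0, 'R': 40.0,  'C': 6.0},
    {'name': 'skip',         'D': 9.0, 'R': 2.0,   'C': 1.0},
]
```

Raising γ models a power-constrained device: the controller abandons the lowest-distortion mode as soon as its extra complexity outweighs the quality gain.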

  10. THE OPTIMAL CONTROL IN THE MODEL OF NETWORK SECURITY FROM MALICIOUS CODE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available The paper deals with a mathematical model of network security. The model is described in terms of nonlinear optimal control. As the quality criterion of the control problem, the cost of the total damage inflicted by the malicious code is chosen, under an additional restriction: the number of recovered nodes is to be maximized. The Pontryagin maximum principle for constructing the optimal decisions is formulated. The number of switching points of the optimal control is found. The explicit form of the optimal control is given using the Lagrange multiplier method.

  11. Power Optimization of Wireless Media Systems With Space-Time Block Codes

    OpenAIRE

    Yousefi'zadeh, Homayoun; Jafarkhani, Hamid; Moshfeghi, Mehran

    2004-01-01

    We present analytical and numerical solutions to the problem of power control in wireless media systems with multiple antennas. We formulate a set of optimization problems aimed at minimizing the total power consumption of wireless media systems subject to a given level of QoS and an available bit rate. Our formulation takes into consideration the power consumption related to source coding, channel coding, and transmission over multiple-transmit antennas. In our study, we consider Gauss-Markov and...

  12. Differentially Encoded LDPC Codes—Part II: General Case and Code Optimization

    Directory of Open Access Journals (Sweden)

    Li, Tiffany Jing

    2008-01-01

    Full Text Available This two-part series of papers studies the theory and practice of differentially encoded low-density parity-check (DE-LDPC) codes, especially in the context of noncoherent detection. Part I showed that a special class of DE-LDPC codes, product accumulate codes, perform very well with both coherent and noncoherent detections. The analysis here reveals that a conventional LDPC code, however, is not well suited to differential coding and does not, in general, deliver a desirable performance when detected noncoherently. Through extrinsic information transfer (EXIT analysis and a modified "convergence-constraint" density evolution (DE method developed here, we provide a characterization of the type of LDPC degree profiles that work in harmony with differential detection (or a recursive inner code in general, and demonstrate how to optimize these LDPC codes. The convergence-constraint method provides a useful extension to the conventional "threshold-constraint" method, and can match an outer LDPC code to any given inner code with the imperfectness of the inner decoder taken into consideration.

  13. Spectral-Amplitude-Coded OCDMA Optimized for a Realistic FBG Frequency Response

    Science.gov (United States)

    Penon, Julien; El-Sahn, Ziad A.; Rusch, Leslie A.; Larochelle, Sophie

    2007-05-01

    We develop a methodology for numerical optimization of fiber Bragg grating frequency response to maximize the achievable capacity of a spectral-amplitude-coded optical code-division multiple-access (SAC-OCDMA) system. The optimal encoders are realized, and we experimentally demonstrate an incoherent SAC-OCDMA system with seven simultaneous users. We report a bit error rate (BER) of 2.7 x 10^-8 at 622 Mb/s for a fully loaded network (seven users) using a 9.6-nm optical band. We achieve error-free transmission (BER < 1 x 10^-9) for up to five simultaneous users.

  14. Integrated Multidisciplinary Optimization Objects, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — During Phase I, M4 Engineering integrated a prototype system into OpenMDAO, a NASA GRC open-source framework. This prototype system was a proof-of-concept that M4...

  15. Integrated modeling of ozonation for optimization of drinking water treatment

    NARCIS (Netherlands)

    van der Helm, A.W.C.

    2007-01-01

    Drinking water treatment plants automation becomes more sophisticated, more on-line monitoring systems become available and integration of modeling environments with control systems becomes easier. This gives possibilities for model-based optimization. In operation of drinking water treatment

  16. Optimal stability polynomials for numerical integration of initial value problems

    KAUST Repository

    Ketcheson, David I.; Ahmadia, Aron

    2013-01-01

    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step

  17. DYNAMIC OPTIMAL BUDGET ALLOCATION FOR INTEGRATED MARKETING CONSIDERING PERSISTENCE

    OpenAIRE

    SHIZHONG AI; RONG DU; QIYING HU

    2010-01-01

    Aiming at forming dynamic optimal integrated marketing policies, we build a budget allocation model considering both current effects and sustained ones. The model includes multiple time periods and multiple marketing tools which interact through a common resource pool as well as through delayed cross influences on each other's sales, reflecting the nature of "integrated marketing" and its dynamics. In our study, marginal analysis is used to illuminate the structure of optimal policy. We deriv...

  18. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    International Nuclear Information System (INIS)

    Li, Jia; Jiang, Kecheng; Zhang, Xiaokang; Nie, Xingchen; Zhu, Qinjun; Liu, Songlin

    2016-01-01

    Highlights: • A nuclear-thermal-coupled pre-design code has been developed for optimizing the radial build arrangement of the fusion breeding blanket. • The coupling module aims to speed up the design process by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • The radial build optimization algorithm seeks the optimal arrangement of the breeding blanket considering one or multiple specified objectives subject to design criteria such as the material temperature limit and available TBR. - Abstract: The fusion breeding blanket, as one of the key in-vessel components, performs the functions of breeding tritium, removing the nuclear heat and the heat flux from the plasma chamber, and acting as part of the shielding system. The radial build design, which determines the arrangement of function zones and material properties in the radial direction, is the basis of the detailed design of the fusion breeding blanket. To facilitate the radial build design, this study aims to develop a pre-design code to optimize the radial build of the blanket while considering nuclear and thermal-hydraulic performance simultaneously. The two main features of this code are: (1) coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis process; (2) a preliminary optimization algorithm using one or multiple specified objectives subject to the design criteria, in the form of constraints imposed on design variables and performance parameters within possible engineering ranges. This pre-design code has been applied to the conceptual design of the water-cooled ceramic breeding blanket in the China Fusion Engineering Testing Reactor (CFETR) project.
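    In miniature, the optimization loop this abstract describes is a constrained search over radial-build variables. The sketch below stands in for that loop with invented, monotone response surrogates replacing the coupled neutronics/thermal-hydraulic evaluations (none of the numbers come from the paper): candidate breeder-zone thicknesses are filtered by the TBR and temperature-limit constraints, and the best feasible design is kept.

```python
def optimize_radial_build(candidates, tbr_of, tmax_of, tbr_min, temp_limit):
    # Keep only designs meeting both constraints, then maximize TBR.
    feasible = [t for t in candidates
                if tbr_of(t) >= tbr_min and tmax_of(t) <= temp_limit]
    return max(feasible, key=tbr_of) if feasible else None

# Invented surrogates: a thicker breeder zone breeds more but runs hotter.
tbr_of = lambda t: 0.9 + 0.5 * t          # t: breeder thickness in m
tmax_of = lambda t: 500.0 + 600.0 * t     # peak structure temperature, deg C
best = optimize_radial_build([0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
                             tbr_of, tmax_of, tbr_min=1.05, temp_limit=800.0)
```

The value of coupling the two analyses, as the code in the abstract does, is that `tbr_of` and `tmax_of` are evaluated consistently for the same candidate build instead of being iterated by hand between two separate codes.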

  19. Nuclear-thermal-coupled optimization code for the fusion breeding blanket conceptual design

    Energy Technology Data Exchange (ETDEWEB)

    Li, Jia, E-mail: lijia@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Jiang, Kecheng; Zhang, Xiaokang [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China); Nie, Xingchen [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230027, Anhui (China); Zhu, Qinjun; Liu, Songlin [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei 230031, Anhui (China)

    2016-12-15

    Highlights: • A nuclear-thermal-coupled predesign code has been developed for optimizing the radial build arrangement of fusion breeding blanket. • Coupling module aims at speeding up the efficiency of design progress by coupling the neutronics calculation code with the thermal-hydraulic analysis code. • Radial build optimization algorithm aims at optimal arrangement of breeding blanket considering one or multiple specified objectives subject to the design criteria such as material temperature limit and available TBR. - Abstract: Fusion breeding blanket as one of the key in-vessel components performs the functions of breeding the tritium, removing the nuclear heat and heat flux from plasma chamber as well as acting as part of shielding system. The radial build design which determines the arrangement of function zones and material properties on the radial direction is the basis of the detailed design of fusion breeding blanket. For facilitating the radial build design, this study aims for developing a pre-design code to optimize the radial build of blanket with considering the performance of nuclear and thermal-hydraulic simultaneously. Two main features of this code are: (1) Coupling of the neutronics analysis with the thermal-hydraulic analysis to speed up the analysis progress; (2) preliminary optimization algorithm using one or multiple specified objectives subject to the design criteria in the form of constrains imposed on design variables and performance parameters within the possible engineering ranges. This pre-design code has been applied to the conceptual design of water-cooled ceramic breeding blanket in project of China fusion engineering testing reactor (CFETR).

  20. Developing an Integrated Design Strategy for Chip Layout Optimization

    NARCIS (Netherlands)

    Wits, Wessel Willems; Jauregui Becker, Juan Manuel; van Vliet, Frank Edward; te Riele, G.J.

    2011-01-01

    This paper presents an integrated design strategy for chip layout optimization. The strategy couples both electric and thermal aspects during the conceptual design phase to improve chip performances; thermal management being one of the major topics. The layout of the chip circuitry is optimized

  1. SolveDB: Integrating Optimization Problem Solvers Into SQL Databases

    DEFF Research Database (Denmark)

    Siksnys, Laurynas; Pedersen, Torben Bach

    2016-01-01

    for optimization problems, (2) an extensible infrastructure for integrating different solvers, and (3) query optimization techniques to achieve the best execution performance and/or result quality. Extensive experiments with the PostgreSQL-based implementation show that SolveDB is a versatile tool offering much...

  2. Optimal database locks for efficient integrity checking

    DEFF Research Database (Denmark)

    Martinenghi, Davide

    2004-01-01

    In concurrent database systems, correctness of update transactions refers to the equivalent effects of the execution schedule and some serial schedule over the same set of transactions. Integrity constraints add further semantic requirements to the correctness of the database states reached upon...... the execution of update transactions. Several methods for efficient integrity checking and enforcing exist. We show in this paper how to apply one such method to automatically extend update transactions with locks and simplified consistency tests on the locked entities. All schedules produced in this way...

  3. Optimization of Segmentation Quality of Integrated Circuit Images

    Directory of Open Access Journals (Sweden)

    Gintautas Mušketas

    2012-04-01

    Full Text Available The paper presents an investigation into the application of genetic algorithms to the segmentation of the active regions of integrated circuit images. The article gives a theoretical examination of the applied morphological methods (dilation, erosion, hit-and-miss, thresholding) and describes genetic algorithms, treating image segmentation as an optimization problem. The genetic optimization of the parameters of a predefined filter sequence is carried out. The improvement in segmentation accuracy over a non-optimized filter sequence is 6%. Article in Lithuanian
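    As a self-contained illustration of the genetic machinery involved (a toy single-threshold search, far simpler than the paper's optimization of a whole morphological filter sequence; all parameter values are invented), the following evolves a grey-level threshold toward a reference segmentation:

```python
import random

def ga_threshold(pixels, labels, pop_size=20, generations=40, seed=7):
    # Toy genetic algorithm: evolve one grey-level threshold so that
    # thresholding `pixels` reproduces the reference segmentation
    # `labels`. The selection / crossover / mutation loop is the same
    # shape as in filter-sequence optimization, only the genome differs.
    rng = random.Random(seed)

    def fitness(t):
        return sum((p >= t) == want for p, want in zip(pixels, labels))

    population = [rng.randint(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]      # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) // 2                  # arithmetic crossover
            if rng.random() < 0.3:                # mutation: small shift
                child = max(0, min(255, child + rng.randint(-10, 10)))
            children.append(child)
        population = parents + children           # parents act as elites
    return max(population, key=fitness)
```
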

  4. A design approach for integrating thermoelectric devices using topology optimization

    International Nuclear Information System (INIS)

    Soprani, S.; Haertel, J.H.K.; Lazarov, B.S.; Sigmund, O.; Engelbrecht, K.

    2016-01-01

    Highlights: • The integration of a thermoelectric (TE) cooler into a robotic tool is optimized. • Topology optimization is suggested as design tool for TE integrated systems. • A 3D optimization technique using temperature dependent TE properties is presented. • The sensitivity of the optimization process to the boundary conditions is studied. • A working prototype is constructed and compared to the model results. - Abstract: Efficient operation of thermoelectric devices strongly relies on the thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing the temperature of the cooled electronics. Several challenges are addressed: ensuring effective heat transfer from the load, minimizing the thermal resistances within the integrated system, maximizing the thermal protection of the cooled zone, and enhancing the conduction of the rejected heat to the oil well. The design method incorporates temperature dependent properties of the thermoelectric device and other materials. The 3D topology optimization model developed in this work was used to design a thermoelectric system, complete with insulation and heat sink, that was produced and tested. Good agreement between experimental results and

  5. Approaches in highly parameterized inversion - PEST++, a Parameter ESTimation code optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; Doherty, John E.; Hunt, Randall J.; Muffels, Christopher T.; Tonkin, Matthew J.; Schreuder, Willem A.

    2012-01-01

    An object-oriented parameter estimation code was developed to incorporate benefits of object-oriented programming techniques for solving large parameter estimation modeling problems. The code is written in C++ and is a formulation and expansion of the algorithms included in PEST, a widely used parameter estimation code written in Fortran. The new code is called PEST++ and is designed to lower the barriers of entry for users and developers while providing efficient algorithms that can accommodate large, highly parameterized problems. This effort has focused on (1) implementing the most popular features of PEST in a fashion that is easy for novice or experienced modelers to use and (2) creating a software design that is easy to extend; that is, this effort provides a documented object-oriented framework designed from the ground up to be modular and extensible. In addition, all PEST++ source code and its associated libraries, as well as the general run manager source code, have been integrated in the Microsoft Visual Studio® 2010 integrated development environment. The PEST++ code is designed to provide a foundation for an open-source development environment capable of producing robust and efficient parameter estimation tools for the environmental modeling community into the future.
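    The Gauss-Levenberg-Marquardt scheme at the heart of PEST-style parameter estimation can be sketched in a few lines. The following is an illustrative toy, not PEST++ code; the function name and the two-parameter exponential model are invented for the example. It fits y = a·exp(b·x) via damped normal equations with an adaptive damping factor:

```python
import math

def fit_exp(xs, ys, a=1.0, b=0.0, lam=1e-3, iters=100):
    """Minimal Levenberg-Marquardt fit of y = a*exp(b*x) (illustrative sketch)."""
    def sse(a, b):
        return sum((a * math.exp(b * x) - y) ** 2 for x, y in zip(xs, ys))
    cur = sse(a, b)
    for _ in range(iters):
        # Residuals and analytic Jacobian columns dr/da, dr/db
        r  = [a * math.exp(b * x) - y for x, y in zip(xs, ys)]
        Ja = [math.exp(b * x) for x in xs]
        Jb = [a * x * math.exp(b * x) for x in xs]
        # Damped normal equations (J^T J + lam*diag) d = -J^T r, solved by Cramer's rule
        A11 = sum(v * v for v in Ja) * (1 + lam)
        A22 = sum(v * v for v in Jb) * (1 + lam)
        A12 = sum(u * v for u, v in zip(Ja, Jb))
        g1 = sum(u * v for u, v in zip(Ja, r))
        g2 = sum(u * v for u, v in zip(Jb, r))
        det = A11 * A22 - A12 * A12
        if abs(det) < 1e-30:
            break
        da = (-g1 * A22 + g2 * A12) / det
        db = (-g2 * A11 + g1 * A12) / det
        trial = sse(a + da, b + db)
        if trial < cur:                 # step improved the fit: accept, damp less
            a, b, cur, lam = a + da, b + db, trial, lam / 10
        else:                           # step failed: reject, damp harder
            lam *= 10
    return a, b
```

    PEST and PEST++ add much on top of this core (regularization, parallel run management, derivative handling), but the accept/reject damping loop is the essential idea.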

  6. The Effect of Slot-Code Optimization in Warehouse Order Picking

    Directory of Open Access Journals (Sweden)

    Andrea Fumi

    2013-07-01

    most appropriate material handling resource configuration. Building on previous work on the effect of slot-code optimization on travel times in single/dual command cycles, the authors broaden the scope to include the most general picking case, thus widening the range of applicability and realising former suggestions for future research.

  7. Analysis of Optimal Operation of an Energy Integrated Distillation Plant

    DEFF Research Database (Denmark)

    Li, Hong Wen; Hansen, C.A.; Gani, Rafiqul

    2003-01-01

    The efficiency of manufacturing systems can be significantly increased through diligent application of control based on mathematical models, thereby enabling tighter integration of decision making with systems operation. In the present paper, analysis of optimal operation of an energy integrated...

  8. RAID-6 reed-solomon codes with asymptotically optimal arithmetic complexities

    KAUST Repository

    Lin, Sian-Jheng

    2016-12-24

    In computer storage, RAID 6 is a level of RAID that can tolerate two failed drives. When RAID-6 is implemented by Reed-Solomon (RS) codes, the penalty of the writing performance is on the field multiplications in the second parity. In this paper, we present a configuration of the factors of the second-parity formula, such that the arithmetic complexity can reach the optimal complexity bound when the code length approaches infinity. In the proposed approach, the intermediate data used for the first parity is also utilized to calculate the second parity. To the best of our knowledge, this is the first approach supporting the RAID-6 RS codes to approach the optimal arithmetic complexity.
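    The two RAID-6 parities discussed above can be illustrated with a naive (unoptimized) GF(2^8) computation; the paper's contribution is precisely a cheaper formulation of the second parity Q, which this sketch does not reproduce:

```python
def gf_mul(a, b, poly=0x11d):
    """Multiply two bytes in GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def pq_parities(stripe):
    """First parity P = XOR of data bytes; second parity Q = sum of d_i * g^i
    over GF(2^8) with generator g = 2 (field multiplications make Q costly)."""
    p = q = 0
    g = 1
    for d in stripe:
        p ^= d
        q ^= gf_mul(d, g)
        g = gf_mul(g, 2)
    return p, q
```

    Losing a single data byte is repaired from P alone by XOR; losing two requires solving with both P and Q, which is where the multiplication cost appears.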

  9. Integrated computer codes for nuclear power plant severe accident analysis

    International Nuclear Information System (INIS)

    Jordanov, I.; Khristov, Y.

    1995-01-01

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs

  10. Integrated computer codes for nuclear power plant severe accident analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jordanov, I; Khristov, Y [Bylgarska Akademiya na Naukite, Sofia (Bulgaria). Inst. za Yadrena Izsledvaniya i Yadrena Energetika

    1996-12-31

    This overview contains a description of the Modular Accident Analysis Program (MAAP), ICARE computer code and Source Term Code Package (STCP). STCP is used to model TMLB sample problems for Zion Unit 1 and WWER-440/V-213 reactors. Comparison is made of STCP implementation on VAX and IBM systems. In order to improve accuracy, a double precision version of MARCH-3 component of STCP is created and the overall thermal hydraulics is modelled. Results of modelling the containment pressure, debris temperature, hydrogen mass are presented. 5 refs., 10 figs., 2 tabs.

  11. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    Smith, L.M.; Hochstedler, R.D.

    1997-01-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code)

  12. Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes

    Science.gov (United States)

    Smith, L. M.; Hochstedler, R. D.

    1997-02-01

    Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
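    The first acceleration technique above, replacing a linear search with a binary version (e.g. for locating an entry in a sorted grid), can be shown as a minimal sketch in Python rather than the ITS FORTRAN:

```python
import bisect

def linear_lookup(grid, x):
    """O(n) scan for the first grid point >= x (the pattern being replaced)."""
    for i, e in enumerate(grid):
        if e >= x:
            return i
    return len(grid) - 1

def binary_lookup(grid, x):
    """O(log n) equivalent: bisection on the sorted grid."""
    return min(bisect.bisect_left(grid, x), len(grid) - 1)
```

    Both return the same index for any query, but for a 1000-point grid the binary version needs about ten comparisons instead of up to a thousand, which is why such lookups inside Monte Carlo inner loops dominate the speed-up.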

  13. Asset management -- Integrated software optimizes production performance

    International Nuclear Information System (INIS)

    Polczer, S.

    1998-01-01

    Developments in data collection and retrieval systems to allow timely cost analysis, financial reporting and production management are discussed. One of the most important new OLAP (on-line analytical processing) products is Energy Warehouse which gathers field information from various sources, allows advanced searches, and generates reports previously unavailable in other conventional financial accounting systems. Another OLAP-based system, the Canadian Upstream Energy System (CUES), was developed by the Oracle Corporation and the Calgary-based Applied Terravision Systems (ATS) Inc. CUES combines Oracle's universal data server software development tools with ATS's upstream financial, land, geotechnical and production applications. ATS also developed a product called IDPMARS (Integrated Daily Production Management Accounting Reporting System). It interfaces with CUES to link working interests, government royalties, administration, facility charges, lifting costs, transportation tooling, and customers by integrating field data collection systems with financial accounting

  14. Asset management -- Integrated software optimizes production performance

    Energy Technology Data Exchange (ETDEWEB)

    Polczer, S.

    1998-10-01

    Developments in data collection and retrieval systems to allow timely cost analysis, financial reporting and production management are discussed. One of the most important new OLAP (on-line analytical processing) products is Energy Warehouse which gathers field information from various sources, allows advanced searches, and generates reports previously unavailable in other conventional financial accounting systems. Another OLAP-based system, the Canadian Upstream Energy System (CUES), was developed by the Oracle Corporation and the Calgary-based Applied Terravision Systems (ATS) Inc. CUES combines Oracle's universal data server software development tools with ATS's upstream financial, land, geotechnical and production applications. ATS also developed a product called IDPMARS (Integrated Daily Production Management Accounting Reporting System). It interfaces with CUES to link working interests, government royalties, administration, facility charges, lifting costs, transportation tooling, and customers by integrating field data collection systems with financial accounting.

  15. Substrate optimization for integrated circuit antennas

    OpenAIRE

    Alexopoulos, N. G.; Katehi, P. B.; Rutledge, D. B.

    1982-01-01

    Imaging systems in microwave, millimeter and submillimeter wave applications employ printed circuit antenna elements. The effect of substrate properties is analyzed in this paper by both the reciprocity theorem and an integral equation approach, for infinitesimally short as well as finite-length dipole and slot elements. Radiation efficiency and substrate surface wave guidance are studied for practical substrate materials such as GaAs, silicon, quartz and Duroid.

  16. Recent progress of an integrated implosion code and modeling of element physics

    International Nuclear Information System (INIS)

    Nagatomo, H.; Takabe, H.; Mima, K.; Ohnishi, N.; Sunahara, A.; Takeda, T.; Nishihara, K.; Nishiguchu, A.; Sawada, K.

    2001-01-01

    Physics of inertial fusion is based on a variety of elements such as compressible hydrodynamics, radiation transport, non-ideal equations of state, non-LTE atomic processes, and relativistic laser-plasma interaction. In addition, the implosion process is not stationary, so fluid dynamics, energy transport and instabilities should be solved simultaneously. In order to study such complex physics, an integrated implosion code including all physics important in the implosion process should be developed. The details of the physics elements should be studied and the resulting numerical models should be installed in the integrated code so that the implosion can be simulated with available computers within realistic CPU time. Therefore, this task can be separated into two parts. One is to integrate all physics elements into a code, which is strongly related to the development of the hydrodynamic equation solver. We have developed a 2-D integrated implosion code which solves mass, momentum, electron energy, ion energy, equation of state, laser ray-trace, laser absorption, radiation, surface tracing and so on. Reasonable results in simulating Rayleigh-Taylor instability and cylindrical implosion are obtained using this code. The other part is code development on each element of the physics and verification of these codes. We have made progress in developing a nonlocal electron transport code and 2- and 3-dimensional radiation hydrodynamic codes. (author)

  17. Optimal planning of integrated multi-energy systems

    DEFF Research Database (Denmark)

    van Beuzekom, I.; Gibescu, M.; Pinson, Pierre

    2017-01-01

    In this paper, a mathematical approach for the optimal planning of integrated energy systems is proposed. In order to address the challenges of future, RES-dominated energy systems, the model deliberates between the expansion of traditional energy infrastructures, the integration...... and sustainability goals for 2030 and 2045. Optimal green- and brownfield designs for a district's future integrated energy system are compared using a one-step, as well as a two-step planning approach. As expected, the greenfield designs are more cost efficient, as their results are not constrained by the existing...

  18. Overview of Recent Grid Codes for Wind Power Integration

    DEFF Research Database (Denmark)

    Altin, Müfit; Göksu, Ömer; Teodorescu, Remus

    2010-01-01

    As wind power penetration level increases, power system operators are challenged by the penetration impacts to maintain reliability and stability of power system. Therefore, grid codes are being published and continuously updated by transmission system operators of the countries. In this paper...

  19. Simultaneous integrated optimal energy flow of electricity, gas, and heat

    International Nuclear Information System (INIS)

    Shabanpour-Haghighi, Amin; Seifi, Ali Reza

    2015-01-01

    Highlights: • Integration of electrical, natural gas, and district heating networks is studied. • Part-load performances of units are considered in modeling. • A modified teaching–learning based optimization is used to solve the problem. • Results show the advantages of the integrated optimization approach. - Abstract: In this paper, an integrated approach to optimize electrical, natural gas, and district heating networks simultaneously is studied. Several interdependencies between these infrastructures are considered in detail, including a nonlinear part-load performance for boilers and CHPs as well as the valve-point effect for generators. A novel approach based on selecting an appropriate set of state variables for the problem is proposed that eliminates the addition of any new variable to convert irregular equations into a regular set while the optimization problem remains solvable. As a large optimization problem, the optimal solution cannot be achieved by conventional mathematical techniques; hence, it is better to use evolutionary algorithms instead. In this paper, the well-known modified teaching–learning based optimization algorithm is utilized to solve the multi-period optimal power flow problem of multi-carrier energy networks. The proposed scheme is implemented and applied to a typical multi-carrier energy network. Results are compared with some other conventional heuristic algorithms, and the applicability and superiority of the proposed methodology are verified
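    A bare-bones version of the (unmodified) teaching-learning based optimization algorithm referenced above, applied here to a generic test function rather than the multi-carrier energy-flow model:

```python
import random

def tlbo(f, bounds, pop=20, iters=100, seed=1):
    """Minimal TLBO: teacher phase pulls learners toward the current best,
    learner phase lets pairs of learners improve from each other."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        mean = [sum(x[d] for x in X) / pop for d in range(dim)]
        best = X[F.index(min(F))]
        # Teacher phase: move each learner toward (best - Tf * mean)
        for i in range(pop):
            Tf = rng.choice([1, 2])                      # teaching factor
            cand = clip([X[i][d] + rng.random() * (best[d] - Tf * mean[d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
        # Learner phase: move toward a better peer, away from a worse one
        for i in range(pop):
            j = rng.randrange(pop)
            if j == i:
                continue
            sign = 1 if F[j] < F[i] else -1
            cand = clip([X[i][d] + sign * rng.random() * (X[j][d] - X[i][d])
                         for d in range(dim)])
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return min(F)
```

    Unlike genetic algorithms or PSO, TLBO has no algorithm-specific tuning parameters beyond population size and iteration count, which is part of its appeal for engineering problems like the one above.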

  20. Development of integrated SOL/Divertor code and simulation study of the JT-60U/JT-60SA tokamaks

    International Nuclear Information System (INIS)

    Kawashima, H.; Shimizu, K.; Takizuka, T.

    2007-01-01

    To predict the particle and heat controllability in the divertor of tokamak reactors such as ITER and to optimize the divertor design, comprehensive simulations by integrated modelling taking into account various physical processes are indispensable. For the design study of the ITER divertor, modelling codes such as B2, UEDGE and EDGE2D have been developed, and their results have contributed to the evolution of the divertor concept. In the Japan Atomic Energy Agency (JAEA), SOL/divertor codes have also been developed for the interpretation and prediction of the behaviour of plasmas, neutrals and impurities in the SOL/divertor regions. The codes are developed in-house so that physics models can be verified quickly and flexibly in close collaboration with the JT-60 team. Figure 1 shows our code system, which consists of the 2-dimensional fluid code SOLDOR, the neutral Monte Carlo (MC) code NEUT2D, and the impurity MC code IMPMC. The particle simulation code PARASOL has also been developed in order to establish the physics modelling used in fluid simulations. Integration of SOLDOR, NEUT2D and IMPMC, called the "SONIC" code, is being carried out to simulate self-consistently the SOL/divertor plasmas in present tokamaks and in future devices. Combination of SOLDOR and NEUT2D was completed, with features such as 1) a high-resolution oscillation-free scheme in solving the fluid equations, 2) neutral transport calculation on fine meshes, 3) success in reduction of MC noise, 4) optimization on a massive parallel computer, etc. The simulation reproduces the X-point MARFE in the JT-60U experiment. It is found that the chemically sputtered carbon at the dome causes the radiation peaking near the X-point. The performance of divertor pumping in JT-60U is evaluated from the particle balances. We also present the divertor design of JT-60SA, which is the modification program of JT-60U to establish high-beta steady-state operation. To

  1. Optimal Real-time Dispatch for Integrated Energy Systems

    DEFF Research Database (Denmark)

    Anvari-Moghaddam, Amjad; Guerrero, Josep M.; Rahimi-Kian, Ashkan

    2016-01-01

    With the emergence of small-scale integrated energy systems (IESs), there are significant potentials to increase the functionality of a typical demand-side management (DSM) strategy and typical implementation of building-level distributed energy resources (DERs). By integrating DSM and DERs...... into a cohesive, networked package that fully utilizes smart energy-efficient end-use devices, advanced building control/automation systems, and integrated communications architectures, it is possible to efficiently manage energy and comfort at the end-use location. In this paper, an ontology-driven multi......-agent control system with intelligent optimizers is proposed for optimal real-time dispatch of an integrated building and microgrid system considering coordinated demand response (DR) and DERs management. The optimal dispatch problem is formulated as a mixed integer nonlinear programming problem (MINLP

  2. Integrated Aeroservoelastic Optimization: Status and Direction

    Science.gov (United States)

    Livne, Eli

    1999-01-01

    The interactions of lightweight flexible airframe structures, steady and unsteady aerodynamics, and wide-bandwidth active controls on modern airplanes lead to considerable multidisciplinary design challenges. More than 25 years of mathematical and numerical methods' development, numerous basic research studies, simulations and wind-tunnel tests of simple models, wind-tunnel tests of complex models of real airplanes, as well as flight tests of actively controlled airplanes, have all contributed to the accumulation of a substantial body of knowledge in the area of aeroservoelasticity. A number of analysis codes, with the capabilities to model real airplane systems under the assumptions of linearity, have been developed. Many tests have been conducted, and results were correlated with analytical predictions. A selective sample of references covering aeroservoelastic testing programs from the 1960s to the early 1980s, as well as more recent wind-tunnel test programs of real or realistic configurations, is included in the References section of this paper. An examination of references 20-29 will reveal that in the course of development (or later modification) of almost every modern airplane with a high authority active control system, there arose a need to face aeroservoelastic problems and aeroservoelastic design challenges.

  3. Integrating orthodontics for the optimal smile.

    Science.gov (United States)

    Yorita, Frank K

    2008-08-01

    With the rapid and complex advancements in materials and technology in dentistry today, it has become difficult for the dental practitioner to stay current in one field, let alone more than one. In order to increase patient benefits and decrease the dentist's frustration, today's dental practice requires an interdisciplinary approach that integrates the knowledge, skills, and experience of all the disciplines of dentistry and its associated fields. This article highlights the advantage of an interdisciplinary treatment approach and how knowledge of basic orthodontic techniques can help in producing a more comprehensive treatment plan.

  4. WKB: an interactive code for solving differential equations using phase integral methods

    International Nuclear Information System (INIS)

    White, R.B.

    1978-01-01

    A small code for the interactive analysis of ordinary differential equations using phase integral methods (WKB) has been written for use on the DEC 10. This note is a descriptive manual for those interested in using the code
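    The leading-order WKB idea such a code is built on can be stated compactly: for y'' + q(x)y = 0 with q > 0, the oscillatory solution is approximated by q^(-1/4)·sin(∫√q dx). A small numerical sketch of that phase integral (illustrative only, unrelated to the DEC 10 code):

```python
import math

def wkb(q, x0, x, n=2000):
    """Leading-order WKB solution of y'' + q(x) y = 0 in the region q > 0:
    y(x) ~ q(x)**(-1/4) * sin(phase), phase = integral of sqrt(q) from x0 to x,
    with the phase integral evaluated by the trapezoid rule."""
    h = (x - x0) / n
    phase = 0.0
    for i in range(n):
        a, b = x0 + i * h, x0 + (i + 1) * h
        phase += 0.5 * h * (math.sqrt(q(a)) + math.sqrt(q(b)))
    return q(x) ** -0.25 * math.sin(phase)
```

    For constant q = k² this reduces to the exact solution k^(-1/2)·sin(kx); the approximation degrades near turning points where q changes sign, which is where the connection formulas of phase integral methods come in.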

  5. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    Energy Technology Data Exchange (ETDEWEB)

    Page, R.; Jones, J.R.

    1997-07-01

    Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' Loss of offsite power fault transient.

  6. Development of an integrated thermal-hydraulics capability incorporating RELAP5 and PANTHER neutronics code

    International Nuclear Information System (INIS)

    Page, R.; Jones, J.R.

    1997-01-01

    Ensuring that safety analysis needs are met in the future is likely to lead to the development of new codes and the further development of existing codes. It is therefore advantageous to define standards for data interfaces and to develop software interfacing techniques which can readily accommodate changes when they are made. Defining interface standards is beneficial but is necessarily restricted in application if future requirements are not known in detail. Code interfacing methods are of particular relevance with the move towards automatic grid frequency response operation where the integration of plant dynamic, core follow and fault study calculation tools is considered advantageous. This paper describes the background and features of a new code TALINK (Transient Analysis code LINKage program) used to provide a flexible interface to link the RELAP5 thermal hydraulics code with the PANTHER neutron kinetics and the SIBDYM whole plant dynamic modelling codes used by Nuclear Electric. The complete package enables the codes to be executed in parallel and provides an integrated whole plant thermal-hydraulics and neutron kinetics model. In addition the paper discusses the capabilities and pedigree of the component codes used to form the integrated transient analysis package and the details of the calculation of a postulated Sizewell 'B' Loss of offsite power fault transient

  7. Optimal power allocation and joint source-channel coding for wireless DS-CDMA visual sensor networks

    Science.gov (United States)

    Pandremmenou, Katerina; Kondi, Lisimachos P.; Parsopoulos, Konstantinos E.

    2011-01-01

    In this paper, we propose a scheme for the optimal allocation of power, source coding rate, and channel coding rate for each of the nodes of a wireless Direct Sequence Code Division Multiple Access (DS-CDMA) visual sensor network. The optimization is quality-driven, i.e. the received quality of the video that is transmitted by the nodes is optimized. The scheme takes into account the fact that the sensor nodes may be imaging scenes with varying levels of motion. Nodes that image low-motion scenes will require a lower source coding rate, so they will be able to allocate a greater portion of the total available bit rate to channel coding. Stronger channel coding will mean that such nodes will be able to transmit at lower power. This will both increase battery life and reduce interference to other nodes. Two optimization criteria are considered. One that minimizes the average video distortion of the nodes and one that minimizes the maximum distortion among the nodes. The transmission powers are allowed to take continuous values, whereas the source and channel coding rates can assume only discrete values. Thus, the resulting optimization problem lies in the field of mixed-integer optimization tasks and is solved using Particle Swarm Optimization. Our experimental results show the importance of considering the characteristics of the video sequences when determining the transmission power, source coding rate and channel coding rate for the nodes of the visual sensor network.
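    A stripped-down sketch of the mixed-integer PSO setup described above: continuous variables (e.g. transmission power) move freely within bounds, while discrete ones (source/channel coding rates) are snapped to their nearest allowed value before the objective is evaluated. The distortion function in the usage example is a hypothetical stand-in, not the paper's video-distortion model:

```python
import random

def pso_mixed(f, cont_bounds, disc_sets, particles=15, iters=80, seed=0):
    """PSO over mixed variables: continuous dims clipped to bounds, discrete
    dims snapped to the nearest allowed value before each evaluation."""
    rng = random.Random(seed)
    nc = len(cont_bounds)
    lo = [b[0] for b in cont_bounds] + [min(s) for s in disc_sets]
    hi = [b[1] for b in cont_bounds] + [max(s) for s in disc_sets]
    dim = len(lo)

    def snap(x):
        c = [min(max(x[i], cont_bounds[i][0]), cont_bounds[i][1]) for i in range(nc)]
        d = [min(s, key=lambda v: abs(v - x[nc + j])) for j, s in enumerate(disc_sets)]
        return c + d

    X = [[rng.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    P = [x[:] for x in X]                    # personal bests
    Pf = [f(snap(x)) for x in X]
    gi = Pf.index(min(Pf))
    g, gf = P[gi][:], Pf[gi]                 # global best
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(snap(X[i]))
            if fx < Pf[i]:
                P[i], Pf[i] = X[i][:], fx
                if fx < gf:
                    g, gf = X[i][:], fx
    return snap(g), gf
```

    With a fixed seed the search is deterministic. For instance, minimizing a toy distortion (p − 0.37)² + (r − 0.5)² over continuous power p ∈ [0, 1] and discrete rate r ∈ {0.25, 0.5, 0.75} should return r = 0.5 and p near 0.37.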

  8. Integrating the nursing management minimum data set into the logical observation identifier names and codes system.

    Science.gov (United States)

    Subramanian, Amarnath; Westra, Bonnie; Matney, Susan; Wilson, Patricia S; Delaney, Connie W; Huff, Stan; Huff, Stanley M; Huber, Diane

    2008-11-06

    This poster describes the process used to integrate the Nursing Management Minimum Data Set (NMMDS), an instrument to measure the nursing context of care, into the Logical Observation Identifier Names and Codes (LOINC) system to facilitate contextualization of quality measures. Integration of the first three of 18 elements resulted in 48 new codes including five panels. The LOINC Clinical Committee has approved the presented mapping for their next release.

  9. Experimental assessment of computer codes used for safety analysis of integral reactors

    Energy Technology Data Exchange (ETDEWEB)

    Falkov, A.A.; Kuul, V.S.; Samoilov, O.B. [OKB Mechanical Engineering, Nizhny Novgorod (Russian Federation)

    1995-09-01

    Peculiarities of integral reactor thermohydraulics in accidents are associated with the presence of noncondensable gas in the built-in pressurizer, the absence of a pumped ECCS, the use of a guard vessel for LOCA localisation, and passive RHRS through in-reactor HXs. These features defined the main trends in experimental investigations and verification efforts for the computer codes applied. The paper reviews briefly the performed experimental investigations of the thermohydraulics of AST-500 and VPBER600-type integral reactors. The characteristics of the UROVEN/MB-3 code for LOCA analysis in integral reactors and results of its verification are given. The assessment of RELAP5/mod3 applicability for accident analysis in integral reactors is presented.

  10. Multi-objective group scheduling optimization integrated with preventive maintenance

    Science.gov (United States)

    Liao, Wenzhu; Zhang, Xiufang; Jiang, Min

    2017-11-01

    This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production of identical/similar and different jobs, this integrated model considers learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered, and both perfect maintenance and minimal repair are adopted in the model. The dual objective of minimizing total completion time and maintenance cost is adopted to meet the requirements of both delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computational results demonstrate that the integrated model is effective and reliable.
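    The learning effect used in such scheduling models can be illustrated with one common positional instance, where the job in position r takes p·r^a time for a learning index a < 0 (a hypothetical parameterization for illustration, not necessarily the article's exact model):

```python
def total_completion_time(proc_times, order, a=-0.3):
    """Sum of job completion times on one machine when the job placed in
    position r actually takes p * r**a (log-linear learning effect, a < 0)."""
    t = 0.0      # current machine time
    total = 0.0  # sum of completion times (the scheduling objective)
    for r, j in enumerate(order, start=1):
        t += proc_times[j] * r ** a
        total += t
    return total
```

    Sequencing the jobs in shortest-processing-time order remains a strong heuristic under this positional model; comparing an SPT sequence against a reversed one shows the objective gap directly.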

  11. Energy optimization of integrated process plants

    Energy Technology Data Exchange (ETDEWEB)

    Sandvig Nielsen, J

    1996-10-01

    A general approach for viewing process synthesis as an evolutionary process is proposed. Each step is taken according to the present level of information and knowledge. This is formulated in a Process Synthesis Cycle. Initially the synthesis is conducted at a high abstraction level, maximizing the use of heuristics (prior experience, rules of thumb, etc.). When further knowledge and information become available, heuristics are gradually replaced by exact problem formulations. The principles of the Process Synthesis Cycle are used to develop a general procedure for energy synthesis, based on available tools. The procedure is based on efficient use of process simulators with integrated Pinch capabilities (energy targeting). The proposed general procedure is tailored to three specific problems (Humid Air Turbine power plant synthesis, Nitric Acid process synthesis and Sulphuric Acid synthesis). Using the procedure reduces the problem dimension considerably and thus allows for faster evaluation of more alternatives. At a more detailed level, a new framework for the Heat Exchanger Network synthesis problem is proposed. The new framework is object oriented, based on a general functional description of all elements potentially present in the heat exchanger network (streams, exchangers, pumps, furnaces etc.). (LN) 116 refs.

  12. Optimizing Groundwater Monitoring Networks Using Integrated Statistical and Geostatistical Approaches

    Directory of Open Access Journals (Sweden)

    Jay Krishna Thakur

    2015-08-01

    The aim of this work is to investigate new approaches using methods based on statistics and geostatistics for spatio-temporal optimization of groundwater monitoring networks. The formulated and integrated methods were tested with the groundwater quality data set of Bitterfeld/Wolfen, Germany. Spatially, the monitoring network was optimized using geostatistical methods. Temporal optimization of the monitoring network was carried out using Sen's method (1968). For geostatistical network optimization, a geostatistical spatio-temporal algorithm was used to identify redundant wells in 2- and 2.5-D Quaternary and Tertiary aquifers. Influences of interpolation block width, dimension, contaminant association, groundwater flow direction and aquifer homogeneity on statistical and geostatistical methods for monitoring network optimization were analysed. The integrated approach shows 37% and 28% redundancies in the monitoring network in the Quaternary and Tertiary aquifers, respectively. The geostatistical method also recommends 41 and 22 new monitoring wells in the Quaternary and Tertiary aquifers, respectively. In temporal optimization, an overall optimized sampling interval was recommended in terms of the lower quartile (238 days), median (317 days) and upper quartile (401 days) in the research area of Bitterfeld/Wolfen. The demonstrated methods for improving groundwater monitoring networks can be used in real monitoring network optimization, with due consideration given to influencing factors.
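    Sen's (1968) method used for the temporal optimization is the median-of-pairwise-slopes trend estimator, which is compact enough to state directly:

```python
from statistics import median

def sens_slope(times, values):
    """Sen's (1968) estimator: the median of all pairwise slopes, a robust
    trend measure often paired with the Mann-Kendall test in monitoring work."""
    n = len(times)
    slopes = [(values[j] - values[i]) / (times[j] - times[i])
              for i in range(n) for j in range(i + 1, n)
              if times[j] != times[i]]
    return median(slopes)
```

    Because the median is taken over all pairs, a single anomalous sample barely moves the estimate, which is what makes the method suitable for deciding how sparsely a well can be sampled without losing the trend.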

  13. Time integration in the code Zgoubi and external usage of PTC's structures

    International Nuclear Information System (INIS)

    Forest, Etienne; Meot, F.

    2006-06-01

    The purpose of this note is to describe Zgoubi's integrator and some pitfalls of time-based integration when used in accelerators. We show why the convergence rate of an integrator can be affected by an improper treatment at the boundary when time is used as the integration variable. We also point out how the code PTC can be used as a container by other tracking engines. This work is not yet complete as far as the incorporation of Zgoubi is concerned. (authors)

  14. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization.

    Science.gov (United States)

    Yang, Xinyu; Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can not only be applied to discover the original author of plain text, such as novels, blogs, emails and posts, but can also be used to identify source code programmers. Authorship attribution of source code is required in diverse applications, ranging from malicious code tracking to settling authorship disputes and software plagiarism detection. This paper proposes a new method to identify the programmer of Java source code samples with higher accuracy. To this end, it first introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. It begins by computing a set of defined feature metrics, including lexical and layout metrics and structure and syntax metrics, 19 dimensions in total. These metrics are then input to a neural network for supervised learning, the weights of which are obtained by a hybrid PSO and BP algorithm. The effectiveness of the proposed method is evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of source code for the Java language illustrates that the proposed method outperforms the others overall, with an acceptable overhead.
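
    The idea of PSO searching a network's weight space can be sketched as follows. This is a simplified illustration, not the authors' implementation: a toy 2-feature dataset stands in for the 19 feature metrics, a tiny 2-4-1 network replaces their architecture, and global-best PSO is used alone (the paper refines the PSO output with BP); all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the style-metric features: 2 features, 2 "authors"
X = rng.normal(size=(40, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # hypothetical labels

def forward(w, X):
    """Tiny 2-4-1 network; w packs all 17 weights into one flat vector."""
    W1, b1 = w[:8].reshape(2, 4), w[8:12]
    W2, b2 = w[12:16].reshape(4, 1), w[16]
    h = np.tanh(X @ W1 + b1)
    z = (h @ W2).ravel() + b2
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid output

def loss(w):
    return float(np.mean((forward(w, X) - y) ** 2))

# Plain global-best PSO over the 17 network weights
n_particles, dim = 20, 17
pos = rng.normal(size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

acc = np.mean((forward(gbest, X) > 0.5) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```

In the hybrid scheme described by the paper, the swarm's best position would then seed gradient-based BP training rather than be used directly.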

  15. Optimal integration of organic Rankine cycles with industrial processes

    International Nuclear Information System (INIS)

    Hipólito-Valencia, Brígido J.; Rubio-Castro, Eusiel; Ponce-Ortega, José M.; Serna-González, Medardo; Nápoles-Rivera, Fabricio; El-Halwagi, Mahmoud M.

    2013-01-01

    Highlights: • An optimization approach for heat integration is proposed. • A new general superstructure for heat integration is proposed. • Heat process streams are simultaneously integrated with an organic Rankine cycle. • Better results can be obtained with respect to other previously reported methodologies. - Abstract: This paper presents a procedure for simultaneously handling the problem of optimal integration of regenerative organic Rankine cycles (ORCs) with overall processes. ORCs may allow the recovery of an important fraction of low-temperature process excess heat (i.e., waste heat from industrial processes) in the form of mechanical energy. An integrated stagewise superstructure is proposed for representing the interconnections and interactions between the heat exchanger network (HEN) and the ORC for fixed process stream data. Based on the integrated superstructure, the optimization problem is formulated as a mixed integer nonlinear programming problem to simultaneously account for the capital and operating costs, including the revenue from the sale of the shaft power produced by the integrated system. The application of this method is illustrated with three example problems. Results show that the proposed procedure provides significantly better results than an earlier developed method that discovers optimal integrated systems using a sequential approach, because it accounts simultaneously for the tradeoffs between the capital and operating costs as well as the sale of the produced energy. The proposed method is also an improvement over previously reported methods for solving the synthesis problem of heat exchanger networks without the option of integration with an ORC (i.e., stand-alone heat exchanger networks).

  16. A new approach of optimization procedure for superconducting integrated circuits

    International Nuclear Information System (INIS)

    Saitoh, K.; Soutome, Y.; Tarutani, Y.; Takagi, K.

    1999-01-01

    We have developed and tested a new circuit simulation procedure for superconducting integrated circuits that can be used to optimize circuit parameters. The method reveals a stable operation region in the circuit parameter space in connection with the global bias margin by means of a contour plot of the global bias margin versus the circuit parameters. An optimal set of parameters with margins larger than those of the initial values has been found in the stable region. (author)

  17. The SWAN/NPSOL code system for multivariable multiconstraint shield optimization

    International Nuclear Information System (INIS)

    Watkins, E.F.; Greenspan, E.

    1995-01-01

    SWAN is a useful code for the optimization of source-driven systems, i.e., systems for which the neutron and photon distribution is the solution of the inhomogeneous transport equation. Over the years, SWAN has been applied to the optimization of a variety of nuclear systems, such as minimizing the thickness of fusion reactor blankets and shields, the weight of space reactor shields, the cost of an ICF target chamber shield, and the background radiation for explosive detection systems, and maximizing the beam quality for boron neutron capture therapy applications. However, SWAN's optimization module can handle only a single constraint and is inefficient in handling problems with many variables. The purpose of this work is to upgrade SWAN's optimization capability.

  18. Status of emergency spray modelling in the integral code ASTEC

    International Nuclear Information System (INIS)

    Plumecocq, W.; Passalacqua, R.

    2001-01-01

    Containment spray systems are emergency systems that would be used in very low probability events which may lead to severe accidents in Light Water Reactors. In most cases, the primary function of the spray would be to remove heat and condense steam in order to reduce pressure and temperature in the containment building. The spray would also wash out fission products (aerosols and gaseous species) from the containment atmosphere. The efficiency of the spray system in containment depressurization, as well as in the removal of aerosols during a severe accident, depends on the evolution of the spray droplet size distribution with height in the containment, due to kinetic and thermal relaxation, gravitational agglomeration and mass transfer with the gas. A model has been developed taking all of these phenomena into account. This model has been implemented in the ASTEC code, with a validation of the droplet relaxation model against the CARAIDAS experiment (IPSN). Applications of this modelling to a PWR 900 during a severe accident, with special emphasis on the effect of spray on containment hydrogen distribution, have been performed in a multi-compartment configuration with the ASTEC V0.3 code. (author)

  19. Optimizing Performance of Combustion Chemistry Solvers on Intel's Many Integrated Core (MIC) Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Sitaraman, Hariswaran [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Grout, Ray W [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-06-09

    This work investigates novel algorithm designs and optimization techniques for restructuring chemistry integrators in zero- and multi-dimensional combustion solvers so that they can be used effectively on the emerging generation of Intel's Many Integrated Core/Xeon Phi processors. These processors offer increased computing performance via a large number of lightweight cores at relatively low clock speeds compared to the traditional processors (e.g. Intel Sandy Bridge/Ivy Bridge) used in current supercomputers. Despite the lower clock speeds, this style of processor can be used productively for chemistry integrators, which form a costly part of computational combustion codes. Performance commensurate with traditional processors is achieved here through a combination of careful memory layout, exposing multiple levels of fine-grained parallelism, and extensive use of vendor-supplied libraries (Cilk Plus and the Math Kernel Library). Important optimization techniques for efficient memory usage and vectorization have been identified and quantified. These optimizations resulted in a speed-up over the unoptimized version on the Intel Xeon Phi of a factor of ~3 with the Intel 2013 compiler and ~1.5 with the Intel 2017 compiler for large chemical mechanisms. The strategies, especially with respect to memory usage and vectorization, should also benefit general-purpose computational fluid dynamics codes.
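
    The memory-layout point can be illustrated with a toy rate evaluation: keeping per-cell state in contiguous arrays (structure-of-arrays) allows a single vectorized sweep over all cells, the same idea that SIMD lanes exploit on Xeon Phi, instead of a scalar per-cell loop over interleaved records. The Arrhenius parameters below are hypothetical stand-ins, not values from the paper:

```python
import numpy as np

# Structure-of-arrays layout: one contiguous array per state variable,
# so all 10,000 cells are processed in one vectorized expression.
n_cells = 10000
A, b, Ea = 1.0e10, 0.5, 8.0e4            # assumed Arrhenius parameters
R = 8.314                                # J/(mol*K)
T = np.linspace(800.0, 2000.0, n_cells)  # one contiguous temperature array

# One vectorized sweep: k = A * T^b * exp(-Ea / (R T)) for every cell
k = A * T**b * np.exp(-Ea / (R * T))

print(k.shape)  # (10000,)
```

An array-of-structures layout (a Python list of per-cell objects, or a C array of structs) would force the equivalent scalar loop and defeat vectorization, which is the contrast the paper's memory-layout optimizations address.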

  20. A design approach for integrating thermoelectric devices using topology optimization

    DEFF Research Database (Denmark)

    Soprani, Stefano; Haertel, Jan Hendrik Klaas; Lazarov, Boyan Stefanov

    2016-01-01

    Efficient operation of thermoelectric devices strongly relies on their thermal integration into the energy conversion system in which they operate. Effective thermal integration reduces the temperature differences between the thermoelectric module and its thermal reservoirs, allowing the system to operate more efficiently. This work proposes and experimentally demonstrates a topology optimization approach as a design tool for efficient integration of thermoelectric modules into systems with specific design constraints. The approach allows thermal layout optimization of thermoelectric systems for different operating conditions and objective functions, such as temperature span, efficiency, and power recovery rate. As a specific application, the integration of a thermoelectric cooler into the electronics section of a downhole oil well intervention tool is investigated, with the objective of minimizing...

  1. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    Science.gov (United States)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make the defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) a mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity; (2) the derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable, by moving the components of the detachable phase mask asymmetrically, and an improved Fisher-information-based optimization procedure is designed to ascertain the optimal mask parameters corresponding to a specific bandwidth; (3) possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  2. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    International Nuclear Information System (INIS)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining the potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.

  3. Numerical simulations of inertial confinement fusion hohlraum with LARED-integration code

    International Nuclear Information System (INIS)

    Li Jinghong; Li Shuanggui; Zhai Chuanlei

    2011-01-01

    In the target design of the Inertial Confinement Fusion (ICF) program, it is common practice to apply radiation hydrodynamics codes to study the key physical processes occurring in ICF, such as hohlraum physics, radiation drive symmetry, and capsule implosion physics in the radiation-drive approach. Recently, many efforts have been made to develop our 2D integrated simulation capability for laser fusion with a variety of optional physical models and numerical methods. In order to effectively integrate the existing codes and to facilitate the development of new codes, we are developing an object-oriented structured-mesh parallel code-supporting infrastructure, called JASMIN. Based on the two-dimensional three-temperature hohlraum physics code LARED-H and the two-dimensional multi-group radiative transfer code LARED-R, we have developed a new-generation two-dimensional laser fusion code under the JASMIN infrastructure, which enables us to simulate the whole process of laser fusion from the laser beams' entrance into the hohlraum to the end of implosion. In this paper, we give a brief description of this code, named LARED-Integration, especially its physical models, and present some simulation results for hohlraums. (author)

  4. ℓ2-Optimized predictive image coding with ℓ∞ bound.

    Science.gov (United States)

    Chuah, Sceuchin; Dumitrescu, Sorina; Wu, Xiaolin

    2013-12-01

    In many scientific, medical, and defense applications of image/video compression, an ℓ∞ error bound is required. However, pure ℓ∞-optimized image coding, colloquially known as near-lossless image coding, is prone to structured errors such as contours and speckles if the bit rate is not sufficiently high; moreover, most previous ℓ∞-based image coding methods suffer from poor rate control. In contrast, the ℓ2 error metric aims for average fidelity and hence preserves the subtlety of smooth waveforms better than the ℓ∞ error metric, and it offers fine granularity in rate control; but pure ℓ2-based image coding methods (e.g., JPEG 2000) cannot bound individual errors as the ℓ∞-based methods can. This paper presents a new compression approach to retain the benefits and circumvent the pitfalls of the two error metrics. A common approach in near-lossless image coding is to embed a uniform scalar quantizer of residual errors into a DPCM prediction loop. In the proposed new approach, this uniform scalar quantizer is replaced by a set of context-based ℓ2-optimized quantizers. The optimization criterion is to minimize a weighted sum of the ℓ2 distortion and the entropy while maintaining a strict ℓ∞ error bound. The resulting method obtains good rate-distortion performance in both the ℓ2 and ℓ∞ metrics and also increases the rate granularity. Compared with JPEG 2000, the new method not only guarantees lower ℓ∞ error for all bit rates, but also achieves higher PSNR at relatively high bit rates.
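
    The baseline the paper builds on, a DPCM loop with a uniform scalar quantizer of prediction residuals, can be sketched as follows; the signal values and the bound delta are invented, and entropy coding of the quantizer indices is omitted:

```python
import numpy as np

def near_lossless_encode_decode(x, delta):
    """DPCM loop with a uniform scalar quantizer on prediction residuals.

    With an odd quantizer step of 2*delta + 1 and integer residuals,
    every reconstructed sample satisfies |x - x_hat| <= delta, i.e. a
    strict l-infinity error bound.
    """
    step = 2 * delta + 1
    recon = np.empty_like(x)
    prev = 0  # simple previous-sample predictor
    for i, v in enumerate(x):
        residual = int(v) - prev
        q = int(np.round(residual / step))  # index; entropy-coded in practice
        prev = prev + q * step              # decoder-side reconstruction
        recon[i] = prev
    return recon

x = np.array([100, 104, 99, 250, 251, 10], dtype=int)
delta = 2
x_hat = near_lossless_encode_decode(x, delta)
print(np.max(np.abs(x - x_hat)))  # never exceeds delta
```

The paper's contribution replaces this single uniform quantizer with context-based ℓ2-optimized quantizers while preserving the same per-sample guarantee.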

  5. Dr. Mainte. Integrated simulator of maintenance optimization of LWRs

    International Nuclear Information System (INIS)

    Isobe, Yoshihiro; Sagisaka, Mitsuyuki; Etoh, Junji; Matsunaga, Takashi; Kosaka, Toru; Matsumoto, Satoshi; Yoshimura, Shinobu

    2014-01-01

    Dr. Mainte, an integrated simulator for the maintenance optimization of LWRs (Light Water Reactors), has been developed based on PFM (Probabilistic Fracture Mechanics) analyses. The concept of the simulator is to provide a decision-making system that optimizes maintenance activities for representative components and piping systems in nuclear power plants totally and quantitatively in terms of safety, availability, economic efficiency, environmental impact and social acceptance. For further improvement of safety and availability, the effect of human error on the optimization of plant maintenance activities, and approaches to reducing it, have been studied. (author)

  6. An integrated reliability-based design optimization of offshore towers

    International Nuclear Information System (INIS)

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    After recognizing the uncertainty in parameters such as material, loading and geometry, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful for performing an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis, and sensitivity analysis for both optimization and reliability. The efficiency of an RBDO system depends on these numerical algorithms. In this work, an integrated system of algorithms is proposed to implement the RBDO of offshore towers subjected to extreme wave loading. The numerical strategies interacting with each other to fulfil the RBDO of towers are: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration with an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.
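
    Component (c), the FORM reliability analysis, is typically built around the Hasofer-Lind reliability index. A minimal sketch of the standard HL-RF iteration on a hypothetical linear limit state follows; this is generic FORM, not the actual SAPOS/SQP/FORM programs used by the authors:

```python
import numpy as np

def form_beta_hlrf(g, grad_g, u0, tol=1e-10, max_iter=100):
    """Hasofer-Lind reliability index via the HL-RF iteration.

    Works in standard normal space: finds the design point u* that
    minimizes ||u|| subject to g(u) = 0; beta = ||u*|| and the failure
    probability is approximated as Pf ~ Phi(-beta).
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        # HL-RF update: project onto the linearized limit-state surface
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return float(np.linalg.norm(u))

# Hypothetical linear limit state g(u) = 3 - u1 - u2; exact beta = 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
print(form_beta_hlrf(g, grad_g, [0.0, 0.0]))
```

For a linear limit state the iteration converges in one step; for the nonlinear stress, buckling and frequency limit states of a real tower the same loop runs with gradients from the structural analysis.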

  7. Robust output LQ optimal control via integral sliding modes

    CERN Document Server

    Fridman, Leonid; Bejarano, Francisco Javier

    2014-01-01

    Featuring original research from well-known experts in the field of sliding mode control, this monograph presents new design schemes for implementing LQ control solutions in situations where the output system is the only information provided about the state of the plant. This new design works under the restrictions of matched disturbances without losing its desirable features. On the cutting-edge of optimal control research, Robust Output LQ Optimal Control via Integral Sliding Modes is an excellent resource for both graduate students and professionals involved in linear systems, optimal control, observation of systems with unknown inputs, and automatization. In the theory of optimal control, the linear quadratic (LQ) optimal problem plays an important role due to its physical meaning, and its solution is easily given by an algebraic Riccati equation. This solution turns out to be restrictive, however, because of two assumptions: the system must be free from disturbances and the entire state vector must be kn...

  8. An integrated reliability-based design optimization of offshore towers

    Energy Technology Data Exchange (ETDEWEB)

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)

    2009-10-15

    After recognizing the uncertainty in parameters such as material, loading and geometry, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful for performing an economical design implementation, which includes a reliability analysis and an optimization algorithm. RBDO procedures include structural analysis, reliability analysis, and sensitivity analysis for both optimization and reliability. The efficiency of an RBDO system depends on these numerical algorithms. In this work, an integrated system of algorithms is proposed to implement the RBDO of offshore towers subjected to extreme wave loading. The numerical strategies interacting with each other to fulfil the RBDO of towers are: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration with an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.

  9. Topology Optimization of Building Blocks for Photonic Integrated Circuits

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard; Sigmund, Ole

    2005-01-01

    Photonic integrated circuits are likely candidates as high-speed replacements for the standard electrical integrated circuits of today. However, in order to obtain satisfactory performance, many design problems that until now have resulted in excessively high losses must be resolved. In this work we demonstrate how the method of topology optimization can be used to design a variety of high-performance building blocks for the future circuits.

  10. A long-term, integrated impact assessment of alternative building energy code scenarios in China

    International Nuclear Information System (INIS)

    Yu, Sha; Eom, Jiyong; Evans, Meredydd; Clarke, Leon

    2014-01-01

    China is the second largest building energy user in the world, ranking first and third in residential and commercial energy consumption respectively. Beginning in the early 1980s, the Chinese government has developed a variety of building energy codes to improve building energy efficiency and reduce total energy demand. This paper studies the impact of building energy codes on energy use and CO2 emissions by using a detailed building energy model that represents four distinct climate zones, each with three building types, nested in the long-term integrated assessment framework GCAM. An advanced building stock module, coupled with the building energy model, is developed to reflect the characteristics of the future building stock and its interaction with the development of building energy codes in China. This paper also evaluates the impacts of building codes on building energy demand in the presence of an economy-wide carbon policy. We find that building energy codes would reduce Chinese building energy use by 13–22% depending on the building code scenario, with a similar effect preserved even under the carbon policy. The impact of building energy codes shows regional and sectoral variation due to regionally differentiated responses of heating and cooling services to shell efficiency improvement. - Highlights: • We assessed long-term impacts of building codes and climate policy using GCAM. • Building energy codes would reduce Chinese building energy use by 13–22%. • The impacts of codes on building energy use vary by climate region and sub-sector.

  11. Development of free-piston Stirling engine performance and optimization codes based on Martini simulation technique

    Science.gov (United States)

    Martini, William R.

    1989-01-01

    A FORTRAN computer code is described that can be used to design and optimize a free-displacer, free-piston Stirling engine similar to the RE-1000 engine made by Sunpower. The code contains options for specifying the displacer and power piston motion or for allowing these motions to be calculated by a force balance. The engine load may be a dashpot, inertial compressor, hydraulic pump or linear alternator. Cycle analysis may be done by isothermal or adiabatic analysis. Adiabatic analysis may be done using the Martini moving-gas-node analysis or the Rios second-order Runge-Kutta analysis. Flow loss and heat loss equations are included, as is graphical display of engine motions, pressures and temperatures. Programming for optimizing up to 15 independent dimensions is included. Sample performance results are shown for both specified and unconstrained piston motions, as generated by each of the two Martini analyses. Two sample optimization searches using specified-piston-motion isothermal analysis are shown, one with three adjustable inputs and one with four. Two optimization searches for calculated piston motion are also presented, for three and four adjustable inputs. The effect of leakage is evaluated, and suggestions for further work are given.

  12. Design of pressure vessels using shape optimization: An integrated approach

    Energy Technology Data Exchange (ETDEWEB)

    Carbonari, R.C., E-mail: ronny@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Munoz-Rojas, P.A., E-mail: pablo@joinville.udesc.br [Department of Mechanical Engineering, Universidade do Estado de Santa Catarina, Bom Retiro, Joinville, SC 89223-100 (Brazil); Andrade, E.Q., E-mail: edmundoq@petrobras.com.br [CENPES, PDP/Metodos Cientificos, Petrobras (Brazil); Paulino, G.H., E-mail: paulino@uiuc.edu [Newmark Laboratory, Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, 205 North Mathews Av., Urbana, IL 61801 (United States); Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, 158 Mechanical Engineering Building, 1206 West Green Street, Urbana, IL 61801-2906 (United States); Nishimoto, K., E-mail: knishimo@usp.br [Department of Naval Architecture and Ocean Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil); Silva, E.C.N., E-mail: ecnsilva@usp.br [Department of Mechatronic Engineering, Escola Politecnica da Universidade de Sao Paulo, Av. Prof. Mello Moraes, 2231 Sao Paulo, SP 05508-900 (Brazil)

    2011-05-15

    Previous papers related to the optimization of pressure vessels have considered the optimization of the nozzle independently from the dished end. This approach generates problems such as thickness variation from nozzle to dished end (coupling cylindrical region) and, as a consequence, it reduces the optimality of the final result, which may also be influenced by the boundary conditions. Thus, this work discusses shape optimization of axisymmetric pressure vessels considering an integrated approach in which the entire pressure vessel model is used in conjunction with a multi-objective function that aims to minimize the von Mises mechanical stress from nozzle to head. Representative examples are examined and solutions obtained for the entire vessel considering temperature and pressure loading. It is noteworthy that shapes different from the usual ones are obtained. Even though such shapes may not be profitable considering present manufacturing processes, they may be competitive for future manufacturing technologies, and they contribute to a better understanding of the actual influence of shape on the behavior of pressure vessels. - Highlights: • Shape optimization of the entire pressure vessel considering an integrated approach. • By increasing the number of spline knots, the convergence stability is improved. • The null angle condition gives lower stress values, resulting in a better design. • The cylinder stresses are very sensitive to the cylinder length. • The shape optimization of the entire vessel must take the cylinder length into account.

  13. GEMSFITS: Code package for optimization of geochemical model parameters and inverse modeling

    International Nuclear Information System (INIS)

    Miron, George D.; Kulik, Dmitrii A.; Dmytrieva, Svitlana V.; Wagner, Thomas

    2015-01-01

    Highlights: • Tool for generating consistent parameters against various types of experiments. • Handles a large number of experimental data and parameters (is parallelized). • Has a graphical interface and can perform statistical analysis on the parameters. • Tested on fitting the standard state Gibbs free energies of aqueous Al species. • Example on fitting interaction parameters of mixing models and thermobarometry. - Abstract: GEMSFITS is a new code package for fitting internally consistent input parameters of GEM (Gibbs Energy Minimization) geochemical-thermodynamic models against various types of experimental or geochemical data, and for performing inverse modeling tasks. It consists of the gemsfit2 (parameter optimizer) and gfshell2 (graphical user interface) programs, both accessing a NoSQL database, all developed with flexibility, generality, efficiency, and user friendliness in mind. The parameter optimizer gemsfit2 includes the GEMS3K chemical speciation solver (http://gems.web.psi.ch/GEMS3K), which features a comprehensive suite of non-ideal activity and equation-of-state models of solution phases (aqueous electrolyte, gas and fluid mixtures, solid solutions, (ad)sorption). The gemsfit2 code uses the robust open-source NLopt library for parameter fitting, which provides a selection between several nonlinear optimization algorithms (global, local, gradient-based) and supports large-scale parallelization. The gemsfit2 code can also perform comprehensive statistical analysis of the fitted parameters (basic statistics, sensitivity, Monte Carlo confidence intervals), thus supporting the user with powerful tools for evaluating the quality of the fits and the physical significance of the model parameters. The gfshell2 code provides menu-driven setup of optimization options (data selection, properties to fit and their constraints, measured properties to compare with computed counterparts, and statistics). The practical utility, efficiency, and

  14. A nuclear reload optimization approach using a real coded genetic algorithm with random keys

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.

    2009-01-01

    The fuel reload of a Pressurized Water Reactor is performed whenever the burnup of the fuel assemblies in the reactor core reaches a value such that it is no longer possible to sustain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the core in a way that minimizes the cost-benefit relationship of fuel assembly cost per maximum burnup, while also satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core. For decades the problem was solved manually by experts who used their knowledge and experience to build core configurations and test them to verify that the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys that converts the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload. Four recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the four recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the genetic algorithm with this real-number formulation are shown for the reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
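
    The random-keys transformation described above can be sketched in a few lines: ranking the real-valued genes yields a valid assembly permutation for any chromosome, so real-coded recombination never needs repair operators. The chromosome values here are invented for illustration:

```python
import numpy as np

def decode_random_keys(keys):
    """Random-keys decoding: position i of the permutation is the index
    of the i-th smallest gene, so any real vector decodes to a valid
    assembly loading order."""
    return np.argsort(keys)

# Hypothetical chromosomes for a 6-assembly core region
parent_a = np.array([0.46, 0.91, 0.33, 0.75, 0.05, 0.60])
parent_b = np.array([0.12, 0.58, 0.81, 0.27, 0.94, 0.39])

print(decode_random_keys(parent_a))  # [4 2 0 5 3 1]

# Intermediate recombination of the real genes still decodes to a
# valid permutation -- no repair operator is needed.
child = 0.5 * (parent_a + parent_b)
print(decode_random_keys(child))
```

This is why the discrete, intermediate, linear and extended linear recombination operators can all be applied directly to the real genes: any offspring vector remains decodable into a feasible loading pattern.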

  15. Optimized Irregular Low-Density Parity-Check Codes for Multicarrier Modulations over Frequency-Selective Channels

    Directory of Open Access Journals (Sweden)

    Valérian Mannoni

    2004-09-01

    Full Text Available This paper deals with optimized channel coding for OFDM transmissions (COFDM over frequency-selective channels using irregular low-density parity-check (LDPC codes. Firstly, we introduce a new characterization of the LDPC code irregularity called “irregularity profile.” Then, using this parameterization, we derive a new criterion based on the minimization of the transmission bit error probability to design an irregular LDPC code suited to the frequency selectivity of the channel. The optimization of this criterion is done using the Gaussian approximation technique. Simulations illustrate the good performance of our approach for different transmission channels.

  16. Energy Optimal Path Planning: Integrating Coastal Ocean Modelling with Optimal Control

    Science.gov (United States)

    Subramani, D. N.; Haley, P. J., Jr.; Lermusiaux, P. F. J.

    2016-02-01

A stochastic optimization methodology is formulated for computing energy-optimal paths from among time-optimal paths of autonomous vehicles navigating in a dynamic flow field. To set up the energy optimization, the relative vehicle speed and headings are considered to be stochastic, and new stochastic Dynamically Orthogonal (DO) level-set equations that govern their stochastic time-optimal reachability fronts are derived. Their solution provides the distribution of time-optimal reachability fronts and corresponding distribution of time-optimal paths. An optimization is then performed on the vehicle's energy-time joint distribution to select the energy-optimal paths for each arrival time, among all stochastic time-optimal paths for that arrival time. The accuracy and efficiency of the DO level-set equations for solving the governing stochastic level-set reachability fronts are quantitatively assessed, including comparisons with independent semi-analytical solutions. Energy-optimal missions are studied in wind-driven barotropic quasi-geostrophic double-gyre circulations, and in realistic data-assimilative re-analyses of multiscale coastal ocean flows. The latter re-analyses are obtained from multi-resolution 2-way nested primitive-equation simulations of tidal-to-mesoscale dynamics in the Middle Atlantic Bight and Shelfbreak Front region. The effects of tidal currents, strong wind events, coastal jets, and shelfbreak fronts on the energy-optimal paths are illustrated and quantified. Results showcase the opportunities for longer-duration missions that intelligently utilize the ocean environment to save energy, rigorously integrating ocean forecasting with optimal control of autonomous vehicles.

  17. Effect of difference between group constants processed by codes TIMS and ETOX on integral quantities

    International Nuclear Information System (INIS)

    Takano, Hideki; Ishiguro, Yukio; Matsui, Yasushi.

    1978-06-01

Group constants of 235U, 238U, 239Pu, 240Pu and 241Pu have been produced with the processing code TIMS using the evaluated nuclear data of JENDL-1. The temperature- and composition-dependent self-shielding factors have been calculated for two cases, with and without considering the mutual interference of resonant nuclei. Using the group-constant set produced by the TIMS code, the integral quantities, i.e. multiplication factor, Na-void reactivity effect and Doppler reactivity effect, are calculated and compared with those calculated using the cross-section set produced by the ETOX code, in order to evaluate the accuracy of the approximate calculation method in ETOX. The self-shielding factors in each energy group differ considerably between the two codes. For the fast reactor assemblies under study, however, the integral quantities calculated with the two sets are in good agreement with each other, because of an eventual cancellation of errors. (auth.)

  18. A Study of Performance in Low-Power Tokamak Reactor with Integrated Predictive Modeling Code

    International Nuclear Information System (INIS)

    Pianroj, Y.; Onjun, T.; Suwanna, S.; Picha, R.; Poolyarat, N.

    2009-07-01

Full text: A fusion hybrid or a low-power tokamak reactor with a small fusion power output is presented as another useful application of nuclear fusion. Such a tokamak can be used for fuel breeding, high-level waste transmutation, hydrogen production at high temperature, and testing of nuclear fusion technology components. In this work, an investigation of the plasma performance in a small-fusion-power design is carried out using the BALDUR predictive integrated modeling code. The simulations of the plasma performance in this design are carried out using the empirical Mixed Bohm/gyro-Bohm (B/gB) transport model, whereas the pedestal temperature model is based on magnetic and flow shear (δ ∝ ρs²) stabilization pedestal width scaling. The preliminary results using this core transport model show that the central ion and electron temperatures are rather pessimistic. To improve the performance, an optimization is carried out by varying parameters such as the plasma current and the auxiliary heating power, which results in some improvement of plasma performance

  19. Multiple Description Coding Based on Optimized Redundancy Removal for 3D Depth Map

    Directory of Open Access Journals (Sweden)

    Sen Han

    2016-06-01

Full Text Available Multiple description (MD) coding is a promising alternative for the robust transmission of information over error-prone channels. In 3D image technology, the depth map represents the distance between the camera and objects in the scene. Using the depth map combined with the existing multiview image, images of any virtual viewpoint position can be synthesized efficiently, which allows more realistic 3D scenes to be displayed. Unlike the conventional 2D texture image, the depth map contains a great deal of spatially redundant information that is not necessary for view synthesis but may waste compressed bits, especially when MD coding is used for robust transmission. In this paper, we focus on redundancy removal for MD coding in the DCT (discrete cosine transform) domain. In view of the characteristics of the DCT coefficients, at the encoder a Lagrangian optimization approach is designed to determine the amount of high-frequency coefficients in the DCT domain to be removed. To keep the computational complexity low, entropy is adopted to estimate the bit rate in the optimization. Furthermore, at the decoder, adaptive zero-padding is applied to reconstruct the depth map when some information is lost. The experimental results show that, compared to the corresponding scheme, the proposed method demonstrates better central and side rate-distortion performance.
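A toy 1-D version of the encoder-side idea (zero out trailing DCT coefficients by minimizing a Lagrangian cost) might look as follows; the function names and the crude `distortion + lam * kept` cost are illustrative stand-ins for the paper's entropy-based rate estimate:

```python
import math

def dct2(x):
    """Unnormalized 1-D DCT-II."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of dct2 (scaled DCT-III)."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi * (n + 0.5) * k / N)
                            for k in range(1, N))) * 2 / N for n in range(N)]

def lagrange_truncation(x, lam):
    """Choose how many trailing (high-frequency) DCT coefficients to zero
    by minimizing J = distortion + lam * (coefficients kept)."""
    X = dct2(x)
    N = len(X)
    best = None
    for kept in range(1, N + 1):
        rec = idct2(X[:kept] + [0.0] * (N - kept))
        dist = sum((a - b) ** 2 for a, b in zip(x, rec))
        J = dist + lam * kept
        if best is None or J < best[0]:
            best = (J, kept, rec)
    return best[1], best[2]

kept, approx = lagrange_truncation([1.0, 2.0, 3.0, 4.0], 0.5)
print(kept)  # → 2: the two high-frequency coefficients cost more rate than they save distortion
```

Raising `lam` penalizes rate more heavily, so fewer coefficients survive; lowering it keeps the reconstruction closer to the original block.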

  20. The SWAN-SCALE code for the optimization of critical systems

    International Nuclear Information System (INIS)

    Greenspan, E.; Karni, Y.; Regev, D.; Petrie, L.M.

    1999-01-01

The SWAN optimization code was recently developed to identify the maximum value of k_eff for a given mass of fissile material when in combination with other specified materials. The optimization process is iterative; in each iteration SWAN varies the zone-dependent concentration of the system constituents. This change is guided by the equal volume replacement effectiveness functions (EVREF) that SWAN generates using first-order perturbation theory. Previously, SWAN did not have provisions to account for the effect of the composition changes on neutron cross-section resonance self-shielding; it used the cross sections corresponding to the initial system composition. In support of the US Department of Energy Nuclear Criticality Safety Program, the authors recently removed the limitation on resonance self-shielding by coupling SWAN with the SCALE code package. The purpose of this paper is to briefly describe the resulting SWAN-SCALE code and to illustrate the effect that neutron cross-section self-shielding could have on the maximum k_eff and on the corresponding system composition

  1. Final Report. An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group

    Energy Technology Data Exchange (ETDEWEB)

    Rosenthal, Andrew [New Mexico State Univ., Las Cruces, NM (United States)

    2013-12-30

The DOE grant, “An Integrated Partnership to Create and Lead the Solar Codes and Standards Working Group,” to New Mexico State University created the Solar America Board for Codes and Standards (Solar ABCs). From 2007 to 2013, with funding from this grant, Solar ABCs identified current issues, established a dialogue among key stakeholders, and catalyzed appropriate activities to support the development of codes and standards that facilitated the installation of high quality, safe photovoltaic systems. Solar ABCs brought the following resources to the PV stakeholder community: formal coordination in the planning or revision of interrelated codes and standards, removing “stove pipes” that have only roofing experts working on roofing codes, PV experts on PV codes, fire enforcement experts working on fire codes, etc.; a conduit through which all interested stakeholders were able to see the steps being taken in the development or modification of codes and standards and participate directly in the processes; a central clearing house for new documents, standards, proposed standards, analytical studies, and recommendations of best practices available to the PV community; a forum of experts that invites and welcomes all interested parties into the process of performing studies, evaluating results, and building consensus on standards and code-related topics that affect all aspects of the market; and a biennial gap analysis to formally survey the PV community to identify needs that are unmet and inhibiting the market and necessary technical developments.

  2. REopt: A Platform for Energy System Integration and Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Simpkins, T.; Cutler, D.; Anderson, K.; Olis, D.; Elgqvist, E.; Callahan, M.; Walker, A.

    2014-08-01

REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
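The flavour of such techno-economic screening can be shown with a deliberately tiny model; every number below (tariff, load, levelized cost) is invented, and REopt itself solves a far richer mixed-integer program:

```python
# Toy sizing sketch: pick the capacity of a flat-output energy asset that
# minimizes energy cost against a time-of-use tariff (illustrative values).

LOAD_KW = [30, 50, 80, 60]              # average load in each period
GRID_PRICE = [0.10, 0.12, 0.30, 0.12]   # $/kWh time-of-use tariff
ASSET_LCOE = 0.15                       # $/kWh levelized cost of the asset

def period_cost(asset_kw):
    cost = asset_kw * len(LOAD_KW) * ASSET_LCOE    # asset energy, every period
    for load, price in zip(LOAD_KW, GRID_PRICE):
        cost += max(load - asset_kw, 0.0) * price  # residual grid purchase
    return cost

best_kw = min(range(101), key=period_cost)
print(best_kw, round(period_cost(best_kw), 2))  # → 30 39.0
```

The optimum sits where the marginal asset cost (0.60 $ per kW over the four periods) first exceeds the marginal grid saving; a MILP formalizes exactly this kind of trade-off while adding storage, incentives, and dispatch.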

  3. ITS - The integrated TIGER series of coupled electron/photon Monte Carlo transport codes

    International Nuclear Information System (INIS)

    Halbleib, J.A.; Mehlhorn, T.A.

    1985-01-01

    The TIGER series of time-independent coupled electron/photon Monte Carlo transport codes is a group of multimaterial, multidimensional codes designed to provide a state-of-the-art description of the production and transport of the electron/photon cascade. The codes follow both electrons and photons from 1.0 GeV down to 1.0 keV, and the user has the option of combining the collisional transport with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence. Source particles can be either electrons or photons. The most important output data are (a) charge and energy deposition profiles, (b) integral and differential escape coefficients for both electrons and photons, (c) differential electron and photon flux, and (d) pulse-height distributions for selected regions of the problem geometry. The base codes of the series differ from one another primarily in their dimensionality and geometric modeling. They include (a) a one-dimensional multilayer code, (b) a code that describes the transport in two-dimensional axisymmetric cylindrical material geometries with a fully three-dimensional description of particle trajectories, and (c) a general three-dimensional transport code which employs a combinatorial geometry scheme. These base codes were designed primarily for describing radiation transport for those situations in which the detailed atomic structure of the transport medium is not important. For some applications, it is desirable to have a more detailed model of the low energy transport. The system includes three additional codes that contain a more elaborate ionization/relaxation model than the base codes. Finally, the system includes two codes that combine the collisional transport of the multidimensional base codes with transport in macroscopic electric and magnetic fields of arbitrary spatial dependence
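For orientation, the simplest possible Monte Carlo transport estimate, uncollided photon transmission through a slab, fits in a few lines (made-up attenuation coefficient; ITS of course treats the full coupled electron/photon cascade):

```python
import math
import random

random.seed(1)

MU = 0.2          # total attenuation coefficient, 1/cm (illustrative)
THICKNESS = 5.0   # slab thickness, cm

def transmitted_fraction(n):
    """Fraction of source photons whose sampled free path exceeds the slab."""
    hits = 0
    for _ in range(n):
        path = -math.log(1.0 - random.random()) / MU  # exponential free path
        if path > THICKNESS:
            hits += 1
    return hits / n

estimate = transmitted_fraction(100_000)
print(round(estimate, 3), round(math.exp(-MU * THICKNESS), 3))  # MC vs analytic
```

The Monte Carlo estimate converges to the analytic attenuation e^(-μt); the production codes add scattering physics, secondary particles, geometry, and tallies on top of this sampling core.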

  4. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    Science.gov (United States)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.
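A single leaky integrate-and-fire neuron already shows the raw signal such a readout must detect: a brief extra input adds spikes during the stimulation window. All parameters below are illustrative, not those of the study:

```python
def simulate_lif(i_ext, extra=0.0, t_on=200, t_off=300, steps=1000, dt=0.1):
    """Euler-integrated leaky integrate-and-fire neuron; returns spike count."""
    tau, v_th, v_reset = 10.0, 1.0, 0.0
    v, spikes = 0.0, 0
    for t in range(steps):
        drive = i_ext + (extra if t_on <= t < t_off else 0.0)
        v += dt * (drive - v) / tau        # leaky integration
        if v >= v_th:                      # threshold crossing -> spike
            v = v_reset
            spikes += 1
    return spikes

base = simulate_lif(1.2)               # unperturbed firing
stim = simulate_lif(1.2, extra=0.5)    # brief single-cell stimulation
print(base, stim)  # the perturbed run fires extra spikes
```

In the recurrent-network setting of the paper this small excess must be read out against background fluctuations, which is why the detection rate depends on how the readout weights specific neurons.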

  5. Nuclear Energy Advanced Modeling and Simulation (NEAMS) Waste Integrated Performance and Safety Codes (IPSC) : FY10 development and integration.

    Energy Technology Data Exchange (ETDEWEB)

    Criscenti, Louise Jacqueline; Sassani, David Carl; Arguello, Jose Guadalupe, Jr.; Dewers, Thomas A.; Bouchard, Julie F.; Edwards, Harold Carter; Freeze, Geoffrey A.; Wang, Yifeng; Schultz, Peter Andrew

    2011-02-01

This report describes the progress in fiscal year 2010 in developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with robust verification, validation, and software quality requirements. Waste IPSC activities in fiscal year 2010 focused on specifying a challenge problem to demonstrate proof of concept, developing a verification and validation plan, and performing an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. This year-end progress report documents the FY10 status of acquisition, development, and integration of thermal-hydrologic-chemical-mechanical (THCM) code capabilities, frameworks, and enabling tools and infrastructure.

  6. Investigation of Optimal Integrated Circuit Raster Image Vectorization Method

    Directory of Open Access Journals (Sweden)

    Leonas Jasevičius

    2011-03-01

Full Text Available Visual analysis of an integrated circuit layer requires a raster image vectorization stage to extract layer topology data for CAD tools. In this paper vectorization problems of raster IC layer images are presented. Various algorithms for line extraction from raster images and their properties are discussed. An optimal raster image vectorization method was developed that allows common vectorization algorithms to be used while achieving the best possible match between the extracted vector data and perfect manual vectorization results. To develop the optimal method, the dependence of vectorized data quality on the initial raster image skeleton filter selection was assessed. Article in Lithuanian

  7. REopt: A Platform for Energy System Integration and Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Katherine H. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cutler, Dylan S. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Olis, Daniel R. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Elgqvist, Emma M. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Li, Xiangkun [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Laws, Nicholas D. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); DiOrio, Nicholas A. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Walker, H. A [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-09-22

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  8. The light-water-reactor version of the Uranus integral fuel-rod code

    International Nuclear Information System (INIS)

    Moreno, A.; Lassmann, K.

    1977-01-01

The LWR version of the Uranus code, a digital computer programme for the thermal and mechanical analysis of fuel rods, is presented. Material properties are discussed and their effect on integral fuel rod behaviour is elaborated via Uranus results for some carefully selected reference experiments. The numerical results do not represent post-irradiation analysis of in-pile experiments; rather, they illustrate typical and diverse Uranus capabilities. The performance test shows that Uranus is reliable and efficient; thus the code is a most valuable tool in fuel rod analysis work. K. Lassmann developed the LWR version of the Uranus code; material properties were reviewed and supplied by A. Moreno. (author)

  9. Integrated transport code system for a multicomponent plasma in a gas dynamic trap

    International Nuclear Information System (INIS)

    Anikeev, A.V.; Karpushov, A.N.; Noak, K.; Strogalova, S.L.

    2000-01-01

This report is focused on the development of the theoretical and numerical models of multicomponent high-β plasma confinement and transport in the gas-dynamic trap (GDT). In order to simulate the plasma behavior in the GDT as well as that in the GDT-based neutron source, the Integrated Transport Code System is developed from existing stand-alone codes calculating the target plasma, the fast ions and the neutral gas in the GDT. The code system considers the full dependence of the transport phenomena on space, time, energy and angle variables as well as the interactions between the particle fields

  10. Three-dimensional polarization marked multiple-QR code encryption by optimizing a single vectorial beam

    Science.gov (United States)

    Lin, Chao; Shen, Xueju; Hua, Binbin; Wang, Zhisong

    2015-10-01

We demonstrate the feasibility of three-dimensional (3D) polarization multiplexing by optimizing a single vectorial beam using a multiple-signal window multiple-plane (MSW-MP) phase retrieval algorithm. Original messages represented with multiple quick response (QR) codes are first partitioned into a series of subblocks. Then, each subblock is marked with a specific polarization state and randomly distributed in 3D space with both longitudinal and transversal adjustable freedoms. A generalized 3D polarization mapping protocol is established to generate a 3D polarization key. Finally, the multiple-QR code is encrypted into one phase-only mask and one polarization-only mask based on the modified Gerchberg-Saxton (GS) algorithm. We take the polarization mask as the ciphertext and the phase-only mask as an additional dimension of the key. Only when both the phase key and the 3D polarization key are correct can the original messages be recovered. We verify our proposal with both simulation and experimental evidence.
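The Gerchberg-Saxton step at the heart of such schemes alternates amplitude constraints between the mask plane and the Fourier plane. A minimal 1-D sketch (toy target, naive DFT; the paper's MSW-MP variant handles multiple windows and planes):

```python
import cmath
import math
import random

random.seed(0)

def dft(x, inverse=False):
    """Naive discrete Fourier transform (O(N^2), fine for a toy)."""
    N, s = len(x), 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

target = [3.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # desired |spectrum|
field = [cmath.exp(2j * math.pi * random.random()) for _ in target]  # phase-only start

errors = []
for _ in range(50):
    spec = dft(field)
    errors.append(math.sqrt(sum((abs(s) - t) ** 2 for s, t in zip(spec, target))))
    # impose the target amplitude, keep the phase
    spec = [t * cmath.exp(1j * cmath.phase(s)) for s, t in zip(spec, target)]
    # back to the mask plane: keep phase only (unit amplitude)
    field = [v / abs(v) if abs(v) > 1e-12 else 1.0 for v in dft(spec, inverse=True)]

print(round(errors[0], 3), round(errors[-1], 3))  # GS error does not increase
```

Each pass projects onto one amplitude constraint while preserving phase, which is what makes the Fourier-domain error non-increasing across iterations.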

  11. Optimizing integrated airport surface and terminal airspace operations under uncertainty

    Science.gov (United States)

    Bosson, Christabelle S.

    In airports and surrounding terminal airspaces, the integration of surface, arrival and departure scheduling and routing have the potential to improve the operations efficiency. Moreover, because both the airport surface and the terminal airspace are often altered by random perturbations, the consideration of uncertainty in flight schedules is crucial to improve the design of robust flight schedules. Previous research mainly focused on independently solving arrival scheduling problems, departure scheduling problems and surface management scheduling problems and most of the developed models are deterministic. This dissertation presents an alternate method to model the integrated operations by using a machine job-shop scheduling formulation. A multistage stochastic programming approach is chosen to formulate the problem in the presence of uncertainty and candidate solutions are obtained by solving sample average approximation problems with finite sample size. The developed mixed-integer-linear-programming algorithm-based scheduler is capable of computing optimal aircraft schedules and routings that reflect the integration of air and ground operations. The assembled methodology is applied to a Los Angeles case study. To show the benefits of integrated operations over First-Come-First-Served, a preliminary proof-of-concept is conducted for a set of fourteen aircraft evolving under deterministic conditions in a model of the Los Angeles International Airport surface and surrounding terminal areas. Using historical data, a representative 30-minute traffic schedule and aircraft mix scenario is constructed. The results of the Los Angeles application show that the integration of air and ground operations and the use of a time-based separation strategy enable both significant surface and air time savings. The solution computed by the optimization provides a more efficient routing and scheduling than the First-Come-First-Served solution. 
Additionally, a data driven analysis is

  12. Integrated decision making for the optimal bioethanol supply chain

    International Nuclear Information System (INIS)

    Corsano, Gabriela; Fumero, Yanina; Montagna, Jorge M.

    2014-01-01

Highlights: • Optimal allocation, design and production planning of integrated ethanol plants is considered. • Mixed Integer Programming model is presented for solving the integration problem. • Different tradeoffs can be assessed and analyzed. • The modeling framework represents a useful tool for guiding decision making. - Abstract: Bioethanol production poses different challenges that require an integrated approach. Usually previous works have focused on specific perspectives of the global problem. On the contrary, bioethanol in particular, and biofuels in general, require an integrated decision-making framework that takes into account the needs and concerns of the different members involved in its supply chain. In this work, a Mixed Integer Linear Programming (MILP) model for the optimal allocation, design and production planning of integrated ethanol/yeast plants is considered. The proposed formulation addresses the relations between different aspects of the bioethanol supply chain and provides an efficient tool to assess the global operation of the supply chain taking into account different points of view. The model proposed in this work simultaneously determines the structure of a three-echelon supply chain (raw material sites, production facilities and customer zones), the design of each installed plant and operational considerations through production campaigns. Yeast production is considered in order to reduce the negative environmental impact caused by bioethanol residues. Several cases are presented in order to assess the approach capabilities and to evaluate the tradeoffs among all the decisions

  13. Tramp ship routing and scheduling with integrated bunker optimization

    DEFF Research Database (Denmark)

    Vilhelmsen, Charlotte; Lusby, Richard Martin; Larsen, Jesper

    2014-01-01

    is referred to as bunker and bunker costs constitute a significant part of the daily operating costs. There can be great variations in bunker prices across bunker ports so it is important to carefully plan bunkering for each ship. As ships operate 24 hours a day, they must refuel during operations. Therefore...... and scheduling phase and present a mixed integer programming formulation for the integrated problem of optimally routing, scheduling and bunkering a tramp fleet. Aside from the integration of bunker, this model also extends standard tramp formulations by using load dependent costs, speed and bunker consumption...

  14. Adaptive treatment-length optimization in spatiobiologically integrated radiotherapy

    Science.gov (United States)

    Ajdari, Ali; Ghate, Archis; Kim, Minsun

    2018-04-01

    Recent theoretical research on spatiobiologically integrated radiotherapy has focused on optimization models that adapt fluence-maps to the evolution of tumor state, for example, cell densities, as observed in quantitative functional images acquired over the treatment course. We propose an optimization model that adapts the length of the treatment course as well as the fluence-maps to such imaged tumor state. Specifically, after observing the tumor cell densities at the beginning of a session, the treatment planner solves a group of convex optimization problems to determine an optimal number of remaining treatment sessions, and a corresponding optimal fluence-map for each of these sessions. The objective is to minimize the total number of tumor cells remaining (TNTCR) at the end of this proposed treatment course, subject to upper limits on the biologically effective dose delivered to the organs-at-risk. This fluence-map is administered in future sessions until the next image is available, and then the number of sessions and the fluence-map are re-optimized based on the latest cell density information. We demonstrate via computer simulations on five head-and-neck test cases that such adaptive treatment-length and fluence-map planning reduces the TNTCR and increases the biological effect on the tumor while employing shorter treatment courses, as compared to only adapting fluence-maps and using a pre-determined treatment course length based on one-size-fits-all guidelines.

  15. Two-dimensional core calculation research for fuel management optimization based on CPACT code

    International Nuclear Information System (INIS)

    Chen Xiaosong; Peng Lianghui; Gang Zhi

    2013-01-01

Fuel management optimization requires rapid assessment of core loading patterns; commonly used methods include the two-dimensional diffusion nodal method, the perturbation method, the neural network method, etc. A two-dimensional loading-pattern evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. An axial buckling term, introduced to simulate axial leakage, was searched in sub-burnup sections to correct the two-dimensional core diffusion calculation results. Meanwhile, in order to obtain better accuracy, the equivalent volume weighting method for the control rod assembly cross-section was improved. (authors)

  16. Fluence-modulated radiotherapy with segmentation integrated into the optimization

    International Nuclear Information System (INIS)

    Baer, W.; Alber, M.; Nuesslin, F.

    2003-01-01

On the basis of two clinical cases, we present fluence-modulated radiotherapy with a sequencer integrated into the optimization of our treatment-planning software HYPERION. In each case, we achieved simple relations for the dependence of the total number of segments on the complexity of the sequencing, as well as for the dependence of the dose-distribution quality on the number of segments. For both clinical cases, it was possible to obtain treatment plans that complied with the clinical demands on dose distribution and number of segments. Also, compared to the widespread concept of equidistant steps, our method of sequencing with fluence steps of variable size led to a significant reduction of the number of segments, while maintaining the quality of the dose distribution. Our findings substantiate the value of the integration of the sequencer into the optimization for the clinical efficiency of IMRT

  17. Integrals of Motion for Discrete-Time Optimal Control Problems

    OpenAIRE

    Torres, Delfim F. M.

    2003-01-01

    We obtain a discrete time analog of E. Noether's theorem in Optimal Control, asserting that integrals of motion associated to the discrete time Pontryagin Maximum Principle can be computed from the quasi-invariance properties of the discrete time Lagrangian and discrete time control system. As corollaries, results for first-order and higher-order discrete problems of the calculus of variations are obtained.
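In the simplest first-order calculus-of-variations setting, such a result has the following familiar shape (a standard discrete Noether statement, sketched here for orientation; the paper itself works with the discrete-time Pontryagin Maximum Principle):

```latex
% Discrete Euler--Lagrange equation for a discrete Lagrangian L_d(q_k, q_{k+1}):
\[
  D_1 L_d(q_k, q_{k+1}) + D_2 L_d(q_{k-1}, q_k) = 0 .
\]
% If L_d is invariant under the shift q_k \mapsto q_k + \varepsilon v, so that
% D_1 L_d \cdot v + D_2 L_d \cdot v = 0, then the discrete momentum
\[
  J_k = D_2 L_d(q_{k-1}, q_k) \cdot v
\]
% satisfies J_{k+1} - J_k = 0 along solutions: an integral of motion.
```

Combining the invariance identity with the discrete Euler-Lagrange equation gives J_{k+1} - J_k = (D_2 + D_1) L_d(q_k, q_{k+1}) · v = 0, which is the conservation statement.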

  18. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    This volume provides a listing of the BNW-II dry/wet ammonia heat rejection optimization code and is an appendix to Volume I which gives a narrative description of the code's algorithms as well as logic, input and output information.

  19. An effective coded excitation scheme based on a predistorted FM signal and an optimized digital filter

    DEFF Research Database (Denmark)

    Misaridis, Thanasis; Jensen, Jørgen Arendt

    1999-01-01

    This paper presents a coded excitation imaging system based on a predistorted FM excitation and a digital compression filter designed for medical ultrasonic applications, in order to preserve both axial resolution and contrast. In radars, optimal Chebyshev windows efficiently weight a nearly...... as with pulse excitation (about 1.5 lambda), depending on the filter design criteria. The axial sidelobes are below -40 dB, which is the noise level of the measuring imaging system. The proposed excitation/compression scheme shows good overall performance and stability to the frequency shift due to attenuation...... be removed by weighting. We show that by using a predistorted chirp with amplitude or phase shaping for amplitude ripple reduction and a correlation filter that accounts for the transducer's natural frequency weighting, output sidelobe levels of -35 to -40 dB are directly obtained. When an optimized filter...
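The core of any such scheme, chirp excitation followed by correlation (matched-filter) compression, fits in a short sketch; this is the plain unweighted case, whereas the paper's predistortion and optimized filter push the sidelobes well below the raw level:

```python
import math

N, F0, F1 = 256, 0.05, 0.45    # samples; start/stop normalized frequency
chirp = [math.cos(2 * math.pi * (F0 * n + (F1 - F0) * n * n / (2 * N)))
         for n in range(N)]

def xcorr(sig, ref):
    """Full cross-correlation of sig with ref (matched filtering)."""
    L = len(sig) + len(ref) - 1
    out = []
    for lag in range(L):
        acc = 0.0
        for i, r in enumerate(ref):
            j = lag - (len(ref) - 1) + i
            if 0 <= j < len(sig):
                acc += sig[j] * r
        out.append(acc)
    return out

compressed = xcorr(chirp, chirp)
peak = max(range(len(compressed)), key=lambda i: abs(compressed[i]))
print(peak == len(compressed) // 2)  # → True: mainlobe sits at zero lag
```

The long transmitted pulse collapses into a narrow mainlobe at zero lag; shaping the chirp amplitude or the filter weighting then trades a slight mainlobe broadening for much lower range sidelobes.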

  20. An Order Coding Genetic Algorithm to Optimize Fuel Reloads in a Nuclear Boiling Water Reactor

    International Nuclear Information System (INIS)

    Ortiz, Juan Jose; Requena, Ignacio

    2004-01-01

    A genetic algorithm is used to optimize the nuclear fuel reload for a boiling water reactor: an order coding is proposed for the chromosomes, together with appropriate crossover and mutation operators. The fitness function was designed so that the genetic algorithm creates fuel reloads that satisfy the constraints on the radial power peaking factor, the minimum critical power ratio, and the maximum linear heat generation rate while optimizing the effective multiplication factor at the beginning and end of the cycle. To find the values of these variables, a neural network trained on the behavior of a reactor simulator was used, which greatly decreases the computation time of the search process. We validated this method with data from five cycles of the Laguna Verde Nuclear Power Plant in Mexico.
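
    The order coding above pairs a permutation chromosome with permutation-safe operators. As an illustrative sketch (the record does not specify the operators; order crossover and swap mutation are assumed here as standard choices for permutation encodings):

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX): copy a random slice from parent 1, then fill
    the remaining positions with the missing genes in parent-2 order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]
    fill = [g for g in p2 if g not in child]
    for i in range(n):
        if child[i] is None:
            child[i] = fill.pop(0)
    return child

def swap_mutation(perm, rate=0.1):
    """Swap two positions with probability `rate`; keeps a valid permutation."""
    perm = perm[:]
    if random.random() < rate:
        i, j = random.sample(range(len(perm)), 2)
        perm[i], perm[j] = perm[j], perm[i]
    return perm
```

    Both operators always return a valid permutation, so every chromosome remains a legal loading pattern.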

  1. A model for optimization of process integration investments under uncertainty

    International Nuclear Information System (INIS)

    Svensson, Elin; Stroemberg, Ann-Brith; Patriksson, Michael

    2011-01-01

    The long-term economic outcome of energy-related industrial investment projects is difficult to evaluate because of uncertain energy market conditions. In this article, a general, multistage, stochastic programming model for the optimization of investments in process integration and industrial energy technologies is proposed. The problem is formulated as a mixed-binary linear programming model where uncertainties are modelled using a scenario-based approach. The objective is to maximize the expected net present value of the investments, which enable heat savings and decreased energy imports or increased energy exports at an industrial plant. The proposed modelling approach enables long-term planning of industrial, energy-related investments through the simultaneous optimization of immediate and later decisions. The stochastic programming approach is also suitable for modelling possibly complex process integration constraints. The general model formulation presented here is a suitable basis for more specialized case studies dealing with optimization of investments in energy efficiency. -- Highlights: → Stochastic programming approach to long-term planning of process integration investments. → Extensive mathematical model formulation. → Multi-stage investment decisions and scenario-based modelling of uncertain energy prices. → Results illustrate how investments made now affect later investment and operation opportunities. → Approach for evaluation of robustness with respect to variations in the probability distribution.
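
    The core of such a model can be illustrated with a deliberately tiny, single-stage version: choose binary investments that maximize expected savings minus capital cost over price scenarios (a toy sketch, not the article's mixed-binary multistage formulation):

```python
from itertools import product

def expected_npv(decisions, costs, scenarios):
    """Expected net present value of binary investment decisions.
    scenarios: list of (probability, per-investment savings) pairs."""
    capex = sum(c for d, c in zip(decisions, costs) if d)
    e_sav = sum(p * sum(d * s for d, s in zip(decisions, sav))
                for p, sav in scenarios)
    return e_sav - capex

def best_portfolio(costs, scenarios):
    """Brute-force search over all 2^n investment portfolios."""
    return max(product([0, 1], repeat=len(costs)),
               key=lambda d: expected_npv(d, costs, scenarios))
```

    A real instance would replace the brute-force search with a mixed-binary linear programming solver and add the later-stage operating decisions per scenario.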

  2. Outage Analysis and Optimization of SWIPT in Network-Coded Two-Way Relay Networks

    Directory of Open Access Journals (Sweden)

    Ruihong Jiang

    2017-01-01

    This paper investigates the outage performance of simultaneous wireless information and power transfer (SWIPT) in network-coded two-way relay systems, where a relay first harvests energy from the signals transmitted by two sources and then uses the harvested energy to forward the received information to the two sources. We consider two transmission protocols: the power splitting two-way relay (PS-TWR) and time switching two-way relay (TS-TWR) protocols. We present two explicit expressions for the system outage probability of the two protocols and further derive approximate expressions for them in the high- and low-SNR cases. To explore the system performance limits, two optimization problems are formulated to minimize the system outage probability. Since the problems are nonconvex and have no known solution methods, a genetic algorithm (GA) based algorithm is designed. Numerical and simulation results validate our theoretical analysis. It is shown that, by jointly optimizing the time assignment and SWIPT receiver parameters, a great performance gain can be achieved for both PS-TWR and TS-TWR. Moreover, the optimized PS-TWR always outperforms the optimized TS-TWR in terms of outage performance. Additionally, the effects of parameters including relay location and transmit powers are also discussed, which provides some insights for SWIPT-enabled two-way relay networks.
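
    The GA-based search over, e.g., the power-splitting ratio can be sketched generically (an illustrative real-coded GA minimizing a stand-in objective; the actual outage expressions are derived in the paper):

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, seed=1):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism, over a single bounded variable."""
    random.seed(seed)
    lo, hi = bounds
    P = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        nxt = P[:2]                                  # elitism: keep best two
        while len(nxt) < pop:
            a = min(random.sample(P, 3), key=f)      # tournament selection
            b = min(random.sample(P, 3), key=f)
            x = a + random.uniform(-0.3, 1.3) * (b - a)   # blend crossover
            x += random.gauss(0, 0.02 * (hi - lo))        # Gaussian mutation
            nxt.append(min(max(x, lo), hi))          # clip to bounds
        P = nxt
    return min(P, key=f)
```

    In the paper's setting, `f` would be the (nonconvex) system outage probability as a function of the time assignment or splitting ratio.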

  3. A 1ST Step Integration of the Restructured MELCOR for the MIDAS Computer Code

    International Nuclear Information System (INIS)

    Park, S. H.; Kim, D. H.; Cho, S. W.

    2006-01-01

    KAERI is developing a localized severe accident code, MIDAS, based on MELCOR. MELCOR uses pointer variables for fixed-size storage management to save its data, and passes data through two levels of indirection, so the meaning of a variable is not apparent from the variable itself and the data-passing scheme must be understood first. This approach degrades the readability, maintainability and portability of the code, whereas a localized severe accident analysis code needs a convenient method of data handling. Therefore, new features of FORTRAN90 such as dynamic allocation have been used for the restructuring. Restructuring the data saving and transfer methods of the existing code makes the code easier to understand. Before restructuring the entire code, a restructuring of each package was developed and tested, and the restructured packages were then integrated one by one. In this paper, the integration scope includes the BUR, CF, CVH, DCH, EDF, ESF, MP, SPR, TF and TP packages; most of them use data within their own package, and a few share data with other packages. The verification was done by comparing the results before and after the restructuring.

  4. A k-distribution-based radiation code and its computational optimization for an atmospheric general circulation model

    International Nuclear Information System (INIS)

    Sekiguchi, Miho; Nakajima, Teruyuki

    2008-01-01

    The gas absorption process scheme in the broadband radiative transfer code 'mstrn8', which is used to calculate atmospheric radiative transfer efficiently in a general circulation model, is improved. Three major improvements are made. The first is an update of the database of line absorption parameters and the continuum absorption model. The second is a change to the definition of the selection rule for gas absorption used to choose which absorption bands to include. The last is an upgrade of the optimization method used to decrease the number of quadrature points used for numerical integration in the correlated k-distribution approach, thereby realizing higher computational efficiency without losing accuracy. The new radiation package, termed 'mstrnX', computes radiation fluxes and heating rates with errors less than 0.6 W/m2 and 0.3 K/day, respectively, through the troposphere and the lower stratosphere for any of the standard AFGL atmospheres. A serious cold bias problem of an atmospheric general circulation model using the ancestor code 'mstrn8' is almost solved by the upgrade to 'mstrnX'.
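
    The quadrature-point reduction rests on the correlated k-distribution idea: replace the rapidly oscillating spectral integral by a smooth integral over the cumulative probability g of the sorted absorption coefficients, which only a few quadrature points can resolve. A minimal sketch on a synthetic absorption spectrum (illustrative; not mstrnX's implementation):

```python
import numpy as np

def transmittance_lbl(k, u):
    """'Line-by-line' band transmittance: mean of exp(-k*u) over the band."""
    return float(np.mean(np.exp(-k * u)))

def transmittance_ckd(k, u, n_quad=8):
    """k-distribution: sort k, build its cumulative distribution g, and
    integrate exp(-k(g)*u) over g with a few Gauss-Legendre points."""
    k_sorted = np.sort(k)
    g = (np.arange(k.size) + 0.5) / k.size
    x, w = np.polynomial.legendre.leggauss(n_quad)
    gq = 0.5 * (x + 1.0)            # map nodes from [-1, 1] to [0, 1]
    wq = 0.5 * w
    kq = np.interp(gq, g, k_sorted) # quantile function k(g)
    return float(np.sum(wq * np.exp(-kq * u)))
```

    Eight quadrature points stand in for thousands of spectral points, which is exactly why reducing their number dominates the cost of the radiation scheme.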

  5. Numerical optimization of the ramp-down phase with the RAPTOR code

    Science.gov (United States)

    Teplukhina, Anna; Sauter, Olivier; Felici, Federico; The Tcv Team; The ASDEX-Upgrade Team; The Eurofusion Mst1 Team

    2017-10-01

    The ramp-down optimization goal in this work is defined as the fastest possible decrease of the plasma current while avoiding any disruptions caused by reaching physical or technical limits. Numerical simulations and preliminary experiments on TCV and AUG have shown that a fast decrease of the plasma elongation and an adequate timing of the H-L transition during the current ramp-down can help to avoid reaching high values of the plasma internal inductance. The RAPTOR code (F. Felici et al., 2012 PPCF 54; F. Felici, 2011 EPFL PhD thesis), developed for real-time plasma control, has been used to solve the optimization problem. Recently the transport model has been extended to include the ion temperature and electron density transport equations in addition to the electron temperature and current density transport equations, increasing the physical applications of the code. The gradient-based models for the transport coefficients (O. Sauter et al., 2014 PPCF 21; D. Kim et al., 2016 PPCF 58) have been implemented in RAPTOR and tested during this work. Simulations of entire AUG and TCV plasma discharges will be presented. See the author list of S. Coda et al., Nucl. Fusion 57 2017 102011.

  6. SPEXTRA: Optimal extraction code for long-slit spectra in crowded fields

    Science.gov (United States)

    Sarkisyan, A. N.; Vinokurov, A. S.; Solovieva, Yu. N.; Sholukhova, O. N.; Kostenkov, A. E.; Fabrika, S. N.

    2017-10-01

    We present a code for the optimal extraction of long-slit 2D spectra in crowded stellar fields. Its main advantage over, and difference from, existing spectrum extraction codes is the presence of a graphical user interface (GUI) and a convenient visualization system for the data and extraction parameters. On the whole, the package is designed to study stars in crowded fields of nearby galaxies and star clusters in galaxies. Apart from the spectrum extraction of several stars that are closely located or superimposed, it allows the spectra of objects to be extracted with subtraction of superimposed nebulae of different shapes and different degrees of ionization. The package can also be used to study single stars in the case of a strong background. The current version provides optimal extraction of 2D spectra with an aperture and a Gaussian function as the PSF (point spread function). In the future, the package will be supplemented with the possibility to build a PSF based on a Moffat function. We present the details of the GUI, illustrate the main features of the package, and show extraction results for several interesting object spectra from different telescopes.
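
    Optimal extraction with a Gaussian PSF typically means inverse-variance, profile-weighted summation of each detector column along the slit (a Horne-style sketch under that assumption; SPEXTRA's actual estimator may differ in detail):

```python
import numpy as np

def optimal_extract(column, center, sigma, var):
    """Optimal (inverse-variance, profile-weighted) flux estimate for one
    detector column, assuming a normalized Gaussian spatial profile P."""
    y = np.arange(column.size)
    P = np.exp(-0.5 * ((y - center) / sigma) ** 2)
    P /= P.sum()
    w = P / var                      # weight each pixel by profile / variance
    return float(np.sum(w * column) / np.sum(w * P))
```

    Compared with a plain aperture sum, the profile weighting down-weights noisy wings, which is what makes the extraction "optimal" in the least-squares sense.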

  7. Optimization and Openmp Parallelization of a Discrete Element Code for Convex Polyhedra on Multi-Core Machines

    Science.gov (United States)

    Chen, Jian; Matuttis, Hans-Georg

    2013-02-01

    We report our experiences with the optimization and parallelization of a discrete element code for convex polyhedra on multi-core machines and introduce a novel variant of the sort-and-sweep neighborhood algorithm. While in theory the code parallelizes ideally, in practice the results on different architectures with different compilers and performance measurement tools depend very much on the particle number and the optimization of the code. After difficulties with the interpretation of the speedup and efficiency data were overcome, respectable parallelization speedups could be obtained.
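
    The classic sort-and-sweep idea (of which the paper introduces a variant) can be shown in one dimension: sort bounding intervals by their lower end and sweep, so overlap tests are only performed against the currently "active" intervals rather than all O(n^2) pairs:

```python
def sort_and_sweep(intervals):
    """Broad-phase neighborhood search: given (id, lo, hi) bounding
    intervals, return the set of overlapping id pairs."""
    events = sorted((lo, hi, i) for i, lo, hi in intervals)
    active, pairs = [], set()
    for lo, hi, i in events:
        # drop intervals that ended before this one starts
        # (touching endpoints count as overlap)
        active = [(h, j) for h, j in active if h >= lo]
        for h, j in active:
            pairs.add((min(i, j), max(i, j)))
        active.append((hi, i))
    return pairs
```

    In a discrete element code this runs once per axis on the particles' axis-aligned bounding boxes; only pairs overlapping on every axis go to the exact polyhedron-polyhedron contact test.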

  8. Label swapper device for spectral amplitude coded optical packet networks monolithically integrated on InP

    NARCIS (Netherlands)

    Muñoz, P.; García-Olcina, R.; Habib, C.; Chen, L.R.; Leijtens, X.J.M.; Vries, de T.; Robbins, D.J.; Capmany, J.

    2011-01-01

    In this paper the design, fabrication and experimental characterization of a spectral amplitude coded (SAC) optical label swapper monolithically integrated on indium phosphide (InP) is presented. The device has a footprint of 4.8x1.5 mm2 and is able to perform the label swapping operations required in

  9. Continuous integration in a social-coding world : empirical evidence from GitHub

    NARCIS (Netherlands)

    Vasilescu, B.N.; van Schuylenburg, S.B.; Wulms, Jules; Serebrenik, A.; Brand, van den M.G.J.

    2014-01-01

    Continuous integration is a software engineering practice of frequently merging all developer working copies with a shared main branch, e.g., several times a day. With the advent of GitHub, a platform well known for its "social coding" features that aid collaboration and sharing, and currently the

  10. InP monolithically integrated label swapper device for spectral amplitude coded optical packet networks

    NARCIS (Netherlands)

    Muñoz, P.; García-Olcina, R.; Doménech, J.D.; Rius, M.; Sancho, J.C.; Capmany, J.; Chen, L.R.; Habib, C.; Leijtens, X.J.M.; Vries, de T.; Heck, M.J.R.; Augustin, L.M.; Nötzel, R.; Robbins, D.J.

    2010-01-01

    In this paper a label swapping device for spectral amplitude coded optical packet networks, fully integrated using InP technology, is presented. Compared to previous demonstrations using discrete component assembly, the device footprint is reduced by a factor of 10^5 and the operation speed is

  11. SIMIFR: A code to simulate material movement in the Integral Fast Reactor

    International Nuclear Information System (INIS)

    White, A.M.; Orechwa, Yuri.

    1991-01-01

    The SIMIFR code has been written to simulate the movement of material through a process. This code can be used to investigate inventory differences in material balances, assist in process design, and to produce operational scheduling. The particular process that is of concern to the authors is that centered around Argonne National Laboratory's Integral Fast Reactor. This is a process which involves the irradiation of fissile material for power production, and the recycling of the irradiated reactor fuel pins into fresh fuel elements. To adequately simulate this process it is necessary to allow for locations which can contain either discrete items or homogeneous mixtures. It is also necessary to allow for a very flexible process control algorithm. Further, the code must have the capability of transmuting isotopic compositions and computing internally the fraction of material from a process ending up in a given location. The SIMIFR code has been developed to perform all of these tasks. In addition to simulating the process, the code is capable of generating random measurement values and sampling errors for all locations, and of producing a restart deck so that terminated problems may be continued. In this paper the authors first familiarize the reader with the IFR fuel cycle. The different capabilities of the SIMIFR code are described. Finally, the simulation of the IFR fuel cycle using the SIMIFR code is discussed. 4 figs

  12. Development of fast ignition integrated interconnecting code (FI3) for fast ignition scheme

    International Nuclear Information System (INIS)

    Nagatomo, H.; Johzaki, T.; Mima, K.; Sunahara, A.; Nishihara, K.; Izawa, Y.; Sakagami, H.; Nakao, Y.; Yokota, T.; Taguchi, T.

    2005-01-01

    The numerical simulation plays an important role in estimating the feasibility and performance of fast ignition. There are two key issues in numerical analysis for fast ignition. One is controlling the implosion dynamics to form a high-density core plasma in a non-spherical implosion; the other is heating the core plasma efficiently with the short-pulse, high-intensity laser. From the initial laser irradiation to the final fusion burn, all the physics are strongly coupled in every phase and must be solved consistently in a computational simulation. However, in general, it is impossible to simulate laser-plasma interaction and radiation hydrodynamics in a single computational code without numerical dissipation, special assumptions or conditional treatment. Recently, we have developed the 'Fast Ignition Integrated Interconnecting code' (FI3), which consists of a collective Particle-in-Cell code, a relativistic Fokker-Planck hydro code, and a 2-dimensional radiation hydrodynamics code. These codes are interconnected with each other on a data-flow basis. In this paper, we present the detailed features of the FI3 code and numerical results for the whole process of fast ignition. (author)

  13. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    Energy Technology Data Exchange (ETDEWEB)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto, E-mail: alan@lmp.ufrj.br, E-mail: andressa@lmp.ufrj.br, E-mail: schirru@lmp.ufrj.br, E-mail: ffreire@eletronuclear.gov.br [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil); Eletrobras Termonuclear S.A. (ELETRONUCLEAR), Rio de Janeiro, RJ (Brazil)

    2017-11-01

    The nuclear reload of a pressurized water reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, the reactor can no longer operate at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation with a minimum of cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy constraints of symmetry and security. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem, since around 1% of the configurations are local optima: a core of 121 fuel elements has approximately 10^13 combinations and 10^11 local optima. With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed to optimize the nuclear reactor reload problem. ACO is successfully used in combinatorial problems, and it is expected that ACO-GENES will prove a robust optimization system, since in addition to ACO optimization it allows the use of important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and compared with the actual results of the cycle. (author)
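
    The ACO part can be sketched on a toy assignment problem: pheromone on (position, item) pairs biases the construction of loading permutations, with evaporation and best-solution reinforcement (illustrative only; ACO-GENES couples this construction to the GENES physics evaluation):

```python
import random

def aco_assign(cost, ants=20, iters=50, rho=0.1, seed=3):
    """Toy ant colony optimization for an assignment/loading problem:
    pheromone tau[i][j] biases placing item j at position i; the objective
    is the total cost of the resulting permutation."""
    random.seed(seed)
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]
    best, best_c = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            free, perm = list(range(n)), []
            for i in range(n):
                # probability ~ pheromone * heuristic desirability (1/cost)
                w = [tau[i][j] / (1e-9 + cost[i][j]) for j in free]
                j = random.choices(free, weights=w)[0]
                perm.append(j)
                free.remove(j)
            c = sum(cost[i][perm[i]] for i in range(n))
            if c < best_c:
                best, best_c = perm, c
        for i in range(n):                  # evaporate, then reinforce best
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
            tau[i][best[i]] += 1.0
    return best, best_c
```

    In the reload problem the "cost" of a pattern would come from the reactor physics evaluation (or a surrogate for it), not from a fixed matrix.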

  14. Optimization of reload of nuclear power plants using ACO together with the GENES reactor physics code

    International Nuclear Information System (INIS)

    Lima, Alan M.M. de; Freire, Fernando S.; Nicolau, Andressa S.; Schirru, Roberto

    2017-01-01

    The nuclear reload of a pressurized water reactor (PWR) occurs whenever the burnup of the fuel elements can no longer maintain the criticality of the reactor, that is, the reactor can no longer operate at its nominal power. The nuclear reactor reload optimization problem consists of finding a loading pattern of fuel assemblies in the reactor core that minimizes the cost/benefit ratio, trying to obtain maximum power generation with a minimum of cost, since in every reload an average of one third of the fuel elements are purchased new. This loading pattern must also satisfy constraints of symmetry and security. In practice, it consists of placing 121 fuel elements in 121 core positions, in the case of the Angra 1 Brazilian Nuclear Power Plant (NPP), so that the new arrangement provides the best cost/benefit ratio. It is an extremely complex problem, since around 1% of the configurations are local optima: a core of 121 fuel elements has approximately 10^13 combinations and 10^11 local optima. With this number of possible combinations it is impossible to test them all in order to choose the best. In this work a system called ACO-GENES is proposed to optimize the nuclear reactor reload problem. ACO is successfully used in combinatorial problems, and it is expected that ACO-GENES will prove a robust optimization system, since in addition to ACO optimization it allows the use of important prior knowledge such as k-infinity, burnup, etc. After optimization by ACO-GENES, the best results will be validated by a licensed reactor physics code and compared with the actual results of the cycle. (author)

  15. Shape optimization of turbine blades with the integration of aerodynamics and heat transfer

    Directory of Open Access Journals (Sweden)

    Rajadas J. N.

    1998-01-01

    A multidisciplinary optimization procedure, with the integration of aerodynamic and heat transfer criteria, has been developed for the design of gas turbine blades. Two different optimization formulations have been used. In the first formulation, the maximum temperature in the blade section is chosen as the objective function to be minimized. An upper bound constraint is imposed on the blade average temperature and a lower bound constraint is imposed on the blade tangential force coefficient. In the second formulation, the blade average and maximum temperatures are chosen as objective functions. In both formulations, bounds are imposed on the velocity gradients at several points along the surface of the airfoil to eliminate leading edge velocity spikes which deteriorate aerodynamic performance. Shape optimization is performed using the blade external and coolant path geometric parameters as design variables. Aerodynamic analysis is performed using a panel code. Heat transfer analysis is performed using the finite element method. A gradient-based procedure in conjunction with an approximate analysis technique is used for optimization. The results obtained using both optimization techniques are compared with a reference geometry. Both techniques yield significant improvements, with the multiobjective formulation resulting in a slightly superior design.

  16. Users Guide to SAMINT: A Code for Nuclear Data Adjustment with SAMMY Based on Integral Experiments

    Energy Technology Data Exchange (ETDEWEB)

    Sobes, Vladimir [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Leal, Luiz C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Arbanas, Goran [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-10-01

    The purpose of this project is to couple differential and integral data evaluation in a continuous-energy framework. More specifically, the goal is to use the Generalized Linear Least Squares methodology employed in TSURFER to update the parameters of a resolved resonance region evaluation directly. Recognizing that the GLLS methodology in TSURFER is identical to the mathematical description of the simple Bayesian updating carried out in SAMMY, the computer code SAMINT was created to help use the mathematical machinery of SAMMY to update resolved resonance parameters based on integral data. Minimal modifications of SAMMY are required when used with SAMINT to make resonance parameter updates based on integral experimental data.
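
    In linearized form, the GLLS/Bayesian update that SAMINT drives through SAMMY can be written in a few lines of matrix algebra (a generic sketch with illustrative variable names, not SAMINT's interface):

```python
import numpy as np

def glls_update(p, M, S, m, V):
    """Generalized linear least squares (Bayesian) update: parameters p with
    prior covariance M, sensitivity matrix S (responses x parameters),
    measured integral responses m with covariance V, assuming the
    calculated responses are S @ p."""
    K = M @ S.T @ np.linalg.inv(S @ M @ S.T + V)   # gain matrix
    p_new = p + K @ (m - S @ p)                    # updated parameters
    M_new = M - K @ S @ M                          # updated covariance
    return p_new, M_new
```

    The update pulls the parameters toward agreement with the integral measurements by an amount weighted by the relative sizes of the prior and measurement covariances.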

  17. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream

    Science.gov (United States)

    Douglas, Danielle; Newsome, Rachel N; Man, Louisa LY

    2018-01-01

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. PMID:29393853

  18. Integrative and distinctive coding of visual and conceptual object features in the ventral visual stream.

    Science.gov (United States)

    Martin, Chris B; Douglas, Danielle; Newsome, Rachel N; Man, Louisa Ly; Barense, Morgan D

    2018-02-02

    A significant body of research in cognitive neuroscience is aimed at understanding how object concepts are represented in the human brain. However, it remains unknown whether and where the visual and abstract conceptual features that define an object concept are integrated. We addressed this issue by comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli. Our results revealed evidence for distinctive coding of visual features in lateral occipital cortex, and conceptual features in the temporal pole and parahippocampal cortex. By contrast, we found evidence for integrative coding of visual and conceptual object features in perirhinal cortex. The neuroanatomical specificity of this effect was highlighted by results from a searchlight analysis. Taken together, our findings suggest that perirhinal cortex uniquely supports the representation of fully specified object concepts through the integration of their visual and conceptual features. © 2018, Martin et al.

  19. Integrated fast ignition simulation of cone-guided target with three codes

    Energy Technology Data Exchange (ETDEWEB)

    Sakagami, H. [Hyogo Univ., Computer Engineering, Himeji, Hyogo (Japan); Johzaki, T.; Nagatomo, H.; Mima, K. [Osaka Univ., Institute of Laser Engineering, Suita, Osaka (Japan)

    2004-07-01

    It was reported that the fuel core was heated up to ~0.8 keV in the fast ignition experiments with cone-guided targets, but the heating mechanisms and the achievement of such a high temperature could not be explained theoretically. Thus simulations should play an important role in estimating the scheme performance, and we must simulate each phenomenon with individual codes and integrate them under the Fast Ignition Integrated Interconnecting code project. In the previous integrated simulations, the fast electrons generated by the laser-plasma interaction were too hot to efficiently heat the core, and we obtained only a 0.096 keV temperature rise. Including the density gap at the contact surface between the cone tip and the imploded plasma, the period of core heating became longer and the core was heated by 0.162 keV, an increment about 69% higher than when the density gap effect is ignored. (authors)

  20. Optimal Integration of Intermittent Renewables: A System LCOE Stochastic Approach

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2018-03-01

    We propose a system level approach to valuing the impact on costs of the integration of intermittent renewable generation in a power system, based on expected breakeven cost and breakeven cost risk. To do this, we carefully reconsider the definition of the Levelized Cost of Electricity (LCOE) when extended to non-dispatchable generation, by examining the extra costs and gains originated by the costly management of random power injections. We are thus led to define a ‘system LCOE’ as a system dependent LCOE that properly takes intermittent generation into account. In order to include breakeven cost risk we further extend this deterministic approach to a stochastic setting, by introducing a ‘stochastic system LCOE’. This extension allows us to discuss the optimal integration of intermittent renewables from a broad, system level point of view. This paper thus aims to provide power producers and policy makers with a new methodological scheme, still based on the LCOE but updating this valuation technique to current energy system configurations characterized by a large share of non-dispatchable production. Quantifying and optimizing the impact of intermittent renewables integration on power system costs, risk and CO2 emissions, the proposed methodology can be used as a powerful tool of analysis for assessing environmental and energy policies.
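
    The deterministic starting point, before the stochastic extension, is the textbook LCOE: discounted lifetime costs divided by discounted lifetime generation. Taking its expectation over production scenarios gives a crude stand-in for the paper's 'stochastic system LCOE' (illustrative only; the paper's definition also prices the system-level management of random injections):

```python
def lcoe(capex, opex, energy, r):
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime generation. opex and energy are per-year lists."""
    cost = capex + sum(o / (1 + r) ** (t + 1) for t, o in enumerate(opex))
    gen = sum(e / (1 + r) ** (t + 1) for t, e in enumerate(energy))
    return cost / gen

def expected_system_lcoe(capex, opex, energy_scenarios, r):
    """Expectation of the LCOE over production scenarios, given as
    (probability, per-year energy) pairs."""
    return sum(p * lcoe(capex, opex, e, r) for p, e in energy_scenarios)
```

    With intermittent generation, the per-year energy becomes a random variable, which is exactly what makes the deterministic LCOE insufficient on its own.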

  1. Design optimization of radiation-hardened CMOS integrated circuits

    International Nuclear Information System (INIS)

    1975-01-01

    Ionizing-radiation-induced threshold voltage shifts in CMOS integrated circuits will drastically degrade circuit performance unless the design parameters related to the fabrication process are properly chosen. To formulate an approach to CMOS design optimization, experimentally observed analytical relationships showing strong dependences between threshold voltage shifts and silicon dioxide thickness are utilized. These measurements were made using radiation-hardened aluminum-gate CMOS inverter circuits and have been corroborated by independent data taken from MOS capacitor structures. Knowledge of these relationships allows one to define ranges of acceptable CMOS design parameters based upon radiation-hardening capabilities and post-irradiation performance specifications. Furthermore, they permit actual design optimization of CMOS integrated circuits which results in optimum pre- and post-irradiation performance with respect to speed, noise margins, and quiescent power consumption. Theoretical and experimental results of these procedures, the applications of which can mean the difference between failure and success of a CMOS integrated circuit in a radiation environment, are presented

  2. Integrating bar-code devices with computerized MC and A systems

    International Nuclear Information System (INIS)

    Anderson, L.K.; Boor, M.G.; Hurford, J.M.

    1998-01-01

    Over the past seven years, Los Alamos National Laboratory developed several generations of computerized nuclear materials control and accountability (MC and A) systems for tracking and reporting the storage, movement, and management of nuclear materials at domestic and international facilities. During the same period, Oak Ridge National Laboratory was involved with automated data acquisition (ADA) equipment, including installation of numerous bar-code scanning stations at various facilities to serve as input devices to computerized systems. Bar-code readers, as well as other ADA devices, reduce input errors, provide faster input, and allow the capture of data in remote areas where workstations do not exist. Los Alamos National Laboratory and Oak Ridge National Laboratory teamed together to implement the integration of bar-code hardware technology with computerized MC and A systems. With the expertise of both sites, the two technologies were successfully merged with little difficulty. Bar-code input is now available with several functions of the MC and A systems: material movements within material balance areas (MBAs), material movements between MBAs, and physical inventory verification. This paper describes the various components required for the integration of these MC and A systems with the installed bar-code reader devices and the future directions for these technologies

  3. An Integration of the Restructured Melcor for the Midas Computer Code

    International Nuclear Information System (INIS)

    Sunhee Park; Dong Ha Kim; Ko-Ryu Kim; Song-Won Cho

    2006-01-01

    The developmental need for a localized severe accident analysis code is on the rise. KAERI is developing a severe accident code called MIDAS, which is based on MELCOR. In order to develop the localized code (MIDAS), which simulates a severe accident in a nuclear power plant, the existing data structure is reconstructed for all the packages in MELCOR, which uses pointer variables for data transfer between the packages. During this process, new features of FORTRAN90 such as dynamic allocation are used for an improved data saving and transfer method. Hence the readability, maintainability and portability of the MIDAS code have been enhanced. After the package-wise restructuring, the newly converted packages are integrated together. Depending on the data usage in a package, two types of packages can be defined: some use their own data within the package (independent packages) and the others share their data with other packages (dependent packages). For the independent packages, the integration process simply links the already converted packages together; that is, the package-wise restructuring requires no further conversion of variables for the integration process. For the dependent packages, extra conversion is necessary to link them together. As the package-wise restructuring converts only the corresponding package's variables, variables defined in other packages are not touched and remain as they are. These variables must be converted into the new variable types at the same time as the main variables of the corresponding package. Then these dependent packages are ready for integration. In order to check whether the integration process is working well, the results from the integrated version are verified against the package-wise restructured results. Steady state runs and station blackout sequences are tested and the major variables are found to agree with each other. In order to verify the results, the integrated

  4. Stochastic simulation and robust design optimization of integrated photonic filters

    Directory of Open Access Journals (Sweden)

    Weng Tsui-Wei

    2016-07-01

    Full Text Available Manufacturing variations are becoming an unavoidable issue in modern fabrication processes; therefore, it is crucial to be able to include stochastic uncertainties in the design phase. In this paper, integrated photonic coupled ring resonator filters are considered as an example of significant interest. The sparsity structure in photonic circuits is exploited to construct a sparse combined generalized polynomial chaos model, which is then used to analyze related statistics and perform robust design optimization. Simulation results show that the optimized circuits are more robust to fabrication process variations and achieve a reduction of 11%–35% in the mean square errors of the 3 dB bandwidth compared to unoptimized nominal designs.
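
The non-intrusive polynomial chaos machinery behind such an analysis can be sketched in a few lines. The block below is a minimal illustration with an assumed one-parameter bandwidth surrogate, not the paper's sparse combined gPC model: chaos coefficients are fit by least squares, and the mean and variance follow directly from the coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical response surrogate (an assumption for illustration): the
# 3 dB bandwidth of a ring filter as a function of a normalized
# fabrication deviation delta ~ N(0, 1).
def bandwidth(delta):
    return 10.0 * (1.0 + 0.05 * delta + 0.01 * (delta**2 - 1.0))

# Probabilists' Hermite basis He_0..He_2, orthogonal under N(0, 1).
def hermite_basis(x):
    return np.column_stack([np.ones_like(x), x, x**2 - 1.0])

# Non-intrusive PCE: least-squares fit of the chaos coefficients.
delta = rng.standard_normal(200)
A = hermite_basis(delta)
coef, *_ = np.linalg.lstsq(A, bandwidth(delta), rcond=None)

mean = coef[0]                            # E[f] = c0
var = coef[1]**2 * 1 + coef[2]**2 * 2     # Var[f] = sum_k c_k^2 * k!
```

Because the moments come out as algebraic functions of the coefficients, statistics and robust-design objectives can be evaluated without further Monte Carlo sampling.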

  5. AS Migration and Optimization of the Power Integrated Data Network

    Science.gov (United States)

    Zhou, Junjie; Ke, Yue

    2018-03-01

    In the transformation process of data integration network, the impact on the business has always been the most important reference factor to measure the quality of network transformation. With the importance of the data network carrying business, we must put forward specific design proposals during the transformation, and conduct a large number of demonstration and practice to ensure that the transformation program meets the requirements of the enterprise data network. This paper mainly demonstrates the scheme of over-migrating point-to-point access equipment in the reconstruction project of power data comprehensive network to migrate the BGP autonomous domain to the specified domain defined in the industrial standard, and to smooth the intranet OSPF protocol Migration into ISIS agreement. Through the optimization design, eventually making electric power data network performance was improved on traffic forwarding, traffic forwarding path optimized, extensibility, get larger, lower risk of potential loop, the network stability was improved, and operational cost savings, etc.

  6. Optimal stability polynomials for numerical integration of initial value problems

    KAUST Repository

    Ketcheson, David I.

    2013-01-08

    We consider the problem of finding optimally stable polynomial approximations to the exponential for application to one-step integration of initial value ordinary and partial differential equations. The objective is to find the largest stable step size and corresponding method for a given problem when the spectrum of the initial value problem is known. The problem is expressed in terms of a general least deviation feasibility problem. Its solution is obtained by a new fast, accurate, and robust algorithm based on convex optimization techniques. Global convergence of the algorithm is proven in the case that the order of approximation is one and in the case that the spectrum encloses a starlike region. Examples demonstrate the effectiveness of the proposed algorithm even when these conditions are not satisfied.
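
The least-deviation formulation can be sketched as a linear program combined with bisection on the step size. The block below is a simplified illustration for an assumed real negative spectrum and three stages, not the authors' algorithm: for a trial step size it minimizes the worst violation of |R(z)| <= 1 over the sampled spectrum, and bisects on the largest step size for which that minimum is non-positive.

```python
import numpy as np
from scipy.optimize import linprog

# Assumed real negative spectrum, sampled on [-1, 0).
lam = -np.linspace(1e-3, 1.0, 200)

def least_deviation(h):
    # R(z) = 1 + z + a2 z^2 + a3 z^3 (first-order consistent, 3 stages);
    # variables x = [a2, a3, t]; minimize t s.t. |R(h lam_i)| <= 1 + t.
    z = h * lam
    A_ub = np.vstack([
        np.column_stack([z**2, z**3, -np.ones_like(z)]),    #  R <= 1 + t
        np.column_stack([-z**2, -z**3, -np.ones_like(z)]),  # -R <= 1 + t
    ])
    b_ub = np.concatenate([-z, 2.0 + z])
    res = linprog(c=[0.0, 0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    return res.fun            # t* <= 0 means h is a stable step size

# Bisect for the largest stable step size.
lo, hi = 1.0, 40.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if least_deviation(mid) <= 1e-9 else (lo, mid)
h_max = lo    # theory: shifted Chebyshev gives h_max = 2 s^2 = 18 here
```

For this spectrum the known optimum is the shifted Chebyshev polynomial, so the bisection should land near h = 18; the paper's algorithm handles complex spectra and higher orders, which this toy LP does not.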

  7. Integrity evaluation for stud female threads on pressure vessel according to ASME code using FEM

    International Nuclear Information System (INIS)

    Kim, Moon Young; Chung, Nam Yong

    2003-01-01

    The extension of design life among power plants is increasingly becoming a world-wide trend. Kori no.1 unit in Korea is operating two cycle. It has two man-ways for tube inspection in a steam generator which is one of the important components in a nuclear power plant. Especially, stud bolts for man-way cover have damaged by disassembly and assembly several times and degradation for bolt materials for long term operation. It should be evaluated and compared by ASME code criteria for integrity evaluation. Integrity evaluation criteria which has been made by the manufacturer is not applied on the stud bolts of nuclear pressure vessels directly because it is controlled by the yield stress of ASME code. It can apply evaluation criteria through FEM analysis to damaged female threads and to evaluated safety for helical-coil method which is used according to code case-N-496-1. From analysis results, we found that it is the same results between stress intensity which got from FEM analysis on damaged female threads over 10% by manufacture integrity criteria and 2/3 yield strength criteria on ASME code. It was also confirmed that the helical-coil repair method would be safe

  8. Optimal Real-time Dispatch for Integrated Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Firestone, Ryan Michael [Univ. of California, Berkeley, CA (United States)

    2007-05-31

    This report describes the development and application of a dispatch optimization algorithm for integrated energy systems (IES) comprised of on-site cogeneration of heat and electricity, energy storage devices, and demand response opportunities. This work is intended to aid commercial and industrial sites in making use of modern computing power and optimization algorithms to make informed, near-optimal decisions under significant uncertainty and complex objective functions. The optimization algorithm uses a finite set of randomly generated future scenarios to approximate the true, stochastic future; constraints are included that prevent solutions to this approximate problem from deviating from solutions to the actual problem. The algorithm is then expressed as a mixed integer linear program, to which a powerful commercial solver is applied. A case study of United States Postal Service Processing and Distribution Centers (P&DC) in four cities and under three different electricity tariff structures is conducted to (1) determine the added value of optimal control to a cogeneration system over current, heuristic control strategies; (2) determine the value of limited electric load curtailment opportunities, with and without cogeneration; and (3) determine the trade-off between least-cost and least-carbon operations of a cogeneration system. 
Key results for the P&DC sites studied include (1) in locations where the average electricity and natural gas prices suggest a marginally profitable cogeneration system, optimal control can add up to 67% to the value of the cogeneration system; optimal control adds less value in locations where cogeneration is more clearly profitable; (2) optimal control under real-time pricing is (a) more complicated than under typical time-of-use tariffs and (b) at times necessary to make cogeneration economic at all; and (3) limited electric load curtailment opportunities can be more valuable as a complement to the cogeneration system than alone.
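
The flavor of scenario-based dispatch optimization can be conveyed with a toy model. The block below is an illustrative brute-force sketch with assumed loads, prices, and cogeneration parameters, not the report's mixed integer linear program: the on/off schedule is chosen to minimize the expected cost of serving the load over a finite set of price scenarios.

```python
import itertools, statistics

load = [400.0, 500.0, 450.0, 350.0]              # kW per hour (assumed)
chp_cap, chp_cost, chp_start = 300.0, 0.06, 5.0  # kW, $/kWh, $ per start

price_scenarios = [                              # grid price paths, $/kWh
    [0.08, 0.12, 0.15, 0.07],
    [0.05, 0.06, 0.07, 0.05],
    [0.10, 0.20, 0.25, 0.09],
]

def cost(schedule, prices):
    total, prev = 0.0, 0
    for t, on in enumerate(schedule):
        gen = chp_cap if on else 0.0
        total += gen * chp_cost + (load[t] - gen) * prices[t]
        total += chp_start if on and not prev else 0.0
        prev = on
    return total

# Enumerate all on/off schedules; minimize the scenario-average cost.
best = min(itertools.product([0, 1], repeat=4),
           key=lambda s: statistics.mean(cost(s, p) for p in price_scenarios))
```

A real problem replaces the enumeration with a MILP solver and adds the scenario-consistency constraints the report describes, but the structure (here-and-now decisions scored against sampled futures) is the same.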

  9. MC21 v.6.0 - A continuous-energy Monte Carlo particle transport code with integrated reactor feedback capabilities

    International Nuclear Information System (INIS)

    Griesheimer, D.P.; Gill, D.F.; Nease, B.R.; Carpenter, D.C.; Joo, H.; Millman, D.L.; Sutton, T.M.; Stedry, M.H.; Dobreff, P.S.; Trumbull, T.H.; Caro, E.

    2013-01-01

    MC21 is a continuous-energy Monte Carlo radiation transport code for the calculation of the steady-state spatial distributions of reaction rates in three-dimensional models. The code supports neutron and photon transport in fixed source problems, as well as iterated-fission-source (eigenvalue) neutron transport problems. MC21 has been designed and optimized to support large-scale problems in reactor physics, shielding, and criticality analysis applications. The code also supports many in-line reactor feedback effects, including depletion, thermal feedback, xenon feedback, eigenvalue search, and neutron and photon heating. MC21 uses continuous-energy neutron/nucleus interaction physics over the range from 10^-5 eV to 20 MeV. The code treats all common neutron scattering mechanisms, including fast-range elastic and non-elastic scattering, and thermal- and epithermal-range scattering from molecules and crystalline materials. For photon transport, MC21 uses continuous-energy interaction physics over the energy range from 1 keV to 100 GeV. The code treats all common photon interaction mechanisms, including Compton scattering, pair production, and photoelectric interactions. All of the nuclear data required by MC21 is provided by the NDEX system of codes, which extracts and processes data from EPDL-, ENDF-, and ACE-formatted source files. For geometry representation, MC21 employs a flexible constructive solid geometry system that allows users to create spatial cells from first- and second-order surfaces. The system also allows models to be built up as hierarchical collections of previously defined spatial cells, with interior detail provided by grids and template overlays. Results are collected by a generalized tally capability which allows users to edit integral flux and reaction rate information. 
Results can be collected over the entire problem or within specific regions of interest through the use of phase filters that control which particles are allowed to score.
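
The core of any analog Monte Carlo transport code is distance sampling and tallying. The block below is a deliberately minimal sketch, orders of magnitude simpler than MC21: mono-directional neutrons enter a purely absorbing slab with an assumed cross section, and the transmitted fraction is tallied.

```python
import math, random

random.seed(1)

sigma_t = 1.0      # total macroscopic cross section, 1/cm (assumed)
thickness = 2.0    # slab thickness, cm (assumed)
n = 100_000        # histories

# Sample the free-flight distance -ln(xi)/sigma_t for each history and
# count neutrons whose first collision lies beyond the slab.
transmitted = sum(1 for _ in range(n)
                  if -math.log(random.random()) / sigma_t > thickness)
estimate = transmitted / n    # should approach exp(-sigma_t * thickness)
```

Everything that makes a production code like MC21 hard (continuous-energy physics, scattering, tallies, variance reduction, feedback) is layered on top of this elementary loop.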

  10. OPT13B and OPTIM4 - computer codes for optical model calculations

    International Nuclear Information System (INIS)

    Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.

    1975-01-01

    OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given and the numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters that best fits the experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described, and input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
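
The automatic-search idea, minimizing a least-squares misfit over model parameters, can be illustrated with a toy fit. The model, parameter names, and data below are all assumed for illustration; OPT13B's actual physics and search algorithm are far richer.

```python
import math

angles = [10, 20, 30, 40, 60, 90]                 # degrees (assumed data grid)

# Hypothetical two-parameter "cross-section" model: depth V and range r.
def model(theta, V, r):
    return V * math.exp(-r * math.radians(theta))

# Synthetic "measured" data generated from known parameters (50, 1.5).
data = [(t, model(t, 50.0, 1.5)) for t in angles]

def chi2(V, r):
    # least-squares misfit between model and data
    return sum((y - model(t, V, r)) ** 2 for t, y in data)

# Coarse grid scan; a real search would refine iteratively from here.
best = min(((V, r) for V in range(30, 71)
            for r in [i / 10 for i in range(5, 31)]),
           key=lambda p: chi2(*p))
```

The search recovers the generating parameters exactly because they lie on the grid; gradient or simplex refinement replaces the grid in practice.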

  11. Integration of CAM and CNC operation through code editing and manipulation

    International Nuclear Information System (INIS)

    Rosli Darmawan; Shalina Sheik Muhammad

    2004-01-01

    IT technology for engineering design and manufacturing has advanced significantly over the last 30 years. It is widely acknowledged that IT provides a competitive advantage for engineering companies in terms of production cycle, productivity and efficiency. Recent development in this area has focused on total system integration. While standard off-the-shelf CAD/CAM/CNC software and hardware packages would seem to provide a solution for system integration, more often than not users stumble upon compatibility problems. Moreover, most integration efforts deal with CAD and CAM systems; CNC integration has not been fully developed. Users frequently encounter problems integrating CAM systems with CNC machines because of the different levels of technological development: CNC codes have not fundamentally progressed in the last 50 years, while CAD/CAM software packages have undergone massive evolution and improvement. This paper discusses a practical solution for CAM and CNC integration through code editing and manipulation within the CAM system in order to comply with the CNC machine requirements. (Author)
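
Code editing and manipulation of this kind often amounts to systematic line-by-line rewriting of the CAM output. The block below is a hypothetical illustration; the renumbering step and the tool-remapping rule are assumptions for the example, not a specific controller's requirements.

```python
# Post-process CAM-generated G-code for a target controller: strip the
# original sequence numbers, remap tool numbers, and re-issue sequence
# numbers in steps of 10.
def adapt_gcode(lines, tool_map):
    out = []
    for i, line in enumerate(lines):
        # drop a leading "Nxxx" word if present, keep the rest of the block
        body = line.split(None, 1)[1] if line.startswith("N") else line
        for old, new in tool_map.items():
            body = body.replace(old, new)
        out.append(f"N{(i + 1) * 10} {body}")
    return out

program = ["N100 G21", "N105 T02 M06", "N110 G01 X10.0 Y5.0 F200"]
adapted = adapt_gcode(program, {"T02": "T05"})
# adapted[1] == "N20 T05 M06"
```

In practice such filters are chained (units, tool tables, canned-cycle expansion) so that one CAM post can serve machines with different controller dialects.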

  12. The optimally sampled galaxy-wide stellar initial mass function. Observational tests and the publicly available GalIMF code

    Science.gov (United States)

    Yan, Zhiqiang; Jerabkova, Tereza; Kroupa, Pavel

    2017-11-01

    Here we present a full description of the integrated galaxy-wide initial mass function (IGIMF) theory in terms of optimal sampling and compare it with available observations. Optimal sampling is the method we use to discretize the IMF deterministically into stellar masses. Evidence indicates that nature may be closer to deterministic sampling, as observations suggest a smaller scatter of various relevant observables than random sampling would give, which may result from a high level of self-regulation during the star formation process. We document the variation of IGIMFs under various assumptions. The results of the IGIMF theory are consistent with the empirical relation between the total mass of a star cluster and the mass of its most massive star, and with the empirical relation between the star formation rate (SFR) of a galaxy and the mass of its most massive cluster. In particular, we note a natural agreement with the empirical relation between the IMF power-law index and the SFR of a galaxy. The IGIMF theory also yields a relation between the SFR of a galaxy and the mass of its most massive star. For the first time, we show optimally sampled galaxy-wide IMFs (OSGIMF), which mimic the IGIMF with an additional serrated feature. Finally, a Python module, GalIMF, is provided that allows calculation of the IGIMF and OSGIMF as functions of the galaxy-wide SFR and metallicity. A copy of the Python code is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/607/A126
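
Optimal sampling itself is compact enough to sketch. The block below discretizes a single-slope Salpeter IMF deterministically, with assumed mass limits and slope; this is a simplification of what GalIMF does for the galaxy-wide IMF. Each segment boundary is chosen so that the segment contains exactly one star, and that star carries the segment's mass integral.

```python
ALPHA, M_MIN, M_MAX = 2.35, 0.1, 100.0     # Salpeter slope, mass range (assumed)

def xi_int(a, b, k=1.0, alpha=ALPHA):      # number integral of k m^-alpha
    return k * (b**(1 - alpha) - a**(1 - alpha)) / (1 - alpha)

def mxi_int(a, b, k=1.0, alpha=ALPHA):     # mass integral of k m^(1-alpha)
    return k * (b**(2 - alpha) - a**(2 - alpha)) / (2 - alpha)

def optimally_sample(n_stars):
    k = n_stars / xi_int(M_MIN, M_MAX)     # normalize to n_stars in total
    masses, upper = [], M_MAX
    for _ in range(n_stars):
        # lower boundary such that exactly one star lies in [lower, upper]
        lower = (upper**(1 - ALPHA) - (1 - ALPHA) / k) ** (1 / (1 - ALPHA))
        masses.append(mxi_int(lower, upper, k))   # mass of that one star
        upper = lower
    return masses

stars = optimally_sample(1000)    # deterministic; most massive star first
```

Unlike random sampling, repeated calls give identical mass lists, which is the origin of the reduced scatter the theory predicts.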

  13. Solution of optimization problems by means of the CASTEM 2000 computer code

    International Nuclear Information System (INIS)

    Charras, Th.; Millard, A.; Verpeaux, P.

    1991-01-01

    In the nuclear industry, it can be necessary to use robots for operation in contaminated environments. Most of the time, the positioning of some parts of the robot must be very accurate, which depends strongly on the structural (mass and stiffness) properties of its various components. Therefore, there is a need for a 'best' design, which is a compromise between technical (mechanical properties) and economic (material quantities, design and manufacturing cost) considerations. This is precisely the aim of optimization techniques in the frame of structural analysis. A general statement of the problem is as follows: find the set of parameters which minimizes a given function while satisfying some constraints. For example, in the case of a robot component, the parameters can be geometrical data (plate thickness, ...), the function can be the weight, and the constraints can consist of design criteria such as a given stiffness and of manufacturing technological constraints (minimum available thickness, etc.). For nuclear industry purposes, a robust method was chosen and implemented in the new-generation computer code CASTEM 2000. The solution of the optimum design problem is obtained by solving a sequence of convex subproblems, in which the various functions (the function to minimize and the constraints) are transformed by convex linearization. The method has been programmed for continuous as well as discrete variables. In keeping with the highly modular architecture of the CASTEM 2000 code, only one new operation had to be introduced: the solution of a subproblem with convex linearized functions, which is achieved by means of a conjugate gradient technique. All other operations were already available in the code, and the overall optimum design is realized by means of the Gibiane language. An example of application is presented to illustrate the possibilities of the method. (author)
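
The sequential-linearization idea can be shown on a toy sizing problem. The block below is a sketch in the spirit of the method, not the CASTEM 2000/Gibiane implementation: an assumed two-variable, compliance-constrained weight minimization is solved by a sequence of linearized LP subproblems with shrinking move limits.

```python
from scipy.optimize import linprog

# Toy problem (assumed): minimize member weight t1 + t2 subject to the
# compliance limit 1/t1 + 4/t2 <= 1; the analytic optimum is t = (3, 6).
t = [5.0, 5.0]                     # starting sizes (assumed)
for k in range(40):
    g = 1.0 - 1.0 / t[0] - 4.0 / t[1]             # constraint, need g >= 0
    grad = [1.0 / t[0]**2, 4.0 / t[1]**2]         # dg/dt at current point
    ml = 0.2 * 0.85**k                            # shrinking move limit
    res = linprog([1.0, 1.0],                     # minimize t1 + t2
                  A_ub=[[-grad[0], -grad[1]]],    # linearized g >= 0
                  b_ub=[g - grad[0] * t[0] - grad[1] * t[1]],
                  bounds=[(ti * (1 - ml), ti * (1 + ml)) for ti in t])
    t = list(res.x)
weight = sum(t)
```

Each subproblem is a cheap convex (here linear) approximation, which is the essence of the convex-linearization strategy the abstract describes; true convex linearization additionally mixes direct and reciprocal variables per term.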

  14. Development of hydraulic analysis code for optimizing thermochemical IS process reactors

    International Nuclear Information System (INIS)

    Terada, Atsuhiko; Hino, Ryutaro; Hirayama, Toshio; Nakajima, Norihiro; Sugiyama, Hitoshi

    2007-01-01

    The Japan Atomic Energy Agency has been conducting studies on the thermochemical IS (iodine-sulfur) process for water-splitting hydrogen production. Based on the test results and know-how obtained through the bench-scale test, a pilot test plant with a hydrogen production capacity of 30 Nm^3/h is being designed conceptually as the next step of the IS process development. In designing the IS pilot plant, it is important to make the chemical reactors compact with high performance, from the viewpoint of plant cost reduction. A new hydraulic analysis code has been developed for optimizing the mixing performance of multi-phase flow involving chemical reactions, especially in the Bunsen reactor, which is characterized by a complex flow pattern with gas-liquid chemical interaction involving flow instability. Preliminary analytical results obtained with this code, especially flow patterns induced by swirling flow, agreed well with those measured in water experiments, which showed a vortex breakdown pattern in a simplified Bunsen reactor. (author)

  15. The role of stochasticity in an information-optimal neural population code

    International Nuclear Information System (INIS)

    Stocks, N G; Nikitin, A P; McDonnell, M D; Morse, R P

    2009-01-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each display Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.
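
The dependence of mutual information on noise level can be reproduced in miniature. The block below computes the exact MI for a small pooling network of identical threshold units with an assumed subthreshold input; all numbers are illustrative, and the full study also optimises over heterogeneous thresholds.

```python
import math

N, THETA = 3, 1.2                       # units, common subthreshold threshold
X = [0.0, 1.0]                          # equiprobable input levels (assumed)

def phi(z):                             # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mutual_information(sigma):
    cond = []                           # P(y | x) for pooled count y = 0..N
    for x in X:
        p = phi((x - THETA) / sigma)    # P(single unit fires | x)
        cond.append([math.comb(N, y) * p**y * (1 - p)**(N - y)
                     for y in range(N + 1)])
    py = [sum(c[y] for c in cond) / len(X) for y in range(N + 1)]
    return sum((c[y] / len(X)) * math.log2(c[y] / py[y])
               for c in cond for y in range(N + 1) if c[y] > 0)

# Scan the noise level; MI peaks at an intermediate, nonzero sigma.
mi = {s / 10: mutual_information(s / 10) for s in range(1, 31)}
best_sigma = max(mi, key=mi.get)
```

Because the threshold sits above both input levels, zero noise conveys no information; the scan exhibits the noise-shaped optimum that the abstract discusses.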

  16. The role of stochasticity in an information-optimal neural population code

    Energy Technology Data Exchange (ETDEWEB)

    Stocks, N G; Nikitin, A P [School of Engineering, University of Warwick, Coventry CV4 7AL (United Kingdom); McDonnell, M D [Institute for Telecommunications Research, University of South Australia, SA 5095 (Australia); Morse, R P, E-mail: n.g.stocks@warwick.ac.u [School of Life and Health Sciences, Aston University, Birmingham B4 7ET (United Kingdom)

    2009-12-01

    In this paper we consider the optimisation of Shannon mutual information (MI) in the context of two model neural systems. The first is a stochastic pooling network (population) of McCulloch-Pitts (MP) type neurons (logical threshold units) subject to stochastic forcing; the second is (in a rate coding paradigm) a population of neurons that each display Poisson statistics (the so-called 'Poisson neuron'). The mutual information is optimised as a function of a parameter that characterises the 'noise level': in the MP array this parameter is the standard deviation of the noise; in the population of Poisson neurons it is the window length used to determine the spike count. In both systems we find that the emergent neural architecture and, hence, the code that maximises the MI is strongly influenced by the noise level. Low noise levels lead to a heterogeneous distribution of neural parameters (diversity), whereas medium to high noise levels result in the clustering of neural parameters into distinct groups that can be interpreted as subpopulations. In both cases the number of subpopulations increases with a decrease in noise level. Our results suggest that subpopulations are a generic feature of an information-optimal neural population.

  17. DABIE: a data banking system of integral experiments for reactor core characteristics computer codes

    International Nuclear Information System (INIS)

    Matsumoto, Kiyoshi; Naito, Yoshitaka; Ohkubo, Shuji; Aoyanagi, Hideo.

    1987-05-01

    A data banking system of integral experiments for reactor core characteristics computer codes, DABIE, has been developed to reduce the burden of searching through many documents for the experiment data required to verify reactor core characteristics computer codes. DABIE provides systematic classification, registration and easy retrieval of experiment data. It consists of a data bank and supporting programs: a data registration program, a data reference program and a maintenance program. The system is designed so that users can easily register information on experiment systems, including figures as well as geometry data and measured data, or obtain those data interactively through a TSS terminal. This manual describes the system structure, usage and sample applications of the code system. (author)

  18. The Light-Water-Reactor Version of the URANUS Integral fuel-rod code

    Energy Technology Data Exchange (ETDEWEB)

    Lassmann, K; Moreno, A

    1977-07-01

    The LWR version of the URANUS code, a digital computer programme for the thermal and mechanical analysis of fuel rods, is presented. Material properties are discussed and their effect on integral fuel rod behaviour is elaborated via URANUS results for some carefully selected reference experiments. The numerical results do not represent post-irradiation analyses of in-pile experiments; rather, they illustrate typical and diverse URANUS capabilities. The performance test shows that URANUS is reliable and efficient; the code is thus a most valuable tool in fuel rod analysis work. K. Lassmann developed the LWR version of the URANUS code; material properties were reviewed and supplied by A. Moreno. (Author) 41 refs.

  19. Development of an integral computer code for simulation of heat exchangers

    International Nuclear Information System (INIS)

    Horvat, A.; Catton, I.

    2001-01-01

    Heat exchangers are among the basic installations in the power and process industries. The present design guidelines provide only ad-hoc solutions to particular design problems; a unified approach based on simultaneous modeling of thermal-hydraulics and structural behavior does not exist. The present paper describes the development of an integral numerical code for the simulation of heat exchangers. The code is based on the Volume Averaging Technique (VAT) for porous media flow modeling. The calculated values of the whole-section drag and heat transfer coefficients show excellent agreement with already published values. The matching results confirm the correctness of the selected approach and verify the developed numerical code. (author)

  20. Description of code system PLES/PTS for evaluation of pressure vessel integrity during PTS events

    International Nuclear Information System (INIS)

    Hirano, Masashi; Kohsaka, Atsuo.

    1992-02-01

    A code system PLES/PTS has been developed at the Japan Atomic Energy Research Institute (JAERI) to evaluate the integrity of the pressure vessel during plant thermal-hydraulic transients related to pressurized thermal shock (PTS) in a pressurized water reactor (PWR). The code system consists of several member codes to analyse the thermal-mixing behavior of emergency core cooling (ECC) water and primary coolant, the transient stress distribution within the vessel wall, and the crack growth behavior at the inner surface of the vessel. The crack growth behavior is evaluated by comparing the stress intensity factor (K_I) with the crack initiation toughness (K_Ic) and the crack arrest toughness (K_Ia), taking into account fast neutron irradiation embrittlement. This report describes the methods and models applied in PLES/PTS and the input data requirements. (author)

  1. Integrating big data and actionable health coaching to optimize wellness.

    Science.gov (United States)

    Hood, Leroy; Lovejoy, Jennifer C; Price, Nathan D

    2015-01-09

    The Hundred Person Wellness Project (HPWP) is a 10-month pilot study of 100 'well' individuals in which integrated data from whole-genome sequencing, gut microbiome, clinical laboratory tests and quantified-self measures from each individual are used to provide actionable results for health coaching, with the goal of optimizing wellness and minimizing disease. In a commentary in BMC Medicine, Diamandis argues that HPWP and similar projects will likely result in 'unnecessary and potentially harmful over-testing'. We argue that this new approach will ultimately lead to lower costs, better healthcare, innovation and economic growth. The central points of the HPWP are: 1) it is focused on optimizing wellness through longitudinal data collection, integration and mining of individual data clouds, enabling the development of predictive models of wellness and disease that will reveal actionable possibilities; and 2) by extending this study to 100,000 well people, we will establish multiparameter, quantifiable wellness metrics and identify markers for wellness-to-early-disease transitions for most common diseases, which will ultimately allow earlier disease intervention, eventually transitioning the individual from a disease trajectory back to a wellness trajectory.

  2. Multiphase integral reacting flow computer code (ICOMFLO): User's guide

    Energy Technology Data Exchange (ETDEWEB)

    Chang, S.L.; Lottes, S.A.; Petrick, M.

    1997-11-01

    A copyrighted computational fluid dynamics computer code, ICOMFLO, has been developed for the simulation of multiphase reacting flows. The code solves conservation equations for gaseous species and droplets (or solid particles) of various sizes. General conservation laws, expressed by elliptic-type partial differential equations, are used in conjunction with rate equations governing the mass, momentum, enthalpy, species, turbulent kinetic energy, and turbulent dissipation. Associated phenomenological submodels of the code include integral combustion, two-parameter turbulence, particle evaporation, and interfacial submodels. A newly developed integral combustion submodel, replacing an Arrhenius-type differential reaction submodel, has been implemented to improve numerical convergence and enhance numerical stability. The two-parameter turbulence submodel is modified for both gas and solid phases. The evaporation submodel treats not only droplet evaporation but also size dispersion. Interfacial submodels use correlations to model interfacial momentum and energy transfer. The ICOMFLO code solves the governing equations in three steps. First, a staggered grid system is constructed in the flow domain. The staggered grid system defines gas velocity components on the surfaces of a control volume, while the other flow properties are defined at the volume center. A blocked-cell technique is used to handle complex geometry. Then, the partial differential equations are integrated over each control volume and transformed into discrete difference equations. Finally, the difference equations are solved iteratively by using a modified SIMPLER algorithm. The results of the solution include gas flow properties (pressure, temperature, density, species concentration, velocity, and turbulence parameters) and particle flow properties (number density, temperature, velocity, and void fraction). The code has been used in many engineering applications, such as coal-fired combustors.
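
The control-volume discretization step can be illustrated on the simplest possible case. The block below solves steady one-dimensional conduction on a uniform grid with the Thomas (tridiagonal) algorithm; it is a teaching sketch of the finite-volume machinery, not ICOMFLO.

```python
# Steady 1-D conduction, uniform grid, Dirichlet boundary temperatures.
# Each interior control volume yields: -T[i-1] + 2 T[i] - T[i+1] = 0.
def solve_conduction(n, t_left, t_right):
    a = [-1.0] * n          # sub-diagonal
    b = [2.0] * n           # diagonal
    c = [-1.0] * n          # super-diagonal
    d = [0.0] * n
    d[0] += t_left          # boundary values enter the source terms
    d[-1] += t_right
    # Thomas algorithm: forward elimination ...
    for i in range(1, n):
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    # ... and back substitution
    t = [0.0] * n
    t[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        t[i] = (d[i] - c[i] * t[i + 1]) / b[i]
    return t

temps = solve_conduction(9, 0.0, 1.0)   # expect a linear profile
```

A production code embeds such line solves inside the SIMPLER pressure-velocity iteration, with convection, sources, and coupling between phases added to the coefficients.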

  3. Evolutionary algorithms approach for integrated bioenergy supply chains optimization

    International Nuclear Information System (INIS)

    Ayoub, Nasser; Elmoshi, Elsayed; Seki, Hiroya; Naka, Yuji

    2009-01-01

    In this paper, we propose an optimization model and solution approach for designing and evaluating integrated systems of bioenergy production supply chains (SC) at the local level. Designing SCs that simultaneously utilize a set of bio-resources is a complicated task, and it is the one considered here. The complication arises from the different natures and sources of the bio-resources used in bioenergy production (wet or dry; agricultural, industrial, etc.). Moreover, the different concerns that decision makers must take into account to resolve the trade-offs between society and investors, i.e., social, environmental and economic factors, were considered through multi-criteria optimization options. The first part of this research was introduced in earlier work explaining the general Bioenergy Decision System gBEDS [Ayoub N, Martins R, Wang K, Seki H, Naka Y. Two levels decision system for efficient planning and implementation of bioenergy production. Energy Convers Manage 2007;48:709-23]. In this paper, a brief introduction to and emphasis on gBEDS is given; the optimization model is presented, followed by a case study on designing a supply chain of nine bio-resources at Iida City in central Japan.
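
A genetic algorithm for this kind of selection problem can be sketched briefly. The block below is a toy single-objective GA with assumed yields, costs and budget, far simpler than the multi-criteria optimization in gBEDS: choose which of nine local bio-resources to collect so as to maximize energy yield within a cost budget.

```python
import random

random.seed(42)

yield_kj = [90, 40, 70, 30, 80, 55, 65, 25, 45]   # assumed yields
cost     = [50, 10, 60, 25, 30, 20, 50, 30, 15]   # assumed costs
budget = 150

def fitness(bits):
    total_cost = sum(c for b, c in zip(bits, cost) if b)
    if total_cost > budget:
        return -1.0                        # infeasible: over budget
    return sum(y for b, y in zip(bits, yield_kj) if b)

def evolve(pop_size=40, gens=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]     # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, 9)   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < p_mut else g
                             for g in child])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Replacing the scalar fitness with a vector of social, environmental and economic scores (and Pareto-based selection) turns this skeleton into the multi-criteria setting the abstract describes.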

  4. Integrated strategic and tactical biomass-biofuel supply chain optimization.

    Science.gov (United States)

    Lin, Tao; Rodríguez, Luis F; Shastri, Yogendra N; Hansen, Alan C; Ting, K C

    2014-03-01

    To ensure effective biomass feedstock provision for large-scale biofuel production, an integrated biomass supply chain optimization model was developed to minimize annual biomass-ethanol production costs by optimizing both strategic and tactical planning decisions simultaneously. The mixed integer linear programming model optimizes activities ranging from biomass harvesting, packing, in-field transportation, stacking, transportation, preprocessing, and storage to ethanol production and distribution. The numbers, locations, and capacities of facilities as well as biomass and ethanol distribution patterns are key strategic decisions, while biomass production, delivery, and operating schedules and inventory monitoring are key tactical decisions. The model was implemented to study a Miscanthus-ethanol supply chain in Illinois. The base case results showed unit Miscanthus-ethanol production costs of $0.72 L^-1 of ethanol. Biorefinery-related costs account for 62% of the total costs, followed by biomass procurement costs. Sensitivity analysis showed that a 50% reduction in biomass yield would increase unit production costs by 11%. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Integrated testing strategies can be optimal for chemical risk classification.

    Science.gov (United States)

    Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John

    2017-08-01

    There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests and from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well-developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies in current toxicological testing. By illustrating our methods on two simple but readily generalisable examples, we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
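
The backward-induction computation behind such optimal policies can be sketched directly. The block below solves a tiny finite-horizon testing MDP with assumed costs, sensitivities and specificities; it illustrates why buying an imprecise test first can be optimal, not the paper's actual model.

```python
from functools import lru_cache

# Belief b = P(chemical is toxic); actions: classify now, or buy one of
# two tests (cheap/imprecise vs costly/precise) and act on the result.
C_MISS, C_FALSE = 100.0, 20.0                 # misclassification costs
TESTS = {"cheap": (1.0, 0.7, 0.8),            # (cost, sensitivity, specificity)
         "precise": (10.0, 0.95, 0.95)}

def posterior(b, sens, spec, positive):
    # Bayes update of the belief, plus the probability of that outcome.
    p_pos = b * sens + (1 - b) * (1 - spec)
    return (b * sens / p_pos, p_pos) if positive else \
           (b * (1 - sens) / (1 - p_pos), 1 - p_pos)

@lru_cache(maxsize=None)
def value(b_key, tests_left):
    b = b_key / 1000.0                        # belief discretized to 1/1000
    best = min(b * C_MISS, (1 - b) * C_FALSE)  # classify safe vs toxic now
    if tests_left:
        for cost, sens, spec in TESTS.values():
            exp = cost
            for positive in (True, False):
                nb, p = posterior(b, sens, spec, positive)
                exp += p * value(round(nb * 1000), tests_left - 1)
            best = min(best, exp)
    return best

v = value(500, 2)      # expected cost of optimal testing at a 50:50 prior
v0 = value(500, 0)     # expected cost of classifying immediately
```

With these numbers the optimal policy buys the cheap test first and escalates only when the result leaves the belief ambiguous, which is exactly the tiered structure of an integrated testing strategy.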

  6. TRAC-CFD code integration and its application to containment analysis

    International Nuclear Information System (INIS)

    Tahara, M.; Arai, K.; Oikawa, H.

    2004-01-01

    Several safety systems utilizing natural driving force have been recently adopted for operating reactors, or applied to next-generation reactor design. Examples of these safety systems are the Passive Containment Cooling System (PCCS) and the Drywell Cooler (DWC) for removing decay heat, and the Passive Auto-catalytic Recombiner (PAR) for removing flammable gas in reactor containment during an accident. DWC is used in almost all Boiling Water Reactors (BWR) in service. PAR has been introduced for some reactors in Europe and will be introduced for Japanese reactors. PCCS is a safety device of next-generation BWR. The functional mechanism of these safety systems is closely related to the transient of the thermal-hydraulic condition of the containment atmosphere. The performance depends on the containment atmospheric condition, which is eventually affected by the mass and energy changes caused by the safety system. Therefore, the thermal fluid dynamics in the containment vessel should be appropriately considered in detail to properly estimate the performance of these systems. A computational fluid dynamics (CFD) code is useful for evaluating detailed thermal hydraulic behavior related to this equipment. However, it also requires a considerable amount of computational resources when it is applied to whole containment system transient analysis. The paper describes the method and structure of the integrated analysis tool, and discusses the results of its application to the start-up behavior analysis of a containment cooling system, a drywell local cooler. The integrated analysis code was developed and applied to estimate the DWC performance during a severe accident. The integrated analysis tool is composed of three codes, TRAC-PCV, CFD-DW and TRAC-CC, and analyzes the interaction of the natural convection and steam condensation of the DWC as well as analyzing the thermal hydraulic transient behavior of the containment vessel during a severe accident in detail. The

  7. Optimal operation of integrated processes. Studies on heat recovery systems

    Energy Technology Data Exchange (ETDEWEB)

    Glemmestad, Bjoern

    1997-12-31

    Separators, reactors and a heat exchanger network (HEN) for heat recovery are important parts of an integrated plant. This thesis deals with the operation of HENs, in particular optimal operation. The purpose of heat integration is to save energy, but the HEN also introduces new interactions and feedback into the overall plant. A prerequisite for optimisation is that there are extra degrees of freedom left after regulatory control is implemented. It is shown that extra degrees of freedom may not always be utilised for energy optimisation, and a quantitative expression for the degrees of freedom that can be so utilised is presented. A simplified expression that is often valid is also deduced. The thesis presents some improvements and generalisations of a structure-based method that has been proposed earlier. Structural information is used to divide possible manipulations into three categories depending on how each manipulation affects the utility consumption. By means of these categories and two heuristic rules for operability, the possible manipulations are ordered in a priority table. This table is used to determine which manipulation should be preferred and which should be selected if an active manipulation is saturated. It is shown that the method may correspond to split-range control. A method that uses parametric information in addition to structural information is proposed; in this method, the optimal control structure is found by solving an integer programming problem. The thesis also proposes a method that combines the use of steady-state optimisation and optimal selection of measurements. 86 refs., 46 figs., 8 tabs.

  8. Application of an integrated PC-based neutronics code system to criticality safety

    International Nuclear Information System (INIS)

    Briggs, J.B.; Nigg, D.W.

    1991-01-01

    An integrated system of neutronics and radiation transport software suitable for operation in an IBM PC-class environment has been under development at the Idaho National Engineering Laboratory (INEL) for the past four years. Four modules within the system are particularly useful for criticality safety applications. Using the neutronics portion of the integrated code system, effective neutron multiplication values (k-eff values) have been calculated for a variety of benchmark critical experiments for metal systems (plutonium and uranium), aqueous systems (plutonium and uranium), and LWR fuel rod arrays. A description of the codes and methods used in the analysis and the results of the benchmark critical experiments are presented in this paper. In general, excellent agreement was found between calculated and experimental results. (Author)

  9. Study of MHD stability beta limit in LHD by hierarchy integrated simulation code

    International Nuclear Information System (INIS)

    Sato, M.; Watanabe, K.Y.; Nakamura, Y.

    2008-10-01

    The beta limit set by ideal MHD instabilities (the so-called 'MHD stability beta limit') for helical plasmas is studied by a hierarchy integrated simulation code. A numerical model for the effect of the MHD instabilities is introduced such that the pressure profile is flattened around the rational surface by the MHD instabilities. The width of the flattening of the pressure gradient is determined from the width of the eigenmode structure of the MHD instabilities. It is assumed that there is an upper limit on the mode number of the MHD instabilities that directly affect the pressure gradient; this upper limit is determined using a recent high-beta experiment in the Large Helical Device (LHD). The flattening of the pressure gradient is calculated by the transport module in the hierarchy integrated code. The achievable volume-averaged beta value in the LHD is expected to be beyond 6%. (author)

  10. An optimized cosine-modulated nonuniform filter bank design for subband coding of ECG signal

    Directory of Open Access Journals (Sweden)

    A. Kumar

    2015-07-01

    Full Text Available A simple iterative technique for the design of nonuniform cosine modulated filter banks (CMFBs) is presented in this paper. The proposed technique employs a single parameter for optimization. The nonuniform cosine modulated filter banks are derived by merging the adjacent filters of uniform cosine modulated filter banks. The prototype filter is designed with the aid of different adjustable window functions, such as Kaiser, Cosh and Exponential, using the constrained equiripple finite impulse response (FIR) digital filter design technique. In this method, either the cutoff frequency or the passband edge frequency is varied in order to adjust the filter coefficients so that the reconstruction error is minimized toward zero. The performance and effectiveness of the proposed method in terms of peak reconstruction error (PRE), aliasing distortion (AD), computational (CPU) time, and number of iterations (NOI) are shown through numerical examples and comparative studies. Finally, the technique is exploited for the subband coding of electrocardiogram (ECG) and speech signals.
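    The single-parameter iteration can be sketched as a bisection on the prototype's cutoff frequency: adjust the cutoff until a Kaiser-windowed lowpass has magnitude 1/√2 at π/2, a standard near-perfect-reconstruction condition for a two-channel prototype. The tap count, window parameter, and search interval below are illustrative choices, not the paper's values:

```python
import numpy as np

def prototype(wc, num_taps=65, beta=8.0):
    """Kaiser-windowed sinc lowpass with cutoff wc (rad/sample)."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    return (wc / np.pi) * np.sinc(wc * n / np.pi) * np.kaiser(num_taps, beta)

def mag_at(h, w):
    """Magnitude of the filter's frequency response at radian frequency w."""
    return abs(np.sum(h * np.exp(-1j * w * np.arange(len(h)))))

# Single-parameter optimization: bisect the cutoff so |H(pi/2)| = 1/sqrt(2).
lo, hi = 0.3 * np.pi, 0.7 * np.pi
for _ in range(60):
    wc = 0.5 * (lo + hi)
    if mag_at(prototype(wc), np.pi / 2) < 2 ** -0.5:
        lo = wc   # passband too narrow: raise the cutoff
    else:
        hi = wc   # passband too wide: lower the cutoff
h = prototype(0.5 * (lo + hi))
print(round(mag_at(h, np.pi / 2), 4))  # ~0.7071
```

The same loop works with the passband edge as the free parameter, or with a measured reconstruction error in place of the point-magnitude condition.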

  11. Stereoscopic Visual Attention-Based Regional Bit Allocation Optimization for Multiview Video Coding

    Directory of Open Access Journals (Sweden)

    Dai Qionghai

    2010-01-01

    Full Text Available We propose a Stereoscopic Visual Attention (SVA) based regional bit allocation optimization for Multiview Video Coding (MVC) that exploits visual redundancies in human perception. We propose a novel SVA model, in which multiple perceptual stimuli including depth, motion, intensity, color, and orientation contrast are utilized to simulate the visual attention mechanisms of the human visual system with stereoscopic perception. Then, a semantic region-of-interest (ROI) is extracted based on the saliency maps of SVA. Both objective and subjective evaluations of extracted ROIs indicated that the proposed SVA model based ROI extraction scheme outperforms schemes using only spatial or/and temporal visual attention cues. Finally, using the extracted SVA-based ROIs, a regional bit allocation optimization scheme is presented that allocates more bits to SVA-based ROIs for high image quality and fewer bits to background regions for efficient compression. Experimental results on MVC show that the proposed regional bit allocation algorithm can achieve over % bit-rate saving while maintaining the subjective image quality. Meanwhile, the image quality of ROIs is improved by  dB at the cost of insensitive image quality degradation of the background image.

  12. Efficient Coding and Statistically Optimal Weighting of Covariance among Acoustic Attributes in Novel Sounds

    Science.gov (United States)

    Stilp, Christian E.; Kluender, Keith R.

    2012-01-01

    To the extent that sensorineural systems are efficient, redundancy should be extracted to optimize transmission of information, but perceptual evidence for this has been limited. Stilp and colleagues recently reported efficient coding of robust correlation (r = .97) among complex acoustic attributes (attack/decay, spectral shape) in novel sounds. Discrimination of sounds orthogonal to the correlation was initially inferior but later comparable to that of sounds obeying the correlation. These effects were attenuated for less-correlated stimuli (r = .54) for reasons that are unclear. Here, statistical properties of correlation among acoustic attributes essential for perceptual organization are investigated. Overall, simple strength of the principal correlation is inadequate to predict listener performance. Initial superiority of discrimination for statistically consistent sound pairs was relatively insensitive to decreased physical acoustic/psychoacoustic range of evidence supporting the correlation, and to more frequent presentations of the same orthogonal test pairs. However, increased range supporting an orthogonal dimension has substantial effects upon perceptual organization. Connectionist simulations and Eigenvalues from closed-form calculations of principal components analysis (PCA) reveal that perceptual organization is near-optimally weighted to shared versus unshared covariance in experienced sound distributions. Implications of reduced perceptual dimensionality for speech perception and plausible neural substrates are discussed. PMID:22292057
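    The closed-form PCA computation mentioned above amounts to an eigen-decomposition of the sample covariance matrix. The sketch below, assuming two unit-variance attributes with the study's r = .97 correlation, shows how nearly all variance loads on the shared principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two unit-variance acoustic attributes (e.g. attack/decay, spectral shape)
# with correlation r = .97, as in the highly correlated condition.
cov = np.array([[1.0, 0.97], [0.97, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

# Closed-form PCA: eigenvalues of the sample covariance matrix.
evals = np.linalg.eigvalsh(np.cov(x.T))
share = evals.max() / evals.sum()
print(round(share, 2))  # ~0.98: the shared component carries almost all variance
```

For unit-variance attributes the shared-variance fraction is (1 + r)/2, so r = .54 drops it to about 0.77, which is one way to quantify how much weaker the statistical evidence is in the less-correlated condition.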

  13. Designing optimal bioethanol networks with purification for integrated biorefineries

    International Nuclear Information System (INIS)

    Shenoy, Akshay U.; Shenoy, Uday V.

    2014-01-01

    Highlights: • An analytical method is devised for bioethanol network integration with purification. • Minimum fresh bioethanol flow and pinch are found by the Unified Targeting Algorithm. • Optimal bioethanol networks are then synthesized by the Nearest Neighbors Algorithm. • Continuous targets and networks are developed over the purifier inlet flowrate range. • Case study of a biorefinery producing bioethanol from wheat shows large savings. - Abstract: Bioethanol networks with purification for processing pathways in integrated biorefineries are targeted and designed in this work by an analytical approach not requiring graphical constructions. The approach is based on six fundamental equations involving eight variables: two balance equations for the stream flowrate and the bioethanol load over the total network system; one equation for the above-pinch bioethanol load being picked up by the minimum fresh resource and the purified stream; and three equations for the purification unit. A solution strategy is devised by specifying the two variables associated with the purifier inlet stream. Importantly, continuous targeting is then possible over the entire purifier inlet flowrate range on deriving elegant formulae for the remaining six variables. The Unified Targeting Algorithm (UTA) is utilized to establish the minimum fresh bioethanol resource flowrate and identify the pinch purity. The fresh bioethanol resource flowrate target is shown to decrease linearly with purifier inlet flowrate provided the pinch is held by the same point. The Nearest Neighbors Algorithm (NNA) is used to methodically synthesize optimal networks matching bioethanol demands and sources. A case study of a biorefinery producing bioethanol from wheat with arabinoxylan (AX) coproduction is presented. It illustrates the versatility of the approach in generating superior practical designs with up to nearly 94% savings for integrated bioethanol networks, both with and without process

  14. Predictive Coding and Multisensory Integration: An Attentional Account of the Multisensory Mind

    Directory of Open Access Journals (Sweden)

    Durk Talsma

    2015-03-01

    Full Text Available Multisensory integration involves a host of different cognitive processes, occurring at different stages of sensory processing. Here I argue that, despite recent insights suggesting that multisensory interactions can occur at very early latencies, the actual integration of individual sensory traces into an internally consistent mental representation is dependent on both top-down and bottom-up processes. Moreover, I argue that this integration is not limited to just sensory inputs, but that internal cognitive processes also shape the resulting mental representation. Studies showing that memory recall is affected by the initial multisensory context in which the stimuli were presented will be discussed, as well as several studies showing that mental imagery can affect multisensory illusions. This empirical evidence will be discussed from a predictive coding perspective, in which a central top-down attentional process is proposed to play a central role in coordinating the integration of all these inputs into a coherent mental representation.

  15. Manual for COMSYN: A orbit integration code for the study of beam dynamics in compact synchrotrons

    International Nuclear Information System (INIS)

    Huang, Y.

    1991-10-01

    COMSYN is a numerical integration code written for the study and design of compact synchrotrons. An improved 4th-order Runge-Kutta method is used in COMSYN to integrate the exact equations of motion in a rectangular coordinate system. The dipole magnetic field components B_x, B_y and B_z can be obtained from either measured or directly computed data (MAGNUS, TOSCA). A spline interpolation method is then used to get the field value at the particle position. For standard quadrupoles and sextupoles, analytical expressions are employed to compute the field distribution.
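    The classical 4th-order Runge-Kutta step that such orbit codes are built on is compact. The sketch below integrates the exact Lorentz-force equations of motion in Cartesian coordinates for a uniform field; the field value, charge-to-mass ratio, and step count are illustrative, not COMSYN's:

```python
import numpy as np

def lorentz(state, qm, B):
    """Exact equations of motion for a charged particle: dv/dt = (q/m) v x B."""
    vel = state[3:]
    return np.concatenate([vel, qm * np.cross(vel, B)])

def rk4_step(f, y, dt):
    """Classical 4th-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative setup: uniform field along z, unit charge-to-mass ratio,
# so the orbit is a circle traversed in one gyroperiod of 2*pi.
qm, B = 1.0, np.array([0.0, 0.0, 1.0])
y = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])  # (x, y, z, vx, vy, vz)
f = lambda s: lorentz(s, qm, B)
for _ in range(1000):
    y = rk4_step(f, y, 2 * np.pi / 1000)
print(y[:2])  # back near the starting point (1, 0) after one full gyration
```

In a production code the uniform field would be replaced by spline interpolation of measured or computed field maps, exactly as the abstract describes.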

  16. Integrated Fuel-Coolant Interaction (IFCI 7.0) Code User's Manual

    Energy Technology Data Exchange (ETDEWEB)

    Young, Michael F.

    1999-05-01

    The integrated fuel-coolant interaction (IFCI) computer code is being developed at Sandia National Laboratories to investigate the fuel-coolant interaction (FCI) problem at large scale using a two-dimensional, three-field hydrodynamic framework and physically based models. IFCI will be capable of treating all major FCI processes in an integrated manner. This document is a description of IFCI 7.0. The user's manual describes the hydrodynamic method and physical models used in IFCI 7.0. Appendix A is an input manual provided for the creation of working decks.

  17. Integrated Fuel-Coolant Interaction (IFCI 7.0) Code User's Manual

    International Nuclear Information System (INIS)

    Young, Michael F.

    1999-01-01

    The integrated fuel-coolant interaction (IFCI) computer code is being developed at Sandia National Laboratories to investigate the fuel-coolant interaction (FCI) problem at large scale using a two-dimensional, three-field hydrodynamic framework and physically based models. IFCI will be capable of treating all major FCI processes in an integrated manner. This document is a description of IFCI 7.0. The user's manual describes the hydrodynamic method and physical models used in IFCI 7.0. Appendix A is an input manual provided for the creation of working decks

  18. Label swapper device for spectral amplitude coded optical packet networks monolithically integrated on InP.

    Science.gov (United States)

    Muñoz, P; García-Olcina, R; Habib, C; Chen, L R; Leijtens, X J M; de Vries, T; Robbins, D; Capmany, J

    2011-07-04

    In this paper, the design, fabrication and experimental characterization of a spectral amplitude coded (SAC) optical label swapper monolithically integrated on Indium Phosphide (InP) is presented. The device has a footprint of 4.8 x 1.5 mm2 and is able to perform the label swapping operations required in SAC at a speed of 155 Mbps. The device was manufactured in InP using a multiple-purpose generic integration scheme. Compared to previous SAC label swapper demonstrations using discrete component assembly, this label swapper chip operates two orders of magnitude faster.

  19. CSNI Integral Test Facility Matrices for Validation of Best-Estimate Thermal-Hydraulic Computer Codes

    International Nuclear Information System (INIS)

    Glaeser, H.

    2008-01-01

    Internationally agreed Integral Test Facility (ITF) matrices for validation of realistic thermal hydraulic system computer codes were established. ITF development is mainly for Pressurised Water Reactors (PWRs) and Boiling Water Reactors (BWRs). A separate activity was for Russian Pressurised Water-cooled and Water-moderated Energy Reactors (WWER). Firstly, the main physical phenomena that occur during considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. In this paper some specific examples from the ITF matrices will also be provided. The matrices will be a guide for code validation, will be a basis for comparisons of code predictions performed with different system codes, and will contribute to the quantification of the uncertainty range of code model predictions. In addition to this objective, the construction of such a matrix is an attempt to record information which has been generated around the world over the last years, so that it is more accessible to present and future workers in that field than would otherwise be the case.

  20. Integration of Multifidelity Multidisciplinary Computer Codes for Design and Analysis of Supersonic Aircraft

    Science.gov (United States)

    Geiselhart, Karl A.; Ozoroski, Lori P.; Fenbert, James W.; Shields, Elwood W.; Li, Wu

    2011-01-01

    This paper documents the development of a conceptual-level integrated process for the design and analysis of efficient and environmentally acceptable supersonic aircraft. Overcoming the technical challenges to this goal requires a conceptual design capability that lets users examine the integrated solution across all disciplines and facilitates the application of multidisciplinary design, analysis, and optimization on a scale greater than previously achieved. The described capability is both an interactive design environment and a high-powered optimization system, with a unique blend of low-, mixed- and high-fidelity engineering tools combined in the software integration framework ModelCenter. The various modules are described and the capabilities of the system are demonstrated. Current limitations and proposed future enhancements are also discussed.

  1. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC): gap analysis for high fidelity and performance assessment code development

    International Nuclear Information System (INIS)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-01-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  2. Nuclear Energy Advanced Modeling and Simulation (NEAMS) waste Integrated Performance and Safety Codes (IPSC) : gap analysis for high fidelity and performance assessment code development.

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.; Webb, Stephen Walter; Dewers, Thomas A.; Mariner, Paul E.; Edwards, Harold Carter; Fuller, Timothy J.; Freeze, Geoffrey A.; Jove-Colon, Carlos F.; Wang, Yifeng

    2011-03-01

    This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and to develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are

  3. FREQUENCY ANALYSIS OF RLE-BLOCKS REPETITIONS IN THE SERIES OF BINARY CODES WITH OPTIMAL MINIMAX CRITERION OF AUTOCORRELATION FUNCTION

    Directory of Open Access Journals (Sweden)

    A. A. Kovylin

    2013-01-01

    Full Text Available The article describes the problem of searching for binary pseudo-random sequences with a quasi-ideal autocorrelation function, to be used in contemporary communication systems, including mobile and wireless data transfer interfaces. In the synthesis of sets of binary sequences, the target set is formed based on the minimax criterion, by which a sequence is considered optimal for the intended application. In the course of the research, optimal sequences with orders of up to 52 were obtained, and an analysis of their run-length encoding (RLE) was carried out. The analysis showed regularities in the distribution of runs of different lengths in the codes that are optimal under the chosen criterion, which would make it possible to optimize the search for such codes in the future.
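    Both quantities involved, the minimax (peak sidelobe) level of the aperiodic autocorrelation and the run-length (RLE) structure, are easy to compute directly. The sketch below checks them for the Barker sequence of length 13, a classical example that attains the ideal minimax level of 1:

```python
from itertools import groupby

def sidelobes(seq):
    """Aperiodic autocorrelation values at nonzero lags for a +/-1 sequence."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(1, n)]

def run_lengths(seq):
    """RLE blocks: lengths of consecutive runs of equal symbols."""
    return [len(list(g)) for _, g in groupby(seq)]

# Barker-13 is minimax-optimal: every autocorrelation sidelobe has |value| <= 1.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
print(max(abs(s) for s in sidelobes(barker13)))  # -> 1
print(run_lengths(barker13))                     # -> [5, 2, 2, 1, 1, 1, 1]
```

Tabulating `run_lengths` over many minimax-optimal sequences is the kind of frequency analysis of RLE blocks the article describes.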

  4. Integrating packing and distribution problems and optimization through mathematical programming

    Directory of Open Access Journals (Sweden)

    Fabio Miguel

    2016-06-01

    Full Text Available This paper analyzes the integration of two combinatorial problems that frequently arise in production and distribution systems. One is the Bin Packing Problem (BPP), which involves finding an ordering of objects of different volumes to be packed into the minimal number of containers of the same or different sizes. An optimal solution to this NP-Hard problem can be approximated by means of meta-heuristic methods. The other is the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW), a variant of the Travelling Salesman Problem (again an NP-Hard problem) with extra constraints. Here we model these two problems in a single framework and use an evolutionary meta-heuristic to solve them jointly. Furthermore, we use data from a real-world company as a test-bed for the method introduced here.
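    The BPP half of such a framework is often seeded or repaired with the classic first-fit decreasing heuristic, a common building block inside packing meta-heuristics (the paper's own operators are not specified here; the item volumes and capacity below are made up):

```python
def first_fit_decreasing(volumes, capacity):
    """Greedy BPP heuristic: place each item, largest first, into the first
    bin with room; open a new bin when none fits."""
    bins = []
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])
    return bins

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], capacity=10))  # -> [[8, 2], [4, 4, 1, 1]]
```

An evolutionary method would then perturb item orderings (or bin assignments) and re-evaluate, while the routing side scores each candidate packing against the CVRPTW constraints.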

  5. Integration and Optimization of Renewables and Storages for Rural Electrification

    Directory of Open Access Journals (Sweden)

    Morris Brenna

    2016-09-01

    Full Text Available Electricity access in Sub-Saharan African countries is below 10%; thus, introducing a microgrid for rural electrification can overcome the endemic lack of modern electricity access that hampers the provision of basic services such as education, healthcare, safety, and economic and social growth for rural communities. This work studies different possible comparison methods considering variations such as the land area required, the location of the storage, the efficiency, availability and reliability of energy resources, and technology cost variability (investment cost and levelized cost of electricity), which are among the major key parameters used to assess the best possible utilization of renewables and storage systems, whether used as integrated, hybrid or independent systems. The study is carried out largely with the help of the micropower optimization modeling tool HOMER, applied to Ethiopia. As a result, the study proposes a Photovoltaic (PV)-Wind-Hydro-Battery hybrid system model that yields the optimal configuration of power systems at an affordable price for underserved rural communities.

  6. Development of essential system technologies for advanced reactor - Development of natural circulation analysis code for integral reactor

    Energy Technology Data Exchange (ETDEWEB)

    Park, Goon Cherl; Park, Ik Gyu; Kim, Jae Hak; Lee, Sang Min; Kim, Tae Wan [Seoul National University, Seoul (Korea)

    1999-04-01

    The objective of this study is to understand the natural circulation characteristics of integral type reactors and to develop a natural circulation analysis code for integral type reactors. This study is focused on asymmetric 3-dimensional flow during natural circulation, such as isolation of a 1/4 steam generator section and inclination of the reactor systems. Natural circulation experiments were done using small-scale facilities of the integral reactor SMART (System-Integrated Modular Advanced ReacTor). The CFX4 code was used to investigate the flow patterns and thermal mixing phenomena in the upper pressure header and downcomer. Differences between normal operation of all steam generators and the 1/4 section isolation conditions were observed, and the results were used as the data for RETRAN-03/INT code validation. The RETRAN-03 code was modified to develop a natural circulation analysis code for integral type reactors, which was named RETRAN-03/INT. 3-dimensional analysis models for asymmetric flow in integral type reactors were developed using vector momentum equations in RETRAN-03. Analysis results using RETRAN-03/INT were compared with experimental and CFX4 analysis results and showed good agreement. The natural circulation characteristics obtained in this study will provide important and fundamental design features for future small and medium integral reactors. (author). 29 refs., 75 figs., 18 tabs.

  7. Visual-vestibular cue integration for heading perception: applications of optimal cue integration theory.

    Science.gov (United States)

    Fetsch, Christopher R; Deangelis, Gregory C; Angelaki, Dora E

    2010-05-01

    The perception of self-motion is crucial for navigation, spatial orientation and motor control. In particular, estimation of one's direction of translation, or heading, relies heavily on multisensory integration in most natural situations. Visual and nonvisual (e.g., vestibular) information can be used to judge heading, but each modality alone is often insufficient for accurate performance. It is not surprising, then, that visual and vestibular signals converge frequently in the nervous system, and that these signals interact in powerful ways at the level of behavior and perception. Early behavioral studies of visual-vestibular interactions consisted mainly of descriptive accounts of perceptual illusions and qualitative estimation tasks, often with conflicting results. In contrast, cue integration research in other modalities has benefited from the application of rigorous psychophysical techniques, guided by normative models that rest on the foundation of ideal-observer analysis and Bayesian decision theory. Here we review recent experiments that have attempted to harness these so-called optimal cue integration models for the study of self-motion perception. Some of these studies used nonhuman primate subjects, enabling direct comparisons between behavioral performance and simultaneously recorded neuronal activity. The results indicate that humans and monkeys can integrate visual and vestibular heading cues in a manner consistent with optimal integration theory, and that single neurons in the dorsal medial superior temporal area show striking correlates of the behavioral effects. This line of research and other applications of normative cue combination models should continue to shed light on mechanisms of self-motion perception and the neuronal basis of multisensory integration.
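    Optimal cue integration for independent Gaussian cues has a closed form: each cue is weighted in proportion to its inverse variance (its reliability), and the fused estimate has lower variance than either cue alone. A minimal sketch with hypothetical visual and vestibular heading estimates in degrees:

```python
# Maximum-likelihood (Bayesian-optimal) fusion of two independent Gaussian
# heading cues. The numbers are hypothetical, not data from the review.
def fuse(mu_vis, var_vis, mu_vest, var_vest):
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_vest)
    mu = w_vis * mu_vis + (1.0 - w_vis) * mu_vest
    var = 1.0 / (1.0 / var_vis + 1.0 / var_vest)
    return mu, var

print(fuse(10.0, 4.0, 2.0, 4.0))  # -> (6.0, 2.0): equal reliabilities average, variance halves
```

The behavioral tests reviewed here compare measured discrimination thresholds under combined cues against the reduction this formula predicts from the single-cue thresholds.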

  8. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC).

    Energy Technology Data Exchange (ETDEWEB)

    Schultz, Peter Andrew

    2011-12-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M&S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V&V) is required throughout the system to establish evidence-based metrics for the level of confidence in M&S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V&V challenge at the subcontinuum scale, an approach to incorporate V&V concepts into subcontinuum scale modeling and simulation (M&S), and a plan to incrementally incorporate effective V&V into subcontinuum scale M&S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  9. Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC)

    International Nuclear Information System (INIS)

    Schultz, Peter Andrew

    2011-01-01

    The objective of the U.S. Department of Energy Office of Nuclear Energy Advanced Modeling and Simulation Waste Integrated Performance and Safety Codes (NEAMS Waste IPSC) is to provide an integrated suite of computational modeling and simulation (M and S) capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive-waste storage facility or disposal repository. Achieving the objective of modeling the performance of a disposal scenario requires describing processes involved in waste form degradation and radionuclide release at the subcontinuum scale, beginning with mechanistic descriptions of chemical reactions and chemical kinetics at the atomic scale, and upscaling into effective, validated constitutive models for input to high-fidelity continuum scale codes for coupled multiphysics simulations of release and transport. Verification and validation (V and V) is required throughout the system to establish evidence-based metrics for the level of confidence in M and S codes and capabilities, including at the subcontinuum scale and the constitutive models they inform or generate. This report outlines the nature of the V and V challenge at the subcontinuum scale, an approach to incorporate V and V concepts into subcontinuum scale modeling and simulation (M and S), and a plan to incrementally incorporate effective V and V into subcontinuum scale M and S destined for use in the NEAMS Waste IPSC work flow to meet requirements of quantitative confidence in the constitutive models informed by subcontinuum scale phenomena.

  10. Current status of the transient integral fuel element performance code URANUS

    International Nuclear Information System (INIS)

    Preusser, T.; Lassmann, K.

    1983-01-01

    To investigate the behavior of fuel pins during normal and off-normal operation, the integral fuel rod code URANUS has been extended to include a transient version. The paper describes the current status of the program system, including a presentation of newly developed models for hypothetical accident investigation. The main objective of current development work is to improve the modelling of fuel and clad material behavior during fast transients. URANUS allows detailed analysis of experiments until the onset of strong material transport phenomena. Transient fission gas analysis is carried out through coupling with a special version of the LANGZEIT-KURZZEIT code (KfK). Fuel restructuring and grain growth kinetics models have been improved recently to better characterize pre-experimental steady-state operation; transient models are under development. Extensive verification of the new version has been carried out by comparison with analytical solutions, experimental evidence, and code-to-code evaluation studies. URANUS, with all these improvements, has been successfully applied to difficult fast breeder fuel rod analyses including TOP, LOF, TUCOP, local coolant blockage and specific carbide fuel experiments. The objective of further studies is the description of transient PCMI. It is expected that the results of these developments will contribute significantly to the understanding of fuel element structural behavior during severe transients. (orig.)

  11. Integrated intra-subassembly treatment in the SASSYS-1 LMR systems analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, F.

    1992-09-01

    This report discusses a hot channel treatment which has been added to the SASSYS-1 LMR systems analysis code by providing for a multiple pin treatment of each of one or more subassemblies. This is an explicit calculation of intra-subassembly effects, not a hot-channel adjustment to a calculated average channel. Thus, the code can account for effects such as transient flow redistribution, both within a subassembly and among subassemblies. The code now provides a total integrated thermal hydraulic treatment including a multiple pin treatment within subassemblies, a multi-channel treatment of the whole core, and models for the primary coolant loops, the intermediate coolant loops, the steam generators, and the balance of plant. Currently the multiple-pin option is only implemented for single-phase calculations. It is not applicable after the onset of boiling or pin disruption. The new multiple pin treatment is being verified with detailed temperature data from instrumented subassemblies in EBR-II, both steady-state and transient, with special emphasis on passive safety tests such as SHRT-45. For the SHRT-45 test, excellent agreement is obtained between code predictions and experimental measurements of coolant temperatures.

  12. Integrated intra-subassembly treatment in the SASSYS-1 LMR systems analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, F.

    1992-01-01

    This report discusses a hot channel treatment which has been added to the SASSYS-1 LMR systems analysis code by providing for a multiple pin treatment of each of one or more subassemblies. This is an explicit calculation of intra-subassembly effects, not a hot-channel adjustment to a calculated average channel. Thus, the code can account for effects such as transient flow redistribution, both within a subassembly and among subassemblies. The code now provides a total integrated thermal hydraulic treatment including a multiple pin treatment within subassemblies, a multi-channel treatment of the whole core, and models for the primary coolant loops, the intermediate coolant loops, the steam generators, and the balance of plant. Currently the multiple-pin option is only implemented for single-phase calculations. It is not applicable after the onset of boiling or pin disruption. The new multiple pin treatment is being verified with detailed temperature data from instrumented subassemblies in EBR-II, both steady-state and transient, with special emphasis on passive safety tests such as SHRT-45. For the SHRT-45 test, excellent agreement is obtained between code predictions and experimental measurements of coolant temperatures.

  13. Optimal Solar PV Arrays Integration for Distributed Generation

    Energy Technology Data Exchange (ETDEWEB)

    Omitaomu, Olufemi A [ORNL]; Li, Xueping [University of Tennessee, Knoxville (UTK)]

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on the Markowitz mean-variance portfolio selection model. We further introduce quantity and cardinality constraints, resulting in a mixed-integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.
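The mean-variance selection idea can be sketched in a few lines: for a small candidate set, the cardinality-constrained problem can even be solved by enumeration rather than a MIQP solver. The per-building outputs and covariances below are hypothetical illustration values, not data from the study:

```python
import itertools
import numpy as np

# Hypothetical daily PV output per candidate rooftop: means (kWh)
# and covariance (kWh^2), playing the role of asset returns/risk.
mu = np.array([12.0, 10.0, 11.0, 9.0, 13.0])
cov = np.array([
    [4.0, 1.5, 0.5, 0.2, 1.0],
    [1.5, 3.0, 0.4, 0.3, 0.8],
    [0.5, 0.4, 2.5, 0.1, 0.6],
    [0.2, 0.3, 0.1, 2.0, 0.2],
    [1.0, 0.8, 0.6, 0.2, 5.0],
])

def best_subset(k, min_output):
    """Among all k-rooftop portfolios meeting the output target,
    return the one with the smallest total variance (Markowitz-style
    selection with a cardinality constraint, solved by enumeration)."""
    best = None
    for idx in itertools.combinations(range(len(mu)), k):
        sel = list(idx)
        mean = mu[sel].sum()
        var = cov[np.ix_(sel, sel)].sum()   # total portfolio variance
        if mean >= min_output and (best is None or var < best[1]):
            best = (sel, var, mean)
    return best
```

Sweeping `min_output` over a range of targets traces out the efficient frontier mentioned in the abstract.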

  14. Integrated Multidisciplinary Constrained Optimization of Offshore Support Structures

    International Nuclear Information System (INIS)

    Haghi, Rad; Molenaar, David P; Ashuri, Turaj; Van der Valk, Paul L C

    2014-01-01

    In the current offshore wind turbine support structure design method, the tower and foundation, which form the support structure, are designed separately by the turbine and foundation designers. This method yields a suboptimal design and results in a heavy, overdesigned and expensive support structure. This paper presents an integrated multidisciplinary approach to design the tower and foundation simultaneously. Aerodynamics, hydrodynamics, structure and soil mechanics are the modeled disciplines to capture the full dynamic behavior of the foundation and tower under different environmental conditions. The objective function to be minimized is the mass of the support structure. The model includes various design constraints: local and global buckling, modal frequencies, and fatigue damage along different stations of the structure. To show the usefulness of the method, an existing SWT-3.6-107 offshore wind turbine, whose tower and foundation were designed separately, is used as a case study. The result of the integrated multidisciplinary design optimization shows a 12.1% reduction in the mass of the support structure, while satisfying all the design constraints.

  15. Studies on optimal design and operation of integrated distillation arrangements

    Energy Technology Data Exchange (ETDEWEB)

    Christiansen, Atle Christer

    1997-12-31

    During the last decades, there has been growing concern in the chemical engineering community over the task of developing more cost- and energy-efficient process equipment. This thesis discusses measures for improving the end-use energy efficiency of separation systems. It emphasises a certain class of integrated distillation arrangements; in particular, it considers means for direct coupling of distillation columns so as to use the underlying physics to facilitate more energy-efficient separations. The numerical methods discussed are well suited to solving models of distillation columns. A tear and grid method is proposed that to some extent exploits the sparsity, since the number of tear variables required for solving a distillation model is usually rather small. The parameter continuation method described is well suited for ill-conditioned problems. The analysis of integrated columns is extended beyond the scope of numerical simulations by means of analytical results that apply in certain limiting cases. The concept of preferred separation, which is important for prefractionator arrangements, is considered. This analysis yields information that is important for the practical operation of such columns. Finally, the proposed numerical methods are used to optimize Petlyuk arrangements for separating ternary and quaternary mixtures. 166 refs., 130 figs., 20 tabs.

  16. Integrating Fuzzy Logic, Optimization, and GIS for Ecological Impact Assessments

    Science.gov (United States)

    Bojórquez-Tapia, Luis A.; Juárez, Lourdes; Cruz-Bello, Gustavo

    2002-09-01

    Appraisal of ecological impacts has been problematic because the behavior of ecological systems, and the responses of these systems to human intervention, are far from fully understood. While it has been relatively easy to itemize the potential ecological impacts, it has been difficult to arrive at accurate predictions of how these impacts affect populations, communities, or ecosystems. Furthermore, the spatial heterogeneity of ecological systems has been overlooked because its examination is practically impossible through matrix techniques, the most commonly used impact assessment approach. In addition, the public has become increasingly aware of the importance of the EIA in decision-making, and thus the interpretation of impact significance is complicated further by the different value judgments of stakeholders. Moreover, impact assessments are carried out with a minimum of data, high uncertainty, and poor conceptual understanding. Hence, the evaluation of ecological impacts entails the integration of subjective and often conflicting judgments from a variety of experts and stakeholders. The purpose of this paper is to present an environmental impact assessment approach based on the integration of fuzzy logic, geographical information systems, and optimization techniques. This approach enables environmental analysts to deal with the intrinsic imprecision and ambiguity associated with the judgments of experts and stakeholders, the description of ecological systems, and the prediction of ecological impacts. The application of this approach is illustrated through an example, which shows how consensus about impact mitigation can be attained within a conflict resolution framework.
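The fuzzy-logic ingredient can be illustrated with a toy membership model. The paper's actual membership functions and aggregation rules are not reproduced here; the shapes and breakpoints below are hypothetical:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c], peaking at b.
    Returns the degree (0..1) to which x belongs to the fuzzy set."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def impact_significance(magnitude, extent):
    """Fuzzy AND (min operator) of two expert-judged criteria,
    each expressed on a normalized 0..1 scale. A hypothetical
    two-criterion version of a fuzzy impact-significance rule."""
    m_mag = tri(magnitude, 0.3, 0.7, 1.0)   # 'high magnitude' set
    m_ext = tri(extent, 0.2, 0.6, 1.0)      # 'wide extent' set
    return min(m_mag, m_ext)
```

Because membership is graded rather than binary, conflicting expert judgments can be aggregated (e.g., via min/max operators) instead of being forced into a single crisp score.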

  17. Enhanced Protein Production in Escherichia coli by Optimization of Cloning Scars at the Vector-Coding Sequence Junction

    DEFF Research Database (Denmark)

    Mirzadeh, Kiavash; Martinez, Virginia; Toddo, Stephen

    2015-01-01

    are poorly expressed even when they are codon-optimized and expressed from vectors with powerful genetic elements. In this study, we show that poor expression can be caused by certain nucleotide sequences (e.g., cloning scars) at the junction between the vector and the coding sequence. Since these sequences...

  18. Optimizing the Betts-Miller-Janjic cumulus parameterization with Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Huang, Melin; Huang, Bormin; Huang, Allen H.-L.

    2015-10-01

    The schemes of cumulus parameterization are responsible for the sub-grid-scale effects of convective and/or shallow clouds, and are intended to represent vertical fluxes due to unresolved updrafts and downdrafts and compensating motion outside the clouds. Some schemes additionally provide cloud and precipitation field tendencies in the convective column, and momentum tendencies due to convective transport of momentum. The schemes all provide the convective component of surface rainfall. Betts-Miller-Janjic (BMJ) is one scheme to fulfill such purposes in the weather research and forecast (WRF) model. The National Centers for Environmental Prediction (NCEP) has tried to optimize the BMJ scheme for operational application. As there are no interactions among horizontal grid points, this scheme is very suitable for parallel computation. The Intel Xeon Phi Many Integrated Core (MIC) architecture, with its support for efficient parallelization and vectorization, allowed us to optimize the BMJ scheme. Compared to the original code running on one CPU socket (eight cores) and on one CPU core of an Intel Xeon E5-2670, the MIC-based optimization of this scheme running on a Xeon Phi 7120P coprocessor improves the performance by 2.4x and 17.0x, respectively.
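The key property exploited above, that there are no interactions among horizontal grid points, is what makes column-wise physics loops vectorizable. A language-neutral illustration of the same idea, using a toy column relaxation rather than the actual BMJ code:

```python
import numpy as np

def adjust_profiles_loop(t):
    """Scalar loops over columns and levels: relax each vertical
    profile halfway toward its own column mean (a stand-in for a
    column-independent physics adjustment)."""
    out = t.copy()
    for j in range(t.shape[1]):          # horizontal grid points
        ref = t[:, j].mean()             # per-column reference value
        for k in range(t.shape[0]):      # vertical levels
            out[k, j] = t[k, j] + 0.5 * (ref - t[k, j])
    return out

def adjust_profiles_vec(t):
    """Same computation as whole-array operations. Because columns
    are independent, the work vectorizes and parallelizes freely,
    which is the property MIC-style optimization relies on."""
    ref = t.mean(axis=0)
    return t + 0.5 * (ref - t)
```

Both functions produce identical results; only the second exposes the independence to the vector units.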

  19. The role of crossover operator in evolutionary-based approach to the problem of genetic code optimization.

    Science.gov (United States)

    Błażej, Paweł; Wnȩtrzak, Małgorzata; Mackiewicz, Paweł

    2016-12-01

    One of theories explaining the present structure of canonical genetic code assumes that it was optimized to minimize harmful effects of amino acid replacements resulting from nucleotide substitutions and translational errors. A way to testify this concept is to find the optimal code under given criteria and compare it with the canonical genetic code. Unfortunately, the huge number of possible alternatives makes it impossible to find the optimal code using exhaustive methods in sensible time. Therefore, heuristic methods should be applied to search the space of possible solutions. Evolutionary algorithms (EA) seem to be ones of such promising approaches. This class of methods is founded both on mutation and crossover operators, which are responsible for creating and maintaining the diversity of candidate solutions. These operators possess dissimilar characteristics and consequently play different roles in the process of finding the best solutions under given criteria. Therefore, the effective searching for the potential solutions can be improved by applying both of them, especially when these operators are devised specifically for a given problem. To study this subject, we analyze the effectiveness of algorithms for various combinations of mutation and crossover probabilities under three models of the genetic code assuming different restrictions on its structure. To achieve that, we adapt the position based crossover operator for the most restricted model and develop a new type of crossover operator for the more general models. The applied fitness function describes costs of amino acid replacement regarding their polarity. Our results indicate that the usage of crossover operators can significantly improve the quality of the solutions. Moreover, the simulations with the crossover operator optimize the fitness function in the smaller number of generations than simulations without this operator. The optimal genetic codes without restrictions on their structure
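Position-based crossover for permutation-style encodings (such as assignments of amino acids to codon groups) can be sketched as follows. This is a generic textbook version of the operator, not the authors' exact implementation:

```python
def position_based_crossover(p1, p2, positions):
    """Position-based crossover for permutation chromosomes:
    keep p1's genes at the given positions, then fill the remaining
    slots with p2's genes in their p2 order. The child is always a
    valid permutation of the parent alphabet."""
    child = [None] * len(p1)
    kept = set()
    for i in positions:
        child[i] = p1[i]
        kept.add(p1[i])
    fill = iter(g for g in p2 if g not in kept)
    for i in range(len(child)):
        if child[i] is None:
            child[i] = next(fill)
    return child
```

Combining such a crossover with mutation lets the evolutionary search both recombine good partial assignments and keep exploring, which is the interplay the study quantifies.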

  20. Performance Modeling and Optimization of a High Energy CollidingBeam Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-06-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP based systems the most important performance constraint is node-adapter contention, while for 3D-Torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP based systems, at 3% to 7%. On torus based systems, errors are higher at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.
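A latency-bandwidth-contention model of the kind described can be sketched in a few lines; the parameter values used below are placeholders, not the paper's fitted constants:

```python
def aapc_time(p, msg_bytes, latency, bandwidth, contention=1.0):
    """Modeled per-process time for all-to-all personalized
    communication: each of p processes sends p-1 distinct messages,
    each costing latency plus a bandwidth term inflated by a
    contention factor (node-adapter contention on SMP nodes,
    link contention on a 3D torus)."""
    per_msg = latency + contention * msg_bytes / bandwidth
    return (p - 1) * per_msg
```

Fitting `latency`, `bandwidth`, and `contention` per platform from micro-benchmarks, then predicting full-application AAPC time, mirrors the methodology the abstract describes.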

  1. Performance Modeling and Optimization of a High Energy Colliding Beam Simulation Code

    International Nuclear Information System (INIS)

    Shan, Hongzhang; Strohmaier, Erich; Qiang, Ji; Bailey, David H.; Yelick, Kathy

    2006-01-01

    An accurate modeling of the beam-beam interaction is essential to maximizing the luminosity in existing and future colliders. BeamBeam3D was the first parallel code that can be used to study this interaction fully self-consistently on high-performance computing platforms. Various all-to-all personalized communication (AAPC) algorithms dominate its communication patterns, for which we developed a sequence of performance models using a series of micro-benchmarks. We find that for SMP based systems the most important performance constraint is node-adapter contention, while for 3D-Torus topologies good performance models are not possible without considering link contention. The best average model prediction error is very low on SMP based systems, at 3% to 7%. On torus based systems, errors are higher at 29%, but optimized performance can again be predicted within 8% in some cases. These excellent results across five different systems indicate that this methodology for performance modeling can be applied to a large class of algorithms.

  2. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    Science.gov (United States)

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time-consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with an exact computation of the logarithm and trigonometric functions as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
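The logarithm appears in photon-migration Monte Carlo when sampling the free path length, s = -ln(ξ)/μt, from a uniform random number ξ. The paper uses polynomial and rational fits; the sketch below uses a table-lookup approximation instead, a comparable speed trick with a checkable error bound (the grid size and range are choices of this sketch, not the authors'):

```python
import numpy as np

# Precompute a lookup table for ln(x) on [1e-3, 1], the range of the
# uniform random numbers used to sample photon path lengths.
grid = np.logspace(-3, 0, 2000)      # log-spaced nodes in [1e-3, 1]
ln_grid = np.log(grid)

def fast_ln(x):
    """Linear-interpolation approximation of ln(x) for x in [1e-3, 1];
    replaces the exact (slow) logarithm in the sampling inner loop."""
    return np.interp(x, grid, ln_grid)

def step_length(xi, mu_t):
    """Photon free path s = -ln(xi) / mu_t using the approximation,
    where mu_t is the total interaction coefficient."""
    return -fast_ln(xi) / mu_t
```

The same pattern (precompute once, approximate in the hot loop) applies to the trigonometric functions used for scattering angles.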

  3. Integrated analysis of core debris interactions and their effects on containment integrity using the CONTAIN computer code

    International Nuclear Information System (INIS)

    Carroll, D.E.; Bergeron, K.D.; Williams, D.C.; Tills, J.L.; Valdez, G.D.

    1987-01-01

    The CONTAIN computer code includes a versatile system of phenomenological models for analyzing the physical, chemical and radiological conditions inside the containment building during severe reactor accidents. Important contributors to these conditions are the interactions which may occur between released corium and cavity concrete. The phenomena associated with interactions between ejected corium debris and the containment atmosphere (Direct Containment Heating or DCH) also pose a potential threat to containment integrity. In this paper, we describe recent enhancements of the CONTAIN code which allow an integrated analysis of these effects in the presence of other mitigating or aggravating physical processes. In particular, the recent inclusion of the CORCON and VANESA models is described and a calculation example presented. With this capability CONTAIN can model core-concrete interactions occurring simultaneously in multiple compartments and can couple the aerosols thereby generated to the mechanistic description of all atmospheric aerosol components. Also discussed are some recent results of modeling the phenomena involved in Direct Containment Heating. (orig.)

  4. Comparison of different methods used in integral codes to model coagulation of aerosols

    Science.gov (United States)

    Beketov, A. I.; Sorokin, A. A.; Alipchenkov, V. M.; Mosunova, N. A.

    2013-09-01

    The methods for calculating coagulation of particles in the carrying phase that are used in the integral codes SOCRAT, ASTEC, and MELCOR, as well as the Hounslow and Jacobson methods used to model aerosol processes in the chemical industry and in atmospheric investigations are compared on test problems and against experimental results in terms of their effectiveness and accuracy. It is shown that all methods are characterized by a significant error in modeling the distribution function for micrometer particles if calculations are performed using rather "coarse" spectra of particle sizes, namely, when the ratio of the volumes of particles from neighboring fractions is equal to or greater than two. With reference to the problems considered, the Hounslow method and the method applied in the aerosol module used in the ASTEC code are the most efficient ones for carrying out calculations.
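The sectional methods being compared all approximate the discrete Smoluchowski coagulation equation. A generic explicit sketch with a constant kernel (not any of the compared implementations) shows the key bookkeeping: pairwise gain with a factor 1/2, collision loss, and mass conservation up to truncation of the size spectrum:

```python
import numpy as np

def step_smoluchowski(n, dt, K=1.0):
    """One explicit-Euler step of the discrete Smoluchowski equation
    with a constant coagulation kernel K. n[k] is the number density
    of particles containing k+1 monomers. Mass forming sizes beyond
    the largest bin is dropped, so N must be chosen large enough."""
    N = len(n)
    dn = np.zeros(N)
    total = n.sum()
    for k in range(N):
        # Gain: all ordered pairs (i, j) with sizes summing to k+1,
        # halved to avoid double counting.
        gain = 0.5 * sum(n[i] * n[k - 1 - i] for i in range(k))
        loss = n[k] * K * total
        dn[k] = K * gain - loss
    return n + dt * dn
```

Starting from monomers, total particle number falls while total mass stays constant, which is exactly the invariant the "coarse spectrum" discretizations in the compared codes struggle to preserve for micrometer particles.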

  5. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems, with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.
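The interval treatment of uncertain quantities amounts to propagating [lower, upper] bounds through the cost model and scoring each candidate design by the interval midpoint (average LCC, the cost objective) and half-width (deviation, the risk objective). A minimal sketch of that bookkeeping, independent of the MGSOACC algorithm itself:

```python
def interval_add(a, b):
    """Sum of two interval numbers, each given as a (lo, hi) tuple;
    e.g., adding uncertain capital and fuel cost components."""
    return (a[0] + b[0], a[1] + b[1])

def average_and_deviation(interval):
    """The two objectives formed from an interval LCC:
    midpoint = expected cost, half-width = investment risk."""
    lo, hi = interval
    return (lo + hi) / 2.0, (hi - lo) / 2.0
```

Minimizing both quantities simultaneously is what produces the Pareto front from which the ER step selects the final unit sizing.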

  6. Development of system analysis code for thermal-hydraulic simulation of integral reactor, Rex-10

    International Nuclear Information System (INIS)

    Lee, Y. G.; Kim, J. W.; Yoon, S. J.; Park, G. C.

    2010-10-01

    Rex-10 is an environment-friendly and economical small-scale nuclear reactor to provide the energy for district heating as well as the electric power in a micro-grid. This integral reactor comprises several innovative concepts supported by advanced primary circuit components, low coolant parameters and natural circulation cooling. To evaluate the system performance and thermal-hydraulic behavior of the reactor, a system analysis code is being developed so that the new designs and technologies adopted in Rex-10 can be reflected. The research effort is focused on programming simple and fast-running thermal-hydraulic analysis software. The details of the hydrodynamic governing equations, component models and numerical solution scheme used in this code are presented in this paper. On the basis of a one-dimensional momentum integral model, the models of point reactor neutron kinetics for the thorium-fueled core, physical processes in the steam-gas pressurizer, and heat transfer in the helically coiled steam generator are implemented in the system code. An implicit numerical scheme is applied to the momentum and energy equations to assure numerical stability. The accuracy of simulation is validated by applying the solution method to the Rex-10 test facility. The calculated natural circulation flow rate and coolant temperature at steady state are compared to the experimental data. The validation is also carried out for transients in which a sudden reduction in the core power or the feedwater flow takes place. The code's capability to predict the steady-state flow by natural convection and the qualitative behaviour of the primary system in the transients is confirmed. (Author)
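At steady state, a one-dimensional momentum integral model of a natural circulation loop reduces to a balance between buoyancy head and friction head: g·β·ΔT·H = (fL/D + ΣK)·v²/2. A minimal sketch of that balance with hypothetical loss coefficients (not Rex-10 design data):

```python
import math

def natural_circulation_velocity(beta, dT, H, g=9.81, fLD=20.0):
    """Steady-state loop velocity (m/s) from the buoyancy-friction
    balance of a 1-D momentum integral model:
        g * beta * dT * H = fLD * v**2 / 2
    beta: coolant thermal expansion coefficient (1/K)
    dT:   core-to-heat-sink temperature rise (K)
    H:    thermal center elevation difference (m)
    fLD:  lumped friction and form loss coefficient (hypothetical)."""
    return math.sqrt(2.0 * g * beta * dT * H / fLD)
```

The transient code solves the time-dependent form of this balance, but the steady solution above is the operating point the validation against the test facility checks first.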

  7. Development of system analysis code for thermal-hydraulic simulation of integral reactor, Rex-10

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2010-10-15

    Rex-10 is an environment-friendly and economical small-scale nuclear reactor to provide the energy for district heating as well as the electric power in a micro-grid. This integral reactor comprises several innovative concepts supported by advanced primary circuit components, low coolant parameters and natural circulation cooling. To evaluate the system performance and thermal-hydraulic behavior of the reactor, a system analysis code is being developed so that the new designs and technologies adopted in Rex-10 can be reflected. The research effort is focused on programming simple and fast-running thermal-hydraulic analysis software. The details of the hydrodynamic governing equations, component models and numerical solution scheme used in this code are presented in this paper. On the basis of a one-dimensional momentum integral model, the models of point reactor neutron kinetics for the thorium-fueled core, physical processes in the steam-gas pressurizer, and heat transfer in the helically coiled steam generator are implemented in the system code. An implicit numerical scheme is applied to the momentum and energy equations to assure numerical stability. The accuracy of simulation is validated by applying the solution method to the Rex-10 test facility. The calculated natural circulation flow rate and coolant temperature at steady state are compared to the experimental data. The validation is also carried out for transients in which a sudden reduction in the core power or the feedwater flow takes place. The code's capability to predict the steady-state flow by natural convection and the qualitative behaviour of the primary system in the transients is confirmed. (Author)

  8. Study on severe accidents and countermeasures for WWER-1000 reactors using the integral code ASTEC

    International Nuclear Information System (INIS)

    Tusheva, P.; Schaefer, F.; Altstadt, E.; Kliem, S.; Reinke, N.

    2011-01-01

    Research focussing on the investigation and analysis of severe accidents is an important part of nuclear safety. To maintain the safety barriers as long as possible and to retain the radioactivity within the airtight premises or the containment, to avoid or mitigate the consequences of such events, and to assess the risk, thorough studies are needed. On the one hand, severe accident research aims to understand the complex phenomena during the in- and ex-vessel phases, involving reactor physics, thermal-hydraulics, and physicochemical and mechanical processes. On the other hand, the investigations strive for effective severe accident management measures. This paper is focused on the possibilities for accident management measures in case of severe accidents. The reactor pressure vessel is the last barrier to keep the molten materials inside the reactor, and thus to prevent higher loads to the containment. To assess the behaviour of a nuclear power plant during transient or accident conditions, computer codes are widely used, which have to be validated against experiments or benchmarked against other codes. The analyses performed with the integral code ASTEC cover two accident sequences which could lead to a severe accident: a small break loss of coolant accident and a station blackout. The results have shown that in case of unavailability of major active safety systems the reactor pressure vessel would ultimately fail. The discussed issues concern the main phenomena during the early and late in-vessel phases of the accident, the time to core heat-up, the hydrogen production, the mass of corium in the reactor pressure vessel lower plenum and the failure of the reactor pressure vessel. Additionally, possible operator actions and countermeasures in the preventive or mitigative domain are addressed. The presented investigations contribute to the validation of the European integral severe accident code ASTEC for the WWER-1000 type of reactors.

  9. Evaluation of Advanced Thermohydraulic System Codes for Design and Safety Analysis of Integral Type Reactors

    International Nuclear Information System (INIS)

    2014-02-01

    The integral pressurized water reactor (PWR) concept, which incorporates the nuclear steam supply systems within the reactor vessel, is one of the innovative reactor types with high potential for near term deployment. An International Collaborative Standard Problem (ICSP) on Integral PWR Design, Natural Circulation Flow Stability and Thermohydraulic Coupling of Primary System and Containment during Accidents was established in 2010. Oregon State University, which made available the use of its experimental facility built to demonstrate the feasibility of the Multi-application Small Light Water Reactor (MASLWR) design, and sixteen institutes from seven Member States participated in this ICSP. The objective of the ICSP is to assess computer codes for reactor system design and safety analysis. This objective is achieved through the production of experimental data and computer code simulation of experiments. A loss of feedwater transient with subsequent automatic depressurization system blowdown and long term cooling was selected as the reference event, since many different modes of natural circulation phenomena, including the coupling of the primary system, high pressure containment and cooling pool, are expected to occur during this transient. A power maneuvering transient was also tested to examine the stability of natural circulation under single and two phase conditions. The ICSP was conducted in three phases: pre-test (with designed initial and boundary conditions established before the experiment was conducted), blind (with real initial and boundary conditions after the experiment was conducted) and open simulation (after the observation of real experimental data). Advanced thermohydraulic system analysis codes such as TRACE, RELAP5 and MARS have been assessed against experiments conducted at the MASLWR test facility. The ICSP has provided all participants with the opportunity to evaluate the strengths and weaknesses of their system codes in the transient

  10. Optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme for Intel Many Integrated Core (MIC) architecture

    Science.gov (United States)

    Mielikainen, Jarno; Huang, Bormin; Huang, Allen H.-L.

    2015-05-01

    Intel Many Integrated Core (MIC) ushers in a new era of supercomputing speed, performance, and compatibility. It allows developers to run code at trillions of calculations per second using a familiar programming model. In this paper, we present our results of optimizing the updated Goddard shortwave radiation Weather Research and Forecasting (WRF) scheme on Intel Many Integrated Core (MIC) hardware. The Intel Xeon Phi coprocessor is the first product based on the Intel MIC architecture; it consists of up to 61 cores connected by a high performance on-die bidirectional interconnect. The coprocessor supports all important Intel development tools, so the development environment is a familiar one to a vast number of CPU developers. However, getting maximum performance out of the Xeon Phi requires some novel optimization techniques, which are discussed in this paper. The results show that the optimizations improved the performance of the original code on the Xeon Phi 7120P by a factor of 1.3x.

  11. ROXIE A Computer Code for the Integrated Design of Accelerator Magnets

    CERN Document Server

    Russenschuck, Stephan

    1998-01-01

    The paper describes the ROXIE software program package which has been developed for the design of the superconducting magnets for the LHC at CERN. The software is used as an approach towards the integrated design of superconducting magnets, including: feature-based coil geometry creation; conceptual design using genetic algorithms; optimization of the coil and iron cross-sections using a reduced vector-potential formulation; 3-D coil end geometry and field optimization using deterministic vector-optimization techniques; tolerance analysis; production of drawings by means of a DXF interface; end-spacer design with interfaces to CAD-CAM for the CNC machining of these pieces; and the tracing of manufacturing errors using field quality measurements.
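
The conceptual-design stage mentioned above relies on genetic algorithms. As an illustration only (ROXIE's actual optimizer and its magnet-specific objective are far more elaborate, and every name and number below is invented), a minimal real-valued GA for tuning two geometry parameters might look like:

```python
import random

def genetic_minimize(objective, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-valued genetic algorithm: elitism, blend crossover,
    Gaussian mutation of one coordinate. A sketch, not ROXIE's method."""
    rng = random.Random(seed)
    dim = len(bounds)

    def clip(x, lo, hi):
        return max(lo, min(hi, x))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=objective)
        elite = scored[: pop_size // 5]          # keep the best 20% unchanged
        children = list(elite)
        while len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)        # parents drawn from the elite
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            i = rng.randrange(dim)               # mutate one coordinate
            lo, hi = bounds[i]
            child[i] = clip(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            children.append(child)
        pop = children
    return min(pop, key=objective)

# Toy stand-in for a design objective: distance of two geometry
# parameters from a hypothetical target (1.0, -2.0).
best = genetic_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

Because the elite individuals are carried over unchanged, the best design found never degrades from one generation to the next.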

  12. The Effectiveness of Business Codes: A Critical Examination of Existing Studies and the Development of an Integrated Research Model

    OpenAIRE

    Kaptein, S.P.; Schwartz, M.S.

    2007-01-01

    Business codes are a widely used management instrument. Research into the effectiveness of business codes has, however, produced conflicting results. The main reasons for the divergent findings are: varying definitions of key terms; deficiencies in the empirical data and methodologies used; and a lack of theory. In this paper, we propose an integrated research model and suggest directions for future research.

  13. An Analysis of Countries which have Integrated Coding into their Curricula and the Content Analysis of Academic Studies on Coding Training in Turkey

    Directory of Open Access Journals (Sweden)

    Hüseyin Uzunboylu

    2017-11-01

    The first aim of this study is to conduct a general analysis of countries which have integrated coding training into their curricula, and the second is to conduct a content analysis of academic studies on coding training in Turkey. It was identified that there are only a few academic studies on coding training in Turkey, that the majority of them were published in 2016, that the intended population was mainly “undergraduate students” and that the majority of these students were Computer Education and Instructional Technology undergraduates. It was determined that the studies mainly focused on the subjects of “programming” and “Scratch”, that the terms programming and coding were used as synonyms, that most of the studies were carried out using quantitative methods, and that data was obtained mostly by literature review and scale/survey interval techniques.

  14. Evaluation of angular integrals in the generation of transfer matrices for multigroup transport codes

    International Nuclear Information System (INIS)

    Garcia, R.D.M.

    1985-01-01

    The generalization of a semi-analytical technique for the evaluation of angular integrals that appear in the generation of elastic and discrete inelastic transfer matrices for transport codes is carried out. The generalized series expansions are found to be too complex, and thus of little practical value, compared to the Gaussian quadrature technique; in contrast, the recursion relations developed in this work are superior to the quadrature scheme in those cases where round-off error propagation is not significant. (Author) [pt

  15. Setting value optimization method in integration for relay protection based on improved quantum particle swarm optimization algorithm

    Science.gov (United States)

    Yang, Guo Sheng; Wang, Xiao Yang; Li, Xue Dong

    2018-03-01

    With the establishment of the integrated model of relay protection and the expanding scale of the power system, the global setting and optimization of relay protection is an extremely difficult task. This paper presents an application of an improved quantum particle swarm optimization algorithm to the global optimization of relay protection, taking inverse-time overcurrent protection as an example. Reliability, selectivity, speed of operation and flexibility of the relay protection are selected as the four requirements used to establish the optimization targets, and the protection setting values of the whole system are optimized. Finally, for an actual power system, the optimized setting value results of the proposed method are compared with those of the standard particle swarm algorithm. The results show that the improved quantum particle swarm optimization algorithm has strong search ability and good robustness, and it is suitable for optimizing setting values in the relay protection of the whole power system.
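
The paper's improved variant is not spelled out in the abstract, so as a hedged sketch, here is the textbook quantum-behaved PSO (QPSO) update it builds on, with a toy quadratic objective standing in for the four-requirement setting-value target (all numbers invented):

```python
import math
import random

def qpso_minimize(objective, bounds, n_particles=30, iters=200, seed=1):
    """Minimal quantum-behaved PSO (QPSO): each particle jumps around a
    local attractor with a step scaled by its distance to the mean best
    position. A sketch of the base algorithm, not the paper's variant."""
    rng = random.Random(seed)
    dim = len(bounds)
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    pbest = [x[:] for x in X]                    # personal best positions
    gbest = min(pbest, key=objective)            # global best position
    for t in range(iters):
        beta = 1.0 - 0.5 * t / iters             # contraction-expansion coefficient
        mbest = [sum(p[d] for p in pbest) / n_particles for d in range(dim)]
        for i, x in enumerate(X):
            for d in range(dim):
                phi = rng.random()
                p = phi * pbest[i][d] + (1 - phi) * gbest[d]   # local attractor
                u = 1.0 - rng.random()           # u in (0, 1], avoids log(1/0)
                step = beta * abs(mbest[d] - x[d]) * math.log(1 / u)
                x[d] = p + step if rng.random() < 0.5 else p - step
                lo, hi = bounds[d]
                x[d] = max(lo, min(hi, x[d]))    # keep setting value feasible
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
        gbest = min(pbest, key=objective)
    return gbest

# Toy stand-in for a protection-setting objective: deviation of two
# hypothetical setting values from an ideal point (0.8, 1.2).
best = qpso_minimize(lambda s: (s[0] - 0.8) ** 2 + (s[1] - 1.2) ** 2,
                     bounds=[(0, 2), (0, 2)])
```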

  16. SU-E-T-254: Optimization of GATE and PHITS Monte Carlo Code Parameters for Uniform Scanning Proton Beam Based On Simulation with FLUKA General-Purpose Code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, K [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Osaka (Japan); Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Takashina, M; Koizumi, M [Department of Medical Physics & Engineering, Osaka University Graduate School of Medicine, Osaka (Japan); Das, I; Moskvin, V [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN (United States)

    2014-06-01

    Purpose: Monte Carlo codes are becoming important tools for proton beam dosimetry. However, the relationships between the customizing parameters and the percentage depth dose (PDD) in the GATE and PHITS codes have not been reported. Here they are studied for PDD and proton range in comparison with the FLUKA code and experimental data. Methods: The beam delivery system of the Indiana University Health Proton Therapy Center was modeled for the uniform scanning beam in FLUKA and transferred identically into GATE and PHITS. This computational model was built from the blueprint and validated with the commissioning data. The three parameters evaluated are the maximum step size, the cut-off energy, and the physics and transport models. The dependence of the PDDs on the customizing parameters was compared with the published results of previous studies. Results: The optimal parameters for the simulation of the whole beam delivery system were defined by referring to the calculation results obtained with each parameter. Although the PDDs from FLUKA and the experimental data show good agreement, those of GATE and PHITS obtained with our optimal parameters show a minor discrepancy. The measured proton range R90 was 269.37 mm, compared to calculated ranges of 269.63 mm, 268.96 mm, and 270.85 mm with FLUKA, GATE and PHITS, respectively. Conclusion: We evaluated the dependence of the PDDs obtained with the GATE and PHITS general-purpose Monte Carlo codes on the customizing parameters, using the whole computational model of the treatment nozzle. The optimal parameters for the simulation were then defined by referring to the calculation results. The physics models, particle transport mechanics and the different geometry-based descriptions need accurate customization in the three simulation codes to agree with experimental data for artifact-free Monte Carlo simulation. This study was supported by Grants-in Aid for Cancer Research (H22-3rd Term Cancer Control-General-043) from the Ministry of Health

  17. Integrated Tiger Series of electron/photon Monte Carlo transport codes: a user's guide for use on IBM mainframes

    International Nuclear Information System (INIS)

    Kirk, B.L.

    1985-12-01

    The ITS (Integrated Tiger Series) Monte Carlo code package developed at Sandia National Laboratories and distributed as CCC-467/ITS by the Radiation Shielding Information Center (RSIC) at Oak Ridge National Laboratory (ORNL) consists of eight codes - the standard codes, TIGER, CYLTRAN, ACCEPT; the P-codes, TIGERP, CYLTRANP, ACCEPTP; and the M-codes ACCEPTM, CYLTRANM. The codes have been adapted to run on the IBM 3081, VAX 11/780, CDC-7600, and Cray 1 with the use of the update emulator UPEML. This manual should serve as a guide to a user running the codes on IBM computers having 370 architecture. The cases listed were tested on the IBM 3033, under the MVS operating system using the VS Fortran Level 1.3.1 compiler

  18. Optimization of Coding of AR Sources for Transmission Across Channels with Loss

    DEFF Research Database (Denmark)

    Arildsen, Thomas

    Source coding concerns the representation of information in a source signal using as few bits as possible. In the case of lossy source coding, it is the encoding of a source signal using the fewest possible bits at a given distortion or, at the lowest possible distortion given a specified bit rate. ... Channel coding is usually applied in combination with source coding to ensure reliable transmission of the (source coded) information at the maximal rate across a channel given the properties of this channel. In this thesis, we consider the coding of auto-regressive (AR) sources, which are sources that can ... compared to the case where the encoder is unaware of channel loss. We finally provide an extensive overview of cross-layer communication issues which are important to consider due to the fact that the proposed algorithm interacts with the source coding and exploits channel-related information typically ...

  19. Combining independent de novo assemblies optimizes the coding transcriptome for nonconventional model eukaryotic organisms.

    Science.gov (United States)

    Cerveau, Nicolas; Jackson, Daniel J

    2016-12-09

    Next-generation sequencing (NGS) technologies are arguably the most revolutionary technical development to join the list of tools available to molecular biologists since PCR. For researchers working with nonconventional model organisms, one major problem with the currently dominant NGS platform (Illumina) stems from the obligatory fragmentation of nucleic acid material that occurs prior to sequencing during library preparation. This step creates a significant bioinformatic challenge for accurate de novo assembly of novel transcriptome data. This challenge becomes apparent when a variety of modern assembly tools (of which there is no shortage) are applied to the same raw NGS dataset: with the same assembly parameters these tools can generate markedly different assembly outputs. In this study we present an approach that generates an optimized consensus de novo assembly of eukaryotic coding transcriptomes. This approach does not represent a new assembler; rather, it combines the outputs of a variety of established assembly packages and removes redundancy via a series of clustering steps. We test and validate our approach using Illumina datasets from six phylogenetically diverse eukaryotes (three metazoans, two plants and a yeast) and two simulated datasets derived from metazoan reference genome annotations. All of these datasets were assembled using three currently popular assembly packages (CLC, Trinity and IDBA-tran). In addition, we experimentally demonstrate that transcripts unique to one particular assembly package are likely to be bioinformatic artefacts. For all eight datasets our pipeline generates more concise transcriptomes that in fact possess more unique annotatable protein domains than any of the three individual assemblers we employed. Another measure of assembly completeness (using the purpose built BUSCO databases) also confirmed that our approach yields more information. Our approach yields coding transcriptome assemblies that are more likely to be
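
The consensus idea, pooling several assemblers' outputs and collapsing redundancy, can be caricatured in a few lines. This is only a toy containment-based deduplication (real pipelines cluster with dedicated tools such as CD-HIT-EST at a similarity threshold), and the sequences below are invented:

```python
def merge_assemblies(*assemblies):
    """Combine contig sets from several assemblers, then drop any contig
    contained in a longer kept contig (or whose reverse complement is).
    A toy stand-in for the clustering steps of a real consensus pipeline."""
    comp = str.maketrans("ACGT", "TGCA")

    def revcomp(seq):
        return seq.translate(comp)[::-1]

    # Longest first, so shorter redundant fragments are tested against
    # the contigs that could contain them.
    contigs = sorted({s for asm in assemblies for s in asm}, key=len, reverse=True)
    kept = []
    for seq in contigs:
        if not any(seq in k or revcomp(seq) in k for k in kept):
            kept.append(seq)
    return kept

# Three hypothetical "assemblers" reporting overlapping views of the
# same transcript plus one unrelated contig.
merged = merge_assemblies(
    ["ATGGCCATTGTAATGGGCC", "TTTTCCCC"],
    ["GCCATTGTAATG"],                      # fragment of the first contig
    ["GGCCCATTACAATGGCCAT"],               # reverse complement of the first
)
```

Only one copy of the transcript survives, together with the unrelated contig.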

  20. Code Optimization, Frozen Glassy Phase and Improved Decoding Algorithms for Low-Density Parity-Check Codes

    International Nuclear Information System (INIS)

    Huang Hai-Ping

    2015-01-01

    The statistical physics properties of low-density parity-check codes for the binary symmetric channel are investigated as a spin glass problem with multi-spin interactions and quenched random fields by the cavity method. By evaluating the entropy function at the Nishimori temperature, we find that irregular constructions with heterogeneous degree distribution of check (bit) nodes have higher decoding thresholds compared to regular counterparts with homogeneous degree distribution. We also show that the instability of the mean-field calculation takes place only after the entropy crisis, suggesting the presence of a frozen glassy phase at low temperatures. When no prior knowledge of channel noise is assumed (searching for the ground state), we find that a reinforced strategy on normal belief propagation will boost the decoding threshold to a higher value than the normal belief propagation. This value is close to the dynamical transition where all local search heuristics fail to identify the true message (codeword or the ferromagnetic state). After the dynamical transition, the number of metastable states with larger energy density (than the ferromagnetic state) becomes exponentially numerous. When the noise level of the transmission channel approaches the static transition point, there starts to exist exponentially numerous codewords sharing the identical ferromagnetic energy. (condensed matter: electronic structure, electrical, magnetic, and optical properties)
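
For readers unfamiliar with LDPC decoding, the simplest decoder is worth sketching. The following serial bit-flipping decoder is one of the "local search heuristics" mentioned above, far weaker than the belief propagation and cavity-method machinery of the paper, demonstrated here on the (7,4) Hamming code as a small stand-in parity-check matrix:

```python
def bitflip_decode(H, word, max_iters=20):
    """Serial bit-flipping decoding: repeatedly flip the single bit that
    participates in the most unsatisfied parity checks. A simple local
    search heuristic, not belief propagation."""
    word = list(word)
    n_checks, n_bits = len(H), len(H[0])
    for _ in range(max_iters):
        syndrome = [sum(H[c][b] * word[b] for b in range(n_bits)) % 2
                    for c in range(n_checks)]
        if not any(syndrome):
            return word                       # all parity checks satisfied
        # For each bit, count the unsatisfied checks it belongs to.
        counts = [sum(H[c][b] for c in range(n_checks) if syndrome[c])
                  for b in range(n_bits)]
        word[counts.index(max(counts))] ^= 1  # flip the worst offender
    return word                               # may not have converged

# Parity-check matrix of the (7,4) Hamming code (a tiny stand-in for a
# sparse LDPC matrix).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
received = [0, 0, 0, 0, 0, 0, 1]   # all-zero codeword with one channel flip
decoded = bitflip_decode(H, received)
```

For single flips of the all-zero codeword this heuristic recovers the codeword; near the thresholds discussed above, such local search fails while belief propagation still succeeds.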

  1. Content Adaptive Lagrange Multiplier Selection for Rate-Distortion Optimization in 3-D Wavelet-Based Scalable Video Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2018-03-01

    Rate-distortion optimization (RDO) plays an essential role in substantially enhancing the coding efficiency. Currently, rate-distortion optimized mode decision is widely used in scalable video coding (SVC). Among all the possible coding modes, it aims to select the one which has the best trade-off between bitrate and compression distortion. Specifically, this trade-off is tuned through the choice of the Lagrange multiplier. Despite the prevalence of the conventional method for Lagrange multiplier selection in hybrid video coding, its underlying formulation is not applicable to 3-D wavelet-based SVC, where explicit values of the quantization step are not available, and it takes no account of the content features of the input signal. In this paper, an efficient content adaptive Lagrange multiplier selection algorithm is proposed in the context of RDO for 3-D wavelet-based SVC targeting quality scalability. Our contributions are two-fold. First, we introduce a novel weighting method, which takes account of the mutual information, gradient per pixel, and texture homogeneity to measure the temporal subband characteristics after applying the motion-compensated temporal filtering (MCTF) technique. Second, based on the proposed subband weighting factor model, we derive the optimal Lagrange multiplier. Experimental results demonstrate that the proposed algorithm enables more satisfactory video quality with negligible additional computational complexity.
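
The role of the Lagrange multiplier in RDO mode decision can be shown in miniature: the encoder evaluates each candidate mode's distortion D and rate R and keeps the one minimising J = D + λR. The modes and numbers below are purely illustrative, not taken from any codec or from the paper:

```python
def rd_mode_decision(candidates, lam):
    """Pick the coding mode minimising the Lagrangian cost J = D + lam * R.
    `candidates` maps mode name -> (distortion, rate_in_bits)."""
    costs = {mode: d + lam * r for mode, (d, r) in candidates.items()}
    return min(costs, key=costs.get)

# Hypothetical (distortion, rate) measurements for three modes.
modes = {
    "intra": (10.0, 400),   # low distortion, expensive in bits
    "inter": (18.0, 120),
    "skip":  (55.0, 2),     # nearly free, but heavily distorted
}
low_lambda_choice  = rd_mode_decision(modes, lam=0.001)  # bits are cheap
high_lambda_choice = rd_mode_decision(modes, lam=1.0)    # bits are costly
```

A small λ favours the low-distortion mode, a large λ the low-rate mode, which is exactly the trade-off the paper's content-adaptive selection tunes per subband.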

  2. Development of Geometry Optimization Methodology with In-house CFD code, and Challenge in Applying to Fuel Assembly

    International Nuclear Information System (INIS)

    Jeong, J. H.; Lee, K. L.

    2016-01-01

    The wire spacer has important roles: to avoid collisions between adjacent rods, to mitigate vortex-induced vibration, and to enhance convective heat transfer through wire-spacer-induced secondary flow. Many experimental and numerical works have been conducted to understand the thermal-hydraulics of wire-wrapped fuel bundles. Recently, the enormous growth in computing capability has allowed three-dimensional simulation of the thermal-hydraulics of wire-wrapped fuel bundles. In this study, a geometry optimization methodology using a RANS-based in-house CFD (Computational Fluid Dynamics) code has been successfully developed under air conditions. In order to apply the developed methodology to a fuel assembly, a GGI (General Grid Interface) function was developed for the in-house code, matching that of CFX. Furthermore, three-dimensional flow fields calculated with the in-house code are compared with those calculated with the general-purpose commercial CFD solver CFX. Even though both analyses were conducted on the same computational meshes, numerical error due to the GGI function occurred locally only in the CFX solver, around the rod surface and the boundary region between the inner and outer fluid regions.

  3. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    Directory of Open Access Journals (Sweden)

    Jensen Søren Holdt

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of audio signals. In this paper, we present a new perceptual model that predicts masked thresholds for sinusoidal distortions. The model relies on signal detection theory and incorporates more recent insights about spectral and temporal integration in auditory masking. As a consequence, the model is able to predict the distortion detectability. In fact, the distortion detectability defines a (perceptually relevant) norm on the underlying signal space which is beneficial for optimisation algorithms such as rate-distortion optimisation or linear predictive coding. We evaluate the merits of the model by combining it with a sinusoidal extraction method and compare the results with those obtained with the ISO MPEG-1 Layer I-II recommended model. Listening tests show a clear preference for the new model. More specifically, the model presented here leads to a reduction of more than 20% in terms of number of sinusoids needed to represent signals at a given quality level.
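
The claim that distortion detectability defines a perceptually relevant norm can be sketched as a masking-weighted squared error over spectral bins; the weights and magnitudes below are invented for illustration and are not the paper's calibrated model:

```python
def distortion_detectability(signal, coded, weights):
    """Perceptually weighted squared-error 'norm': errors in bins with
    strong masking (low weight) contribute little to detectability.
    Weights here are illustrative, not a calibrated masking model."""
    return sum(w * (s - c) ** 2 for w, s, c in zip(weights, signal, coded))

signal  = [1.00, 0.80, 0.30, 0.10]   # spectral magnitudes of the original
coded_a = [1.00, 0.70, 0.30, 0.10]   # error placed in a strongly masked bin
coded_b = [1.00, 0.80, 0.30, 0.20]   # same-size error in an unmasked bin
weights = [0.2, 0.2, 1.0, 1.0]       # low weight = strong masking

d_a = distortion_detectability(signal, coded_a, weights)
d_b = distortion_detectability(signal, coded_b, weights)
```

Two numerically equal errors get very different detectability, which is what lets a rate-distortion optimiser spend its bits where errors would be heard.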

  4. Development of integrated computer code for analysis of risk reduction strategy

    International Nuclear Information System (INIS)

    Kim, Dong Ha; Kim, See Darl; Kim, Hee Dong

    2002-05-01

    The development of the MIDAS/TH integrated severe accident code was performed in three main areas: 1) addition of new models derived from the national experimental programs and models for APR-1400 Korea next generation reactor, 2) improvement of the existing models using the recently available results, and 3) code restructuring for user friendliness. The unique MIDAS/TH models include: 1) a kinetics module for core power calculation during ATWS, 2) a gap cooling module between the molten corium pool and the reactor vessel wall, 3) a penetration tube failure module, 4) a PAR analysis module, and 5) a look-up table for the pressure and dynamic load during steam explosion. The improved models include: 1) a debris dispersal module considering the cavity geometry during DCH, 2) hydrogen burn and deflagration-to-detonation transition criteria, 3) a peak pressure estimation module for hydrogen detonation, and 4) the heat transfer module between the molten corium pool and the overlying water. The sparger and the ex-vessel heat transfer module were assessed. To enhance user friendliness, code restructuring was performed. In addition, a sample of severe accident analysis results was organized under the preliminary database structure

  5. Towards an Integrated QR Code Biosensor: Light-Driven Sample Acquisition and Bacterial Cellulose Paper Substrate.

    Science.gov (United States)

    Yuan, Mingquan; Jiang, Qisheng; Liu, Keng-Ku; Singamaneni, Srikanth; Chakrabartty, Shantanu

    2018-06-01

    This paper addresses two key challenges toward an integrated forward error-correcting biosensor based on our previously reported self-assembled quick-response (QR) code. The first challenge involves the choice of the paper substrate for printing and self-assembling the QR code. We have compared four different substrates that includes regular printing paper, Whatman filter paper, nitrocellulose membrane and lab synthesized bacterial cellulose. We report that out of the four substrates bacterial cellulose outperforms the others in terms of probe (gold nanorods) and ink retention capability. The second challenge involves remote activation of the analyte sampling and the QR code self-assembly process. In this paper, we use light as a trigger signal and a graphite layer as a light-absorbing material. The resulting change in temperature due to infrared absorption leads to a temperature gradient that then exerts a diffusive force driving the analyte toward the regions of self-assembly. The working principle has been verified in this paper using assembled biosensor prototypes where we demonstrate higher sample flow rate due to light induced thermal gradients.

  6. Psychometric properties of the Motivational Interviewing Treatment Integrity coding system 4.2 with jail inmates.

    Science.gov (United States)

    Owens, Mandy D; Rowell, Lauren N; Moyers, Theresa

    2017-10-01

    Motivational Interviewing (MI) is an evidence-based approach shown to be helpful for a variety of behaviors across many populations. Treatment fidelity is an important tool for understanding how and with whom MI may be most helpful. The Motivational Interviewing Treatment Integrity coding system was recently updated to incorporate new developments in the research and theory of MI, including the relational and technical hypotheses of MI (MITI 4.2). To date, no studies have examined the MITI 4.2 with forensic populations. In this project, twenty-two brief MI interventions with jail inmates were evaluated to test the reliability of the MITI 4.2. Validity of the instrument was explored using regression models to examine the associations between global scores (Empathy, Partnership, Cultivating Change Talk and Softening Sustain Talk) and outcomes. Reliability of this coding system with these data was strong. We found that therapists had lower ratings of Empathy with participants who had more extensive criminal histories. Both Relational and Technical global scores were associated with criminal histories as well as post-intervention ratings of motivation to decrease drug use. Findings indicate that the MITI 4.2 was reliable for coding sessions with jail inmates. Additionally, results provided information related to the relational and technical hypotheses of MI. Future studies can use the MITI 4.2 to better understand the mechanisms behind how MI works with this high-risk group. Published by Elsevier Ltd.

  7. Naval application of battery optimized reactor integral system

    International Nuclear Information System (INIS)

    Kim, N. H.; Kim, T. W.; Son, H. M.; Suh, K. Y.

    2007-01-01

    Past civilian nuclear ships NS Savannah (80 MWth), Otto Hahn (38 MWth) and Mutsu (36 MWth) experienced stable operations under various sea conditions, proving that the reactors were stable and suitable as ship power sources. Russian nuclear icebreakers such as the Lenin (2 x 90 MWth) and Arktika (2 x 150 MWth) showed stable operation under severe conditions during navigation on the Arctic Sea. These reactor systems, however, should be made even more efficient, compact, safe and long-lived, because support from land may not be available at sea. In order to meet these requirements, a compact, simple, safe and innovative integral system named the Naval Application Vessel Integral System (NAVIS) is being designed with such novel concepts as a primary liquid metal coolant, a secondary supercritical carbon dioxide (SCO2) coolant, an emergency reactor cooling system, a safety containment and so on. NAVIS is powered by the Battery Optimized Reactor Integral System (BORIS), an ultra-small, ultra-long-life, versatile-purpose, fast-spectrum reactor being developed for multi-purpose applications such as naval power, electric power generation in remote areas, seawater desalination, and district heating. NAVIS aims to satisfy the special environment at sea with BORIS using lead (Pb) coolant in the primary system. NAVIS improves economic efficiency by resorting to the SCO2 Brayton cycle for the secondary system. BORIS is operated by natural circulation of Pb without needing pumps. The reactor power is autonomously controlled by load-following operation without an active reactivity control system, whereas a B4C-based shutdown control rod is provided for emergency conditions. SCO2 promises a high power conversion efficiency in the recompression Brayton cycle due to its excellent compressibility, reducing the compression work at the bottom of the cycle, and to a higher density than helium or steam, decreasing the component size. Therefore, the SCO2 Brayton

  8. Duration estimates within a modality are integrated sub-optimally

    Directory of Open Access Journals (Sweden)

    Ming Bo Cai

    2015-08-01

    Perceived duration can be influenced by various properties of sensory stimuli. For example, visual stimuli of higher temporal frequency are perceived to last longer than those of lower temporal frequency. How does the brain form a representation of duration when each of two simultaneously presented stimuli influences perceived duration in a different way? To answer this question, we investigated the perceived duration of a pair of dynamic visual stimuli of different temporal frequencies in comparison to that of a single visual stimulus of either low or high temporal frequency. We found that the duration representation of simultaneously occurring visual stimuli is best described by weighting the estimates of duration based on each individual stimulus. However, the weighting performance deviates from the prediction of statistically optimal integration. In addition, we provide a Bayesian account to explain a difference in the apparent sensitivity of the psychometric curves introduced by the order in which the two stimuli are displayed in a two-alternative forced-choice task.
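
For contrast, the "statistically optimal integration" benchmark that the observed weighting deviates from is standard inverse-variance weighting of independent Gaussian estimates. A minimal sketch with invented numbers:

```python
def optimal_combination(estimates):
    """Statistically optimal (maximum-likelihood) fusion of independent
    Gaussian estimates: each (mean, variance) pair is weighted by its
    inverse variance. Numbers used below are illustrative only."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    mean = sum(w * mu for w, (mu, _) in zip(weights, estimates)) / total
    combined_var = 1.0 / total   # never larger than the smallest input variance
    return mean, combined_var

# Two hypothetical duration estimates (seconds): a reliable low-frequency
# cue and a noisier high-frequency cue.
mean, var = optimal_combination([(1.0, 0.04), (1.3, 0.12)])
```

The fused estimate lands closer to the more reliable cue, and its variance (0.03) is smaller than either input's, which is the signature of optimal integration that the paper's subjects fail to match.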

  9. Embedded Systems Hardware Integration and Code Development for Maraia Capsule and E-MIST

    Science.gov (United States)

    Carretero, Emmanuel S.

    2015-01-01

    The cost of sending large spacecraft to orbit makes them undesirable for carrying out smaller scientific missions. Small spacecraft are more economical and can be tailored for missions where specific tasks need to be carried out; the Maraia capsule is such a spacecraft. Maraia will allow samples of experiments conducted on the International Space Station to be returned to Earth. The use of balloons to conduct experiments at the edge of space is a practical approach to reducing the large expense of using rockets. E-MIST is a payload designed to fly on a high altitude balloon. It can maintain science experiments in a controlled manner at the edge of space. The work covered here entails the integration of hardware onto each of the mentioned systems and the code associated with such work. In particular, the resistance temperature detector, pressure transducers, cameras, and thrusters for Maraia are discussed. The integration of the resistance temperature detectors and motor controllers into E-MIST is described. Several issues associated with sensor accuracy, code lock-up, and in-flight resets are mentioned. The solutions and proposed solutions to these issues are explained.

  10. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2001-01-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite element (FE) basis and execution time, for accurate results, as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  11. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)

    2001-07-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve fast-running performance. First, we present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods are reviewed. We then present an optimum core representation with respect to mesh size, choice of finite-element (FE) basis and execution time for accurate results, as well as the multi-1-D thermal-hydraulics (T/H) model developed to take 3-D effects into account when updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident is used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  12. Development of Integrated Code for Risk Assessment (INCORIA) for Physical Protection System

    International Nuclear Information System (INIS)

    Jang, Sung Soon; Seo, Hyung Min; Yoo, Ho Sik

    2010-01-01

    A physical protection system (PPS) integrates people, procedures and equipment for the protection of assets or facilities against theft, sabotage or other malevolent human attacks. Among critical facilities, nuclear facilities and nuclear weapon sites require the highest level of PPS. After the September 11, 2001 terrorist attacks, international communities, including the IAEA, made substantial efforts to protect nuclear material and nuclear facilities. The international trend in nuclear security is toward the concept of risk assessment. This concept was first devised in nuclear safety, where risk combines the frequency of failure with the possible consequence. Nuclear security likewise considers risk as the frequency of a threat action, the vulnerability, and the consequences: we should protect more when a credible threat exists and the possible radiological consequence is high. Although several risk assessment methods exist for nuclear security, applying them requires software tools because of the large amount of calculation involved, and it is hard to find tools that cover every phase of risk assessment. Several codes exist for parts of it: SAVI is used for the vulnerability of a PPS, and a vital area identification code is used for consequence analysis. We are developing the Integrated Code for Risk Assessment (INCORIA) to apply risk assessment methods to nuclear facilities. INCORIA evaluates PP-KINAC measures and provides generation tools for threat scenarios. PP-KINAC is a set of risk assessment measures for physical protection systems developed by Hosik Yoo and is easy to apply. The threat scenario tool generates threat scenarios, which serve as one of the inputs to the PP-KINAC measures

  13. Integrating environmental goals into urban redevelopment schemes: lessons from the Code River, Yogyakarta, Indonesia.

    Science.gov (United States)

    Setiawan, B B

    2002-01-01

    The settlement along the bank of the Code River in Yogyakarta, Indonesia provides housing for a large share of the city's poor. Its strategic location, and the fact that most of the urban poor do not have access to land, attracts people to "illegally" settle along the bank of the river. This has negative consequences for the environment, particularly the accumulation of domestic waste along the river and the annual flooding in the rainy season. While the public controversies regarding the existence of the settlement along the Code River remained unresolved, at the end of the 1980s a group of architects, academics and community members proposed the idea of constructing a dike along the river as part of a broader settlement improvement program. From 1991 to 1998, thousands of local people mobilized their resources and constructed 6,000 metres of riverside dike along the Code River. The construction of the dike became an important "stimulant" that generated not only settlement improvement but also better treatment of the river water. As all housing units located along the river now face it, the river itself is considered the "front-yard". Before the dike was constructed, the inhabitants used to treat the river as the "backyard" and simply threw waste into it. They now genuinely want a cleaner river, since it is an important part of their settlement. The settlement along the Code River presents the complex range of persistent problems associated with informal settlements in Indonesia: how to provide more affordable and adequate housing for the poor while at the same time improving the water quality of the river. The project is a good case showing that, through a mutual partnership among stakeholders, it is possible to integrate environmental goals into urban redevelopment schemes.

  14. Topology and boundary shape optimization as an integrated design tool

    Science.gov (United States)

    Bendsoe, Martin Philip; Rodrigues, Helder Carrico

    1990-01-01

    The optimal topology of a two-dimensional linear elastic body can be computed by regarding the body as a domain of the plane with a high density of material. Such an optimal topology can then be used as the basis for a shape optimization method that computes the optimal form of the boundary curves of the body. This results in an efficient and reliable design tool, which can be implemented via common FEM mesh generators and CAD-type input-output facilities.

  15. A Distributed Flow Rate Control Algorithm for Networked Agent System with Multiple Coding Rates to Optimize Multimedia Data Transmission

    Directory of Open Access Journals (Sweden)

    Shuai Zeng

    2013-01-01

    With the development of wireless technologies, mobile communication is applied ever more widely across the various walks of life. The social network of both fixed and mobile users can be seen as a networked agent system. At present, many kinds of devices and access-network technologies are in use, so different users in this networked agent system may need multimedia data at different coding rates to meet their heterogeneous demands. This paper proposes a distributed flow rate control algorithm to optimize multimedia data transmission in a networked agent system where various coding rates coexist. In the proposed algorithm, the transmission paths and upload bandwidth for data at different coding rates between the source node and the fixed and mobile nodes are appropriately arranged and controlled. On the one hand, the algorithm provides user nodes with data at differentiated coding rates and the corresponding flow rates; on the other hand, it networks the different coding-rate data and user nodes, which realizes the sharing of upload bandwidth among user nodes that require data at different coding rates. The study conducts mathematical modeling of the proposed algorithm and compares a system that adopts it with an existing system through simulation experiments and mathematical analysis. The results show that the system adopting the proposed algorithm achieves higher upload-bandwidth utilization at the user nodes and lower upload-bandwidth consumption at the source node.
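    The upload-bandwidth sharing idea can be sketched with a toy swarm model; the function and the model below are illustrative assumptions for this summary, not the paper's actual algorithm.

```python
# Hypothetical sketch: user nodes that request the same coding rate pool
# their upload bandwidth, and the source covers any remaining deficit.
def source_upload_needed(coding_rate_kbps, node_uploads_kbps):
    """Nodes in one swarm each need the full stream at `coding_rate_kbps`;
    their pooled upload capacity offsets what the source must send."""
    n = len(node_uploads_kbps)
    demand = n * coding_rate_kbps      # total download demand of the swarm
    pooled = sum(node_uploads_kbps)    # upload the nodes can contribute
    # The source must always seed at least one full stream.
    return max(coding_rate_kbps, demand - pooled)
```

    For example, three nodes at 500 kbps each contributing 200 kbps of upload leave the source sending 900 kbps, whereas a swarm with abundant upload only needs the single seed stream from the source.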

  16. Ada Integrated Environment III Computer Program Development Specification. Volume III. Ada Optimizing Compiler.

    Science.gov (United States)

    1981-12-01

    This volume's appendices cover the compiler's map outputs, each written to a library file named after the compilation unit: the Symbol Map (library-file.library-unit(.subunit).SYMAP), the Statement Map (library-file.library-unit(.subunit).SMAP) and the Type Map (library-file.library-unit(.subunit).TMAP), all produced by the code generator. Section A.3.5 describes the PUNIT command, and Example A-3 gives a compiler command stream for the code generator (Texas Instruments, Ada Optimizing Compiler).

  17. A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications

    Science.gov (United States)

    DuMonthier, Jeffrey; Suarez, George

    2013-01-01

    Application Specific Integrated Circuit (ASIC) design for space applications involves the combined challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a small number of corners with a very limited set of simultaneously variable parameters in order to make the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way which facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for the simulation test benches that determine performance metrics and for cost-function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design. The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the
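    The automated corner-sweep idea can be sketched as follows; `simulate` is a stand-in surrogate for a circuit simulation, and all parameter names and numbers are illustrative assumptions, not the MSAG framework's API.

```python
import itertools

# Hypothetical sketch of the corner-sweep idea: evaluate a cost function for
# every combination of design parameters across all process/environment
# corners, keep only configurations that meet constraints at *every* corner,
# then pick the lowest worst-case cost.
def simulate(bias_ua, width_um, corner):
    # Toy surrogate model: (gain, power) depend on parameters and corner.
    speed = {"slow": 0.8, "typ": 1.0, "fast": 1.2}[corner]
    gain = 20.0 * width_um * speed / (1.0 + 0.01 * bias_ua)
    power = bias_ua * 1.8e-6 * speed
    return gain, power

def optimize(bias_grid, width_grid, corners, min_gain):
    best = None
    for bias, width in itertools.product(bias_grid, width_grid):
        results = [simulate(bias, width, c) for c in corners]
        if all(g >= min_gain for g, _ in results):    # constraint at all corners
            worst_power = max(p for _, p in results)  # worst-case cost
            if best is None or worst_power < best[0]:
                best = (worst_power, bias, width)
    return best
```

    The same sweep-and-filter structure extends to any number of parameters and metrics; only the surrogate and the cost function change per circuit, mirroring the per-design customization the abstract describes.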

  18. Optimization of an Electromagnetics Code with Multicore Wavefront Diamond Blocking and Multi-dimensional Intra-Tile Parallelization

    KAUST Repository

    Malas, Tareq M.

    2016-07-21

    Understanding and optimizing the properties of solar cells is becoming a key issue in the search for alternatives to nuclear and fossil energy sources. A theoretical analysis via numerical simulations involves solving Maxwell's Equations in discretized form and typically requires substantial computing effort. We start from a hybrid-parallel (MPI+OpenMP) production code that implements the Time Harmonic Inverse Iteration Method (THIIM) with Finite-Difference Frequency Domain (FDFD) discretization. Although this algorithm has the characteristics of a strongly bandwidth-bound stencil update scheme, it is significantly different from the popular stencil types that have been exhaustively studied in the high performance computing literature to date. We apply a recently developed stencil optimization technique, multicore wavefront diamond tiling with multi-dimensional cache block sharing, and describe in detail the peculiarities that need to be considered due to the special stencil structure. Concurrency in updating the components of the electric and magnetic fields provides an additional level of parallelism. The dependence of the cache size requirement of the optimized code on the blocking parameters is modeled accurately, and an auto-tuner searches for optimal configurations in the remaining parameter space. We were able to completely decouple the execution from the memory bandwidth bottleneck, accelerating the implementation by a factor of three to four compared to an optimal implementation with pure spatial blocking on an 18-core Intel Haswell CPU.
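    For contrast with the wavefront diamond tiling described above, here is a minimal sketch of the pure spatial (cache) blocking baseline it is compared against, assuming a simple 2-D Jacobi-style stencil; the actual THIIM/FDFD stencil is considerably more involved.

```python
import numpy as np

# Hypothetical illustration of pure spatial blocking: sweep the grid in
# tiles so the working set of each tile stays cache-resident.
def jacobi_blocked(a, iters, tile=64):
    b = a.copy()
    n, m = a.shape
    for _ in range(iters):
        for i0 in range(1, n - 1, tile):          # tile the row loop
            for j0 in range(1, m - 1, tile):      # tile the column loop
                i1, j1 = min(i0 + tile, n - 1), min(j0 + tile, m - 1)
                b[i0:i1, j0:j1] = 0.25 * (
                    a[i0-1:i1-1, j0:j1] + a[i0+1:i1+1, j0:j1] +
                    a[i0:i1, j0-1:j1-1] + a[i0:i1, j0+1:j1+1])
        a, b = b, a
    return a
```

    Spatial blocking alone still streams the whole grid through memory once per sweep; temporal tiling schemes like wavefront diamond blocking reuse each tile across several sweeps, which is how the paper escapes the bandwidth bottleneck.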

  19. User's manual for the BNW-II optimization code for dry/wet-cooled power plants

    Energy Technology Data Exchange (ETDEWEB)

    Braun, D.J.; Bamberger, J.A.; Braun, D.J.; Faletti, D.W.; Wiles, L.E.

    1978-05-01

    The User's Manual describes how to operate BNW-II, a computer code developed by the Pacific Northwest Laboratory (PNL) as a part of its activities under the Department of Energy (DOE) Dry Cooling Enhancement Program. The computer program offers a comprehensive method of evaluating the cost savings potential of dry/wet-cooled heat rejection systems. Going beyond simple "figure-of-merit" cooling tower optimization, this method includes such items as the cost of annual replacement capacity, and the optimum split between plant scale-up and replacement capacity, as well as the purchase and operating costs of all major heat rejection components. Hence the BNW-II code is a useful tool for determining potential cost savings of new dry/wet surfaces, new piping, or other components as part of an optimized system for a dry/wet-cooled plant.
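    The kind of system-level trade-off such a code performs can be illustrated with a toy grid search over the wet/dry split; all cost coefficients below are invented for illustration and bear no relation to BNW-II's actual cost models.

```python
# Hypothetical sketch of a "figure-of-merit plus system effects" trade-off:
# grid-search the wet/dry heat-load split for the lowest total annual cost.
def total_cost(wet_fraction):
    capital = 120.0 - 60.0 * wet_fraction         # wet towers cost less to build
    water = 90.0 * wet_fraction                   # but consume water
    replacement = 25.0 * (1.0 - wet_fraction)**2  # dry capability loss on hot days
    return capital + water + replacement

def best_split(steps=101):
    splits = [i / (steps - 1) for i in range(steps)]
    return min(splits, key=total_cost)
```

    With these made-up coefficients the minimum falls at a 40% wet share; a real optimization would replace `total_cost` with component purchase, operating and replacement-capacity models like those in the manual.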

  20. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is operating in the most economical and reliable fashion to ensure affordability and continuity of electricity supply. This dissertation investigates challenges that affect the reliability and economics of electric grid operation: 1. congestion of transmission lines, 2. transmission line expansion, 3. large-scale wind energy integration, and 4. optimal placement of Phasor Measurement Units (PMUs) for the highest electric grid observability. Congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, any expansion of transmission line capacity must be evaluated against methods that ensure optimal electric grid operation: the expansion must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject; consequently, next-generation electric grids require novel methodologies for studying and managing congestion. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complexity and size of transmission systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. The traditional questions requiring answers are "where" to add, "how much transmission line capacity" to add, and "at which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated, as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding new transmission capacity will help the system to relieve the transmission system congestion, create
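    The PMU placement question mentioned above is commonly cast as a covering problem. Below is a minimal greedy sketch under the simplifying assumption that a PMU at a bus observes that bus and all adjacent buses; it illustrates the problem, not the dissertation's method.

```python
# Hypothetical greedy sketch of PMU placement for observability: repeatedly
# place a PMU where it newly observes the most buses until every bus in the
# grid is observable.
def greedy_pmu_placement(adjacency):
    observed, placed = set(), []
    buses = set(adjacency)
    while observed != buses:
        best = max(sorted(buses - set(placed)),
                   key=lambda b: len(({b} | set(adjacency[b])) - observed))
        placed.append(best)
        observed |= {best} | set(adjacency[best])
    return placed
```

    Exact formulations solve the same coverage condition as an integer linear program; the greedy version merely gives a quick feasible placement.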

  1. The Application of Social Characteristic and L1 Optimization in the Error Correction for Network Coding in Wireless Sensor Networks.

    Science.gov (United States)

    Zhang, Guangzhi; Cai, Shaobin; Xiong, Naixue

    2018-02-03

    One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding increases the network throughput of a WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this special error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme which successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix which can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, coordinate with each other and can correct propagated errors even when their fraction is exactly 100% in a WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.

  2. Integrated topology and shape optimization in structural design

    Science.gov (United States)

    Bremicker, M.; Chirehdast, M.; Kikuchi, N.; Papalambros, P. Y.

    1990-01-01

    Structural optimization procedures usually start from a given design topology and vary its proportions or boundary shapes to achieve optimality under various constraints. Two different categories of structural optimization are distinguished in the literature, namely sizing and shape optimization. A major restriction in both cases is that the design topology is considered fixed and given. Questions concerning the general layout of a design (such as whether a truss or a solid structure should be used) as well as more detailed topology features (e.g., the number and connectivities of bars in a truss or the number of holes in a solid) have to be resolved by design experience before formulating the structural optimization model. Design quality of an optimized structure still depends strongly on engineering intuition. This article presents a novel approach for initiating formal structural optimization at an earlier stage, where the design topology is rigorously generated in addition to selecting shape and size dimensions. A three-phase design process is discussed: an optimal initial topology is created by a homogenization method as a gray level image, which is then transformed to a realizable design using computer vision techniques; this design is then parameterized and treated in detail by sizing and shape optimization. A fully automated process is described for trusses. Optimization of two dimensional solid structures is also discussed. Several application-oriented examples illustrate the usefulness of the proposed methodology.
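    The phase-two step above (gray-level image to realizable design) can be illustrated by the simplest such computer-vision operation, thresholding the density field at a target volume fraction; this is a sketch of the idea, not the authors' procedure.

```python
# Hypothetical sketch: turn a homogenization method's gray-level density
# field into a realizable 0/1 material layout by keeping the densest cells
# until the target volume fraction is reached.
def threshold_topology(density, volume_fraction):
    cells = sorted(((d, idx) for idx, d in enumerate(density)), reverse=True)
    keep = {idx for _, idx in cells[:round(volume_fraction * len(density))]}
    return [1 if i in keep else 0 for i in range(len(density))]
```

    The resulting black-and-white layout is what phase three then parameterizes for detailed sizing and shape optimization.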

  3. Accuracy improvement of SPACE code using the optimization for CHF subroutine

    International Nuclear Information System (INIS)

    Yang, Chang Keun; Kim, Yo Han; Park, Jong Eun; Ha, Sang Jun

    2010-01-01

    Typically, a subroutine to calculate the critical heat flux (CHF) is included in codes for the safety analysis of nuclear power plants. The CHF subroutine evaluates the CHF phenomenon for arbitrary conditions (temperature, pressure, flow rate, power, etc.). When a safety analysis of a nuclear power plant is performed, the CHF is one of the most important parameters. However, the subroutines used in most codes, such as the Biasi method, predict values that deviate noticeably from experimental data, and most CHF subroutines can predict reliably only within their specified ranges of pressure, mass flow, void fraction, etc. Even when the most accurate CHF subroutine is used in a high-quality nuclear safety analysis code, there is no assurance that the values it predicts are acceptable outside its application area. To overcome this difficulty, various approaches to estimating the CHF were examined during the development of the SPACE code, and the Six Sigma technique was adopted for the examination, as described in this study. The objective of this study is to improve the CHF prediction accuracy of a nuclear power plant safety analysis code using a CHF database and the Six Sigma technique. The study concluded that the Six Sigma technique is useful for quantifying the deviation of predicted values from experimental data and that the CHF prediction method implemented in the SPACE code predicts well compared with other methods
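    A minimal sketch of the kind of deviation quantification described, assuming the Six Sigma-style metric is the mean and standard deviation of the predicted-to-measured ratio; this is an illustration, not the SPACE project's actual procedure.

```python
import statistics

# Hypothetical sketch: quantify how a CHF correlation's predictions deviate
# from a measurement database via the predicted/measured ratio statistics.
def prediction_stats(predicted, measured):
    ratios = [p / m for p, m in zip(predicted, measured)]
    return statistics.mean(ratios), statistics.stdev(ratios)

def within_k_sigma(predicted, measured, k=3):
    mean, sigma = prediction_stats(predicted, measured)
    return all(abs(p / m - mean) <= k * sigma
               for p, m in zip(predicted, measured))
```

    A mean ratio near 1.0 with a small sigma indicates an unbiased, tight correlation over the database; outliers beyond k sigma flag conditions where the subroutine leaves its validated range.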

  4. Integrated optimization of temperature, CO2, screen use and artificial lighting in greenhouse crops

    DEFF Research Database (Denmark)

    Aaslyng, J.M.; Körner, O.; Andreassen, A.U.

    2005-01-01

    A leaf photosynthesis model is suggested for integrated optimization of temperature, CO2, screen use and artificial lighting in greenhouse crops. Three different approaches to the optimization are presented. First, results from greenhouse experiments with model-based optimization are presented. Second, a model-based analysis of a commercial grower's production possibilities is shown. Third, results from a simulation of the effect of a new lighting strategy are demonstrated. The results demonstrate that it is possible to optimize plant production by using a model-based integrated optimization of temperature, CO2, and light in the greenhouse.

  5. Integrating prediction, provenance, and optimization into high energy workflows

    Energy Technology Data Exchange (ETDEWEB)

    Schram, M.; Bansal, V.; Friese, R. D.; Tallent, N. R.; Yin, J.; Barker, K. J.; Stephan, E.; Halappanavar, M.; Kerbyson, D. J.

    2017-10-01

    We propose a novel approach for efficient execution of workflows on distributed resources. The key components of this framework include: performance modeling to quantitatively predict workflow component behavior; optimization-based scheduling such as choosing an optimal subset of resources to meet demand and assignment of tasks to resources; distributed I/O optimizations such as prefetching; and provenance methods for collecting performance data. In preliminary results, these techniques improve throughput on a small Belle II workflow by 20%.
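    A minimal sketch of the optimization-based scheduling component, assuming predicted task costs from the performance model and a greedy earliest-finish-time assignment; this illustrates the idea, not the paper's scheduler.

```python
import heapq

# Hypothetical sketch: assign each task to the resource with the earliest
# predicted finish time, using per-resource speed predictions.
def schedule(task_costs, resource_speeds):
    # Heap of (predicted_finish_time, resource_index).
    heap = [(0.0, r) for r in range(len(resource_speeds))]
    heapq.heapify(heap)
    assignment = []
    for cost in sorted(task_costs, reverse=True):   # longest task first
        finish, r = heapq.heappop(heap)
        finish += cost / resource_speeds[r]
        assignment.append((cost, r))
        heapq.heappush(heap, (finish, r))
    makespan = max(f for f, _ in heap)
    return assignment, makespan
```

    In a real workflow system the `task_costs` would come from the performance-modeling component and the schedule would be re-evaluated as provenance data refines the predictions.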

  6. Research Integrity and Research Ethics in Professional Codes of Ethics: Survey of Terminology Used by Professional Organizations across Research Disciplines.

    Science.gov (United States)

    Komić, Dubravka; Marušić, Stjepan Ljudevit; Marušić, Ana

    2015-01-01

    Professional codes of ethics are social contracts among members of a professional group, which aim to instigate, encourage and nurture ethical behaviour and prevent professional misconduct, including research and publication. Despite the existence of codes of ethics, research misconduct remains a serious problem. A survey of codes of ethics from 795 professional organizations from the Illinois Institute of Technology's Codes of Ethics Collection showed that 182 of them (23%) used research integrity and research ethics terminology in their codes, with differences across disciplines: while the terminology was common in professional organizations in social sciences (82%), mental health (71%), sciences (61%), other organizations had no statements (construction trades, fraternal social organizations, real estate) or a few of them (management, media, engineering). A subsample of 158 professional organizations we judged to be directly involved in research significantly more often had statements on research integrity/ethics terminology than the whole sample: an average of 10.4% of organizations with a statement (95% CI = 10.4-23.5%) on any of the 27 research integrity/ethics terms compared to 3.3% (95% CI = 2.1-4.6%), respectively (Porganizations should define research integrity and research ethics issues in their ethics codes and collaborate within and across disciplines to adequately address responsible conduct of research and meet contemporary needs of their communities.

  7. Research Integrity and Research Ethics in Professional Codes of Ethics: Survey of Terminology Used by Professional Organizations across Research Disciplines

    Science.gov (United States)

    Komić, Dubravka; Marušić, Stjepan Ljudevit; Marušić, Ana

    2015-01-01

    Professional codes of ethics are social contracts among members of a professional group, which aim to instigate, encourage and nurture ethical behaviour and prevent professional misconduct, including research and publication. Despite the existence of codes of ethics, research misconduct remains a serious problem. A survey of codes of ethics from 795 professional organizations from the Illinois Institute of Technology's Codes of Ethics Collection showed that 182 of them (23%) used research integrity and research ethics terminology in their codes, with differences across disciplines: while the terminology was common in professional organizations in social sciences (82%), mental health (71%), sciences (61%), other organizations had no statements (construction trades, fraternal social organizations, real estate) or a few of them (management, media, engineering). A subsample of 158 professional organizations we judged to be directly involved in research significantly more often had statements on research integrity/ethics terminology than the whole sample: an average of 10.4% of organizations with a statement (95% CI = 10.4-23.5%) on any of the 27 research integrity/ethics terms compared to 3.3% (95% CI = 2.1-4.6%), respectively (Pethics concepts used prescriptive language in describing the standard of practice. Professional organizations should define research integrity and research ethics issues in their ethics codes and collaborate within and across disciplines to adequately address responsible conduct of research and meet contemporary needs of their communities. PMID:26192805

  8. Fission-product release modelling in the ASTEC integral code: the status of the ELSA module

    International Nuclear Information System (INIS)

    Plumecocq, W.; Kissane, M.P.; Manenc, H.; Giordano, P.

    2003-01-01

    Safety assessment of water-cooled nuclear reactors encompasses potential severe accidents in which, in particular, the release of fission products (FPs) and actinides into the reactor coolant system (RCS) is evaluated. The ELSA module is used in the ASTEC integral code to model all releases into the RCS. A wide variety of experiments is used for validation: small-scale CRL, ORNL and VERCORS tests; large-scale Phebus-FP tests; etc. Being a tool that covers both intact and degraded fuel states, ELSA is being improved by maximizing the use of information from degradation modelling. Short-term improvements will include some treatment of the initial FP release due to intergranular inventories and the implementation of models for the release of additional structural materials (Sn, Fe, etc.). (author)

  9. Conversion coefficients for individual monitoring calculated with integrated tiger series, ITS3, Monte Carlo code

    International Nuclear Information System (INIS)

    Devine, R.T.; Hsu, Hsiao-Hua

    1994-01-01

    The current basis for the conversion coefficients used to calibrate individual photon dosimeters in terms of dose equivalents is found in the series of papers by Grosswent. In his calculation, the collision kerma inside the phantom is determined by calculating the energy fluence at the point of interest and applying the mass energy-absorption coefficient; this approximates the local absorbed dose. Other Monte Carlo methods can be used to provide calculations of the conversion coefficients. Rogers has calculated fluence-to-dose-equivalent conversion factors with the Electron-Gamma Shower Version 3 (EGS3) Monte Carlo program and produced results similar to Grosswent's calculations. This paper reports on calculations using the Integrated TIGER Series Version 3 (ITS3) code to calculate the conversion coefficients in ICRU tissue and in PMMA. A complete description of the input parameters to the program is given, and a comparison with previous results is included
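    The collision-kerma approximation described above can be made concrete with a one-line calculation; the numeric values below are illustrative placeholders, not tabulated coefficients.

```python
# Hypothetical numeric sketch of the approximation described: local absorbed
# dose ~ collision kerma = fluence x photon energy x mass energy-absorption
# coefficient, with unit conversion from MeV/g to gray.
def collision_kerma_gray(fluence_per_cm2, energy_mev, mu_en_over_rho_cm2_per_g):
    MEV_PER_G_TO_GRAY = 1.602e-10   # 1 MeV/g = 1.602e-13 J / 1e-3 kg
    return (fluence_per_cm2 * energy_mev *
            mu_en_over_rho_cm2_per_g * MEV_PER_G_TO_GRAY)
```

    For example, a fluence of 1e9 photons/cm2 at 1 MeV with an illustrative mu_en/rho of 0.03 cm2/g gives a collision kerma of about 4.8 mGy.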

  10. Behaviors of impurity in ITER and DEMOs using BALDUR integrated predictive modeling code

    International Nuclear Information System (INIS)

    Onjun, Thawatchai; Buangam, Wannapa; Wisitsorasak, Apiwat

    2015-01-01

    The behaviors of impurities are investigated using self-consistent modeling with the 1.5D BALDUR integrated predictive modeling code, in which theory-based models are used for both the core and edge regions. In these simulations, a combination of the NCLASS neoclassical transport model and the Multi-Mode (MMM95) anomalous transport model is used to compute core transport. The boundary is taken to be at the top of the pedestal, where the pedestal values are described using a theory-based pedestal model. This pedestal temperature model is based on a combination of a magnetic- and flow-shear stabilization pedestal width scaling and an infinite-n ballooning pressure gradient model. The time evolution of the plasma current, temperature and density profiles is carried out for ITER and DEMO plasmas. As a result, impurity behaviors such as impurity accumulation and impurity transport can be investigated. (author)

  11. Counter-part Test and Code Analysis of the Integral Test Loop, SNUF

    International Nuclear Information System (INIS)

    Park, Goon Cherl; Bae, B. U.; Lee, K. H.; Cho, Y. J.

    2007-02-01

    The thermal-hydraulic phenomena of a Direct Vessel Injection (DVI) line Small-Break Loss-of-Coolant Accident (SBLOCA) in the pressurized water reactor APR1400 were investigated. The reduced-height and reduced-pressure integral test loop SNUF (Seoul National University Facility) was constructed by scaling down the prototype. To set appropriate test conditions for the SNUF experiments, an energy scaling methodology was suggested that scales the coolant mass inventory and thermal power for the reduced-pressure condition. From the MARS code analysis, the energy scaling methodology was confirmed to produce reasonable transients when the ideally scaled-down SNUF model was compared to the prototype model. In the experiments conducted under the conditions determined by the energy scaling methodology, the phenomenon of downcomer seal clearing played a dominant role in the decrease of the system pressure and the increase of the coolant level in the core. The experimental results were utilized to validate the calculation capability of MARS

  12. CLUB - a multigroup integral transport theory code for lattice calculations of PHWR cells

    International Nuclear Information System (INIS)

    Krishnani, P.D.

    1992-01-01

    The computer code CLUB has been developed to calculate lattice parameters as a function of burnup for a pressurised heavy water reactor (PHWR) lattice cell containing fuel in the form of a cluster. It solves the multigroup integral transport equation by a method based on a combination of the small-scale collision probability (CP) method and the large-scale interface current technique. The calculations are performed using the WIMS 69-group cross-section library or its condensed 27- or 28-group versions. The code can also compute Keff from a given geometrical buckling using multigroup diffusion theory in the fundamental mode. The first-order differential burnup equations can be solved by either the trapezoidal rule or a Runge-Kutta method. (author). 17 refs., 2 figs
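    The two burnup integrators named above can be compared on the simplest burnup-type equation, dN/dt = -lam*N, whose analytic solution is N0*exp(-lam*t); this is a generic illustration, not CLUB's implementation.

```python
import math

# Hypothetical sketch comparing the trapezoidal rule and classical RK4 on
# dN/dt = -lam * N, the simplest depletion-style equation.
def trapezoidal(n0, lam, t, steps):
    h, n = t / steps, n0
    for _ in range(steps):
        # Implicit trapezoidal rule, solved in closed form for a linear ODE.
        n = n * (1 - 0.5 * h * lam) / (1 + 0.5 * h * lam)
    return n

def rk4(n0, lam, t, steps):
    h, n = t / steps, n0
    f = lambda y: -lam * y
    for _ in range(steps):
        k1 = f(n); k2 = f(n + 0.5*h*k1); k3 = f(n + 0.5*h*k2); k4 = f(n + h*k3)
        n += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return n
```

    Both converge to the analytic answer, with RK4's fourth-order accuracy giving a much smaller error at the same step count; real depletion chains couple many such equations through production and decay terms.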

  13. Integrated Numerical Experiments (INEX) and the Free-Electron Laser Physical Process Code (FELPPC)

    International Nuclear Information System (INIS)

    Thode, L.E.; Chan, K.C.D.; Schmitt, M.J.; McKee, J.; Ostic, J.; Elliott, C.J.; McVey, B.D.

    1990-01-01

    The strong coupling of subsystem elements, such as the accelerator, wiggler, and optics, greatly complicates the understanding and design of a free electron laser (FEL), even at the conceptual level. To address the strong coupling character of the FEL, the concept of an Integrated Numerical Experiment (INEX) was proposed. Unique features of the INEX approach are consistency and numerical equivalence of experimental diagnostics. The equivalent numerical diagnostics mitigate the major problem of misinterpretation that often occurs when theoretical and experimental data are compared. The INEX approach has been applied to a large number of accelerator and FEL experiments. Overall, the agreement between INEX and the experiments is very good. Despite the success of INEX, the approach is difficult to apply to trade-off and initial design studies because of its significant manpower and computational requirements. On the other hand, INEX provides a base from which realistic accelerator, wiggler, and optics models can be developed. The Free Electron Laser Physical Process Code (FELPPC) includes models developed from INEX, provides coupling between the subsystem models, and incorporates application models relevant to a specific trade-off or design study. In other words, FELPPC solves the complete physical process model using realistic physics and technology constraints. Because FELPPC provides a detailed design, a good estimate of the FEL mass, cost, and size can be made from a piece-part count of the FEL. FELPPC requires significant accelerator and FEL expertise to operate. The code can calculate complex FEL configurations including multiple accelerator and wiggler combinations

  14. Development and Validation of a Momentum Integral Numerical Analysis Code for Liquid Metal Fast Reactor

    International Nuclear Information System (INIS)

    Chen, Xiangyi; Suh, Kune Y.

    2016-01-01

    In this work, a benchmark problem is conducted to assess the precision of the upgraded in-house code MINA. The results from different best-estimate codes employing various grid spacer pressure drop correlations are compared in order to suggest the best one. A modified version of In's method shows good agreement with the experimental data (Figure 7). The failure of the prediction in previous work was caused by the use of Rehme's method, which is categorized into four groups according to different fitting strategies. Comparing the drag coefficients calculated by the four groups of Rehme's method, the equivalent drag coefficient calculated by In's method, and the experimental data (Figure 8), we conclude that Rehme's method considerably underestimates the drag coefficients of the grid spacers used in HELIOS, while In's method gives a reasonable prediction. Starting from the core inlet, the accumulated pressure losses along the forced convection flow path are presented in Figure 9; the good agreement of the MINA prediction with the experimental result shows that MINA has a very good capability in integrated momentum analysis, which makes it robust for future design scoping of the LFR.

  15. Development and Validation of a Momentum Integral Numerical Analysis Code for Liquid Metal Fast Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiangyi; Suh, Kune Y. [KAERI, Daejeon (Korea, Republic of)]

    2016-05-15

    In this work, a benchmark problem is conducted to assess the precision of the upgraded in-house code MINA. The results from different best-estimate codes employing various grid spacer pressure drop correlations are compared in order to suggest the best one. A modified version of In's method shows good agreement with the experimental data (Figure 7). The failure of the prediction in previous work was caused by the use of Rehme's method, which is categorized into four groups according to different fitting strategies. Comparing the drag coefficients calculated by the four groups of Rehme's method, the equivalent drag coefficient calculated by In's method, and the experimental data (Figure 8), we conclude that Rehme's method considerably underestimates the drag coefficients of the grid spacers used in HELIOS, while In's method gives a reasonable prediction. Starting from the core inlet, the accumulated pressure losses along the forced convection flow path are presented in Figure 9; the good agreement of the MINA prediction with the experimental result shows that MINA has a very good capability in integrated momentum analysis, which makes it robust for future design scoping of the LFR.

  16. The Impact of Diagnostic Code Misclassification on Optimizing the Experimental Design of Genetic Association Studies

    Directory of Open Access Journals (Sweden)

    Steven J. Schrodi

    2017-01-01

    Full Text Available Diagnostic codes within electronic health record systems can vary widely in accuracy. It has been noted that the number of instances of a particular diagnostic code monotonically increases with the accuracy of disease phenotype classification. As a growing number of health system databases become linked with genomic data, it is critically important to understand the effect of this misclassification on the power of genetic association studies. Here, I investigate the impact of this diagnostic code misclassification on the power of genetic association studies with the aim of better informing experimental designs using health informatics data. The trade-off between (i) reduced misclassification rates from utilizing additional instances of a diagnostic code per individual and (ii) the resulting smaller sample size is explored, and general rules are presented to improve experimental designs.
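    The trade-off the abstract describes can be made concrete with a rough power sketch: requiring more instances of a code lowers the case misclassification rate but shrinks the case sample. The numbers below (sample sizes, allele frequencies, misclassification rates) are hypothetical, and the noncentrality approximation is a generic 1-df allelic test, not the paper's model:

```python
def power_ncp(n_cases, n_controls, p_case, p_control, misclass):
    """Approximate noncentrality of a 1-df allelic chi-square test when a
    fraction `misclass` of labeled cases are actually controls."""
    p_eff = (1.0 - misclass) * p_case + misclass * p_control   # diluted case allele freq
    p_bar = (n_cases * p_eff + n_controls * p_control) / (n_cases + n_controls)
    var = p_bar * (1.0 - p_bar) * (1.0 / (2 * n_cases) + 1.0 / (2 * n_controls))
    return (p_eff - p_control) ** 2 / var

# Hypothetical trade-off: requiring >= k instances of the diagnostic code
# lowers the misclassification rate but shrinks the usable case sample.
scenarios = {1: (4000, 0.30), 2: (3000, 0.15), 3: (2000, 0.05)}
ncp = {k: power_ncp(n, 5000, 0.25, 0.20, m) for k, (n, m) in scenarios.items()}
best_k = max(ncp, key=ncp.get)   # instance threshold with the largest noncentrality
```

    Under these made-up numbers the intermediate threshold wins: the power gained from cleaner case labels outweighs the smaller sample up to a point, then the sample loss dominates.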

  17. A SURVEY ON OPTIMIZATION APPROACHES TO SEMANTIC SERVICE DISCOVERY TOWARDS AN INTEGRATED SOLUTION

    Directory of Open Access Journals (Sweden)

    Chellammal Surianarayanan

    2012-07-01

    Full Text Available The process of semantic service discovery using an ontology reasoner such as Pellet is time consuming. This restricts the usage of web services in real-time applications having dynamic composition requirements. As the performance of semantic service discovery is crucial in service composition, it should be optimized. Various optimization methods have been proposed to improve the performance of semantic discovery. In this work, we investigate the existing optimization methods and broadly classify optimization mechanisms into two categories, namely optimization by efficient reasoning and optimization by efficient matching. Optimization by efficient matching is further classified into subcategories: optimization by clustering, by inverted indexing, by caching, by hybrid methods, by efficient data structures, and by efficient matching algorithms. Based on a detailed study of the different methods, an integrated optimization infrastructure along with a matching method has been proposed to improve the performance of the semantic matching component. To achieve better optimization, the proposed method integrates the effects of caching, clustering and indexing. Theoretical aspects of the performance evaluation of the proposed method are discussed.
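    As a minimal illustration of the matching-side optimizations the survey catalogues (inverted indexing plus caching), the sketch below matches requested concepts against a hypothetical service registry; a real system would replace the exact-match lookup with reasoner-based subsumption:

```python
from collections import defaultdict
from functools import lru_cache

# Hypothetical registry: service name -> ontology concepts it provides.
SERVICES = {
    "WeatherByCity": {"Weather", "Temperature"},
    "CityGeocoder":  {"Coordinates"},
    "ForecastDaily": {"Weather", "Forecast"},
}

# Inverted index: concept -> services advertising it (built once, queried often).
INDEX = defaultdict(set)
for name, concepts in SERVICES.items():
    for concept in concepts:
        INDEX[concept].add(name)

@lru_cache(maxsize=None)        # caching layer: repeated queries skip the index walk
def discover(*wanted):
    """Return the services offering every requested concept (exact matching as a
    stand-in for reasoner-based subsumption checks)."""
    hits = [INDEX.get(c, set()) for c in wanted]
    return frozenset(set.intersection(*hits)) if hits else frozenset()
```

    For example, `discover("Weather", "Forecast")` returns only `ForecastDaily`, and a second identical call is served from the cache without touching the index.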

  18. SWAT4.0 - The integrated burnup code system driving continuous energy Monte Carlo codes MVP, MCNP and deterministic calculation code SRAC

    International Nuclear Information System (INIS)

    Kashima, Takao; Suyama, Kenya; Takada, Tomoyuki

    2015-03-01

    There have been two versions of SWAT depending on its development history: the revised SWAT, which uses the deterministic calculation code SRAC as a neutron transport solver, and SWAT3.1, which uses the continuous energy Monte Carlo code MVP or MCNP5 for the same purpose. It takes several hours, however, to execute one calculation with a continuous energy Monte Carlo code even on the supercomputer of the Japan Atomic Energy Agency. Moreover, two-dimensional burnup calculation is not practical using the revised SWAT because of problems in producing effective cross section data and applying them to arbitrary fuel geometries when a calculation model has multiple burnup zones. Therefore, SWAT4.0 has been developed by adding to SWAT3.1 a function to utilize the deterministic code SRAC2006, which has a shorter calculation time, as an outer-module neutron transport solver for burnup calculation. SWAT4.0 enables two-dimensional burnup calculation by providing an SRAC2006 input data template within the SWAT4.0 input data and updating the atomic number densities of the burnup zones in each burnup step. This report describes the outline, input data instructions, and calculation examples of SWAT4.0. (author)

  19. Iterative Phase Optimization of Elementary Quantum Error Correcting Codes (Open Access, Publisher’s Version)

    Science.gov (United States)

    2016-08-24

    ...to the seven-qubit Steane code [29] and also represents the smallest instance of a 2D topological color code [30].

  20. Performance Evaluation of a Novel Optimization Sequential Algorithm (SeQ) Code for FTTH Network

    Directory of Open Access Journals (Sweden)

    Fazlina C.A.S.

    2017-01-01

    Full Text Available The SeQ code has advantages such as a variable cross-correlation property at any given number of users and weights, and it effectively suppresses the impacts of phase induced intensity noise (PIIN) while providing a multiple access interference (MAI) cancellation property. The results revealed that, at a system performance of BER = 10^-9, the SeQ code is capable of achieving 1 Gbps over distances up to 60 km.

  1. Rare earth-doped integrated glass components: modeling and optimization

    DEFF Research Database (Denmark)

    Lumholt, Ole; Bjarklev, Anders Overgaard; Rasmussen, Thomas

    1995-01-01

    is performed, and the influence of variations in the launched pump power, the core cross section, the waveguide length, the erbium concentration, and the background losses are evaluated. Optimal design proposals are given, and the process reproducibility of the proposed optimal design is examined. Requirements...

  2. Integrated Reliability-Based Optimal Design of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1987-01-01

    In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables......-based optimal design is discussed. Next, an optimal inspection and repair strategy for existing structural systems is presented. An optimization problem is formulated, where the objective is to minimize the expected total future cost of inspection and repair subject to the constraint that the reliability...... value. The reliability can be measured from an element and/or a systems point of view. A number of methods to solve reliability-based optimization problems have been suggested, see e.g. Frangopol [1], Murotsu et al. [2], Thoft-Christensen & Sørensen [3] and Sørensen [4]. For structures where...

  3. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    Energy Technology Data Exchange (ETDEWEB)

    Kurosu, Keita [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, Osaka University Graduate School of Medicine, Suita, Osaka 565-0871 (Japan); Department of Radiology, Osaka University Hospital, Suita, Osaka 565-0871 (Japan); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Moskvin, Vadim P. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, IN 46202 (United States); Department of Radiation Oncology, St. Jude Children’s Research Hospital, Memphis, TN 38105 (United States)

    2016-01-15

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for highly precise particle therapy, especially for media containing inhomogeneities. However, the computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously chosen for uniform scanning proton beams, need to be evaluated for spot scanning; the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation

  4. Optimization of GATE and PHITS Monte Carlo code parameters for spot scanning proton beam based on simulation with FLUKA general-purpose code

    International Nuclear Information System (INIS)

    Kurosu, Keita; Das, Indra J.; Moskvin, Vadim P.

    2016-01-01

    Spot scanning, owing to its superior dose-shaping capability, provides unsurpassed dose conformity, in particular for complex targets. However, the robustness of the delivered dose distribution and prescription has to be verified. Monte Carlo (MC) simulation has the potential to generate significant advantages for highly precise particle therapy, especially for media containing inhomogeneities. However, the computational parameters in the MC simulation codes GATE, PHITS and FLUKA, previously chosen for uniform scanning proton beams, need to be evaluated for spot scanning; the relationship between the input parameters and the calculation results should be carefully scrutinized. The objective of this study was, therefore, to determine the optimal parameters for the spot scanning proton beam for both the GATE and PHITS codes by using data from FLUKA simulation as a reference. The proton beam scanning system of the Indiana University Health Proton Therapy Center was modeled in FLUKA, and the geometry was subsequently and identically transferred to GATE and PHITS. Although the beam transport is managed by the spot scanning system, the spot location is always set at the center of a water phantom of 600 × 600 × 300 mm³, which is placed after the treatment nozzle. The percentage depth dose (PDD) is computed along the central axis using 0.5 × 0.5 × 0.5 mm³ voxels in the water phantom. The PDDs and the proton ranges obtained with several computational parameters are then compared to those of FLUKA, and optimal parameters are determined from the accuracy of the proton range, suppressed dose deviation, and computational time minimization. Our results indicate that the optimized parameters are different from those for uniform scanning, suggesting that the gold standard for setting computational parameters for any proton therapy application cannot be determined consistently since the impact of setting parameters depends on the proton irradiation technique
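    One of the compared quantities, the proton range, is commonly read off the distal falloff of the PDD, for instance as the depth of the 80% dose level (R80). The sketch below uses a synthetic depth-dose curve on a 0.5 mm grid; the study's exact range definition is not stated in this abstract:

```python
import numpy as np

def distal_r80(depths, dose):
    """Depth (same units as `depths`) where the distal falloff crosses 80% of
    the peak dose, located by linear interpolation past the Bragg peak."""
    i_peak = int(np.argmax(dose))
    target = 0.8 * dose[i_peak]
    for i in range(i_peak, len(dose) - 1):
        if dose[i] >= target > dose[i + 1]:     # bracketing the 80% crossing
            frac = (dose[i] - target) / (dose[i] - dose[i + 1])
            return depths[i] + frac * (depths[i + 1] - depths[i])
    return depths[-1]

# Synthetic PDD: slow build-up to a Bragg peak at 150 mm, then a sharp falloff.
depths = np.arange(0.0, 200.0, 0.5)             # 0.5 mm sampling grid
dose = np.where(depths <= 150.0,
                30.0 + 70.0 * (depths / 150.0) ** 4,
                100.0 * np.exp(-((depths - 150.0) / 3.0) ** 2))
r80 = distal_r80(depths, dose)
```

    Comparing R80 values extracted this way from the GATE, PHITS, and FLUKA PDDs is one way to quantify range agreement between the codes.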

  5. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    International Nuclear Information System (INIS)

    Binh, Do Quang; Huy, Ngo Quang; Hai, Nguyen Hoang

    2014-01-01

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.

  6. A binary mixed integer coded genetic algorithm for multi-objective optimization of nuclear research reactor fuel reloading

    Energy Technology Data Exchange (ETDEWEB)

    Binh, Do Quang [University of Technical Education Ho Chi Minh City (Viet Nam); Huy, Ngo Quang [University of Industry Ho Chi Minh City (Viet Nam); Hai, Nguyen Hoang [Centre for Research and Development of Radiation Technology, Ho Chi Minh City (Viet Nam)

    2014-12-15

    This paper presents a new approach based on a binary mixed integer coded genetic algorithm in conjunction with the weighted sum method for multi-objective optimization of fuel loading patterns for nuclear research reactors. The proposed genetic algorithm works with two types of chromosomes: binary and integer chromosomes, and consists of two types of genetic operators: one working on binary chromosomes and the other working on integer chromosomes. The algorithm automatically searches for the most suitable weighting factors of the weighting function and the optimal fuel loading patterns in the search process. Illustrative calculations are implemented for a research reactor type TRIGA MARK II loaded with the Russian VVR-M2 fuels. Results show that the proposed genetic algorithm can successfully search for both the best weighting factors and a set of approximate optimal loading patterns that maximize the effective multiplication factor and minimize the power peaking factor while satisfying operational and safety constraints for the research reactor.
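    A minimal sketch of such a two-chromosome genetic algorithm is given below, with a toy weighted-sum fitness standing in for the keff/power-peaking objectives; all operator rates, sizes, and the fitness itself are hypothetical:

```python
import random

random.seed(1)
N_BIN, N_INT, INT_MAX, POP = 8, 4, 5, 30

def fitness(ind):
    """Toy weighted-sum stand-in for maximizing k_eff while minimizing the
    power peaking factor; real evaluations would come from a core simulator."""
    bits, ints = ind
    return sum(bits) - 0.5 * max(ints)

def crossover(a, b):
    """One-point crossover applied separately to the binary and integer parts."""
    cb, ci = random.randrange(1, N_BIN), random.randrange(1, N_INT)
    return (a[0][:cb] + b[0][cb:], a[1][:ci] + b[1][ci:])

def mutate(ind):
    bits = [b ^ (random.random() < 0.1) for b in ind[0]]        # bit-flip operator
    ints = [random.randint(0, INT_MAX) if random.random() < 0.1 else v
            for v in ind[1]]                                     # integer resampling
    return (bits, ints)

pop = [([random.randint(0, 1) for _ in range(N_BIN)],
        [random.randint(0, INT_MAX) for _ in range(N_INT)]) for _ in range(POP)]
for _ in range(40):                     # generational loop, truncation selection
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]            # keep the best half (elitism)
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP - len(parents))]
best = max(pop, key=fitness)
```

    Because the parents are retained each generation, the best fitness is non-decreasing; separate operators for the two chromosome types mirror the paper's binary/integer split.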

  7. Optimal integration and test plans for software releases of lithographic systems

    NARCIS (Netherlands)

    Boumen, R.; Jong, de I.S.M.; Mortel - Fronczak, van de J.M.; Rooda, J.E.

    2007-01-01

    This paper describes a method to determine the optimal integration and test plan for embedded systems software releases. The method consists of four steps: 1)describe the integration and test problem in an integration and test model which is introduced in this paper, 2) determine possible test

  8. The EUCLID/V1 Integrated Code for Safety Assessment of Liquid Metal Cooled Fast Reactors. Part 1: Basic Models

    Science.gov (United States)

    Mosunova, N. A.

    2018-05-01

    The article describes the basic models included in the EUCLID/V1 integrated code intended for safety analysis of liquid metal (sodium, lead, and lead-bismuth) cooled fast reactors using fuel rods with a gas gap and pellet dioxide, mixed oxide or nitride uranium-plutonium fuel under normal operation, anticipated operational occurrences and accident conditions by carrying out interconnected thermal-hydraulic, neutronics, and thermal-mechanical calculations. Information about the Russian and foreign analogs of the EUCLID/V1 integrated code is given. The modeled objects, the equation systems solved in differential form in each module of the EUCLID/V1 integrated code (the thermal-hydraulic module, the neutronics module, the fuel rod analysis module, and the burnup and decay heat calculation modules), the main calculated quantities, and the limitations on application of the code are presented. The article also gives data on the scope of functions performed by the integrated code's thermal-hydraulic module, which can describe both one- and two-phase processes occurring in the coolant. It is shown that, owing to the availability of the fuel rod analysis module in the integrated code, it becomes possible to estimate the performance of fuel rods in different regimes of reactor operation. It is also shown that the models implemented in the code for calculating neutron-physical processes make it possible to take into account the neutron field distribution over the fuel assembly cross section as well as other features important for the safety assessment of fast reactors.

  9. Integral Optimization of Systematic Parameters of Flip-Flow Screens

    Institute of Scientific and Technical Information of China (English)

    翟宏新

    2004-01-01

    A synthetic index Ks for evaluating flip-flow screens is proposed and systematically optimized in view of the whole system. A series of optimized values of the relevant parameters are found and then compared with those of current industrial specifications. The results show that the optimized value of Ks approaches that of the well-known flip-flow screens in the world. Some new findings on geometric and kinematic parameters are useful for improving flip-flow screens with a low Ks value, which is helpful in developing clean coal technology.

  10. Tri-Lab Co-Design Milestone: In-Depth Performance Portability Analysis of Improved Integrated Codes on Advanced Architecture.

    Energy Technology Data Exchange (ETDEWEB)

    Hoekstra, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Simon David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Richards, David [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bergen, Ben [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-01

    This milestone is a tri-lab deliverable supporting ongoing Co-Design efforts impacting applications in the Integrated Codes (IC) and Advanced Technology Development and Mitigation (ATDM) program elements. In FY14, the tri-labs looked at porting proxy applications to technologies of interest for ATS procurements. In FY15, a milestone was completed evaluating proxy applications in multiple programming models, and in FY16, a milestone was completed focusing on the migration of lessons learned back into production code development. This year, the co-design milestone focuses on extracting the knowledge gained and/or code revisions back into production applications.

  11. Integrated quantitative pharmacology for treatment optimization in oncology

    NARCIS (Netherlands)

    Hasselt, J.G.C. van

    2014-01-01

    This thesis describes the development and application of quantitative pharmacological models in oncology for treatment optimization and for the design and analysis of clinical trials with respect to pharmacokinetics, toxicity, efficacy and cost-effectiveness. A recurring theme throughout this

  12. Optimization of multi-response dynamic systems integrating multiple ...

    African Journals Online (AJOL)

    regression and Taguchi's dynamic signal-to-noise ratio concept ..... algorithm for dynamic multi-response optimization based on goal programming approach. .... problem-solving confirmation, if no grave infringement of model suppositions is ...

  13. General productivity code: productivity optimization of gaseous diffusion cascades. The programmer's guide

    International Nuclear Information System (INIS)

    Tunstall, J.N.

    1975-05-01

    The General Productivity Code is a FORTRAN IV computer program for the IBM System 360. With its model of the productivity of gaseous diffusion cascades, the program is used to determine optimum cascade performance based on specified operating conditions and to aid in the calculation of optimum operating conditions for a complex of diffusion cascades. This documentation of the program is directed primarily to programmers who will be responsible for updating the code as requested by the users. It is also intended to be an aid in training new Productivity Code users and to serve as a general reference manual. Elements of the mathematical model, the input data requirements, the definitions of the various tasks (Instructions) that can be performed, and a detailed description of most FORTRAN variables and program subroutines are presented. A sample problem is also included. (auth)

  14. Lifting scheme-based method for joint coding 3D stereo digital cinema with luminace correction and optimized prediction

    Science.gov (United States)

    Darazi, R.; Gouze, A.; Macq, B.

    2009-01-01

    Reproducing a natural, real scene as we see it in the real world every day is becoming more and more popular. Stereoscopic and multi-view techniques are used to this end. However, because more information is displayed, supporting technologies such as digital compression are required to ensure the storage and transmission of the sequences. In this paper, a new scheme for stereo image coding is proposed. The original left and right images are jointly coded. The main idea is to optimally exploit the existing correlation between the two images. This is done through the design of an efficient transform that reduces the existing redundancy in the stereo image pair. The approach was inspired by the Lifting Scheme (LS). The novelty in our work is that the prediction step is replaced by a hybrid step that consists of disparity compensation followed by luminance correction and an optimized prediction step. The proposed scheme can be used for lossless and for lossy coding. Experimental results show improvement in terms of performance and complexity compared to recently proposed methods.
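    The hybrid predict step can be sketched in one dimension: search for the disparity that best aligns a row of the left image with the right one, fit a gain/offset luminance correction, and keep the residual as the detail signal. This is a simplified stand-in for the authors' scheme (synthetic data, integer disparities only):

```python
import numpy as np

def predict_right(left, right, max_disp=4):
    """Lifting-style predict step sketch: choose the integer disparity that
    minimizes residual energy after a least-squares gain/offset luminance
    correction, and return the prediction residual (the 'detail' signal)."""
    best = None
    for d in range(max_disp + 1):
        shifted = np.roll(left, d)              # disparity-compensated left row
        a, b = np.polyfit(shifted, right, 1)    # luminance gain/offset fit
        residual = right - (a * shifted + b)
        energy = float(residual @ residual)
        if best is None or energy < best[0]:
            best = (energy, d, a, b, residual)
    return best[1:]                             # (disparity, gain, offset, residual)

# Synthetic stereo pair: the right row is a shifted, darkened copy of the left.
rng = np.random.default_rng(0)
left = rng.uniform(50, 200, size=64)
right = 0.8 * np.roll(left, 2) + 10.0
d, a, b, res = predict_right(left, right)
```

    On this ideal pair the search recovers the true disparity and luminance change, so the residual to be coded is essentially zero; real images leave a small detail signal.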

  15. PV-PCM integration in glazed building. Co-simulation and genetic optimization study

    DEFF Research Database (Denmark)

    Elarga, Hagar; Dal Monte, Andrea; Andersen, Rune Korsholm

    2017-01-01

    An exploratory step has also been considered prior to the optimization algorithm: it evaluates the energy profiles before and after the application of PCM to a PV module integrated in a glazed building. The optimization analysis investigates parameters such as ventilation flow rates and time schedules to obtain......The study describes a multi-objective optimization algorithm for an innovative integration of forced ventilated PV-PCM modules in glazed façade buildings: the aim is to identify and optimize the parameters that most affect thermal and energy performances. 1-D model, finite difference method FDM...

  16. Optimized Reactive Power Flow of DFIG Power Converters for Better Reliability Performance Considering Grid Codes

    DEFF Research Database (Denmark)

    Zhou, Dao; Blaabjerg, Frede; Lau, Mogens

    2015-01-01

    . In order to fulfill the modern grid codes, over-excited reactive power injection will further reduce the lifetime of the rotor-side converter. In this paper, the additional stress of the power semiconductor due to the reactive power injection is firstly evaluated in terms of modulation index...

  17. Power Allocation Optimization: Linear Precoding Adapted to NB-LDPC Coded MIMO Transmission

    Directory of Open Access Journals (Sweden)

    Tarek Chehade

    2015-01-01

    Full Text Available In multiple-input multiple-output (MIMO) transmission systems, the channel state information (CSI) at the transmitter can be used to add linear precoding to the transmitted signals in order to improve the performance and the reliability of the transmission system. This paper investigates how to properly join precoded closed-loop MIMO systems and nonbinary low density parity check (NB-LDPC) codes. The q elements in the Galois field GF(q) are directly mapped to q transmit symbol vectors. This allows NB-LDPC codes to perfectly fit with a MIMO precoding scheme, unlike binary LDPC codes. The new transmission model is detailed and studied for several linear precoders and various designed LDPC codes. We show that NB-LDPC codes are particularly well suited to be jointly used with precoding schemes based on the maximization of the minimum Euclidean distance (max-dmin) criterion. These results are theoretically supported by extrinsic information transfer (EXIT) analysis and are confirmed by numerical simulations.

  18. Optimization Using Metamodeling in the Context of Integrated Computational Materials Engineering (ICME)

    Energy Technology Data Exchange (ETDEWEB)

    Hammi, Youssef; Horstemeyer, Mark F; Wang, Paul; David, Francis; Carino, Ricolindo

    2013-11-18

    Predictive Design Technologies, LLC (PDT) proposed to employ Integrated Computational Materials Engineering (ICME) tools to help the manufacturing industry in the United States regain the competitive advantage in the global economy. ICME uses computational materials science tools within a holistic system in order to accelerate materials development, improve design optimization, and unify design and manufacturing. With the advent of accurate modeling and simulation along with significant increases in high performance computing (HPC) power, virtual design and manufacturing using ICME tools provide the means to reduce product development time and cost by alleviating costly trial-and-error physical design iterations while improving overall quality and manufacturing efficiency. To reduce the computational cost necessary for the large-scale HPC simulations and to make the methodology accessible for small and medium-sized manufacturers (SMMs), metamodels are employed. Metamodels are approximate models (functional relationships between input and output variables) that can reduce the simulation times by one to two orders of magnitude. In Phase I, PDT, partnered with Mississippi State University (MSU), demonstrated the feasibility of the proposed methodology by employing MSU's internal state variable (ISV) plasticity-damage model with the help of metamodels to optimize the microstructure-process-property-cost for tube manufacturing processes used by Plymouth Tube Company (PTC), which involve complicated temperature and mechanical loading histories. PDT quantified the microstructure-property relationships for PTC's SAE J525 electric resistance-welded cold drawn low carbon hydraulic 1010 steel tube manufacturing processes at seven different material states and calibrated the ISV plasticity material parameters to fit experimental tensile stress-strain curves. PDT successfully performed large-scale finite element (FE) simulations in an HPC environment using the ISV plasticity
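
The metamodel idea described above can be sketched in a few lines: a cheap surrogate is fitted to a handful of expensive solver runs and is then queried in its place. Everything below is a hypothetical stand-in (the response function, sampling points and polynomial degree are illustrative, not PDT's actual ISV model):

```python
import numpy as np

# Stand-in for one expensive FE simulation: a process input
# (e.g. a draw temperature) mapped to a scalar response.
# This function is purely illustrative -- it only plays the solver's role.
def expensive_simulation(x):
    return 400.0 + 80.0 * np.tanh((x - 550.0) / 60.0)

# 1. Run the expensive code at a small number of design points.
train_x = np.linspace(450.0, 650.0, 9)
train_y = np.array([expensive_simulation(x) for x in train_x])

# 2. Fit a cheap polynomial metamodel to the sampled results.
metamodel = np.poly1d(np.polyfit(train_x, train_y, deg=5))

# 3. The optimizer now queries the metamodel instead of the FE solver.
query = 562.5
rel_err = abs(metamodel(query) - expensive_simulation(query)) / expensive_simulation(query)
print(rel_err < 0.05)  # True: the surrogate tracks the solver closely
```

Each metamodel evaluation is a polynomial lookup, which is what makes the one-to-two-orders-of-magnitude speedup plausible for optimization loops that would otherwise call the solver at every iterate.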

  19. ASTEC V2 severe accident integral code: Fission product modelling and validation

    International Nuclear Information System (INIS)

    Cantrel, L.; Cousin, F.; Bosland, L.; Chevalier-Jabet, K.; Marchetto, C.

    2014-01-01

    One main goal of the severe accident integral code ASTEC V2, jointly developed for more than 15 years by IRSN and GRS, is to simulate the overall behaviour of fission products (FP) in a damaged nuclear facility. ASTEC applications are source term determinations, level 2 Probabilistic Safety Assessment (PSA2) studies including the determination of uncertainties, accident management studies and physical analyses of FP experiments to improve the understanding of the phenomenology. ASTEC is a modular code, and each module implements the models for one part of the phenomenology: the release of FPs and structural materials from degraded fuel in the ELSA module; the transport through the reactor coolant system, approximated as a sequence of control volumes, in the SOPHAEROS module; and the radiochemistry inside the containment building in the IODE module. Three other modules, CPA, ISODOP and DOSE, compute, respectively, the deposition rate of aerosols inside the containment, the activities of the isotopes as a function of time, and the gaseous dose rate, which is needed to model radiochemistry in the gaseous phase. In ELSA, release models are semi-mechanistic and have been validated against a wide range of experimental data, notably the VERCORS experiments. In SOPHAEROS, the models can be divided into two parts: vapour-phase phenomena and aerosol-phase phenomena. In IODE, iodine and ruthenium chemistry are modelled with a semi-mechanistic approach; these FPs can form volatile species and are particularly important in terms of potential radiological consequences. The models in these three modules are based on a wide experimental database, resulting for a large part from international programmes, and they are considered to represent the state of the art of R and D knowledge. This paper illustrates some of the FP modelling capabilities of ASTEC, and computed values are compared with experimental results that are part of the validation matrix.

  20. Optimal design of integrated CHP systems for housing complexes

    International Nuclear Information System (INIS)

    Fuentes-Cortés, Luis Fabián; Ponce-Ortega, José María; Nápoles-Rivera, Fabricio; Serna-González, Medardo; El-Halwagi, Mahmoud M.

    2015-01-01

    Highlights: • An optimization formulation for designing domestic CHP systems is presented. • The operating scheme, prime mover and thermal storage system are optimized. • Weather conditions and demand behavior are considered. • Economic and environmental objectives are considered simultaneously. • Two case studies from Mexico are presented. - Abstract: This paper presents a multi-objective optimization approach for designing residential cogeneration systems based on a new superstructure that allows satisfying the demands for hot water and electricity at minimum cost and minimum environmental impact. The optimization involves the selection of technologies, the sizes of the required units and the operating modes of the equipment. Two residential complexes in different cities of the State of Michoacán in Mexico were considered as case studies. One is located on the west coast and the other in the mountainous area. The results show that the implementation of the proposed optimization method yields significant economic and environmental benefits due to the simultaneous reduction in the total annual cost and overall greenhouse gas emissions.

  1. The computer code system for reactor radiation shielding in design of nuclear power plant

    International Nuclear Information System (INIS)

    Li Chunhuai; Fu Shouxin; Liu Guilian

    1995-01-01

    The computer code system used in reactor radiation shielding design of nuclear power plants includes source term codes, discrete ordinates transport codes, Monte Carlo and albedo Monte Carlo codes, kernel integration codes, an optimization code, a temperature field code, a skyshine code, coupling calculation codes and processing codes for data libraries. The system offers a satisfactory variety of codes and complete sets of data libraries. It is widely used in reactor radiation shielding design and safety analysis of nuclear power plants and other nuclear facilities.

  2. Application of Flow and Transport Optimization Codes to Groundwater Pump and Treat Systems- VOLUME 2

    National Research Council Canada - National Science Library

    Minsker, Barbara

    2004-01-01

    .... Recent studies completed by the EPA and the Navy indicate that the majority of pump and treat systems are not operating as designed, have unachievable or undefined goals, and have not been optimized since installation...

  3. Thought insertion as a self-disturbance: An integration of predictive coding and phenomenological approaches

    Directory of Open Access Journals (Sweden)

    Philipp Sterzer

    2016-10-01

    Current theories in the framework of hierarchical predictive coding propose that positive symptoms of schizophrenia, such as delusions and hallucinations, arise from an alteration in Bayesian inference, the term inference referring to a process by which learned predictions are used to infer probable causes of sensory data. However, for one particularly striking and frequent symptom of schizophrenia, thought insertion, no plausible account has been proposed in terms of the predictive-coding framework. Here we propose that thought insertion is due to an altered experience of thoughts as coming from nowhere, as is already indicated by the early 20th century phenomenological accounts of the early Heidelberg School of psychiatry. These accounts identified thought insertion as one of the self-disturbances (German: Ichstörungen) of schizophrenia and used mescaline as a model psychosis in healthy individuals to explore the possible mechanisms. The early Heidelberg School (Gruhle, Mayer-Gross, Beringer) first named and defined the self-disturbances, and proposed that thought insertion involves a disruption of the inner connectedness of thoughts and experiences, and a 'becoming sensory' of those thoughts experienced as inserted. This account offers a novel way to integrate the phenomenology of thought insertion with the predictive coding framework. We argue that the altered experience of thoughts may be caused by a reduced precision of context-dependent predictions, relative to sensory precision. According to the principles of Bayesian inference, this reduced precision leads to increased prediction-error signals evoked by the neural activity that encodes thoughts. Thus, in analogy with the prediction-error-related aberrant salience of external events that has been proposed previously, internal events such as thoughts (including volitions, emotions and memories) can also be associated with increased prediction-error signaling and are thus imbued with

  4. Hermeneutics framework: integration of design rationale and optimizing software modules

    NARCIS (Netherlands)

    Aksit, Mehmet; Malakuti Khah Olun Abadi, Somayeh

    To tackle the evolution challenges of adaptive systems, this paper argues for the necessity of hermeneutic approaches that help to avoid the too-early elimination of design alternatives. This visionary paper proposes the Hermeneutics Framework, which computationally integrates a design rationale

  5. Simulation of the containment spray system test PACOS PX2.2 with the integral code ASTEC and the containment code system COCOSYS

    International Nuclear Information System (INIS)

    Risken, Tobias; Koch, Marco K.

    2011-01-01

    Reactor safety research includes the analysis of postulated accidents in nuclear power plants (NPPs). These accidents may involve a loss of coolant from the plant's reactor coolant system, during which heat and pressure within the containment increase. To handle these atmospheric conditions, containment spray systems are installed as part of the accident management systems of various light water reactors (LWRs) worldwide. To improve and ensure safety in NPP operation and accident management, numerical simulations of postulated accident scenarios are performed. The calculations presented here, performed at Ruhr-Universitaet Bochum, assess how well the effect of the containment spray system can be predicted with the integral code ASTEC and the containment code system COCOSYS. To this end, the test PACOS Px2.2 is simulated, in which water is sprayed into the stratified containment atmosphere of the BMC (Battelle Modell-Containment). (orig.)

  6. Optimal trading quantity integration as a basis for optimal portfolio management

    Directory of Open Access Journals (Sweden)

    Saša Žiković

    2005-06-01

    The author points out the rationale behind calculating and using the optimal trading quantity in conjunction with Markowitz's modern portfolio theory. In the opening part the author presents an example of calculating optimal weights using Markowitz's mean-variance approach, followed by an explanation of the basic logic behind the optimal trading quantity. The use of the optimal trading quantity is not limited to systems with Bernoulli outcomes, but it can also be used when trading shares, futures, options etc. The optimal trading quantity points out two often-overlooked axioms: (1) a system with negative mathematical expectancy can never be transformed into a system with positive mathematical expectancy; (2) by missing the optimal trading quantity, an investor can turn a system with positive expectancy into a negative one. The optimal trading quantity is the quantity that maximizes the geometric mean (growth function) of a particular system. To determine the optimal trading quantity for simpler systems with a very limited number of outcomes, a set of Kelly's formulas is appropriate. The conclusion presents a summary of the paper.
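
For the Bernoulli case mentioned above, Kelly's formula gives the growth-optimal fraction in closed form, f* = p − (1 − p)/b. A minimal sketch (the win probability and odds below are illustrative values, not taken from the paper):

```python
import math

# Kelly fraction for a Bernoulli bet: win probability p, net odds b
# (win b per unit staked, lose the stake otherwise).
def kelly_fraction(p, b):
    return p - (1.0 - p) / b

# Expected log growth per bet when staking a fraction f of capital.
def growth_rate(f, p, b):
    return p * math.log(1.0 + b * f) + (1.0 - p) * math.log(1.0 - f)

p, b = 0.55, 1.0                  # 55% win chance at even odds
f_star = kelly_fraction(p, b)     # stake 10% of capital

# A grid search over staking fractions confirms the closed form:
# over- or under-betting lowers geometric growth (axiom (2) above).
grid = [i / 1000.0 for i in range(1, 999)]
best = max(grid, key=lambda f: growth_rate(f, p, b))
print(round(f_star, 3), round(best, 3))  # → 0.1 0.1
```

Note that for p ≤ (1 − p)/b the fraction is non-positive, which is axiom (1): no staking rule turns a negative-expectancy system into a positive one.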

  7. Study of integrated optimization design of wind farm in complex terrain

    DEFF Research Database (Denmark)

    Xu, Chang; Chen, Dandan; Han, Xingxing

    2017-01-01

    wind farm design in complex terrain and setting up an integrated optimization mathematical model for micro-site selection, power lines and road maintenance design etc. Based on the existing 1-year wind measurement data in the wind farm area, the genetic algorithm was used to optimize the micro-site selection. On the basis of the location optimization of the wind turbines, optimization algorithms such as the single-source shortest path algorithm and the minimum spanning tree algorithm were used to optimize electric lines and maintenance roads. The practice shows that the research results can provide important...
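
The cable-routing step mentioned above can be illustrated with a minimum spanning tree: for a handful of hypothetical turbine coordinates (not the paper's actual wind farm), Prim's algorithm gives the shortest total cable run connecting every turbine.

```python
import math

# Hypothetical turbine positions (x, y) in metres after micro-siting.
turbines = [(0, 0), (300, 50), (150, 260), (480, 200), (330, 420)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Prim's algorithm: grow a minimum spanning tree from turbine 0.
def mst_edges(points):
    in_tree, edges = {0}, []
    while len(in_tree) < len(points):
        i, j = min(
            ((i, j) for i in in_tree for j in range(len(points)) if j not in in_tree),
            key=lambda e: dist(points[e[0]], points[e[1]]),
        )
        in_tree.add(j)
        edges.append((i, j))
    return edges

edges = mst_edges(turbines)
total = sum(dist(turbines[i], turbines[j]) for i, j in edges)
print(len(edges), round(total))  # 4 edges connect 5 turbines
```

Real cable layouts add constraints (terrain, trenching costs, feeder capacity), so the MST is a lower-bound starting point rather than the final design.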

  8. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    NARCIS (Netherlands)

    de Jong, S.A.; Hoefnagels, E.T.A.; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of and interrelation between four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations

  9. Integrated quantitative pharmacology for treatment optimization in oncology

    NARCIS (Netherlands)

    van Hasselt, J.G.C.

    2014-01-01

    This thesis describes the development and application of quantitative pharmacological models in oncology for treatment optimization and for the design and analysis of clinical trials with respect to pharmacokinetics, toxicity, efficacy and cost-effectiveness. A recurring theme throughout this thesis

  10. Optimal configuration of an integrated power and transport system

    DEFF Research Database (Denmark)

    Juul, Nina; Meibom, Peter

    2011-01-01

    optimal investments in both power plants and vehicle technologies is presented in this article. The model includes the interactions between the power system and the transport system, including the competition between flexibility measures such as hydrogen storage in combination with electrolysis, heat storage in combination with heat pumps and heat boilers, and plug-in electric vehicles.

  11. Integrated energy optimization with smart home energy management systems

    NARCIS (Netherlands)

    Asare-Bediako, B.; Ribeiro, P.F.; Kling, W.L.

    2012-01-01

    Optimization of energy use is a vital concept in providing solutions to many of the energy challenges in our world today. Large chemical, mechanical, pneumatic, hydraulic, and electrical systems require energy efficiency as one of the important aspects of operating systems. At the micro-scale, the

  12. Optimization of surface integrity in dry hard turning using RSM

    Indian Academy of Sciences (India)

    This paper investigates the effect of different cutting parameters (cutting ... with coated carbide tool under different settings of cutting parameters. ... procedure of response surface methodology (RSM) to determine optimal ..... The numerical opti- .... and analysis of experiments, New Delhi, A. K. Ghosh, PHI Learning Private.

  13. Optimization of multi-response dynamic systems integrating multiple ...

    African Journals Online (AJOL)

    It also results in better optimization performance than back-propagation neural network-based approach and data mining-based approach reported by the past researchers. Keywords: multiple responses, multiple regression, weighted dynamic signal-to-noise ratio, performance measure modelling, response function ...

  14. Analysis and Optimization of Sparse Random Linear Network Coding for Reliable Multicast Services

    DEFF Research Database (Denmark)

    Tassi, Andrea; Chatzigeorgiou, Ioannis; Roetter, Daniel Enrique Lucani

    2016-01-01

    Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper considers a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different random linear network coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the more the user's computational overhead grows and, consequently, the faster the battery of a mobile device drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows us to efficiently derive the average number of coded packet...
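
The decoding cost discussed above comes from Gaussian elimination on the coding-coefficient matrix. A toy sketch over GF(2) (packet count, payload size and the random encoder are illustrative, not the paper's model) shows coded packets being accumulated until the matrix reaches full rank, at which point the sources are recovered:

```python
import random

random.seed(1)
K = 4  # number of source packets

# Source packets as bit-lists (GF(2) symbols).
source = [[random.randint(0, 1) for _ in range(8)] for _ in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

# Sparse RLNC encoding: each coded packet XORs a random subset of sources.
def encode():
    coeffs = [random.randint(0, 1) for _ in range(K)]
    payload = [0] * 8
    for c, s in zip(coeffs, source):
        if c:
            payload = xor(payload, s)
    return coeffs, payload

# Collect coded packets until the coefficient matrix has full rank,
# then recover the sources by Gauss-Jordan elimination over GF(2).
rows = []
while True:
    rows.append(encode())
    mat = [(list(c), list(p)) for c, p in rows]
    rank = 0
    for col in range(K):
        piv = next((r for r in range(rank, len(mat)) if mat[r][0][col]), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        for r in range(len(mat)):
            if r != rank and mat[r][0][col]:
                mat[r] = (xor(mat[r][0], mat[rank][0]), xor(mat[r][1], mat[rank][1]))
        rank += 1
    if rank == K:
        break

decoded = [p for _, p in mat[:K]]
print(decoded == source)  # True once full rank is reached
```

The XOR count of this elimination is exactly the kind of decoding overhead that sparse coefficient designs try to reduce, at the price of a higher chance of linearly dependent (wasted) packets.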

  15. Stochastic optimization of GeantV code by use of genetic algorithms

    Science.gov (United States)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Behera, S. P.; Brun, R.; Canal, P.; Carminati, F.; Cosmo, G.; Duhem, L.; Elvira, D.; Folger, G.; Gheata, A.; Gheata, M.; Goulas, I.; Hariri, F.; Jun, S. Y.; Konstantinov, D.; Kumawat, H.; Ivantchenko, V.; Lima, G.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.

    2017-10-01

    GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter, and a geometrical modeler library for describing the detector, locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and massively parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching for the optimum set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massively parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that can be used in the case of resource-expensive or time-consuming evaluations of fitness functions, in order to speed up the convergence of the black-box optimization problem.
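
The black-box tuning loop described above can be sketched with a minimal evolution strategy. The objective below is a hypothetical stand-in for simulation throughput (two made-up tuning parameters with an optimum at (64, 8)); it is not GeantV's real parameter space:

```python
import random

random.seed(7)

# Hypothetical black-box objective: a penalty over two tuning parameters,
# queried only point-wise, exactly as a throughput measurement would be.
def tuning_loss(params):
    basket, depth = params
    return (basket - 64.0) ** 2 / 100.0 + (depth - 8.0) ** 2

# A minimal (1+1) evolution strategy with an annealed mutation strength:
# mutate the current parameters and keep the child only if it is no worse.
def evolve(objective, start, sigma=4.0, steps=200):
    parent, best = list(start), objective(start)
    for _ in range(steps):
        child = [g + random.gauss(0.0, sigma) for g in parent]
        score = objective(child)
        if score <= best:
            parent, best = child, score
        sigma *= 0.98  # shrink the search radius over time
    return parent, best

start = [10.0, 1.0]
params, score = evolve(tuning_loss, start)
print(score < tuning_loss(start))  # True: only improving mutations survive
```

Population-based genetic algorithms add crossover and selection over many candidates, but the gradient-free evaluate-mutate-select loop is the same, which is why expensive fitness evaluations dominate the cost and motivate the surrogate operator mentioned above.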

  16. Optimization of Grillages Using Genetic Algorithms for Integrating Matlab and Fortran Environments

    Directory of Open Access Journals (Sweden)

    Darius Mačiūnas

    2012-12-01

    The purpose of the paper is to present technology applied to the global optimization of grillage-type pile foundations (henceforth grillages). The goal of optimization is to obtain the optimal layout of pile placement in the grillages. The problem can be categorized as a topology optimization problem. The objective function comprises the maximum reactive force emerging in a pile. This reactive force is minimized during the optimization procedure, in which the variables encode the positions of piles beneath the connecting beams. Reactive forces in all piles are computed using an original algorithm implemented in the Fortran programming language. The algorithm is integrated into the MatLab environment, where the optimization procedure is executed using a genetic algorithm. The article also describes the technology enabling the integration of the MatLab and Fortran environments. The authors seek to evaluate the quality of a solution to the problem by analyzing experimental results obtained with the proposed technology.

  18. An integrative approach to predicting the functional effects of small indels in non-coding regions of the human genome.

    Science.gov (United States)

    Ferlaino, Michael; Rogers, Mark F; Shihab, Hashem A; Mort, Matthew; Cooper, David N; Gaunt, Tom R; Campbell, Colin

    2017-10-06

    Small insertions and deletions (indels) have a significant influence in human disease and, in terms of frequency, they are second only to single nucleotide variants as pathogenic mutations. As the majority of mutations associated with complex traits are located outside the exome, it is crucial to investigate the potential pathogenic impact of indels in non-coding regions of the human genome. We present FATHMM-indel, an integrative approach to predict the functional effect, pathogenic or neutral, of indels in non-coding regions of the human genome. Our method exploits various genomic annotations in addition to sequence data. When validated on benchmark data, FATHMM-indel significantly outperforms CADD and GAVIN, state of the art models in assessing the pathogenic impact of non-coding variants. FATHMM-indel is available via a web server at indels.biocompute.org.uk. FATHMM-indel can accurately predict the functional impact and prioritise small indels throughout the whole non-coding genome.

  19. Battelle integrity of nuclear piping program. Summary of results and implications for codes/standards

    International Nuclear Information System (INIS)

    Miura, Naoki

    2005-01-01

    The BINP (Battelle Integrity of Nuclear Piping) program was proposed by Battelle to elaborate pipe fracture evaluation methods and to improve LBB and in-service flaw evaluation criteria. The program was conducted from October 1998 to September 2003. In Japan, CRIEPI participated in the program on behalf of electric utilities and fabricators to keep up with the technical background for a possible future revision of LBB and in-service flaw evaluation standards and to identify the issues that need to be reflected in current domestic standards. A series of results obtained from the program has been used for the new LBB Regulatory Guide Program by the USNRC and in proposals of revised in-service flaw evaluation criteria to the ASME Code Committee. The results were assessed for their implications for existing or future domestic standards. As a result, the impact of many of the issues that had been expected to adversely affect LBB approval or allowable flaw sizes in the flaw evaluation criteria was found to be relatively minor under actual plant conditions. At the same time, some issues that need to be resolved to arrive at advanced and rational standards in the future were identified. (author)

  20. Synthetic radiation diagnostics in PIConGPU. Integrating spectral detectors into particle-in-cell codes

    Energy Technology Data Exchange (ETDEWEB)

    Pausch, Richard; Burau, Heiko; Huebl, Axel; Steiniger, Klaus [Helmholtz-Zentrum Dresden-Rossendorf (Germany); Technische Universitaet Dresden (Germany); Debus, Alexander; Widera, Rene; Bussmann, Michael [Helmholtz-Zentrum Dresden-Rossendorf (Germany)

    2016-07-01

    We present the in-situ far field radiation diagnostic in the particle-in-cell code PIConGPU. It was developed to close the gap between simulated plasma dynamics and radiation observed in laser plasma experiments. Its predictive capabilities, both qualitative and quantitative, have been tested against analytical models. Now, we apply this synthetic spectral diagnostic to investigate plasma dynamics in laser wakefield acceleration, laser foil irradiation and plasma instabilities. Our method is based on the far field approximation of the Lienard-Wiechert potential and allows predicting both coherent and incoherent radiation spectra from the infrared to X-rays. Its capability to resolve the radiation polarization and to determine the temporal and spatial origin of the radiation enables us to correlate specific spectral signatures with characteristic dynamics in the plasma. Furthermore, its direct integration into the highly scalable GPU framework of PIConGPU allows computing radiation spectra for thousands of frequencies, hundreds of detector positions and billions of particles efficiently. In this talk we will demonstrate these capabilities on recent simulations of laser wakefield acceleration (LWFA) and high harmonics generation during target normal sheath acceleration (TNSA).

  1. Basic data, computer codes and integral experiments: The tools for modelling in nuclear technology

    International Nuclear Information System (INIS)

    Sartori, E.

    2001-01-01

    When studying applications in nuclear technology, we need to understand and be able to predict the behavior of systems manufactured by human enterprise. First, the underlying basic physical and chemical phenomena need to be understood. We then have to predict the macroscopic effects resulting from the interplay of a large number of different basic events. To build confidence in our modelling capability, we then need to compare these results against measurements carried out on such systems. The different levels of modelling require the solution of different types of equations using different types of parameters. The tools required for carrying out a complete validated analysis are: the basic nuclear or chemical data; the computer codes; and the integral experiments. This article describes the role each component plays in a computational scheme designed for modelling purposes. It also describes which tools have been developed and are internationally available. The roles that the OECD/NEA Data Bank, the Radiation Shielding Information Computational Center (RSICC) and the IAEA Nuclear Data Section play in making these elements available to the community of scientists and engineers are described. (author)

  2. Welded joints integrity analysis and optimization for fiber laser welding of dissimilar materials

    Science.gov (United States)

    Ai, Yuewei; Shao, Xinyu; Jiang, Ping; Li, Peigen; Liu, Yang; Liu, Wei

    2016-11-01

    Welded joints between dissimilar materials provide many advantages in the power, automotive, chemical, and spacecraft industries. The weld bead integrity, which is determined by the process parameters, plays a significant role in welding quality during fiber laser welding (FLW) of dissimilar materials. In this paper, an optimization method that takes the integrity of the weld bead and the weld area into consideration is proposed for FLW of dissimilar materials, low carbon steel and stainless steel. The relationships between weld bead integrity and process parameters are modelled by a back propagation neural network optimized with a genetic algorithm (GA-BPNN). The particle swarm optimization (PSO) algorithm is then used to optimize the outputs predicted by the GA-BPNN. Through the optimization process, a desired weld bead with good integrity and minimum weld area is obtained, and the corresponding microstructure and microhardness are excellent. The mechanical properties of the optimized joints are greatly improved compared with those of the un-optimized welded joints. Moreover, the effects of the significant factors are analyzed with a statistical approach, and the laser power (LP) is identified as the most significant factor for weld bead integrity and weld area. The results indicate that the proposed method is effective for improving the reliability and stability of welded joints in practical production.
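
The PSO stage of such a pipeline can be sketched as follows. The quadratic surrogate below merely stands in for the trained GA-BPNN prediction; its coefficients, bounds and the optimum location are purely illustrative:

```python
import random

random.seed(3)

# Hypothetical surrogate for the predicted weld area as a function of
# laser power (kW) and welding speed (m/min); stands in for the GA-BPNN.
def weld_area(x):
    power, speed = x
    return (power - 2.0) ** 2 + 0.5 * (speed - 1.5) ** 2

# Minimal particle swarm optimization over box bounds.
def pso(f, bounds, n=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # move and clamp to the process window
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, area = pso(weld_area, [(1.0, 3.0), (0.5, 3.0)])
print(area < 1e-2)  # True: the swarm closes in on the surrogate's minimum
```

Because every evaluation is a surrogate call rather than a welding trial, the swarm can afford thousands of evaluations, which is the practical appeal of the GA-BPNN + PSO pairing.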

  3. Equilibrium optimization code OPEQ and results of applying it to HT-7U

    International Nuclear Information System (INIS)

    Zha Xuejun; Zhu Sizheng; Yu Qingquan

    2003-01-01

    The plasma equilibrium configuration has a strong impact on confinement and MHD stability in tokamaks. In designing a tokamak device, an important issue is to determine the positions and currents of the poloidal coils, which are subject to physics and engineering constraints, for a prescribed equilibrium shape of the plasma. In this paper, an effective method based on multi-variable equilibrium optimization is given. The method can optimize the poloidal coils when the previously prescribed plasma parameters are treated as an objective function. We apply it to the HT-7U equilibrium calculation and obtain good results.

  4. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

    In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve systems of Fredholm integral equations. The effectiveness of the optimal homotopy asymptotic method is demonstrated. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).
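
The systems treated by such methods are typically Fredholm integral equations of the second kind; in standard textbook form (a general formulation, not reproduced from the paper itself):

```latex
u_i(x) = f_i(x) + \lambda \sum_{j=1}^{n} \int_a^b K_{ij}(x,t)\, u_j(t)\, \mathrm{d}t,
\qquad i = 1, \dots, n, \quad x \in [a, b],
```

where the \(u_i\) are the unknown functions, the \(f_i\) are given forcing terms and the \(K_{ij}\) are given kernels. OHAM embeds such a system in a homotopy whose auxiliary convergence-control parameters are chosen by minimizing the residual, which is the "easy tool" for steering the convergence region mentioned in the abstract.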

  5. Integration of safety engineering into a cost optimized development program.

    Science.gov (United States)

    Ball, L. W.

    1972-01-01

    A six-segment management model is presented, each segment of which represents a major area in a new product development program. The first segment of the model covers integration of specialist engineers into 'systems requirement definition' or the system engineering documentation process. The second covers preparation of five basic types of 'development program plans.' The third segment covers integration of system requirements, scheduling, and funding of specialist engineering activities into 'work breakdown structures,' 'cost accounts,' and 'work packages.' The fourth covers 'requirement communication' by line organizations. The fifth covers 'performance measurement' based on work package data. The sixth covers 'baseline requirements achievement tracking.'

  6. Optimizing Computation of Repairs from Active Integrity Constraints

    DEFF Research Database (Denmark)

    Cruz-Filipe, Luís

    2014-01-01

    Active integrity constraints (AICs) are a form of integrity constraints for databases that not only identify inconsistencies, but also suggest how these can be overcome. The semantics of AICs defines different types of repairs, but deciding whether an inconsistent database can be repaired and finding possible repairs is an NP- or Σ2p-complete problem, depending on the type of repairs one has in mind. In this paper, we introduce two different relations on AICs: an equivalence relation of independence, allowing the search to be parallelized among the equivalence classes, and a precedence relation...

  7. Integrated Design Optimization of a 5-DOF Assistive Light-weight Anthropomorphic Arm

    DEFF Research Database (Denmark)

    Zhou, Lelai; Bai, Shaoping; Hansen, Michael Rygaard

    2011-01-01

    An integrated dimensional and drive train optimization method was developed for light-weight robotic arm design. The method deals with the determination of optimal link lengths and the optimal selection of motors and gearboxes from commercially available components. Constraints are formulated on the basis of kinematic performance and dynamic requirements, whereas the main objective is to minimize the weight. The design of a human-like arm, which is 10 kg in weight with a load capacity of 5 kg, is described.

  8. Qualitative and Quantitative Integrated Modeling for Stochastic Simulation and Optimization

    Directory of Open Access Journals (Sweden)

    Xuefeng Yan

    2013-01-01

    The simulation and optimization of an actual physical system are usually constructed based on stochastic models, which inherently have both qualitative and quantitative characteristics. Most modeling specifications and frameworks find it difficult to describe the qualitative model directly. In order to deal with expert knowledge, uncertain reasoning, and other qualitative information, a combined qualitative and quantitative modeling specification is proposed based on a hierarchical model structure framework. The new modeling approach is based on a hierarchical model structure which includes the meta-meta model, the meta-model and the high-level model. A description logic system is defined for the formal definition and verification of the new modeling specification. A stochastic defense simulation was developed to illustrate how to model the system and optimize the result. The results show that the proposed method can describe a complex system more comprehensively, and that the survival probability of the target is higher when qualitative models are introduced into the quantitative simulation.

  9. Transfer prices assignment with integrated production and marketing optimization models

    Directory of Open Access Journals (Sweden)

    Enrique Parra

    2018-04-01

Full Text Available Purpose: In decentralized organizations (today the great majority of large multinational groups), much of the decision-making power lies in the individual business units (BUs). In these cases, the management control system (MCS) uses transfer prices to coordinate the actions of the BUs and to evaluate their performance, with the goal of guaranteeing the optimum for the corporation as a whole. The purpose of the investigation is to design transfer prices that suit this goal. Design/methodology/approach: Considering the results of whole-company supply chain optimization models (in the presence of seasonality of demand), the question is how to design a mechanism that creates optimal incentives for the managers of each business unit to drive the corporation to its optimal performance. Mathematical programming models are used as a starting point. Findings: Different transfer-price computation methods are introduced in this paper for decentralized organizations with two divisions (production and marketing). The methods take into account the results of the solution of the whole-company supply chain optimization model, if one exists, and can be adapted to the type of information available in the company. The focus is mainly on transport cost assignment. Practical implications: Using the methods proposed in this paper, a decentralized corporation can implement more accurate transfer prices to drive the whole organization to its globally optimal performance. Originality/value: The proposed methods are a new contribution to the literature on transfer prices, with special emphasis on practical, straightforward implementation in a modern corporation with several business units and high seasonality of demand. The methods are also very flexible and can be tuned depending on the type of information available in the company.
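The abstract does not disclose the actual models; as a hedged numeric sketch (the linear-demand form and all parameter values are invented for illustration), the following shows the classic result underlying such schemes: with linear demand and constant marginal production-plus-transport cost, a transfer price set at marginal cost makes the marketing division's own optimum coincide with the corporate optimum.

```python
# Hedged sketch, not the paper's model: linear demand p(q) = a - b*q,
# constant unit production cost c and transport cost s (all invented).
a, b = 100.0, 0.5      # demand intercept and slope
c, s = 20.0, 5.0       # unit production cost and transport cost

def corporate_profit(q):
    # Whole-company view: revenue minus production and transport cost.
    return (a - b * q) * q - (c + s) * q

def marketing_profit(q, t):
    # Marketing division buys internally at transfer price t.
    return (a - b * q) * q - t * q

qs = [q / 10 for q in range(0, 2001)]            # grid search on quantity
q_corp = max(qs, key=corporate_profit)           # corporate optimum
t = c + s                                        # marginal-cost transfer price
q_div = max(qs, key=lambda q: marketing_profit(q, t))
print(q_corp, q_div)                             # the two optima coincide
```

The grid search is a stand-in for the mathematical programming models mentioned in the abstract; any exact solver would find the same quantity.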

  10. Metrology for WEST components design and integration optimization

    International Nuclear Information System (INIS)

    Brun, C.; Archambeau, G.; Blanc, L.; Bucalossi, J.; Chantant, M.; Gargiulo, L.; Hermenier, A.; Le, R.; Pilia, A.

    2015-01-01

Highlights: • Metrology methods. • Interest of metrology campaigns to optimize margins by reducing uncertainties. • Assembly problems are solved and validated on a numerical mock-up. • Post-treatment of a full 3D scan of the vacuum vessel. - Abstract: On WEST, new components will be implemented in an existing environment, so emphasis has to be put on metrology to optimize the design and the assembly. At a given stage of the project, several components have to coexist in the limited vessel volume. The difficulty therefore consists in validating the mechanical interfaces between existing and new components, minimizing the assembly risk, and maximizing the plasma volume. The CEA/IRFM took the opportunity of this ambitious project to sign a partnership with an industrial partner specialized in multipurpose metrology. To optimize the assembly procedure, the IRFM Assembly group works in close collaboration with this partner to define and plan the metrology campaigns. The paper illustrates the organization, methods and results of the dedicated metrology campaigns that were defined and carried out in the WEST dis/assembly phase. To conclude, the future metrology needs at CEA/IRFM are outlined to define the next steps.

  11. Innovation of genetic algorithm code GenA for WWER fuel loading optimization

    International Nuclear Information System (INIS)

    Sustek, J.

    2005-01-01

One of the stochastic search techniques - genetic algorithms - was recently used for optimization of the arrangement of fuel assemblies (FA) in the cores of WWER-440 and WWER-1000 reactors. The basic algorithm was modified by incorporation of the SPEA scheme. Both were enhanced and some results are presented (Authors)
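The record gives no implementation details; purely as an illustration of the technique named in the abstract, a minimal permutation-based genetic algorithm for a loading-pattern-like assignment problem could look as follows. The toy core size, burnup data, and the surrogate "checkerboarding" objective are all invented; a real code such as GenA evaluates candidate loadings with a neutronics solver.

```python
import random

random.seed(1)
N = 12                                                 # toy core, 12 positions
burnup = [random.uniform(0.0, 40.0) for _ in range(N)]  # invented FA burnups

def fitness(perm):
    # Surrogate objective: reward alternating fresh and burnt assemblies
    # (sum of adjacent burnup differences), a crude stand-in for the
    # power-flattening goals a real loading optimizer pursues.
    return sum(abs(burnup[perm[i]] - burnup[perm[i + 1]]) for i in range(N - 1))

def crossover(a, b):
    # Order crossover (OX): the child is always a valid permutation.
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = a[i:j]
    rest = [g for g in b if g not in child]
    for k in range(N):
        if child[k] is None:
            child[k] = rest.pop(0)
    return child

pop = [random.sample(range(N), N) for _ in range(30)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                 # elitism: keep best 10
    children = [crossover(*random.sample(parents, 2)) for _ in range(20)]
    for ch in children:                                # swap mutation
        if random.random() < 0.3:
            i, j = random.sample(range(N), 2)
            ch[i], ch[j] = ch[j], ch[i]
    pop = parents + children
best = max(pop, key=fitness)
```

The permutation encoding with order crossover guarantees every candidate is a legal one-assembly-per-position loading, which is the main representational issue in applying GAs to this problem.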

  12. EDISON – Study on optimal grid integration of electric vehicles

    DEFF Research Database (Denmark)

    Foosnæs, Anders Holm; Andersen, Claus Amtrup; Christensen, Linda

    2011-01-01

    The Danish EDISON project has been launched to investigate how a large fleet of electric vehicles (EVs) can be integrated in a way that supports the electric grid while benefitting both individual car owners, and society as a whole through reductions in CO2 emissions. The consortium partners...

  13. SUSTAIN: Urban Modeling Systems Integrating Optimization and Economics

    Science.gov (United States)

    The System for Urban Stormwater Treatment and Analysis INtegration (SUSTAIN) was developed by the U.S. Environmental Protection Agency to support practitioners in developing cost-effective management plans for municipal storm water programs and evaluating and selecting Best Manag...

  14. Optimal control for integrated emission management in diesel engines

    NARCIS (Netherlands)

    Donkers, M.C.F.; van Schijndel, J.; Heemels, W.P.M.H.; Willems, F.

    2017-01-01

    Integrated Emission Management (IEM) is a supervisory control strategy that minimises operational costs (consisting of fuel and AdBlue) for diesel engines with an aftertreatment system, while satisfying emission constraints imposed by legislation. In most work on IEM, a suboptimal heuristic

  15. Optimal control for integrated emission management in diesel engines

    NARCIS (Netherlands)

    Donkers, M.C.F.; Schijndel, J. van; Heemels, W.P.M.H.; Willems, F.P.T.

    2016-01-01

    Integrated Emission Management (IEM) is a supervisory control strategy that minimises operational costs (consisting of fuel and AdBlue) for diesel engines with an aftertreatment system, while satisfying emission constraints imposed by legislation. In most work on IEM, a suboptimal heuristic

  16. Development of an optimized procedure bridging design and structural analysis codes for the automatized design of the SMART

    International Nuclear Information System (INIS)

    Kim, Tae Wan; Park, Keun Bae; Choi, Suhn; Kim, Kang Soo; Jeong, Kyeong Hoon; Lee, Gyu Mahn

    1998-09-01

In this report, an optimized design and analysis procedure is established for application to the SMART (System-integrated Modular Advanced ReacTor) development. The aim of the optimized procedure is to minimize time consumption and engineering effort by streamlining the design and feedback interactions. To achieve this goal, the data and information generated during design development should be transferred directly to the analysis programs with minimum manual operation. Verification of the design concept requires considerable effort, since communication between design and analysis involves a time-consuming stage of input-information conversion. In this report, an optimized procedure bridging the design and analysis stages is established utilizing IDEAS, ABAQUS and ANSYS. (author). 3 refs., 2 tabs., 5 figs

  17. On the Optimality of Repetition Coding among Rate-1 DC-offset STBCs for MIMO Optical Wireless Communications

    KAUST Repository

    Sapenov, Yerzhan

    2017-07-06

    In this paper, an optical wireless multiple-input multiple-output communication system employing intensity-modulation direct-detection is considered. The performance of direct current offset space-time block codes (DC-STBC) is studied in terms of pairwise error probability (PEP). It is shown that among the class of DC-STBCs, the worst case PEP corresponding to the minimum distance between two codewords is minimized by repetition coding (RC), under both electrical and optical individual power constraints. It follows that among all DC-STBCs, RC is optimal in terms of worst-case PEP for static channels and also for varying channels under any turbulence statistics. This result agrees with previously published numerical results showing the superiority of RC in such systems. It also agrees with previously published analytic results on this topic under log-normal turbulence and further extends it to arbitrary turbulence statistics. This shows the redundancy of the time-dimension of the DC-STBC in this system. This result is further extended to sum power constraints with static and turbulent channels, where it is also shown that the time dimension is redundant, and the optimal DC-STBC has a spatial beamforming structure. Numerical results are provided to demonstrate the difference in performance for systems with different numbers of receiving apertures and different throughput.

  18. Optimization of the Penelope code in F language for the simulation of the X-ray spectrum in radiodiagnosis

    International Nuclear Information System (INIS)

    Ballon P, C. I.; Quispe V, N. Y.; Vega R, J. L. J.

    2017-10-01

The computational simulation of X-ray spectra in the radiodiagnostic range enables the study of the transport of X-rays and their interaction with matter using the Monte Carlo method. From the obtained X-ray spectra we can determine the dose a patient receives when undergoing a radiographic or CT study, improving the quality of the obtained image. The objective of the present work was to implement and optimize the open-source code Penelope (a Monte Carlo code for the simulation of the transport of electrons and photons in matter), 2008 version, by programming extra code in the functional language F, managing to double the processing speed and thus reducing the simulation time and the errors incurred when optimizing the software originally programmed in Fortran 77. The results were compared with those of Penelope, obtaining good agreement. We also simulated a Pdd curve (percentage depth dose profile) for a Theratron Equinox cobalt-60 teletherapy unit, also validating the implemented software for high energies. (Author)

  19. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    Science.gov (United States)

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
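For intuition only, the layer-to-MCS assignment can be written down on a toy instance and solved exactly by enumeration, which coincides with the ILP optimum at this size. All rates, user classes, layer sizes, and utilities below are invented, not taken from the paper.

```python
from itertools import product

# Toy instance (all numbers invented): 3 SVC layers, 3 MCS options.
rate = [1.0, 2.0, 4.0]          # bits/slot: MCS 0 most robust, MCS 2 fastest
users_per_class = [4, 3, 3]     # users in class c can decode MCS 0..c
layer_bits = [2.0, 2.0, 2.0]    # bits per layer per scheduling period
utility_per_layer = [10, 5, 2]  # diminishing value of enhancement layers
T = 3.0                         # total time slots available

best = None
for mcs in product(range(3), repeat=3):        # one MCS per layer
    if list(mcs) != sorted(mcs):
        continue  # assume enhancement layers use an equal-or-faster MCS
    t = [layer_bits[l] / rate[mcs[l]] for l in range(3)]
    if sum(t) > T:
        continue                               # time-resource constraint
    total = 0
    for c, n in enumerate(users_per_class):
        # a user enjoys layer l only if it decodes the MCS of layers 0..l
        for l in range(3):
            if all(mcs[k] <= c for k in range(l + 1)):
                total += n * utility_per_layer[l]
            else:
                break
    if best is None or total > best[0]:
        best = (total, mcs, t)
print(best)
```

On this instance the optimum sends the base layer at the most robust MCS and the enhancement layers at the fastest one, which matches the qualitative trade-off the abstract describes.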

  20. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Dongyul Lee

    2014-01-01

Full Text Available The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  1. Low-complexity BCH codes with optimized interleavers for DQPSK systems with laser phase noise

    DEFF Research Database (Denmark)

    Leong, Miu Yoong; Larsen, Knud J.; Jacobsen, Gunnar

    2017-01-01

    The presence of high phase noise in addition to additive white Gaussian noise in coherent optical systems affects the performance of forward error correction (FEC) schemes. In this paper, we propose a simple scheme for such systems, using block interleavers and binary Bose...... simulations. For a target post-FEC BER of 10−6, codes selected using our method result in BERs around 3× target and achieve the target with around 0.2 dB extra signal-to-noise ratio....

  2. OPTIMIZATION OF ATM AND BRANCH CASH OPERATIONS USING AN INTEGRATED CASH REQUIREMENT FORECASTING AND CASH OPTIMIZATION MODEL

    Directory of Open Access Journals (Sweden)

    Canser BİLİR

    2018-04-01

Full Text Available In this study, an integrated cash requirement forecasting and cash inventory optimization model is implemented in both the branch and automated teller machine (ATM) networks of a mid-sized bank in Turkey to optimize the bank’s cash supply chain. The implemented model’s objective is to minimize the idle cash levels at both branches and ATMs without decreasing the customer service level (CSL), by providing the correct amount of cash at the correct location and time. To the best of our knowledge, the model is the first integrated model in the literature to be applied to both ATMs and branches simultaneously. The results demonstrated that the integrated model dramatically decreased the idle cash levels at both branches and ATMs without degrading the availability of cash and hence customer satisfaction. An in-depth analysis of the results also indicated that the gains were more remarkable for branches. The results also demonstrated that the use of appropriate seasonal indices plays a very critical role in forecasting the cash requirements of a bank. Another unique feature of the study is that the model is the first to include the recycling feature of ATMs. The results demonstrated that, by including suitable seasonal indices in the forecasting model, the integrated cash optimization models can be used to estimate the cash requirements of recycling ATMs.
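The paper's forecasting model is not reproduced in the abstract; a minimal sketch of the day-of-week seasonal-index idea it alludes to might look like this (the withdrawal history and the simple level-times-index forecast are invented for illustration):

```python
# Two weeks of daily ATM withdrawals (Mon..Sun), in thousands -- invented.
history = [
    120, 100, 110, 130, 180, 220, 90,
    118, 104, 108, 134, 176, 228, 86,
]

n_days = 7
weeks = len(history) // n_days
overall_mean = sum(history) / len(history)

# Seasonal index per weekday: that weekday's mean over the overall mean.
index = []
for d in range(n_days):
    day_mean = sum(history[w * n_days + d] for w in range(weeks)) / weeks
    index.append(day_mean / overall_mean)

# Forecast next week: recent level (last week's mean) times each index.
level = sum(history[-n_days:]) / n_days
forecast = [level * index[d] for d in range(n_days)]
print([round(f, 1) for f in forecast])
```

Indices above 1 mark heavy-withdrawal days (here the invented Friday/Saturday peak); the paper couples such forecasts with a cash inventory optimization stage.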

  3. PULSE: Integrated Parametric Modeling for a Shading System : From Daylight Optimization to Additive Manufacturing

    NARCIS (Netherlands)

    Teeling, M.V.M.T.; Turrin, M.; de Ruiter, P.; Turrin, Michela; Peters, Brady; O'Brien, William; Stouffs, Rudi; Dogan, Timur

    2017-01-01

    This paper presents a parametric approach to an integrated and performance-oriented design, from the conceptual design phase towards materialization. The novelty occurs in the use of parametric models as a way of integrating multidisciplinary design constraints, from daylight optimization to the

  4. OPTIMIZATION OF ATM AND BRANCH CASH OPERATIONS USING AN INTEGRATED CASH REQUIREMENT FORECASTING AND CASH OPTIMIZATION MODEL

    OpenAIRE

    Canser BİLİR

    2018-01-01

    In this study, an integrated cash requirement forecasting and cash inventory optimization model is implemented in both the branch and automated teller machine (ATM) networks of a mid-sized bank in Turkey to optimize the bank’s cash supply chain. The implemented model’s objective is to minimize the idle cash levels at both branches and ATMs without decreasing the customer service level (CSL) by providing the correct amount of cash at the correct location and time. To the best of our knowledge,...

  5. SEJITS: embedded specializers to turn patterns-based designs into optimized parallel code

    CERN Multimedia

    CERN. Geneva

    2012-01-01

All software should be parallel software. This is a natural result of the transition to a many-core world. For a small fraction of the world's programmers (efficiency programmers), this is not a problem. They enjoy mapping algorithms onto the details of a particular system and are well served by low-level languages and OpenMP, MPI, or OpenCL. Most programmers, however, are "domain specialists" who write code. They are too busy working in their domain of choice (such as physics) to master the intricacies of each computer they use. How do we make these programmers productive without giving up performance? We have been working with a team at UC Berkeley's ParLab to address this problem. The key is a clear software architecture, expressed in terms of design patterns, that exposes the concurrency in a problem. The resulting code is written using a patterns-based framework within a high-level productivity language (such as Python). Then a separate system is used by a small group o...

  6. Advanced hybrid and electric vehicles system optimization and vehicle integration

    CERN Document Server

    2016-01-01

This contributed volume contains the results of the research program “Agreement for Hybrid and Electric Vehicles”, funded by the International Energy Agency. The topical focus lies on technology options for the system optimization of hybrid and electric vehicle components and drive-train configurations which enhance the energy efficiency of the vehicle. The approach to the topic is genuinely interdisciplinary, covering insights from several fields. The target audience primarily comprises researchers and industry experts in the field of automotive engineering, but the book may also be beneficial for graduate students.

  7. Optimizing integrated reference cases in the OCTAVIUS project

    DEFF Research Database (Denmark)

    Kvamsdal, Hanne M.; Ehlers, Sören; Kather, Alfons

    2016-01-01

    . This is important especially for the coal fired power plant, where integration of waste heat from the capture plant or the CO2 compressor intercoolers can lead to a significant increase in overall efficiency. The configuration of intercoolers for the CO2 compressor is adapted to achieve the highest overall...... the CESAR, CAESAR, and DECARBit projects, two reference power plants are modelled in Ebsilon®Professional. The first is an 800 MWe coal case, the second a 430 MWe natural gas combined cycle (NGCC) case. For each power plant two separate capture plants are considered: one using 30 wt% MEA as solvent system...... efficiency. For the natural gas combined cycle plant, integration is not that beneficial, since there is no heat sink available in the water steam cycle. In the end, the cost of electricity and cost of CO2 avoided is calculated for all four cases. While the CESAR1 solvent system in a conventional absorber...

  8. Optimized connectome architecture for sensory-motor integration

    Directory of Open Access Journals (Sweden)

    Jacob C. Worrell

    2017-12-01

Full Text Available The intricate connectivity patterns of neural circuits support a wide repertoire of communication processes and functional interactions. Here we systematically investigate how neural signaling is constrained by anatomical connectivity in the mesoscale Drosophila (fruit fly) brain network. We use a spreading model that describes how local perturbations, such as external stimuli, trigger global signaling cascades that spread through the network. Through a series of simple biological scenarios we demonstrate that anatomical embedding potentiates sensory-motor integration. We find that signal spreading is faster from nodes associated with sensory transduction (sensors) to nodes associated with motor output (effectors). Signal propagation was accelerated if sensor nodes were activated simultaneously, suggesting a topologically mediated synergy among sensors. In addition, the organization of the network increases the likelihood of convergence of multiple cascades towards effector nodes, thereby facilitating integration prior to motor output. Moreover, effector nodes tend to coactivate more frequently than other pairs of nodes, suggesting an anatomically enhanced coordination of motor output. Altogether, our results show that the organization of the mesoscale Drosophila connectome imparts privileged, behaviorally relevant communication patterns among sensors and effectors, shaping their capacity to collectively integrate information. The complex network spanned by neurons and their axonal projections promotes a diverse set of functions. In the present report, we study how the topological organization of the fruit fly brain supports sensory-motor integration. Using a simple communication model, we demonstrate that the topology of this network allows efficient coordination among sensory and motor neurons. Our results suggest that brain network organization may profoundly shape the functional repertoire of this simple organism.
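As a toy illustration of the kind of spreading model described (the graph, thresholds, and node names below are invented; the actual Drosophila connectome is far larger), a linear-threshold cascade shows how simultaneous sensor activation can reach an effector faster than either sensor alone:

```python
# Toy directed "connectome" with a linear-threshold spreading model.
# Graph, thresholds, and node names are invented for illustration.
in_edges = {
    "a": ["s1"], "b": ["s2"], "x": ["s1"], "y": ["x"], "z": ["y"],
    "c": ["a", "b"],          # integrator node: needs both inputs
    "e1": ["c", "z"],         # effector: fast route via c, slow route via z
}
threshold = {n: 1 for n in in_edges}
threshold["c"] = 2            # c fires only when a AND b are active

def activation_time(seeds, target, max_steps=20):
    # Synchronous update: a node activates once enough of its
    # in-neighbours are active; return the step at which target fires.
    active = set(seeds)
    for t in range(1, max_steps + 1):
        newly = {n for n, ins in in_edges.items()
                 if n not in active
                 and sum(i in active for i in ins) >= threshold[n]}
        if not newly:
            return None
        active |= newly
        if target in active:
            return t
    return None

# One sensor must take the long s1->x->y->z route; two sensors unlock
# the integrator c and reach the effector a step earlier.
print(activation_time(["s1"], "e1"), activation_time(["s1", "s2"], "e1"))
```

The speed-up from co-activated seeds is a miniature version of the "topologically mediated synergy among sensors" reported in the abstract.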

  9. Performance of an improved logarithmic phase mask with optimized parameters in a wavefront-coding system.

    Science.gov (United States)

    Zhao, Hui; Li, Yingcai

    2010-01-10

In two papers [Proc. SPIE 4471, 272-280 (2001) and Appl. Opt. 43, 2709-2721 (2004)], a logarithmic phase mask was proposed and proved to be effective in extending the depth of field; however, according to our research, this mask is not ideal, because the corresponding defocused modulation transfer function has large oscillations in the low-frequency region, even when the mask is optimized. So, in a previously published paper [Opt. Lett. 33, 1171-1173 (2008)], we proposed an improved logarithmic phase mask by making a small modification. The new mask not only eliminates these drawbacks to a certain extent but is also even less sensitive to focus errors according to the Fisher information criterion. However, that performance comparison was carried out without optimizing the modified mask, which was not a fair basis for comparison. In this manuscript, we first optimize the modified logarithmic phase mask and then analyze its performance, obtaining more convincing results based on several frequently used metrics.

  10. An integrated DEA-COLS-SFA algorithm for optimization and policy making of electricity distribution units

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Omrani, H.; Eivazy, H.

    2009-01-01

This paper presents an integrated data envelopment analysis (DEA)-corrected ordinary least squares (COLS)-stochastic frontier analysis (SFA)-principal component analysis (PCA)-numerical taxonomy (NT) algorithm for performance assessment, optimization and policy making of electricity distribution units. Previous studies have generally used input-output DEA models for benchmarking and evaluation of electricity distribution units. This study, however, proposes an integrated, flexible approach to measure and rank the units and to choose the best version of the DEA method for optimization and policy-making purposes. It covers both static and dynamic aspects of the information environment due to the involvement of SFA, which is finally compared with the best DEA model through the Spearman correlation technique. The integrated approach yields improved ranking and optimization of electricity distribution systems. To illustrate the usability and reliability of the proposed algorithm, 38 electricity distribution units in Iran have been considered, ranked and optimized with the proposed algorithm.

  11. Maintenance modeling and optimization integrating human and material resources

    International Nuclear Information System (INIS)

    Martorell, S.; Villamizar, M.; Carlos, S.; Sanchez, A.

    2010-01-01

    Maintenance planning is a subject of concern to many industrial sectors as plant safety and business depend on it. Traditionally, the maintenance planning is formulated in terms of a multi-objective optimization (MOP) problem where reliability, availability, maintainability and cost (RAM+C) act as decision criteria and maintenance strategies (i.e. maintenance tasks intervals) act as the only decision variables. However the appropriate development of each maintenance strategy depends not only on the maintenance intervals but also on the resources (human and material) available to implement such strategies. Thus, the effect of the necessary resources on RAM+C needs to be modeled and accounted for in formulating the MOP affecting the set of objectives and constraints. In this paper RAM+C models to explicitly address the effect of human resources and material resources (spare parts) on RAM+C criteria are proposed. This extended model allows accounting for explicitly how the above decision criteria depends on the basic model parameters representing the type of strategies, maintenance intervals, durations, human resources and material resources. Finally, an application case is performed to optimize the maintenance plan of a motor-driven pump equipment considering as decision variables maintenance and test intervals and human and material resources.
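As a one-variable, hedged stand-in for the multi-objective problem described (all rates, durations, and the availability target below are invented), the core trade-off can be sketched: longer test intervals cut testing cost and test downtime, but raise the chance of an undetected failure.

```python
# Periodically tested standby component -- all parameters invented.
lam = 1e-6       # failure rate per hour
tau = 4.0        # test downtime per test, hours
c_test = 500.0   # cost per test
HOURS = 8760.0   # hours per year

def unavailability(T):
    # Classic approximation: undetected-failure term + test-downtime term.
    return lam * T / 2 + tau / T

def yearly_test_cost(T):
    return c_test * HOURS / T

# Longest test interval that still meets the availability target is the
# cheapest feasible plan (cost falls monotonically with T) -- a 1-D
# stand-in for the paper's RAM+C multi-objective optimization.
target = 5e-3
feasible = [T for T in range(100, 8000, 50) if unavailability(T) < target]
best_T = max(feasible)
print(best_T, round(yearly_test_cost(best_T), 1))
```

The paper's contribution is to make the resource side explicit: in a fuller model, the achievable test duration and cost coefficients themselves depend on the human and spare-part resources chosen as decision variables.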

  12. Maintenance modeling and optimization integrating human and material resources

    Energy Technology Data Exchange (ETDEWEB)

    Martorell, S., E-mail: smartore@iqn.upv.e [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Villamizar, M.; Carlos, S. [Dpto. Ingenieria Quimica y Nuclear, Universidad Politecnica Valencia (Spain); Sanchez, A. [Dpto. Estadistica e Investigacion Operativa Aplicadas y Calidad, Universidad Politecnica Valencia (Spain)

    2010-12-15

    Maintenance planning is a subject of concern to many industrial sectors as plant safety and business depend on it. Traditionally, the maintenance planning is formulated in terms of a multi-objective optimization (MOP) problem where reliability, availability, maintainability and cost (RAM+C) act as decision criteria and maintenance strategies (i.e. maintenance tasks intervals) act as the only decision variables. However the appropriate development of each maintenance strategy depends not only on the maintenance intervals but also on the resources (human and material) available to implement such strategies. Thus, the effect of the necessary resources on RAM+C needs to be modeled and accounted for in formulating the MOP affecting the set of objectives and constraints. In this paper RAM+C models to explicitly address the effect of human resources and material resources (spare parts) on RAM+C criteria are proposed. This extended model allows accounting for explicitly how the above decision criteria depends on the basic model parameters representing the type of strategies, maintenance intervals, durations, human resources and material resources. Finally, an application case is performed to optimize the maintenance plan of a motor-driven pump equipment considering as decision variables maintenance and test intervals and human and material resources.

  13. Evaluating Maximum Photovoltaic Integration in District Distribution Systems Considering Optimal Inverter Dispatch and Cloud Shading Conditions

    DEFF Research Database (Denmark)

    Ding, Tao; Kou, Yu; Yang, Yongheng

    2017-01-01

    . However, the intermittency of solar PV energy (e.g., due to passing clouds) may affect the PV generation in the district distribution network. To address this issue, the voltage magnitude constraints under the cloud shading conditions should be taken into account in the optimization model, which can......As photovoltaic (PV) integration increases in distribution systems, to investigate the maximum allowable PV integration capacity for a district distribution system becomes necessary in the planning phase, an optimization model is thus proposed to evaluate the maximum PV integration capacity while...

  14. Comparative evaluation of structural integrity for ITER blanket shield block based on SDC-IC and ASME code

    Energy Technology Data Exchange (ETDEWEB)

    Shim, Hee-Jin [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Ha, Min-Su, E-mail: msha12@nfri.re.kr [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Sa-Woong; Jung, Hun-Chea [ITER Korea, National Fusion Research Institute, 169-148 Gwahak-Ro, Yuseong-Gu, Daejeon (Korea, Republic of); Kim, Duck-Hoi [ITER Organization, Route de Vinon sur Verdon - CS 90046, 13067 Sant Paul Lez Durance (France)

    2016-11-01

Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study only covers the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, a thermo-mechanical analysis was performed under inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO) and based on the RCC-MR code. Generally, the ASME code (especially B&PV Sec. III) is widely applied for the design of nuclear components and is usually considered more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with SDC-IC in terms of the applied K_e factor, the design fatigue curve and other factors. Therefore, an accurate comparison of fatigue assessments is needed to quantify this conservatism. The purpose of this study is to compare the fatigue usage resulting from the specified operating conditions evaluated for the Shield Block based on both SDC-IC and the ASME code, and to discuss the conservatism of the results.
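The fatigue usage factor at the centre of this comparison is, in both codes, a Miner-rule cumulative sum of (applied cycles)/(allowed cycles) over the load spectrum, with the allowed cycles read from a design fatigue curve. A hedged sketch follows; the curve points and the transient spectrum are invented, not ITER data.

```python
import math

# Design fatigue curve as a table of (alternating stress S_alt in MPa,
# allowed cycles N_allow), interpolated log-log between points. Invented.
curve = [(600.0, 1e3), (400.0, 1e4), (250.0, 1e5), (150.0, 1e6)]

def allowed_cycles(s_alt):
    for (s1, n1), (s2, n2) in zip(curve, curve[1:]):
        if s2 <= s_alt <= s1:
            f = (math.log10(s1) - math.log10(s_alt)) / \
                (math.log10(s1) - math.log10(s2))
            return 10 ** (math.log10(n1) + f * (math.log10(n2) - math.log10(n1)))
    raise ValueError("stress outside curve range")

# Postulated operating transients: (alternating stress MPa, applied cycles).
spectrum = [(450.0, 500), (300.0, 5000), (180.0, 30000)]

# Miner's rule: cumulative usage factor, required to stay below 1.0.
usage = sum(n / allowed_cycles(s) for s, n in spectrum)
print(round(usage, 3))
```

The two codes differ not in this summation but in the inputs to it: the plasticity correction (K_e) applied to the alternating stress and the design fatigue curve itself, which is why the same spectrum yields different usage factors.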

  15. Comparative evaluation of structural integrity for ITER blanket shield block based on SDC-IC and ASME code

    International Nuclear Information System (INIS)

    Shim, Hee-Jin; Ha, Min-Su; Kim, Sa-Woong; Jung, Hun-Chea; Kim, Duck-Hoi

    2016-01-01

Highlights: • The procedure of structural integrity and fatigue assessment is described. • Case studies were performed according to both the SDC-IC and ASME Sec. III codes. • The conservatism of the ASME code is demonstrated. • The study only covers the specifically comparable case of the fatigue usage factor. - Abstract: The ITER blanket Shield Block is a bulk structure that absorbs radiation and provides thermal shielding to the vacuum vessel and external vessel components; the most significant load for the Shield Block is therefore the thermal load. In a previous study, a thermo-mechanical analysis was performed under inductive operation as the representative loading condition, and fatigue evaluations were conducted to assure the structural integrity of the Shield Block according to the Structural Design Criteria for In-vessel Components (SDC-IC) provided by the ITER Organization (IO) and based on the RCC-MR code. Generally, the ASME code (especially B&PV Sec. III) is widely applied for the design of nuclear components and is usually considered more conservative than other specific codes. From the viewpoint of fatigue assessment, the ASME code is very conservative compared with SDC-IC in terms of the applied K_e factor, the design fatigue curve and other factors. Therefore, an accurate comparison of fatigue assessments is needed to quantify this conservatism. The purpose of this study is to compare the fatigue usage resulting from the specified operating conditions evaluated for the Shield Block based on both SDC-IC and the ASME code, and to discuss the conservatism of the results.

  16. Synthesis of the ASTEC integral code activities in SARNET – Focus on ASTEC V2 plant applications

    International Nuclear Information System (INIS)

    Chatelard, P.; Reinke, N.; Ezzidi, A.; Lombard, V.; Barnak, M.; Lajtha, G.; Slaby, J.; Constantin, M.; Majumdar, P.

    2014-01-01

    Highlights: • Independent assessment of the ASTEC severe accident code vs. experiments is summarised. • Main remaining modelling issues and development perspectives are identified. • Independent assessment of the ASTEC code at full-scale conditions is described. • Main requirements to address BWR and PHWR types of reactors are identified. - Abstract: Among the 43 organisations which joined the SARNET2 FP7 project from 2009 to 2013, 31 have been involved in the activities on the ASTEC code. This paper presents a synthesis of the main achievements obtained on the ASTEC V2 integral code, jointly developed by IRSN (France) and GRS (Germany), regarding development, validation against experimental data and applications at full-scale conditions for both Gen.II and Gen.III plants. As to code development, while the current V2.0 series of ASTEC versions was continuously improved (elaboration and release by IRSN and GRS of three successive V2.0 revisions), IRSN and GRS also intensively continued in parallel the elaboration of the second ASTEC V2 major version (version V2.1), to be delivered at the end of 2014. Regarding code validation against experiments, the partners have assessed the V2.0 version and subsequent revisions against more than 50 experiments; this extended assessment notably confirmed that most models are today close to the state of the art, while it also corroborated the already known key topics on which modelling efforts should focus in priority. As to plant applications, the comparison of ASTEC results with other codes allows concluding on a globally good agreement for in-vessel and ex-vessel severe accident progression. As to ASTEC adaptations to BWR and PHWR, significant achievements have been obtained through the elaboration and integration in the future V2.1 version of dedicated core degradation models, notably to account for multiple coolant flows

  17. XML-Based Generator of C++ Code for Integration With GUIs

    Science.gov (United States)

    Hua, Hook; Oyafuso, Fabiano; Klimeck, Gerhard

    2003-01-01

    An open source computer program has been developed to satisfy a need for simplified organization of structured input data for scientific simulation programs. Typically, such input data are parsed in from a flat American Standard Code for Information Interchange (ASCII) text file into computational data structures. Also typically, when a graphical user interface (GUI) is used, there is a need to completely duplicate the input information while providing it to a user in a more structured form. Heretofore, the duplication of the input information has entailed duplication of software efforts and increased susceptibility to software errors because of the concomitant need to maintain two independent input-handling mechanisms. The present program implements a method in which the input data for a simulation program are completely specified in an Extensible Markup Language (XML)-based text file. The key benefit of XML is that it stores input data in a structured manner; more importantly, XML allows not just storing data but also describing what each data item is. The XML file thus contains information useful for rendering the data by other applications. The program then generates data structures in the C++ language that are used in the simulation program. In this method, all input data are specified in one place only, and it is easy to integrate the data structures into both the simulation program and the GUI. XML-to-C is useful in two ways: 1. As an executable, it generates the corresponding C++ classes, and 2. As a library, it automatically fills the objects with the input data values.
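The XML-to-C idea in this record can be sketched in miniature: the snippet below (a hypothetical illustration, not the actual XML-to-C tool; the element names and the type mapping are assumptions) parses an XML input specification and emits a C++ struct with default-initialized members, so the same file can drive both the simulation code and a GUI.

```python
import xml.etree.ElementTree as ET

def generate_cpp_class(xml_text: str) -> str:
    """Emit a C++ struct from an XML input specification.

    Each <param name=... type=... default=...> element becomes a typed
    member with an in-class default initializer.
    """
    root = ET.fromstring(xml_text)
    lines = [f"struct {root.get('name')} {{"]
    for param in root.findall("param"):
        cpp_type = {"int": "int", "real": "double", "string": "std::string"}[param.get("type")]
        default = param.get("default")
        if param.get("type") == "string":
            default = f'"{default}"'
        lines.append(f"    {cpp_type} {param.get('name')} = {default};")
    lines.append("};")
    return "\n".join(lines)

spec = """
<input name="SimulationInput">
    <param name="num_steps" type="int" default="1000"/>
    <param name="tolerance" type="real" default="1e-6"/>
    <param name="output_file" type="string" default="out.dat"/>
</input>
"""

print(generate_cpp_class(spec))
```

Because the XML carries both values and descriptions of the fields, the single specification file can be rendered by a GUI and compiled into the simulation, removing the duplicated input-handling path.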

  18. DOUBLE SHELL TANK INTEGRITY PROJECT HIGH LEVEL WASTE CHEMISTRY OPTIMIZATION

    International Nuclear Information System (INIS)

    WASHENFELDER DJ

    2008-01-01

    The U.S. Department of Energy (DOE) Office of River Protection (ORP) has a continuing program of chemistry optimization to better characterize the corrosion behavior of High-Level Waste (HLW). The DOE controls the chemistry in its HLW to minimize the propensity for localized corrosion, such as pitting, and stress corrosion cracking (SCC) in nitrate-containing solutions. By improving the control of localized corrosion and SCC, the ORP can increase the life of the Double-Shell Tank (DST) carbon steel structural components and reduce overall mission costs. The carbon steel tanks at the Hanford Site are critical to the mission of safely managing stored HLW until it can be treated for disposal. The DOE has historically used additions of sodium hydroxide to retard corrosion processes in HLW tanks; this also increases the amount of waste to be treated. Reactions with carbon dioxide from the air and with solid chemical species in the tank continually deplete the hydroxide ion concentration, which then requires continued additions. The DOE can reduce overall costs for caustic addition and treatment of waste, and more effectively utilize waste storage capacity, by minimizing these chemical additions. Hydroxide addition is a means to control localized corrosion and stress corrosion cracking in carbon steel by providing a passive environment. The exact mechanism by which nitrate drives the corrosion process is not yet clear. SCC is less of a concern in the newer stress-relieved double-shell tanks due to reduced residual stress, and the optimization of waste chemistry will further reduce the propensity for SCC. The corrosion testing performed to optimize waste chemistry included cyclic potentiodynamic polarization studies, slow strain rate tests, and stress intensity factor/crack growth rate determinations. Laboratory experimental evidence suggests that nitrite is a highly effective inhibitor for pitting and SCC in alkaline nitrate environments. Revision of the corrosion control

  19. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
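The dynamic programming idea behind INDDGO can be illustrated in its simplest setting, a tree (effectively a width-1 decomposition): each node keeps two values, the best achievable weight with that node excluded or included. This is a generic textbook sketch, not code from the INDDGO package.

```python
def max_weight_independent_set(tree, weights, root=0):
    """Maximum weighted independent set on a rooted tree via DP.

    tree: adjacency list {node: [children, ...]} of a rooted tree.
    For each node v we compute (best weight with v excluded,
    best weight with v included); children of an included node
    must be excluded.
    """
    def solve(v):
        exclude, include = 0, weights[v]
        for child in tree.get(v, []):
            c_ex, c_in = solve(child)
            exclude += max(c_ex, c_in)   # child free to be in or out
            include += c_ex              # child must be excluded
        return exclude, include
    return max(solve(root))

# Path graph 0-1-2-3 rooted at 0; the heavy endpoints form the best set.
tree = {0: [1], 1: [2], 2: [3]}
weights = {0: 4, 1: 1, 2: 1, 3: 4}
print(max_weight_independent_set(tree, weights))  # 8 (nodes 0 and 3)
```

On a general graph, the same two-table idea generalizes to one table per bag of the tree decomposition, with cost exponential in the bag size (the width), which is exactly the trade-off the record describes.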

  20. Routing and Scheduling in Tramp Shipping - Integrating Bunker Optimization

    DEFF Research Database (Denmark)

    Vilhelmsen, Charlotte; Lusby, Richard Martin; Larsen, Jesper

    A tramp ship operator typically has some contracted cargoes that must be carried and seeks to maximize profit by carrying optional cargoes. Hence, tramp ships operate much like taxis, following available cargoes and not according to a fixed route network and itinerary as liner ships do. Marine fuel....... We devise a solution method based on column generation with a dynamic programming algorithm to generate columns. The method is heuristic mainly due to a discretization of the continuous bunker purchase variables. We show that the integrated planning approach can increase profits and that the decision...

  1. Routing and Scheduling in Tramp Shipping - Integrating Bunker Optimization

    DEFF Research Database (Denmark)

    Vilhelmsen, Charlotte; Lusby, Richard Martin; Larsen, Jesper

    A tramp ship operator typically has some contracted cargoes that must be carried and seeks to maximize profit by carrying optional cargoes. Hence, tramp ships operate much like taxis, following available cargoes and not according to a fixed route network and itinerary as liner ships do. Marine fuel...... and bunker consumption. We devise a solution method based on column generation with a dynamic programming algorithm to generate columns. The method is heuristic mainly due to a discretization of the continuous bunker purchase variables. We show that the integrated planning approach can increase profits...

  2. Optimal integral force feedback for active vibration control

    Science.gov (United States)

    Teo, Yik R.; Fleming, Andrew J.

    2015-11-01

    This paper proposes an improvement to Integral Force Feedback (IFF), which is a popular method for active vibration control of structures and mechanical systems. Benefits of IFF include robustness, guaranteed stability and simplicity. However, the maximum damping performance is dependent on the stiffness of the system; hence, some systems cannot be adequately controlled. In this paper, an improvement to the classical force feedback control scheme is proposed. The improved method achieves arbitrary damping for any mechanical system by introducing a feed-through term. The proposed improvement is experimentally demonstrated by actively damping an objective lens assembly for a high-speed confocal microscope.

  3. Reactive Robustness and Integrated Approaches for Railway Optimization Problems

    DEFF Research Database (Denmark)

    Haahr, Jørgen Thorlund

    journeys helps the driver to drive efficiently and enhances robustness in a realistic (dynamic) environment. Four international scientific prizes have been awarded for distinct parts of the research during the course of this PhD project. The first prize was awarded for work during the "2014 RAS Problem...... to absorb or withstand unexpected events such as delays. Making robust plans is central in order to maintain a safe and timely railway operation. This thesis focuses on reactive robustness, i.e., the ability to react once a plan is rendered infeasible in operation due to disruptions. In such time...... Solving Competition", where a freight yard optimization problem was considered. The second junior (PhD) prize was awarded for the work performed in the "ROADEF/EURO Challenge 2014: Trains don't vanish!", where the planning of rolling stock movements at a large station was considered. An honorable mention...

  4. Optimal Dimensioning of Broadband Integrated Services Digital Network

    Directory of Open Access Journals (Sweden)

    Zdenka Chmelikova

    2005-01-01

    Full Text Available The tasks of this paper are to study the influence of the statistical parameters of the input flow and the parameter requirements relating to VP (Virtual Path) or VC (Virtual Channel) dimensioning. It is necessary to consider different flow arrival times and different holding times at the connection level. The input flow arrival process is considered to be a Poisson process, and the holding time is considered to be exponentially distributed. Permanent allocation of VPs is made by separate VPs, where each VP enables transmitting the offered load with an explicit bandwidth and an explicit loss. A mathematical model was created to verify the above-mentioned dependences for different types of telecommunications signals and different input flows. The "Comnet III" software was selected for experimental verification of the process optimization. The simulation model, which simulates ATM network traffic for different input flows, was based on this software.
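For Poisson arrivals and exponential holding times, as assumed in this record, the loss probability of a path carrying a given offered load on a fixed number of circuits is given by the classical Erlang B formula. The sketch below uses the standard numerically stable recursion; it is a generic illustration, not the paper's Comnet III model.

```python
def erlang_b(offered_load: float, channels: int) -> float:
    """Blocking probability for Poisson arrivals and exponential
    holding times on `channels` circuits (Erlang B), computed with
    the stable recursion B(0) = 1, B(n) = A*B(n-1) / (n + A*B(n-1)).
    """
    b = 1.0
    for n in range(1, channels + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

# Blocking for 10 erlangs offered to 15 circuits.
print(round(erlang_b(10.0, 15), 4))
```

Dimensioning then amounts to choosing the smallest number of circuits per VP/VC for which the computed blocking stays below the explicit loss target.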

  5. Integrating robust timetabling in line plan optimization for railway systems

    DEFF Research Database (Denmark)

    Burggraeve, Sofie; Bull, Simon Henry; Vansteenwegen, Pieter

    2017-01-01

    We propose a heuristic algorithm to build a railway line plan from scratch that minimizes passenger travel time and operator cost and for which a feasible and robust timetable exists. A line planning module and a timetabling module work iteratively and interactively. The line planning module......, but is constrained by limited shunt capacity. While the operator and passenger cost remain close to those of the initially and (for these costs) optimally built line plan, the timetable corresponding to the finally developed robust line plan significantly improves the minimum buffer time, and thus the robustness...... creates an initial line plan. The timetabling module evaluates the line plan and identifies a critical line based on minimum buffer times between train pairs. The line planning module proposes a new line plan in which the time length of the critical line is modified in order to provide more flexibility...

  6. SU-F-T-193: Evaluation of a GPU-Based Fast Monte Carlo Code for Proton Therapy Biological Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Taleei, R; Qin, N; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States); Peeler, C [UT MD Anderson Cancer Center, Houston, TX (United States); Jia, X [The University of Texas Southwestern Medical Ctr, Dallas, TX (United States)

    2016-06-15

    Purpose: Biological treatment plan optimization is of great interest for proton therapy. It requires extensive Monte Carlo (MC) simulations to compute physical dose and biological quantities. Recently, a gPMC package was developed for rapid MC dose calculations on a GPU platform. This work investigated its suitability for proton therapy biological optimization in terms of accuracy and efficiency. Methods: We performed simulations of a proton pencil beam with energies of 75, 150 and 225 MeV in a homogeneous water phantom using gPMC and FLUKA. Physical dose and energy spectra for each ion type on the central beam axis were scored. Relative Biological Effectiveness (RBE) was calculated using repair-misrepair-fixation model. Microdosimetry calculations were performed using Monte Carlo Damage Simulation (MCDS). Results: Ranges computed by the two codes agreed within 1 mm. Physical dose difference was less than 2.5 % at the Bragg peak. RBE-weighted dose agreed within 5 % at the Bragg peak. Differences in microdosimetric quantities such as dose average lineal energy transfer and specific energy were < 10%. The simulation time per source particle with FLUKA was 0.0018 sec, while gPMC was ∼ 600 times faster. Conclusion: Physical dose computed by FLUKA and gPMC were in a good agreement. The RBE differences along the central axis were small, and RBE-weighted dose difference was found to be acceptable. The combined accuracy and efficiency makes gPMC suitable for proton therapy biological optimization.
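The kind of code-to-code comparison reported above (e.g. the physical dose difference at the Bragg peak) reduces to a simple relative-difference check between depth-dose curves; the numbers below are toy values for illustration, not data from the study.

```python
def peak_dose_difference(dose_ref, dose_test):
    """Relative difference (%) between two depth-dose curves,
    evaluated at the reference curve's Bragg peak (its maximum)."""
    peak = max(range(len(dose_ref)), key=dose_ref.__getitem__)
    return 100.0 * (dose_test[peak] - dose_ref[peak]) / dose_ref[peak]

fluka = [0.30, 0.35, 0.45, 0.70, 1.00, 0.20]   # toy depth-dose, peak at index 4
gpmc  = [0.31, 0.34, 0.46, 0.71, 0.98, 0.21]
print(round(peak_dose_difference(fluka, gpmc), 1))  # -2.0
```

The same check applied to RBE-weighted curves (physical dose multiplied pointwise by RBE) gives the 5 % Bragg-peak agreement figure quoted in the record.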

  7. MagRad: A code to optimize the operation of superconducting magnets in a radiation environment

    International Nuclear Information System (INIS)

    Yeaw, C.T.

    1995-01-01

    A powerful computational tool, called MagRad, has been developed which optimizes magnet design for operation in radiation fields. Specifically, MagRad has been used for the analysis and design modification of the cable-in-conduit conductors of the TF magnet systems in fusion reactor designs. Since the TF magnets must operate in a radiation environment which damages the material components of the conductor and degrades their performance, the optimization of conductor design must account not only for start-up magnet performance, but also shut-down performance. The degradation in performance consists primarily of three effects: reduced stability margin of the conductor; a transition out of the well-cooled operating regime; and an increased maximum quench temperature attained in the conductor. Full analysis of the magnet performance over the lifetime of the reactor includes: radiation damage to the conductor, stability, protection, steady state heat removal, shielding effectiveness, optimal annealing schedules, and finally costing of the magnet and reactor. Free variables include primary and secondary conductor geometric and compositional parameters, as well as fusion reactor parameters. A means of dealing with the radiation damage to the conductor, namely high temperature superconductor anneals, is proposed, examined, and demonstrated to be both technically feasible and cost effective. Additionally, two relevant reactor designs (ITER CDA and ARIES-II/IV) have been analyzed. Upon addition of pure copper strands to the cable, the ITER CDA TF magnet design was found to be marginally acceptable, although much room for both performance improvement and cost reduction exists. A cost reduction of 10-15% of the capital cost of the reactor can be achieved by adopting a suitable superconductor annealing schedule. In both of these reactor analyses, the performance predictive capability of MagRad and its associated costing techniques have been demonstrated

  8. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    International Nuclear Information System (INIS)

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2015-01-01

    Highlights: • COBRA-TF was adopted by the Consortium for Advanced Simulation of LWRs. • We have improved code performance to support running large-scale LWR simulations. • Code optimization has led to reductions in execution time and memory usage. • An MPI parallelization has reduced full-core simulation time from days to minutes. - Abstract: This paper describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations—including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices—are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a “single program multiple data” parallelization strategy targeting distributed memory “multiple instruction multiple data” platforms utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard Message-Passing Interface (MPI) calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pressurized water reactor (PWR) pre-processor utility that uses a greatly simplified set of user input compared with the traditional CTF input. To run CTF in
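CTF's parallelization assigns one MPI process per fuel assembly. The helper below is a hedged, generic sketch of that assembly-to-rank mapping (CTF's actual decomposition is driven by per-domain input files, not by a function like this); it degenerates to the one-assembly-per-process case when the counts match.

```python
def assign_assemblies(num_assemblies: int, num_ranks: int):
    """Map each fuel assembly index to an MPI rank, one contiguous
    block per rank; ranks differ in size by at most one assembly."""
    base, extra = divmod(num_assemblies, num_ranks)
    mapping, start = {}, 0
    for rank in range(num_ranks):
        count = base + (1 if rank < extra else 0)
        for assembly in range(start, start + count):
            mapping[assembly] = rank
        start += count
    return mapping

# 7 assemblies on 3 ranks -> contiguous blocks of sizes 3, 2, 2.
print(assign_assemblies(7, 3))
```

With one process per assembly, inter-assembly crossflow terms become the only data exchanged, which is where the strategically placed MPI calls mentioned in the abstract come in.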

  9. A Perceptual Model for Sinusoidal Audio Coding Based on Spectral Integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Holdt Jensen, S.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  10. A perceptual model for sinusoidal audio coding based on spectral integration

    NARCIS (Netherlands)

    Van de Par, S.; Kohlrausch, A.; Heusdens, R.; Jensen, J.; Jensen, S.H.

    2005-01-01

    Psychoacoustical models have been used extensively within audio coding applications over the past decades. Recently, parametric coding techniques have been applied to general audio and this has created the need for a psychoacoustical model that is specifically suited for sinusoidal modelling of

  11. Integration of QR codes into an anesthesia information management system for resident case log management.

    Science.gov (United States)

    Avidan, Alexander; Weissman, Charles; Levin, Phillip D

    2015-04-01

    Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. A code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced in an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
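A QR code of the kind described carries a structured case payload. The sketch below builds a plausible JSON payload (the field names are illustrative assumptions, not the study's actual syllabus schema); a library such as `qrcode` would then render the string as a scannable image.

```python
import json

def case_log_payload(case_id, procedure, asa_class, anesthesia_type):
    """Build the JSON string that would be encoded into a QR code at
    the end of a case, for scanning into a resident's case log."""
    return json.dumps({
        "case_id": case_id,
        "procedure": procedure,
        "asa_class": asa_class,
        "anesthesia": anesthesia_type,
    }, sort_keys=True)

payload = case_log_payload("2015-0412", "laparoscopic appendectomy", 2, "general")
print(payload)
```

Keeping the payload as plain JSON means any smartphone QR scanner can recover the structured record without a round trip to the hospital database.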

  12. Optimization of the German integrated information and measurement system (IMIS)

    International Nuclear Information System (INIS)

    Wirth, E.; Weiss, W.

    2002-01-01

    The Chernobyl accident led to widespread contamination of the environment in most European countries. In Germany, as in all other countries, it took some time to evaluate the radiological situation, time which is extremely valuable in the early phases of an accident when decisions on countermeasures like sheltering, iodine prophylaxis or evacuation have to be taken. For better emergency preparedness, the Integrated Information and Measurement System (IMIS) has been developed and established in Germany. In case of widespread contamination of the environment, the system provides the decision makers with all information necessary to evaluate the radiological situation and to decide on countermeasures. The system is presently being upgraded through the adoption of the European decision-support system RODOS and through improvement of the national information exchange, for which the web-based information system ELAN has been developed. The national systems have to be integrated into the European and international communication systems. In this presentation the IMIS system is briefly described, and the new features and modules of the system are discussed in greater detail

  13. Exergoeconomic optimization of integrated geothermal system in Simav, Kutahya

    International Nuclear Information System (INIS)

    Arslan, Oguz; Kose, Ramazan

    2010-01-01

    The aim of this study is to investigate the integrated use of the geothermal resources in the Kutahya-Simav region, Turkey. Although geothermal energy has been in use for years in other countries, integrated use of the geothermal fluid is new in Turkey. The high temperature of the geothermal fluid in the Simav field makes it possible to utilize it for electricity generation, space heating and balneology. In this regard, a multiple-use complex has been proposed there in order to use the energy of the geothermal fluid more efficiently. The possibility of electricity generation by a binary cycle was therefore researched first. After the electricity generation process, the waste geothermal fluid is conducted to residences, and later to greenhouses, for heating purposes in the field. Twenty-one different models have been formed and analyzed using exergy and LCC methods. In conclusion, the pre-feasibility study indicates that utilization of this geothermal capacity for multiple uses would be an attractive investment for the Simav region.

  14. Development and Application of a Plant Code to the Analysis of Transients in Integrated Reactors

    International Nuclear Information System (INIS)

    Rabiti, A.; Gimenez, M.; Delmastro, D.; Zanocco, P.

    2003-01-01

    In this work, a secondary system model for a CAREM-25 type nuclear power plant was developed. A two-phase flow homogeneous model was used and found adequate for the scope of the present work. A finite difference scheme was used for the numerical implementation of the model. This model was coupled to the HUARPE code, a primary circuit code, in order to obtain a plant code. This plant code was used to analyze the inherent response of the system, without control feedback loops, for a transient of steam generator feed-water mass flow reduction. The results obtained are satisfactory, but a validation against other plant codes is still necessary

  15. Economic dispatch optimization for system integrating renewable energy sources

    Science.gov (United States)

    Jihane, Kartite; Mohamed, Cherkaoui

    2018-05-01

    Nowadays, the use of energy is growing, especially in the transportation and electricity industries. However, this energy is largely based on conventional sources, which pollute the environment. A multi-source system is seen as the best path to sustainable development. This paper proposes the economic dispatch (ED) of a hybrid renewable power system. The hybrid system is composed of ten thermal generators, a photovoltaic (PV) generator and a wind turbine generator. To show the importance of renewable energy sources (RES) in the energy mix, we ran the simulation for the system with PV only and with PV plus wind. The results show that the system with RES outperforms the system without RES in terms of fuel cost.
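Economic dispatch with renewables can be sketched by treating RES output as negative load and dispatching the thermal units at equal incremental cost; adding PV or wind then shrinks the residual demand and hence the fuel cost. The quadratic cost data below are invented for illustration, not the paper's ten-generator system.

```python
def economic_dispatch(demand, renewables, gens):
    """Dispatch thermal units to cover demand minus renewable output
    using equal incremental cost (lambda bisection). Each generator
    is (b, c, pmin, pmax) with fuel cost b*P + c*P^2."""
    residual = demand - renewables

    def total_output(lam):
        total = 0.0
        for b, c, pmin, pmax in gens:
            p = (lam - b) / (2 * c)          # dC/dP = b + 2cP = lambda
            total += min(max(p, pmin), pmax)
        return total

    lo, hi = 0.0, 1000.0                     # bracket for lambda
    for _ in range(100):
        mid = (lo + hi) / 2
        if total_output(mid) < residual:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    return [min(max((lam - b) / (2 * c), pmin), pmax) for b, c, pmin, pmax in gens]

gens = [(10.0, 0.05, 0.0, 200.0), (12.0, 0.10, 0.0, 150.0)]
dispatch = economic_dispatch(demand=250.0, renewables=50.0, gens=gens)
print([round(p, 1) for p in dispatch])  # [140.0, 60.0]
```

Raising `renewables` lowers the residual demand, pulls both thermal outputs down the merit order, and reduces total fuel cost, which is the effect the study quantifies.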

  16. Integrated and Modular Design of an Optimized Process Architecture

    Directory of Open Access Journals (Sweden)

    Colin Raßfeld

    2013-07-01

    Full Text Available Global economic integration has increased the complexity of business activities, so organizations are forced to become more efficient every day. Process organization is a very useful way of aligning organizational systems with business processes. However, an organization must do more than just focus its attention and efforts on processes; the layout design also has a significant impact on system performance. We contribute to this field by developing a tailored process-oriented organizational structure and a new layout design for the quality assurance of a leading German automotive manufacturer. The target concept we developed was evaluated by process owners and an IT-based process simulation. Our results provide solid empirical back-up in which the performance and effects are assessed from a qualitative and quantitative perspective

  17. Optimizing production and imperfect preventive maintenance planning's integration in failure-prone manufacturing systems

    International Nuclear Information System (INIS)

    Aghezzaf, El-Houssaine; Khatab, Abdelhakim; Tam, Phuoc Le

    2016-01-01

    This paper investigates the issue of integrating production and maintenance planning in a failure-prone manufacturing system. It is assumed that the system's operating state is stochastically predictable, in terms of its operating age, and that it can accordingly be preventively maintained during preplanned periods. Preventive maintenance is assumed to be imperfect, that is when performed, it brings the manufacturing system to an operating state that lies between ‘as bad as old’ and ‘as good as new’. Only an overhauling of the system brings it to a ‘as good as new’ operating state again. A practical integrated production and preventive maintenance planning model, that takes into account the system's manufacturing capacity and its operational reliability state, is developed. The model is naturally formulated as a mixed-integer non-linear optimization problem, for which an extended mixed-integer linear reformulation is proposed. This reformulation, while it solves the proposed integrated planning problem to optimality, remains quite demanding in terms of computational time. A fix-and-optimize procedure, that takes advantage of some properties of the original model, is then proposed. The reformulation and the fix-and-optimize procedure are tested on some test instances adapted from those available in the literature. The results show that the proposed fix-and-optimize procedure performs quite well and opens new research direction for future improvements. - Highlights: • Integration of production planning and imperfect preventive maintenance is explored. • Imperfect maintenance is modeled using a fitting age reduction hybrid hazard rate. • A practical approximate optimization model for this integration is proposed. • The resulting naturally MINL optimization model is reformulated and solved as a MILP. • An effective fix-and-optimize procedure is proposed for large instances of this MILP.
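The "between as bad as old and as good as new" behaviour of imperfect PM is commonly captured by an age-reduction model, in which each maintenance action rescales the system's virtual age. The sketch below is a minimal illustration of that idea (the improvement factor and the PM schedule are assumptions, not the paper's hybrid hazard-rate model, which also adjusts the hazard function itself).

```python
def effective_age(age_before_pm: float, improvement: float) -> float:
    """Age-reduction model of imperfect preventive maintenance:
    improvement = 0 leaves the system 'as bad as old',
    improvement = 1 restores it 'as good as new'."""
    return (1.0 - improvement) * age_before_pm

ages = []
age = 0.0
for period in range(4):          # a PM action at the end of each 10-unit period
    age += 10.0                  # operating time accumulated in the period
    age = effective_age(age, improvement=0.5)
    ages.append(age)
print(ages)  # [5.0, 7.5, 8.75, 9.375]
```

The residual age creeps upward despite repeated PM, which is why the integrated plan must eventually schedule an overhaul (improvement = 1) to recover full capacity.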

  18. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case

  19. Comparison of experimental pulse-height distributions in germanium detectors with integrated-tiger-series-code predictions

    International Nuclear Information System (INIS)

    Beutler, D.E.; Halbleib, J.A.; Knott, D.P.

    1989-01-01

    This paper reports pulse-height distributions in two different types of Ge detectors measured for a variety of medium-energy x-ray bremsstrahlung spectra. These measurements have been compared to predictions using the integrated tiger series (ITS) Monte Carlo electron/photon transport code. In general, the authors find excellent agreement between experiments and predictions using no free parameters. These results demonstrate that the ITS codes can predict the combined bremsstrahlung production and energy deposition with good precision (within measurement uncertainties). The one region of disagreement observed occurs for low-energy (<50 keV) photons using low-energy bremsstrahlung spectra. In this case the ITS codes appear to underestimate the produced and/or absorbed radiation by almost an order of magnitude

  20. Integrated System-Level Optimization for Concurrent Engineering With Parametric Subsystem Modeling

    Science.gov (United States)

    Schuman, Todd; DeWeck, Oliver L.; Sobieski, Jaroslaw

    2005-01-01

    The introduction of concurrent design practices to the aerospace industry has greatly increased the productivity of engineers and teams during design sessions as demonstrated by JPL's Team X. Simultaneously, advances in computing power have given rise to a host of potent numerical optimization methods capable of solving complex multidisciplinary optimization problems containing hundreds of variables, constraints, and governing equations. Unfortunately, such methods are tedious to set up and require significant amounts of time and processor power to execute, thus making them unsuitable for rapid concurrent engineering use. This paper proposes a framework for Integration of System-Level Optimization with Concurrent Engineering (ISLOCE). It uses parametric neural-network approximations of the subsystem models. These approximations are then linked to a system-level optimizer that is capable of reaching a solution quickly due to the reduced complexity of the approximations. The integration structure is described in detail and applied to the multiobjective design of a simplified Space Shuttle external fuel tank model. Further, a comparison is made between the new framework and traditional concurrent engineering (without system optimization) through an experimental trial with two groups of engineers. Each method is evaluated in terms of optimizer accuracy, time to solution, and ease of use. The results suggest that system-level optimization, running as a background process during integrated concurrent engineering sessions, is potentially advantageous as long as it is judiciously implemented.
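The surrogate-plus-optimizer idea in this record can be conveyed with a toy sketch. The actual ISLOCE framework uses parametric neural-network approximations of the subsystem models; here a simple quadratic fit stands in for the surrogate, and the subsystem function, grid and step size are invented for illustration:

```python
# Sketch of surrogate-assisted system-level optimization. A cheap quadratic
# model is fit around the best coarse sample of an "expensive" subsystem
# analysis, then the surrogate's vertex gives a refined optimum.

def expensive_subsystem(x):
    # Hypothetical stand-in for a costly subsystem evaluation.
    return (x - 2.0) ** 2 + 1.0

def parabola_vertex(f, x_best, h):
    # Fit a parabola through (x-h, x, x+h) and return its vertex.
    y0, y1, y2 = f(x_best - h), f(x_best), f(x_best + h)
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return x_best
    return x_best + 0.5 * h * (y0 - y2) / denom

# Coarse grid search with the expensive model, then surrogate refinement.
h = 0.5
grid = [h * k for k in range(11)]          # 0.0, 0.5, ..., 5.0
x_coarse = min(grid, key=expensive_subsystem)
x_opt = parabola_vertex(expensive_subsystem, x_coarse, h)
print(x_opt)  # refined minimizer
```

In the full framework the surrogate would be retrained in the background as new subsystem evaluations arrive during the concurrent engineering session.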

  1. Flow analysis and port optimization of geRotor pump using commercial CFD code

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Byung Jo; Seong, Seung Hak; Yoon, Soon Hyun [Pusan National Univ., Pusan (Korea, Republic of)

    2005-07-01

    The geRotor pump is widely used in the automotive industry for fuel lift, injection, and engine oil lubrication, and also in transmission systems. A CFD study of the pump, which is characterized by transient flow with moving rotor boundaries, has been performed to obtain the optimal shape of the inlet/outlet port of the pump. Various shapes of the port have been tested to investigate how they affect flow rates and fluctuations. Based on the parametric study, an optimum shape has been determined for maximum flow rate and minimum fluctuations. The result has been confirmed by experiments. For the optimization, the Taguchi method has been adopted. The groove shape has been found to be the most important factor among the several selected parameters related to flow rate and fluctuations.
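Taguchi-style level selection of the kind used here can be sketched as follows. The factors, levels and trial values below are invented for illustration; a real study would fill an orthogonal array with CFD runs:

```python
import math

# Hypothetical flow-rate trials for two design factors, each at two levels:
# (groove_level, port_angle_level) -> repeated measurements.
trials = {
    (0, 0): [9.1, 9.0, 8.9],
    (0, 1): [9.4, 9.5, 9.3],
    (1, 0): [10.1, 10.0, 10.2],
    (1, 1): [10.4, 10.6, 10.5],
}

def sn_larger_is_better(values):
    # Taguchi signal-to-noise ratio for a "larger is better" response.
    return -10.0 * math.log10(sum(1.0 / v ** 2 for v in values) / len(values))

def best_level(factor_index):
    # Average the S/N ratio over all runs at each level, pick the best level.
    levels = {}
    for key, vals in trials.items():
        levels.setdefault(key[factor_index], []).append(sn_larger_is_better(vals))
    return max(levels, key=lambda lv: sum(levels[lv]) / len(levels[lv]))

print(best_level(0), best_level(1))  # best groove level, best port-angle level
```

A "smaller is better" S/N ratio (`-10*log10(mean(v**2))`) would be used for the fluctuation objective.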

  2. Optimal coding-decoding for systems controlled via a communication channel

    Science.gov (United States)

    Yi-wei, Feng; Guo, Ge

    2013-12-01

    In this article, we study the problem of controlling plants over a signal-to-noise ratio (SNR) constrained communication channel. Different from previous research, this article emphasises the importance of the actual channel model and coder/decoder in the study of network performance. Our major objectives include coder/decoder design for an additive white Gaussian noise (AWGN) channel with both the standard network configuration and the Youla parameter network architecture. We find that the optimal coder and decoder can be realised for different network configurations. The results are useful in determining the minimum channel capacity needed in order to stabilise plants over communication channels. The coder/decoder obtained can be used to analyse the effect of uncertainty on the channel capacity. An illustrative example is provided to show the effectiveness of the results.

  3. Optimized logarithmic phase masks used to generate defocus invariant modulation transfer function for wavefront coding system.

    Science.gov (United States)

    Zhao, Hui; Li, Yingcai

    2010-08-01

    In a previous Letter [Opt. Lett. 33, 1171 (2008)], we proposed an improved logarithmic phase mask by making modifications to the original one designed by Sherif. However, further studies in another paper [Appl. Opt. 49, 229 (2010)] show that even when the Sherif mask and the improved one are optimized, their corresponding defocused modulation transfer functions (MTFs) are still not stable with respect to focus errors. So, by further modifying their phase profiles, we design another two logarithmic phase masks that exhibit more stable defocused MTF. However, with the defocus-induced phase effect considered, we find that the performance of the two masks proposed in this Letter is better than the Sherif mask, but worse than our previously proposed phase mask, according to the Hilbert space angle.

  4. ActiWiz – optimizing your nuclide inventory at proton accelerators with a computer code

    CERN Document Server

    Vincke, Helmut

    2014-01-01

    When operating an accelerator one always faces unwanted, but inevitable beam losses. These result in activation of adjacent material, which in turn has an obvious impact on safety and handling constraints. One of the key parameters responsible for activation is the chemical composition of the material which often can be optimized in that respect. In order to facilitate this task also for non-expert users the ActiWiz software has been developed at CERN. Based on a large amount of generic FLUKA Monte Carlo simulations the software applies a specifically developed risk assessment model to provide support to decision makers especially during the design phase as well as common operational work in the domain of radiation protection.

  5. The integrated code system CASCADE-3D for advanced core design and safety analysis

    International Nuclear Information System (INIS)

    Neufert, A.; Van de Velde, A.

    1999-01-01

    The new program system CASCADE-3D (Core Analysis and Safety Codes for Advanced Design Evaluation) links several of Siemens' advanced code packages for in-core fuel management and accident analysis: SAV95, PANBOX/COBRA and RELAP5. Consequently, by using CASCADE-3D the potential of modern fuel assemblies and in-core fuel management strategies can be much better utilized, because safety margins which had been reduced due to conservative methods are now predicted more accurately. With this innovative code system customers can now take full advantage of the recent progress in fuel assembly design and in-core fuel management. (author)

  6. An enhancement of selection and crossover operations in real-coded genetic algorithm for large-dimensionality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Kwak, Noh Sung; Lee, Jongsoo [Yonsei University, Seoul (Korea, Republic of)

    2016-01-15

    The present study aims to implement a new selection method and a novel crossover operation in a real-coded genetic algorithm. The proposed selection method facilitates the establishment of a successively evolved population by combining several subpopulations: an elitist subpopulation, an offspring subpopulation and a mutated subpopulation. A probabilistic crossover is performed based on the measure of probabilistic distance between the individuals. The concept of ‘allowance’ is suggested to describe the level of variance in the crossover operation. A number of nonlinear/non-convex functions and engineering optimization problems are explored to verify the capacities of the proposed strategies. The results are compared with those obtained from other genetic and nature-inspired algorithms.
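A minimal sketch of a real-coded GA that assembles each generation from elitist, offspring and mutated subpopulations, with a BLX-style crossover whose spread parameter plays the role of the paper's 'allowance'. The objective, population sizes and rates are illustrative assumptions, not the authors' settings:

```python
import random

random.seed(1)

def sphere(x):
    # Toy minimization objective (not from the paper).
    return sum(v * v for v in x)

DIM, POP, GENS, ELITE = 5, 30, 80, 4
LOW, HIGH = -5.0, 5.0

def blend_crossover(a, b, allowance=0.3):
    # BLX-alpha style crossover: sample around the parents, widened by an
    # 'allowance' factor that controls the level of variance.
    child = []
    for x, y in zip(a, b):
        lo, hi = min(x, y), max(x, y)
        span = (hi - lo) * allowance
        child.append(random.uniform(lo - span, hi + span))
    return child

def mutate(ind, rate=0.2, scale=0.5):
    return [v + random.gauss(0.0, scale) if random.random() < rate else v
            for v in ind]

pop = [[random.uniform(LOW, HIGH) for _ in range(DIM)] for _ in range(POP)]
init_best = min(sphere(ind) for ind in pop)

for _ in range(GENS):
    pop.sort(key=sphere)
    elites = pop[:ELITE]                                    # elitist subpopulation
    offspring = [blend_crossover(random.choice(elites),     # offspring subpopulation
                                 random.choice(pop[:POP // 2]))
                 for _ in range(POP - 2 * ELITE)]
    mutants = [mutate(random.choice(elites))                # mutated subpopulation
               for _ in range(ELITE)]
    pop = elites + offspring + mutants

best = min(sphere(ind) for ind in pop)
print(best <= init_best)  # elitism guarantees no regression
```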

  7. Biodiesel production from Jatropha curcas: Integrated process optimization

    International Nuclear Information System (INIS)

    Huerga, Ignacio R.; Zanuttini, María Soledad; Gross, Martín S.; Querini, Carlos A.

    2014-01-01

    Highlights: • The oil obtained from Jatropha curcas fruits has high variability in its properties. • A process for biodiesel production has been developed for small scale projects. • Oil neutralization with the glycerine phase has important advantages. • The glycerine phase and the meal are adequate to produce biogas. - Abstract: Energy obtained from renewable sources has increased its participation in the energy matrix worldwide, and it is expected to maintain this tendency. Both in large and small scales, there have been numerous developments and research efforts aimed at generating fuels and energy from different raw materials such as alternative crops, algae and lignocellulosic residues. In this work, a Jatropha curcas plantation in the northwest of Argentina was studied, with the objective of developing integrated processes for small and medium-sized farms. In these cases, glycerine purification and meal detoxification processes represent a very high cost and usually are not included in the project. Consequently, alternative uses for these products are proposed. This study includes a two-year evaluation of the Jatropha curcas crop, covering yields and oil properties. The solids left after oil extraction were evaluated as solid fuels, the glycerine and the meal were used to generate biogas, and the oil was used to produce biodiesel. The oil pretreatment was carried out with the glycerine obtained in the biodiesel production process, thus neutralizing the free fatty acids and decreasing the phosphorous and water content

  8. An integrated optimization for organic Rankine cycle based on entransy theory and thermodynamics

    International Nuclear Information System (INIS)

    Li, Tailu; Fu, Wencheng; Zhu, Jialing

    2014-01-01

    The organic Rankine cycle has become one of the essential heat-work conversion technologies. Many optimization methods aim to improve system efficiency, but they rely mainly on engineering experience and numerical simulation rather than theoretical analysis. A theoretical integrated optimization method was established based on the entransy theory and thermodynamics, with the ratio of the net power output to the ratio of the total thermal conductance to the thermal conductance in the condenser as the objective function. The system parameters besides the optimal pinch point temperature difference were obtained. The results show that the mass flow rate of the working fluid is inversely proportional to the evaporating temperature. An optimal evaporating temperature maximizes the net power output, and the maximal net power output corresponds to the maximal entransy loss and to the change points of the heat source outlet temperature and of the change rates of the entropy generation and the entransy dissipation. Moreover, the net power output and the total thermal conductance are inversely proportional to the pinch point temperature difference, so the two objectives conflict with each other. Under the specified condition, the optimal operating parameters are ascertained, with an optimal pinch point temperature difference of 5 K. - Highlights: • We establish an integrated optimization model for the organic Rankine cycle. • The model combines the entransy theory with thermodynamics. • The maximal net power output corresponds to the maximal entransy loss. • The pinch point temperature difference is optimized to be 5 K
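The existence of an optimal evaporating temperature can be illustrated with a toy net-power model and a one-dimensional search. The model below is a generic heat-input-times-efficiency trade-off, not the paper's entransy-based formulation, and all temperatures are invented:

```python
def net_power(T_evap, T_source=400.0, T_cond=310.0):
    # Toy trade-off: raising T_evap improves the Carnot-like efficiency but
    # shrinks the heat that can be absorbed from the finite heat source.
    heat_in = max(0.0, T_source - T_evap)
    efficiency = 1.0 - T_cond / T_evap
    return heat_in * efficiency

# Grid search over candidate evaporating temperatures (Kelvin):
candidates = [310.0 + k for k in range(1, 91)]
T_opt = max(candidates, key=net_power)
print(T_opt)  # interior maximum between T_cond and T_source
```

The maximizer sits strictly between the condensing and source temperatures, mirroring the interior optimum reported in the abstract.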

  9. Integrating multi-objective optimization with computational fluid dynamics to optimize boiler combustion process of a coal fired power plant

    International Nuclear Information System (INIS)

    Liu, Xingrang; Bansal, R.C.

    2014-01-01

    Highlights: • A coal fired power plant boiler combustion process model based on real data. • We propose multi-objective optimization with CFD to optimize boiler combustion. • The proposed method uses software CORBA C++ and ANSYS Fluent 14.5 with AI. • It optimizes heat flux transfers and maintains temperature to avoid ash melt. - Abstract: The dominant role of electricity generation and environmental considerations have placed strong requirements on coal fired power plants, requiring them to improve boiler combustion efficiency and decrease carbon emissions. Although neural network based optimization strategies are often applied to improve coal fired power plant boiler efficiency, they are limited by some combustion related problems such as slagging. Slagging can seriously influence the heat transfer rate and decrease boiler efficiency. In addition, it is difficult to measure slag build-up. This lack of measurement restricts conventional neural network based coal fired boiler optimization, because no data are available to train the neural network. This paper proposes a novel method of integrating non-dominated sorting genetic algorithm (NSGA II) based multi-objective optimization with computational fluid dynamics (CFD) to decrease or even avoid slagging inside a coal fired boiler furnace and improve boiler combustion efficiency. Compared with conventional neural network based boiler optimization methods, the method developed in this work can control and optimize the fields of flue gas properties, such as the temperature field inside a boiler, by adjusting the temperature and velocity of primary and secondary air in coal fired power plant boiler control systems. The temperature in the vicinity of the water wall tubes of a boiler can be maintained within the ash melting temperature limit, so that incoming ash particles cannot melt and bond to the surfaces of the boiler's heat transfer equipment and the trend of slagging inside the furnace is controlled.
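The non-dominated sorting at the core of NSGA-II can be sketched independently of any CFD coupling. This is a minimal, unoptimized version for a minimization problem with invented objective values; the real algorithm uses faster domination-count bookkeeping:

```python
def dominates(p, q):
    # p dominates q if it is no worse in every objective and strictly
    # better in at least one (minimization convention).
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def non_dominated_fronts(points):
    # Peel off successive Pareto fronts until every point is ranked.
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(non_dominated_fronts(pts))  # → [[0, 1, 2], [3], [4]]
```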

  10. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    Science.gov (United States)

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first is the maximization of methane percentage with a single output. The second is the maximization of biogas production with a single output. The last is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of the input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models increases biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
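The optimization half of such a model can be sketched with a basic particle swarm maximizing a stand-in for the trained predictor. The quadratic surrogate, bounds and PSO coefficients below are illustrative assumptions, not the facility's model:

```python
import random

random.seed(0)

def surrogate(x):
    # Hypothetical stand-in for a trained prediction model (e.g. methane %),
    # maximized at x = (3, -1).
    return -(x[0] - 3.0) ** 2 - (x[1] + 1.0) ** 2 + 10.0

DIM, SWARM, ITERS = 2, 20, 100
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                  # per-particle best positions
gbest = max(pos, key=surrogate)[:]           # swarm-wide best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            # Inertia + cognitive pull (pbest) + social pull (gbest).
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                         + 1.5 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if surrogate(pos[i]) > surrogate(pbest[i]):
            pbest[i] = pos[i][:]
        if surrogate(pos[i]) > surrogate(gbest):
            gbest = pos[i][:]

print([round(v, 2) for v in gbest])
```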

  11. Integration of numerical analysis tools for automated numerical optimization of a transportation package design

    International Nuclear Information System (INIS)

    Witkowski, W.R.; Eldred, M.S.; Harding, D.C.

    1994-01-01

    The use of state-of-the-art numerical analysis tools to determine the optimal design of a radioactive material (RAM) transportation container is investigated. The design of a RAM package's components involves a complex coupling of structural, thermal, and radioactive shielding analyses. The final design must adhere to very strict design constraints. The current technique used by cask designers is uncoupled and involves designing each component separately with respect to its driving constraint. With the use of numerical optimization schemes, the complex couplings can be considered directly, and the performance of the integrated package can be maximized with respect to the analysis conditions. This can lead to more efficient package designs. Thermal and structural accident conditions are analyzed in the shape optimization of a simplified cask design. In this paper, details of the integration of numerical analysis tools, development of a process model, nonsmoothness difficulties with the optimization of the cask, and preliminary results are discussed

  12. ITS Version 6 : the integrated TIGER series of coupled electron/photon Monte Carlo transport codes.

    Energy Technology Data Exchange (ETDEWEB)

    Franke, Brian Claude; Kensek, Ronald Patrick; Laub, Thomas William

    2008-04-01

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 6, the latest version of ITS, contains (1) improvements to the ITS 5.0 codes, and (2) conversion to Fortran 90. The general user friendliness of the software has been enhanced through memory allocation to reduce the need for users to modify and recompile the code.

  13. Evaluation of effective J-integral value for 3-D TWC pipe in ABAQUS code

    International Nuclear Information System (INIS)

    Yang, J. S.; You, K. W.; Sung, K. B.; Jung, W. T.; Kim, B. N.

    1999-01-01

    This paper suggests a simple method to estimate the effective J-integral values in applying Leak-Before-Break (LBB) technology to nuclear piping systems. In this paper, the effective J-integral estimates were calculated using the energy domain integral approach with the ABAQUS computer program. In this case, there existed an apparent variation of J-integral values along the crack line through the thickness of the pipe. For this reason, several case studies have been performed to evaluate the effective J-integral value. From the results, it was concluded that the simple method suggested in this paper can be effectively used in estimating the effective J-integral value

  14. Stochastic algorithm for channel optimized vector quantization: application to robust narrow-band speech coding

    International Nuclear Information System (INIS)

    Bouzid, M.; Benkherouf, H.; Benzadi, K.

    2011-01-01

    In this paper, we propose a stochastic joint source-channel scheme developed for efficient and robust encoding of spectral speech LSF parameters. The encoding system, named LSF-SSCOVQ-RC, is an LSF encoding scheme based on a reduced-complexity stochastic split vector quantizer optimized for a noisy channel. For transmissions over a noisy channel, we first show that our LSF-SSCOVQ-RC encoder outperforms the conventional LSF encoder designed with the split vector quantizer. After that, we applied the LSF-SSCOVQ-RC encoder (with weighted distance) to the robust encoding of the LSF parameters of the 2.4 Kbits/s MELP speech coder operating over a noisy/noiseless channel. The simulation results show that the proposed LSF encoder, incorporated in the MELP coder, ensures better performance than the original MELP MSVQ of 25 bits/frame, especially when the transmission channel is highly disturbed. Indeed, we show that the LSF-SSCOVQ-RC yields significant improvement in LSF encoding performance by ensuring reliable transmissions over a noisy channel.
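As background, the vector quantizers underlying such schemes are trained with Lloyd-style iterations. The sketch below designs a plain 2-bit scalar codebook; a channel-optimized VQ would additionally weight the distortion by channel transition probabilities, which is omitted here, and the data and initialization are illustrative:

```python
import random

random.seed(3)

# Plain Lloyd (k-means) codebook training for a scalar Gaussian source.
samples = [random.gauss(0.0, 1.0) for _ in range(2000)]
codebook = [-1.0, -0.3, 0.3, 1.0]          # initial 2-bit (4-level) codebook

for _ in range(30):
    # Nearest-neighbor partition, then centroid update per cell.
    cells = {i: [] for i in range(len(codebook))}
    for s in samples:
        i = min(range(len(codebook)), key=lambda k: (s - codebook[k]) ** 2)
        cells[i].append(s)
    codebook = [sum(c) / len(c) if c else codebook[i] for i, c in cells.items()]

distortion = sum(min((s - c) ** 2 for c in codebook) for s in samples) / len(samples)
print([round(c, 2) for c in codebook], round(distortion, 3))
```

For a unit-variance Gaussian, the trained 4-level quantizer should approach the known optimal mean-squared distortion of roughly 0.12.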

  15. Optimization of multi-objective integrated process planning and scheduling problem using a priority based optimization algorithm

    Science.gov (United States)

    Ausaf, Muhammad Farhan; Gao, Liang; Li, Xinyu

    2015-12-01

    For increasing the overall performance of modern manufacturing systems, effective integration of process planning and scheduling functions has been an important area of consideration among researchers. Owing to the complexity of handling process planning and scheduling simultaneously, most of the research work has been limited to solving the integrated process planning and scheduling (IPPS) problem for a single objective function. As there are many conflicting objectives when dealing with process planning and scheduling, real world problems cannot be fully captured considering only a single objective for optimization. Therefore considering the multi-objective IPPS (MOIPPS) problem is inevitable. Unfortunately, only a handful of research papers are available on solving the MOIPPS problem. In this paper, an optimization algorithm for solving the MOIPPS problem is presented. The proposed algorithm uses a set of dispatching rules coupled with priority assignment to optimize the IPPS problem for various objectives like makespan, total machine load, and total tardiness. A fixed sized external archive coupled with a crowding distance mechanism is used to store and maintain the non-dominated solutions. To compare the results with other algorithms, a C-metric based method has been used. Instances from four recent papers have been solved to demonstrate the effectiveness of the proposed algorithm. The experimental results show that the proposed method is an efficient approach for solving the MOIPPS problem.
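The crowding-distance mechanism used to maintain such an external archive can be sketched as follows. The objective values are illustrative; boundary points receive infinite distance so they are always retained when the archive is truncated:

```python
def crowding_distance(front):
    # front: list of objective tuples on one non-dominated front.
    # Returns one crowding distance per point (larger = less crowded).
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")   # keep boundary points
        if hi == lo:
            continue
        for k in range(1, n - 1):
            # Normalized gap between each point's nearest neighbors.
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / (hi - lo)
    return dist

front = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
print(crowding_distance(front))  # → [inf, 2.0, inf]
```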

  16. Development of a Deterministic Optimization Model for Design of an Integrated Utility and Hydrogen Supply Network

    International Nuclear Information System (INIS)

    Hwangbo, Soonho; Lee, In-Beum; Han, Jeehoon

    2014-01-01

    Many networks are constructed in a large-scale industrial complex. Each network meets its demands through the production or transportation of the materials needed by the companies in the network. A network either produces materials directly to satisfy demand or purchases them from outside, owing to demand uncertainty, financial factors, and so on. Utility networks and hydrogen networks, in particular, are typical and major networks in a large-scale industrial complex. Many studies have focused mainly on minimizing the total cost or optimizing the network structure, but few try to build an integrated model that connects the utility network and the hydrogen network. In this study, a deterministic mixed integer linear programming model is developed for integrating the utility network and the hydrogen network. A steam methane reforming (SMR) process is needed to combine the two networks: hydrogen produced by SMR, using steam vented from the utility network as raw material, enters the hydrogen network and meets its demands. The proposed model can suggest an optimized configuration and blueprint for the integrated network and calculate the optimal total cost. The capability of the proposed model is tested by applying it to the Yeosu industrial complex in Korea, which contains one of the largest petrochemical complexes and on whose data various papers are based. In a case study, the integrated network model yields better conclusions than previous results obtained by studying the utility network and the hydrogen network individually
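The flavor of such a deterministic planning model can be conveyed by a tiny brute-force version. A real formulation would be a mixed integer linear program handed to an LP/MILP solver; all capacities and costs here are invented:

```python
# Toy make-or-buy decision: meet a hydrogen demand either with on-site SMR
# units fed by surplus steam, or by outside purchase, at minimum total cost.
DEMAND = 10          # hydrogen demand, arbitrary units
SMR_CAP = 4          # hydrogen output per SMR unit
SMR_FIXED = 6.0      # capital cost per SMR unit built
SMR_VAR = 0.5        # operating cost per unit of hydrogen from SMR
BUY = 3.0            # price per unit of hydrogen purchased outside

best = None
for units in range(0, 4):            # integer decision: number of SMR units
    produced = min(units * SMR_CAP, DEMAND)
    bought = DEMAND - produced
    cost = units * SMR_FIXED + produced * SMR_VAR + bought * BUY
    if best is None or cost < best[0]:
        best = (cost, units, bought)

print(best)  # → (22.0, 2, 2): build 2 SMR units, buy the remaining 2 units
```

In the actual model these would be decision variables and constraints over the whole complex, with the steam balance linking the two networks.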

  17. Developing Optimal Procedure of Emergency Outside Cooling Water Injection for APR1400 Extended SBO Scenario Using MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Jong Rok; Oh, Seung Jong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    In this study, we examined optimum operator actions to mitigate an extended SBO using the MARS code. In particular, this paper focuses on analyzing an outside core cooling water injection scenario and aims to develop an optimal extended SBO procedure. Supplying outside emergency cooling water is the key feature of the flexible strategy in an extended SBO situation. An optimum strategy to maintain core cooling is developed for a typical extended SBO. The MARS APR1400 best-estimate model was used to find the optimal procedure, and the effect of RCP seal leakage was given particular consideration. The recent Fukushima accident showed the importance of mitigation capability against extended SBO scenarios. In Korea, all nuclear power plants have incorporated various measures against Fukushima-like events. For the APR1400 NPP, outside connectors are installed to inject cooling water using fire trucks or portable pumps. Using these connectors, outside cooling water can be provided to the reactor, the steam generators (SG), the containment spray system, and the spent fuel pool. In the U.S., a similar approach was chosen to provide a diverse and flexible means of preventing fuel damage (core and SFP) in external event conditions resulting in an extended loss of AC power and loss of the ultimate heat sink. Hence, the hardware necessary to cope with an extended SBO is already available for the APR1400. However, considering the complex and stressful conditions encountered by operators during an extended SBO, it is important to develop guidelines/procedures to best cope with the event.

  18. Optimal Placement of A Heat Pump in An Integrated Power and Heat Energy System

    DEFF Research Database (Denmark)

    Klyapovskiy, Sergey; You, Shi; Bindner, Henrik W.

    2017-01-01

    With the present trend towards Smart Grids and Smart Energy Systems it is important to look for opportunities for integrated development between different energy sectors, such as electricity, heating, gas and transportation. This paper investigates the problem of optimal placement of a heat pump – a component that links electric and heating utilities together. The system used to demonstrate the integrated planning approach has two neighboring 10kV feeders and several distribution substations with loads that require central heating from the heat pump. The optimal location is found...

  19. The duration of uncertain times: audiovisual information about intervals is integrated in a statistically optimal fashion.

    Directory of Open Access Journals (Sweden)

    Jess Hartcher-O'Brien

    Full Text Available Often multisensory information is integrated in a statistically optimal fashion where each sensory source is weighted according to its precision. This integration scheme is statistically optimal because it theoretically results in unbiased perceptual estimates with the highest precision possible. There is a current lack of consensus about how the nervous system processes multiple sensory cues to elapsed time. In order to shed light upon this, we adopt a computational approach to pinpoint the integration strategy underlying duration estimation of audio/visual stimuli. One of the assumptions of our computational approach is that the multisensory signals redundantly specify the same stimulus property. Our results clearly show that despite claims to the contrary, perceived duration is the result of an optimal weighting process, similar to that adopted for estimates of space. That is, participants weight the audio and visual information to arrive at the most precise, single duration estimate possible. The work also disentangles how different integration strategies - i.e. considering the time of onset/offset of signals - might alter the final estimate. As such we provide the first concrete evidence of an optimal integration strategy in human duration estimates.
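Reliability-weighted integration of the kind described here has a closed form under Gaussian assumptions: each cue is weighted by its inverse variance, and the fused variance is never larger than that of either cue. A minimal sketch with invented duration values and variances:

```python
def integrate(estimates, variances):
    # Maximum-likelihood cue combination: weight each estimate by its
    # reliability (inverse variance); the fused variance is 1/sum(weights).
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    fused = sum(w * e for w, e in zip(weights, estimates)) / total
    fused_var = 1.0 / total
    return fused, fused_var

# An auditory and a visual duration estimate (seconds), audition more precise:
duration, var = integrate([0.50, 0.56], [0.01, 0.04])
print(round(duration, 3), round(var, 4))  # → 0.512 0.008
```

The fused estimate lands closer to the more reliable (auditory) cue, which is exactly the weighting pattern the study reports for human observers.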

  20. An Integrated GIS, optimization and simulation framework for optimal PV size and location in campus area environments

    International Nuclear Information System (INIS)

    Kucuksari, Sadik; Khaleghi, Amirreza M.; Hamidi, Maryam; Zhang, Ye; Szidarovszky, Ferenc; Bayraksan, Guzin; Son, Young-Jun

    2014-01-01

    Highlights: • The optimal size and locations for PV units for campus environments are achieved. • The GIS module finds the suitable rooftops and their panel capacity. • The optimization module maximizes the long-term profit of PV installations. • The simulation module evaluates the voltage profile of the distribution network. • The proposed work has been successfully demonstrated for a real university campus. - Abstract: Finding the optimal size and locations for Photovoltaic (PV) units has been a major challenge for distribution system planners and researchers. In this study, a framework is proposed to integrate Geographical Information Systems (GIS), mathematical optimization, and simulation modules to obtain the annual optimal placement and size of PV units for the next two decades in a campus area environment. First, a GIS module is developed to find the suitable rooftops and their panel capacity considering the amount of solar radiation, slope, elevation, and aspect. The optimization module is then used to maximize the long-term net profit of PV installations considering various costs of investment, inverter replacement, operation, and maintenance as well as savings from consuming less conventional energy. A voltage profile of the electricity distribution network is then investigated in the simulation module. In the case of voltage limit violation by intermittent PV generations or load fluctuations, two mitigation strategies, reallocation of the PV units or installation of a local storage unit, are suggested. The proposed framework has been implemented in a real campus area, and the results show that it can effectively be used for long-term installation planning of PV panels considering both the cost and power quality
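The optimization module's role can be illustrated with a toy rooftop-selection step: exhaustive search over a handful of candidate roofs under a budget. The names, costs and profits are invented, and a real model would also handle per-year scheduling and the voltage constraints checked by the simulation module:

```python
from itertools import combinations

rooftops = {            # name: (installation cost, lifetime net profit)
    "library": (40.0, 65.0),
    "gym":     (25.0, 30.0),
    "lab":     (30.0, 55.0),
    "dorm":    (20.0, 28.0),
}
BUDGET = 70.0

# Enumerate every subset of rooftops and keep the most profitable feasible one.
best_set, best_profit = (), 0.0
names = list(rooftops)
for r in range(len(names) + 1):
    for combo in combinations(names, r):
        cost = sum(rooftops[n][0] for n in combo)
        profit = sum(rooftops[n][1] for n in combo)
        if cost <= BUDGET and profit > best_profit:
            best_set, best_profit = combo, profit

print(sorted(best_set), best_profit)  # → ['lab', 'library'] 120.0
```

For realistic problem sizes the enumeration would be replaced by the integer-programming formulation described in the record.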