WorldWideScience

Sample records for matrix code including

  1. R-matrix analysis code (RAC)

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Qi Huiquan

    1990-01-01

    A comprehensive R-matrix analysis code has been developed. It is based on multichannel and multilevel R-matrix theory and runs on a VAX computer in FORTRAN-77. With this code, many kinds of experimental data for one nuclear system can be fitted simultaneously. Comparisons between the code RAC and the code EDA of LANL were made; the data show that both codes produce the same calculation results when one set of R-matrix parameters is used. The differential cross section of 10B(n,α)7Li for En = 0.4 MeV and the polarization of 16O(n,n)16O for En = 2.56 MeV are presented.

  2. List Decoding of Matrix-Product Codes from nested codes: an application to Quasi-Cyclic codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Høholdt, Tom; Ruano, Diego

    2012-01-01

    A list decoding algorithm for matrix-product codes is provided when $C_1,..., C_s$ are nested linear codes and $A$ is a non-singular by columns matrix. We estimate the probability of getting more than one codeword as output when the constituent codes are Reed-Solomon codes. We extend this list decoding algorithm to matrix-product codes with polynomial units, which are quasi-cyclic codes. Furthermore, it allows us to consider unique decoding for matrix-product codes with polynomial units.

  3. Construction and decoding of matrix-product codes from nested codes

    DEFF Research Database (Denmark)

    Hernando, Fernando; Lally, Kristine; Ruano, Diego

    2009-01-01

    We consider matrix-product codes [C1 ... Cs] · A, where C1, ..., Cs are nested linear codes and the matrix A has full rank. We compute their minimum distance and provide a decoding algorithm when A is a non-singular by columns matrix. The decoding algorithm decodes up to half of the minimum distance.
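
The construction [C1 ... Cs] · A can be made concrete with a small sketch over GF(2): the (u | u+v) Plotkin construction is the classic special case with nested codes C2 ⊆ C1. The particular codewords below are arbitrary illustrations, not taken from the paper.

```python
import numpy as np

def matrix_product_codeword(blocks, A):
    """Form the codeword (c1,...,cs)·A of a matrix-product code over GF(2).
    blocks: list of s codewords (0/1 arrays of equal length n).
    A: s×l binary matrix.  The result is the concatenation of l blocks."""
    C = np.array(blocks)      # s × n
    out = (A.T @ C) % 2       # l × n: each row is one length-n output block
    return out.reshape(-1)

# (u | u+v) Plotkin construction: A = [[1,1],[0,1]], with C2 ⊆ C1 nested
A = np.array([[1, 1],
              [0, 1]])
u = np.array([1, 0, 1, 1])   # c1 ∈ C1
v = np.array([1, 1, 1, 1])   # c2 ∈ C2 (repetition code, nested in C1)
cw = matrix_product_codeword([u, v], A)
print(cw)                    # first block is u, second block is u+v
```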

  4. Encoding of QC-LDPC Codes of Rank Deficient Parity Matrix

    Directory of Open Access Journals (Sweden)

    Mohammed Kasim Mohammed Al-Haddad

    2016-05-01

    The encoding of long low density parity check (LDPC) codes presents a challenge compared to their decoding. Quasi Cyclic (QC) LDPC codes offer the advantage of reduced complexity for both encoding and decoding due to their QC structure. Most QC-LDPC codes have a rank deficient parity matrix, which introduces extra complexity over codes with a full rank parity matrix. In this paper an encoding scheme for QC-LDPC codes is presented that is suitable for codes with either a full rank or a rank deficient parity matrix. The extra effort required by codes with a rank deficient parity matrix over codes with a full rank parity matrix is investigated.
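
A minimal illustration of why rank deficiency complicates encoding: with a parity matrix H of rank r, only n - r positions carry free message bits, and the encoder must discover r rather than assume the full-rank value. The toy matrix below is invented for the demo (it is not a QC-LDPC code); encoding proceeds by GF(2) row reduction.

```python
import numpy as np

def gf2_row_reduce(H):
    """Row-reduce a binary matrix over GF(2); return (RREF, pivot columns)."""
    R = H.copy() % 2
    pivots, row = [], 0
    for col in range(R.shape[1]):
        hits = np.nonzero(R[row:, col])[0]
        if hits.size == 0:
            continue
        R[[row, row + hits[0]]] = R[[row + hits[0], row]]   # bring pivot up
        for r in range(R.shape[0]):                          # clear the column
            if r != row and R[r, col]:
                R[r] ^= R[row]
        pivots.append(col)
        row += 1
        if row == R.shape[0]:
            break
    return R, pivots

def encode(H, msg):
    """Place message bits on the free (non-pivot) columns and solve
    H·c = 0 for the pivot columns."""
    R, pivots = gf2_row_reduce(H)
    n = H.shape[1]
    free = [c for c in range(n) if c not in pivots]
    assert len(msg) == len(free), "expect k = n - rank(H) message bits"
    c = np.zeros(n, dtype=int)
    c[free] = msg
    for i, p in enumerate(pivots):           # back-substitute pivot bits
        c[p] = (R[i, free] @ c[free]) % 2
    return c

# Toy rank-deficient parity matrix: row 2 = row 0 + row 1, so rank is 2
H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 1]])
cw = encode(H, [1, 0, 1])    # k = 5 - 2 = 3 message bits, not 5 - 3
print((H @ cw) % 2)          # all zeros: a valid codeword
```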

  5. A unified form of exact-MSR codes via product-matrix frameworks

    KAUST Repository

    Lin, Sian Jheng

    2015-02-01

    Regenerating codes represent a class of block codes applicable to distributed storage systems. An [n, k, d] regenerating code can recover the data from any k out of n code fragments, and supports regeneration of a code fragment through the use of any other d fragments, for k ≤ d ≤ n - 1. Minimum storage regenerating (MSR) codes are the subset of regenerating codes that minimize the size of each code fragment. The first explicit construction of MSR codes that can perform exact regeneration (named exact-MSR codes) for d ≥ 2k - 2 was presented via a product-matrix framework. This paper addresses some of the practical issues in the construction of exact-MSR codes. The major contributions of this paper are as follows. A new product-matrix framework is proposed to directly include all feasible exact-MSR codes for d ≥ 2k - 2. A mechanism for a systematic version of exact-MSR codes is proposed to minimize the computational complexity of the message-symbol remapping process. Two practical forms of encoding matrices are presented to reduce the size of the finite field.

  6. A unified form of exact-MSR codes via product-matrix frameworks

    KAUST Repository

    Lin, Sian Jheng; Chung, Weiho; Han, Yunghsiangsam; Al-Naffouri, Tareq Y.

    2015-01-01

    Regenerating codes represent a class of block codes applicable to distributed storage systems. An [n, k, d] regenerating code can recover the data from any k out of n code fragments, and supports regeneration of a code fragment through the use of any other d fragments, for k ≤ d ≤ n - 1. Minimum storage regenerating (MSR) codes are the subset of regenerating codes that minimize the size of each code fragment. The first explicit construction of MSR codes that can perform exact regeneration (named exact-MSR codes) for d ≥ 2k - 2 was presented via a product-matrix framework. This paper addresses some of the practical issues in the construction of exact-MSR codes. The major contributions of this paper are as follows. A new product-matrix framework is proposed to directly include all feasible exact-MSR codes for d ≥ 2k - 2. A mechanism for a systematic version of exact-MSR codes is proposed to minimize the computational complexity of the message-symbol remapping process. Two practical forms of encoding matrices are presented to reduce the size of the finite field.

  7. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal-hydraulic code validation (NEA/CSNI/R(1993)14) and in-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of PWR, BWR, CANDU and VVER reactors. It also provides an overview of ex-vessel corium retention (the core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA, and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test, along with a test description.

  8. LABAN-PEL: a two-dimensional, multigroup diffusion, high-order response matrix code

    International Nuclear Information System (INIS)

    Mueller, E.Z.

    1991-06-01

    The capabilities of LABAN-PEL are described. LABAN-PEL is a modified version of the two-dimensional, high-order response matrix code LABAN, written by Lindahl. The new version extends the capabilities of the original code with regard to the treatment of neutron migration by including an option to utilize full group-to-group diffusion coefficient matrices. In addition, the code has been converted from single to double precision and the necessary routines added to activate its multigroup capability. The coding has also been converted to standard FORTRAN-77 to enhance the portability of the code. Details regarding the input data requirements and calculational options of LABAN-PEL are provided. 13 refs

  9. RELAP-7 Code Assessment Plan and Requirement Traceability Matrix

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.

    2016-10-01

    RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods and physical models over the last decades. Recently, INL has also put effort into establishing a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports and research articles) addressing the desirable features generally required of advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.

  10. Validation matrix for the assessment of thermal-hydraulic codes for VVER LOCA and transients. A report by the OECD support group on the VVER thermal-hydraulic code validation matrix

    International Nuclear Information System (INIS)

    2001-06-01

    This report deals with an internationally agreed experimental test facility matrix for the validation of best estimate thermal-hydraulic computer codes applied for the analysis of VVER reactor primary systems in accident and transient conditions. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities that supplement the CSNI CCVMs and are suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of VVER Thermal-Hydraulic Code Validation Matrix follows the logic of the CSNI Code Validation Matrices (CCVM). Similar to the CCVM it is an attempt to collect together in a systematic way the best sets of available test data for VVER specific code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated in countries operating VVER reactors over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case. (authors)

  11. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
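
A hedged sketch of the matrix-based assignment idea (the function name and the use of random permutations are illustrative assumptions, not the patented procedure): each row of the matrix is one device's code, a distinct ordering of time slot to frequency chip over the same stepped-frequency set, so codes differ while sharing the chips.

```python
import random

def assign_ofcs(num_codes, chips_per_code, seed=0):
    """Build an assignment matrix: one row per device, each row a distinct
    permutation of the stepped-frequency chip indices (collision avoidance
    here is simply 'no two devices share an ordering')."""
    rng = random.Random(seed)
    base = list(range(chips_per_code))       # stepped frequency indices
    matrix, seen = [], set()
    while len(matrix) < num_codes:
        perm = tuple(rng.sample(base, len(base)))
        if perm not in seen:                 # reject duplicate codes
            seen.add(perm)
            matrix.append(list(perm))
    return matrix

codes = assign_ofcs(4, 5)   # 4 devices, 5 chips per code
```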

  12. R-Matrix Codes for Charged-particle Induced Reactions in the Resolved Resonance Region

    Energy Technology Data Exchange (ETDEWEB)

    Leeb, Helmut [Technical Univ. of Wien, Vienna (Austria); Dimitriou, Paraskevi [Intl Atomic Energy Agency (IAEA), Vienna (Austria); Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-01-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 5 to 7 December 2016, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The meeting was a follow-up to the R-matrix Codes meeting held in December 2015, and served the purpose of monitoring progress in: the development of a translation code to enable exchange of input/output parameters between the various codes in different formats, fitting procedures and treatment of uncertainties, the evaluation methodology, and finally dissemination. The details of the presentations and technical discussions, as well as additional actions that were proposed to achieve all the goals of the meeting are summarized in this report.

  13. Matrix formulation of pebble circulation in the PEBBED code

    International Nuclear Information System (INIS)

    Gougar, H.D.; Terry, W.K.; Ougouag, A.M.

    2002-01-01

    The PEBBED technique provides a foundation for equilibrium fuel cycle analysis and optimization in pebble-bed cores in which the fuel elements are continuously flowing and, if desired, recirculating. In addition to the modern analysis techniques used in or being developed for the code, PEBBED incorporates a novel nuclide-mixing algorithm that allows for sophisticated recirculation patterns using a matrix generated from basic core parameters. Derived from a simple partitioning of the pebble flow, the elements of the recirculation matrix are used to compute the spatially averaged density of each nuclide at the entry plane from the nuclide densities of pebbles emerging from the discharge conus. The order of the recirculation matrix is a function of the flexibility and sophistication of the fuel handling mechanism. This formulation for coupling pebble flow and neutronics enables core design and fuel cycle optimization to be performed by the manipulation of a few key core parameters. The formulation is amenable to modern optimization techniques. (author)
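
A toy numerical sketch of the mixing step described above, assuming a hypothetical three-channel core (the matrix entries are invented for illustration): the entry-plane nuclide density is the recirculation matrix applied to the discharge densities.

```python
import numpy as np

# Hypothetical 3-channel recirculation matrix: entry (i, j) is the fraction
# of pebbles discharged from flow channel j that re-enter through channel i,
# so each column sums to 1 (pebble conservation).
R = np.array([[0.2, 0.5, 0.3],
              [0.5, 0.3, 0.2],
              [0.3, 0.2, 0.5]])

# Spatially averaged density of one nuclide in pebbles leaving the
# discharge conus of each channel (arbitrary units).
n_discharge = np.array([1.0, 0.8, 0.6])

# Density seen at the core entry plane of each channel after mixing.
n_entry = R @ n_discharge
```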

  14. CSNI Integral test facility validation matrix for the assessment of thermal-hydraulic codes for LWR LOCA and transients

    International Nuclear Information System (INIS)

    1996-07-01

    This report deals with an internationally agreed integral test facility (ITF) matrix for the validation of best estimate thermal-hydraulic computer codes. Firstly, the main physical phenomena that occur during the considered accidents are identified, test types are specified, and test facilities suitable for reproducing these aspects are selected. Secondly, a list of selected experiments carried out in these facilities has been set down. The criteria to achieve the objectives are outlined. The construction of such a matrix is an attempt to collect together in a systematic way the best sets of openly available test data for code validation, assessment and improvement, including quantitative assessment of uncertainties in the modelling of phenomena by the codes. In addition to this objective, it is an attempt to record information which has been generated around the world over the last 20 years so that it is more accessible to present and future workers in that field than would otherwise be the case.

  15. Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding

    Science.gov (United States)

    Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.

    2016-03-01

    In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
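
The base/exponent-matrix construction mentioned above can be sketched as follows: each non-negative exponent expands to a cyclically shifted identity block, and -1 marks an all-zero block. The small exponent matrix here is an arbitrary illustration, not one of the paper's jointly designed codes.

```python
import numpy as np

def expand(E, z):
    """Expand an exponent matrix E into a QC-LDPC parity-check matrix.
    An entry e >= 0 becomes the z×z identity cyclically right-shifted
    by e columns; an entry of -1 becomes the z×z zero block."""
    I = np.eye(z, dtype=int)
    blocks = [[np.roll(I, e, axis=1) if e >= 0 else np.zeros((z, z), dtype=int)
               for e in row] for row in E]
    return np.block(blocks)

E = [[0, 1, -1],
     [2, -1, 0]]
H = expand(E, 3)   # 6 × 9 parity-check matrix built from 3×3 circulants
```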

  16. Pipe elbow stiffness coefficients including shear and bend flexibility factors for use in direct stiffness codes

    International Nuclear Information System (INIS)

    Perry, R.F.

    1977-01-01

    Historically, developments of computer codes used for piping analysis were based upon the flexibility method of structural analysis. Because of the specialized techniques employed in this method, the codes handled systems composed of only piping elements. Over the past ten years, the direct stiffness method has gained great popularity because of its systematic solution procedure regardless of the type of structural elements composing the system. A great advantage is realized with a direct stiffness code that combines piping elements along with other structural elements such as beams, plates, and shells, in a single model. One common problem, however, has been the lack of an accurate pipe elbow element that would adequately represent the effects of transverse shear and bend flexibility factors. The purpose of the present paper is to present a systematic derivation of the required 12x12 stiffness matrix and load vectors for a three dimensional pipe elbow element which includes the effects of transverse shear and pipe bend flexibility according to the ASME Boiler and Pressure Vessel Code, Section III. The results are presented analytically and as FORTRAN subroutines to be directly incorporated into existing direct stiffness codes. (Auth.)

  17. Development of a two-dimensional simulation code (KUAD) including atomic processes for beam direct energy conversion

    International Nuclear Information System (INIS)

    Yamamoto, Y.; Yoshikawa, K.; Hattori, Y.

    1987-01-01

    A two-dimensional simulation code for the beam direct energy conversion, called KUAD (Kyoto University Advanced DART), including various loss mechanisms has been developed, and has shown excellent agreement with the authors' experiments using He + beams. The beam direct energy converter (BDC) is a device that recovers the kinetic energy of unneutralized ions in the neutral beam injection (NBI) system directly as electricity. The BDC is very important and essential not only to improving NBI system efficiency, but also to relaxing the high heat flux problems on the beam dump as injection energies increase. So far no other simulation code has successfully predicted BDC experimental results. The KUAD code applies an algorithm optimized for vector processing, the finite element method (FEM) for potential calculation, and a semi-automatic method for spatial segmentation. Since particle trajectories in the KUAD code are solved analytically, very high speed tracing of the particles is achieved by introducing an adjacent element matrix to identify the neighboring triangle elements and electrodes. Ion space charges are also calculated analytically by the Cloud in Cell (CIC) method, as are electron space charges. Power losses due to atomic processes can also be evaluated in the KUAD code

  18. Fast space-varying convolution using matrix source coding with applications to camera stray light reduction.

    Science.gov (United States)

    Wei, Jianing; Bouman, Charles A; Allebach, Jan P

    2014-05-01

    Many imaging applications require the implementation of space-varying convolution for accurate restoration and reconstruction of images. Here, we use the term space-varying convolution to refer to linear operators whose impulse response has slow spatial variation. In addition, these space-varying convolution operators are often dense, so direct implementation of the convolution operator is typically computationally impractical. One such example is the problem of stray light reduction in digital cameras, which requires the implementation of a dense space-varying deconvolution operator. However, other inverse problems, such as iterative tomographic reconstruction, can also depend on the implementation of dense space-varying convolution. While space-invariant convolution can be efficiently implemented with the fast Fourier transform, this approach does not work for space-varying operators. So direct convolution is often the only option for implementing space-varying convolution. In this paper, we develop a general approach to the efficient implementation of space-varying convolution, and demonstrate its use in the application of stray light reduction. Our approach, which we call matrix source coding, is based on lossy source coding of the dense space-varying convolution matrix. Importantly, by coding the transformation matrix, we not only reduce the memory required to store it; we also dramatically reduce the computation required to implement matrix-vector products. Our algorithm is able to reduce computation by approximately factoring the dense space-varying convolution operator into a product of sparse transforms. Experimental results show that our method can dramatically reduce the computation required for stray light reduction while maintaining high accuracy.
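
A miniature sketch of the matrix source coding idea under simplifying assumptions (an orthonormal DCT as the transform and plain hard thresholding as the lossy coding step; the paper's actual transforms and quantizer differ): represent the dense operator in a transform domain, keep only the large coefficients, and apply the sparse result in place of the dense matrix-vector product.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix (rows are cosine basis vectors)."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1)[None, :] * n[:, None] / (2 * N))
    C[0] = np.sqrt(1.0 / N)
    return C

N = 64
# Smooth, dense "space-varying convolution" matrix: a Gaussian blur whose
# width varies slowly with position -- a stand-in for a stray-light kernel.
x = np.arange(N)
sigma = 3.0 + 2.0 * x / N
A = np.exp(-((x[None, :] - x[:, None]) ** 2) / (2 * sigma[:, None] ** 2))

# "Matrix source coding" in miniature: transform A, threshold, apply sparsely.
C = dct_matrix(N)
T = C @ A @ C.T                                 # 2-D transform coefficients of A
T[np.abs(T) < 1e-3 * np.abs(T).max()] = 0.0     # lossy hard thresholding
print(f"kept {np.count_nonzero(T)} of {N * N} coefficients")

v = np.random.default_rng(0).standard_normal(N)
y_exact = A @ v
y_approx = C.T @ (T @ (C @ v))                  # transform, sparse product, inverse
rel_err = np.linalg.norm(y_exact - y_approx) / np.linalg.norm(y_exact)
```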

  19. Developing HYDMN code to include the transient of MNSR

    International Nuclear Information System (INIS)

    Al-Barhoum, M.

    2000-11-01

    A description of the programs added to the HYDMN code (a code for the thermal-hydraulic steady state of the MNSR) to include the transient behaviour of the same MNSR is presented. The code asks for the initial conditions: the power (in kW) and the cold initial core inlet temperature (in degrees centigrade). A time-dependent study of the coolant inlet and outlet temperatures, the coolant speed, and the pool and tank temperatures is done for the MNSR in general and for the Syrian MNSR in particular. The study solves the differential equations taken from reference (1) using numerical methods found in reference (3). In this way the code becomes independent of any external information source. (Author)

  20. Matrix formulations of radiative transfer including the polarization effect in a coupled atmosphere-ocean system

    International Nuclear Information System (INIS)

    Ota, Yoshifumi; Higurashi, Akiko; Nakajima, Teruyuki; Yokota, Tatsuya

    2010-01-01

    A vector radiative transfer model has been developed for a coupled atmosphere-ocean system. The radiative transfer scheme is based on the discrete ordinate and matrix operator methods. The reflection/transmission matrices and source vectors are obtained for each atmospheric or oceanic layer through the discrete ordinate solution. The vertically inhomogeneous system is constructed using the matrix operator method, which combines the radiative interaction between the layers. This radiative transfer scheme is flexible for a vertically inhomogeneous system including the oceanic layers as well as the ocean surface. Compared with the benchmark results, the computational error attributable to the radiative transfer scheme has been less than 0.1% in the case of eight discrete ordinate directions. Furthermore, increasing the number of discrete ordinate directions has produced computations with higher accuracy. Based on our radiative transfer scheme, simulations of sun glint radiation have been presented for wavelengths of 670 nm and 1.6 μm. Results of the simulations show reasonable characteristics of the sun glint radiation, such as the strongly peaked but slightly smoothed radiation from the rough ocean surface, and depolarization through multiple scattering by the aerosol-loaded atmosphere. The radiative transfer scheme of this paper has been implemented in the numerical model named Pstar, one of the OpenCLASTR/STAR radiative transfer code systems, which are widely applied to many radiative transfer problems, including the polarization effect.
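
The matrix operator (adding) step that combines the layer interactions can be sketched for two layers, under the simplifying assumption of side-symmetric layer operators; the 2×2 toy operators below are invented for the demo. The inverse term sums the geometric series of all orders of inter-layer reflection.

```python
import numpy as np

def add_layers(R1, T1, R2, T2):
    """Adding (matrix operator) step for two layers whose reflection and
    transmission operators are the same from both sides -- a simplifying
    assumption of this sketch.  (I - R1 R2)^-1 sums every order of
    back-and-forth reflection between the layers."""
    I = np.eye(R1.shape[0])
    M = np.linalg.inv(I - R1 @ R2)
    R = R1 + T1 @ R2 @ M @ T1   # reflection of the combined two-layer system
    T = T2 @ M @ T1             # transmission through both layers
    return R, T

# Toy operators for 2 discrete ordinate directions (numbers made up)
R1 = np.array([[0.10, 0.02], [0.02, 0.12]]); T1 = np.array([[0.80, 0.05], [0.05, 0.75]])
R2 = np.array([[0.20, 0.03], [0.03, 0.25]]); T2 = np.array([[0.70, 0.04], [0.04, 0.65]])
R, T = add_layers(R1, T1, R2, T2)
```

In the 1×1 (scalar two-stream) limit this reduces to the familiar r = r1 + t1·r2·t1/(1 - r1·r2).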

  1. Efficient diagonalization of the sparse matrices produced within the framework of the UK R-matrix molecular codes

    Science.gov (United States)

    Galiatsatos, P. G.; Tennyson, J.

    2012-11-01

    The most time consuming step within the framework of the UK R-matrix molecular codes is the diagonalization of the inner region Hamiltonian matrix (IRHM). Here we present the method that we follow to speed up this step. We use shared memory machines (SMM), distributed memory machines (DMM), the OpenMP directive based parallel language, the MPI function based parallel language, the sparse matrix diagonalizers ARPACK and PARPACK, a variation for real symmetric matrices of the official coordinate sparse matrix format, and finally a parallel sparse matrix-vector product (PSMV). The efficient application of these techniques relies on two important facts: the sparsity of the matrix is large enough (more than 98%), and to obtain converged results we need only a small part of the matrix spectrum.
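
In Python, the ARPACK route used here is conveniently exercised through SciPy's `eigsh` wrapper, which needs only sparse matrix-vector products, so the sparsity is exploited directly and only the requested part of the spectrum is computed. The random sparse symmetric matrix below is a stand-in, not the actual IRHM.

```python
import numpy as np
from scipy.sparse import random as sprand
from scipy.sparse.linalg import eigsh

# Random sparse real symmetric matrix, ~99% sparse -- mimicking a Hamiltonian
# where only a small part of the spectrum is needed.
n = 2000
A = sprand(n, n, density=0.005, random_state=1, format='csr')
A = (A + A.T) * 0.5                       # symmetrize

# eigsh wraps ARPACK (Lanczos iteration): it touches A only through
# matrix-vector products, never forming a dense factorization.
vals, vecs = eigsh(A, k=6, which='SA')    # 6 smallest algebraic eigenvalues
```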

  2. Improved Riccati Transfer Matrix Method for Free Vibration of Non-Cylindrical Helical Springs Including Warping

    Directory of Open Access Journals (Sweden)

    A.M. Yu

    2012-01-01

    Free vibration equations for non-cylindrical (conical, barrel, and hyperboloidal) helical springs with noncircular cross-sections, which consist of 14 first-order ordinary differential equations with variable coefficients, are theoretically derived using spatially curved beam theory. In the formulation, the warping effect upon natural frequencies and vibrating mode shapes is studied for the first time, in addition to the influences of rotary inertia, shear and axial deformation. The natural frequencies of the springs are determined by the use of the improved Riccati transfer matrix method. The element transfer matrix used in the solution is calculated using the Scaling and Squaring method with Padé approximations. Three examples are presented for three types of springs with different cross-sectional shapes under clamped-clamped boundary conditions. The accuracy of the proposed method has been compared with FEM results using three-dimensional solid elements (Solid 45) in the ANSYS code. Numerical results reveal that the warping effect is more pronounced for non-cylindrical helical springs than for cylindrical helical springs, and should be taken into consideration in the free vibration analysis of such springs.
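
A minimal sketch of the Scaling and Squaring method with a fixed (3,3) Padé approximant, the same ingredients used for the element transfer matrix (production codes choose the scaling power and Padé order adaptively from the norm of the matrix):

```python
import numpy as np

def expm_pade(A, s=6):
    """Matrix exponential by scaling and squaring with a (3,3) Padé
    approximant: exp(A) = exp(A/2^s)^(2^s), with the small-norm factor
    approximated by a rational function."""
    B = A / 2.0 ** s                  # scaling: make ||B|| small
    I = np.eye(A.shape[0])
    B2 = B @ B
    B3 = B2 @ B
    N = I + B / 2 + B2 / 10 + B3 / 120    # (3,3) Padé numerator of exp
    D = I - B / 2 + B2 / 10 - B3 / 120    # ... and denominator
    F = np.linalg.solve(D, N)
    for _ in range(s):                # squaring phase: undo the scaling
        F = F @ F
    return F

# State matrix of a harmonic oscillator x'' = -4x (omega = 2); its
# exponential is the exact one-second transfer matrix of the system.
A = np.array([[0.0, 1.0], [-4.0, 0.0]])
F = expm_pade(A)
```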

  3. Laser direct marking applied to rasterizing miniature Data Matrix Code on aluminum alloy

    Science.gov (United States)

    Li, Xia-Shuang; He, Wei-Ping; Lei, Lei; Wang, Jian; Guo, Gai-Fang; Zhang, Teng-Yun; Yue, Ting

    2016-03-01

    Precise miniaturization of 2D Data Matrix (DM) codes on aluminum alloy formed by raster-mode laser direct part marking is demonstrated. The characteristic edge over-burn effects, which render vector-mode laser direct part marking inadequate for producing precise and readable miniature codes, are minimized with raster-mode laser marking. To obtain the control mechanism for the contrast and print growth of miniature DM codes produced by the raster laser marking process, a temperature field model of long pulse laser interaction with the material is established. From the experimental results, laser average power and Q frequency have an important effect on the contrast and print growth of the miniature DM code, and the thresholds of laser average power and Q frequency for an identifiable miniature DM code are respectively 3.6 W and 110 kHz, which matches the model well within normal operating conditions. In addition, an empirical model of the correlation between laser marking parameters and module size is also obtained, and the optimal processing parameter values for an identifiable miniature DM code of different but certain data sizes are given. It is also found that an increase in the number of repeat scans effectively improves the surface finish of the bore and the appearance consistency of the modules, which benefits reading. The reading quality of the miniature DM code is greatly improved by using ultrasonic cleaning in water, avoiding the interference of color speckles surrounding the modules.

  4. In-vessel core degradation code validation matrix update 1996-1999. Report by an OECD/NEA group of experts

    International Nuclear Information System (INIS)

    2001-02-01

    In 1991 the Committee on the Safety of Nuclear Installations (CSNI) issued a State-of-the-Art Report (SOAR) on In-Vessel Core Degradation in Light Water Reactor (LWR) Severe Accidents. Based on the recommendations of this report a Validation Matrix for severe accident modelling codes was produced. Experiments performed up to the end of 1993 were considered for this validation matrix. To include recent experiments and to enlarge the scope, an update was formally inaugurated in January 1999 by the Task Group on Degraded Core Cooling, a sub-group of Principal Working Group 2 (PWG-2) on Coolant System Behaviour, and a selection of writing group members was commissioned. The present report documents the results of this study. The objective of the Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The emphasis is on integral experiments, where interactions amongst key phenomena as well as the phenomena themselves are explored; however separate-effects experiments are also considered especially where these extend the parameter ranges to cover those expected in postulated LWR severe accident transients. As well as covering PWR and BWR designs of Western origin, the scope of the review has been extended to Eastern European (VVER) types. Similarly, the coverage of phenomena has been extended, starting as before from the initial heat-up but now proceeding through the in-core stage to include introduction of melt into the lower plenum and further to core coolability and retention to the lower plenum, with possible external cooling. Items of a purely thermal hydraulic nature involving no core degradation are excluded, having been covered in other validation matrix studies. 
Concerning fission product behaviour, the effect

  5. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the. Shannon limit were the low density parity check (LDPC) codes. The term low density arises from the property of the parity check matrix defining the code. We will now define this matrix and the role that it plays in decoding. 2. Linear Codes.
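
The parity-check matrix described above can be illustrated with a toy example (the small matrix below is illustrative only, not from the article): a word is a codeword exactly when every parity check is satisfied, i.e. H·c = 0 over GF(2), and a nonzero syndrome flags an error.

```python
# Minimal GF(2) parity-check demo: c is a codeword iff H·c = 0 (mod 2).
# This tiny H is illustrative only; real LDPC matrices are large and sparse.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def syndrome(H, word):
    """Compute H·word over GF(2); an all-zero syndrome means a valid codeword."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

codeword = [1, 0, 1, 1, 1, 0]      # satisfies all three parity checks
assert syndrome(H, codeword) == [0, 0, 0]

flipped = codeword[:]
flipped[2] ^= 1                    # a single bit error
assert syndrome(H, flipped) == [0, 1, 1]   # nonzero syndrome detects it
```

The "low density" property means each row and column of H has few ones, which is what makes iterative message-passing decoding on the code's graph tractable.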

  6. New sparse matrix solver in the KIKO3D 3-dimensional reactor dynamics code

    International Nuclear Information System (INIS)

    Panka, I.; Kereszturi, A.; Hegedus, C.

    2005-01-01

    The goal of this paper is to present a more effective method, Bi-CGSTAB, for accelerating the large sparse matrix equation solution in the KIKO3D code. This equation system is obtained by using the factorization of the improved quasi-static (IQS) method for the time-dependent nodal kinetic equations. In the old methodology, standard large sparse matrix techniques were used, with Gauss-Seidel preconditioning and a GMRES-type solver. The validation of KIKO3D using Bi-CGSTAB has been performed by solving a VVER-1000 kinetic benchmark problem. Additionally, the convergence characteristics were investigated in given macro time steps of control rod ejection transients. The results obtained by the old GMRES and the new Bi-CGSTAB methods are compared. (author)
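
As a sketch of the Bi-CGSTAB method named in this record (without KIKO3D's preconditioning or its nodal-kinetics matrices), here is a compact, unpreconditioned implementation applied to a small dense system; the test matrix is arbitrary.

```python
# Compact Bi-CGSTAB (van der Vorst) for a small dense system.
# Illustrative only: no preconditioner, dense storage, tiny problem.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def bicgstab(A, b, tol=1e-10, maxiter=100):
    n = len(b)
    x = [0.0] * n
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
    r_hat = r[:]                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = [0.0] * n
    for _ in range(maxiter):
        rho_new = dot(r_hat, r)
        beta = (rho_new / rho) * (alpha / omega)
        p = [ri + beta * (pi - omega * vi) for ri, pi, vi in zip(r, p, v)]
        v = matvec(A, p)
        alpha = rho_new / dot(r_hat, v)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]
        if dot(s, s) ** 0.5 < tol:
            return [xi + alpha * pi for xi, pi in zip(x, p)]
        t = matvec(A, s)
        omega = dot(t, s) / dot(t, t)
        x = [xi + alpha * pi + omega * si for xi, pi, si in zip(x, p, s)]
        r = [si - omega * ti for si, ti in zip(s, t)]
        if dot(r, r) ** 0.5 < tol:
            return x
        rho = rho_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = bicgstab(A, b)
residual = [bi - axi for bi, axi in zip(b, matvec(A, x))]
assert dot(residual, residual) ** 0.5 < 1e-8
```

In production codes the same iteration is wrapped around a preconditioner (ILU, Gauss-Seidel, etc.), which is where most of the speedup reported in studies like this one comes from.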

  7. Java application for the superposition T-matrix code to study the optical properties of cosmic dust aggregates

    Science.gov (United States)

    Halder, P.; Chakraborty, A.; Deb Roy, P.; Das, H. S.

    2014-09-01

    In this paper, we report the development of a Java application for the Superposition T-matrix code, JaSTA (Java Superposition T-matrix App), to study the light scattering properties of aggregate structures. It has been developed using Netbeans 7.1.2, which is a Java integrated development environment (IDE). JaSTA uses the double precision superposition codes for multi-sphere clusters in random orientation developed by Mackowski and Mischenko (1996). It consists of a graphical user interface (GUI) on the front end and a database of related data on the back end. Both the interactive GUI and the database package directly enable a user to set the respective input parameters (namely, wavelength, complex refractive indices, grain size, etc.) and study the related optical properties of cosmic dust (namely, extinction, polarization, etc.) instantly, i.e., with zero computational time. This increases the efficiency of the user. The database of JaSTA is currently created for a few sets of input parameters, with a plan to create a large database in future. This application also has an option where users can compile and run the scattering code directly for aggregates in the GUI environment. JaSTA aims to provide convenient and quicker data analysis of the optical properties, which can be used in different fields like planetary science, atmospheric science, nano science, etc. The current version of this software is developed for the Linux and Windows platforms to study the light scattering properties of small aggregates, and will be extended to larger aggregates using parallel codes in future. Catalogue identifier: AETB_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETB_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 571570 No. of bytes in distributed program

  8. Depletion methodology in the 3-D whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Zee, Sung Quun

    2005-02-01

    The three-dimensional whole-core transport code DeCART has been developed to include the characteristics of a numerical reactor, partly replacing experiments. This code adopts a deterministic method, simulating the neutron behavior with the fewest assumptions and approximations. The neutronics code is also coupled with a thermal-hydraulic CFD code and a thermo-mechanical code to simulate the combined effects. A depletion module has been implemented in DeCART to predict the depleted composition of the fuel. The matrix exponential method of ORIGEN-2 is used for the depletion calculation. The library, including decay constants, the yield matrix and other data, has been greatly simplified for calculation efficiency. This report summarizes the theoretical background and includes the verification of the depletion module in DeCART by performing benchmark calculations.

  9. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  10. ORIGEN-2.2, Isotope Generation and Depletion Code Matrix Exponential Method

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of problem or function: ORIGEN is a computer code system for calculating the buildup, decay, and processing of radioactive materials. ORIGEN2 is a revised version of ORIGEN and incorporates updates of the reactor models, cross sections, fission product yields, decay data, and decay photon data, as well as the source code. ORIGEN-2.1 replaces ORIGEN and includes additional libraries for standard and extended-burnup PWR and BWR calculations, which are documented in ORNL/TM-11018. ORIGEN2.1 was first released in August 1991 and was replaced with ORIGEN2 Version 2.2 in June 2002. Version 2.2 was the first update to ORIGEN2 in over 10 years and was stimulated by a user discovering a discrepancy in the mass of fission products calculated using ORIGEN2 V2.1. Code modifications, as well as reducing the irradiation time step to no more than 100 days/step reduced the discrepancy from ∼10% to 0.16%. The bug does not noticeably affect the fission product mass in typical ORIGEN2 calculations involving reactor fuels because essentially all of the fissions come from actinides that have explicit fission product yield libraries. Thus, most previous ORIGEN2 calculations that were otherwise set up properly should not be affected. 2 - Method of solution: ORIGEN uses a matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients. ORIGEN2 has been variably dimensioned to allow the user to tailor the size of the executable module to the problem size and/or the available computer space. Dimensioned arrays have been set large enough to handle almost any size problem, using virtual memory capabilities available on most mainframe and 386/486 based PCS. 
The user is provided with much of the framework necessary to put some of the arrays to several different uses, call for the subroutines that perform the desired operations, and provide a mechanism to execute multiple ORIGEN2 problems with a single
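
The matrix exponential method named above can be sketched for the simplest possible case, a one-step decay chain A → B with B stable, where N(t) = exp(Mt)·N0 has a known closed form to check against. The half-life below is illustrative, not ORIGEN library data.

```python
import math

# Sketch of the matrix exponential method: the chain A -> B (B stable)
# gives dN/dt = M·N, solved as N(t) = exp(M t)·N0.
lam_a = math.log(2) / 5.0            # half-life of 5 arbitrary time units
M = [[-lam_a, 0.0],
     [lam_a, 0.0]]

def expm(M, t, terms=40):
    """Truncated Taylor series exp(M t) = sum_k (M t)^k / k! for a small matrix."""
    n = len(M)
    A = [[M[i][j] * t for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]   # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term·A / k, so term holds (A)^k / k!
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

t = 7.0
E = expm(M, t)
n0 = [1.0, 0.0]                      # start with pure nuclide A
n_t = [sum(E[i][j] * n0[j] for j in range(2)) for i in range(2)]

# Analytic check: N_A = exp(-lam t), N_B = 1 - exp(-lam t)
assert abs(n_t[0] - math.exp(-lam_a * t)) < 1e-9
assert abs(n_t[1] - (1.0 - math.exp(-lam_a * t))) < 1e-9
```

ORIGEN's actual treatment is more elaborate (short-lived nuclides are removed from the matrix and handled analytically to keep the series well conditioned), but the core idea is this single exponential of the transition matrix.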

  11. Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code

    Science.gov (United States)

    Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy

    2018-03-01

    The QSS codes associated with an MSP code are based on finding an invertible matrix V solving the system V A^T M_B(s, a) = s. We propose a quantum Gauss-Jordan elimination procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of the solution is of square-root order in the cardinality of the unauthorized set, √(2^{|B|}).

  12. Certification plan for safety and PRA codes

    International Nuclear Information System (INIS)

    Toffer, H.; Crowe, R.D.; Ades, M.J.

    1990-05-01

    A certification plan for computer codes used in Safety Analyses and Probabilistic Risk Assessment (PRA) for the operation of the Savannah River Site (SRS) reactors has been prepared. An action matrix, checklists, and a time schedule have been included in the plan. These items identify what is required to achieve certification of the codes. A list of Safety Analysis and Probabilistic Risk Assessment (SA&PRA) computer codes covered by the certification plan has been assembled. A description of each of the codes was provided in Reference 4. The action matrix for the configuration control plan identifies code-specific requirements that need to be met to achieve the certification plan's objectives. The checklist covers the specific procedures that are required to support the configuration control effort and supplement the software life cycle procedures based on QAP 20-1 (Reference 7). A qualification checklist for users establishes the minimum prerequisites and training for achieving levels of proficiency in using configuration-controlled codes for critical parameter calculations.

  13. Parallel R-matrix computation

    International Nuclear Information System (INIS)

    Heggarty, J.W.

    1999-06-01

    For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in

  14. RELAP5/MOD3 code manual: Summaries and reviews of independent code assessment reports. Volume 7, Revision 1

    International Nuclear Information System (INIS)

    Moore, R.L.; Sloan, S.M.; Schultz, R.R.; Wilson, G.E.

    1996-10-01

    Summaries of RELAP5/MOD3 code assessments, a listing of the assessment matrix, and a chronology of the various versions of the code are given. Results from these code assessments have been used to formulate a compilation of some of the strengths and weaknesses of the code. These results are documented in the report. Volume 7 was designed to be updated periodically and to include the results of the latest code assessments as they become available. Consequently, users of Volume 7 should ensure that they have the latest revision available

  15. Block diagonalization for algebra's associated with block codes

    NARCIS (Netherlands)

    D. Gijswijt (Dion)

    2009-01-01

    For a matrix *-algebra B, consider the matrix *-algebra A consisting of the symmetric tensors in the n-fold tensor product of B. Examples of such algebras in coding theory include the Bose-Mesner algebra and Terwilliger algebra of the (non)binary Hamming cube, and algebras arising in

  16. A Slater parameter optimisation interface for the CIV3 atomic structure code and its possible use with the R-matrix close coupling collision code

    International Nuclear Information System (INIS)

    Fawcett, B.C.; Hibbert, A.

    1989-11-01

    Details are here provided of amendments to the atomic structure code CIV3 which allow the optional adjustment of Slater parameters and average energies of configurations so that they result in improved energy levels and eigenvectors. It is also indicated how, in principle, the resultant improved eigenvectors can be utilised by the R-matrix collision code, thus providing an optimised target for close coupling collision strength calculations. An analogous computational method was recently reported for distorted wave collision strength calculations and applied to Fe XIII. The general method is suitable for the computation of collision strengths for complex ions and in some cases can then provide a basis for collision strength calculations in ions where ab initio computations break down or result in unnecessarily large errors. (author)

  17. The response matrix method for the representation of the border conditions in the three-dimensional difussion codes

    International Nuclear Information System (INIS)

    Grant, C.R.

    1981-01-01

    Representing a reactor in a diffusion-code simulation can take a considerable amount of memory and processing time when it includes regions in which nuclear and geometrical properties are invariant, such as the reflector, water columns, etc. To avoid an explicit representation of these zones, a method employing a response matrix was developed, expressing the net currents of each group as functions of the total flux. Estimates are made for different geometries, introducing them into the PUMA diffusion code as materials. Several tests proved a very sound reliability of the results obtained in 2 and 5 groups. (author) [es

  18. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    Full Text Available We propose a coding scheme based on the use of systematic linear codes with low-density generator matrix (LDGM codes for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
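
The defining feature of the LDGM scheme above, a systematic code with a sparse generator matrix, can be sketched in a few lines. With G = [I | P], encoding is just copying the message and appending parity bits u·P over GF(2); the tiny P below is illustrative, not from the article.

```python
# Systematic LDGM encoding sketch: c = [u, u·P] over GF(2).
# A real LDGM code uses a large, very sparse P; this 4x3 P is a toy.
P = [
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
]

def encode(u, P):
    """Copy the message bits, then append parity bits u·P mod 2."""
    parity = [sum(ui * P[i][j] for i, ui in enumerate(u)) % 2
              for j in range(len(P[0]))]
    return u + parity

u = [1, 0, 1, 1]
c = encode(u, P)
assert c[:4] == u                    # systematic: the message appears verbatim
assert c[4:] == [1, 1, 1]            # parity bits from the sparse P
```

Because each parity bit depends on only a few message bits, encoding cost grows linearly in the block length, which is the practical appeal of LDGM codes over general linear codes.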

  19. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype of the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included in the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models.
    Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
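
The two codings compared in this record are easy to illustrate: raw 0/1/2 genotype counts versus centered coding, where each marker's mean is subtracted so the coefficients sum to zero within the marker. The genotype matrix below is made up for illustration.

```python
# Raw 0/1/2 allele coding vs. centered allele coding.
# Rows are animals, columns are markers; values are invented.
genotypes = [
    [0, 1, 2],
    [1, 1, 0],
    [2, 1, 1],
]

n = len(genotypes)
n_markers = len(genotypes[0])

# Centered coding: subtract each marker's mean count from every animal.
means = [sum(row[j] for row in genotypes) / n for j in range(n_markers)]
centered = [[g - m for g, m in zip(row, means)] for row in genotypes]

# After centering, each marker's coefficients sum to zero.
for j in range(n_markers):
    assert abs(sum(row[j] for row in centered)) < 1e-12
```

As the record's derivations show, estimated breeding values are unaffected by this choice (given a fixed general mean), even though the resulting genomic relationship matrices differ.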

  20. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix solution methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the requirement of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix, as well as a nonzero small matrix. After that, the LU algorithm is applied to solve the matrix until convergence. In order to compare the computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80 % of the total CPU time can be saved with the new ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.

  1. Elaboration of a computer code for the solution of a two-dimensional two-energy group diffusion problem using the matrix response method

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1980-12-01

    An analytical procedure to solve the neutron diffusion equation in two dimensions and two energy groups was developed. The response matrix method was used, coupled with an expansion of the neutron flux in finite Fourier series. A computer code, MRF2D, was written to implement the above-mentioned procedure for PWR reactor core calculations. Different core symmetry options are allowed by the code, which is also flexible enough to allow for improvements by means of algorithm optimization. The code performance was compared with that of a corner-mesh finite difference code named TVEDIM by using an International Atomic Energy Agency (IAEA) standard problem. The MRF2D code requires 12.7% less computer processing time to reach the same precision on the criticality eigenvalue. (Author) [pt

  2. RELAP5-3D Code Includes ATHENA Features and Models

    International Nuclear Information System (INIS)

    Riemke, Richard A.; Davis, Cliff B.; Schultz, Richard R.

    2006-01-01

    Version 2.3 of the RELAP5-3D computer program includes all features and models previously available only in the ATHENA version of the code. These include the addition of new working fluids (i.e., ammonia, blood, carbon dioxide, glycerol, helium, hydrogen, lead-bismuth, lithium, lithium-lead, nitrogen, potassium, sodium, and sodium-potassium) and a magnetohydrodynamic model that expands the capability of the code to model many more thermal-hydraulic systems. In addition to the new working fluids along with the standard working fluid water, one or more noncondensable gases (e.g., air, argon, carbon dioxide, carbon monoxide, helium, hydrogen, krypton, nitrogen, oxygen, SF 6 , xenon) can be specified as part of the vapor/gas phase of the working fluid. These noncondensable gases were in previous versions of RELAP5-3D. Recently four molten salts have been added as working fluids to RELAP5-3D Version 2.4, which has had limited release. These molten salts will be in RELAP5-3D Version 2.5, which will have a general release like RELAP5-3D Version 2.3. Applications that use these new features and models are discussed in this paper. (authors)

  3. The association between patient-therapist MATRIX congruence and treatment outcome.

    Science.gov (United States)

    Mendlovic, Shlomo; Saad, Amit; Roll, Uri; Ben Yehuda, Ariel; Tuval-Mashiah, Rivka; Atzil-Slonim, Dana

    2018-03-14

    The present study aimed to examine the association between the patient-therapist micro-level congruence/incongruence ratio and psychotherapeutic outcome. Nine good-outcome and nine poor-outcome psychodynamic treatments (segregated by comparing pre- and post-treatment BDI-II) were analyzed (N = 18) moment by moment using the MATRIX (total number of MATRIX codes analyzed = 11,125). MATRIX congruence was defined as similar adjacent MATRIX codes. The congruence/incongruence ratio tended to increase as the treatment progressed only in good-outcome treatments. Progression of the MATRIX codes' congruence/incongruence ratio is associated with good psychotherapy outcome.
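
The congruence measure defined above (similar adjacent MATRIX codes) amounts to a simple count over consecutive code pairs. A toy computation, with a hypothetical code sequence rather than real MATRIX data:

```python
# Toy congruence/incongruence ratio over adjacent codes in a session transcript.
# The code labels below are hypothetical, not actual MATRIX categories.
session_codes = ["A", "A", "B", "B", "B", "A", "A", "A"]

pairs = list(zip(session_codes, session_codes[1:]))
congruent = sum(1 for a, b in pairs if a == b)     # similar adjacent codes
incongruent = len(pairs) - congruent

assert (congruent, incongruent) == (5, 2)
ratio = congruent / incongruent                    # the study's outcome measure
```

In the study this ratio is tracked across sessions; the reported finding is that it trends upward over treatment only in the good-outcome group.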

  4. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * MathematicaⓇ Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * 2C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  5. Snapshot Mueller matrix polarimetry by wavelength polarization coding and application to the study of switching dynamics in a ferroelectric liquid crystal cell.

    Directory of Open Access Journals (Sweden)

    Le Jeune B.

    2010-06-01

    This paper describes a snapshot Mueller matrix polarimeter based on wavelength polarization coding. This device is aimed at encoding polarization states in the spectral domain through use of a broadband source and high-order retarders. This allows one to measure a full Mueller matrix from a single spectrum whose acquisition time depends only on the detection system aperture. The theoretical fundamentals of this technique are developed prior to validation by experiments. The setup calibration is described, as well as optimization and stabilization procedures. The polarimeter is then used to study, by time-resolved Mueller matrix polarimetry, the switching dynamics in a ferroelectric liquid crystal cell.

  6. The European source-term evaluation code ASTEC: status and applications, including CANDU plant applications

    International Nuclear Information System (INIS)

    Van Dorsselaere, J.P.; Giordano, P.; Kissane, M.P.; Montanelli, T.; Schwinges, B.; Ganju, S.; Dickson, L.

    2004-01-01

    Research on light-water reactor severe accidents (SA) is still required in a limited number of areas in order to confirm accident-management plans. Thus, 49 European organizations have linked their SA research in a durable way through SARNET (Severe Accident Research and management NETwork), part of the European 6th Framework Programme. One goal of SARNET is to consolidate the integral code ASTEC (Accident Source Term Evaluation Code, developed by IRSN and GRS) as the European reference tool for safety studies; SARNET efforts include extending the application scope to reactor types other than PWR (including VVER) such as BWR and CANDU. ASTEC is used in IRSN's Probabilistic Safety Analysis level 2 of 900 MWe French PWRs. An earlier version of ASTEC's SOPHAEROS module, including improvements by AECL, is being validated as the Canadian Industry Standard Toolset code for FP-transport analysis in the CANDU Heat Transport System. Work with ASTEC has also been performed by Bhabha Atomic Research Centre, Mumbai, on IPHWR containment thermal hydraulics. (author)

  7. Code-To-Code Benchmarking Of The Porflow And GoldSim Contaminant Transport Models Using A Simple 1-D Domain - 11191

    International Nuclear Information System (INIS)

    Hiergesell, R.; Taylor, G.

    2010-01-01

    An investigation was conducted to compare and evaluate the contaminant transport results of two model codes, GoldSim and Porflow, using a simple 1-D string of elements in each code. Model domains were constructed to be identical with respect to cell numbers and dimensions, matrix material, flow boundary and saturation conditions. One of the codes, GoldSim, does not simulate advective movement of water; therefore the water flux term was specified as a boundary condition. In the other code, Porflow, a steady-state flow field was computed and contaminant transport was simulated within that flow field. The comparisons were made solely in terms of the ability of each code to perform contaminant transport. The purpose of the investigation was to establish a basis for, and to validate, follow-on work in which a 1-D GoldSim model was developed by abstracting information from the Porflow 2-D and 3-D unsaturated and saturated zone models and then benchmarked to produce equivalent contaminant transport results. A handful of contaminants were selected for the code-to-code comparison simulations, including a non-sorbing tracer and several long- and short-lived radionuclides exhibiting non-sorbing to strongly-sorbing characteristics with respect to the matrix material, several of them requiring the simulation of in-growth of daughter radionuclides. The same diffusion and partitioning coefficients associated with each contaminant and the half-lives associated with each radionuclide were incorporated into each model. A string of 10 elements, having identical spatial dimensions and properties, was constructed within each code. GoldSim's basic contaminant transport elements, mixing cells, were utilized in this construction. Sand was established as the matrix material and was assigned identical properties (e.g. bulk density, porosity, saturated hydraulic conductivity) in both codes.
    Boundary conditions applied included an influx of water at the rate of 40 cm/yr at one
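
The mixing-cell construction described above can be sketched as a 1-D string of cells through which a decaying pulse is advected by a fixed water flux. All parameter values below are invented for illustration; this is not the actual GoldSim or Porflow setup.

```python
import math

# A 1-D string of mixing cells: each step, a fixed fraction of each cell's
# inventory advects to the next cell, after first-order radioactive decay.
ncells, nsteps = 10, 2000
dt = 0.01                     # time step, yr (invented)
flow_fraction = 0.2           # fraction of a cell's inventory advected per step
half_life = 30.0              # yr, e.g. a short-lived radionuclide (invented)
decay = math.log(2) / half_life

inventory = [0.0] * ncells
inventory[0] = 1.0            # unit pulse injected in the first cell
outflux = 0.0                 # cumulative mass leaving the last cell

for _ in range(nsteps):
    inventory = [m * math.exp(-decay * dt) for m in inventory]   # decay
    moved = [m * flow_fraction for m in inventory]               # advection
    for i in range(ncells):
        inventory[i] -= moved[i]
        if i + 1 < ncells:
            inventory[i + 1] += moved[i]
    outflux += moved[-1]

# Mass balance: resident mass plus exited mass never exceeds the initial pulse.
assert sum(inventory) + outflux <= 1.0 + 1e-9
```

A code-to-code benchmark of the kind described in the record compares the breakthrough (the outflux history) from two such implementations of the same domain.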

  8. Intracoin - International Nuclide Transport Code Intercomparison Study

    International Nuclear Information System (INIS)

    1984-09-01

    The purpose of the project is to obtain improved knowledge of the influence of various strategies for radionuclide transport modelling for the safety assessment of final repositories for nuclear waste. This is a report of the first phase of the project which was devoted to a comparison of the numerical accuracy of the computer codes used in the study. The codes can be divided into five groups, namely advection-dispersion models, models including matrix diffusion and chemical effects and finally combined models. The results are presented as comparisons of calculations since the objective of level 1 was code verification. (G.B.)
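
The simplest model class compared in the study, 1-D advection-dispersion, has a closed-form solution (the Ogata-Banks expression for a constant-concentration inlet) that is commonly used in exactly this kind of code verification. A sketch with arbitrary parameter values:

```python
import math

# Ogata-Banks solution of 1-D advection-dispersion with a constant-
# concentration inlet: C/C0 at position x and time t, for velocity v and
# dispersion coefficient D. Parameter values below are arbitrary.
def ogata_banks(x, t, v, D):
    a = (x - v * t) / (2.0 * math.sqrt(D * t))
    b = (x + v * t) / (2.0 * math.sqrt(D * t))
    return 0.5 * (math.erfc(a) + math.exp(v * x / D) * math.erfc(b))

v, D = 1.0, 0.05
c_front = ogata_banks(x=1.0, t=1.0, v=v, D=D)   # at the front midpoint x = v t
c_ahead = ogata_banks(x=2.0, t=1.0, v=v, D=D)   # well ahead of the front

# The concentration sits near C0/2 at x = v t (slightly above it because of
# the inlet boundary term) and falls off sharply ahead of the front.
assert 0.5 < c_front < 0.6
assert c_ahead < 0.01
```

Benchmarks like INTRACOIN level 1 check that a transport code reproduces such analytic profiles before the codes are compared on problems with matrix diffusion and chemistry, where no closed form exists.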

  9. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations, which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.

  10. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios

  11. APC: A new code for Atmospheric Polarization Computations

    International Nuclear Information System (INIS)

    Korkin, Sergey V.; Lyapustin, Alexei I.; Rozanov, Vladimir V.

    2013-01-01

    A new polarized radiative transfer code Atmospheric Polarization Computations (APC) is described. The code is based on separation of the diffuse light field into anisotropic and smooth (regular) parts. The anisotropic part is computed analytically. The smooth regular part is computed numerically using the discrete ordinates method. Vertical stratification of the atmosphere, common types of bidirectional surface reflection and scattering by spherical particles or spheroids are included. A particular consideration is given to computation of the bidirectional polarization distribution function (BPDF) of the waved ocean surface. -- Highlights: •A new code, APC, has been developed. •The code was validated against well-known codes. •The BPDF for an arbitrary Mueller matrix is computed

  12. Coding theory on the m-extension of the Fibonacci p-numbers

    International Nuclear Information System (INIS)

    Basu, Manjusri; Prasad, Bandhu

    2009-01-01

    In this paper, we introduce a new Fibonacci G_{p,m} matrix for the m-extension of the Fibonacci p-numbers, where p (≥ 0) and m (> 0) are integers. Thereby, we discuss various properties of the G_{p,m} matrix and the coding theory that follows from it. We establish the relations among the code elements for all values of p and m. We also show that the relation among the code matrix elements for all values of p and m = 1 coincides with the relation among the code matrix elements for all values of p [Basu M, Prasad B. The generalized relations among the code elements for Fibonacci coding theory. Chaos, Solitons and Fractals (2008). doi: 10.1016/j.chaos.2008.09.030]. In general, the error-correcting ability of the method increases as p increases, but it is independent of m.
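The matrix flavor of Fibonacci coding can be illustrated in the classical p = 1, m = 1 case with the familiar Q-matrix; this is only a sketch of the idea behind the more general G_{p,m} construction discussed above, with a made-up message matrix:

```python
import numpy as np

# Classical Fibonacci Q-matrix: powers Q^n contain consecutive
# Fibonacci numbers, and det(Q^n) = (-1)^n, so Q^{-n} is integer too.
Q = np.array([[1, 1], [1, 0]], dtype=np.int64)

def encode(M, n):
    """Code matrix E = M * Q^n."""
    return M @ np.linalg.matrix_power(Q, n)

def decode(E, n):
    """Recover M = E * Q^{-n}; the inverse is an exact integer matrix."""
    Qn_inv = np.round(np.linalg.inv(np.linalg.matrix_power(Q, n))).astype(np.int64)
    return E @ Qn_inv

M = np.array([[3, 5], [2, 7]], dtype=np.int64)   # hypothetical message
E = encode(M, 8)
print(np.array_equal(decode(E, 8), M))  # True
```

The redundancy exploited for error correction comes from the known ratios between entries of Q^n, which for large n approach the golden ratio.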

  13. 3-D FEM Modeling of fiber/matrix interface debonding in UD composites including surface effects

    International Nuclear Information System (INIS)

    Pupurs, A; Varna, J

    2012-01-01

    Fiber/matrix interface debond growth is one of the main mechanisms of damage evolution in unidirectional (UD) polymer composites. Because the fiber strain to failure in polymer composites is smaller than that of the matrix, multiple fiber breaks occur at random positions when high mechanical stress is applied to the composite. The energy released by each fiber break is usually larger than that needed to create the break; therefore partial debonding of the fiber/matrix interface is typically observed. Thus the stiffness reduction of the UD composite has contributions both from the fiber breaks and from the interface debonds. The aim of this paper is to analyze debond growth in carbon fiber/epoxy and glass fiber/epoxy UD composites using fracture mechanics principles, by calculation of the energy release rate G_II. A 3-D FEM model is developed for calculating the energy release rate for fiber/matrix interface debonds at different locations in the composite, including the composite surface region where the stress state differs from that in the bulk composite. In the model an individual partially debonded fiber is surrounded by a matrix region and embedded in a homogenized composite.

  14. Overview of CSNI separate effects tests validation matrix

    Energy Technology Data Exchange (ETDEWEB)

    Aksan, N. [Paul Scherrer Institute, Villigen (Switzerland)]; Auria, F.D. [Univ. of Pisa (Italy)]; Glaeser, H. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), Garching (Germany)] [and others]

    1995-09-01

    An internationally agreed separate effects test (SET) validation matrix for thermal-hydraulic system codes has been established by a sub-group of the Task Group on Thermal Hydraulic System Behaviour, as requested by the OECD/NEA Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 2 on Coolant System Behaviour. The construction of such a matrix is an attempt to collect together, in a systematic way, the best sets of openly available test data for code validation, assessment and improvement, and also for quantitative code assessment with respect to quantifying the uncertainties in the modeling of individual phenomena by the codes. The methodology developed in the process of establishing the CSNI-SET validation matrix was an important outcome of the work on the SET matrix. In addition, all the choices which were made from the 187 identified facilities covering the 67 phenomena are presented, together with some discussion of the data base.

  15. Fisher Matrix Preloaded — FISHER4CAST

    Science.gov (United States)

    Bassett, Bruce A.; Fantaye, Yabebal; Hlozek, Renée; Kotze, Jacques

    The Fisher Matrix is the backbone of modern cosmological forecasting. We describe the Fisher4Cast software: a general-purpose, easy-to-use Fisher Matrix framework. It is open source, rigorously designed and tested, and includes a Graphical User Interface (GUI) with automated LaTeX file creation capability and point-and-click Fisher ellipse generation. Fisher4Cast was designed for ease of extension and, although written in Matlab, is easily portable to open-source alternatives such as Octave and Scilab. Here we use Fisher4Cast to present new 3D and 4D visualizations of the forecasting landscape and to investigate the effects of growth and curvature on future cosmological surveys. Early releases have been available since mid-2008. The current release of the code is Version 2.2, which is described here. For ease of reference, a Quick Start guide and the code used to produce the figures in this paper are included, in the hope that they will be useful to the cosmology and wider scientific communities.

  16. Application of consistent fluid added mass matrix to core seismic

    International Nuclear Information System (INIS)

    Koo, K. H.; Lee, J. H.

    2003-01-01

    In this paper, an algorithm for applying a consistent fluid added mass matrix, including the coupling terms, to core seismic analysis is developed and implemented in the SAC-CORE3.0 code. As an example, we assume a 7-hexagon system of the LMR core and carry out a vibration modal analysis and a nonlinear time-history seismic response analysis using SAC-CORE3.0. The consistent fluid added mass matrix is obtained using the finite element program FAMD (Fluid Added Mass and Damping). The results of the vibration modal analysis show that the core duct assemblies exhibit strongly coupled vibration modes, quite different from those under the in-air condition. The results of the time-history seismic analysis verify that the effects of the coupling terms of the consistent fluid added mass matrix are significant in the impact responses and the dynamic responses.

  17. Gyrokinetic Vlasov code including full three-dimensional geometry of experiments

    International Nuclear Information System (INIS)

    Nunami, Masanori; Watanabe, Tomohiko; Sugama, Hideo

    2010-03-01

    A new gyrokinetic Vlasov simulation code, GKV-X, is developed for investigating the turbulent transport in magnetic confinement devices with non-axisymmetric configurations. Effects of the magnetic surface shapes in a three-dimensional equilibrium obtained from the VMEC code are accurately incorporated. Linear simulations of the ion temperature gradient instabilities and the zonal flows in the Large Helical Device (LHD) configuration are carried out by the GKV-X code for a benchmark test against the GKV code. The frequency, the growth rate, and the mode structure of the ion temperature gradient instability are influenced by the VMEC geometrical data such as the metric tensor components of the Boozer coordinates for high poloidal wave numbers, while the difference between the zonal flow responses obtained by the GKV and GKV-X codes is found to be small in the core LHD region. (author)

  18. Design LDPC Codes without Cycles of Length 4 and 6

    Directory of Open Access Journals (Sweden)

    Kiseon Kim

    2008-04-01

    We present an approach for constructing LDPC codes without cycles of length 4 and 6. First, we design three submatrices with different shifting functions given by the proposed schemes; then we combine them into the matrix specified by the proposed approach; and finally, we expand the matrix into the desired parity-check matrix using identity matrices and cyclic shifts of the identity matrices. Simulation results in the AWGN channel verify that the BER of the proposed code is close to those of MacKay's random codes and Tanner's QC codes, and that the good BER performance of the proposed codes persists at high code rates.
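The final expansion step — replacing base-matrix entries with shifted identity matrices or zero blocks — can be sketched as follows. The base matrix of shift values here is an arbitrary placeholder, not one produced by the paper's shifting functions:

```python
import numpy as np

def expand(base, z):
    """Expand a base matrix of shift values into a binary parity-check matrix.

    Each entry s >= 0 becomes the z x z identity cyclically shifted s times;
    each -1 becomes the z x z zero block."""
    m, n = base.shape
    H = np.zeros((m * z, n * z), dtype=np.uint8)
    I = np.eye(z, dtype=np.uint8)
    for r in range(m):
        for c in range(n):
            s = base[r, c]
            if s >= 0:
                H[r*z:(r+1)*z, c*z:(c+1)*z] = np.roll(I, s, axis=1)
    return H

# Hypothetical 2 x 3 base matrix, expansion factor z = 4.
base = np.array([[0, 1, -1],
                 [2, -1, 0]])
H = expand(base, 4)
print(H.shape)  # (8, 12)
```

Each nonnegative entry yields a circulant permutation block, which is what makes the resulting code quasi-cyclic and its girth controllable through the choice of shifts.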

  19. Soft decoding a self-dual (48, 24; 12) code

    Science.gov (United States)

    Solomon, G.

    1993-01-01

    A self-dual (48,24;12) code comes from restricting a binary cyclic (63,18;36) code to a 6 x 7 matrix, adding an eighth all-zero column, and then adjoining six dimensions to this extended 6 x 8 matrix. These six dimensions are generated by linear combinations of row permutations of a 6 x 8 matrix of weight 12, whose sums of rows and columns add to one. A soft decoding that uses these properties and approximates maximum likelihood is presented here. This is preliminary to a possible soft decoding of the box (72,36;15) code, which promises a 7.7-dB theoretical coding gain under maximum likelihood.

  20. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf

  1. Convolutional cylinder-type block-circulant cycle codes

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2013-06-01

    In this paper, we consider a class of column-weight-two quasi-cyclic low-density parity-check codes in which the girth can be an arbitrary multiple of 8, and hence as large as desired. We then give these codes a convolutional form, such that their generator matrix can be obtained by elementary row and column operations on the parity-check matrix. Finally, we show that the free distance of the convolutional codes is equal to the minimum distance of their block counterparts.

  2. Transoptr - a second order beam transport design code with automatic internal optimization and general constraints

    International Nuclear Information System (INIS)

    Heighway, E.A.

    1980-07-01

    A second order beam transport design code with parametric optimization is described. The code analyzes the transport of charged particle beams through a user defined magnet system. The magnet system parameters are varied (within user defined limits) until the properties of the transported beam and/or the system transport matrix match those properties requested by the user. The code uses matrix formalism to represent the transport elements and optimization is achieved using the variable metric method. Any constraints that can be expressed algebraically may be included by the user as part of his design. Instruction in the use of the program is given. (auth)
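The matrix formalism the code rests on can be illustrated with first-order 2 x 2 transfer matrices for a drift and a thin-lens quadrupole; the lengths and focal length below are arbitrary illustrative values, and TRANSOPTR itself works to second order:

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L [m] for (x, x')."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole, focusing in this plane for f > 0 [m]."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# Beam traverses drift(1 m) -> quad(f = 0.5 m) -> drift(1 m);
# the system matrix is the product of element matrices in reverse order.
M = drift(1.0) @ thin_quad(0.5) @ drift(1.0)

# Trace a single ray entering at x = 1 mm, parallel to the axis.
x_out = M @ np.array([1e-3, 0.0])

print(np.isclose(np.linalg.det(M), 1.0))  # True: det = 1 for each element
```

Optimization codes of this kind vary element parameters (here L and f) until the system matrix or the transported beam matches the requested properties.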

  3. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses a data library that is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly-developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S, and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including Transmutation Trajectory Analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining instantaneous quantities such as radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared with ORIGEN-2 prove the validity of DEPTH in point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code in coupling with the RMC Monte Carlo code
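The matrix exponential approach to point depletion can be sketched for a two-nuclide decay chain; the decay constants are made up, and a truncated Taylor series stands in for production-grade solvers such as CRAM or QRAM:

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Matrix exponential by truncated Taylor series (fine for small ||M||;
    stiff real-world chains need CRAM-like rational approximations)."""
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

# dN/dt = A N for a chain parent -> daughter -> (out);
# hypothetical decay constants [1/s].
lam1, lam2 = 0.5, 0.1
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])     # Bateman matrix of the chain
N0 = np.array([1.0, 0.0])          # start with pure parent
t = 2.0
N = expm_taylor(A * t) @ N0        # N(t) = expm(A t) N(0)

# Analytic Bateman solution for comparison.
n1 = np.exp(-lam1 * t)
n2 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
print(np.allclose(N, [n1, n2]))  # True
```

Real depletion systems have thousands of nuclides and decay constants spanning many orders of magnitude, which is why the stiff-system algorithms listed in the abstract exist.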

  4. Response Matrix Method Development Program at Savannah River Laboratory

    International Nuclear Information System (INIS)

    Sicilian, J.M.

    1976-01-01

    The Response Matrix Method Development Program at Savannah River Laboratory (SRL) has concentrated on the development of an effective system of computer codes for the analysis of Savannah River Plant (SRP) reactors. The most significant contribution of this program to date has been the verification of the accuracy of diffusion theory codes as used for routine analysis of SRP reactor operation. This paper documents the two steps carried out in achieving this verification: confirmation of the accuracy of the response matrix technique through comparison with experiment and Monte Carlo calculations; and establishment of agreement between diffusion theory and response matrix codes in situations which realistically approximate actual operating conditions

  5. Digital Data Matrix Scanner Development at Marshall Space Flight Center

    Science.gov (United States)

    2004-01-01

    Research at NASA's Marshall Space Flight Center has resulted in a system for reading hidden identification codes using a hand-held magnetic scanner. It's an invention that could help businesses improve inventory management, enhance safety, improve security, and aid in recall efforts if defects are discovered. Two-dimensional Data Matrix symbols consisting of letters and numbers permanently etched on items for identification and resembling a small checkerboard pattern are more efficient and reliable than traditional bar codes, and can store up to 100 times more information. A team led by Fred Schramm of the Marshall Center's Technology Transfer Department, in partnership with PRI, Torrance, California, has developed a hand-held device that can read this special type of coded symbols, even if covered by up to six layers of paint. Before this new technology was available, matrix symbols were read with optical scanners, and only if the codes were visible. This latest improvement in digital Data Matrix technologies offers greater flexibility for businesses and industries already using the marking system. Paint, inks, and pastes containing magnetic properties are applied in matrix symbol patterns to objects with two-dimensional codes, and the codes are read by a magnetic scanner, even after being covered with paint or other coatings. The ability to read hidden matrix symbols promises a wide range of benefits in a number of fields, including airlines, electronics, healthcare, and the automotive industry. Many industries would like to hide information on a part, so it can be read only by the party who put it there. For instance, the automotive industry uses direct parts marking for inventory control, but for aesthetic purposes the marks often need to be invisible. Symbols have been applied to a variety of materials, including metal, plastic, glass, paper, fabric and foam, on everything from electronic parts to pharmaceuticals to livestock. The portability of the hand

  6. Computer code for double beta decay QRPA based calculations

    Energy Technology Data Exchange (ETDEWEB)

    Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina); Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil); Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil); Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)

    2014-11-11

    The computer code developed by our group some years ago for the evaluation of nuclear matrix elements, within the QRPA and PQRPA nuclear structure models, involved in neutrino-nucleus reactions, muon capture and β{sup ±} processes, is extended to include also the nuclear double beta decay.

  7. User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.

    Science.gov (United States)

    Reddy, C. J.

    2000-01-01

    PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse linear systems. PCSMS converts complex matrices into real matrices and uses real, sparse direct matrix solvers to factor and solve the real matrices. The solution vector is then reconverted to complex numbers. Though this utility is written for the Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate the PCSMS routines into their own codes.
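The core conversion PCSMS performs — rewriting a complex system as a real system of twice the size — can be sketched as follows; a dense `np.linalg.solve` stands in here for the real sparse direct solver:

```python
import numpy as np

def solve_complex_via_real(Ac, bc):
    """Solve (A + iB)(x + iy) = b + ic via the equivalent real system
    [[A, -B], [B, A]] [x; y] = [b; c], then reassemble the complex result."""
    A, B = Ac.real, Ac.imag
    b, c = bc.real, bc.imag
    n = Ac.shape[0]
    M = np.block([[A, -B],
                  [B,  A]])
    rhs = np.concatenate([b, c])
    xy = np.linalg.solve(M, rhs)   # stand-in for a real sparse direct solver
    return xy[:n] + 1j * xy[n:]

# Small made-up complex system for illustration.
Ac = np.array([[2 + 1j, 1 - 1j],
               [0 + 2j, 3 + 0j]])
bc = np.array([1 + 0j, 2 - 1j])
x = solve_complex_via_real(Ac, bc)
print(np.allclose(Ac @ x, bc))  # True
```

The price of this trick is that the real system has twice the dimension (and the block structure affects sparsity), which is why a dedicated complex solver can still be preferable when one exists.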

  8. Stochastic-Strength-Based Damage Simulation Tool for Ceramic Matrix and Polymer Matrix Composite Structures

    Science.gov (United States)

    Nemeth, Noel N.; Bednarcyk, Brett A.; Pineda, Evan J.; Walton, Owen J.; Arnold, Steven M.

    2016-01-01

    Stochastic-based, discrete-event progressive damage simulations of ceramic-matrix composite and polymer matrix composite material structures have been enabled through the development of a unique multiscale modeling tool. This effort involves coupling three independently developed software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/ Life), and (3) the Abaqus finite element analysis (FEA) program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating unit cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC. Abaqus is used at the global scale to model the overall composite structure. An Abaqus user-defined material (UMAT) interface, referred to here as "FEAMAC/CARES," was developed that enables MAC/GMC and CARES/Life to operate seamlessly with the Abaqus FEA code. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events, which incrementally progress and lead to ultimate structural failure. This report describes the FEAMAC/CARES methodology and discusses examples that illustrate the performance of the tool. A comprehensive example problem, simulating the progressive damage of laminated ceramic matrix composites under various off-axis loading conditions and including a double notched tensile specimen geometry, is described in a separate report.

  9. Overview of the ArbiTER edge plasma eigenvalue code

    Science.gov (United States)

    Baver, Derek; Myra, James; Umansky, Maxim

    2011-10-01

    The Arbitrary Topology Equation Reader, or ArbiTER, is a flexible eigenvalue solver that is currently under development for plasma physics applications. The ArbiTER code builds on the equation parser framework of the existing 2DX code, extending it to include a topology parser. This will give the code the capability to model problems with complicated geometries (such as multiple X-points and scrape-off layers) or model equations with arbitrary numbers of dimensions (e.g. for kinetic analysis). In the equation parser framework, model equations are not included in the program's source code. Instead, an input file contains instructions for building a matrix from profile functions and elementary differential operators. The program then executes these instructions in a sequential manner. These instructions may also be translated into analytic form, thus giving the code transparency as well as flexibility. We will present an overview of how the ArbiTER code is to work, as well as preliminary results from early versions of this code. Work supported by the U.S. DOE.

  10. Positron scattering by atomic hydrogen including positronium formation

    International Nuclear Information System (INIS)

    Higgins, K.; Burke, P.G.

    1993-01-01

    Positron scattering by atomic hydrogen including positronium formation has been formulated using the R-matrix method and a general computer code written. Partial wave elastic and ground state positronium formation cross sections have been calculated for L ≤ 6 using a six-state approximation which includes the ground state and the 2s and 2p pseudostates of both hydrogen and positronium. The elastic scattering results obtained are in good agreement with those derived from a highly accurate calculation based upon the intermediate energy R-matrix approach. As in a previous coupled-channel static calculation, resonance effects are observed at intermediate energies in the S-wave positronium formation cross section. However, in the present results, the dominant resonance arises in the P-wave cross sections at an energy of 2.73 Ryd and with a width of 0.19 Ryd. (author)

  11. Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes

    Science.gov (United States)

    2015-11-01

    Memorandum: Simulation of Weld Mechanical Behavior to Include Welding-Induced Residual Stress and Distortion: Coupling of SYSWELD and Abaqus Codes, by Charles R. Fisher.

  12. Kinetic models of gene expression including non-coding RNAs

    Energy Technology Data Exchange (ETDEWEB)

    Zhdanov, Vladimir P., E-mail: zhdanov@catalysis.r

    2011-03-15

    In cells, genes are transcribed into mRNAs, and the latter are translated into proteins. Due to the feedbacks between these processes, the kinetics of gene expression may be complex even in the simplest genetic networks. The corresponding models have already been reviewed in the literature. A new avenue in this field is related to the recognition that the conventional scenario of gene expression is fully applicable only to prokaryotes whose genomes consist of tightly packed protein-coding sequences. In eukaryotic cells, in contrast, such sequences are relatively rare, and the rest of the genome includes numerous transcript units representing non-coding RNAs (ncRNAs). During the past decade, it has become clear that such RNAs play a crucial role in gene expression and accordingly influence a multitude of cellular processes both in the normal state and during diseases. The numerous biological functions of ncRNAs are based primarily on their abilities to silence genes via pairing with a target mRNA and subsequently preventing its translation or facilitating degradation of the mRNA-ncRNA complex. Many other abilities of ncRNAs have been discovered as well. Our review is focused on the available kinetic models describing the mRNA, ncRNA and protein interplay. In particular, we systematically present the simplest models without kinetic feedbacks, models containing feedbacks and predicting bistability and oscillations in simple genetic networks, and models describing the effect of ncRNAs on complex genetic networks. Mathematically, the presentation is based primarily on temporal mean-field kinetic equations. The stochastic and spatio-temporal effects are also briefly discussed.
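A minimal mean-field model of ncRNA-mediated silencing of the kind reviewed can be sketched with temporal kinetic equations integrated by forward Euler; all rate constants below are invented for illustration:

```python
# Mean-field kinetics: ncRNA a pairs with mRNA m and the complex degrades,
# lowering mRNA and hence protein p. All rate constants are hypothetical.
k_m, k_a, k_p = 1.0, 0.8, 2.0   # synthesis rates
d_m, d_a, d_p = 0.1, 0.1, 0.2   # first-order degradation rates
k_pair = 1.0                    # mRNA-ncRNA pairing (mutual degradation)

m = a = p = 0.0
dt = 0.01
for _ in range(200000):         # forward-Euler integration to steady state
    dm = k_m - d_m * m - k_pair * m * a
    da = k_a - d_a * a - k_pair * m * a
    dp = k_p * m - d_p * p
    m += dt * dm
    a += dt * da
    p += dt * dp

# With the ncRNA present, steady-state mRNA (and hence protein) sits well
# below the unsilenced level k_m / d_m = 10.
print(m < k_m / d_m)  # True
```

Adding feedback terms (e.g. making k_a depend on p) is what produces the bistability and oscillations discussed in the review.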

  13. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and practical code length that support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase-induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed by simple algebraic means. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms previously reported codes. In addition, simulation results from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans at high data rates.

  14. CEMCAN Software Enhanced for Predicting the Properties of Woven Ceramic Matrix Composites

    Science.gov (United States)

    Murthy, Pappu L. N.; Mital, Subodh K.; DiCarlo, James A.

    2000-01-01

    Major advancements are needed in current high-temperature materials to meet the requirements of future space and aeropropulsion structural components. Ceramic matrix composites (CMC's) are one class of materials that are being evaluated as candidate materials for many high-temperature applications. Past efforts to improve the performance of CMC's focused primarily on improving the properties of the fiber, interfacial coatings, and matrix constituents as individual phases. Design and analysis tools must take into consideration the complex geometries, microstructures, and fabrication processes involved in these composites and must allow the composite properties to be tailored for optimum performance. Major accomplishments during the past year include the development and inclusion of woven CMC micromechanics methodology into the CEMCAN (Ceramic Matrix Composites Analyzer) computer code. The code enables one to calibrate a consistent set of constituent properties as a function of temperature with the aid of experimentally measured data.

  15. New binary linear codes which are dual transforms of good codes

    NARCIS (Netherlands)

    Jaffe, D.B.; Simonis, J.

    1999-01-01

    If C is a binary linear code, one may choose a subset S of C, and form a new code CST which is the row space of the matrix having the elements of S as its columns. One way of picking S is to choose a subgroup H of Aut(C) and let S be some H-stable subset of C. Using (primarily) this method for

  16. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
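The classical Gauss-Seidel special case that MSM generalizes can be written directly as a matrix splitting, sketched here on a small made-up linear system:

```python
import numpy as np

def gauss_seidel(A, b, iters=50):
    """Gauss-Seidel as the splitting A = (D + L) + U:
    iterate (D + L) x_{k+1} = b - U x_k."""
    DL = np.tril(A)        # D + L: lower-triangular part incl. diagonal
    U = np.triu(A, k=1)    # strict upper-triangular part
    x = np.zeros_like(b)
    for _ in range(iters):
        x = np.linalg.solve(DL, b - U @ x)   # triangular solve each sweep
    return x

# Symmetric positive definite (and diagonally dominant), so GS converges.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
x = gauss_seidel(A, b)
print(np.allclose(A @ x, b))  # True
```

MSM replaces this fixed triangular splitting with a designed one and embeds it in a proximal scheme so that nonsmooth composite objectives, not just linear systems, can be handled.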

  17. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  18. The finite element response matrix method

    International Nuclear Information System (INIS)

    Nakata, H.; Martin, W.R.

    1983-02-01

    A new technique is developed with an alternative formulation of the response matrix method implemented with the finite element scheme. Two types of response matrices are generated from the Galerkin solution to the weak form of the diffusion equation subject to an arbitrary current and source. The piecewise polynomials are defined on two levels, the first for the local (assembly) calculations and the second for the global (core) response matrix calculations. This finite element response matrix technique was tested on two 2-dimensional test problems, the 2D-IAEA benchmark problem and the Biblis benchmark problem, with satisfactory results. The computational time, although the current code is not extensively optimized, is of the same order as that of the well-established coarse-mesh codes. Furthermore, the application of the finite element technique in an alternative formulation of the response matrix method permits the method to easily incorporate additional capabilities such as the treatment of spatially dependent cross-sections, arbitrary geometrical configurations, and highly heterogeneous assemblies. (Author)

  19. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1. The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2. The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion/decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally, moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other
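    The record's title names the matrix exponential method that ORIGEN-S uses for depletion/decay. The following hedged sketch (not ORIGEN's algorithm) applies the same idea to a two-nuclide decay chain, solving dN/dt = A N as N(t) = exp(At) N(0) and checking the daughter against the analytical Bateman solution; the decay constants are made up for illustration.

```python
import numpy as np

# Decay chain N1 -> N2 -> (stable): dN/dt = A N, with N(t) = expm(A t) N0.
lam1, lam2 = 0.3, 0.05          # decay constants (1/s), illustrative values
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
N0 = np.array([1000.0, 0.0])    # initial atom counts
t = 10.0

# Matrix exponential via eigendecomposition (A is diagonalizable here
# because lam1 != lam2).
w, V = np.linalg.eig(A)
expAt = (V @ np.diag(np.exp(w * t)) @ np.linalg.inv(V)).real
N = expAt @ N0

# Analytical Bateman solution for the daughter nuclide, for comparison.
N2_exact = N0[0] * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
```

    Production codes use more robust matrix-exponential evaluations (e.g. truncated series with scaling), but the governing equation is the same.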

  20. UEP Concepts in Modulation and Coding

    Directory of Open Access Journals (Sweden)

    Werner Henkel

    2010-01-01

    First unequal error protection (UEP) proposals date back to the 1960s (Masnick and Wolf, 1967), but with the introduction of scalable video, UEP has developed into a key concept for the transport of multimedia data. The paper presents an overview of some new approaches realizing UEP properties in physical transport, especially multicarrier modulation, or with LDPC and Turbo codes. For multicarrier modulation, UEP bit-loading together with hierarchical modulation is described, allowing for an arbitrary number of classes, arbitrary SNR margins between the classes, and an arbitrary number of bits per class. In Turbo coding, pruning, as a counterpart of puncturing, is presented for flexible bit-rate adaptations, including tables with optimized pruning patterns. Bit- and/or check-irregular LDPC codes may be designed to provide UEP to their code bits. However, irregular degree distributions alone do not ensure UEP, and other necessary properties of the parity-check matrix for providing UEP are also pointed out. Pruning is also the means for constructing variable-rate LDPC codes for UEP, especially controlling the check-node profile.

  1. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Syndrome coding using linear codes is a technique that allows improvement of the steganographic algorithm's parameters. The use of random linear codes gives great flexibility in choosing the parameters of the linear code. In parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear code [8, 2] was used as a base for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters based on test images, was made. Keywords: steganography, random linear codes, RLC, LSB
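    The paper above uses a random [8, 2] linear code; as a hedged illustration of the underlying syndrome-coding (matrix embedding) idea, the sketch below instead uses the well-known [7,4] Hamming parity-check matrix, whose columns enumerate 1..7, so any 3-bit message can be embedded into 7 cover LSBs by flipping at most one bit.

```python
# Syndrome coding (matrix embedding) with the [7,4] Hamming parity-check
# matrix H: the syndrome of the stego bits carries the message, and the
# columns of H enumerate 1..7, so at most one cover bit must be flipped.
H = [[(j >> i) & 1 for j in range(1, 8)] for i in range(3)]  # 3x7 over GF(2)

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, message):
    """Return stego bits with H @ stego = message (mod 2), <= 1 bit changed."""
    s = syndrome(cover)
    d = [si ^ mi for si, mi in zip(s, message)]   # required syndrome change
    idx = d[0] | (d[1] << 1) | (d[2] << 2)        # column of H equal to d
    stego = cover[:]
    if idx:
        stego[idx - 1] ^= 1                       # columns are 1-indexed
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]    # illustrative LSBs of 7 pixels
message = [1, 0, 1]
stego = embed(cover, message)    # extraction is just syndrome(stego)
```

    The same embed/extract pattern applies to the paper's random codes, with the syndrome-to-coset-leader step replacing the column lookup.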

  2. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael; Duursma, Iwan; Dau, Hoang; Hassibi, Babak

    2017-01-01

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.
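    The sparsity claim at the end of the abstract (a generator matrix in which each row has exactly d nonzeros) has a simple analogue for any cyclic code: stacking cyclic shifts of the generator polynomial gives rows of weight wt(g). A hedged binary illustration, using the cyclic [7,4] Hamming code rather than a Tamo-Barg or Reed-Solomon code:

```python
import numpy as np
from itertools import product

# Cyclic [7,4] Hamming code with generator polynomial g(x) = 1 + x + x^3.
# Rows of G are cyclic shifts of g's coefficient vector, each of weight 3.
g = np.array([1, 1, 0, 1, 0, 0, 0])
G = np.array([np.roll(g, i) for i in range(4)])

# Enumerate all 16 codewords; the minimum nonzero weight is d = 3, so each
# row of G attains the minimum possible weight for a generator row.
weights = [int(((np.array(m) @ G) % 2).sum())
           for m in product([0, 1], repeat=4) if any(m)]
d = min(weights)
```

    For Reed-Solomon codes the analogous shifted-generator matrix has exactly d nonzeros per row, which is the "sparsest" property the abstract cites.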

  3. Balanced and sparse Tamo-Barg codes

    KAUST Repository

    Halbawi, Wael

    2017-08-29

    We construct balanced and sparse generator matrices for Tamo and Barg's Locally Recoverable Codes (LRCs). More specifically, for a cyclic Tamo-Barg code of length n, dimension k and locality r, we show how to deterministically construct a generator matrix where the number of nonzeros in any two columns differs by at most one, and where the weight of every row is d + r - 1, where d is the minimum distance of the code. Since LRCs are designed mainly for distributed storage systems, the results presented in this work provide a computationally balanced and efficient encoding scheme for these codes. The balanced property ensures that the computational effort exerted by any storage node is essentially the same, whilst the sparse property ensures that this effort is minimal. The work presented in this paper extends a similar result previously established for Reed-Solomon (RS) codes, where it is now known that any cyclic RS code possesses a generator matrix that is balanced as described, but is sparsest, meaning that each row has d nonzeros.

  4. Contaminant transport in fracture networks with heterogeneous rock matrices. The Picnic code

    International Nuclear Information System (INIS)

    Barten, Werner; Robinson, Peter C.

    2001-02-01

    timescales. To account for one-dimensional matrix diffusion into homogeneous planar or cylindrical rock layers, analytical relations in the Laplace domain are used. To deal with one-dimensional or two-dimensional matrix diffusion into heterogeneous rock matrices, a finite-element method is embedded. The capability of the code for handling two-dimensional matrix diffusion is - to our knowledge - unique in fracture network modelling. To ensure the reliability of the code, which merges methods from graph theory, Laplace transformation, finite-element methods, analytical and algebraic transformations and a convolution to calculate complex radionuclide transport processes over a large and diverse application range, implementation of the code and careful verification have been alternated for iterative improvement and especially the elimination of bugs. The internal mathematical structure of PICNIC forms the basis of the verification strategy. The code is verified in a series of seven steps with increasing complexity of the rock matrix. Calculations for single nuclides and nuclide decay chains are carefully tested and analysed for radionuclide transport in single legs, in pathways and in networks. Different sources and boundary conditions are considered. Quantitative estimates of the accuracy of the code are derived from comparisons with analytical solutions, cross-comparisons with other codes and different types of self-consistency tests, including extended testing of different refinements of the embedded finite-element method for different rock matrix geometries. The geosphere barrier efficiency is a good single indicator of the code accuracy. Application ranges with reduced accuracy of the code are also considered. For one-dimensional matrix diffusion into homogeneous and heterogeneous rock matrices, cross-comparisons with other codes are performed. For two-dimensional matrix diffusion, however, no code for cross-comparison is available.
Consequently, the verification for

  5. Contaminant transport in fracture networks with heterogeneous rock matrices. The Picnic code

    Energy Technology Data Exchange (ETDEWEB)

    Barten, Werner [Paul Scherrer Inst., CH-5232 Villigen PSI (Switzerland); Robinson, Peter C. [QuantiSci Limited, Henley-on-Thames (United Kingdom)

    2001-02-01

    different timescales. To account for one-dimensional matrix diffusion into homogeneous planar or cylindrical rock layers, analytical relations in the Laplace domain are used. To deal with one-dimensional or two-dimensional matrix diffusion into heterogeneous rock matrices, a finite-element method is embedded. The capability of the code for handling two-dimensional matrix diffusion is - to our knowledge - unique in fracture network modelling. To ensure the reliability of the code, which merges methods from graph theory, Laplace transformation, finite-element methods, analytical and algebraic transformations and a convolution to calculate complex radionuclide transport processes over a large and diverse application range, implementation of the code and careful verification have been alternated for iterative improvement and especially the elimination of bugs. The internal mathematical structure of PICNIC forms the basis of the verification strategy. The code is verified in a series of seven steps with increasing complexity of the rock matrix. Calculations for single nuclides and nuclide decay chains are carefully tested and analysed for radionuclide transport in single legs, in pathways and in networks. Different sources and boundary conditions are considered. Quantitative estimates of the accuracy of the code are derived from comparisons with analytical solutions, cross-comparisons with other codes and different types of self-consistency tests, including extended testing of different refinements of the embedded finite-element method for different rock matrix geometries. The geosphere barrier efficiency is a good single indicator of the code accuracy. Application ranges with reduced accuracy of the code are also considered. For one-dimensional matrix diffusion into homogeneous and heterogeneous rock matrices, cross-comparisons with other codes are performed. For two-dimensional matrix diffusion, however, no code for cross-comparison is available.
Consequently, the

  6. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
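    A hedged sketch of the second lifting stage described above: each nonzero entry of a protograph base matrix is replaced by a Z x Z circulant permutation matrix (a cyclically shifted identity) and each zero by a Z x Z zero block. The base matrix and shift values here are made up for illustration, not taken from the patent.

```python
import numpy as np

def lift(base, shifts, Z):
    """Lift a protograph: replace each 1 in `base` with the Z x Z identity
    cyclically shifted by the matching entry of `shifts`, each 0 with a
    Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(np.eye(Z, dtype=int),
                                                      shifts[i, j], axis=1)
    return H

base   = np.array([[1, 1, 1, 0],
                   [0, 1, 1, 1]])       # illustrative protograph
shifts = np.array([[0, 1, 2, 0],
                   [0, 3, 1, 2]])       # illustrative circulant shifts
H = lift(base, shifts, Z=5)             # quasi-cyclic parity-check matrix
```

    Lifting preserves the protograph's degree profile: every lifted row keeps the row weight of its base row.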

  7. Reaction matrix calculation of 4He including Δ degrees of freedom

    International Nuclear Information System (INIS)

    Wakamatsu, Masashi.

    1979-06-01

    The effects of the Δ(3-3 resonance) components on the binding energy of 4He are studied within the framework of the reaction matrix theory. In this approach, the Δ configurations in 4He are introduced in terms of the NΔ transition potential by solving the reaction matrix equation, and thus it goes beyond perturbation theory with the NΔ transition potential. Not only the two-body cluster energy but also the three-body cluster energy containing Δ configurations is calculated. (author)

  8. Widening the Scope of R-matrix Methods

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Ian J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dimitriou, Paraskevi [IAEA, Vienna (Austria); DeBoer, Richard J. [Nieuwland Science Hall, Notre Dame, IN (United States); Kunieda, Satoshi [Nuclear Data Center (JAEA), Tokai (Japan); Paris, Mark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Thompson, Ian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Trkov, Andrej [IAEA, Vienna (Austria)

    2016-03-01

    A Consultant’s Meeting was held at the IAEA Headquarters, from 7 to 9 December 2015, to discuss the status of R-matrix codes currently used in calculations of charged-particle induced reaction cross sections at low energies. The ultimate goal was to initiate an international effort, coordinated by the IAEA, to evaluate charged-particle induced reactions in the resolved-resonance region. Participants reviewed the capabilities of the codes, the different implementations of R-matrix theory and translatability of the R-matrix parameters, the evaluation methods and suitable data formats for broader dissemination. The details of the presentations and technical discussions, as well as the actions that were proposed to achieve the goal of the meeting are summarized in this report.

  9. Evaluation of the MMCLIFE 3.0 code in predicting crack growth in titanium aluminide composites

    International Nuclear Information System (INIS)

    Harmon, D.; Larsen, J.M.

    1999-01-01

    Crack growth and fatigue life predictions made with the MMCLIFE 3.0 code are compared to test data for unidirectional, continuously reinforced SCS-6/Ti-14Al-21Nb (wt pct) composite laminates. The MMCLIFE 3.0 analysis package is a design tool capable of predicting strength and fatigue performance in metal matrix composite (MMC) laminates. The code uses a combination of micromechanic lamina and macromechanic laminate analyses to predict stresses and uses linear elastic fracture mechanics to predict crack growth. The crack growth analysis includes a fiber bridging model to predict the growth of matrix flaws in 0 degree laminates and is capable of predicting the effects of interfacial shear stress and thermal residual stresses. The code has also been modified to include edge-notch flaws in addition to center-notch flaws. The model was correlated with constant amplitude, isothermal data from crack growth tests conducted on 0- and 90 degree SCS-6/Ti-14-21 laminates. Spectrum fatigue tests were conducted, which included dwell times and frequency effects. Strengths of the analysis and areas for improvement are discussed.

  10. The assessment of containment codes by experiments simulating severe accident scenarios

    International Nuclear Information System (INIS)

    Karwat, H.

    1992-01-01

    Hitherto, a generally applicable validation matrix for codes simulating the containment behaviour under severe accident conditions did not exist. Past code applications have shown that most problems may be traced back to inaccurate thermalhydraulic parameters governing gas- or aerosol-distribution events. A provisional code-validation matrix is proposed, based on a careful selection of containment experiments performed during recent years in relevant test facilities under various operating conditions. The matrix focuses on the thermalhydraulic aspects of the containment behaviour after severe accidents as a first important step. It may be supplemented in the future by additional suitable tests

  11. Construction of self-dual codes in the Rosenbloom-Tsfasman metric

    Science.gov (United States)

    Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin

    2017-12-01

    Linear codes are among the most basic objects in coding theory and are very useful in practice. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is a very important one, because it contains some of the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric is different from the Euclidean inner product that is used to define duality in the Hamming metric. Most of the codes which are self-dual in the Hamming metric are not so in the RT-metric. Moreover, the generator matrix is essential for constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix. We also illustrate examples for each kind of construction.
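    The paper works in the RT-metric, whose inner product differs from the Euclidean one. As a simpler hedged illustration of self-duality in the Hamming-metric setting the abstract starts from, the sketch below verifies that the extended [8,4] Hamming code satisfies G G^T = 0 (mod 2): every row is orthogonal to every row, so the code lies in its dual, and since n = 2k it equals its dual.

```python
import numpy as np

# Systematic generator matrix G = [I | A] of the extended [8,4] Hamming
# code, a classical binary self-dual code under the Euclidean inner product.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
G = np.hstack([np.eye(4, dtype=int), A])

# The Gram matrix over GF(2) is all-zero iff the code is self-orthogonal;
# with n = 2k that is equivalent to self-duality.
gram = (G @ G.T) % 2
```

    Replacing the Euclidean inner product here by the RT-metric inner product is exactly the change the paper studies.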

  12. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Ilow Jacek

    2010-01-01

    This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of information packets to construct redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. A decoding algorithm based on syndrome decoding is presented to correct a single erroneous packet in a group of received packets. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.
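    A hedged toy sketch of the simplest row of such a construction, the all-ones (zero-shift) row of the Vandermonde matrix: the redundant packet is the bitwise XOR of the information packets, which already suffices to recover a single lost packet. Packets are modeled as fixed-width Python integers; the paper's full design additionally applies bit-level right shifts to form the other redundant packets.

```python
# Packets as fixed-width bit strings, modeled here as 8-bit integers.
info = [0b10110010, 0b01101100, 0b11110000]   # illustrative packets

# Redundant packet from the shift-free Vandermonde row: plain XOR parity.
parity = 0
for p in info:
    parity ^= p

# Erasure recovery: if one packet is lost, XOR the parity with the
# surviving packets to reconstruct it.
lost_index = 1
survivors = [p for i, p in enumerate(info) if i != lost_index]
recovered = parity
for p in survivors:
    recovered ^= p
```

    Additional parity packets built with nonzero shifts are what let the full scheme handle more than one erasure per block.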

  13. Updated users' guide for SAMMY: multilevel R-matrix fits to neutron data using Bayes' equations. Revision 1

    International Nuclear Information System (INIS)

    Larson, N.M.

    1985-04-01

    In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has undergone significant modifications: (1) User-friendly options have been incorporated to streamline common operations and to protect a run from common user errors. (2) The Reich-Moore formalism has been extended to include an optional logarithmic parameterization of the external R-matrix, for which any or all parameters may be varied. (3) The ability to vary sample thickness, effective temperature, matching radius, and/or resolution-broadening parameters has been incorporated. (4) To avoid loss of information (i.e., computer round-off errors) between runs, the "covariance file" now includes precise values for all variables. (5) Unused but correlated variables may be included in the analysis. Because of these and earlier changes, the 1980 SAMMY manual is now obsolete. This report is intended to be complete documentation for the current version of SAMMY. In August of 1984 the users' guide for version P of the multilevel multichannel R-matrix code SAMMY was published. Recently, major changes within SAMMY have led to the creation of version O, which is documented in this report. Among these changes are: (1) an alternative matrix-manipulation method for use in certain special cases; (2) division of theoretical cross-section generation and broadening operations into separate segments of the code; (3) an option to use the multilevel Breit-Wigner approximation to generate theoretical cross sections; (4) new input options; (5) renaming all temporary files as SAM...DAT; (6) more sophisticated use of temporary files to maximize the number of data points that may be analyzed in a single run; and (7) significant internal restructuring of the code in preparation for changes described here and for planned future changes.

  14. An implicit Smooth Particle Hydrodynamic code

    Energy Technology Data Exchange (ETDEWEB)

    Knapp, Charles E. [Univ. of New Mexico, Albuquerque, NM (United States)

    2000-05-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code, the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless, and use particles to model the fluids and solids. The implicit code makes use of Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix, and sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single jet of gas it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
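    A minimal sketch of two ingredients the abstract names, Newton-Raphson corrections with a Jacobian built from numerical derivatives, applied to a small made-up nonlinear system. The actual code couples this with Krylov solvers and SPH residuals, which are not reproduced here.

```python
import numpy as np

def F(x):
    # Illustrative nonlinear residual (circle intersected with a line),
    # standing in for the SPH equations.
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     x[0] - x[1]])

def fd_jacobian(F, x, eps=1e-7):
    """Jacobian by forward finite differences, i.e. numerical derivatives."""
    n = len(x)
    J = np.empty((n, n))
    f0 = F(x)
    for j in range(n):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

x = np.array([1.0, 2.0])
for _ in range(20):                       # Newton-Raphson corrections
    dx = np.linalg.solve(fd_jacobian(F, x), -F(x))
    x += dx
# x converges to (sqrt(2), sqrt(2))
```

    For large sparse systems one would replace `np.linalg.solve` with a Krylov iteration, which is the Newton-Krylov combination the abstract describes.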

  15. New intracellular activities of matrix metalloproteinases shine in the moonlight.

    Science.gov (United States)

    Jobin, Parker G; Butler, Georgina S; Overall, Christopher M

    2017-11-01

    Adaptation of a single protein to perform multiple independent functions facilitates functional plasticity of the proteome, allowing a limited number of protein-coding genes to perform a multitude of cellular processes. Multifunctionality is achievable by post-translational modifications and by modulating subcellular localization. Matrix metalloproteinases (MMPs), classically viewed as degraders of the extracellular matrix (ECM) responsible for matrix protein turnover, are more recently recognized as regulators of a range of extracellular bioactive molecules including chemokines, cytokines, and their binders. However, growing evidence has convincingly identified select MMPs in intracellular compartments with unexpected physiological and pathological roles. Intracellular MMPs have both proteolytic and non-proteolytic functions, including signal transduction and transcription factor activity, thereby challenging their traditional designation as extracellular proteases. This review highlights current knowledge of the subcellular location and activity of these "moonlighting" MMPs. Intracellular roles herald a new era of MMP research, rejuvenating interest in targeting these proteases in therapeutic strategies. This article is part of a Special Issue entitled: Matrix Metalloproteinases edited by Rafael Fridman. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Modeling cometary photopolarimetric characteristics with Sh-matrix method

    Science.gov (United States)

    Kolokolova, L.; Petrov, D.

    2017-12-01

    Cometary dust is dominated by particles of complex shape and structure, which are often considered as fractal aggregates. Rigorous modeling of light scattering by such particles, even using parallelized codes and NASA supercomputer resources, is very computer time and memory consuming. We are presenting a new approach to modeling cometary dust that is based on the Sh-matrix technique (e.g., Petrov et al., JQSRT, 112, 2012). This method is based on the T-matrix technique (e.g., Mishchenko et al., JQSRT, 55, 1996) and was developed after it had been found that the shape-dependent factors could be separated from the size- and refractive-index-dependent factors and presented as a shape matrix, or Sh-matrix. Size and refractive index dependences are incorporated through analytical operations on the Sh-matrix to produce the elements of the T-matrix. The Sh-matrix method keeps all advantages of the T-matrix method, including analytical averaging over particle orientation. Moreover, the surface integrals describing the Sh-matrix elements themselves can be solved analytically for particles of any shape. This makes the Sh-matrix approach an effective technique to simulate light scattering by particles of complex shape and surface structure. In this paper, we present cometary dust as an ensemble of Gaussian random particles. The shape of these particles is described by a log-normal distribution of their radius length and direction (Muinonen, EMP, 72, 1996). By changing one of the parameters of this distribution, the correlation angle, from 0 to 90 deg., we can model a variety of particles from spheres to particles of a random complex shape. We survey the angular and spectral dependencies of intensity and polarization resulting from light scattering by such particles, studying how they depend on the particle shape, size, and composition (including porous particles to simulate aggregates) to find the best fit to the cometary observations.

  17. Computing Challenges in Coded Mask Imaging

    Science.gov (United States)

    Skinner, Gerald

    2009-01-01

    This slide presentation reviews the complications and challenges in developing computer systems for Coded Mask Imaging telescopes. The coded mask technique is used when there is no other way to create the telescope (i.e., when there are wide fields of view, high energies for focusing or low energies for the Compton/Tracker techniques, and very good angular resolution). The coded mask telescope is described, and the mask is reviewed. The coded masks for the INTErnational Gamma-Ray Astrophysics Laboratory (INTEGRAL) instruments are shown, and a chart showing the types of position sensitive detectors used for the coded mask telescopes is also reviewed. Slides describe the mechanism of recovering an image from the masked pattern. The correlation with the mask pattern is described. The matrix approach is reviewed, and other approaches to image reconstruction are described. Included in the presentation is a review of the Energetic X-ray Imaging Survey Telescope (EXIST) / High Energy Telescope (HET), with information about the mission, the operation of the telescope, comparison of the EXIST/HET with the SWIFT/BAT and details of the design of the EXIST/HET.
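    A hedged 1-D toy sketch of the correlation-decoding step mentioned above: a pseudo-noise mask with ideal cyclic autocorrelation casts a shadowgram of a point source, and cyclically correlating the shadowgram with the ±1 decoding array recovers the source position. The mask and sky are made up for illustration; real instruments use 2-D patterns (e.g. uniformly redundant arrays).

```python
import numpy as np

n = 7
mask = np.array([1, 0, 0, 1, 0, 1, 1])        # length-7 m-sequence (open = 1)
G = 2 * mask - 1                              # balanced decoding array

sky = np.zeros(n, dtype=int)
sky[2] = 1                                    # point source at position 2

# Shadowgram: cyclic convolution of the sky with the mask pattern.
shadow = np.array([sum(sky[i] * mask[(j - i) % n] for i in range(n))
                   for j in range(n)])

# Reconstruction: cyclic cross-correlation of the shadowgram with G.
# For this mask the response is a clean delta at the source position.
recon = np.array([sum(shadow[(j + k) % n] * G[j] for j in range(n))
                  for k in range(n)])
```

    The delta-like response is a consequence of the m-sequence's two-valued cyclic autocorrelation, the same property URA masks provide in two dimensions.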

  18. Performance analysis of wavelength/spatial coding system with fixed in-phase code matrices in OCDMA network

    Science.gov (United States)

    Tsai, Cheng-Mu; Liang, Tsair-Chun

    2011-12-01

    This paper proposes a wavelength/spatial (W/S) coding system with fixed in-phase code (FIPC) matrices in the optical code-division multiple-access (OCDMA) network. A scheme is presented to form the FIPC matrix, which is applied to construct the W/S OCDMA network. The encoder/decoder in the W/S OCDMA network is fully able to eliminate the multiple-access interference (MAI) at the balanced photo-detectors (PD), according to the fixed in-phase cross-correlation. The phase-induced intensity noise (PIIN), which is related to the square of the power, is markedly suppressed in the receiver by spreading the received power over each PD while the net signal power is kept the same. Simulation results show that the W/S OCDMA network based on the FIPC matrices can not only completely remove the MAI but also effectively suppress the PIIN to upgrade the network performance.

  19. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  20. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    Science.gov (United States)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. Moreover, the SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure called Rhombic Dodecahedron. Additionally, several simulations were carried out to analyze the performance of block-unblock approximations in recovering the phase, using the simulated diffraction patterns. Furthermore, the quality of the reconstructions was measured in terms of the Peak Signal to Noise Ratio (PSNR). Results show that the performance of the block-unblock phase coded aperture approximation decreases by at most 12.5% compared with the phase coded apertures. Moreover, the quality of the reconstructions using the boolean approximations is up to 2.5 dB of PSNR lower with respect to the phase coded aperture reconstructions.
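    A hedged sketch of the two measurement models compared above: coded diffraction patterns from a phase coded aperture versus a block-unblock (0/1) aperture, each modulating the signal before diffraction and recording only intensities. The signal and masks are random illustrative data; the paper's detour-phase construction and crystallographic simulations are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # underlying signal

phase_mask = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # phase coded aperture
binary_mask = rng.integers(0, 2, n)                        # block-unblock aperture

# Coded diffraction patterns: modulate, diffract (modeled by the FFT),
# and record intensities only; the phase is lost and must be retrieved.
y_phase  = np.abs(np.fft.fft(phase_mask * x)) ** 2
y_binary = np.abs(np.fft.fft(binary_mask * x)) ** 2
```

    Recovery algorithms then estimate x from several such intensity-only measurements; the block-unblock model trades some recovery quality for implementability in X-ray hardware, as quantified in the abstract.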

  1. Design of Packet-Based Block Codes with Shift Operators

    Directory of Open Access Journals (Sweden)

    Jacek Ilow

    2010-01-01

    This paper introduces packet-oriented block codes for the recovery of lost packets and the correction of an erroneous single packet. Specifically, a family of systematic codes is proposed, based on a Vandermonde matrix applied to a group of k information packets to construct r redundant packets, where the elements of the Vandermonde matrix are bit-level right arithmetic shift operators. The code design is applicable to packets of any size, provided that the packets within a block of k information packets are of uniform length. In order to decrease the overhead associated with packet padding using shift operators, non-Vandermonde matrices are also proposed for designing packet-oriented block codes. An efficient matrix inversion procedure for the off-line design of the decoding algorithm is presented to recover lost packets. The error correction capability of the design is investigated as well. A decoding algorithm based on syndrome decoding is presented to correct a single erroneous packet in a group of n=k+r received packets. The paper is equipped with examples of codes using different parameters. The code designs and their performance are tested using Monte Carlo simulations; the results obtained exhibit good agreement with the corresponding theoretical results.

  2. Ceramic matrix composite article and process of fabricating a ceramic matrix composite article

    Science.gov (United States)

    Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert

    2016-01-12

    A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.

  3. I2D: code for conversion of ISOTXS structured data to DTF and ANISN structured tables

    International Nuclear Information System (INIS)

    Resnik, W.M. II.

    1977-06-01

    The I2D code converts neutron cross-section data written in the standard interface file format called ISOTXS to a matrix structured format commonly called DTF tables. Several BCD and binary output options are available including FIDO (ANISN) format. The I2D code adheres to the guidelines established by the Committee on Computer Code Coordination for standardized code development. Since some machine dependency is inherent regardless of the degree of standardization, provisions have been made in the I2D code for easy implementation on either short-word machines (IBM) or on long-word machines (CDC). 3 figures, 5 tables

  4. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, the curve fitted to the experimental data being the convolution of a Gaussian with two left-handed exponentials describing the low-energy tail. The code can also fit a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been validated by applying it to the deconvolution and the calculation of the alpha-particle emission probabilities of 239 Pu, 241 Am and 235 U. (author)
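
    The line shape described is essentially an exponentially modified Gaussian. A one-tail simplification (ALFITeX convolves with two left-handed exponentials; the single-tail form and parameter names below are assumptions for illustration) can be written directly:

```python
import math

def alpha_peak(x, mu, sigma, tau):
    """Gaussian of mean mu and width sigma convolved with a single
    left-handed exponential of decay constant tau (low-energy tailing).
    The closed form integrates to 1 over the whole energy axis."""
    z = (x - mu) / sigma + sigma / tau
    return (0.5 / tau) * math.exp((x - mu) / tau + sigma**2 / (2.0 * tau**2)) \
        * math.erfc(z / math.sqrt(2.0))
```

For x well below mu the erfc factor tends to 2, leaving the pure exponential tail; for x well above mu the erfc term enforces the Gaussian cutoff.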

  5. BWR transient analysis using neutronic / thermal hydraulic coupled codes including uncertainty quantification

    International Nuclear Information System (INIS)

    Hartmann, C.; Sanchez, V.; Tietsch, W.; Stieglitz, R.

    2012-01-01

    The KIT is involved in the development and qualification of best-estimate methodologies for BWR transient analysis in cooperation with industrial partners. The goal is to establish the most advanced thermal hydraulic system codes coupled with 3D reactor dynamics codes to be able to perform a more realistic evaluation of BWR behavior under accident conditions. For this purpose a computational chain based on the lattice code (SCALE6/GenPMAXS), the coupled neutronic/thermal hydraulic code (TRACE/PARCS), and a Monte Carlo based uncertainty and sensitivity package (SUSA) has been established and applied to different kinds of transients of a Boiling Water Reactor (BWR). This paper describes the multidimensional models of the plant elaborated for TRACE and PARCS to perform the investigations mentioned above. For the uncertainty quantification of the coupled code TRACE/PARCS, and specifically to take into account the influence of the kinetics parameters in such studies, the PARCS code has been extended to facilitate the change of model parameters in such a way that the SUSA package can be used in connection with TRACE/PARCS for the uncertainty and sensitivity (U and S) studies. This approach is presented in detail. The results obtained for a rod drop transient with TRACE/PARCS using the SUSA methodology clearly showed the importance of some kinetic parameters for the transient progression, demonstrating that coupling best-estimate codes with uncertainty and sensitivity tools is very promising and of great importance for the safety assessment of nuclear reactors. (authors)

  6. Linear codes associated to determinantal varieties

    DEFF Research Database (Denmark)

    Beelen, Peter; Ghorpade, Sudhir R.; Hasan, Sartaj Ul

    2015-01-01

    We consider a class of linear codes associated to projective algebraic varieties defined by the vanishing of minors of a fixed size of a generic matrix. It is seen that the resulting code has only a small number of distinct weights. The case of varieties defined by the vanishing of 2×2 minors is ...

  7. Integrating indicators in a national accounting matrix including environmental accounts (NAMEA)

    International Nuclear Information System (INIS)

    De Haan, M.; Keuning, S.J.; Bosch, P.R.

    1993-01-01

    Five environmental indicators are conceptually and numerically integrated into a National Accounting Matrix including Environmental Accounts (NAMEA) for 1989. As a consequence, these estimates are directly comparable with outcomes of major macro-economic aggregates in the conventional accounts. In the NAMEA, emissions of all kinds of polluting agents are recorded by industry and by consumption purpose. Subsequently, these agents are grouped into five environmental themes: greenhouse effect, ozone layer depletion, acidification, eutrophication and waste accumulation. The contributions of agents to certain themes are expressed in theme-related environmental stress equivalents. Per theme, these stress equivalents are confronted with policy norms set by the Netherlands government for the year 2000. This results in a statistical framework at a meso-level from which integrated economic and environmental indicators are derived. The NAMEA may also serve as a data base and analytical device for modelling interactions between the national economy and changes in the environment. 13 tabs., 2 app., 32 refs
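
    The theme-aggregation step described above is a small weighted sum: emissions per agent multiplied by theme-equivalence factors and compared with a policy norm. A sketch, where the equivalence factors and emission figures are illustrative assumptions, not the NAMEA's official coefficients:

```python
# Illustrative equivalence factors (assumed, not official NAMEA values):
# agents are converted into theme-related stress equivalents.
equivalence = {
    "greenhouse": {"CO2": 1.0, "CH4": 21.0, "N2O": 310.0},
    "acidification": {"SO2": 1.0, "NOx": 0.7, "NH3": 1.88},
}

# Hypothetical national emissions by agent (arbitrary units).
emissions = {"CO2": 1000.0, "CH4": 10.0, "N2O": 1.0, "SO2": 5.0}

def theme_stress(theme):
    """Sum agent emissions weighted by the theme's equivalence factors."""
    factors = equivalence[theme]
    return sum(f * emissions.get(agent, 0.0) for agent, f in factors.items())

def distance_to_norm(theme, norm):
    """Fraction of the policy norm used up by the current stress equivalents."""
    return theme_stress(theme) / norm
```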

  8. 10Gbps 2D MGC OCDMA Code over FSO Communication System

    Science.gov (United States)

    Bhanja, Urmila; Khuntia, Arpita; Swati, Alamasety

    2017-08-01

    Currently, wide bandwidth signal dissemination along with low latency is a leading requisite in various applications. Free space optical wireless communication has been introduced as a realistic technology for bridging the gap in present high data transmission fiber connectivity and as a provisional backbone for rapidly deployable wireless communication infrastructure. The manuscript focuses on the implementation of 10Gbps SAC-OCDMA FSO communication using a modified two-dimensional Golomb code (2D MGC) that possesses better autocorrelation, minimum cross-correlation and high cardinality. A comparison between the pseudo orthogonal (PSO) matrix code and the modified two-dimensional Golomb code (2D MGC) is developed in the proposed SAC-OCDMA FSO communication module, taking different parameters into account. The simulation results signify that the communication radius is bounded by multiple access interference (MAI). In this work, the comparison is made in terms of bit error rate (BER) and quality factor (Q). It is observed that the 2D MGC yields better results than the PSO matrix code. The simulation results are validated using OptiSystem version 14.

  9. Minimizing embedding impact in steganography using trellis-coded quantization

    Science.gov (United States)

    Filler, Tomáš; Judas, Jan; Fridrich, Jessica

    2010-01-01

    In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome coding and trellis-coded quantization and contrast its performance with bounds derived from appropriate rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact of making an embedding change at that element (single-letter distortion). The problem is to embed a given payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past. Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners, we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive experimental results for a large set of relative payloads and for different distortion profiles, including the wet paper channel.
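
    Classic matrix embedding, the baseline this trellis construction generalizes, is easy to show with the binary [7,4] Hamming code: three message bits are embedded into seven cover bits by flipping at most one bit so that the stego object's syndrome equals the message (a textbook sketch, not the paper's convolutional-code embedder):

```python
def syndrome(bits):
    """Syndrome of 7 bits under the Hamming parity-check matrix whose
    j-th column is the binary representation of j+1."""
    s = 0
    for j, b in enumerate(bits):
        if b:
            s ^= j + 1
    return s

def embed(cover, message):
    """Embed a 3-bit message (0..7) by flipping at most one cover bit:
    flip the bit whose H-column equals syndrome(cover) XOR message."""
    flip = syndrome(cover) ^ message
    stego = list(cover)
    if flip:
        stego[flip - 1] ^= 1
    return stego

cover = [1, 0, 1, 1, 0, 0, 1]
stego = embed(cover, 0b101)
assert syndrome(stego) == 0b101                        # receiver reads the message
assert sum(c != s for c, s in zip(cover, stego)) <= 1  # at most one change
```

This embeds 3 bits per 7 cover elements with an average of 7/8 of a change; the trellis approach of the paper additionally weights each change by its single-letter distortion.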

  10. Fiber Bragg grating for spectral phase optical code-division multiple-access encoding and decoding

    Science.gov (United States)

    Fang, Xiaohui; Wang, Dong-Ning; Li, Shichen

    2003-08-01

    A new method for realizing spectral phase optical code-division multiple-access (OCDMA) coding based on step chirped fiber Bragg gratings (SCFBGs) is proposed and the corresponding encoder/decoder is presented. With this method, a mapping code is introduced for the m-sequence address code and the phase shift can be inserted into the subgratings of the SCFBG according to the mapping code. The transfer matrix method together with Fourier transform is used to investigate the characteristics of the encoder/decoder. The factors that influence the correlation property of the encoder/decoder, including index modulation and bandwidth of the subgrating, are identified. The system structure is simple and good correlation output can be obtained. The performance of the OCDMA system based on SCFBGs has been analyzed.

  11. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

    Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. Modified Double Weight (MDW) code for even weights and Enhanced Double Weight (EDW) code for odd weights are two algorithms extending the use of the DW code for SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping, for any weight greater than 2, is proposed. A single code construction algorithm gives the same length increment, Bit Error Rate (BER) calculation and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  12. Balanced Reed-Solomon codes for all parameters

    KAUST Repository

    Halbawi, Wael; Liu, Zihan; Hassibi, Babak

    2016-01-01

    We construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes with any length n and dimension k. By sparsest, we mean that each row has the least possible number of nonzeros, while balanced means that the number of nonzeros in any two columns differs by at most one. Codes allowing such encoding schemes are useful in distributed settings where computational load-balancing is critical. The problem was first studied by Dau et al. who showed, using probabilistic arguments, that there always exists an MDS code over a sufficiently large field such that its generator matrix is both sparsest and balanced. Motivated by the need for an explicit construction with efficient decoding, the authors of the current paper showed that the generator matrix of a cyclic Reed-Solomon code of length n and dimension k can always be transformed to one that is both sparsest and balanced, when n and k are such that k(n-k+1)/n is an integer. In this paper, we lift this condition and construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes for any set of parameters.
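
    The two properties are easy to check mechanically. The sketch below verifies "sparsest and balanced" on a toy support pattern built from cyclically shifted windows (the pattern is an illustration of the support only; the actual construction fills it with Reed-Solomon field elements):

```python
def is_sparsest_and_balanced(mask, n, k):
    """Sparsest: every row has exactly n-k+1 nonzeros (the MDS minimum,
    since every nonzero codeword of an MDS code has weight >= n-k+1).
    Balanced: column nonzero counts differ by at most one."""
    rows_ok = all(sum(row) == n - k + 1 for row in mask)
    col_weights = [sum(mask[i][j] for i in range(k)) for j in range(n)]
    return rows_ok and max(col_weights) - min(col_weights) <= 1

def cyclic_support(n, k):
    """Toy support pattern: row i covers n-k+1 consecutive positions
    starting at floor(i*n/k), wrapping cyclically, so the k(n-k+1)
    total nonzeros spread almost evenly over the n columns."""
    w = n - k + 1
    mask = [[0] * n for _ in range(k)]
    for i in range(k):
        start = (i * n) // k
        for d in range(w):
            mask[i][(start + d) % n] = 1
    return mask

assert is_sparsest_and_balanced(cyclic_support(6, 3), 6, 3)
assert is_sparsest_and_balanced(cyclic_support(7, 3), 7, 3)
```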

  13. Coded aperture optimization using Monte Carlo simulations

    International Nuclear Information System (INIS)

    Martineau, A.; Rocchisani, J.M.; Moretti, J.L.

    2010-01-01

    Coded apertures using Uniformly Redundant Arrays (URA) have been unsuccessfully evaluated for two-dimensional and three-dimensional imaging in Nuclear Medicine. The images reconstructed from coded projections contain artifacts and suffer from poor spatial resolution in the longitudinal direction. We introduce a Maximum-Likelihood Expectation-Maximization (MLEM) algorithm for three-dimensional coded aperture imaging which uses a projection matrix calculated by Monte Carlo simulations. The aim of the algorithm is to reduce artifacts and improve the three-dimensional spatial resolution in the reconstructed images. Firstly, we present the validation of GATE (Geant4 Application for Emission Tomography) for Monte Carlo simulations of a coded mask installed on a clinical gamma camera. The coded mask modelling was validated by comparison between experimental and simulated data in terms of energy spectra, sensitivity and spatial resolution. In the second part of the study, we use the validated model to calculate the projection matrix with Monte Carlo simulations. A three-dimensional thyroid phantom study was performed to compare the performance of the three-dimensional MLEM reconstruction with conventional correlation method. The results indicate that the artifacts are reduced and three-dimensional spatial resolution is improved with the Monte Carlo-based MLEM reconstruction.

  14. Balanced Reed-Solomon codes for all parameters

    KAUST Repository

    Halbawi, Wael

    2016-10-27

    We construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes with any length n and dimension k. By sparsest, we mean that each row has the least possible number of nonzeros, while balanced means that the number of nonzeros in any two columns differs by at most one. Codes allowing such encoding schemes are useful in distributed settings where computational load-balancing is critical. The problem was first studied by Dau et al. who showed, using probabilistic arguments, that there always exists an MDS code over a sufficiently large field such that its generator matrix is both sparsest and balanced. Motivated by the need for an explicit construction with efficient decoding, the authors of the current paper showed that the generator matrix of a cyclic Reed-Solomon code of length n and dimension k can always be transformed to one that is both sparsest and balanced, when n and k are such that k(n-k+1)/n is an integer. In this paper, we lift this condition and construct balanced and sparsest generator matrices for cyclic Reed-Solomon codes for any set of parameters.

  15. Response matrix method for large LMFBR analysis

    International Nuclear Information System (INIS)

    King, M.J.

    1977-06-01

    The feasibility of using response matrix techniques for computational models of large LMFBRs is examined. Since finite-difference methods based on diffusion theory have generally found a place in fast-reactor codes, a brief review of their general matrix foundation is given first in order to contrast it to the general strategy of response matrix methods. Then, in order to present the general method of response matrix technique, two illustrative examples are given. Matrix algorithms arising in the application to large LMFBRs are discussed, and the potential of the response matrix method is explored for a variety of computational problems. Principal properties of the matrices involved are derived with a view to application of numerical methods of solution. The Jacobi iterative method as applied to the current-balance eigenvalue problem is discussed
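
    The Jacobi iteration mentioned at the end is a few lines in any language. A minimal sketch for a diagonally dominant linear system (the matrix below is a made-up example, not an LMFBR response matrix, and the eigenvalue aspect is omitted):

```python
def jacobi(A, b, iterations=100):
    """Jacobi iterative solve of A x = b: each sweep updates every unknown
    using only the previous sweep's values, so sweeps are fully parallel.
    Converges when A is strictly diagonally dominant."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iterations):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

A = [[4.0, -1.0, 0.0],
     [-1.0, 4.0, -1.0],
     [0.0, -1.0, 4.0]]
b = [15.0, 10.0, 10.0]
x = jacobi(A, b)
```

For the current-balance eigenvalue problem the same sweep serves as the inner solve of an outer power-type iteration on the fission source.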

  16. Overlaid Alice: a statistical model computer code including fission and preequilibrium models

    International Nuclear Information System (INIS)

    Blann, M.

    1976-01-01

    The most recent edition of an evaporation code that has been frequently updated and improved since it was originally written. This version replaces the version Alice described previously. A brief summary is given of the types of calculations which can be done. A listing of the code and the results of several sample calculations are presented

  17. High Girth Column-Weight-Two LDPC Codes Based on Distance Graphs

    Directory of Open Access Journals (Sweden)

    Gabofetswe Malema

    2007-01-01

    Full Text Available LDPC codes of column weight two are constructed from minimal distance graphs, or cages. Distance graphs are used to represent LDPC code matrices such that graph vertices represent rows and edges represent columns. The conversion of a distance graph into matrix form produces an adjacency matrix with column weight two and girth double that of the graph. The number of 1's in each row (row weight) is equal to the degree of the corresponding vertex. By constructing graphs with different vertex degrees, we can vary the rate of the corresponding LDPC code matrices. Cage graphs are used as examples of distance graphs to design codes with different girths and rates. The performance of the obtained codes depends on the girth and structure of the corresponding distance graphs.
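
    The graph-to-matrix conversion described is a vertex-edge incidence matrix. A sketch using a 5-cycle (girth 5, so the resulting column-weight-two code has girth 10; the cage graphs of the paper follow the same recipe):

```python
def incidence_matrix(num_vertices, edges):
    """Parity-check matrix H: rows index graph vertices, columns index
    edges. Each edge (u, v) puts two 1s in its column, so every column
    has weight exactly 2; each row's weight equals the vertex degree."""
    H = [[0] * len(edges) for _ in range(num_vertices)]
    for j, (u, v) in enumerate(edges):
        H[u][j] = 1
        H[v][j] = 1
    return H

cycle_edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # C5: girth 5, all degrees 2
H = incidence_matrix(5, cycle_edges)
```

Replacing C5 by a graph of higher degree raises the row weight and hence the code rate, exactly as the abstract notes.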

  18. COSY 5.0 - the fifth order code for corpuscular optical systems

    International Nuclear Information System (INIS)

    Berz, M.; Hoffmann, H.C.; Wollnik, H.

    1987-01-01

    COSY 5.0 is a new computer code for the design of corpuscular optical systems based on the principle of transfer matrices. The particle optical calculations include all image aberrations through fifth order. COSY 5.0 uses canonical coordinates and exploits the symplectic condition to increase the speed of computation. COSY 5.0 contains a library for the computation of matrix elements of all commonly used corpuscular optical elements such as electric and magnetic multipoles and sector fields. The corresponding formulas were generated algebraically by the computer code HAMILTON. Care was taken that the optimization of optical elements is achieved with minimal numerical effort. Finally COSY 5.0 has a very general mnemonic input code resembling a high-level programming language. (orig.)

  19. Input data required for specific performance assessment codes

    International Nuclear Information System (INIS)

    Seitz, R.R.; Garcia, R.S.; Starmer, R.J.; Dicke, C.A.; Leonard, P.R.; Maheras, S.J.; Rood, A.S.; Smith, R.W.

    1992-02-01

    The Department of Energy's National Low-Level Waste Management Program at the Idaho National Engineering Laboratory generated this report on input data requirements for computer codes to assist States and compacts in their performance assessments. This report gives generators, developers, operators, and users some guidelines on what input data is required to satisfy 22 common performance assessment codes. Each of the codes is summarized and a matrix table is provided to allow comparison of the various input required by the codes. This report does not determine or recommend which codes are preferable

  20. Constructing LDPC Codes from Loop-Free Encoding Modules

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher; Thorpe, Jeremy; Andrews, Kenneth

    2009-01-01

    A method of constructing certain low-density parity-check (LDPC) codes by use of relatively simple loop-free coding modules has been developed. The subclasses of LDPC codes to which the method applies include accumulate-repeat-accumulate (ARA) codes, accumulate-repeat-check-accumulate codes, and the codes described in Accumulate-Repeat-Accumulate-Accumulate Codes (NPO-41305), NASA Tech Briefs, Vol. 31, No. 9 (September 2007), page 90. All of the affected codes can be characterized as serial/parallel (hybrid) concatenations of such relatively simple modules as accumulators, repetition codes, differentiators, and punctured single-parity check codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. These codes can also be characterized as hybrid turbolike codes that have projected graph or protograph representations (for example see figure); these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The present method comprises two related submethods for constructing LDPC codes from simple loop-free modules with circulant permutations. The first submethod is an iterative encoding method based on the erasure-decoding algorithm. The computations required by this method are well organized because they involve a parity-check matrix having a block-circulant structure. The second submethod involves the use of block-circulant generator matrices. The encoders of this method are very similar to those of recursive convolutional codes. Some encoders according to this second submethod have been implemented in a small field-programmable gate array that operates at a speed of 100 megasymbols per second. By use of density evolution (a computational-simulation technique for analyzing performances of LDPC codes), it has been shown through some examples that as the block size goes to infinity, low iterative decoding thresholds close to
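
    The block-circulant structure mentioned above can be sketched directly: lift a small base ("protograph") matrix of shift values into a full parity-check matrix of circulant permutation blocks. The base matrix values below are arbitrary illustrations, not one of the NASA code designs:

```python
def circulant(size, shift):
    """size x size circulant permutation matrix: the identity with each
    row's single 1 moved right by `shift` positions (mod size)."""
    return [[1 if (i + shift) % size == j else 0 for j in range(size)]
            for i in range(size)]

def lift(base, size):
    """Expand a base matrix of shift values into a block matrix of
    circulant permutations; an entry of -1 denotes an all-zero block."""
    H = [[0] * (len(base[0]) * size) for _ in range(len(base) * size)]
    for bi, row in enumerate(base):
        for bj, shift in enumerate(row):
            if shift < 0:
                continue
            block = circulant(size, shift)
            for i in range(size):
                for j in range(size):
                    H[bi * size + i][bj * size + j] = block[i][j]
    return H

base = [[0, 1, -1],   # illustrative protograph with circulant shifts
        [2, -1, 0]]
H = lift(base, 4)     # 8 x 12 parity-check matrix, row weight 2
```

Each row's weight equals the number of non-negative entries in its base row, which is why such matrices organize encoder and decoder computations so well.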

  1. Construction of LDPC codes over GF(q) with modified progressive edge growth

    Institute of Scientific and Technical Information of China (English)

    CHEN Xin; MEN Ai-dong; YANG Bo; QUAN Zi-yi

    2009-01-01

    A parity check matrix construction method for constructing low-density parity-check (LDPC) codes over GF(q) (q>2) based on the modified progressive edge growth (PEG) algorithm is introduced. First, the nonzero locations of the parity check matrix are selected using the PEG algorithm. Then the nonzero elements are chosen so as to avoid defining a subcode. A proof is given to show the good minimum distance property of the constructed GF(q)-LDPC codes. Simulations are also presented to illustrate the good error performance of the designed codes.

  2. ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification

    International Nuclear Information System (INIS)

    Kodeli, Ivo

    2007-01-01

    1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of the cross section covariance data from the original to a user defined energy group structure, and for the mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (both for the original or the converted) and lists them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are the extended versions of the previous codes available in the Packages NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for the processing of the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by Nuclides can be (in principle) processed by ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE5.1 covariance matrices, the previous more severe acceptance condition in the ANGELO2.3 code was released. In case the correlation coefficients exceed 1.0, only a warning message is issued, and coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple, therefore the interpolations involving energy group structure which are very different from the original (e.g. large difference in the number of energy groups) may not be accurate. In particular in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be
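
    The two acceptance checks described, clipping out-of-range correlation coefficients with a warning and testing the matrix for negative eigenvalues, can be sketched as follows. The positivity test here uses an attempted Cholesky factorization rather than an explicit eigenvalue listing, a simplified stand-in for what LAMBDA-2.3 reports, and the warning text is an assumption:

```python
import math

def clip_correlations(C):
    """Replace any |c| > 1.0 with +/-1.0 and issue a warning instead of
    rejecting the matrix, mirroring the released acceptance condition."""
    out = []
    for row in C:
        new_row = []
        for c in row:
            if abs(c) > 1.0:
                print("warning: correlation %.3f clipped to +/-1" % c)
                c = math.copysign(1.0, c)
            new_row.append(c)
        out.append(new_row)
    return out

def is_positive_definite(M, tol=1e-10):
    """Attempted Cholesky factorization of a symmetric matrix; a
    non-positive pivot means a zero or negative eigenvalue exists."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = M[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= tol:
                    return False
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return True
```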

  3. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

    Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm, and in many cases is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code r...

  4. Codeword Structure Analysis for LDPC Convolutional Codes

    Directory of Open Access Journals (Sweden)

    Hua Zhou

    2015-12-01

    Full Text Available The codewords of a low-density parity-check (LDPC) convolutional code (LDPC-CC) are characterised as structured and non-structured. The number of structured codewords is dominated by the size of the polynomial syndrome former matrix H^T(D), while the number of non-structured ones depends on the particular monomials or polynomials in H^T(D). By evaluating the relationship of the codewords between the mother code and its super codes, the low-weight non-structured codewords in the super codes can be eliminated by appropriately choosing the monomials or polynomials in H^T(D), resulting in an improved distance spectrum of the mother code.

  5. Contribution of 194.1 keV Resonance to 17O(p, alpha) 14N Reaction Rate using R Matrix Code

    International Nuclear Information System (INIS)

    Chafa, A.; Messili, F.Z.; Barhoumi, S.

    2009-01-01

    Knowledge of the 17 O(p, alpha) 14 N reaction rate is required for evaluating elemental abundances in a number of hydrogen-burning stellar sites. This reaction is especially important for the nucleosynthesis of the rare oxygen isotope 17 O. Classical novae are thought to be a major source of 17 O in the Galaxy and produce the short-lived radioisotope 18 F, whose beta + decay is followed by a gamma-ray emission that could be observed with satellites such as the Integral observatory. As the 17 O(p, alpha) 14 N and 17 O(p, gamma) 18 F reactions govern the destruction of 17 O and the formation of 18 F, their rates are decisive in determining the final abundances of these isotopes. Stellar temperatures of primary importance for nucleosynthesis are typically in the ranges T = 0.01-0.1 GK for red giant, AGB, and massive stars, and T = 0.01-0.4 GK for classical nova explosions. In recent work, we observed, for the first time, a resonance at 183.3 keV corresponding to a level in 18 F at Ex = 5789.8 ± 0.3 keV, and new astrophysical parameters of this resonance were found. In this work we study the reaction using a numerical code based on the R-matrix method, including the new values of the level energy and the parameters of the 183.3 keV resonance, in order to show its contribution to the 17 O(p, alpha) 14 N reaction rate. We also use the older parameter values of this resonance given in Keiser's work for comparison. We show that this resonance dominates the reaction rate over the whole range of stellar temperatures relevant to classical nova explosions. This is in good agreement with our experimental work. We also study the cross section and differential cross section of the 17 O(p, alpha) 14 N reaction with the R-matrix method

  6. ICAN Computer Code Adapted for Building Materials

    Science.gov (United States)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  7. An RNA-Seq strategy to detect the complete coding and non-coding transcriptome including full-length imprinted macro ncRNAs.

    Directory of Open Access Journals (Sweden)

    Ru Huang

    Full Text Available Imprinted macro non-protein-coding (nc) RNAs are cis-repressor transcripts that silence multiple genes in at least three imprinted gene clusters in the mouse genome. Similar macro or long ncRNAs are abundant in the mammalian genome. Here we present the full coding and non-coding transcriptome of two mouse tissues: differentiated ES cells and fetal head, using an optimized RNA-Seq strategy. The data produced is highly reproducible across different sequencing locations and is able to detect the full length of imprinted macro ncRNAs such as Airn and Kcnq1ot1, whose lengths range between 80 and 118 kb. Transcripts show a more uniform read coverage when RNA is fragmented with RNA hydrolysis compared with cDNA fragmentation by shearing. Irrespective of the fragmentation method, all coding and non-coding transcripts longer than 8 kb show a gradual loss of sequencing tags towards the 3' end. Comparisons to published RNA-Seq datasets show that the strategy presented here is more efficient in detecting known functional imprinted macro ncRNAs and also indicate that standardization of RNA preparation protocols would increase the comparability of the transcriptome between different RNA-Seq datasets.

  8. Modular ORIGEN-S for multi-physics code systems

    Energy Technology Data Exchange (ETDEWEB)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C., E-mail: yesilyurtg@ornl.gov, E-mail: clarnokt@ornl.gov, E-mail: gauldi@ornl.gov [Oak Ridge National Laboratory, TN (United States); Galloway, Jack, E-mail: jack@galloways.net [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2011-07-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including
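
The depletion, decay, and transmutation capability described above amounts to solving the linear system dN/dt = A·N for the nuclide vector N, where the burnup matrix A collects decay constants and transmutation rates. A minimal numpy sketch for a hypothetical two-nuclide decay chain (not ORIGEN-S's actual solver, data, or API) is:

```python
import numpy as np

# Hypothetical two-nuclide decay chain A -> B -> (stable); decay constants in 1/s
lam_a, lam_b = 1.0e-3, 1.0e-4
A = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])

N0 = np.array([1.0e20, 0.0])   # initial atom inventories
t = 3600.0                      # one hour

# N(t) = exp(A t) N0, evaluated here by eigendecomposition of the burnup matrix
w, V = np.linalg.eig(A * t)
N = (V @ np.diag(np.exp(w)) @ np.linalg.inv(V) @ N0).real
```

Production solvers use far more robust matrix-exponential methods, since realistic burnup matrices couple thousands of nuclides with rate constants spanning many orders of magnitude.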

  9. Modular ORIGEN-S for multi-physics code systems

    International Nuclear Information System (INIS)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack

    2011-01-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

  10. Updated users' guide for SAMMY: Multilevel R-matrix fits to neutron data using Bayes' equation

    International Nuclear Information System (INIS)

    Larson, N.M.

    1989-06-01

    In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has undergone significant modifications: user-friendly options have been incorporated to streamline common operations and to protect a run from common user errors; the Reich-Moore formalism has been extended to include an optional logarithmic parameterization of the external R-matrix, for which any or all parameters may be varied; the ability to vary sample thickness, effective temperature, matching radius, and/or resolution-broadening parameters has been incorporated; to avoid loss of information (i.e., computer round-off errors) between runs, the ''covariance file'' now includes precise values for all variables; and unused but correlated variables may be included in the analysis. Because of these and earlier changes, the 1980 SAMMY manual is now hopelessly obsolete. This report is intended to be complete documentation for the current version of SAMMY. Its publication in looseleaf form will permit updates to the manual to be made concurrently with updates to the code itself, thus eliminating most of the time lag between update and documentation. 28 refs., 54 tabs
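
Fitting with Bayes' equations, as SAMMY does, corresponds schematically to the standard linear generalized-least-squares update of a parameter vector and its covariance matrix. A minimal sketch (the variable names and shapes are illustrative, not SAMMY's internals) is:

```python
import numpy as np

def bayes_update(p, M, G, V, D, T):
    """One linear Bayes (generalized-least-squares) step.
    p, M : prior parameter values and their covariance matrix
    G    : sensitivity matrix dT/dp of theory to parameters
    V    : covariance matrix of the measured data
    D, T : measured data and theory evaluated at the prior p
    """
    S = G @ M @ G.T + V                 # covariance of the residual D - T
    K = M @ G.T @ np.linalg.inv(S)      # gain matrix
    p_new = p + K @ (D - T)             # updated parameters
    M_new = M - K @ G @ M               # updated (reduced) covariance
    return p_new, M_new
```

Iterating this update as the theory T and sensitivities G are re-evaluated at each new p gives the usual nonlinear fitting loop.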

  11. Updated user's guide for SAMMY: multilevel R-matrix fits to neutron data using Bayes' equation

    International Nuclear Information System (INIS)

    Larson, N.M.

    1996-01-01

    In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has undergone significant modifications: (1) user-friendly options have been incorporated to streamline common operations and to protect a run from common user errors, (2) the Reich-Moore formalism has been extended to include an optional logarithmic parameterization of the external R-matrix, for which any or all parameters may be varied, (3) the ability to vary sample thickness, effective temperature, matching radius, and/or resolution-broadening parameters has been incorporated, (4) to avoid loss of information (i.e., computer round-off errors) between runs, the ''covariance file'' now includes precise values for all variables, and (5) unused but correlated variables may be included in the analysis. Because of these and earlier changes, the 1980 SAMMY manual is now hopelessly obsolete. This report is intended to be complete documentation for the current version of SAMMY. Its publication in looseleaf form will permit updates to the manual to be made concurrently with updates to the code itself, thus eliminating most of the time lag between update and documentation

  12. Parallel Computation of the Jacobian Matrix for Nonlinear Equation Solvers Using MATLAB

    Science.gov (United States)

    Rose, Geoffrey K.; Nguyen, Duc T.; Newman, Brett A.

    2017-01-01

    Demonstrating speedup for parallel code on a multicore shared-memory PC can be challenging in MATLAB due to underlying parallel operations that are often opaque to the user. This can limit the potential for improvement of serial code, even for so-called embarrassingly parallel applications. One such application is the computation of the Jacobian matrix inherent to most nonlinear equation solvers. Computation of this matrix represents the primary bottleneck in nonlinear solver speed, such that commercial finite element (FE) and multi-body-dynamic (MBD) codes attempt to minimize computations. A timing study using MATLAB's Parallel Computing Toolbox was performed for numerical computation of the Jacobian. Several approaches for implementing parallel code were investigated, but only the single program multiple data (spmd) method using composite objects provided positive results. Parallel code speedup is demonstrated, but the goal of linear speedup through the addition of processors was not achieved due to the PC architecture.
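
The numerical Jacobian in question is column-wise independent, which is what makes it embarrassingly parallel: each perturbed evaluation of f can be assigned to a separate worker. A serial forward-difference sketch in Python (illustrative only; the study itself used MATLAB's spmd construct) is:

```python
import numpy as np

def jacobian_fd(f, x, h=1e-6):
    """Forward-difference Jacobian of f at x. Each column j depends only on
    one perturbed evaluation f(x + h e_j), so the loop body is independent
    across j and can be farmed out to parallel workers."""
    fx = np.asarray(f(x))
    J = np.empty((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(f(xp)) - fx) / h
    return J
```

A parallel version would simply map the loop body over a worker pool, at the cost of the scheduling and memory-bandwidth overheads the paper discusses.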

  13. TRAC-P validation test matrix. Revision 1.0

    International Nuclear Information System (INIS)

    Hughes, E.D.; Boyack, B.E.

    1997-01-01

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort

  14. Performance optimization of spectral amplitude coding OCDMA system using new enhanced multi diagonal code

    Science.gov (United States)

    Imtiaz, Waqas A.; Ilyas, M.; Khan, Yousaf

    2016-11-01

    This paper proposes a new code to optimize the performance of spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. The unique two-matrix structure of the proposed enhanced multi diagonal (EMD) code and its effective correlation properties, between intended and interfering subscribers, significantly elevate the performance of the SAC-OCDMA system by negating multiple access interference (MAI) and the associated phase induced intensity noise (PIIN). Performance of the SAC-OCDMA system based on the proposed code is thoroughly analyzed for two detection techniques through analytic and simulation analysis, by referring to bit error rate (BER), signal to noise ratio (SNR) and eye patterns at the receiving end. It is shown that the EMD code with the SDD technique provides high transmission capacity, reduces receiver complexity, and performs better than the complementary subtraction detection (CSD) technique. Furthermore, the analysis shows that, for a minimum acceptable BER of 10^-9, the proposed system supports 64 subscribers at data rates of up to 2 Gbps for both up- and downlink transmission.

  15. In-vessel core degradation code validation matrix

    International Nuclear Information System (INIS)

    Haste, T.J.; Adroguer, B.; Gauntt, R.O.; Martinez, J.A.; Ott, L.J.; Sugimoto, J.; Trambauer, K.

    1996-01-01

    The objective of the current Validation Matrix is to define a basic set of experiments, for which comparison of the measured and calculated parameters forms a basis for establishing the accuracy of test predictions, covering the full range of in-vessel core degradation phenomena expected in light water reactor severe accident transients. The scope of the review covers PWR and BWR designs of Western origin: the coverage of phenomena extends from the initial heat-up through to the introduction of melt into the lower plenum. Concerning fission product behaviour, the effect of core degradation on fission product release is considered. The report provides brief overviews of the main LWR severe accident sequences and of the dominant phenomena involved. The experimental database is summarised. These data are cross-referenced against a condensed set of the phenomena and test condition headings presented earlier, judging the results against a set of selection criteria and identifying key tests of particular value. The main conclusions and recommendations are listed. (K.A.)

  16. S.E.T., CSNI Separate Effects Test Facility Validation Matrix

    International Nuclear Information System (INIS)

    1997-01-01

    1 - Description of test facility: The SET matrix of experiments is suitable for the developmental assessment of thermal-hydraulic transient system computer codes, by selecting individual tests from selected facilities relevant to each phenomenon. Test facilities differ from one another in geometrical dimensions, geometrical configuration and operating capabilities or conditions. Correlations between SET facilities and phenomena were established on the basis of suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant and is sufficiently instrumented); limited suitability for model validation (which means that a facility is designed in such a way as to simulate the phenomena assumed to occur in a plant but has problems associated with imperfect scaling, different test fluids or insufficient instrumentation); and unsuitability for model validation. 2 - Description of test: Whereas integral experiments are usually designed to follow the behaviour of a reactor system in various off-normal or accident transients, separate effects tests focus on the behaviour of a single component, or on the characteristics of one thermal-hydraulic phenomenon. The construction of a separate effects test matrix is an attempt to collect together the best sets of openly available test data for code validation, assessment and improvement, from the wide range of experiments that have been carried out world-wide in the field of thermal hydraulics. In all, 2094 tests are included in the SET matrix

  17. A flexible R package for nonnegative matrix factorization

    Directory of Open Access Journals (Sweden)

    Seoighe Cathal

    2010-07-01

    Full Text Available Abstract Background Nonnegative Matrix Factorization (NMF) is an unsupervised learning technique that has been applied successfully in several fields, including signal processing, face recognition and text mining. Recent applications of NMF in bioinformatics have demonstrated its ability to extract meaningful information from high-dimensional data such as gene expression microarrays. Developments in NMF theory and applications have resulted in a variety of algorithms and methods. However, most NMF implementations have been on commercial platforms, while those that are freely available typically require programming skills. This limits their use by the wider research community. Results Our objective is to provide the bioinformatics community with an open-source, easy-to-use and unified interface to standard NMF algorithms, as well as with a simple framework to help implement and test new NMF methods. For that purpose, we have developed a package for the R/BioConductor platform. The package ports public code to R, and is structured to enable users to easily modify and/or add algorithms. It includes a number of published NMF algorithms and initialization methods and facilitates the combination of these to produce new NMF strategies. Commonly used benchmark data and visualization methods are provided to help in the comparison and interpretation of the results. Conclusions The NMF package helps realize the potential of Nonnegative Matrix Factorization, especially in bioinformatics, providing easy access to methods that have already yielded new insights in many applications. Documentation, source code and sample data are available from CRAN.
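
At its core, NMF approximates a nonnegative matrix V by a product of nonnegative factors, V ≈ WH, with inner dimension equal to the chosen rank. A minimal sketch of the classic Lee-Seung multiplicative updates, one of the algorithm families such packages ship, is:

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F.
    V must be elementwise nonnegative; returns nonnegative W (n x rank)
    and H (rank x m)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        # the small epsilon guards against division by zero
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The update rules never produce negative entries, which is what keeps the factors interpretable as additive parts in applications such as gene expression analysis.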

  18. GATO: an MHD stability code for axisymmetric plasmas with internal separatrices

    International Nuclear Information System (INIS)

    Bernard, L.C.; Helton, F.J.; Moore, R.W.

    1981-07-01

    The GATO code computes the growth rate of ideal magnetohydrodynamic instabilities in axisymmetric geometries with internal separatrices such as doublet and expanded spheromak. The basic method, which uses a variational principle and a Galerkin procedure to obtain a matrix eigenvalue problem, is common to the ERATO and PEST codes. A new coordinate system has been developed to handle the internal separatrix. Efficient algorithms have been developed to solve the matrix eigenvalue problem for matrices of rank as large as 40,000. Further improvement is expected using graph theoretical techniques to reorder the matrices. Using judicious mesh repartition, the marginal point can be determined with great precision. The code has been extensively used to optimize doublet and general tokamak plasmas
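
The Galerkin procedure mentioned above turns the variational principle into a generalized symmetric eigenvalue problem A x = w B x, with B positive definite. A toy numpy sketch of the standard Cholesky reduction to an ordinary symmetric problem (the matrices are illustrative stand-ins, not GATO's) is:

```python
import numpy as np

# Toy "delta-W" (potential-energy) matrix and positive-definite norm matrix
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])
B = np.diag([1.0, 2.0])

# Reduce A x = w B x to a standard symmetric problem using B = L L^T
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
w, y = np.linalg.eigh(Linv @ A @ Linv.T)   # eigenvalues in ascending order
x = Linv.T @ y                              # eigenvectors of the original problem
# a negative smallest eigenvalue w[0] would signal an ideal-MHD instability
```

Codes like GATO face the same algebra at rank tens of thousands, which is why sparse solvers and matrix-reordering techniques matter there.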

  19. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
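
The method reduces to a linear forward model, counts = R · activity, with the response matrix R precomputed by a shielding code, followed by inversion of R. A toy sketch with a hypothetical 3x3 response matrix (a real survey would compute R with a code such as MicroShield) is:

```python
import numpy as np

# Hypothetical response matrix: R[i, j] = counts at detector position i
# per unit activity at source location j
R = np.array([[1.0, 0.3, 0.1],
              [0.3, 1.0, 0.3],
              [0.1, 0.3, 1.0]])

activity_true = np.array([5.0, 0.0, 2.0])   # hypothetical source activities
counts = R @ activity_true                   # forward model: survey data
activity = np.linalg.solve(R, counts)        # matrix-inversion recovery
```

With noisy count data, as in the paper's tests, the recovered activities carry amplified noise when R is ill-conditioned, so regularization or nonnegativity constraints may be needed in practice.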

  20. TRAC-P validation test matrix. Revision 1.0

    Energy Technology Data Exchange (ETDEWEB)

    Hughes, E.D.; Boyack, B.E.

    1997-09-05

    This document briefly describes the elements of the Nuclear Regulatory Commission's (NRC's) software quality assurance program leading to software (code) qualification and identifies a test matrix for qualifying Transient Reactor Analysis Code (TRAC)-Pressurized Water Reactor Version (-P), or TRAC-P, to the NRC's software quality assurance requirements. Code qualification is the outcome of several software life-cycle activities, specifically, (1) Requirements Definition, (2) Design, (3) Implementation, and (4) Qualification Testing. The major objective of this document is to define the TRAC-P Qualification Testing effort.

  1. Syrio. A program for the calculation of the inverse of a matrix

    International Nuclear Information System (INIS)

    Garcia de Viedma Alonso, L.

    1963-01-01

    SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, written for the UNIVAC-UCT (SS-90). The treatment starts from the inversion formula of Sherman and Morrison and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but imposed by the storage capacity of the computer for which it was coded. (Author)
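
The Sherman-Morrison formula at the heart of this approach updates a known inverse after a rank-one change: inv(A + u v^T) = inv(A) - inv(A) u v^T inv(A) / (1 + v^T inv(A) u), costing O(n^2) instead of a fresh O(n^3) inversion. A minimal sketch:

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Given inv(A), return inv(A + u v^T) via the Sherman-Morrison formula.
    Requires 1 + v^T inv(A) u != 0, i.e. the updated matrix is non-singular."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)
```

Applying the update once per rank-one correction, as in Wilf's construction, builds the inverse of a general non-singular matrix from the inverse of a simpler starting matrix.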

  2. The SWAN coupling code: user's guide

    International Nuclear Information System (INIS)

    Litaudon, X.; Moreau, D.

    1988-11-01

    Coupling of slow waves in a plasma near the lower hybrid frequency is well known, and linear theory with a density step followed by a constant gradient can be used with some confidence. With the aid of the computer code SWAN, which stands for 'Slow Wave Antenna', the following parameters can be numerically calculated: n parallel power spectrum, directivity (weighted by the current drive efficiency), reflection coefficients (amplitude and phase) both before and after the E-plane junctions, scattering matrix at the plasma interface, scattering matrix at the E-plane junctions, maximum electric fields in secondary waveguides and the locations where they occur, the effect of passive waveguides on each side of the antenna, and the effect of a finite magnetic field in front of the antenna (for a homogeneous plasma). This manual gives the basic information on the main assumptions of the coupling theory and on the use and general structure of the code itself. It answers the questions: What are the main assumptions of the physical model? How is a job executed? What are the input parameters of the code? What are the output results, and where are they written? (author)

  3. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  4. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Science.gov (United States)

    Barekatain, Behrang; Khezrimotlagh, Dariush; Aizaini Maarof, Mohd; Ghaeini, Hamid Reza; Salleh, Shaharuddin; Quintana, Alfonso Ariza; Akbari, Behzad; Cabrera, Alicia Triviño

    2013-01-01

    In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet. This is largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending a large coefficients vector as the header has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, which is a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficient entry instead of n into each generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. In this regard, peers incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC based on the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
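
Random network coding itself is compact to sketch: coded packets are random linear combinations of source blocks over a finite field, and any full-rank set of them can be decoded by Gauss-Jordan elimination. The sketch below uses the prime field GF(257) for simplicity of arithmetic; practical systems, including the ones this paper targets, typically work over GF(2^8):

```python
import numpy as np

P = 257  # prime field modulus (illustrative; real RNC usually uses GF(2^8))

def encode(blocks, n_coded, rng):
    """Each coded packet is a random linear combination of the source blocks."""
    C = rng.integers(0, P, size=(n_coded, len(blocks)))
    return C, (C @ blocks) % P

def decode(C, coded):
    """Recover the source blocks by Gauss-Jordan elimination on [C | coded]
    modulo P. Assumes the collected packets span the full source rank."""
    n = C.shape[1]
    M = np.concatenate([C, coded], axis=1) % P
    for col in range(n):
        pivot = next(r for r in range(col, M.shape[0]) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]                      # swap pivot row up
        M[col] = (M[col] * pow(int(M[col, col]), -1, P)) % P   # normalize pivot
        for r in range(M.shape[0]):
            if r != col:
                M[r] = (M[r] - M[r, col] * M[col]) % P         # eliminate column
    return M[:n, n:]
```

The per-decode cost of this elimination is what schemes like MATIN try to avoid by constructing coefficient matrices that are cheap to invert by design.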

  5. Use of computational fluid dynamics codes for safety analysis of nuclear reactor systems, including containment. Summary report of a technical meeting

    International Nuclear Information System (INIS)

    2003-11-01

    Safety analysis is an important tool for justifying the safety of nuclear power plants. Typically, this type of analysis is performed by means of system computer codes with one dimensional approximation for modelling real plant systems. However, in the nuclear area there are issues for which traditional treatment using one dimensional system codes is considered inadequate for modelling local flow and heat transfer phenomena. There is therefore increasing interest in the application of three dimensional computational fluid dynamics (CFD) codes as a supplement to or in combination with system codes. There are a number of both commercial (general purpose) CFD codes as well as special codes for nuclear safety applications available. With further progress in safety analysis techniques, the increasing use of CFD codes for nuclear applications is expected. At present, the main objective with respect to CFD codes is generally to improve confidence in the available analysis tools and to achieve a more reliable approach to safety relevant issues. An exchange of views and experience can facilitate and speed up progress in the implementation of this objective. Both the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA) believed that it would be advantageous to provide a forum for such an exchange. Therefore, within the framework of the Working Group on the Analysis and Management of Accidents of the NEA's Committee on the Safety of Nuclear Installations, the IAEA and the NEA agreed to jointly organize the Technical Meeting on the Use of Computational Fluid Dynamics Codes for Safety Analysis of Reactor Systems, including Containment. The meeting was held in Pisa, Italy, from 11 to 14 November 2002. The publication constitutes the report of the Technical Meeting. It includes short summaries of the presentations that were made and of the discussions as well as conclusions and

  6. Network Coding Parallelization Based on Matrix Operations for Multicore Architectures

    DEFF Research Database (Denmark)

    Wunderlich, Simon; Cabrera, Juan; Fitzek, Frank

    2015-01-01

    such as the Raspberry Pi2 with four cores in the order of up to one full magnitude. The speed increase gain is even higher than the number of cores of the Raspberry Pi2 since the newly introduced approach exploits the cache architecture way better than by-the-book matrix operations.

  7. A random-matrix theory of the number sense.

    Science.gov (United States)

    Hannagan, T; Nieder, A; Viswanathan, P; Dehaene, S

    2017-02-19

    Number sense, a spontaneous ability to process approximate numbers, has been documented in human adults, infants and newborns, and many other animals. Species as distant as monkeys and crows exhibit very similar neurons tuned to specific numerosities. How number sense can emerge in the absence of learning or fine tuning is currently unknown. We introduce a random-matrix theory of self-organized neural states where numbers are coded by vectors of activation across multiple units, and where the vector codes for successive integers are obtained through multiplication by a fixed but random matrix. This cortical implementation of the 'von Mises' algorithm explains many otherwise disconnected observations ranging from neural tuning curves in monkeys to looking times in neonates and cortical numerotopy in adults. The theory clarifies the origin of Weber-Fechner's Law and yields a novel and empirically validated prediction of multi-peak number neurons. Random matrices constitute a novel mechanism for the emergence of brain states coding for quantity. This article is part of a discussion meeting issue 'The origins of numerical abilities'. © 2017 The Author(s).
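
The core mechanism of the theory, vector codes for successive integers produced by repeated multiplication with a fixed random matrix, can be sketched in a few lines. The network size, nonlinearity, and initialization below are illustrative choices, not the paper's fitted model:

```python
import numpy as np

rng = np.random.default_rng(1)
units = 200
# fixed random matrix, scaled so activity neither explodes nor vanishes
W = rng.normal(0.0, 1.0 / np.sqrt(units), (units, units))

def relu(x):
    return np.maximum(x, 0.0)

# The code for integer n+1 is obtained from the code for n by one
# multiplication with the fixed random matrix, followed by rectification
codes = [relu(rng.random(units))]
for _ in range(8):
    codes.append(relu(W @ codes[-1]))

def similarity(a, b):
    """Cosine similarity between two population codes."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In the paper's analysis, overlap between such codes falls off with numerical distance, which is what produces tuning-curve-like behavior across the population.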

  8. Image Coding Based on Address Vector Quantization.

    Science.gov (United States)

    Feng, Yushu

    Image coding is finding increased application in teleconferencing, archiving, and remote sensing. This thesis investigates the potential of Vector Quantization (VQ), a relatively new source coding technique, for compression of monochromatic and color images. Extensions of the Vector Quantization technique to the Address Vector Quantization method have been investigated. In Vector Quantization, the image data to be encoded are first processed to yield a set of vectors. A codeword from the codebook which best matches the input image vector is then selected. Compression is achieved by replacing the image vector with the index of the codeword which produced the best match; the index is sent to the channel. Reconstruction of the image is done by using a table lookup technique, where the label is simply used as an address for a table containing the representative vectors. A codebook of representative vectors (codewords) is generated using an iterative clustering algorithm such as K-means, or the generalized Lloyd algorithm. A review of different Vector Quantization techniques is given in chapter 1. Chapter 2 gives an overview of codebook design methods, including the Kohonen neural network, for codebook design. During the encoding process, the correlation of the addresses is considered, and Address Vector Quantization is developed for color image and monochrome image coding. Address VQ, which includes static and dynamic processes, is introduced in chapter 3. In order to overcome the problems in Hierarchical VQ, Multi-layer Address Vector Quantization is proposed in chapter 4. This approach gives the same performance as the normal VQ scheme, but the bit rate is about 1/2 to 1/3 of that of the normal VQ method. In chapter 5, a Dynamic Finite State VQ, based on a probability transition matrix to select the best subcodebook to encode the image, is developed. In chapter 6, a new adaptive vector quantization scheme, suitable for color video coding, called "A Self-Organizing
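
The encode/decode cycle described in the abstract (nearest-codeword search followed by table lookup) can be sketched with a plain k-means codebook. The deterministic initialization below is an illustrative simplification of Lloyd/LBG-style training:

```python
import numpy as np

def nearest(vectors, codebook):
    """Index of the best-matching codeword for each input vector."""
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d2, axis=1)

def train_codebook(vectors, k, n_iter=20):
    """Batch Lloyd (k-means) iteration for a VQ codebook, initialized
    from k training samples spread evenly through the data."""
    idx = np.linspace(0, len(vectors) - 1, k).astype(int)
    codebook = vectors[idx].astype(float)
    for _ in range(n_iter):
        labels = nearest(vectors, codebook)
        for j in range(k):
            if (labels == j).any():
                codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def decode(indices, codebook):
    """Reconstruction is a simple table lookup of representative vectors."""
    return codebook[indices]
```

Only the indices are transmitted, so the compression ratio is set by the codebook size relative to the vector dimension, which is exactly the trade-off the addressing schemes in this thesis refine.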

  9. GATO: An MHD stability code for axisymmetric plasmas with internal separatrices

    International Nuclear Information System (INIS)

    Bernard, L.C.; Helton, F.J.; Moore, R.W.

    1981-01-01

    The GATO code computes the growth rate of ideal magnetohydrodynamic instabilities in axisymmetric geometries with internal separatrices such as doublet and expanded spheromak. The basic method, which uses a variational principle and a Galerkin procedure to obtain a matrix eigenvalue problem, is common to the ERATO and PEST codes. A new coordinate system has been developed to handle the internal separatrix. Efficient algorithms have been developed to solve the matrix eigenvalue problem for matrices of rank as large as 40 000. Further improvement is expected using graph theoretical techniques to reorder the matrices. Using judicious mesh repartition, the marginal point can be determined with great precision. The code has been extensively used to optimize doublet and general tokamak plasmas. (orig.)

  10. Linear tree codes and the problem of explicit constructions

    Czech Academy of Sciences Publication Activity Database

    Pudlák, Pavel

    2016-01-01

    Roč. 490, February 1 (2016), s. 124-144 ISSN 0024-3795 R&D Projects: GA ČR GBP202/12/G061 Institutional support: RVO:67985840 Keywords : tree code * error correcting code * triangular totally nonsingular matrix Subject RIV: BA - General Mathematics Impact factor: 0.973, year: 2016 http://www.sciencedirect.com/science/article/pii/S002437951500645X

  11. Method of forming a ceramic matrix composite and a ceramic matrix component

    Science.gov (United States)

    de Diego, Peter; Zhang, James

    2017-05-30

    A method of forming a ceramic matrix composite component includes providing a formed ceramic member having a cavity and filling at least a portion of the cavity with a ceramic foam. The ceramic foam is deposited on a barrier layer covering at least one internal passage of the cavity. The method includes processing the formed ceramic member and ceramic foam to obtain a ceramic matrix composite component. Also provided is a method of forming a ceramic matrix composite blade and a ceramic matrix composite component.

  12. Optimizing the ATLAS code with different profilers

    CERN Document Server

    Kama, S; The ATLAS collaboration

    2013-01-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 4M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort different profiling tools and techniques are being used. These include well known tools, such as the Valgrind suite and Intel Amplifier; less common tools like PIN, PAPI, and GOODA; as well as techniques such as library interposing. In this talk we will mainly focus on PIN tools and GOODA. PIN is a dynamic binary instrumentation tool which can obtain statistics such as call counts, instruction counts and interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations which has provided the insight necessary to achieve significant performance...

  13. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.
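
    The closed-loop design discussed above can be illustrated with a conventional LQR computation via the algebraic Riccati equation (a generic sketch, not the paper's closed-form solution; the spring-mass values and weighting matrices below are assumed for illustration):

```python
# Generic LQR sketch for a spring-mass-damper system (illustrative values).
import numpy as np
from scipy.linalg import solve_continuous_are

m, k, c = 1.0, 4.0, 0.5                       # mass, stiffness, damping (assumed)
A = np.array([[0.0, 1.0], [-k / m, -c / m]])  # state: [position, velocity]
B = np.array([[0.0], [1.0 / m]])
Q = np.diag([k, m])                           # energy-like weighting (assumed choice)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)          # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)               # optimal gain, u = -Kx

print(np.linalg.eigvals(A - B @ K))           # closed-loop poles, all in the left half-plane
```

    A passive-implementability check, as in the paper, would further constrain K to gains realizable with springs and dampers.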

  14. Evaluation Codes from an Affine Variety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  15. On the numerical verification of industrial codes

    International Nuclear Information System (INIS)

    Montan, Sethy Akpemado

    2013-01-01

    Numerical verification of industrial codes, such as those developed at EDF R and D, is required to estimate the precision and the quality of computed results, even more so for codes running in HPC environments where millions of instructions are performed each second. These programs usually use external libraries (MPI, BLACS, BLAS, LAPACK). In this context, it is required to have a tool that is as non-intrusive as possible to avoid rewriting the original code. In this regard, the CADNA library, which implements Discrete Stochastic Arithmetic, appears to be a promising approach for industrial applications. In the first part of this work, we are interested in an efficient implementation of the BLAS routine DGEMM (General Matrix Multiply) implementing Discrete Stochastic Arithmetic. The implementation of a basic algorithm for matrix product using stochastic types leads to an overhead greater than 1000 for a 1024 * 1024 matrix compared to the standard version and commercial versions of xGEMM. Here, we detail different solutions to reduce this overhead and the results we have obtained. A new routine, DgemmCADNA, has been designed; it has allowed the overhead to be reduced from 1100 to 35 compared to optimized BLAS implementations (GotoBLAS). Then, we focus on the numerical verification of Telemac-2D computed results. Performing a numerical validation with the CADNA library shows that more than 30% of the numerical instabilities occurring during an execution come from the dot product function. A more accurate implementation of the dot product with compensated algorithms is presented in this work. We show that implementing these kinds of algorithms, in order to improve the accuracy of computed results, does not alter the code performance. (author)
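
    The compensated dot product idea above can be sketched as follows (a simplified Neumaier-style variant that compensates only the summation of the rounded products, not the full dot2-family algorithm used for CADNA-grade accuracy):

```python
# Sketch: compensated dot product via Neumaier summation of rounded products.
def dot_naive(x, y):
    s = 0.0
    for a, b in zip(x, y):
        s += a * b
    return s

def dot_compensated(x, y):
    s = 0.0      # running sum
    comp = 0.0   # running compensation for lost low-order bits
    for a, b in zip(x, y):
        v = a * b
        t = s + v
        if abs(s) >= abs(v):
            comp += (s - t) + v   # recover low-order bits of v lost in s + v
        else:
            comp += (v - t) + s   # recover low-order bits of s lost in s + v
        s = t
    return s + comp

x, y = [1e16, 1.0, -1e16], [1.0, 1.0, 1.0]
print(dot_naive(x, y), dot_compensated(x, y))   # 0.0 vs the exact 1.0
```

    The cancellation example shows why a naive dot product can lose all significant digits while the compensated version keeps them at essentially the same cost.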

  16. Literature survey of matrix diffusion theory and of experiments and data including natural analogues

    International Nuclear Information System (INIS)

    Ohlsson, Yvonne; Neretnieks, I.

    1995-08-01

    Diffusion theory in general and matrix diffusion in particular have been outlined, and experimental work has been reviewed. Literature diffusion data have been systematized in the form of tables, and the data have been compared and discussed. Strong indications of surface diffusion and anion exclusion have been found, and natural analogue studies and in-situ experiments suggest pore connectivity on the scale of meters. Matrix diffusion, however, mostly seems to be confined to zones of higher porosity extending only a few centimeters into the rock. Surface coating material does not seem to hinder sorption or diffusion into the rock. 54 refs, 18 tabs
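
    The centimeter-scale penetration noted above follows from the standard semi-infinite diffusion solution C/C0 = erfc(x / (2*sqrt(Da*t))); a small sketch (the diffusivity and time values are assumed for illustration, not taken from the survey):

```python
# Sketch: 1-D matrix diffusion profile into a semi-infinite rock matrix.
import math

Da = 1e-13            # apparent diffusivity, m^2/s (assumed, typical crystalline-rock order)
t = 100 * 3.15e7      # 100 years in seconds

def rel_conc(x):
    """Relative concentration C/C0 at depth x (m) after time t."""
    return math.erfc(x / (2.0 * math.sqrt(Da * t)))

for x_cm in (1, 5, 10):
    print(f"{x_cm} cm: C/C0 = {rel_conc(x_cm / 100.0):.3g}")
```

    With these assumed values the relative concentration drops by orders of magnitude within roughly a decimeter, consistent with the survey's observation that penetration is mostly confined to a few centimeters.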

  17. Video over DSL with LDGM Codes for Interactive Applications

    Directory of Open Access Journals (Sweden)

    Laith Al-Jobouri

    2016-05-01

    Full Text Available Digital Subscriber Line (DSL) network access is subject to error bursts, which, for interactive video, can introduce unacceptable latencies if video packets need to be re-sent. If the video packets are protected against errors with Forward Error Correction (FEC), calculation of the application-layer channel codes themselves may also introduce additional latency. This paper proposes Low-Density Generator Matrix (LDGM) codes rather than other popular codes because they are more suitable for interactive video streaming, not only for their computational simplicity but also for their licensing advantage. The paper demonstrates that a reduction of up to 4 dB in video distortion is achievable with LDGM Application Layer (AL) FEC. In addition, an extension to the LDGM scheme is demonstrated, which works by rearranging the columns of the parity check matrix so as to make it even more resilient to burst errors. Telemedicine and video conferencing are typical target applications.
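
    A minimal sketch of why LDGM codes are computationally simple: the parity bits are just sparse XOR combinations of the message bits (a toy random sparse generator part below, not the paper's burst-optimized column arrangement):

```python
# Toy systematic LDGM encoder over GF(2): parity = message @ M mod 2.
import numpy as np

rng = np.random.default_rng(1)
k, m, row_weight = 12, 6, 2                  # message bits, parity bits, sparsity (assumed)

# Sparse k x m generator part: each message bit feeds `row_weight` parity bits.
M = np.zeros((k, m), dtype=np.uint8)
for i in range(k):
    M[i, rng.choice(m, size=row_weight, replace=False)] = 1

def encode(msg):
    parity = msg @ M % 2                     # a handful of XORs per parity bit
    return np.concatenate([msg, parity])     # systematic codeword [s | p]

msg = rng.integers(0, 2, size=k, dtype=np.uint8)
cw = encode(msg)
print(cw)
```

    Encoding cost grows only with the number of ones in M, which is what keeps application-layer FEC latency low for interactive streams.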

  18. MATIN: a random network coding based framework for high quality peer-to-peer live video streaming.

    Directory of Open Access Journals (Sweden)

    Behrang Barekatain

    Full Text Available In recent years, Random Network Coding (RNC) has emerged as a promising solution for efficient Peer-to-Peer (P2P) video multicasting over the Internet, largely because RNC noticeably increases the error resiliency and throughput of the network. However, the high transmission overhead arising from sending large coefficients vectors as headers has been the most important challenge of RNC. Moreover, due to employing the Gauss-Jordan elimination method, considerable computational complexity can be imposed on peers in decoding the encoded blocks and checking linear dependency among the coefficients vectors. In order to address these challenges, this study introduces MATIN, a random network coding based framework for efficient P2P video streaming. MATIN includes a novel coefficients matrix generation method such that there is no linear dependency in the generated coefficients matrix. Using the proposed framework, each peer encapsulates one coefficients entry, instead of n, into the generated encoded packet, which results in very low transmission overhead. It is also possible to obtain the inverted coefficients matrix using a small number of simple arithmetic operations. Peers therefore incur very low computational complexity. As a result, MATIN permits random network coding to be more efficient in P2P video streaming systems. The results obtained from simulation using OMNET++ show that it substantially outperforms RNC with the Gauss-Jordan elimination method by providing better video quality on peers in terms of four important performance metrics: video distortion, dependency distortion, end-to-end delay and initial startup delay.
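
    The linear-dependency check that makes Gauss-Jordan decoding expensive can be sketched as a GF(2) rank computation (illustrative only; practical RNC typically works over a larger field such as GF(2^8)):

```python
# Sketch: rank of RNC coefficient vectors over GF(2) via bit-packed elimination.
import numpy as np

def gf2_rank(rows):
    """Rank over GF(2) of a list of 0/1 rows."""
    packed = [int("".join(str(int(b)) for b in r), 2) for r in rows]
    rank = 0
    while packed:
        pivot = max(packed)
        packed.remove(pivot)
        if pivot == 0:
            continue
        rank += 1
        msb = pivot.bit_length()
        # Clear the pivot's leading bit from every remaining row that shares it.
        packed = [r ^ pivot if r.bit_length() == msb else r for r in packed]
    return rank

coeffs = np.array([[1, 0, 1],
                   [0, 1, 1],
                   [1, 1, 1]])
print(gf2_rank(coeffs.tolist()))   # 3 -> full rank, the blocks are decodable
```

    A receiver can decode n encoded blocks only when the coefficient matrix has full rank, which is exactly the property MATIN's construction guarantees up front.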

  19. Theoretical treatment of molecular photoionization based on the R-matrix method

    International Nuclear Information System (INIS)

    Tashiro, Motomichi

    2012-01-01

    The R-matrix method was implemented to treat the molecular photoionization problem based on the UK R-matrix codes. Although the method was formulated to treat the photoionization process long ago, its application has been mostly limited to the photoionization of atoms. Application of the method to valence photoionization as well as inner-shell photoionization processes will be presented.

  20. A 222 energy bins response matrix for a {sup 6}LiI scintillator Bss system

    International Nuclear Information System (INIS)

    Lacerda, M. A. S.; Vega C, H. R.; Mendez V, R.; Lorente F, A.; Ibanez F, S.; Gallego D, E.

    2016-10-01

    A new response matrix was calculated for a Bonner Sphere Spectrometer (Bss) with a {sup 6}LiI(Eu) scintillator. We utilized the Monte Carlo N-particle radiation transport code MCNPX, version 2.7.0, with the Endf/B-VII.0 nuclear data library to calculate the responses for 6 spheres and the bare detector, for energies varying from 9.441 E(-10) MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 222 energy groups. A Bss like the one modeled in this work was utilized to measure the neutron spectrum generated by the {sup 241}AmBe source of the Universidad Politecnica de Madrid. From the count rates obtained with this Bss system we unfolded the neutron spectrum utilizing the BUNKIUT code for 31 energy bins (UTA-4 response matrix) and the MAXED code with the newly calculated response functions. We compared the spectra obtained with these Bss system / unfolding codes with that obtained from measurements performed with a Bss system constituted of 12 spheres with a spherical {sup 3}He Sp-9 counter (Centronic Ltd., UK) and the MAXED code with the system-specific response functions (Bss-CIEMAT). A relatively good agreement was observed between our response matrix and those calculated by other authors. In general, we observed an improvement in the agreement as the energy increases. However, higher discrepancies were observed for energies close to 1 E(-8) MeV and, mainly, for energies above 20 MeV. These discrepancies were mainly attributed to the differences in the cross-section libraries employed. The ambient dose equivalent (H*(10)) calculated with the {sup 6}LiI-MAXED showed a good agreement with values measured with the neutron area monitor Berthold LB 6411 and was within 12% of the value obtained with another Bss system (Bss-CIEMAT). The response matrix calculated in this work can be utilized together with the MAXED code to generate neutron spectra with a good energy resolution up to 20 MeV. Some additional tests are being done to validate this response matrix and improve the results for energies
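
    The unfolding step described above (measured counts = response matrix × spectrum) can be sketched with a generic non-negative least-squares solver (a toy stand-in for BUNKIUT/MAXED; the response matrix and spectrum below are synthetic):

```python
# Toy sketch of response-matrix unfolding with non-negativity (not MAXED's
# maximum-entropy method): solve counts = R_resp @ spectrum for spectrum >= 0.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_spheres, n_bins = 7, 22                     # 6 spheres + bare detector, coarse binning
R_resp = rng.random((n_spheres, n_bins))      # synthetic stand-in response matrix
true_spec = np.exp(-0.5 * ((np.arange(n_bins) - 12) / 3.0) ** 2)
counts = R_resp @ true_spec                   # noise-free simulated count rates

unfolded, residual = nnls(R_resp, counts)
print(residual)                               # ~0 for noise-free, consistent counts
```

    With far fewer detectors than energy bins the problem is under-determined, which is why real unfolding codes add regularization such as the maximum-entropy principle used by MAXED.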

  1. Use of the algebraic coding theory in nuclear electronics

    International Nuclear Information System (INIS)

    Nikityuk, N.M.

    1990-01-01

    New results of studies of the development and use of the syndrome coding method in nuclear electronics are described. Two aspects of using the syndrome coding method are considered: for sequential coding devices and for the creation of fast parallel data compression devices. Specific examples of the creation of time-to-digital converters based on circular counters are described. Several time intervals can be coded very fast and with a high resolution by means of these converters. An effective coding matrix that can be used for light-signal coding is presented, and the rule for constructing such coding matrices for an arbitrary number of channels and multiplicity n is given. Methods for resolving ambiguities in silicon detectors and for creating special-purpose processors for high-energy spectrometers are given. 21 refs.; 9 figs.; 3 tabs
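
    The syndrome idea can be illustrated with the classic Hamming(7,4) code, where the syndrome read as a binary number directly addresses the erroneous channel (a textbook sketch, not the circular-counter converters of the paper):

```python
# Sketch: Hamming(7,4) syndrome decoding. Column j of H is the binary
# expansion of j+1, so the syndrome equals the 1-based error position.
import numpy as np

H = np.array([[(j + 1) >> b & 1 for j in range(7)] for b in range(3)])

def correct(received):
    syndrome = H @ received % 2
    pos = int(syndrome @ [1, 2, 4])          # syndrome read as a binary number
    if pos:
        received = received.copy()
        received[pos - 1] ^= 1               # flip the addressed bit
    return received

codeword = np.zeros(7, dtype=int)            # the all-zero codeword
noisy = codeword.copy()
noisy[4] ^= 1                                # single bit error in channel 5
print(correct(noisy))                        # -> corrected back to all zeros
```

    In hardware the same syndrome can be formed with a handful of XOR gates, which is what makes syndrome coding attractive for fast trigger electronics.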

  2. Multithreading for synchronization tolerance in matrix factorization

    International Nuclear Information System (INIS)

    Buttari, Alfredo; Dongarra, Jack; Husbands, Parry; Kurzak, Jakub; Yelick, Katherine

    2007-01-01

    Physical constraints such as power, leakage and pin bandwidth are currently driving the HPC industry to produce systems with unprecedented levels of concurrency. In these parallel systems, synchronization and memory operations are becoming considerably more expensive than before. In this work we study parallel matrix factorization codes and conclude that they need to be re-engineered to avoid unnecessary (and expensive) synchronization. We propose the use of multithreading combined with intelligent schedulers and implement representative algorithms in this style. Our results indicate that this strategy can significantly outperform traditional codes
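
    The re-engineering described above starts from a tiled factorization whose per-tile operations become independently schedulable tasks; a sequential sketch of the right-looking tiled Cholesky (the tile size and test matrix are illustrative):

```python
# Sketch: right-looking tiled Cholesky. Written sequentially here; in a
# task-scheduled runtime the TRSM and update tasks of each step can run in
# parallel, avoiding the global synchronization of classical blocked codes.
import numpy as np
from scipy.linalg import solve_triangular

def tiled_cholesky(A, nb):
    A = A.copy()
    n = A.shape[0]
    for k in range(0, n, nb):
        e = min(k + nb, n)
        A[k:e, k:e] = np.linalg.cholesky(A[k:e, k:e])        # POTRF task
        if e < n:
            A[e:, k:e] = solve_triangular(                   # TRSM tasks
                A[k:e, k:e], A[e:, k:e].T, lower=True).T
            A[e:, e:] -= A[e:, k:e] @ A[e:, k:e].T           # SYRK/GEMM tasks
    return np.tril(A)

rng = np.random.default_rng(0)
M = rng.random((8, 8))
A = M @ M.T + 8 * np.eye(8)                                  # SPD test matrix
L = tiled_cholesky(A, nb=3)
print(np.allclose(L @ L.T, A))
```

    Each tile operation depends only on a few earlier tiles, so an intelligent scheduler can overlap panel factorization with trailing updates instead of synchronizing after every step.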

  3. On Field Size and Success Probability in Network Coding

    DEFF Research Database (Denmark)

    Geil, Hans Olav; Matsumoto, Ryutaroh; Thomsen, Casper

    2008-01-01

    Using tools from algebraic geometry and Gröbner basis theory we solve two problems in network coding. First we present a method to determine the smallest field size for which linear network coding is feasible. Second we derive improved estimates on the success probability of random linear network...... coding. These estimates take into account which monomials occur in the support of the determinant of the product of Edmonds matrices. Therefore we finally investigate which monomials can occur in the determinant of the Edmonds matrix....

  4. Evaluation of the implementation of the R-matrix formalism with reference to the astrophysically important {sup 18}F(p,α){sup 15}O reaction

    Energy Technology Data Exchange (ETDEWEB)

    Mountford, D.J., E-mail: d.j.mountford86@gmail.com [SUPA, School of Physics and Astronomy, University of Edinburgh, EH9 3JZ (United Kingdom); Boer, R.J. de [Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Descouvemont, P. [Physique Nucléaire Théorique et Physique Mathématique, C.P. 229, Université Libre de Bruxelles (ULB), B 1050 Brussels (Belgium); Murphy, A. St. J. [SUPA, School of Physics and Astronomy, University of Edinburgh, EH9 3JZ (United Kingdom); Uberseder, E.; Wiescher, M. [Department of Physics, University of Notre Dame, Notre Dame, Indiana 46556 (United States)

    2014-12-11

    Background. The R-Matrix formalism is a crucial tool in the study of nuclear astrophysics reactions, and many codes have been written to implement the relevant mathematics. One such code makes use of Visual Basic macros. A further open-source code, AZURE, written in the FORTRAN programming language, is available from the JINA collaboration, and a C++ version, AZURE2, has recently become available. Purpose. The detailed mathematics and extensive programming required to implement broadly applicable R-Matrix codes make comparisons between different codes highly desirable in order to check for errors. This paper presents a comparison of the three codes based around data and recent results of the astrophysically important {sup 18}F(p,α){sup 15}O reaction. Methods. Using the same analysis techniques as in the work of Mountford et al., parameters are extracted from the two JINA codes, and the resulting cross-sections are compared. This includes both refitting data with each code and making low-energy extrapolations. Results. All extracted parameters are shown to be broadly consistent between the three codes, and the resulting calculations are in good agreement barring a known low-energy problem in the original AZURE code. Conclusion. The three codes are shown to be broadly consistent with each other and equally valid in the study of astrophysical reactions, although one must be careful when considering low-lying, narrow resonances, which can be problematic when integrating.

  5. Neutron response matrix for unfolding NE-213 measurements to 21 MeV

    International Nuclear Information System (INIS)

    Ingersoll, D.T.; Wehring, B.W.; Johnson, R.H.

    1976-01-01

    A neutron response matrix from measured neutron responses of NE-213 in the energy range of 0.2 to 22 MeV is presented. An interpolation scheme was used to construct an 81-column matrix from the data of Verbinski, Burrus, Love, Zobel, and Hill. As a test of the new response matrix, the Cf-252 neutron spectrum was measured and unfolded using the new response matrix and the FORIST unfolding code. The spectrum agrees well with previous measurements at lower energies, while providing new information above 8 MeV

  6. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  7. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    Full Text Available Low-density parity-check (LDPC) code is a linear-block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo code. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties of LDPC code are quite suitable for applications in future wireless local area networks.
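
    One reason difference-family constructions work well can be sketched with the planar difference set {1, 2, 4} mod 7: since every nonzero difference occurs exactly once, any two rows of the resulting circulant share exactly one position, so the Tanner graph has no 4-cycles (a toy single-circulant example, not the paper's exact construction):

```python
# Sketch: circulant parity-check matrix from the planar difference set {1,2,4} mod 7.
import numpy as np

def circulant(first_row):
    # Each row is the previous row cyclically shifted right by one.
    return np.array([np.roll(first_row, i) for i in range(len(first_row))],
                    dtype=np.uint8)

inc = np.zeros(7, dtype=np.uint8)
inc[[1, 2, 4]] = 1                  # incidence vector of the difference set
H = circulant(inc)                  # 7x7 quasi-cyclic parity-check matrix

overlap = H.astype(int) @ H.T.astype(int)
off_diag = overlap - np.diag(np.diag(overlap))
print(H.sum(axis=1), off_diag.max())   # row weight 3; pairwise overlap 1 -> girth >= 6
```

    The circulant structure also means the encoder only needs shift registers, which is the source of the low encoding complexity claimed in the abstract.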

  8. Allele coding in genomic evaluation

    DEFF Research Database (Denmark)

    Strandén, Ismo; Christensen, Ole Fredslund

    2011-01-01

    Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker...... effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous...... genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call...
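
    The two allele codings described above can be sketched with a small genotype matrix, together with a VanRaden-style centered genomic relationship matrix (the scaling G = ZZ'/(2Σp(1-p)) is the commonly used form, assumed here; the genotypes are illustrative):

```python
# Sketch: 0/1/2 allele coding, mean-centered coding, and a genomic
# relationship matrix (illustrative genotypes, assumed VanRaden-style scaling).
import numpy as np

M = np.array([[0, 1, 2, 1],
              [2, 1, 0, 1],
              [1, 1, 2, 0]], dtype=float)    # animals x markers, 0/1/2 coding

p = M.mean(axis=0) / 2.0                     # estimated allele frequencies per marker
Z = M - 2.0 * p                              # centered coding: each column mean is 0
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))  # genomic relationship matrix
print(np.round(G, 3))
```

    Centering shifts all regression coefficients by a constant per marker, which (as the paper discusses) leaves predicted differences between animals unchanged while improving numerical properties.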

  9. Stress and Damage in Polymer Matrix Composite Materials Due to Material Degradation at High Temperatures

    Science.gov (United States)

    McManus, Hugh L.; Chamis, Christos C.

    1996-01-01

    This report describes analytical methods for calculating stresses and damage caused by degradation of the matrix constituent in polymer matrix composite materials. Laminate geometry, material properties, and matrix degradation states are specified as functions of position and time. Matrix shrinkage and property changes are modeled as functions of the degradation states. The model is incorporated into an existing composite mechanics computer code. Stresses, strains, and deformations at the laminate, ply, and micro levels are calculated, and from these calculations it is determined if there is failure of any kind. The rationale for the model (based on published experimental work) is presented, its integration into the laminate analysis code is outlined, and example results are given, with comparisons to existing material and structural data. The mechanisms behind the changes in properties and in surface cracking during long-term aging of polyimide matrix composites are clarified. High-temperature-material test methods are also evaluated.

  10. Methodology, status and plans for development and assessment of the code ATHLET

    Energy Technology Data Exchange (ETDEWEB)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH Forschungsgelaende, Garching (Germany)

    1997-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, full-range drift-flux model, dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities.

  11. Methodology, status and plans for development and assessment of the code ATHLET

    International Nuclear Information System (INIS)

    Teschendorff, V.; Austregesilo, H.; Lerchl, G.

    1997-01-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond design basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, full-range drift-flux model, dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system which was investigated in the UPTF, PKL and LOBI test facilities

  12. Variable weight spectral amplitude coding for multiservice OCDMA networks

    Science.gov (United States)

    Seyedzadeh, Saleh; Rahimian, Farzad Pour; Glesk, Ivan; Kakaee, Majid H.

    2017-09-01

    The emergence of heterogeneous data traffic such as voice over IP, video streaming and online gaming has demanded networks capable of supporting quality of service (QoS) at the physical layer with traffic prioritisation. This paper proposes a new variable-weight code based on spectral amplitude coding for optical code-division multiple-access (OCDMA) networks to support QoS differentiation. The proposed variable-weight multi-service (VW-MS) code relies on basic matrix construction. A mathematical model is developed for performance evaluation of VW-MS OCDMA networks. It is shown that the proposed code provides an optimal code length with minimum cross-correlation value when compared to other codes. Numerical results for a VW-MS OCDMA network designed for triple-play services operating at 0.622 Gb/s, 1.25 Gb/s and 2.5 Gb/s are considered.

  13. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semi-modular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed using FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal-hydraulic, reactor core and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  14. Configuration control based on risk matrix for radiotherapy treatment

    International Nuclear Information System (INIS)

    Montes de Oca Quinnones, Joe; Torres Valle, Antonio

    2015-01-01

    The incorporation of scientific and technical breakthroughs into radiotherapy represents a challenge, since the range of equipment failures and human errors that can trigger unfavorable consequences for patients, the public, or occupationally exposed workers also diversifies, forcing the incorporation of new techniques for risk evaluation and for detecting the weak points that can lead to these consequences. To evaluate the risks of radiotherapy practices there is the SEVRRA code, based on the Risk Matrix method. SEVRRA is the code most frequently used in risk studies of radiotherapy treatment. On the other hand, starting from the development of tools to control dangerous configurations in nuclear power plants, the SECURE code has been developed; in its Risk Matrix variant it has gained a convenient man-machine interface for performing risk analyses of radiotherapy treatment, modeling many combinations of scenarios. These capabilities greatly facilitate risk studies and optimization applications in these practices. The SECURE-Risk Matrix system incorporates graphic and analytical capabilities, which make the analyses and the subsequent documentation of all results more flexible. The paper shows the application of the proposed system to an integral risk study for the process of radiotherapy treatment with a linear accelerator. (Author)
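
    The Risk Matrix method underlying SEVRRA/SECURE can be sketched as a simple lookup combining frequency and consequence levels into a risk category (the levels and thresholds below are illustrative assumptions, not the codes' actual tables):

```python
# Illustrative risk-matrix lookup (assumed levels and thresholds).
FREQ = ("very_low", "low", "medium", "high")
CONSEQ = ("minor", "moderate", "severe", "catastrophic")

def risk_level(freq, conseq):
    """Map a (frequency, consequence) pair to a risk category."""
    score = FREQ.index(freq) + CONSEQ.index(conseq)
    if score <= 1:
        return "low"
    if score <= 3:
        return "medium"
    if score <= 5:
        return "high"
    return "very_high"

print(risk_level("high", "catastrophic"))    # -> very_high
```

    A configuration-control tool then flags any plant or treatment configuration whose combined level crosses a predefined risk threshold.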

  15. Safety in nuclear power plant operation, including commissioning and decommissioning. A code of practice

    International Nuclear Information System (INIS)

    1978-01-01

    Safe operation of a nuclear power plant postulates satisfactory siting, design, construction and commissioning, together with proper management and operation of the plant. This Code of Practice deals with the safety aspects of management, commissioning, operation and decommissioning of the plant. It forms part of the Agency's programme, referred to as the NUSS programme, for establishing Codes of Practice and Safety Guides relating to land-based stationary thermal neutron power plants. It has been prepared for the use of those responsible for the operation of stationary nuclear power plants, the main function of which is the generation of electrical and/or thermal power, and for the use of those responsible for regulating the operation of such plants. It is not intended for application to reactors used solely for experimental or research purposes. The provisions in the Code are designed to provide assurance that operational activities are carried out without undue radiological hazard to the general public and to persons on the site. It should be understood that the provisions in the Code set forth minimum requirements which shall be met in order to achieve safe operation of a nuclear power plant

  16. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
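
    The "lossy plus residual" guarantee can be sketched as follows (a truncated-SVD lossy layer stands in for the paper's matrix/tensor decompositions; quantizing the residual with step 2*eps bounds the reconstruction error by eps):

```python
# Sketch of near-lossless "lossy plus residual" coding with a truncated-SVD
# lossy layer (stand-in for the paper's matrix/tensor decompositions).
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((16, 256))           # stand-in multichannel signal
eps = 0.01                                   # guaranteed max absolute error

U, s, Vt = np.linalg.svd(X, full_matrices=False)
lossy = U[:, :4] * s[:4] @ Vt[:4]            # rank-4 lossy layer
residual = X - lossy
q = np.round(residual / (2 * eps))           # residual layer: uniform quantizer
recon = lossy + q * 2 * eps                  # decoder output

print(np.abs(X - recon).max())               # bounded by eps by construction
```

    In the actual algorithm the quantized residual indices would then be entropy-coded (arithmetic coding), but the error bound comes entirely from the quantizer step.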

  17. One-dimensional transport code for one-group problems in plane geometry

    International Nuclear Information System (INIS)

    Bareiss, E.H.; Chamot, C.

    1970-09-01

    Equations and results are given for various methods of solution of the one-dimensional transport equation for one energy group in plane geometry with inelastic scattering and an isotropic source. After considerable investigation, a matrix method of solution was found to be faster and more stable than iteration procedures. A description of the code is included which allows for up to 24 regions, 250 points, and 16 angles such that the product of the number of angles and the number of points is less than 600

  18. A 222 energy bins response matrix for a {sup 6}LiI scintillator Bss system

    Energy Technology Data Exchange (ETDEWEB)

    Lacerda, M. A. S. [Centro de Desenvolvimento da Tecnologia Nuclear, Laboratorio de Calibracao de Dosimetros, Av. Pte. Antonio Carlos 6627, 31270-901 Pampulha, Belo Horizonte, Minas Gerais (Brazil); Vega C, H. R. [Universidad Autonoma de Zacatecas, Unidad Academica de Estudios Nucleares, Cipres No. 10, Fracc. La Penuela, 98068 Zacatecas, Zac. (Mexico); Mendez V, R. [Centro de Investigaciones Energeticas, Medioambientales y Tecnologicas, Laboratorio de Patrones Neutronicos, Av. Complutense 22, 28040 Madrid (Spain); Lorente F, A.; Ibanez F, S.; Gallego D, E., E-mail: masl@cdtn.br [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, 28006 Madrid (Spain)

    2016-10-15

A new response matrix was calculated for a Bonner Sphere Spectrometer (Bss) with a {sup 6}LiI(Eu) scintillator. We utilized the Monte Carlo N-Particle radiation transport code MCNPX, version 2.7.0, with the Endf/B-VII.0 nuclear data library to calculate the responses for 6 spheres and the bare detector, for energies varying from 9.441 E(-10) MeV to 105.9 MeV, with 20 equal-log(E)-width bins per energy decade, totaling 222 energy groups. A Bss like the one modeled in this work was utilized to measure the neutron spectrum generated by the {sup 241}AmBe source of the Universidad Politecnica de Madrid. From the count rates obtained with this Bss system we unfolded the neutron spectrum utilizing the BUNKIUT code for 31 energy bins (UTA-4 response matrix) and the MAXED code with the newly calculated response functions. We compared the spectra obtained with these Bss system / unfolding codes with that obtained from measurements performed with a Bss system consisting of 12 spheres with a spherical {sup 3}He Sp-9 counter (Centronic Ltd., UK) and the MAXED code with the system-specific response functions (Bss-CIEMAT). A relatively good agreement was observed between our response matrix and those calculated by other authors. In general, we observed an improvement in the agreement as the energy increases. However, higher discrepancies were observed for energies close to 1 E(-8) MeV and, mainly, for energies above 20 MeV. These discrepancies were mainly attributed to the differences in the cross-section libraries employed. The ambient dose equivalent (H*(10)) calculated with the {sup 6}LiI-MAXED showed a good agreement with values measured with the Berthold LB 6411 neutron area monitor and was within 12% of the value obtained with another Bss system (Bss-CIEMAT). The response matrix calculated in this work can be utilized together with the MAXED code to generate neutron spectra with a good energy resolution up to 20 MeV. Some additional tests are being done to validate this response matrix and improve the
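A grid with 20 equal-log(E)-width bins per decade spanning the quoted energy range can be generated in a few lines. This is only an illustration of how such a 222-point logarithmic grid is typically built, not the authors' actual group structure:

```python
import numpy as np

# 222 log-spaced energies from 9.441E(-10) MeV to 105.9 MeV.
# The log10 span is about 11.05 decades, so consecutive points
# differ by a factor close to 10**(1/20).
e_min, e_max = 9.441e-10, 105.9
energies = np.logspace(np.log10(e_min), np.log10(e_max), 222)
ratios = energies[1:] / energies[:-1]   # nearly constant, ~1.122
```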

  19. Parallel Computing Characteristics of CUPID code under MPI and Hybrid environment

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jae Ryong; Yoon, Han Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Jeon, Byoung Jin; Choi, Hyoung Gwon [Seoul National Univ. of Science and Technology, Seoul (Korea, Republic of)

    2014-05-15

In this paper, a characteristic of the parallel algorithm is presented for solving an elliptic-type equation of CUPID via the domain decomposition method using MPI, and the parallel performance is estimated in terms of scalability, which shows the speedup ratio. In addition, the time-consuming pattern of the major subroutines is studied. Two different grid systems are taken into account: 40,000 meshes for the coarse system and 320,000 meshes for the fine system. Since the matrix of the CUPID code differs according to whether the flow is single-phase or two-phase, the effect of the matrix shape is evaluated. The effect of the preconditioner for the matrix solver is also investigated. Finally, the hybrid (OpenMP+MPI) parallel algorithm is introduced and discussed in detail for the pressure solver. The component-scale thermal-hydraulics code CUPID has been developed for two-phase flow analysis; it adopts a three-dimensional, transient, three-field model and has been parallelized to fulfill a recent demand for long-transient and highly resolved multi-phase flow behavior. In this study, the parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with the domain decomposition method. The MPI library was adopted to communicate the information at the neighboring domain. For managing the sparse matrix effectively, the CSR storage format is used. To take into account the characteristics of the pressure matrix, which turns out to be asymmetric for two-phase flow, both single-phase and two-phase calculations were run. In addition, the effect of the matrix size and preconditioning was also investigated. The fine mesh calculation shows better scalability than the coarse mesh calculation because the coarse mesh does not have enough cells to warrant decomposing the computational domain so finely. The fine mesh can show good scalability when the geometry is divided with the ratio between computation and communication time taken into account. For a given mesh, single-phase flow
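The CSR (compressed sparse row) format mentioned in this record stores only the nonzero values, their column indices, and one row-pointer array. A minimal pure-NumPy sketch of the format and the matrix-vector product it enables (illustrative only, not CUPID's implementation):

```python
import numpy as np

def to_csr(A):
    """Compress a dense matrix into the three CSR arrays:
    nonzero values, their column indices, and row pointers."""
    data, indices, indptr = [], [], [0]
    for row in A:
        for j, v in enumerate(row):
            if v != 0:
                data.append(v)
                indices.append(j)
        indptr.append(len(data))          # running count of nonzeros
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A @ x touching only the stored nonzeros."""
    y = np.zeros(len(indptr) - 1)
    for i in range(len(y)):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y

A = np.array([[4., 0., 1.],
              [0., 3., 0.],
              [2., 0., 5.]])
x = np.array([1., 2., 3.])
data, indices, indptr = to_csr(A)
assert np.allclose(csr_matvec(data, indices, indptr, x), A @ x)
```

For the large, mostly empty pressure matrices of a decomposed domain, this storage makes both memory use and the matvec cost proportional to the number of nonzeros rather than to the full matrix size.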

  20. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  1. Encoders for block-circulant LDPC codes

    Science.gov (United States)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2009-01-01

    Methods and apparatus to encode message input symbols in accordance with an accumulate-repeat-accumulate code with repetition three or four are disclosed. Block circulant matrices are used. A first method and apparatus make use of the block-circulant structure of the parity check matrix. A second method and apparatus use block-circulant generator matrices.
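A block-circulant parity-check or generator matrix is fully determined by the first row of each circulant block, which is what makes such encoders hardware-friendly. A small sketch of the construction (block sizes and contents are arbitrary examples, not taken from the patent):

```python
import numpy as np

def circulant(first_row):
    """Circulant block: each row is the cyclic right-shift of the row above."""
    n = len(first_row)
    return np.array([np.roll(first_row, k) for k in range(n)])

def block_circulant(first_rows):
    """Assemble a block-circulant matrix from a grid of first rows,
    one per circulant block."""
    return np.block([[circulant(r) for r in block_row]
                     for block_row in first_rows])

# A 2 x 2 grid of 3 x 3 circulant blocks, defined by four first rows only
H = block_circulant([[[1, 0, 1], [0, 1, 0]],
                     [[1, 1, 0], [1, 0, 0]]])
```

Only the first rows need to be stored; every other row of a block is a cyclic shift of the one above, so multiplication by such a matrix can be realized with shift registers.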

  2. GROGI-F. Modified version of GROGI-2 nuclear evaporation computer code including fission decay channel

    International Nuclear Information System (INIS)

    Delagrange, H.

    1977-01-01

This report is the user manual of the GROGI-F code, a modified version of the GROGI-2 code. It calculates the cross sections for heavy-ion-induced fission. Fission probabilities are calculated via the Bohr-Wheeler formalism

  3. ARKAS: A three-dimensional finite element code for the analysis of core distortions and mechanical behaviour

    International Nuclear Information System (INIS)

    Nakagawa, M.

    1984-01-01

The computer program ARKAS has been developed for the purpose of predicting core distortions and mechanical behaviour in a cluster of subassemblies under steady-state conditions in LMFBR cores. This report describes the analytical models and numerical procedures employed in the code, together with some typical results of the analysis made on large LMFBR cores. ARKAS is programmed in FORTRAN-IV and is capable of treating up to 260 assemblies in a cluster with flexible boundary conditions, including mirror and rotational symmetry. The nonlinearity of the problem due to contact and separation is solved by a step-iterative procedure based on the Newton-Raphson method. In each iteration, the linear matrix equation must be reconstructed and then solved directly. To save computer time and memory, the substructure method is adopted in the step of reconstructing the linear matrix equation, and the block successive over-relaxation method is adopted in the step of solving it. The program ARKAS computes, at every time step, the three-dimensional displacements and rotations of the subassemblies in the core and the interduct forces, including those at the nozzle tips and nozzle bases, with friction effects. The code is also able to deal with the refueling and shuffling of subassemblies and to calculate the withdrawal forces. For the qualitative validation of the code, sample calculations were performed on several bundle arrays. In these calculations, contact and separation processes under the influence of friction forces, off-center loading, duct rotation and torsion, thermal expansion, and irradiation-induced swelling and creep were analyzed. The results are quite reasonable in the light of the expected behaviour. This work was performed under the sponsorship of Toshiba Corporation

  4. PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices

    International Nuclear Information System (INIS)

    Dunn, M.E.

    2000-01-01

    PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI
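The eigenvalue screen that PUFF-III applies to each correlation matrix can be illustrated in a few lines of NumPy. This is a generic positive-semidefiniteness check of the standard kind, not PUFF-III's actual routine, and the function name is invented:

```python
import numpy as np

def screen_correlation_matrix(C, tol=1e-10):
    """Generic screen: symmetric, unit diagonal, and positive
    semidefinite (all eigenvalues >= 0 within a tolerance)."""
    symmetric = bool(np.allclose(C, C.T))
    unit_diag = bool(np.allclose(np.diag(C), 1.0))
    eigvals = np.linalg.eigvalsh(C)        # eigenvalues of a symmetric matrix
    return symmetric and unit_diag and bool(eigvals.min() >= -tol)

good = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
bad = np.array([[1.0, 1.2],
                [1.2, 1.0]])   # "correlation" above 1: eigenvalues 2.2 and -0.2
assert screen_correlation_matrix(good)
assert not screen_correlation_matrix(bad)
```

A correlation matrix that fails this test cannot have come from a valid covariance evaluation, which is why such a screen is useful after multigroup processing.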

  5. Parallel Computing Characteristics of Two-Phase Thermal-Hydraulics code, CUPID

    International Nuclear Information System (INIS)

    Lee, Jae Ryong; Yoon, Han Young

    2013-01-01

The parallelized CUPID code has proved able to reproduce multi-dimensional thermal-hydraulic analyses, having been validated against various conceptual problems and experimental data. In this paper, the characteristics of the parallelized CUPID code were investigated. Both single- and two-phase simulations are taken into account. Since the scalability of a parallel simulation is known to be better for a fine mesh system, two types of mesh system are considered. In addition, the dependency on the preconditioner for the matrix solver was also compared. The scalability for single-phase flow is better than that for two-phase flow due to the smaller number of iterations required for solving the pressure matrix. The parallel performance of the CUPID code was investigated in terms of scalability. The CUPID code was parallelized with the domain decomposition method. The MPI library was adopted to communicate the information at the interface cells. As the number of meshes increases, the scalability improves. For a given mesh, the single-phase flow simulation with the diagonal preconditioner shows the best speedup. However, for the two-phase flow simulation, the ILU preconditioner is recommended since it reduces the overall simulation time

  6. Computer access security code system

    Science.gov (United States)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by first groups of code. Once used, subsets are not used again to absolutely defeat unauthorized access by eavesdropping, and the like.
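The rectangle-completion scheme can be sketched as a toy challenge-response protocol. This is a simplified model of the patented system; all function names are invented for illustration:

```python
import random
import string

def make_matrix(rows=4, cols=4, seed=0):
    """Shared secret: a matrix of distinct two-character alpha-numeric subsets."""
    rng = random.Random(seed)
    chars = string.ascii_uppercase + string.digits
    pool = [a + b for a in chars for b in chars]
    cells = rng.sample(pool, rows * cols)   # sampling without replacement keeps them distinct
    return [cells[i * cols:(i + 1) * cols] for i in range(rows)]

def make_challenge(matrix, rng):
    """Server: pick two cells in different rows AND different columns and
    transmit their character subsets; the expected reply is the pair of
    subsets at the opposite corners of the implied rectangle."""
    r1, r2 = rng.sample(range(len(matrix)), 2)
    c1, c2 = rng.sample(range(len(matrix[0])), 2)
    sent = (matrix[r1][c1], matrix[r2][c2])
    expected = {matrix[r1][c2], matrix[r2][c1]}
    return sent, expected

def respond(matrix, sent):
    """Client: locate the transmitted subsets in the shared matrix and
    reply with the two subsets completing the rectangle."""
    pos = {matrix[i][j]: (i, j)
           for i in range(len(matrix)) for j in range(len(matrix[0]))}
    (r1, c1), (r2, c2) = pos[sent[0]], pos[sent[1]]
    return {matrix[r1][c2], matrix[r2][c1]}

secret = make_matrix(seed=1)
sent, expected = make_challenge(secret, random.Random(2))
assert respond(secret, sent) == expected   # access would be granted
```

Because each pair of subsets is used only once, an eavesdropper who records one exchange learns only a single rectangle and cannot replay it, which is the point of the scheme.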

  7. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra

    2013-01-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  8. Low-Rank Sparse Coding for Image Classification

    KAUST Repository

    Zhang, Tianzhu

    2013-12-01

    In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.

  9. A code for structural analysis of fuel assemblies

    International Nuclear Information System (INIS)

    Hayashi, I.M.V.; Perrotta, J.A.

    1988-08-01

The ELCOM code for the matrix analysis of tubular structures coupled by rigid spacers, typical of PWR fuel elements, is presented. The ELCOM code performs a static structural analysis, in which the displacements and internal forces are obtained for each tubular structure at the joints with the spacers; the natural frequencies and vibrational modes of an equivalent integrated structure are also obtained. The ELCOM result is compared to a PWR fuel element structural analysis obtained in a published paper. (author) [pt

  10. TWO-DIMENSIONAL STELLAR EVOLUTION CODE INCLUDING ARBITRARY MAGNETIC FIELDS. II. PRECISION IMPROVEMENT AND INCLUSION OF TURBULENCE AND ROTATION

    International Nuclear Information System (INIS)

    Li Linghuai; Sofia, Sabatino; Basu, Sarbani; Demarque, Pierre; Ventura, Paolo; Penza, Valentina; Bi Shaolan

    2009-01-01

In the second paper of this series we pursue two objectives. First, in order to make the code more sensitive to small effects, we remove many approximations made in Paper I. Second, we include turbulence and rotation in the two-dimensional framework. The stellar equilibrium is described by means of a set of five differential equations, with the introduction of a new dependent variable, namely the perturbation to the radial gravity, that is found when the nonradial effects are considered in the solution of the Poisson equation. Following the scheme of the first paper, we write the equations in such a way that the two-dimensional effects can be easily disentangled. The key concept introduced in this series is the equipotential surface. We use the underlying cause-effect relation to develop a recurrence relation to calculate the equipotential surface functions for uniform rotation, differential rotation, rotation-like toroidal magnetic fields, and turbulence. We also develop a more precise code to numerically solve the two-dimensional stellar structure and evolution equations based on the equipotential surface calculations. We have shown that with this formulation we can achieve the precision required by observations by appropriately selecting the convergence criterion. Several examples are presented to show that the method works well. Since we are interested in modeling the effects of a dynamo-type field on the detailed envelope structure and global properties of the Sun, the code has been optimized for short-timescale phenomena (down to 1 yr). The time dependence of the code has so far been tested exclusively to address such problems.

  11. Assessment of ICARE/CATHARE V1 Severe Accident Code

    International Nuclear Information System (INIS)

    Chatelard, Patrick; Fleurot, Joelle; Marchand, Olivier; Drai, Patrick

    2006-01-01

The ICARE/CATHARE code system has been developed by the French 'Institut de Radioprotection et de Surete Nucleaire' (IRSN) over the last decade for the detailed evaluation of Severe Accident (SA) consequences in a primary system. It couples the IRSN core degradation code ICARE2 with the French thermal-hydraulics code CATHARE2. It has been extensively used to support the level 2 Probabilistic Safety Assessment (PSA-2) of the 900 MWe PWR. This paper presents a synthesis of the ICARE/CATHARE V1 assessment, which was conducted in the frame of the 'International ICARE/CATHARE Users' Club', under the management of IRSN. The ICARE/CATHARE V1 validation matrix is composed of more than 60 experiments, distributed among a few thermal-hydraulics non-regression tests (to handle the front-end phase of a severe accident), numerous Separate-Effect Tests, about 30 Integral Tests covering both the early and the late degradation phases, and a 'circuit' experiment including hydraulic loops. Finally, a simulation of the TMI-2 accident was also added to assess the code against real conditions. This validation task was aimed at assessing the capabilities of ICARE/CATHARE V1 (including the stand-alone ICARE2 V3mod1 version) and at proposing recommendations for an optimal use of this version ('Users' Guidelines'). Thus, with a correct account of the recommended guidelines, the latest ICARE/CATHARE V1 version can reasonably be used to perform best-estimate reactor studies up to a large corium slumping into the lower head. (authors)

  12. Acceleration of criticality analysis solution convergence by matrix eigenvector for a system with weak neutron interaction

    Energy Technology Data Exchange (ETDEWEB)

    Nomura, Yasushi; Takada, Tomoyuki; Kuroishi, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Kadotani, Hiroyuki [Shizuoka Sangyo Univ., Iwata, Shizuoka (Japan)

    2003-03-01

In Monte Carlo calculations of the neutron multiplication factor for a system with weak neutron interaction, there can be problems concerning convergence of the solution. To address this difficulty in computer code calculations, a theoretical derivation was made from the general neutron transport equation, and acceleration of solution convergence by using the matrix eigenvector is considered in this report. Accordingly, a matrix eigenvector calculation scheme, together with a procedure to accelerate convergence, was incorporated into the continuous-energy Monte Carlo code MCNP. Furthermore, the effectiveness of accelerating solution convergence by the matrix eigenvector was ascertained with the results obtained by applying the method to two OECD/NEA criticality analysis benchmark problems. (author)
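The core idea, extracting the dominant eigenvector of a fission-type matrix to seed or accelerate the source iteration, can be illustrated with plain power iteration. This is a standalone toy (invented matrix and function name), not the scheme as incorporated into MCNP:

```python
import numpy as np

def dominant_eigvec(M, iters=1000, tol=1e-12):
    """Power iteration: returns the dominant eigenvector (normalized to
    unit sum, i.e. a source distribution) and the corresponding
    eigenvalue, which plays the role of the multiplication factor k."""
    v = np.ones(M.shape[0]) / M.shape[0]
    for _ in range(iters):
        w = M @ v
        w /= w.sum()                    # renormalize the source each generation
        if np.abs(w - v).max() < tol:
            break
        v = w
    k = (M @ v).sum() / v.sum()         # eigenvalue estimate
    return v, k

# Toy 2-region "fission matrix" with weak coupling between the regions
M = np.array([[0.95, 0.02],
              [0.02, 0.90]])
v, k = dominant_eigvec(M)               # k approaches the dominant eigenvalue
```

For weakly coupled regions the two largest eigenvalues are close, which is exactly why plain source iteration converges slowly and why starting from (or correcting with) the matrix eigenvector helps.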

  13. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  14. A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance

    Science.gov (United States)

    Bell, E. V.; Henry, A.; Pivo, G.

    2017-12-01

    What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policy may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective, affiliated respondents and generating a belief distance matrix of coordinating network participants. Similarly, we generate a distance matrix of these actors based on the frequency of their

  15. Discourse Matrix in Filipino-English Code-Switching: Students' Attitudes and Feelings

    Science.gov (United States)

    dela Rosa, Rona

    2016-01-01

    Undeniably, one language may be considered more valuable than other languages. Hence, most bilingual communities suffer from language imbalances. The present study attempts to identify the factors of code-switching during classroom presentations. Its functions were identified through analysing conversational contexts in which it occurs. Through…

  16. Sub-quadratic decoding of one-point Hermitian codes

    DEFF Research Database (Denmark)

    Nielsen, Johan Sebastian Rosenkilde; Beelen, Peter

    2015-01-01

We present the first two sub-quadratic complexity decoding algorithms for one-point Hermitian codes. The first is based on a fast realization of the Guruswami-Sudan algorithm using state-of-the-art algorithms from computer algebra for polynomial-ring matrix minimization. The second is a power decoding algorithm: an extension of classical key equation decoding which gives a probabilistic decoding algorithm up to the Sudan radius. We show how the resulting key equations can be solved by the matrix minimization algorithms from computer algebra, yielding similar asymptotic complexities.

  17. Extending the range of real time density matrix renormalization group simulations

    Science.gov (United States)

    Kennes, D. M.; Karrasch, C.

    2016-03-01

We discuss a few simple modifications to time-dependent density matrix renormalization group (DMRG) algorithms which allow one to access larger time scales. We specifically aim at beginners and present practical aspects of how to implement these modifications within any standard matrix product state (MPS) based formulation of the method. Most importantly, we show how to 'combine' the Schrödinger and Heisenberg time evolutions of arbitrary pure states |ψ〉 and operators A in the evaluation of 〈A〉ψ(t) = 〈ψ|A(t)|ψ〉. This includes quantum quenches. The generalization to (non-)thermal mixed state dynamics 〈A〉ρ(t) = Tr[ρA(t)] induced by an initial density matrix ρ is straightforward. In the context of linear response (ground state or finite temperature T > 0) correlation functions, one can extend the simulation time by a factor of two by 'exploiting time translation invariance', which is efficiently implementable within MPS DMRG. We present a simple analytic argument for why a recently introduced disentangler succeeds in reducing the effort of time-dependent simulations at T > 0. Finally, we advocate the Python programming language as an elegant option for beginners to set up a DMRG code.
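The 'combined' Schrödinger/Heisenberg trick rests on the identity 〈ψ|A(t)|ψ〉 = 〈ψ(t/2)|A(t/2)|ψ(t/2)〉 (split exp(iHt) = exp(iHt/2)·exp(iHt/2)), so the state and the operator each need to be evolved only half as far. With exact dense matrices (no MPS machinery; purely an illustration of the identity) this is easy to verify:

```python
import numpy as np

def U(H, t):
    """exp(-i H t) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

rng = np.random.default_rng(0)
n = 6
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2                    # Hermitian toy "Hamiltonian"
A = np.diag(np.arange(n, dtype=float))      # some observable
psi = np.zeros(n, complex)
psi[0] = 1.0
t = 3.0

# Standard evaluation: evolve the operator over the full time t
full = (psi.conj() @ (U(H, t).conj().T @ A @ U(H, t)) @ psi).real

# Combined evaluation: state and operator each evolved only to t/2
phi = U(H, t / 2) @ psi
A_half = U(H, t / 2).conj().T @ A @ U(H, t / 2)
split = (phi.conj() @ A_half @ phi).real

assert abs(split - full) < 1e-10
```

In an MPS simulation the payoff is that the entanglement (and hence the bond dimension) only has to be supported up to time t/2 on each side.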

  18. Nodal coupling by response matrix principles

    International Nuclear Information System (INIS)

    Ancona, A.; Becker, M.; Beg, M.D.; Harris, D.R.; Menezes, A.D.; VerPlanck, D.M.; Pilat, E.

    1977-01-01

The response matrix approach has been used in viewing a reactor node in isolation and in characterizing the node by reflection and transmission factors. These are then used to generate invariant imbedding parameters, which in turn are used in a nodal reactor simulator code to compute core power distributions in two and three dimensions. Various nodal techniques are analyzed and converted into a single invariant imbedding formalism

  19. Blind Recognition of Binary BCH Codes for Cognitive Radios

    Directory of Open Access Journals (Sweden)

    Jing Zhou

    2016-01-01

Full Text Available A novel algorithm for blind recognition of Bose-Chaudhuri-Hocquenghem (BCH) codes is proposed to solve the problem of Adaptive Coding and Modulation (ACM) in cognitive radio systems. The recognition algorithm is based on soft-decision situations. The code length is first estimated by comparing the Log-Likelihood Ratios (LLRs) of the syndromes, which are obtained according to the minimum binary parity check matrices of different primitive polynomials. After that, by comparing the LLRs of different minimal polynomials, the code roots and generator polynomial are reconstructed. Compared with some previous approaches, our algorithm yields better performance even at very low Signal-to-Noise Ratios (SNRs), with lower calculation complexity. Simulation results show the efficiency of the proposed algorithm.

  20. Thermal-hydraulic analysis of PWR core including intermediate flow mixers with the THYC code

    International Nuclear Information System (INIS)

    Mur, J.; Meignin, J.C.

    1997-07-01

    Departure from nucleate boiling (DNB) is one of the major limiting factors of pressurized water reactors (PWRs). Safety requires that occurrence of DNB should be precluded under normal or incidental operating conditions. The thermal-hydraulic THYC code developed by EDF is described. The code is devoted to heat and mass transfer in nuclear components. Critical Heat Flux (CHF) is predicted from local thermal-hydraulic parameters such as pressure, mass flow rate, and quality. A three stage methodology to evaluate thermal margins in order to perform standard core design is described. (K.A.)

  1. Thermal-hydraulic analysis of PWR core including intermediate flow mixers with the THYC code

    Energy Technology Data Exchange (ETDEWEB)

    Mur, J. [Electricite de France (EDF), 78 - Chatou (France); Meignin, J.C. [Electricite de France (EDF), 69 - Villeurbanne (France)

    1997-07-01

    Departure from nucleate boiling (DNB) is one of the major limiting factors of pressurized water reactors (PWRs). Safety requires that occurrence of DNB should be precluded under normal or incidental operating conditions. The thermal-hydraulic THYC code developed by EDF is described. The code is devoted to heat and mass transfer in nuclear components. Critical Heat Flux (CHF) is predicted from local thermal-hydraulic parameters such as pressure, mass flow rate, and quality. A three stage methodology to evaluate thermal margins in order to perform standard core design is described. (K.A.) 8 refs.

  2. Updated User's Guide for Sammy: Multilevel R-Matrix Fits to Neutron Data Using Bayes' Equations

    Energy Technology Data Exchange (ETDEWEB)

    Larson, Nancy M [ORNL

    2008-10-01

In 1980 the multilevel multichannel R-matrix code SAMMY was released for use in analysis of neutron-induced cross section data at the Oak Ridge Electron Linear Accelerator. Since that time, SAMMY has evolved to the point where it is now in use around the world for analysis of many different types of data. SAMMY is not limited to incident neutrons but can also be used for incident protons, alpha particles, or other charged particles; likewise, Coulomb exit channels can be included. Corrections for a wide variety of experimental conditions are available in the code: Doppler and resolution broadening, multiple-scattering corrections for capture or reaction yields, normalizations and backgrounds, to name but a few. The fitting procedure is Bayes' method, and data and parameter covariance matrices are properly treated within the code. Pre- and post-processing capabilities are also available, including (but not limited to) connections with the Evaluated Nuclear Data Files. Though originally designed for use in the resolved resonance region, SAMMY also includes a treatment for data analysis in the unresolved resonance region.

  3. Insights into the key roles of epigenetics in matrix macromolecules-associated wound healing.

    Science.gov (United States)

    Piperigkou, Zoi; Götte, Martin; Theocharis, Achilleas D; Karamanos, Nikos K

    2017-10-24

    Extracellular matrix (ECM) is a dynamic network of macromolecules, playing a regulatory role in cell functions, tissue regeneration and remodeling. Wound healing is a tissue repair process necessary for the maintenance of the functionality of tissues and organs. This highly orchestrated process is divided into four temporally overlapping phases, including hemostasis, inflammation, proliferation and tissue remodeling. The dynamic interplay between ECM and resident cells exerts its critical role in many aspects of wound healing, including cell proliferation, migration, differentiation, survival, matrix degradation and biosynthesis. Several epigenetic regulatory factors, such as the endogenous non-coding microRNAs (miRNAs), are the drivers of the wound healing response. microRNAs have pivotal roles in regulating ECM composition during wound healing and dermal regeneration. Their expression is associated with the distinct phases of wound healing, and they serve both as biomarkers and as targets for the systematic regulation of wound repair. In this article we critically present the importance of epigenetics with particular emphasis on miRNAs regulating ECM components (i.e. glycoproteins, proteoglycans and matrix proteases) that are key players in wound healing. The clinical relevance of miRNA targeting as well as the delivery strategies designed for clinical applications are also presented and discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Implementation of a digital optical matrix-vector multiplier using a holographic look-up table and residue arithmetic

    Science.gov (United States)

    Habiby, Sarry F.

    1987-01-01

    The design and implementation of a digital (numerical) optical matrix-vector multiplier are presented. The objective is to demonstrate the operation of an optical processor designed to minimize computation time in performing a practical computing application. This is done by using the large array of processing elements in a Hughes liquid crystal light valve, and relying on the residue arithmetic representation, a holographic optical memory, and position coded optical look-up tables. In the design, all operations are performed in effectively one light valve response time regardless of matrix size. The features of the design allowing fast computation include the residue arithmetic representation, the mapping approach to computation, and the holographic memory. In addition, other features of the work include a practical light valve configuration for efficient polarization control, a model for recording multiple exposures in silver halides with equal reconstruction efficiency, and using light from an optical fiber for a reference beam source in constructing the hologram. The design can be extended to implement larger matrix arrays without increasing computation time.
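
    The residue-arithmetic mapping that underlies this design can be sketched in software. The moduli and the small integer matrix below are illustrative assumptions, not values from the paper; the point is that each residue channel is independent, which is what lets the optical processor evaluate all channels in a single light-valve response time.

```python
# Sketch of residue-arithmetic matrix-vector multiplication.
# The moduli and test data are illustrative, not the paper's hardware mapping.
from math import prod

MODULI = (5, 7, 9, 11, 13)  # pairwise coprime; dynamic range = 45045

def to_residues(x):
    """Represent an integer by its residues modulo each modulus."""
    return tuple(x % m for m in MODULI)

def from_residues(r):
    """Chinese Remainder Theorem reconstruction of the integer result."""
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M

def matvec_residue(A, v):
    """Matrix-vector product computed independently in each residue channel."""
    out = []
    for row in A:
        res = tuple(sum((a % m) * (x % m) for a, x in zip(row, v)) % m
                    for m in MODULI)
        out.append(from_residues(res))
    return out
```

    Because no carries propagate between channels, each channel is a small table lookup in hardware, which is the role played here by the holographic look-up tables.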

  5. CONIFERS: a neutronics code for reactors with channels

    International Nuclear Information System (INIS)

    Davis, R.S.

    1977-04-01

    CONIFERS is a neutronics code for nuclear reactors whose fuel is in channels that are separated from each other by several neutron mean-free-path lengths of moderator. It can treat accurately situations in which the usual homogenized-cell diffusion equation becomes inaccurate, but is more economical than other advanced methods such as response-matrix and source-sink formalisms. CONIFERS uses exact solutions of the neutron diffusion equation within each cell. It allows for the breakdown of this equation near a channel by means of data that almost any cell code can supply. It uses the results of these cell analyses in a reactor equations set that is as readily solvable as the familiar finite-difference equations set. CONIFERS can model almost any configuration of channels and other structures in two or three dimensions. It can use any number of energy groups and any reactivity scales, including scales based on control operations. It is also flexible from a programming point of view, and has convenient input and output provisions. (author)

  6. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  7. Delay Estimation in Long-Code Asynchronous DS/CDMA Systems Using Multiple Antennas

    Directory of Open Access Journals (Sweden)

    Sirbu Marius

    2004-01-01

    Full Text Available The problem of propagation delay estimation in asynchronous long-code DS-CDMA multiuser systems is addressed. Almost all the methods proposed so far in the literature for propagation delay estimation are derived for short codes, and the knowledge of the codes is exploited by the estimators. In long-code CDMA, the spreading code is aperiodic, and the methods developed for short codes may not be used or may increase the complexity significantly. For example, in the subspace-based estimators, the aperiodic nature of the code may require subspace tracking. In this paper we propose a novel method for simultaneous estimation of the propagation delays of several active users. A specific multiple-input multiple-output (MIMO) system model is constructed in a multiuser scenario. In such a model the channel matrix contains information about both the users' propagation delays and the channel impulse responses. Consequently, estimates of the delays are obtained as a by-product of the channel estimation task. The channel matrix has a special structure that is exploited in estimating the delays. The proposed delay estimation method lends itself to an adaptive implementation. Thus, it may be applied to joint channel and delay estimation in uplink DS-CDMA, analogously to the method presented by the authors in 2003. The performance of the proposed method is studied in simulation using a realistic time-varying channel model and different SNR levels in the face of near-far effects, and using a low spreading factor (high data rates).

  8. Design of a VLSI Decoder for Partially Structured LDPC Codes

    Directory of Open Access Journals (Sweden)

    Fabrizio Vacca

    2008-01-01

    of their parity matrix can be partitioned into two disjoint sets, namely, the structured and the random ones. For the proposed class of codes a constructive design method is provided. To assess the value of this method, the performance of the constructed codes is presented. From these results, a novel decoding method called split decoding is introduced. Finally, to prove the effectiveness of the proposed approach, a whole VLSI decoder is designed and characterized.

  9. Stochastic geometry in PRIZMA code

    International Nuclear Information System (INIS)

    Malyshkin, G. N.; Kashaeva, E. A.; Mukhamadiev, R. F.

    2007-01-01

    The paper describes a method used to simulate radiation transport through random media - randomly placed grains in a matrix material. The method models the medium sequentially, from one grain crossed by the particle trajectory to the next. Like in the Limited Chord Length Sampling (LCLS) method, particles in grains are tracked in the actual grain geometry, but unlike LCLS, the medium is modeled using only Matrix Chord Length Sampling (MCLS) from the exponential distribution, and it is not necessary to know the grain chord length distribution. This helped us extend the method to media with randomly oriented, arbitrarily shaped convex grains. Other extensions include multicomponent media - grains of several sorts - and polydisperse media - grains of different sizes. Sort and size distributions of crossed grains were obtained, and an algorithm was developed for sampling grain orientations and positions. Special consideration was given to medium modeling at the boundary of the stochastic region. The method was implemented in the universal 3D Monte Carlo code PRIZMA. The paper provides calculated results for a model problem where we determine volume fractions of modeled components crossed by particle trajectories. It also demonstrates the use of biased sampling techniques implemented in PRIZMA for solving a problem of deep penetration in model random media. Described are calculations for the spectral response of a capacitor dose detector whose anode was modeled taking account of its stochastic structure. (authors)
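
    The Matrix Chord Length Sampling step can be illustrated with a toy sketch: the matrix path length to the next grain is drawn from an exponential distribution, so the expected number of grain crossings follows without any grain chord length distribution. The interception rate and track length below are invented values, and the tracking inside each grain (which PRIZMA performs in the actual grain geometry) is omitted.

```python
# Toy sketch of Matrix Chord Length Sampling (MCLS): the matrix chord to the
# next grain is exponentially distributed.  SIGMA_G is an assumed toy value,
# and in-grain tracking is deliberately left out of this illustration.
import random

SIGMA_G = 0.25  # assumed mean number of grain entries per unit path length

def grains_crossed(track_length, rng):
    """Count grain entries along a straight track of the given length."""
    s, n = 0.0, 0
    while True:
        s += rng.expovariate(SIGMA_G)  # sample matrix chord to next grain
        if s > track_length:
            return n
        n += 1

rng = random.Random(42)
mean = sum(grains_crossed(100.0, rng) for _ in range(2000)) / 2000.0
# The sample mean should sit near SIGMA_G * track_length = 25 crossings.
```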

  10. Development of M3C code for Monte Carlo reactor physics criticality calculations

    International Nuclear Information System (INIS)

    Kumar, Anek; Kannan, Umasankari; Krishanani, P.D.

    2015-06-01

    The development of the Monte Carlo code M3C for reactor design entails use of continuous energy nuclear data and Monte Carlo simulations for each of the neutron interaction processes. BARC has started a concentrated effort for developing a new general geometry continuous energy Monte Carlo code for reactor physics calculation indigenously. The code development required a comprehensive understanding of the basic continuous energy cross section sets. The important features of this code are treatment of heterogeneous lattices by general geometry, use of point cross sections along with a unionized energy grid approach, a thermal scattering model for low-energy treatment, and the capability of handling microscopic fuel particles dispersed randomly. This last capability is very useful for modeling High-Temperature Gas-Cooled reactor fuels, which are composed of thousands of microscopic fuel particles (TRISO fuel particles) randomly dispersed in a graphite matrix. The Monte Carlo code for criticality calculation is a pioneering effort and has been used to study several types of lattices including cluster geometries. The code has been verified for its accuracy against more than 60 sample problems covering a wide range from simple (e.g. spherical) to complex geometries (e.g. the PHWR lattice). Benchmark results show that the code performs quite well for the criticality calculation of the system. In this report, the current status of the code, features of the code, some benchmark results for the testing of the code, input preparation, etc. are discussed. (author)

  11. Fortran code for generating random probability vectors, unitaries, and quantum states

    Directory of Open Access Journals (Sweden)

    Jonas eMaziero

    2016-03-01

    Full Text Available The usefulness of generating random configurations is recognized in many areas of knowledge. Fortran was born for scientific computing and has been one of the main programming languages in this area ever since, and several ongoing projects targeting its betterment indicate that it will keep this status in the decades to come. In this article, we describe Fortran codes produced, or organized, for the generation of the following random objects: numbers, probability vectors, unitary matrices, and quantum state vectors and density matrices. Some matrix functions are also included and may be of independent interest.
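
    Two of the generators described can be sketched as follows - in Python rather than the article's Fortran, and not the authors' actual routines: a probability vector uniform on the simplex from normalized exponential samples, and a random unitary from Gram-Schmidt orthonormalization of complex Gaussian columns (QR of a Ginibre matrix, Haar-distributed up to column phases).

```python
# Illustrative Python sketches of two of the random objects discussed
# (the article's implementations are in Fortran and differ in detail).
import random

def rand_prob_vector(d, rng):
    """Uniform point on the probability simplex: normalized exponentials."""
    x = [rng.expovariate(1.0) for _ in range(d)]
    s = sum(x)
    return [xi / s for xi in x]

def rand_unitary(d, rng):
    """Random unitary via Gram-Schmidt on complex Gaussian columns."""
    cols = []
    for _ in range(d):
        v = [complex(rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)) for _ in range(d)]
        for u in cols:  # remove components along previous columns
            c = sum(ui.conjugate() * vi for ui, vi in zip(u, v))
            v = [vi - c * ui for vi, ui in zip(v, u)]
        norm = sum(abs(vi) ** 2 for vi in v) ** 0.5
        cols.append([vi / norm for vi in v])
    # Assemble the matrix whose j-th column is cols[j].
    return [[cols[j][i] for j in range(d)] for i in range(d)]
```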

  12. Generic programming for deterministic neutron transport codes

    International Nuclear Information System (INIS)

    Plagne, L.; Poncot, A.

    2005-01-01

    This paper discusses the implementation of neutron transport codes via generic programming techniques. Two different Boltzmann equation approximations have been implemented, namely the Sn and SPn methods. This implementation experiment shows that generic programming allows us to improve maintainability and readability of source codes with no performance penalties compared to classical approaches. In the present implementation, matrices and vectors as well as linear algebra algorithms are treated separately from the rest of source code and gathered in a tool library called 'Generic Linear Algebra Solver System' (GLASS). Such a code architecture, based on a linear algebra library, allows us to separate the three different scientific fields involved in transport codes design: numerical analysis, reactor physics and computer science. Our library handles matrices with optional storage policies and thus applies both to Sn code, where the matrix elements are computed on the fly, and to SPn code where stored matrices are used. Thus, using GLASS allows us to share a large fraction of source code between Sn and SPn implementations. Moreover, the GLASS high level of abstraction allows the writing of numerical algorithms in a form which is very close to their textbook descriptions. Hence the GLASS algorithms collection, disconnected from computer science considerations (e.g. storage policy), is very easy to read, to maintain and to extend. (authors)

  13. A novel neutron energy spectrum unfolding code using particle swarm optimization

    International Nuclear Information System (INIS)

    Shahabinejad, H.; Sohrabpour, M.

    2017-01-01

    A novel neutron Spectrum Deconvolution using Particle Swarm Optimization (SDPSO) code has been developed to unfold the neutron spectrum from a pulse height distribution and a response matrix. The Particle Swarm Optimization (PSO) algorithm imitates the social behavior of bird flocks to solve complex optimization problems. The results of the SDPSO code have been compared with those of the standard spectra and of the recently published Two-steps Genetic Algorithm Spectrum Unfolding (TGASU) code. The TGASU code had previously been compared with other codes such as MAXED, GRAVEL, FERDOR and GAMCD and shown to be more accurate. The results of the SDPSO code match well with those of the TGASU code for both under-determined and over-determined problems, and the SDPSO has been shown to be nearly two times faster than the TGASU code. - Highlights: • Introducing a novel method for neutron spectrum unfolding. • Implementation of a particle swarm optimization code for neutron unfolding. • Comparing results of the PSO code with those of the recently published TGASU code. • Results of the PSO code match those of the TGASU code. • Greater convergence rate of the implemented PSO code than the TGASU code.
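
    A minimal particle swarm optimizer applied to a toy unfolding problem gives the flavor of the approach: minimize ||R x − y|| over candidate spectra x. The 3×3 "response matrix" and spectrum below are invented for illustration; this is not the SDPSO code itself.

```python
# Minimal PSO applied to a toy spectrum-unfolding least-squares problem.
# The response matrix R and true spectrum are invented illustrative numbers.
import random

def pso(cost, dim, n_particles=30, iters=200, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(0.0, 2.0) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                      # personal bests
    pcost = [cost(x) for x in X]
    g = min(range(n_particles), key=lambda i: pcost[i])
    G, gcost = P[g][:], pcost[g]               # global best
    w, c1, c2 = 0.7, 1.5, 1.5                  # common PSO coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            c = cost(X[i])
            if c < pcost[i]:
                P[i], pcost[i] = X[i][:], c
                if c < gcost:
                    G, gcost = X[i][:], c
    return G, gcost

R = [[0.8, 0.2, 0.0], [0.1, 0.7, 0.2], [0.0, 0.2, 0.8]]
x_true = [1.0, 0.5, 0.2]
y = [sum(r * x for r, x in zip(row, x_true)) for row in R]
cost = lambda x: sum((sum(r * xi for r, xi in zip(row, x)) - yi) ** 2
                     for row, yi in zip(R, y))
x_est, err = pso(cost, 3)
```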

  14. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
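
    As a taste of the basics such a text covers, here is a minimal rate-1/2 encoder for the classic (7,5)-in-octal convolutional code (constraint length 3), the standard introductory example; the sketch below is illustrative and not taken from the book.

```python
# Rate-1/2 convolutional encoder for the (7,5) octal code, constraint length 3.
# Each input bit is shifted into a 3-bit register; the two output bits are the
# parities of the register masked by the generator polynomials g1=111, g2=101.
def conv_encode(bits, g1=0b111, g2=0b101):
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111      # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # parity under g1
        out.append(bin(state & g2).count("1") % 2)  # parity under g2
    return out
```

    For the input 1011 this produces the coded sequence 11 10 00 01, the usual textbook worked example.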

  15. A new Eulerian-Lagrangian finite element simulator for solute transport in discrete fracture-matrix systems

    Energy Technology Data Exchange (ETDEWEB)

    Birkholzer, J.; Karasaki, K. [Lawrence Berkeley National Lab., CA (United States). Earth Sciences Div.

    1996-07-01

    Fracture network simulators have been used extensively in the past for obtaining a better understanding of flow and transport processes in fractured rock. However, most of these models do not account for fluid or solute exchange between the fractures and the porous matrix, although diffusion into the matrix pores can have a major impact on the spreading of contaminants. In the present paper a new finite element code TRIPOLY is introduced which combines a powerful fracture network simulator with an efficient method to account for the diffusive interaction between the fractures and the adjacent matrix blocks. The fracture network simulator used in TRIPOLY features a mixed Lagrangian-Eulerian solution scheme for the transport in fractures, combined with an adaptive gridding technique to account for sharp concentration fronts. The fracture-matrix interaction is calculated with an efficient method which has been successfully used in the past for dual-porosity models. Discrete fractures and matrix blocks are treated as two different systems, and the interaction is modeled by introducing sink/source terms in both systems. It is assumed that diffusive transport in the matrix can be approximated as a one-dimensional process, perpendicular to the adjacent fracture surfaces. A direct solution scheme is employed to solve the coupled fracture and matrix equations. The newly developed combination of the fracture network simulator and the fracture-matrix interaction module allows for detailed studies of spreading processes in fractured porous rock. The authors present a sample application which demonstrates the code's ability to handle large-scale fracture-matrix systems comprising individual fractures and matrix blocks of arbitrary size and shape.

  16. UNSPEC: revisited (semaphore code)

    International Nuclear Information System (INIS)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided

  17. Improvement of the computing speed of the FBR fuel pin bundle deformation analysis code 'BAMBOO'

    International Nuclear Information System (INIS)

    Ito, Masahiro; Uwaba, Tomoyuki

    2005-04-01

    JNC has developed a coupled analysis system consisting of a fuel pin bundle deformation analysis code, 'BAMBOO', and a thermal hydraulics analysis code, 'ASFRE-IV', for the purpose of evaluating the integrity of a subassembly under the BDI condition. This coupled analysis took much computation time because it needs convergent calculations to obtain numerically stationary solutions for thermal and mechanical behaviors. We improved the computation time of the BAMBOO code analysis to make the coupled analysis practicable. 'BAMBOO' is a FEM code, and as such its matrix calculations consume a large memory area to temporarily store intermediate results in the solution of simultaneous linear equations. The code used the Hard Disk Drive (HDD) as virtual memory to save Random Access Memory (RAM) of the computer. However, the use of the HDD increased the computation time because Input/Output (I/O) processing with the HDD took much time in data accesses. We improved the code so that it conducts I/O processing only with the RAM in matrix calculations and runs on high-performance computers. This improvement considerably increased the CPU occupation rate during the simulation and reduced the total simulation time of the BAMBOO code to about one-seventh of that before the improvement. (author)

  18. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

    A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. Utilizing optimized kernels whenever possible, the resulting performance of such a framework can be shown to be competitive with optimized Fortran programs.
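
    The format-independence idea can be mimicked in Python with duck typing: a conjugate-gradient solver written only against a `matvec` callable runs unchanged over a dense row-major matrix and a dictionary-of-keys sparse one. This is a sketch of the design principle, not the C++ library itself.

```python
# Conjugate gradient written against an abstract matvec interface, so any
# matrix representation that can do y = A*p plugs in unchanged.
def cg(matvec, b, tol=1e-10, max_iter=200):
    x = [0.0] * len(b)
    r = b[:]
    p = r[:]
    rs = sum(ri * ri for ri in r)          # squared residual norm
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Two different "formats" exposing the same matvec interface:
dense = [[4.0, 1.0], [1.0, 3.0]]
sparse = {(0, 0): 4.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}

def mv_dense(p):
    return [sum(a * pi for a, pi in zip(row, p)) for row in dense]

def mv_sparse(p):
    out = [0.0] * len(p)
    for (i, j), a in sparse.items():
        out[i] += a * p[j]
    return out

b = [1.0, 2.0]
x1, x2 = cg(mv_dense, b), cg(mv_sparse, b)
```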

  19. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    Full Text Available In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, and collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among the three is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of the unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  20. Distributed CPU multi-core implementation of SIRT with vectorized matrix kernel for micro-CT

    Energy Technology Data Exchange (ETDEWEB)

    Gregor, Jens [Tennessee Univ., Knoxville, TN (United States)

    2011-07-01

    We describe an implementation of SIRT for execution using a cluster of multi-core PCs. Algorithmic techniques are presented for reducing the size and computational cost of a reconstruction, including near-optimal relaxation, scalar preconditioning, orthogonalized ordered subsets, and data-driven focus of attention. Implementation-wise, a scheme is outlined which provides each core mutex-free access to its local shared memory while also balancing the workload across the cluster, and the system matrix is computed on the fly using vectorized code. Experimental results show the efficacy of the approach. (orig.)
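
    The core SIRT update can be sketched on a tiny invented system: x ← x + C Aᵀ R (b − A x), with R and C diagonal matrices of inverse row and column sums of A. This toy shows only the iteration itself, without the relaxation, preconditioning, subsets, or distribution described above.

```python
# Minimal SIRT iteration on a small invented linear system A x = b.
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]]
b = [3.0, 5.0, 4.0]            # consistent with x_true = [1, 2, 3]
m, n = len(A), len(A[0])
R = [1.0 / sum(row) for row in A]                              # inverse row sums
C = [1.0 / sum(A[i][j] for i in range(m)) for j in range(n)]   # inverse col sums

def sirt(A, b, iters=200):
    x = [0.0] * n
    for _ in range(iters):
        # Row-scaled residual: R (b - A x)
        res = [R[i] * (b[i] - sum(A[i][j] * x[j] for j in range(n)))
               for i in range(m)]
        # Column-scaled back-projection: x += C A^T res
        for j in range(n):
            x[j] += C[j] * sum(A[i][j] * res[i] for i in range(m))
    return x

x = sirt(A, b)
```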

  1. FADDEEV: A fortran code for the calculation of the frequency response matrix of multiple-input, multiple-output dynamic systems

    International Nuclear Information System (INIS)

    Owens, D.H.

    1972-06-01

    The KDF9/EGDON programme FADDEEV has been written to investigate a technique for the calculation of the matrix of frequency responses G(jw) describing the response of the output vector y of the multivariable differential/algebraic system S to the drive of the system input vector u. S: Eẋ = Ax + Bu, y = Cx, so that G(jw) = C(jwE - A)^(-1)B. The programme uses an algorithm due to Faddeev and has been written with emphasis upon: (a) simplicity of programme structure and computational technique, which should enable a user to find his way through the programme fairly easily and hence facilitate its manipulation as a subroutine in a larger code; (b) rapid computational ability, particularly in systems with a fairly large number of inputs and outputs and requiring the evaluation of the frequency responses at a large number of frequencies. Transport or time delays must be converted by the user to Pade or Bode approximations prior to input. Conditions under which the algorithm fails to give accurate results are identified, and methods for increasing the accuracy of the calculations are discussed. The conditions for accurate results using FADDEEV indicate that its application is specialized. (author)
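
    For the special case E = I, the Faddeev (Faddeev-LeVerrier) recursion can be sketched as follows: a single pass produces both the characteristic polynomial of A and the matrices of adj(sI - A), after which G(s) = C adj(sI - A) B / det(sI - A) is evaluated at any frequency without a fresh matrix inversion. The 2×2 system below is an invented example, not from the report.

```python
# Faddeev-LeVerrier recursion for the E = I case (illustrative sketch).
def faddeev(A):
    """Return (coeffs, Ms) with det(sI - A) = s^n + coeffs[0] s^(n-1) + ...
    and adj(sI - A) = Ms[0] s^(n-1) + Ms[1] s^(n-2) + ... + Ms[n-1]."""
    n = len(A)
    M = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    Ms, coeffs = [M], []
    for k in range(1, n + 1):
        AM = [[sum(A[i][l] * M[l][j] for l in range(n)) for j in range(n)]
              for i in range(n)]
        c = -sum(AM[i][i] for i in range(n)) / k     # next char-poly coefficient
        coeffs.append(c)
        M = [[AM[i][j] + (c if i == j else 0.0) for j in range(n)]
             for i in range(n)]
        if k < n:
            Ms.append(M)
    return coeffs, Ms

def freq_response(A, B, C, s):
    """Evaluate G(s) = C adj(sI - A) B / det(sI - A) at one complex frequency."""
    coeffs, Ms = faddeev(A)
    n = len(A)
    den = s ** n + sum(c * s ** (n - 1 - i) for i, c in enumerate(coeffs))
    adj = [[sum(Ms[k][i][j] * s ** (n - 1 - k) for k in range(n))
            for j in range(n)] for i in range(n)]
    CadjB = [[sum(C[p][i] * adj[i][j] * B[j][q]
                  for i in range(n) for j in range(n))
              for q in range(len(B[0]))] for p in range(len(C))]
    return [[e / den for e in row] for row in CadjB]

# Invented example: G(s) = 1/(s^2 + 3s + 2), so G(j) = 1/(1 + 3j) = 0.1 - 0.3j.
A2 = [[0.0, 1.0], [-2.0, -3.0]]
B2 = [[0.0], [1.0]]
C2 = [[1.0, 0.0]]
g = freq_response(A2, B2, C2, 1j)[0][0]
```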

  2. The WECHSL-Mod2 code: A computer program for the interaction of a core melt with concrete including the long term behavior

    International Nuclear Information System (INIS)

    Reimann, M.; Stiefel, S.

    1989-06-01

    The WECHSL-Mod2 code is a mechanistic computer code developed for the analysis of the thermal and chemical interaction of initially molten LWR reactor materials with concrete in a two-dimensional, axisymmetrical concrete cavity. The code performs calculations from the time of initial contact of a hot molten pool, over the start of solidification processes, until long term basemat erosion over several days, with the possibility of basemat penetration. The code assumes that the metallic phases of the melt pool form a layer at the bottom, overlaid by the oxide melt atop. Heat generation in the melt is by decay heat and chemical reactions from metal oxidation. Energy is lost to the melting concrete and to the upper containment by radiation or by evaporation of sump water possibly flooding the surface of the melt. Thermodynamic and transport properties as well as criteria for heat transfer and solidification processes are internally calculated for each time step. Heat transfer is modelled taking into account the high gas flux from the decomposing concrete and the heat conduction in the crusts possibly forming in the long term at the melt/concrete interface. The WECHSL code in its present version was validated by the BETA experiments. The test samples include a typical BETA post-test calculation and a WECHSL application to a reactor accident. (orig.) [de

  3. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q_2p by the method of Tsai and Kuo. The treatment of Q_2p, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods.

  4. A neutron spectrum unfolding computer code based on artificial neural networks

    Science.gov (United States)

    Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.

    2014-02-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This code, called the Neutron Spectrometry and Dosimetry with Artificial Neural Networks unfolding code, was designed with a graphical interface and is easy to use, friendly and intuitive for the user. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as entrance data for unfolding the neutron spectrum, only seven rate counts measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. This code generates a full report with all information of the unfolding in

  5. A survey of matrix theory and matrix inequalities

    CERN Document Server

    Marcus, Marvin

    2010-01-01

    Written for advanced undergraduate students, this highly regarded book presents an enormous amount of information in a concise and accessible format. Beginning with the assumption that the reader has never seen a matrix before, the authors go on to provide a survey of a substantial part of the field, including many areas of modern research interest. Part One of the book covers not only the standard ideas of matrix theory, but ones, as the authors state, "that reflect our own prejudices," among them Kronecker products, compound and induced matrices, quadratic relations, permanents, incidence

  6. Coding for urologic office procedures.

    Science.gov (United States)

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Invisible data matrix detection with smart phone using geometric correction and Hough transform

    Science.gov (United States)

    Sun, Halit; Uysalturk, Mahir C.; Karakaya, Mahmut

    2016-04-01

    Two-dimensional data matrices are used in many different areas to provide quick and automatic data entry to computer systems. Their most common usage is to automatically read labeled products (books, medicines, food, etc.) and recognize them. In Turkey, alcohol beverages and tobacco products are labeled and tracked with invisible data matrices for public safety and tax purposes. In this application, since the data matrices are printed on a special paper with a pigmented ink, they cannot be seen under daylight. When red LEDs are utilized for illumination and the reflected light is filtered, the invisible data matrices become visible and can be decoded by special barcode readers. Owing to the physical dimensions and price of these readers and the special training they require, cheap, small-sized, and easily carried domestic mobile invisible data matrix reader systems need to be delivered to every inspector in the law enforcement units. In this paper, we first developed an apparatus attached to the smartphone including a red LED light and a high-pass filter. Then, we developed an algorithm to process the images captured by smartphones and to decode all information stored in the invisible data matrix images. The proposed algorithm mainly involves four stages. In the first step, the data matrix code is processed by a Hough transform to find the "L"-shaped finder pattern. In the second step, the borders of the data matrix are found using convex hull and corner detection methods. Afterwards, distortion of the invisible data matrix is corrected by a geometric correction technique and the size of every module is fixed to a rectangular shape. Finally, the invisible data matrix is scanned line by line along the horizontal axis to decode it. Based on the results obtained from real test images of invisible data matrices captured with a smartphone, the proposed algorithm shows high accuracy and a low error rate.
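
    The first, Hough-transform stage can be illustrated with a toy accumulator over edge points; real detectors (and presumably the authors' implementation) work on full images with peak thresholding, so this is only a sketch of the voting idea on an invented point set.

```python
# Toy Hough transform: each edge point votes for all (rho, theta) lines
# through it; the strongest bin identifies a dominant line such as one leg
# of a data matrix's "L"-shaped finder pattern.
import math

def hough_lines(points, n_theta=180, rho_step=1.0, rho_max=64.0):
    """Vote edge points into (rho, theta) bins and return the top bin."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int(round((rho + rho_max) / rho_step))  # shift to a >= 0 index
            acc[(r, t)] = acc.get((r, t), 0) + 1
    (r_best, t_best), votes = max(acc.items(), key=lambda kv: kv[1])
    return r_best * rho_step - rho_max, math.pi * t_best / n_theta, votes

# A vertical segment x = 10: the strongest line should be rho = 10, theta = 0.
pts = [(10, y) for y in range(30)]
rho, theta, votes = hough_lines(pts)
```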

  8. Moving Towards a State of the Art Charge-Exchange Reaction Code

    Science.gov (United States)

    Poxon-Pearson, Terri; Nunes, Filomena; Potel, Gregory

    2017-09-01

Charge-exchange reactions have a wide range of applications, including late stellar evolution, constraining the matrix elements for neutrinoless double β-decay, and exploring the symmetry energy and other aspects of exotic nuclear matter. Still, much of the reaction theory needed to describe these transitions is underdeveloped and relies on assumptions and simplifications that are often extended outside their region of validity. In this work, we have begun to move towards a state-of-the-art charge-exchange reaction code. As a first step, we focus on Fermi transitions using a Lane potential in a few-body, Distorted Wave Born Approximation (DWBA) framework. We have maintained a modular structure for the code so that we can later incorporate complications such as nonlocality, breakup, and microscopic inputs. Results using this new charge-exchange code will be shown in comparison with the existing analysis for the case of 48Ca(p,n)48Sc. This work was supported in part by the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through the U.S. DOE Cooperative Agreement No. DE-FG52-08NA2855.

  9. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite-dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
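To give a flavor of the kind of simulation the second part of the book develops, here is a minimal Metropolis sampler for the Gaussian Hermitian one-matrix model with action S(M) = ½ Tr M². The model, matrix size, step size and sweep count are illustrative choices, not taken from the book; for this action the exact expectation is ⟨Tr M²⟩ = N².

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_one_matrix(N=4, sweeps=4000, step=0.8):
    """Metropolis sampling of the Gaussian one-matrix model
    S(M) = 1/2 Tr M^2 over Hermitian N x N matrices, measuring
    <Tr M^2> (exactly N^2 for this action)."""
    M = np.zeros((N, N), dtype=complex)
    history = []
    for sweep in range(sweeps):
        for i in range(N):
            for j in range(i, N):
                old = M[i, j]
                if i == j:       # diagonal entries stay real
                    new = old + step * rng.normal()
                else:
                    new = old + step * (rng.normal() + 1j * rng.normal())
                # Tr M^2 = sum |M_ij|^2; an off-diagonal entry appears
                # twice (as M_ij and its conjugate M_ji).
                w = 1.0 if i == j else 2.0
                dS = 0.5 * w * (abs(new) ** 2 - abs(old) ** 2)
                if dS <= 0 or rng.random() < np.exp(-dS):
                    M[i, j] = new
                    M[j, i] = np.conj(new)
        if sweep >= sweeps // 2:          # discard thermalization half
            history.append(np.real(np.trace(M @ M)))
    return np.mean(history)

est = metropolis_one_matrix()
print(8.0 < est < 32.0)  # estimate should scatter around N^2 = 16
```

The same entry-by-entry Metropolis structure carries over to interacting matrix models; only the ΔS computation changes.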

  10. User's guide for SAMMY: a computer model for multilevel r-matrix fits to neutron data using Bayes' equations

    International Nuclear Information System (INIS)

    Larson, N.M.; Perey, F.G.

    1980-11-01

A method is described for determining the parameters of a model from experimental data, based upon the utilization of Bayes' theorem. This method has several advantages over the least-squares method as it is commonly used; one important advantage is that the assumptions under which the parameter values have been determined are more clearly evident than in many results based upon least squares. Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of R-matrix theory. The required formulae from R-matrix theory are presented, and the computer implementation of both Bayes' equations and R-matrix theory is described. Details about the computer code and complete input/output information are given.
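For a linearized model d = Gp + ε with prior mean p0 and covariance M, Bayes' equations give the familiar generalized-least-squares update of the parameters. The sketch below shows one such update; the matrices and data are invented for illustration, and the real SAMMY code iterates this kind of step after linearizing R-matrix theory around the current parameter estimates.

```python
import numpy as np

def bayes_update(p0, M, G, d, V):
    """One linearized Bayes update: prior mean p0, prior covariance M,
    linear sensitivity matrix G, data d with covariance V.
    Returns the posterior mean and covariance:
        M' = (M^-1 + G^T V^-1 G)^-1
        p' = p0 + M' G^T V^-1 (d - G p0)"""
    Vi = np.linalg.inv(V)
    Mp = np.linalg.inv(np.linalg.inv(M) + G.T @ Vi @ G)  # posterior covariance
    pp = p0 + Mp @ G.T @ Vi @ (d - G @ p0)               # posterior mean
    return pp, Mp

# Toy example: two noisy measurements of a single parameter,
# with a weak prior centered at zero.
p0 = np.array([0.0]); M = np.array([[100.0]])
G = np.array([[1.0], [1.0]])
d = np.array([1.9, 2.1]); V = np.eye(2) * 0.01
pp, Mp = bayes_update(p0, M, G, d, V)
print(pp)  # close to the sample mean 2.0
```

With a weak prior the posterior mean reduces to the least-squares answer, which is exactly the relationship between Bayes' method and least squares the abstract alludes to.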

  11. Binding of matrix metalloproteinase inhibitors to extracellular matrix: 3D-QSAR analysis.

    Science.gov (United States)

    Zhang, Yufen; Lukacova, Viera; Bartus, Vladimir; Nie, Xiaoping; Sun, Guorong; Manivannan, Ethirajan; Ghorpade, Sandeep R; Jin, Xiaomin; Manyem, Shankar; Sibi, Mukund P; Cook, Gregory R; Balaz, Stefan

    2008-10-01

    Binding to the extracellular matrix, one of the most abundant human protein complexes, significantly affects drug disposition. Specifically, the interactions with extracellular matrix determine the free concentrations of small molecules acting in tissues, including signaling peptides, inhibitors of tissue remodeling enzymes such as matrix metalloproteinases, and other drug candidates. The nature of extracellular matrix binding was elucidated for 63 matrix metalloproteinase inhibitors, for which the association constants to an extracellular matrix mimic were reported here. The data did not correlate with lipophilicity as a common determinant of structure-nonspecific, orientation-averaged binding. A hypothetical structure of the binding site of the solidified extracellular matrix surrogate was analyzed using the Comparative Molecular Field Analysis, which needed to be applied in our multi-mode variant. This fact indicates that the compounds bind to extracellular matrix in multiple modes, which cannot be considered as completely orientation-averaged and exhibit structural dependence. The novel comparative molecular field analysis models, exhibiting satisfactory descriptive and predictive abilities, are suitable for prediction of the extracellular matrix binding for the untested chemicals, which are within applicability domains. The results contribute to a better prediction of the pharmacokinetic parameters such as the distribution volume and the tissue-blood partition coefficients, in addition to a more imminent benefit for the development of more effective matrix metalloproteinase inhibitors.

  12. Prediction of the HBS width and Xe concentration in grain matrix by the INFRA code

    International Nuclear Information System (INIS)

    Yang, Yong Sik; Lee, Chan Bok; Kim, Dae Ho; Kim, Young Min

    2004-01-01

Formation of a HBS (High Burnup Structure) is an important phenomenon for high-burnup fuel performance and safety. For the prediction of the HBS (the so-called 'rim microstructure'), a proposed rim-microstructure formation model, which is a function of the fuel temperature, grain size and fission rate, was inserted into the high-burnup fuel performance code INFRA. During the past decades, various examinations have been performed to identify the HBS formation mechanism and define the HBS characteristics. In the HBEP (High Burnup Effects Program), several rods were examined by EPMA analysis to measure the HBS width, and these results were re-measured by improved techniques including XRF and detailed microstructure examination. Recently, examination results for very high burnup (∼100 MWd/kgU) fuel were reported by Manzel et al., and EPMA analysis results have been released. Using the measured EPMA data, the HBS formation prediction model of the INFRA code is verified. Predicted HBS widths are compared with the measured ones, and the Xe concentration profile is compared with the measured EPMA data. The calculated HBS width shows good agreement with the measured data within a reasonable error range. Although there are some differences in the transition region and the central region, due to model limitations and to fission gas release prediction error respectively, the predicted Xe concentration in the fully developed HBS region shows good agreement with the measured data. (Author)

  13. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping, if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
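The Riccati-based design the abstract discusses can be illustrated with a small numerical sketch. The paper works with the continuous-time Algebraic Riccati Equation; for brevity this example uses the discrete-time Riccati recursion on a hypothetical lightly damped oscillator, with Q and R playing the role of the designer's weighting matrices.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration.
    Minimizes sum x'Qx + u'Ru subject to x_{k+1} = A x_k + B u_k."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Undamped oscillator discretized with Euler steps (dt = 0.1);
# the open loop is (marginally) unstable under this discretization.
dt = 0.1
A = np.array([[1.0, dt], [-dt, 1.0]])
B = np.array([[0.0], [dt]])
Q = np.eye(2)              # state weighting
R = np.array([[1.0]])      # control-effort weighting
K = dlqr_gain(A, B, Q, R)
rho = max(abs(np.linalg.eigvals(A - B @ K)))
print(rho < 1.0)  # closed loop is stable
```

Changing the entries of Q and R is precisely the "constrained choice among several parameters" the abstract mentions: heavier Q penalizes vibration amplitude, heavier R penalizes damper effort.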

  14. Composite Coatings with Ceramic Matrix Including Nanomaterials as Solid Lubricants for Oil-Less Automotive Applications

    Directory of Open Access Journals (Sweden)

    Posmyk A.

    2016-06-01

Full Text Available The paper presents the theoretical basis of manufacturing and chosen applications of composite coatings with a ceramic matrix containing nanomaterials as a solid lubricant (AHC+NL). From a theoretical point of view, reducing the friction coefficient of a sliding contact requires two materials: one with high hardness and the other with low shear strength. In the case of AHC+NL composite coatings, the matrix is a very hard and wear-resistant anodic oxide coating (AHC), whereas the solid lubricant is a nanomaterial (NL) with low shear strength, such as glassy carbon nanotubes (GC). The friction coefficient of cast iron GJL-350 sliding against the oxide coating itself is much higher (0.18-0.22) than when it slides against the composite coating (0.08-0.14). The friction reduction is possible due to the presence of carbon nanotubes or metal nanowires.

  15. The Application Strategy of Iterative Solution Methodology to Matrix Equations in Hydraulic Solver Package, SPACE

    International Nuclear Information System (INIS)

    Na, Y. W.; Park, C. E.; Lee, S. Y.

    2009-01-01

As a part of the Ministry of Knowledge Economy (MKE) project 'Development of safety analysis codes for nuclear power plants', KOPEC has been developing a hydraulic solver code package applicable to the safety analyses of nuclear power plants (NPPs). The matrices of the hydraulic solver are usually sparse and may be asymmetric. In the earlier stage of this project, the typical direct matrix solver packages MA48 and MA28 were tested as matrix solvers for the hydraulic solver code SPACE. The selection was based on the reasonably reliable performance experience with their former version, MA18, in the RELAP computer code. In the later stage of this project, iterative methodologies have been tested in the SPACE code. Among the few candidate iterative solution methodologies tested so far, the biconjugate gradient stabilized method (BICGSTAB) has shown the best performance in the applicability test and in the application to the SPACE code. Regardless of all the merits of using direct solver packages, there are other attractions to the iterative solution methodologies: the algorithm is much simpler and easier to handle. The potential problems related to the robustness of the iterative solution methodologies have been resolved by applying preconditioning methods adjusted and modified as appropriate for the application in the SPACE code. The conjugate gradient method was introduced in detail by Shewchuk, Golub and Saad in the mid-1990s. Its application to nuclear engineering in Korea started at about the same time and is still ongoing, with quite a few examples of application to neutronics. In addition, Yang introduced a conjugate gradient method programmed in the C++ language. The purpose of this study is to assess the performance and behavior of the iterative solution methodology compared to those of the direct solution methodology, which is still preferred due to its robustness and reliability.
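The BICGSTAB method named in the abstract can be stated compactly. Below is a minimal, unpreconditioned textbook sketch for a general (possibly asymmetric) system Ax = b; a production solver like the one in SPACE adds preconditioning and sparse storage, which are omitted here.

```python
import numpy as np

def bicgstab(A, b, tol=1e-10, max_iter=200):
    """Unpreconditioned BiCGSTAB iteration for Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    r_hat = r.copy()                      # fixed shadow residual
    rho = alpha = omega = 1.0
    v = p = np.zeros_like(b)
    for _ in range(max_iter):
        rho_new = r_hat @ r
        beta = (rho_new / rho) * (alpha / omega)
        rho = rho_new
        p = r + beta * (p - omega * v)
        v = A @ p
        alpha = rho / (r_hat @ v)
        s = r - alpha * v
        if np.linalg.norm(s) < tol:       # early convergence on s
            return x + alpha * p
        t = A @ s
        omega = (t @ s) / (t @ t)
        x = x + alpha * p + omega * s
        r = s - omega * t
        if np.linalg.norm(r) < tol:
            return x
    return x

# Small asymmetric test system of the kind a hydraulic solver produces.
A = np.array([[4.0, 1.0, 0.0],
              [2.0, 5.0, 1.0],
              [0.0, 1.0, 3.0]])
b = np.array([1.0, 2.0, 3.0])
x = bicgstab(A, b)
print(np.allclose(A @ x, b))  # True
```

Unlike plain conjugate gradients, BiCGSTAB does not require A to be symmetric positive definite, which is why it suits the asymmetric matrices described above.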

  16. Development of steam explosion simulation code JASMINE

    Energy Technology Data Exchange (ETDEWEB)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu; Kudo, Tamotsu; Sugimoto, Jun [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Nagano, Katsuhiro; Araki, Kazuhiro

    1995-11-01

A steam explosion is a phenomenon which can threaten the integrity of the containment vessel of a nuclear power plant under severe accident conditions. A numerical simulation code, JASMINE (JAeri Simulator for Multiphase INteraction and Explosion), intended to simulate the whole process of steam explosions, has been developed. The premixing model is based on the multiphase flow simulation code MISTRAL by Fuji Research Institute Co. In the JASMINE code, the constitutive equations and the flow regime map are modified for the simulation of premixing-related phenomena. The numerical solution method of the original code is retained, i.e. the basic equations are discretized semi-implicitly, the BCGSTAB method is used as the matrix solver to improve stability and convergence, and a TVD scheme is applied to capture steep phase distributions accurately. Test calculations have been performed for conditions corresponding to the experiments by Gilbertson et al. and Angelini et al., in which mixing of solid particles and water was observed under isothermal conditions and with boiling, respectively. (author).

  17. Description and applicability of the BEFEM-CODE

    Energy Technology Data Exchange (ETDEWEB)

    Groth, T.

    1980-05-15

The BEFEM-CODE, developed for rock mechanics problems in hard rock with joints, is a simple FEM code constructed using triangular and quadrilateral elements. As an option, a joint element of the Goodman type may be used. The Cook-Pian quadrilateral stress hybrid element was introduced into the version of the code used for the Naesliden project, to replace the constant-stress quadrilateral elements. This hybrid element, derived with assumed stress distributions, simplifies the excavation process for use in non-linear models. The shear behavior of the Goodman 1976 joint element has been replaced by Goodman's 1968 formulation. This element makes it possible to take dilation into account, but it was not considered necessary to use dilation to simulate proper joint behavior in the Naesliden project. The code uses Barton's shear strength criterion. Excessive nodal forces due to failure and non-linearities in the joint elements are redistributed with stress-transfer iterations. Convergence can be sped up by dividing each excavation sequence into several load steps in which the stiffness matrix is recalculated.
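Barton's shear strength criterion mentioned above has the closed form τ = σₙ tan(φr + JRC·log₁₀(JCS/σₙ)), where JRC is the joint roughness coefficient, JCS the joint wall compressive strength, and φr the residual friction angle. A small sketch with illustrative joint parameters (not values from the Naesliden project):

```python
import math

def barton_shear_strength(sigma_n, jrc, jcs, phi_r_deg):
    """Barton's empirical joint shear strength criterion:
    tau = sigma_n * tan(phi_r + JRC * log10(JCS / sigma_n)).
    Angles in degrees; sigma_n and JCS in the same stress units."""
    phi = phi_r_deg + jrc * math.log10(jcs / sigma_n)
    return sigma_n * math.tan(math.radians(phi))

# Moderately rough joint (JRC = 10), wall strength 100 MPa,
# residual friction angle 30 degrees.
for sigma_n in (1.0, 5.0, 10.0):   # normal stress in MPa
    tau = barton_shear_strength(sigma_n, 10.0, 100.0, 30.0)
    print(f"sigma_n = {sigma_n:5.1f} MPa -> tau = {tau:.2f} MPa")
```

The logarithmic term captures the observation that the effective friction angle of a rough joint decreases as normal stress approaches the wall strength.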

  18. Development of steam explosion simulation code JASMINE

    International Nuclear Information System (INIS)

    Moriyama, Kiyofumi; Yamano, Norihiro; Maruyama, Yu; Kudo, Tamotsu; Sugimoto, Jun; Nagano, Katsuhiro; Araki, Kazuhiro.

    1995-11-01

A steam explosion is a phenomenon which can threaten the integrity of the containment vessel of a nuclear power plant under severe accident conditions. A numerical simulation code, JASMINE (JAeri Simulator for Multiphase INteraction and Explosion), intended to simulate the whole process of steam explosions, has been developed. The premixing model is based on the multiphase flow simulation code MISTRAL by Fuji Research Institute Co. In the JASMINE code, the constitutive equations and the flow regime map are modified for the simulation of premixing-related phenomena. The numerical solution method of the original code is retained, i.e. the basic equations are discretized semi-implicitly, the BCGSTAB method is used as the matrix solver to improve stability and convergence, and a TVD scheme is applied to capture steep phase distributions accurately. Test calculations have been performed for conditions corresponding to the experiments by Gilbertson et al. and Angelini et al., in which mixing of solid particles and water was observed under isothermal conditions and with boiling, respectively. (author)

  19. 2016 MATRIX annals

    CERN Document Server

    Praeger, Cheryl; Tao, Terence

    2018-01-01

    MATRIX is Australia’s international, residential mathematical research institute. It facilitates new collaborations and mathematical advances through intensive residential research programs, each lasting 1-4 weeks. This book is a scientific record of the five programs held at MATRIX in its first year, 2016: Higher Structures in Geometry and Physics (Chapters 1-5 and 18-21); Winter of Disconnectedness (Chapter 6 and 22-26); Approximation and Optimisation (Chapters 7-8); Refining C*-Algebraic Invariants for Dynamics using KK-theory (Chapters 9-13); Interactions between Topological Recursion, Modularity, Quantum Invariants and Low-dimensional Topology (Chapters 14-17 and 27). The MATRIX Scientific Committee selected these programs based on their scientific excellence and the participation rate of high-profile international participants. Each program included ample unstructured time to encourage collaborative research; some of the longer programs also included an embedded conference or lecture series. The artic...

  20. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient in handling massive numbers of tally cells. ► Parallelized depletion is necessary for massive numbers of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are external wrappers between an MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module, aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive numbers of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with those of other MC burnup codes.
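The matrix exponential approach to depletion solves dN/dt = AN as N(t) = exp(At)·N(0), where A is the burnup (decay/transmutation) matrix. The toy below uses a plain Taylor scaling-and-squaring exponential on a hypothetical two-nuclide decay chain as a stand-in for the rational (CRAM-type) approximations a module like DEPTH employs, and checks the result against the analytic Bateman solution.

```python
import numpy as np

def expm_taylor(A, squarings=20, terms=20):
    """Matrix exponential by scaling-and-squaring with a Taylor series.
    A toy stand-in for production rational approximations."""
    A = A / (2.0 ** squarings)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ A / k
        E = E + term
    for _ in range(squarings):
        E = E @ E
    return E

# Two-nuclide decay chain: N1 -> N2 -> (removed).
lam1, lam2, t = 0.30, 0.05, 10.0
A = np.array([[-lam1, 0.0],
              [lam1, -lam2]])
N0 = np.array([1.0, 0.0])
N = expm_taylor(A * t) @ N0
# Parent nuclide follows simple exponential decay:
print(np.isclose(N[0], np.exp(-lam1 * t)))  # True
```

Real burnup matrices couple hundreds of nuclides and are extremely stiff, which is why RMC's DEPTH module uses rational approximations rather than a Taylor series; the solve structure, however, is the same.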

  1. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded ,Vandermonde ,Toeplitz ,and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  2. Calculation of total number of disintegrations after intake of radioactive nuclides using the pseudo inverse matrix

    International Nuclear Information System (INIS)

    Noh, Si Wan; Sol, Jeong; Lee, Jai Ki; Lee, Jong Il; Kim, Jang Lyul

    2012-01-01

Calculation of the total number of disintegrations after an intake of radionuclides is indispensable for calculating a dose coefficient, i.e. the committed effective dose per unit activity (Sv/Bq). To calculate the total number of disintegrations analytically, Birchall's algorithm has been commonly used. As described below, an inverse matrix must be calculated in this algorithm. As biokinetic models have become more complicated, however, the inverse matrix sometimes does not exist and the total number of disintegrations cannot be calculated. For this reason, a numerical method has been applied in the DCAL code, used to calculate the dose coefficients in the ICRP publications, and in the IMBA code. In this study, however, we applied the pseudo-inverse matrix to solve the problem of a non-existent inverse matrix. To validate our method, it was applied to two examples and the results were compared with the tabulated data in the ICRP publications. MATLAB 2012a was used to calculate the total number of disintegrations, employing the expm and pinv MATLAB built-in functions.
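For a linear biokinetic model dN/dt = AN with stable A, the time-integrated compartment contents satisfy ∫₀^∞ exp(At)N(0) dt = -A⁻¹N(0), which is where the inverse matrix in Birchall's algorithm appears. The sketch below substitutes the Moore-Penrose pseudo-inverse, as the abstract proposes, using NumPy's `pinv` in place of MATLAB's; the one-compartment example is invented for illustration.

```python
import numpy as np

def total_time_integral(A, N0):
    """Time integral of N(t) for dN/dt = A N, N(0) = N0:
    integral_0^inf exp(A t) N0 dt = -A^-1 N0 for stable A.
    The pseudo-inverse keeps the calculation going when the
    transfer matrix A is singular."""
    return -np.linalg.pinv(A) @ N0

# One-compartment check: dN/dt = -lam * N gives integral N0 / lam.
lam = 0.1
A = np.array([[-lam]])
N0 = np.array([1.0])
print(total_time_integral(A, N0))  # -> [10.]
```

Multiplying the integrated content of each compartment by its decay constant then yields the number of disintegrations used in the dose-coefficient calculation.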

  3. Simulation of radio nuclide migration in crystalline rock under influence of matrix diffusion and sorption kinetics: Code development and pre-assessment of migration experiment

    International Nuclear Information System (INIS)

    Woerman, A.; Xu Shulan

    1996-04-01

The overall objective of the present study is to illuminate how spatial variability in rock chemistry, in combination with spatial variability in matrix diffusion, affects radionuclide migration along single fractures in crystalline rock. Models for groundwater flow and transport of radionuclides in a single fracture with micro-fissures have been formulated on the basis of generally accepted physical and chemical principles. Limits for the validity of the models are stated. The model equations are solved by combining finite difference and finite element methods in a computer code package. The computational package consists of three parts, namely, a stochastic field generator, a sub-program that solves the flow problem and a sub-program that solves the transport problem in a single fracture with connecting micro-fissures. Migration experiments have been pre-assessed by simulations of breakthrough curves for a constant concentration change at the upstream boundary. Breakthrough curves are sensitive to variations of parameters such as fracture aperture, porosity, distribution coefficient and advection velocity. The impact of matrix diffusion and sorption is manifested as a retention of radionuclides causing a prolonged breakthrough. Heterogeneous sorption was characterized by a variable distribution coefficient for which the coefficient of variation CV(Kd) = 1 and the integral scale of an exponential covariance function is one tenth of the drill core's length. Simulated breakthrough curves for the heterogeneous sorption case have a relative variance of 3% in comparison with the homogeneous case. An appropriate experimental set-up for investigating the effect of matrix diffusion and sorption on radionuclide migration would have an aperture of less than 1 mm and a porosity larger than 0.5%. 36 refs, 19 figs
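The retention effect described above can be illustrated with the leading term of the classical Ogata-Banks solution for a constant-concentration inlet in 1-D advection-dispersion with a linear-sorption retardation factor R. This generic sketch is not the report's fracture/micro-fissure model; the parameter values are invented, and the neglected second term of the solution is small for them.

```python
import math

def breakthrough(x, t, v, D, R):
    """Approximate relative concentration C/C0 at distance x, time t,
    for velocity v, dispersion D and retardation factor R (leading
    term of the Ogata-Banks solution)."""
    return 0.5 * math.erfc((R * x - v * t) / (2.0 * math.sqrt(D * R * t)))

x, v, D = 1.0, 0.01, 1e-4     # m, m/s, m^2/s
for R in (1.0, 5.0):          # conservative vs. sorbing tracer
    t50 = R * x / v           # mid-breakthrough arrives R times later
    print(R, round(breakthrough(x, t50, v, D, R), 2))  # both ~0.5
```

Sorption (R > 1) shifts the whole breakthrough curve to later times without changing its midpoint value, which is the "prolonged breakthrough" signature the simulations look for.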

  4. Low Complexity Encoder of High Rate Irregular QC-LDPC Codes for Partial Response Channels

    Directory of Open Access Journals (Sweden)

    IMTAWIL, V.

    2011-11-01

Full Text Available High-rate irregular QC-LDPC codes based on circulant permutation matrices, designed for efficient encoder implementation, are proposed in this article. The structure of the code is an approximate lower-triangular matrix. In addition, we present two novel efficient encoding techniques for generating the redundant bits. The complexity of the encoder implementation depends on the number of parity bits of the code for one-stage encoding and on the length of the code for two-stage encoding. The advantage of both encoding techniques is that few XOR gates are used in the encoder implementation. Simulation results on partial-response channels also show that the BER performance of the proposed code gains over other QC-LDPC codes.
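The circulant-permutation building block of QC-LDPC codes, and the XOR-only parity computation, can be sketched as follows. For simplicity the parity part of H is taken to be the identity rather than the approximate lower-triangular structure the paper uses, so this illustrates the circulant expansion and GF(2) encoding style, not the proposed one-stage/two-stage techniques; the base matrix and sizes are invented.

```python
import numpy as np

def circulant_perm(z, shift):
    """z-by-z circulant permutation matrix: the identity cyclically
    shifted by `shift` columns (shift < 0 denotes the all-zero block,
    a common convention in QC-LDPC base matrices)."""
    if shift < 0:
        return np.zeros((z, z), dtype=int)
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def expand(base, z):
    """Expand a base matrix of shift values into a full binary matrix."""
    return np.block([[circulant_perm(z, s) for s in row] for row in base])

# Toy code, expansion factor z = 4: H = [H1 | I], so the parity bits
# are simply p = H1 m (mod 2), i.e. a small XOR network.
base = [[1, 2, 0, -1],
        [3, 0, -1, 0]]
H1 = expand(base, 4)                        # 8 x 16 message part
H = np.hstack([H1, np.eye(8, dtype=int)])   # 8 x 24 parity-check matrix
m = np.random.randint(0, 2, 16)             # message bits
p = H1 @ m % 2                              # parity bits via XORs
c = np.concatenate([m, p])                  # systematic codeword
print((H @ c % 2).sum())  # 0: every parity check is satisfied
```

Because each block of H is a circulant permutation, the encoder only needs cyclic shifts and XORs, which is the hardware advantage the abstract emphasizes.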

  5. The Morphosyntactic Structure of the Noun and Verb Phrases in Dholuo/Kiswahili Code Switching

    Directory of Open Access Journals (Sweden)

    Jael Anyango Ojanga

    2015-04-01

Full Text Available Code switching, the use of two or more languages or dialects interchangeably in a single communication context, is a common linguistic practice owing to the trend of multilingualism in the world today. In many situations of language contact, constituents of one language can be found within the constituents of another language in a number of linguistic phenomena, namely lexical borrowing, transfer, interference, code switching and diffusion (Annamalai, 1989). Code switching is claimed to be the most prevalent and common mode of interaction among multilingual speakers. Brock and Eastman (1971) suggest that the topic discussed influences the choice of language. Nouns and verbs have been found to be the most frequently code-switched elements in bilingual exchange. The study took a qualitative approach with a descriptive research design. It was guided by the Matrix Language Frame Model, formulated by Myers-Scotton in 1993, which expounds the realization and structure of the major word classes as used in code switching. Data was collected in Nyangeta Zone, Winam Division of Kisumu East District. Winam Division is mostly inhabited by elite Dholuo L1 speakers. A sample of twenty-four teachers was purposively selected to provide the data needed for the study. A focus group discussion was used to collect a corpus of Dholuo/Kiswahili data, which was recorded through audio taping. The recorded data was then analyzed morphosyntactically using the Matrix Language Frame Model. The data revealed that the noun and verb phrases were realized under three categories: Matrix Language Island constituents (ML Islands), ML+EL constituents, and Embedded Language Islands (EL Islands). Keywords: code switching, multilingualism, morphosyntactic

  6. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  7. MELMRK 2.0: A description of computer models and results of code testing

    International Nuclear Information System (INIS)

    Wittman, R.S.; Denny, V.; Mertol, A.

    1992-01-01

An advanced version of the MELMRK computer code has been developed that provides detailed models for conservation of mass, momentum, and thermal energy within relocating streams of molten metallics during meltdown of Savannah River Site (SRS) reactor assemblies. In addition to a mechanistic treatment of transport phenomena within a relocating stream, MELMRK 2.0 retains the MOD1 capability for real-time coupling of the in-depth thermal response of participating assembly heat structures and, further, augments this capability with models for self-heating of the relocating melt owing to steam oxidation of metallics and fission product decay power. As was the case for MELMRK 1.0, the MOD2 version offers state-of-the-art numerics for solving coupled sets of nonlinear differential equations. Principal features include the application of multi-dimensional Newton-Raphson techniques to accelerate convergence and direct matrix inversion to advance primitive variables from one iterate to the next. Additionally, MELMRK 2.0 provides logical event flags for managing the broad range of code options available for treating features such as (1) coexisting flow regimes, (2) dynamic transitions between flow regimes, and (3) linkages between the heatup and relocation code modules. The purpose of this report is to provide a detailed description of the MELMRK 2.0 computer models for melt relocation. Also included are illustrative results of code testing, as well as an integrated calculation for meltdown of a Mark 31a assembly.

8. Code-Mixing and Code-Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

Full Text Available This study aimed to describe the specific forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the determining factors influencing those forms of code switching and code mixing. The research is a descriptive qualitative case study which took place in Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapters, the forms of code mixing and code switching in learning activities in Al Mawaddah Boarding School involve the use of Javanese, Arabic, English and Indonesian, through the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role, the desire to explain and interpret, material sourced from the original language and its variations, and material sourced from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in Al Mawaddah boarding school, regarding the rules and characteristic variations in the language of teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and the effectiveness of teaching and learning strategies in boarding schools.

  9. Distributed Remote Vector Gaussian Source Coding with Covariance Distortion Constraints

    DEFF Research Database (Denmark)

    Zahedi, Adel; Østergaard, Jan; Jensen, Søren Holdt

    2014-01-01

    In this paper, we consider a distributed remote source coding problem, where a sequence of observations of source vectors is available at the encoder. The problem is to specify the optimal rate for encoding the observations subject to a covariance matrix distortion constraint and in the presence...

  10. Application of the R-matrix method to photoionization of molecules.

    Science.gov (United States)

    Tashiro, Motomichi

    2010-04-07

The R-matrix method has been used for theoretical calculations of electron collisions with atoms and molecules for many years. The method was also formulated to treat the photoionization process; however, its application has been mostly limited to photoionization of atoms. In this work, we implement the R-matrix method to treat the molecular photoionization problem based on the UK R-matrix codes. This method can be used for diatomic as well as polyatomic molecules, with a multiconfigurational description of the electronic states of both the neutral target molecule and the product molecular ion. Test calculations were performed for valence-electron photoionization of nitrogen (N(2)) as well as nitric oxide (NO) molecules. The calculated photoionization cross sections and asymmetry parameters agree reasonably well with the available experimental results, suggesting the usefulness of the method for molecular photoionization.

  11. Integrins and extracellular matrix in mechanotransduction

    Directory of Open Access Journals (Sweden)

    Ramage L

    2011-12-01

    Full Text Available Lindsay Ramage, Queen’s Medical Research Institute, University of Edinburgh, Edinburgh, UK. Abstract: Integrins are a family of cell surface receptors which mediate cell–matrix and cell–cell adhesions. Among other functions, they provide an important mechanical link between the cell's external and intracellular environments, while the adhesions that they form also have critical roles in cellular signal transduction. Cell–matrix contacts occur at zones in the cell surface where adhesion receptors cluster and, when activated, the receptors bind to ligands in the extracellular matrix. The extracellular matrix surrounds the cells of tissues and forms the structural support of tissue, which is particularly important in connective tissues. Cells attach to the extracellular matrix through specific cell-surface receptors and molecules including integrins and transmembrane proteoglycans. Integrins work alongside other proteins such as cadherins, immunoglobulin superfamily cell adhesion molecules, selectins, and syndecans to mediate cell–cell and cell–matrix interactions and communication. Activation of adhesion receptors triggers the formation of matrix contacts in which bound matrix components, adhesion receptors, and associated intracellular cytoskeletal and signaling molecules form large functional, localized multiprotein complexes. Cell–matrix contacts are important in a variety of different cell and tissue properties including embryonic development, inflammatory responses, wound healing, and adult tissue homeostasis. This review summarizes the roles and functions of integrins and extracellular matrix proteins in mechanotransduction. Keywords: ligand binding, α subunit, β subunit, focal adhesion, cell differentiation, mechanical loading, cell–matrix interaction

  12. Channel coding techniques for wireless communications

    CERN Document Server

    Deergha Rao, K

    2015-01-01

    The book discusses modern channel coding techniques for wireless communications such as turbo codes, low-density parity check (LDPC) codes, space–time (ST) coding, RS (or Reed–Solomon) codes and convolutional codes. Many illustrative examples are included in each chapter for easy understanding of the coding techniques. The text is integrated with MATLAB-based programs to enhance the understanding of the subject’s underlying theories. It includes current topics of increasing importance such as turbo codes, LDPC codes, Luby transform (LT) codes, Raptor codes, and ST coding in detail, in addition to the traditional codes such as cyclic codes, BCH (or Bose–Chaudhuri–Hocquenghem) and RS codes and convolutional codes. Multiple-input and multiple-output (MIMO) communications is a multiple antenna technology, which is an effective method for high-speed or high-reliability wireless communications. PC-based MATLAB m-files for the illustrative examples are provided on the book page on Springer.com for free dow...

  13. VITAMIN-J/COVA/EFF-3 cross-section covariance matrix library and its use to analyse benchmark experiments in sinbad database

    International Nuclear Information System (INIS)

    Kodeli, Ivan-Alexander

    2005-01-01

    The new cross-section covariance matrix library ZZ-VITAMIN-J/COVA/EFF3, intended to simplify and encourage sensitivity and uncertainty analysis, was prepared and is available from the NEA Data Bank. The library is organised in a ready-to-use form including both the covariance matrix data and processing tools: (i) cross-section covariance matrices from the EFF-3 evaluation for five materials, 9Be, 28Si, 56Fe, 58Ni and 60Ni (other data will be included when available); (ii) the FORTRAN program ANGELO-2 to extrapolate/interpolate the covariance matrices to a user-defined energy group structure; (iii) the FORTRAN program LAMBDA to verify the mathematical properties of the covariance matrices, such as symmetry and positive definiteness. The preparation, testing and use of the covariance matrix library are presented. The uncertainties based on the cross-section covariance data were compared with those based on other evaluations, such as ENDF/B-VI. The collapsing procedure used in the ANGELO-2 code was compared and validated with the one used in the NJOY system.
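    The group-collapsing that ANGELO-2 performs on covariance matrices can be illustrated with a simple flux-weighted scheme. This is a generic sketch only; the actual ANGELO-2 rules for relative covariances and interpolation are more involved, and the function and variable names here are illustrative:

    ```python
    def collapse_covariance(cov, flux, groups):
        """Collapse a fine-group covariance matrix to a coarse group
        structure using flux weighting (simplified generic scheme).

        cov:    fine-group covariance matrix (list of lists)
        flux:   fine-group weighting flux
        groups: list of coarse groups, each a list of fine-group indices
        """
        n = len(groups)
        coarse = [[0.0] * n for _ in range(n)]
        for I, gi in enumerate(groups):
            si = sum(flux[g] for g in gi)
            for J, gj in enumerate(groups):
                sj = sum(flux[g] for g in gj)
                # flux-weighted double sum over the fine groups in block (I, J)
                coarse[I][J] = sum(flux[g] * flux[h] * cov[g][h]
                                   for g in gi for h in gj) / (si * sj)
        return coarse
    ```

    With a flat flux and a diagonal fine-group covariance, each coarse diagonal element is simply the average of the fine diagonal elements it contains, which is a quick sanity check on the weighting.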

  14. A solution for automatic parallelization of sequential assembly code

    Directory of Open Access Journals (Sweden)

    Kovačević Đorđe

    2013-01-01

    Full Text Available Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizator which reads sequential assembler code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembler input file into program objects suitable for further processing. After that, static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code has been parallelized by the parallelization algorithm, registers are allocated with a linear-scan allocation algorithm, and the final result is distributed assembler code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the assembly-code parallelizator. The result is an almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
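    The core scheduling step (distributing instructions across cores subject to the data-flow graph) can be sketched with a greedy list scheduler. Unit instruction latency and a pre-computed topological order are simplifying assumptions; the paper's parallelizator works on real MIPS assembly with register allocation, which is not modeled here:

    ```python
    def schedule(instrs, deps, n_cores):
        """Greedy list scheduling: assign each instruction to the earliest
        free core once all its data-flow predecessors have finished.

        instrs:  instruction names, assumed already in topological order
        deps:    dict mapping an instruction to its predecessor list
        n_cores: number of cores
        Returns the makespan (total cycles) assuming unit latency.
        """
        finish = {}                 # instruction -> cycle it completes
        core_free = [0] * n_cores   # cycle at which each core becomes free
        for i in instrs:
            ready = max((finish[d] for d in deps.get(i, ())), default=0)
            c = min(range(n_cores), key=lambda k: core_free[k])
            start = max(ready, core_free[c])
            finish[i] = start + 1
            core_free[c] = finish[i]
        return max(finish.values())
    ```

    Four independent instructions on two cores finish in two cycles (speedup 2.0), while a pure dependence chain stays serial no matter how many cores are available, which mirrors the near-linear but not perfectly linear speedups reported above.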

  15. ISOGEN: Interactive isotope generation and depletion code

    International Nuclear Information System (INIS)

    Venkata Subbaiah, Kamatam

    2016-01-01

    ISOGEN is an interactive code for solving first-order coupled linear differential equations with constant coefficients for a large number of isotopes, which are produced or depleted by the processes of radioactive decay or through neutron transmutation or fission. These coupled equations can be written in matrix notation involving radioactive decay constants and transmutation coefficients. The eigenvalues of the resulting matrix vary widely (by several tens of orders of magnitude), and hence no single method of solution is suitable for obtaining precise estimates of isotope concentrations. Therefore, different methods of solution are followed, namely the matrix exponential method, the Bateman series method, and the Gauss-Seidel iteration method, as in the ORIGEN-2 code. The ISOGEN code is written in a modern computer language, VB.NET 2013, for the Windows 7 operating system, which enables many interactive features between the user and the program. The output results depend on the input neutron database employed and the time step involved in the calculations. The program can display information about the database files, and the user has to select the one which suits the current need. The program prints a 'WARNING' message if the time step is too large, as decided by a built-in convergence criterion. Other salient interactive features provided are (i) inspection of the input data that goes into the calculation, (ii) viewing of the radioactive decay sequence of isotopes (daughters, precursors, photons emitted) in a graphical format, (iii) solution for parent and daughter products by the direct Bateman series solution method, (iv) a quick input method and context-sensitive prompts for guiding the novice user, (v) viewing of output tables for any parameter of interest, and (vi) reading of the output file to generate new information for viewing or printing, since the program stores basic nuclide concentrations, unlike other batch jobs. The sample
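    The coupled decay equations the record refers to have the form dN/dt = A·N, where A holds decay constants and transmutation coefficients. A minimal sketch comparing two of the solution routes mentioned above, the matrix exponential and the Bateman series, for a two-member parent/daughter chain (the decay constants, time, and initial inventory are arbitrary illustrative values, not from ISOGEN):

    ```python
    import math

    def expm2(A, terms=60):
        """exp(A) for a 2x2 matrix via a truncated Taylor series.
        Adequate here because the matrix entries are small."""
        R = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at identity
        T = [[1.0, 0.0], [0.0, 1.0]]   # current term A^k / k!
        for k in range(1, terms):
            T = [[sum(T[i][m] * A[m][j] for m in range(2)) / k
                  for j in range(2)] for i in range(2)]
            R = [[R[i][j] + T[i][j] for j in range(2)] for i in range(2)]
        return R

    l1, l2, t, n0 = 0.1, 0.05, 3.0, 1000.0   # decay constants, time, parent inventory

    # matrix exponential route: N(t) = exp(A t) N(0)
    A = [[-l1 * t, 0.0], [l1 * t, -l2 * t]]
    E = expm2(A)
    n1, n2 = E[0][0] * n0, E[1][0] * n0

    # Bateman series route for the same parent/daughter chain
    b1 = n0 * math.exp(-l1 * t)
    b2 = n0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))
    ```

    Both routes agree to machine precision for this well-conditioned case; the point of having several methods, as in ISOGEN and ORIGEN-2, is that chains with widely separated decay constants make any single method inaccurate.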

  16. Saltstone Matrix Characterization And Stadium Simulation Results

    International Nuclear Information System (INIS)

    Langton, C.

    2009-01-01

    SIMCO Technologies, Inc. was contracted to evaluate the durability of the saltstone matrix material and to measure saltstone transport properties. This information will be used to: (1) parameterize the STADIUM® service life code, (2) predict the leach rate (degradation rate) for the saltstone matrix over 10,000 years using the STADIUM® concrete service life code, and (3) validate the modeled results by conducting leaching (water immersion) tests. Saltstone durability for this evaluation is limited to changes in the matrix itself and does not include changes in the chemical speciation of the contaminants in the saltstone. This report summarizes the results obtained to date, which include characterization data for saltstone cured up to 365 days and characterization of saltstone cured for 137 days and immersed in water for 31 days. Chemicals for preparing simulated non-radioactive salt solution were obtained from chemical suppliers, and the saltstone slurry was mixed according to directions provided by SRNL. However, SIMCO Technologies Inc. personnel made a mistake in the premix proportions: instead of the reference mix of 45 wt% slag, 45 wt% fly ash, and 10 wt% cement, they used 21 wt% slag, 65 wt% fly ash, and 14 wt% cement. The mistake was acknowledged, and new mixes have been prepared and are curing. The results presented in this report are expected to be conservative, since the samples prepared were deficient in slag, which is very reactive in the caustic salt solution, and contained excess fly ash. The hydraulic reactivity of slag is about four times that of fly ash, so the amount of hydrated binder formed per unit volume in the SIMCO saltstone samples

  17. PIPIT: a momentum space optical potential code for pions

    Energy Technology Data Exchange (ETDEWEB)

    Eisenstein, R A [Carnegie-Mellon Univ., Pittsburgh, Pa. (USA). Dept. of Physics; Tabakin, F [Pittsburgh Univ., Pa. (USA). Dept. of Physics

    1976-11-01

    Angular distributions for the elastic scattering of pions are generated by summing a partial wave series. The elastic T-matrix elements for each partial wave are obtained by solving a relativistic Lippmann-Schwinger equation in momentum space using a matrix inversion technique. The Coulomb interaction is included exactly using the method of Vincent and Phatak. The πN amplitude is obtained from on-shell phase shift information and incorporates a separable off-shell form factor to ensure a physically reasonable off-shell extrapolation. The πN interaction is of finite range, and a kinematic transformation procedure is used to express the πN amplitude in the π-nucleus frame. A maximum of 30 partial waves can be used in the present version of the program to calculate the cross section. The Lippmann-Schwinger equation is presently solved for each partial wave by inverting a 34×34 supermatrix; at very high energies, larger dimensions may be required. The present version of the code uses a separable non-local πN potential of finite range; other types of non-localities, or non-separable potentials, may be of physical interest.
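    The matrix-inversion step for a single partial wave can be sketched schematically: discretizing momenta on a quadrature grid turns T = V + V G T into the linear system T = (I − V G)⁻¹ V. This toy version uses a 2-point grid and an explicit 2×2 inverse; the real PIPIT inverts a 34×34 supermatrix with relativistic kinematics and an exact Coulomb treatment, none of which is modeled here, and the matrix values below are arbitrary:

    ```python
    def solve_t_matrix(V, G):
        """Solve the schematic Lippmann-Schwinger equation T = V + V G T
        on a 2-point momentum grid by matrix inversion: T = (I - V G)^-1 V."""
        # M = I - V G
        M = [[(1.0 if i == j else 0.0)
              - sum(V[i][k] * G[k][j] for k in range(2))
              for j in range(2)] for i in range(2)]
        det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
        Minv = [[ M[1][1] / det, -M[0][1] / det],
                [-M[1][0] / det,  M[0][0] / det]]
        # T = M^-1 V
        return [[sum(Minv[i][k] * V[k][j] for k in range(2))
                 for j in range(2)] for i in range(2)]
    ```

    Setting the propagator G to zero recovers the Born limit T = V, a useful sanity check; for nonzero G the returned T satisfies the original equation to round-off.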

  18. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

    Full Text Available This research aims to formulate and select a strategy for BMT Al-Ittihad Rumbai to face changes in the business environment, both internal (organizational resources, finance, members) and external (competitors, the economy, politics, and others). The research method uses analysis of EFAS, IFAS, the IE Matrix, the SWOT-8K Matrix, the SPACE Matrix and the TWOS Matrix. We hope this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was obtained using a purposive sampling technique and comprises the manager and leader of BMT Al-Ittihad Rumbai Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is growth, stabilization and aggressive, respectively. The strategies chosen after applying the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  19. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes basic knowledge of reactor lattices, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced treatment of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors arising from different sources of library data. (author)

  20. The importance of including dynamic soil-structure interaction into wind turbine simulation codes

    DEFF Research Database (Denmark)

    Damgaard, Mads; Andersen, Lars Vabbersgaard; Ibsen, Lars Bo

    2014-01-01

    A rigorous numerical model, describing a wind turbine structure and subsoil, may contain thousands of degrees of freedom, making the approach computationally inefficient for fast time domain analysis. In order to meet the requirements of real-time calculations, the dynamic impedance of the foundation from a rigorous analysis can be formulated into a so-called lumped-parameter model consisting of a few springs, dashpots and point masses which are easily implemented into aeroelastic codes. In this paper, the quality of consistent lumped-parameter models of rigid surface footings and monopiles is examined. The optimal order of the models is determined and implemented into the aeroelastic code HAWC2, where the dynamic response of a 5.0 MW wind turbine is evaluated. In contrast to the fore-aft vibrations, the inclusion of soil-structure interaction is shown to be critical for the side-side vibrations...
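    The basic building block of such a lumped-parameter model is a spring, dashpot and point mass acting in parallel, whose frequency-dependent dynamic stiffness can be written down directly. A minimal sketch (the parameter values used in testing are arbitrary, and real consistent lumped-parameter models chain several such units fitted to the rigorous impedance):

    ```python
    def dynamic_stiffness(omega, k, c, m):
        """Dynamic stiffness Z(omega) = k - m*omega^2 + i*c*omega of a
        single spring (k), dashpot (c) and point mass (m) in parallel,
        the basic unit of a lumped-parameter foundation model."""
        return complex(k - m * omega ** 2, c * omega)
    ```

    At zero frequency the expression reduces to the static stiffness k, and the real part vanishes at omega = sqrt(k/m), the resonance of the unit, which is how the fitted springs and masses encode the frequency content of the rigorous foundation impedance.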

  1. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  2. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can take the form of a full-scope control room simulator, an engineering simulator representing the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training vendor/utility personnel, etc., is how well they represent what has been learned from industrial experience, large integral experiments and separate-effects tests. Typically, simulation codes are benchmarked against some of these, the necessary level of agreement depending upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside the original design basis, etc. Consequently, there is a continual need to assure that benchmarks against important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code, and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients, large integral experiments, and separate-effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e., the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  3. Multireference configuration interaction theory using cumulant reconstruction with internal contraction of density matrix renormalization group wave function.

    Science.gov (United States)

    Saitow, Masaaki; Kurashige, Yuki; Yanai, Takeshi

    2013-07-28

    We report development of the multireference configuration interaction (MRCI) method that can use active-space references much larger than has previously been possible. The recent development of the density matrix renormalization group (DMRG) method in multireference quantum chemistry offers the ability to describe static correlation in a large active space. The present MRCI method provides a critical correction to the DMRG reference by including high-level dynamic correlation through the CI treatment. When the DMRG and MRCI theories are combined (DMRG-MRCI), the full internal contraction of the reference in the MRCI ansatz, including contraction of semi-internal states, plays a central role. However, it is thought to involve formidable complexity because of the presence of the five-particle rank reduced density matrix (RDM) in the Hamiltonian matrix elements. To address this complexity, we express the Hamiltonian matrix using commutators, which allows the five-particle rank RDM to be cancelled out without any approximation. We then introduce an approximation to the four-particle rank RDM by using a cumulant reconstruction from lower-particle rank RDMs. A computer-aided approach is employed to derive the exceedingly complex equations of the MRCI in tensor-contracted form and to implement them into an efficient parallel computer code. This approach extends to the size-consistency-corrected variants of MRCI, such as the MRCI+Q, MR-ACPF, and MR-AQCC methods. We demonstrate the capability of the DMRG-MRCI method in several benchmark applications, including the evaluation of the singlet-triplet gap of free-base porphyrin using 24 active orbitals.

  4. Code structure for U-Mo fuel performance analysis in high performance research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Gwan Yoon; Cho, Tae Won; Lee, Chul Min; Sohn, Dong Seong [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of); Lee, Kyu Hong; Park, Jong Man [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-05-15

    A performance analysis model applicable to research reactor fuel is being developed, with available models describing the fuel performance phenomena observed in in-pile tests. We established the calculation algorithm and scheme to best predict fuel performance, using a radio-thermo-mechanically coupled system to consider fuel swelling, interaction-layer growth, pore formation in the fuel meat, creep deformation of the fuel, mass relocation, etc. In this paper, we present the general structure of the performance analysis code for typical research reactor fuel and advanced features such as a model to predict fuel failure induced by the combination of breakaway swelling and pore growth in the fuel meat. A thermo-mechanical code dedicated to the modeling of U-Mo dispersion fuel plates is under development in Korea to satisfy the demand for advanced performance analysis and safety assessment of the plates. The major physical phenomena during irradiation are considered in the code, such as interaction layer formation by fuel-matrix interdiffusion, fission-induced swelling of the fuel particles, mass relocation by fission-induced stress, and pore formation at the interface between the reaction product and the Al matrix.

  5. PROMETHEUS - a code system for dynamic 3-D analysis of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Khotylev, V.A.; Hoogenboom, J.E.; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.

    1996-09-01

    The paper presents a multidimensional, general-purpose neutronics code system. It solves a number of steady-state and/or transient problems with coupled thermal hydraulics in one-, two-, or three-dimensional geometry. Owing to a number of specialized features, such as cavity treatment, automated convergence control, and burnup treatment using the full isotopic transition matrix, the code system can be applied to the analysis of fast and slow transients in small, large, and innovative reactor cores. (author)

  6. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
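    The weight-retaining property can be checked by brute force in a small case. This sketch works over GF(3) with c = 1 (illustrative choices, not taken from the paper) and verifies that every nonzero linear combination of (x − c)^i, i = 0..3, has Hamming weight at least that of the minimum-degree polynomial included:

    ```python
    from itertools import product

    P = 3  # illustrative: work over GF(3) with c = 1

    def x_minus_c_power(i, c=1, p=P):
        """Coefficients of (x - c)^i over GF(p), lowest degree first."""
        poly = [1]
        for _ in range(i):
            nxt = [0] * (len(poly) + 1)
            for d, a in enumerate(poly):
                nxt[d + 1] = (nxt[d + 1] + a) % p   # a * x^(d+1)
                nxt[d] = (nxt[d] - c * a) % p       # -c * a * x^d
            poly = nxt
        return poly

    def weight(poly):
        """Hamming weight: number of nonzero coefficients."""
        return sum(1 for a in poly if a % P)

    basis = [x_minus_c_power(i) for i in range(4)]
    checked = 0
    for coeffs in product(range(P), repeat=4):
        if not any(coeffs):
            continue
        comb = [0] * 4
        for a, poly in zip(coeffs, basis):
            for d, b in enumerate(poly):
                comb[d] = (comb[d] + a * b) % P
        jmin = min(i for i, a in enumerate(coeffs) if a)
        # the weight-retaining property of Massey, Costello and Justesen
        assert weight(comb) >= weight(basis[jmin])
        checked += 1
    ```

    All 80 nonzero combinations pass, consistent with the theorem; over GF(3), for instance, (x − 1)^2 = x^2 + x + 1 has weight 3 and no combination starting from it can drop below that.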

  7. THE CONCEPT “LONDON” AS A TEMPORAL CODE OF LINGUOCULTURE IN THE LITERARY AND REGIONAL WORK OF PETER ACKROYD “LONDON: THE BIOGRAPHY”

    OpenAIRE

    Kaliev, Sultan; Zhumagulova, Batima

    2018-01-01

    This article analyzes the spatial-temporal code of lingua-culture as one of the components of the general cognitive-matrix model of the structure of the concept "London" in the literary and regional work of Peter Ackroyd "London: The Biography". This approach implements integration of cognitive-matrix modeling of the structure of the concept and the system of codes of lingua-culture (anthropomorphic, temporal, vegetative, spiritual, social, chemical, etc.). The space-time code of ...

  8. Inverse Operation of Four-dimensional Vector Matrix

    OpenAIRE

    H J Bao; A J Sang; H X Chen

    2011-01-01

    This is a new series of studies to define and prove multidimensional vector matrix mathematics, which includes the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse, and related properties. These are innovative concepts of multidimensional vector matrix mathematics created by the authors, with numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.

  9. On the Representation of Aquifer Compressibility in General Subsurface Flow Codes: How an Alternate Definition of Aquifer Compressibility Matches Results from the Groundwater Flow Equation

    Science.gov (United States)

    Birdsell, D.; Karra, S.; Rajaram, H.

    2017-12-01

    The governing equations for subsurface flow codes in deformable porous media are derived from the fluid mass balance equation. One class of these codes, which we call general subsurface flow (GSF) codes, does not explicitly track the motion of the solid porous media but does accept general constitutive relations for porosity, density, and fluid flux. Examples of GSF codes include PFLOTRAN, FEHM, STOMP, and TOUGH2. Meanwhile, analytical and numerical solutions based on the groundwater flow equation have assumed forms for porosity, density, and fluid flux. We review the derivation of the groundwater flow equation, which uses the form of Darcy's equation that accounts for the velocity of fluids with respect to solids and defines the soil matrix compressibility accordingly. We then show how GSF codes have a different governing equation if they use the form of Darcy's equation that is written only in terms of fluid velocity. The difference is seen in the porosity change, which is part of the specific storage term in the groundwater flow equation. We propose an alternative definition of soil matrix compressibility to correct for the untracked solid velocity. Simulation results show significantly less error for our new compressibility definition than the traditional compressibility when compared to analytical solutions from the groundwater literature. For example, the error in one calculation for a pumped sandstone aquifer goes from 940 to <70 Pa when the new compressibility is used. Code users and developers need to be aware of assumptions in the governing equations and constitutive relations in subsurface flow codes, and our newly-proposed compressibility function should be incorporated into GSF codes.
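    For reference alongside the record above, the groundwater flow equation and the specific storage it defines can be stated in their standard textbook form (this is the conventional formulation, not the alternate compressibility definition the authors propose):

    ```latex
    S_s \,\frac{\partial h}{\partial t} \;=\; \nabla \cdot \left( K \,\nabla h \right) + q,
    \qquad
    S_s \;=\; \rho g \,\left( \alpha + n \beta \right)
    ```

    Here \(h\) is hydraulic head, \(K\) hydraulic conductivity, \(q\) a volumetric source term, and the specific storage \(S_s\) combines the soil-matrix compressibility \(\alpha\) with the fluid compressibility \(\beta\) weighted by the porosity \(n\). The porosity-change contribution hidden in \(\alpha\) is exactly where the record locates the discrepancy between GSF codes and groundwater-equation solutions.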

  10. Structure of matrix metalloproteinase-3 with a platinum-based inhibitor.

    Science.gov (United States)

    Belviso, Benny Danilo; Caliandro, Rocco; Siliqi, Dritan; Calderone, Vito; Arnesano, Fabio; Natile, Giovanni

    2013-06-18

    An X-ray investigation has been performed with the aim of characterizing the binding sites of a platinum-based inhibitor (K[PtCl3(DMSO)]) of matrix metalloproteinase-3 (stromelysin-1). The platinum complex targets His224 in the S1' specificity loop, representing the first step in the selective inhibition process (PDB ID code 4JA1).

  11. A neutron spectrum unfolding computer code based on artificial neural networks

    International Nuclear Information System (INIS)

    Ortiz-Rodríguez, J.M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J.M.; Vega-Carrillo, H.R.

    2014-01-01

    The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to irradiate the spheres sequentially, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple, because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural-net technology is presented. This code, called Neutron Spectrometry and Dosimetry with Artificial Neural Networks unfolding code, was designed with a graphical interface. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code is easy to use, friendly, and intuitive. It was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. As input for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously, the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes. This code generates a full report with all information of the unfolding
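    The unfolding step described above maps a handful of sphere count rates to many output quantities through a trained network. A minimal, self-contained sketch of that idea follows; the layer sizes, the single synthetic training pair, and the learning rate are illustrative stand-ins, not the architecture or data of the actual code:

    ```python
    import math
    import random

    random.seed(0)
    N_IN, N_HID, N_OUT = 7, 10, 15   # 7 count rates -> 15 quantities (illustrative)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    W1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
    W2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]

    def forward(x):
        """Two-layer net: sigmoid hidden layer, linear output layer."""
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        y = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
        return h, y

    def train_step(x, target, lr=0.05):
        """One gradient-descent step on the squared error; returns the loss."""
        h, y = forward(x)
        err = [yi - ti for yi, ti in zip(y, target)]
        # hidden-layer gradients, computed before W2 is modified
        g_hid = [sum(err[o] * W2[o][j] for o in range(N_OUT)) * h[j] * (1 - h[j])
                 for j in range(N_HID)]
        for o in range(N_OUT):
            for j in range(N_HID):
                W2[o][j] -= lr * err[o] * h[j]
        for j in range(N_HID):
            for i in range(N_IN):
                W1[j][i] -= lr * g_hid[j] * x[i]
        return sum(e * e for e in err)

    # one synthetic (counts, reference-output) pair as a stand-in for training data
    x = [random.random() for _ in range(N_IN)]
    t = [random.random() for _ in range(N_OUT)]
    loss0 = train_step(x, t)
    for _ in range(200):
        loss = train_step(x, t)
    ```

    The loss decreases steadily on this toy pair; the actual code's "robust design" optimization of the embedded architecture is a separate step not sketched here.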

  12. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  13. Staggered chiral random matrix theory

    International Nuclear Information System (INIS)

    Osborn, James C.

    2011-01-01

    We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.

  14. Modeling the formation of cell-matrix adhesions on a single 3D matrix fiber.

    Science.gov (United States)

    Escribano, J; Sánchez, M T; García-Aznar, J M

    2015-11-07

    Cell-matrix adhesions are crucial in different biological processes like tissue morphogenesis, cell motility, and extracellular matrix remodeling. These interactions, which link the cell cytoskeleton and matrix fibers, are built through protein clutches, generally known as adhesion complexes. The adhesion formation process has been deeply studied in two-dimensional (2D) cases; however, knowledge is limited for three-dimensional (3D) cases. In this work, we simulate different local extracellular matrix properties in order to unravel the fundamental mechanisms that regulate the formation of cell-matrix adhesions in 3D. We aim to study the mechanical interaction of these biological structures through a three-dimensional discrete approach, reproducing the force transmission pattern between the cytoskeleton and a single extracellular matrix fiber. This numerical model provides a discrete analysis of the proteins involved, including their spatial distribution, the interactions between them, and different phenomena such as unbinding of protein clutches or protein unfolding. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. The Exopolysaccharide Matrix

    Science.gov (United States)

    Koo, H.; Falsetta, M.L.; Klein, M.I.

    2013-01-01

    Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647

  16. Pseudomonas biofilm matrix composition and niche biology

    Science.gov (United States)

    Mann, Ethan E.; Wozniak, Daniel J.

    2014-01-01

    Biofilms are a predominant form of growth for bacteria in the environment and in the clinic. Critical for biofilm development are adherence, proliferation, and dispersion phases. Each of these stages includes reinforcement by, or modulation of, the extracellular matrix. Pseudomonas aeruginosa has been a model organism for the study of biofilm formation. Additionally, other Pseudomonas species utilize biofilm formation during plant colonization and environmental persistence. Pseudomonads produce several biofilm matrix molecules, including polysaccharides, nucleic acids, and proteins. Accessory matrix components shown to aid biofilm formation and adaptability under varying conditions are also produced by pseudomonads. Adaptation facilitated by biofilm formation allows for selection of genetic variants with unique and distinguishable colony morphology. Examples include rugose small-colony variants and wrinkly spreaders (WS), which overproduce Psl/Pel or cellulose, respectively, and mucoid bacteria that overproduce alginate. The well-documented emergence of these variants suggests that pseudomonads take advantage of matrix-building subpopulations conferring specific benefits for the entire population. This review will focus on various polysaccharides as well as additional Pseudomonas biofilm matrix components. Discussions will center on structure–function relationships, regulation, and the role of individual matrix molecules in niche biology. PMID:22212072

  17. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.

  18. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled in the Nuclear Code Committee to exchange information of the nuclear code developments among members of the committee. Enlarging the collection, the present one includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those in the library. (auth.)

  19. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    Benyoucef M

    2007-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses directly the spreading codes instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which reduces significantly the overall computational complexity. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.

  20. A Chip-Level BSOR-Based Linear GSIC Multiuser Detector for Long-Code CDMA Systems

    Directory of Open Access Journals (Sweden)

    M. Benyoucef

    2008-01-01

    Full Text Available We introduce a chip-level linear group-wise successive interference cancellation (GSIC) multiuser structure that is asymptotically equivalent to block successive over-relaxation (BSOR) iteration, which is known to outperform the conventional block Gauss-Seidel iteration by an order of magnitude in terms of convergence speed. The main advantage of the proposed scheme is that it uses directly the spreading codes instead of the cross-correlation matrix and thus does not require the calculation of the cross-correlation matrix (which requires 2NK² floating point operations (flops), where N is the processing gain and K is the number of users), which reduces significantly the overall computational complexity. Thus it is suitable for long-code CDMA systems such as IS-95 and UMTS, where the cross-correlation matrix changes every symbol. We study the convergence behavior of the proposed scheme using two approaches and prove that it converges to the decorrelator detector if the over-relaxation factor is in the interval ]0, 2[. Simulation results are in excellent agreement with theory.
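    The convergence claim can be illustrated numerically. The sketch below is our own toy construction, not the paper's chip-level GSIC detector: a plain element-wise successive over-relaxation iteration on the normal equations R d = y, whose fixed point is the decorrelator output d = R⁻¹y, run with an over-relaxation factor inside ]0, 2[. All names, sizes, and the random spreading codes are assumptions made for illustration.

```python
import numpy as np

# Element-wise successive over-relaxation on R d = y; the decorrelator
# output R^(-1) y is the fixed point of this iteration.
def sor_solve(R, y, omega, sweeps=200):
    d = np.zeros_like(y)
    K = len(y)
    for _ in range(sweeps):
        for k in range(K):
            resid = y[k] - R[k, :k] @ d[:k] - R[k, k + 1:] @ d[k + 1:]
            d[k] = (1 - omega) * d[k] + omega * resid / R[k, k]
    return d

rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(16, 4)) / 4.0   # toy spreading codes: N=16 chips, K=4 users
R = S.T @ S                                       # cross-correlation matrix
y = S.T @ (S @ np.array([1.0, -1.0, 1.0, 1.0]))   # matched-filter bank output
d = sor_solve(R, y, omega=1.2)                    # omega inside ]0, 2[
assert np.allclose(d, np.linalg.solve(R, y), atol=1e-6)
```

In a long-code system R changes every symbol, which is exactly why the paper works from the spreading codes directly rather than forming R as this toy does.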

  1. Computer code ANISN multiplying media and shielding calculation 2. Code description (input/output)

    International Nuclear Information System (INIS)

    Maiorino, J.R.

    1991-01-01

    The new code CCC-0514-ANISN/PC is described, as well as a ''GENERAL DESCRIPTION OF ANISN/PC code''. In addition to the ANISN/PC code, the transmittal package includes an interactive input generation programme called APE (ANISN Processor and Evaluator), which facilitates the work of the user in giving input. Also, a 21 group photon cross section master library FLUNGP.LIB in ISOTX format, which can be edited by an executable file LMOD.EXE, is included in the package. The input and output subroutines are reviewed. 6 refs, 1 fig., 1 tab

  2. Deterministic dense coding and faithful teleportation with multipartite graph states

    International Nuclear Information System (INIS)

    Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.

    2009-01-01

    We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
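    The invertibility condition on the reduced adjacency matrix is straightforward to test over GF(2). A minimal sketch (the helper name and the toy matrices are ours, not from the paper):

```python
import numpy as np

# Test whether a binary matrix is invertible over GF(2) by Gaussian
# elimination mod 2 (row swaps plus XOR row updates).
def invertible_gf2(A):
    A = np.array(A, dtype=np.uint8) % 2
    n = A.shape[0]
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return False          # no pivot: column is dependent
        A[[col, pivot]] = A[[pivot, col]]
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]    # eliminate mod 2
    return True

# Toy reduced adjacency matrices between two senders and two receivers:
assert invertible_gf2([[1, 1], [0, 1]])        # viable for the schemes
assert not invertible_gf2([[1, 1], [1, 1]])    # singular over GF(2)
```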

  3. Multifaceted role of matrix metalloproteinases (MMPs)

    OpenAIRE

    Singh, Divya; Srivastava, Sanjeev K.; Chaudhuri, Tapas K.; Upadhyay, Ghanshyam

    2015-01-01

    Matrix metalloproteinases (MMPs), a large family of calcium-dependent zinc-containing endopeptidases, are involved in the tissue remodeling and degradation of the extracellular matrix. MMPs are widely distributed in the brain and regulate various processes including microglial activation, inflammation, dopaminergic apoptosis, blood-brain barrier disruption, and modulation of α-synuclein pathology. High expression of MMPs is well documented in various neurological disorders including Parkinson...

  4. Direct-semidirect (DSD) codes

    International Nuclear Information System (INIS)

    Cvelbar, F.

    1999-01-01

    Recent codes for direct-semidirect (DSD) model calculations in the form of answers to a detailed questionnaire are reviewed. These codes include those embodying the classical DSD approach covering only the transitions to the bound states (RAF, HIKARI, and those of the Bologna group), as well as the code CUPIDO++ that also treats transitions to unbound states. (author)

  5. System Design Description for the TMAD Code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System

  6. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples

  7. Fast QC-LDPC code for free space optical communication

    Science.gov (United States)

    Wang, Jin; Zhang, Qi; Udeh, Chinonso Paschal; Wu, Rangzhong

    2017-02-01

    Free Space Optical (FSO) communication systems use the atmosphere as a propagation medium, so atmospheric turbulence effects lead to multiplicative noise tied to signal intensity. In order to suppress the signal fading induced by multiplicative noise, we propose a fast Quasi-Cyclic (QC) Low-Density Parity-Check (LDPC) code for FSO communication systems. As a linear block code based on a sparse matrix, the performance of QC-LDPC codes is extremely close to the Shannon limit. Current studies of LDPC codes in FSO communications focus mainly on the Gauss channel and the Rayleigh channel. In this study, the LDPC code is designed over the atmospheric turbulence channel, which is neither a Gauss channel nor a Rayleigh channel and is thus closer to the practical situation. Based on the characteristics of the atmospheric channel, modeled by the logarithmic-normal distribution and the K-distribution, we designed a special QC-LDPC code and deduced the log-likelihood ratio (LLR). An irregular QC-LDPC code for fast coding, with variable rates, is proposed in this paper. The proposed code achieves the excellent performance of LDPC codes: high efficiency at low rates, stability at high rates, and a small number of iterations. The results of belief propagation (BP) decoding show that the bit error rate (BER) is markedly reduced as the Signal-to-Noise Ratio (SNR) increases. Therefore, LDPC channel coding technology can effectively improve the performance of FSO systems. Moreover, the BER after decoding continues to decrease as the SNR increases, without exhibiting an error-floor phenomenon.
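    As a rough sketch of the LLR computation a BP decoder consumes, consider BPSK over a multiplicative (intensity-fading) channel with a gain known at the receiver. This is a deliberate simplification of the paper's log-normal/K-distributed turbulence models; the function name and all numbers are assumptions for illustration.

```python
import numpy as np

# Per-bit LLR for BPSK over a multiplicative channel y = h*x + n,
# with known gain h and Gaussian noise of variance sigma2:
# LLR = log p(y|bit=0) / p(y|bit=1) = 2*h*y / sigma2.
def bpsk_llr(y, h, sigma2):
    return 2.0 * h * y / sigma2

bits = np.array([0, 1, 1, 0])
x = 1.0 - 2.0 * bits                  # BPSK mapping: 0 -> +1, 1 -> -1
h = np.array([0.9, 1.2, 0.8, 1.1])    # fading gains, assumed known
y = h * x                             # noise-free samples, for clarity
llr = bpsk_llr(y, h, 0.5)
hard = (llr < 0).astype(int)          # negative LLR -> decide bit 1
assert (hard == bits).all()
```

In the paper's setting the gains would be drawn from the log-normal or K distribution and the LLRs fed into the irregular QC-LDPC BP decoder rather than hard-sliced.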

  8. Computerised output of phonetic codes in Devanagari script by dot-matrix printers

    International Nuclear Information System (INIS)

    Somasundaram, S.; Suri, M.M.K.; Khatua, R.

    1987-01-01

    This report describes the development of computer software for converting hex-octal, alpha-numeric and pure-alpha mode input in English into 'phonetic Devanagari characters', which can be printed on dot-matrix printers in 2 passes of the print-head, along with English text on the same lines. If the multilingual terminals presently available in India are used, 4 passes of the print-head are required for printing phonetic Devanagari characters, and the English text is also converted into phonetic Devanagari script during printing. Thus, the software reported here is an improvement over the facilities currently available in the Indian market. 9 tables, 2 refs. (author)

  9. The response-matrix based AFEN method for the hexagonal geometry

    International Nuclear Information System (INIS)

    Noh, Jae Man; Kim, Keung Koo; Zee, Sung Quun; Joo, Hyung Kook; Cho, Byng Oh; Jeong, Hyung Guk; Cho, Jin Young

    1998-03-01

    The analytic function expansion nodal (AFEN) method, developed to overcome the limitations caused by the transverse integration, has been successfully used to predict the neutron behavior in hexagonal cores as well as rectangular cores. In the hexagonal node, the transverse leakage resulting from the transverse integration has some singular terms, such as delta-functions and step-functions, near the node center line. In most nodal methods using the transverse integration, accuracy is degraded because the transverse leakage is approximated as a smooth function across the node center line, ignoring the singular terms. However, the AFEN method, in which no transverse leakage term appears in deriving the nodal coupling equations, keeps good accuracy for the hexagonal node. In this study, the AFEN method, which shows excellent accuracy in hexagonal core analyses, is reformulated in response matrix form. This form of the AFEN method can be implemented easily in nodal codes based on the response matrix method. Therefore, the Coarse Mesh Rebalance (CMR) acceleration technique, which is one of the main advantages of the response matrix method, can be utilized for the AFEN method. The response-matrix based AFEN method has been successfully implemented into the MASTER code, and its accuracy and computational efficiency were examined by analyzing the two- and three-dimensional benchmark problems of VVER-440. Based on the results, it can be concluded that the newly formulated AFEN method accurately predicts the assembly powers (within 0.2% average error) as well as the effective multiplication factor (within 20 pcm error). In addition, the CMR acceleration technique is quite efficient, reducing the computation time of the AFEN method by 8 to 10 times. (author). 22 refs., 1 tab., 4 figs

  10. User's manual for computer code RIBD-II, a fission product inventory code

    International Nuclear Information System (INIS)

    Marr, D.R.

    1975-01-01

    The computer code RIBD-II is used to calculate inventories, activities, decay powers, and energy releases for the fission products generated in a fuel irradiation. Changes from the earlier RIBD code are: the expansion to include up to 850 fission product isotopes, input in the user-oriented NAMELIST format, and run-time choice of fuels from an extensively enlarged library of nuclear data. The library that is included in the code package contains yield data for 818 fission product isotopes for each of fourteen different fissionable isotopes, together with fission product transmutation cross sections for fast and thermal systems. Calculational algorithms are little changed from those in RIBD. (U.S.)
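    The balance RIBD-II solves can be sketched on a toy scale. The example below integrates a two-member decay chain (decay constants and initial inventory are assumed values; the real code draws yields and cross sections for up to 850 isotopes from its library) and checks the result against the analytic Bateman solution:

```python
import numpy as np

# Toy two-member decay chain: dN/dt = A N, isotope 1 decays into isotope 2.
lam1, lam2 = 0.1, 0.02                 # assumed decay constants, 1/s
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])
N = np.array([1.0e6, 0.0])             # initial atom inventory
t_end, dt = 30.0, 1.0e-3
for _ in range(int(t_end / dt)):       # simple explicit Euler time-stepping
    N = N + dt * (A @ N)

# Analytic Bateman solution for the daughter isotope:
bateman2 = 1.0e6 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t_end) - np.exp(-lam2 * t_end))
assert np.isclose(N[0], 1.0e6 * np.exp(-lam1 * t_end), rtol=1e-3)
assert np.isclose(N[1], bateman2, rtol=1e-3)
```

Activities and decay powers then follow from the inventories as λN and λNE per isotope, which is the kind of edit RIBD-II reports.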

  11. Simulations of linear and Hamming codes using SageMath

    Science.gov (United States)

    Timur, Tahta D.; Adzkiya, Dieky; Soleha

    2018-03-01

    Digital data transmission over a noisy channel can distort the message being transmitted. The goal of coding theory is to ensure data integrity, that is, to find out if and where noise has distorted the message and what the original message was. Data transmission consists of three stages: encoding, transmission, and decoding. Linear and Hamming codes are the codes we discuss in this work, where the encoding algorithms use the parity check and generator matrices, and the decoding algorithms are nearest neighbor and syndrome decoding. We aim to show that we can simulate these processes using the SageMath software, which has a built-in class for coding theory in general and linear codes in particular. First we consider the message as a binary vector of size k. This message is then encoded to a vector of size n using the given algorithms. Then a noisy channel with a particular error probability is created, over which the transmission takes place. The last task is decoding, which corrects and reverts the received message back to the original message whenever possible, that is, whenever the number of errors that occurred is smaller than or equal to the correcting radius of the code. In this paper we use two types of data for the simulations, namely vector and text data.
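    The encode/transmit/decode pipeline described above can also be sketched outside SageMath. Below is a plain-numpy [7,4] Hamming example; the standard-form generator and parity-check matrices are the usual textbook choice, an assumption on our part (in SageMath itself, `codes.HammingCode(GF(2), 3)` packages the same machinery):

```python
import numpy as np

# Standard-form [7,4] Hamming code: G = [I | P], H = [P^T | I].
G = np.array([[1,0,0,0,0,1,1],
              [0,1,0,0,1,0,1],
              [0,0,1,0,1,1,0],
              [0,0,0,1,1,1,1]])
H = np.array([[0,1,1,1,1,0,0],
              [1,0,1,1,0,1,0],
              [1,1,0,1,0,0,1]])
assert not (G @ H.T % 2).any()          # G and H are consistent

msg = np.array([1, 0, 1, 1])            # message: binary vector of size k=4
codeword = msg @ G % 2                  # encoding to size n=7
received = codeword.copy()
received[2] ^= 1                        # channel flips one bit

syndrome = H @ received % 2             # syndrome decoding:
err_pos = next(i for i in range(7)      # the syndrome equals the column of H
               if (H[:, i] == syndrome).all())  # at the error position
received[err_pos] ^= 1
assert err_pos == 2
assert (received == codeword).all()     # single error corrected
```

One flipped bit is within the correcting radius of this code, so decoding recovers the codeword exactly, matching the behavior the simulations measure.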

  12. FRESCO: fusion reactor simulation code for tokamaks

    International Nuclear Information System (INIS)

    Mantsinen, M.J.

    1995-03-01

    To study the dynamics of tokamak fusion reactors, a zero-dimensional particle and power balance code FRESCO (Fusion Reactor Simulation Code) has been developed at the Department of Technical Physics of Helsinki University of Technology. The FRESCO code is based on zero-dimensional particle and power balance equations averaged over prescribed plasma profiles. The report describes the data structure of the FRESCO code, including the COMMON statements, program input, and program output. The general structure of the code is described, including its subprograms and functions. The physical model used and examples of the code's performance are also included in the report. (121 tabs.) (author)

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code in a way that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods, such as third-party libraries and platforms' interfaces. These are important for understanding the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials related to the artwork's production, I would like to explore the materiality of code that goes beyond the technical...

  14. T-matrix modeling of linear depolarization by morphologically complex soot and soot-containing aerosols

    International Nuclear Information System (INIS)

    Mishchenko, Michael I.; Liu, Li; Mackowski, Daniel W.

    2013-01-01

    We use state-of-the-art public-domain Fortran codes based on the T-matrix method to calculate orientation and ensemble averaged scattering matrix elements for a variety of morphologically complex black carbon (BC) and BC-containing aerosol particles, with a special emphasis on the linear depolarization ratio (LDR). We explain theoretically the quasi-Rayleigh LDR peak at side-scattering angles typical of low-density soot fractals and conclude that the measurement of this feature enables one to evaluate the compactness state of BC clusters and trace the evolution of low-density fluffy fractals into densely packed aggregates. We show that small backscattering LDRs measured with ground-based, airborne, and spaceborne lidars for fresh smoke generally agree with the values predicted theoretically for fluffy BC fractals and densely packed near-spheroidal BC aggregates. To reproduce higher lidar LDRs observed for aged smoke, one needs alternative particle models such as shape mixtures of BC spheroids or cylinders. -- Highlights: ► New superposition T-matrix code is applied to soot aerosols. ► Quasi-Rayleigh side-scattering peak in linear depolarization (LD) is explained. ► LD measurements can be used for morphological characterization of soot aerosols

  15. Matrix inversion tomosynthesis improvements in longitudinal x-ray slice imaging

    International Nuclear Information System (INIS)

    Dobbines, J.T. III.

    1990-01-01

    This patent describes a tomosynthesis apparatus. It comprises: an x-ray tomography machine for producing a plurality of x-ray projection images of a subject including an x-ray source, and detection means; and processing means, connected to receive the plurality of projection images, for: shifting and reconstructing the projection x-ray images to obtain a tomosynthesis matrix of images T; acquiring a blurring matrix F having components which represent out-of-focus and in-focus components of the matrix T; obtaining a matrix P representing only in-focus components of the imaged subject by solving a matrix equation including the matrix T and the matrix F; correcting the matrix P for low spatial frequency components; and displaying images indicative of contents of the matrix P
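    In linear-algebra terms, the patent's claim amounts to deblurring by solving a matrix equation: the reconstructed stack T mixes in-focus and out-of-focus planes through a blurring matrix F, and the in-focus planes P are recovered by inverting that relation. A toy numerical sketch, with the blurring model and all sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_planes = 4
# Hypothetical blurring matrix F: each tomosynthesis slice is mostly its own
# in-focus plane (diagonal) plus a fraction of every other plane (off-diagonal).
F = 0.15 * np.ones((n_planes, n_planes)) + 0.85 * np.eye(n_planes)
P_true = rng.random((n_planes, 64))    # in-focus planes (flattened pixel rows)
T = F @ P_true                         # blurred shift-and-add stack
P = np.linalg.solve(F, T)              # matrix-inversion tomosynthesis step
assert np.allclose(P, P_true)          # out-of-focus blur removed
```

In the patent, F comes from the known shift-and-add geometry rather than being assumed, and a low-spatial-frequency correction follows the inversion.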

  16. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  17. Double-Layer Low-Density Parity-Check Codes over Multiple-Input Multiple-Output Channels

    Directory of Open Access Journals (Sweden)

    Yun Mao

    2012-01-01

    Full Text Available We introduce a double-layer code based on the combination of a low-density parity-check (LDPC) code with the multiple-input multiple-output (MIMO) system, where the decoding can be done in both inner-iteration and outer-iteration manners. The present code, called the low-density MIMO code (LDMC), has a double-layer structure: one layer defines subcodes that are embedded in each transmission vector, and another glues these subcodes together. It supports inner iterations inside the LDPC decoder and outer iterations between detectors and decoders, simultaneously. It can also achieve the desired design rates due to the full rank of the deployed parity-check matrix. Simulations show that the LDMC performs favorably over MIMO systems.

  18. LTRACK: Beam-transport calculation including wakefield effects

    International Nuclear Information System (INIS)

    Chan, K.C.D.; Cooper, R.K.

    1988-01-01

    LTRACK is a first-order beam-transport code that includes wakefield effects up to quadrupole modes. This paper will introduce the readers to this computer code by describing the history, the method of calculations, and a brief summary of the input/output information. Future plans for the code will also be described

  19. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids

  20. Elementary matrix algebra

    CERN Document Server

    Hohn, Franz E

    2012-01-01

    This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology. Other subjects include the complete treatment of the structure...

  1. Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes

    Science.gov (United States)

    Harrington, James William

    Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present...

  2. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  3. Recent developments of the MARC/PN transport theory code including a treatment of anisotropic scatter

    International Nuclear Information System (INIS)

    Fletcher, J.K.

    1987-12-01

    The computer code MARC/PN provides a solution of the multigroup transport equation by expanding the flux in spherical harmonics. The coefficients of the series so obtained satisfy linked first order differential equations, and on eliminating terms associated with odd parity harmonics a second order system results which can be solved by established finite difference or finite element techniques. This report describes modifications incorporated in MARC/PN to allow for anisotropic scattering, and the modelling of irregular exterior boundaries in the finite element option. The latter development leads to substantial reductions in problem size, particularly for three dimensions. Also, links to an interactive graphics mesh generator (SUPERTAB) have been added. The final section of the report contains results from problems showing the effects of anisotropic scatter and the ability of the code to model irregular geometries. (author)

  4. Status of reactor core design code system in COSINE code package

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Y.; Yu, H.; Liu, Z., E-mail: yuhui@snptc.com.cn [State Nuclear Power Software Development Center, SNPTC, National Energy Key Laboratory of Nuclear Power Software (NEKLS), Beijiing (China)

    2014-07-01

    For self-reliance, COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented. The preliminary verification results are illustrated. And some special efforts, such as updated theory models and direct data access application, are also made to achieve better software product. (author)

  5. Status of reactor core design code system in COSINE code package

    International Nuclear Information System (INIS)

    Chen, Y.; Yu, H.; Liu, Z.

    2014-01-01

    For self-reliance, COre and System INtegrated Engine for design and analysis (COSINE) code package is under development in China. In this paper, recent development status of the reactor core design code system (including the lattice physics code and the core simulator) is presented. The well-established theoretical models have been implemented. The preliminary verification results are illustrated. And some special efforts, such as updated theory models and direct data access application, are also made to achieve better software product. (author)

  6. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on American National Standards Institute standard Z39.23-1983, Standard Technical Report Number (STRN) Format and Creation. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  7. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on American National Standards Institute standard Z39.23-1983, Standard Technical Report Number (STRN) Format and Creation. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  8. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data with nonlinear structure. Recently, a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations remain. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods, which are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Syrio. A program for the calculation of the inverse of a matrix; Syrio. Programa para el calculo de la inversa de una matriz

    Energy Technology Data Exchange (ETDEWEB)

    Garcia de Viedma Alonso, L.

    1963-07-01

    SYRIO is a code for the inversion of a non-singular square matrix of order not higher than 40 on the UNIVAC-UCT (SS-90). The treatment starts from the Sherman-Morrison inversion formula and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
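
    The Sherman-Morrison formula referred to above updates a known inverse after a rank-one change. One way to use it for full inversion, in the spirit described, is to start from the identity and replace one column at a time (a sketch of the general idea, not SYRIO's actual algorithm; it assumes every intermediate matrix is non-singular, which holds for the diagonally dominant test matrix below):

```python
import numpy as np

def sherman_morrison(B, u, v):
    """Given B = inv(A), return inv(A + outer(u, v))."""
    Bu = B @ u
    vB = v @ B
    denom = 1.0 + v @ Bu
    if abs(denom) < 1e-12:
        raise ValueError("update would make the matrix singular")
    return B - np.outer(Bu, vB) / denom

def invert_by_updates(A):
    """Invert A by rank-one column replacements starting from I."""
    n = A.shape[0]
    B = np.eye(n)                  # inverse of the identity
    for j in range(n):
        e_j = np.zeros(n)
        e_j[j] = 1.0
        u = A[:, j] - e_j          # replace column j of the current matrix
        B = sherman_morrison(B, u, e_j)
    return B

# Diagonally dominant test matrix: every intermediate step stays non-singular.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
err = np.max(np.abs(invert_by_updates(A) @ A - np.eye(5)))
print(f"max |B A - I| = {err:.2e}")
```

    Each of the n column updates costs O(n^2), so the whole inversion is O(n^3), comparable to elimination but organized as incremental updates.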

  10. Long-term effects of neutron absorber and fuel matrix corrosion on criticality

    International Nuclear Information System (INIS)

    Culbreth, W.G.; Zielinski, P.R.

    1994-01-01

    Proposed waste package designs will require the addition of neutron absorbing material to prevent the possibility of a sustained chain reaction occurring in the fuel in the event of water intrusion. Due to the low corrosion rates of the fuel matrix and the Zircaloy cladding, there is a possibility that the neutron absorbing material will corrode and leak from the waste container long before the subsequent release of fuel matrix material. An analysis of the release of fuel matrix and neutron absorber material based on a probabilistic model was conducted, and the results were used to prepare input to KENO-V, a neutron criticality code. The results demonstrate that, in the presence of water, the computed values of k-eff exceeded the maximum of 0.95 for an extended period of time.

  11. LUCID - an optical design and raytrace code

    International Nuclear Information System (INIS)

    Nicholas, D.J.; Duffey, K.P.

    1980-11-01

    A 2D optical design and ray trace code is described. The code can operate either as a geometric optics propagation code or provide a scalar diffraction treatment. There are numerous non-standard options within the code, including design and systems optimisation procedures. A number of illustrative problems relating to the design of optical components in the field of high power lasers are included. (author)

  12. MDL, Collineations and the Fundamental Matrix

    OpenAIRE

    Maybank , Steve; Sturm , Peter

    1999-01-01

    Scene geometry can be inferred from point correspondences between two images. The inference process includes the selection of a model. Four models are considered: background (or null), collineation, affine fundamental matrix and fundamental matrix. It is shown how Minimum Description Length (MDL) can be used to compare the different models. The main result is that there is little reason for preferring the fundamental matrix model over the collineation model, even when ...

  13. Open source Matrix Product States: Opening ways to simulate entangled many-body quantum systems in one dimension

    Science.gov (United States)

    Jaschke, Daniel; Wall, Michael L.; Carr, Lincoln D.

    2018-04-01

    Numerical simulations are a powerful tool to study quantum systems beyond exactly solvable systems lacking an analytic expression. For one-dimensional entangled quantum systems, tensor network methods, amongst them Matrix Product States (MPSs), have attracted interest from different fields of quantum physics ranging from solid state systems to quantum simulators and quantum computing. Our open source MPS code provides the community with a toolset to analyze the statics and dynamics of one-dimensional quantum systems. Here, we present our open source library, Open Source Matrix Product States (OSMPS), of MPS methods implemented in Python and Fortran2003. The library includes tools for ground state calculation and excited states via the variational ansatz. We also support ground states for infinite systems with translational invariance. Dynamics are simulated with different algorithms, including three algorithms with support for long-range interactions. Convenient features include built-in support for fermionic systems and number conservation with rotational U(1) and discrete Z2 symmetries for finite systems, as well as data parallelism with MPI. We explain the principles and techniques used in this library along with examples of how to efficiently use the general interfaces to analyze the Ising and Bose-Hubbard models. This description includes the preparation of simulations as well as dispatching and post-processing of them.
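
    The central object of the library, the matrix product state, can be illustrated by factoring a dense state vector into a chain of site tensors with successive SVDs (a generic textbook construction on an assumed random state; this is not an example of the OSMPS interface):

```python
import numpy as np

def state_to_mps(psi, L, d=2):
    """Exact MPS factorization of a state vector via successive SVDs."""
    tensors = []
    rest = psi.reshape(1, -1)            # (bond dim, remaining physical dims)
    for _ in range(L - 1):
        chi = rest.shape[0]
        mat = rest.reshape(chi * d, -1)
        U, s, Vh = np.linalg.svd(mat, full_matrices=False)
        tensors.append(U.reshape(chi, d, -1))   # site tensor (chi, d, chi')
        rest = s[:, None] * Vh           # absorb singular values to the right
    tensors.append(rest.reshape(-1, d, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the chain back into a dense state vector."""
    out = tensors[0]
    for T in tensors[1:]:
        out = np.tensordot(out, T, axes=([-1], [0]))
    return out.reshape(-1)

rng = np.random.default_rng(2)
L = 6
psi = rng.normal(size=2**L)
psi /= np.linalg.norm(psi)
mps = state_to_mps(psi, L)
recon_err = np.max(np.abs(mps_to_state(mps) - psi))
print("max reconstruction error:", recon_err)
```

    Truncating the singular value spectrum at a fixed bond dimension at each step turns this exact factorization into the compressed representation that makes one-dimensional simulations tractable.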

  14. Geographic data: Zip Codes (Shape File)

    Data.gov (United States)

    Montgomery County of Maryland — This dataset contains all zip codes in Montgomery County. Zip codes are the postal delivery areas defined by USPS. Zip codes with mailboxes only are not included. As...

  15. Stochastic-Strength-Based Damage Simulation of Ceramic Matrix Composite Laminates

    Science.gov (United States)

    Nemeth, Noel N.; Mital, Subodh K.; Murthy, Pappu L. N.; Bednarcyk, Brett A.; Pineda, Evan J.; Bhatt, Ramakrishna T.; Arnold, Steven M.

    2016-01-01

    The Finite Element Analysis-Micromechanics Analysis Code/Ceramics Analysis and Reliability Evaluation of Structures (FEAMAC/CARES) program was used to characterize and predict the progressive damage response of silicon-carbide-fiber-reinforced reaction-bonded silicon nitride matrix (SiC/RBSN) composite laminate tensile specimens. Studied were unidirectional laminates [0]_8, [10]_8, [45]_8, and [90]_8; cross-ply laminates [0_2/90_2]_s; angle-ply laminates [+45_2/-45_2]_s; double-edge-notched [0]_8 laminates; and central-hole laminates. Results correlated well with the experimental data. This work was performed as a validation and benchmarking exercise of the FEAMAC/CARES program. FEAMAC/CARES simulates stochastic-based discrete-event progressive damage of ceramic matrix composite and polymer matrix composite material structures. It couples three software programs: (1) the Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC), (2) the Ceramics Analysis and Reliability Evaluation of Structures Life Prediction Program (CARES/Life), and (3) the Abaqus finite element analysis program. MAC/GMC contributes multiscale modeling capabilities and micromechanics relations to determine stresses and deformations at the microscale of the composite material repeating-unit-cell (RUC). CARES/Life contributes statistical multiaxial failure criteria that can be applied to the individual brittle-material constituents of the RUC, and Abaqus is used to model the overall composite structure. For each FEAMAC/CARES simulation trial, the stochastic nature of brittle material strength results in random, discrete damage events that incrementally progress until ultimate structural failure.

  16. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications, including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction, including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners the code (isssrc2.f) with simpler boundary conditions is suitable for starting to run simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.

  18. Cost reducing code implementation strategies

    International Nuclear Information System (INIS)

    Kurtz, Randall L.; Griswold, Michael E.; Jones, Gary C.; Daley, Thomas J.

    1995-01-01

    Sargent and Lundy's Code consulting experience reveals a wide variety of approaches toward implementing the requirements of various nuclear Codes and Standards. This paper will describe various Code implementation strategies which assure that Code requirements are fully met in a practical and cost-effective manner. Applications to be discussed include the following: new construction; repair, replacement and modifications; and assessments and life extensions. Lessons learned and illustrative examples will be included. Preferred strategies and specific recommendations will also be addressed. Sargent and Lundy appreciate the opportunity provided by the Korea Atomic Industrial Forum and the Korean Nuclear Society to share ideas and enhance global cooperation through the exchange of information and views on relevant topics.

  19. Matrix albedo for discrete ordinates infinite-medium boundary condition

    International Nuclear Information System (INIS)

    Mathews, K.; Dishaw, J.

    2007-01-01

    Discrete ordinates problems with an infinite exterior medium (reflector) can be more efficiently computed by eliminating grid cells in the exterior medium and applying a matrix albedo boundary condition. The albedo matrix is a discretized bidirectional reflection distribution function (BRDF) that accounts for the angular quadrature set, spatial quadrature method, and spatial grid that would have been used to model a portion of the exterior medium. The method is exact in slab geometry, and could be used as an approximation in multiple dimensions or curvilinear coordinates. We present an adequate method for computing albedo matrices and demonstrate their use in verifying a discrete ordinates code in slab geometry by comparison with Ganapol's infinite medium semi-analytic TIEL benchmark. With sufficient resolution in the spatial and angular grids and iteration tolerance to yield solutions converged to 6 digits, the conventional (scalar) albedo boundary condition yielded 2-digit accuracy at the boundary, but the matrix albedo solution reproduced the benchmark scalar flux at the boundary to all 6 digits. (authors)
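
    The bookkeeping behind a matrix albedo boundary condition can be sketched in a few lines: the albedo matrix maps incoming to outgoing angular fluxes on a half-range quadrature, and collapsing the result to a ratio of partial currents recovers the conventional scalar albedo. The quadrature size and matrix entries below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy half-range quadrature: 2-point Gauss-Legendre mapped to (0, 1].
mu, w = np.polynomial.legendre.leggauss(2)
mu = (mu + 1) / 2          # direction cosines in (0, 1)
w = w / 2

# Hypothetical albedo matrix: element [i, j] is outgoing angular flux in
# direction -mu_i per unit incoming angular flux in direction +mu_j.
A = np.array([[0.30, 0.10],
              [0.15, 0.25]])

psi_in = np.array([1.0, 0.6])          # incoming angular flux
psi_out = A @ psi_in                   # matrix albedo boundary condition

# A scalar albedo collapses this to a single ratio of partial currents,
# discarding the angular shape of the reflected flux.
J_in = np.sum(w * mu * psi_in)
J_out = np.sum(w * mu * psi_out)
print(f"effective scalar albedo J_out/J_in = {J_out / J_in:.3f}")
```

    The matrix form retains the direction-to-direction coupling of the reflected flux, which is exactly the information the scalar boundary condition loses at the boundary.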

  20. Parallelization Issues and Particle-In-Cell Codes.

    Science.gov (United States)

    Elster, Anne Cathrine

    1994-01-01

    "Everything should be made as simple as possible, but not simpler." Albert Einstein. The field of parallel scientific computing has concentrated on parallelization of individual modules such as matrix solvers and factorizers. However, many applications involve several interacting modules. Our analyses of a particle-in-cell code modeling charged particles in an electric field, show that these accompanying dependencies affect data partitioning and lead to new parallelization strategies concerning processor, memory and cache utilization. Our test-bed, a KSR1, is a distributed memory machine with a globally shared addressing space. However, most of the new methods presented hold generally for hierarchical and/or distributed memory systems. We introduce a novel approach that uses dual pointers on the local particle arrays to keep the particle locations automatically partially sorted. Complexity and performance analyses with accompanying KSR benchmarks, have been included for both this scheme and for the traditional replicated grids approach. The latter approach maintains load-balance with respect to particles. However, our results demonstrate it fails to scale properly for problems with large grids (say, greater than 128-by-128) running on as few as 15 KSR nodes, since the extra storage and computation time associated with adding the grid copies, becomes significant. Our grid partitioning scheme, although harder to implement, does not need to replicate the whole grid. Consequently, it scales well for large problems on highly parallel systems. It may, however, require load balancing schemes for non-uniform particle distributions. Our dual pointer approach may facilitate this through dynamically partitioned grids. We also introduce hierarchical data structures that store neighboring grid-points within the same cache -line by reordering the grid indexing. This alignment produces a 25% savings in cache-hits for a 4-by-4 cache. 
A consideration of the input data's effect on

  1. Interface discontinuity factors in the modal Eigenspace of the multigroup diffusion matrix

    International Nuclear Information System (INIS)

    Garcia-Herranz, N.; Herrero, J.J.; Cuervo, D.; Ahnert, C.

    2011-01-01

    Interface discontinuity factors based on the Generalized Equivalence Theory are commonly used in nodal homogenized diffusion calculations so that diffusion average values approximate heterogeneous higher order solutions. In this paper, an additional form of interface correction factors is presented in the frame of the Analytic Coarse Mesh Finite Difference Method (ACMFD), based on a correction of the modal fluxes instead of the physical fluxes. In the ACMFD formulation, implemented in COBAYA3 code, the coupled multigroup diffusion equations inside a homogenized region are reduced to a set of uncoupled modal equations through diagonalization of the multigroup diffusion matrix. Then, physical fluxes are transformed into modal fluxes in the Eigenspace of the diffusion matrix. It is possible to introduce interface flux discontinuity jumps as the difference of heterogeneous and homogeneous modal fluxes instead of introducing interface discontinuity factors as the ratio of heterogeneous and homogeneous physical fluxes. The formulation in the modal space has been implemented in COBAYA3 code and assessed by comparison with solutions using classical interface discontinuity factors in the physical space. (author)
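
    The modal transformation described above can be sketched with a small diagonalization: physical fluxes are mapped into the eigenspace of the multigroup diffusion matrix, where the coupled group equations decouple. The 2-group matrix and fluxes below are hypothetical illustrations, not COBAYA3 data:

```python
import numpy as np

# Hypothetical 2-group coupling matrix (illustrative numbers only).
A = np.array([[0.12, -0.015],
              [-0.02, 0.10]])

# Diagonalize: A = V diag(lam) inv(V); the columns of V are the modes.
lam, V = np.linalg.eig(A)
Vinv = np.linalg.inv(V)

phi = np.array([1.0, 0.4])        # physical group fluxes
xi = Vinv @ phi                   # modal fluxes in the eigenspace

# In the modal space the coupled system acts component-wise:
# A @ phi equals V @ (lam * xi), i.e. each mode evolves independently.
assert np.allclose(A @ phi, V @ (lam * xi))
print("modal fluxes:", xi)
```

    Interface corrections applied to xi rather than phi are exactly the "modal space" discontinuity factors the abstract contrasts with the classical physical-flux ratios.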

  2. Verification of unfold error estimates in the unfold operator code

    International Nuclear Information System (INIS)

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. copyright 1997 American Institute of Physics
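
    The comparison performed here, a built-in error-matrix estimate versus Monte Carlo propagation of Gaussian deviates through the unfold, can be sketched for a linear toy unfold. The response functions, source spectrum, and pseudo-inverse unfold below are assumptions for illustration, not the UFO algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: 8 overlapping Gaussian response functions over 40 energy bins.
E = np.linspace(0.0, 1.0, 40)
centers = np.linspace(0.1, 0.9, 8)
R = np.exp(-0.5 * ((E[None, :] - centers[:, None]) / 0.15) ** 2)
spectrum = np.exp(-E / 0.3)              # assumed "true" source spectrum
data = R @ spectrum
sigma = 0.05 * data                      # 5% (1-sigma) data imprecision

# Linear unfold operator (pseudo-inverse) and its built-in error estimate:
# the unfold covariance follows from linear propagation of the data errors.
P = np.linalg.pinv(R)
cov_builtin = P @ np.diag(sigma**2) @ P.T

# Monte Carlo estimate: unfold many noisy realizations of the data.
samples = np.array([P @ (data + sigma * rng.normal(size=data.size))
                    for _ in range(2000)])
cov_mc = np.cov(samples.T)

ratio = np.sqrt(np.diag(cov_mc).sum() / np.diag(cov_builtin).sum())
print(f"MC / built-in total uncertainty ratio: {ratio:.3f}")
```

    For a linear unfold the two estimates agree up to Monte Carlo sampling noise; the Monte Carlo route remains available when the error-matrix method does not apply, as in the underdetermined problems mentioned above.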

  3. Inelasticity and the ASME code

    International Nuclear Information System (INIS)

    Berman, I.

    1978-01-01

    Although it may have more general applicability, this paper is specifically concerned with some aspects of plasticity for class I nuclear components that are contained in section III of the ASME Boiler and Pressure Vessel Code. It directly addresses design for components at temperatures at which creep is not a factor. A review is made of the relationship of plasticity to each of the three failure modes that the stress limits are intended to prevent. It is found that the prevention of bursting and gross distortion from a single application of pressure and the prevention of fatigue failure caused by repeated cycles of peak stresses are well supported by experimental results. The experimental verification for the rules to show that the primary plus secondary stresses shakedown to elastic behavior is not clear. Various directed efforts which could lead to greater assurance of shakedown to elastic behavior are suggested. The major approach should be a massive program to develop a test matrix of experimental information for various portions of each component of interest in the Code. (Auth.)

  4. Bar Coding and Tracking in Pathology.

    Science.gov (United States)

    Hanna, Matthew G; Pantanowitz, Liron

    2016-03-01

    Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Proposal to consistently apply the International Code of Nomenclature of Prokaryotes (ICNP) to names of the oxygenic photosynthetic bacteria (cyanobacteria), including those validly published under the International Code of Botanical Nomenclature (ICBN)/International Code of Nomenclature for algae, fungi and plants (ICN), and proposal to change Principle 2 of the ICNP.

    Science.gov (United States)

    Pinevich, Alexander V

    2015-03-01

    This taxonomic note was motivated by the recent proposal [Oren & Garrity (2014) Int J Syst Evol Microbiol 64, 309-310] to exclude the oxygenic photosynthetic bacteria (cyanobacteria) from the wording of General Consideration 5 of the International Code of Nomenclature of Prokaryotes (ICNP), which entails unilateral coverage of these prokaryotes by the International Code of Nomenclature for algae, fungi, and plants (ICN; formerly the International Code of Botanical Nomenclature, ICBN). On the basis of key viewpoints, approaches and rules in the systematics, taxonomy and nomenclature of prokaryotes it is reciprocally proposed to apply the ICNP to names of cyanobacteria including those validly published under the ICBN/ICN. For this purpose, a change to Principle 2 of the ICNP is proposed to enable validation of cyanobacterial names published under the ICBN/ICN rules. © 2015 IUMS.

  6. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction of China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computation code of the SCIENCE code package, including the description of the general structure of the package, the coupling relationship of the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  7. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  8. Matrix-exponential distributions in applied probability

    CERN Document Server

    Bladt, Mogens

    2017-01-01

    This book contains an in-depth treatment of matrix-exponential (ME) distributions and their sub-class of phase-type (PH) distributions. Loosely speaking, an ME distribution is obtained through replacing the intensity parameter in an exponential distribution by a matrix. The ME distributions can also be identified as the class of non-negative distributions with rational Laplace transforms. If the matrix has the structure of a sub-intensity matrix for a Markov jump process we obtain a PH distribution which allows for nice probabilistic interpretations facilitating the derivation of exact solutions and closed form formulas. The full potential of ME and PH unfolds in their use in stochastic modelling. Several chapters on generic applications, like renewal theory, random walks and regenerative processes, are included together with some specific examples from queueing theory and insurance risk. We emphasize our intention towards applications by including an extensive treatment on statistical methods for PH distribu...

  9. Order functions and evaluation codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy machinery of algebraic geometry.

  10. MATLAB matrix algebra

    CERN Document Server

    Pérez López, César

    2014-01-01

    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...

  11. Numerical simulation on the screw pinch by 2-D MHD pinch code 'topics' including impurities and neutrals effects

    International Nuclear Information System (INIS)

    Nagata, A.; Ashida, H.; Okamoto, M.; Hirano, K.

    1981-03-01

    Two-dimensional fluid simulation code ''TOPICS'' has been developed for the STP-2, the shock-heated screw pinch at Nagoya. It involves the effects of impurity ions and neutral atoms. In order to estimate the radiation losses, the impurity continuity equations with ionizations and recombinations are solved simultaneously with the plasma fluid equations. The results are compared with the coronal equilibrium model. It is found that the coronal equilibrium model underestimates the radiation losses from shock-heated pinch plasmas in their initial dynamic phase. The present calculations, including impurities and neutrals, show the importance of the radiation losses from the plasma of the STP-2. By introducing the anomalous resistivity caused by the ion acoustic instability, the observed magnetic field penetration is explained fairly well. (author)

  12. Pump Component Model in SPACE Code

    International Nuclear Information System (INIS)

    Kim, Byoung Jae; Kim, Kyoung Doo

    2010-08-01

    This technical report describes the pump component model in the SPACE code. A literature survey was made of pump models in existing system codes. The models embedded in the SPACE code were examined to check for conflicts with intellectual property rights. Design specifications, computer coding implementation, and test results are included in this report.

  13. Burn-up calculation of different thorium-based fuel matrixes in a thermal research reactor using MCNPX 2.6 code

    Directory of Open Access Journals (Sweden)

    Gholamzadeh Zohreh

    2014-12-01

    Decrease of the economically accessible uranium resources and the inherent proliferation resistance of thorium fuel motivate its application in nuclear power systems. Estimation of a nuclear reactor's neutronic parameters during different operational situations is of key importance for the safe operation of nuclear reactors. In the present research, thorium oxide fuel burn-up calculations for a demonstrative model of a heavy water-cooled reactor have been performed using the MCNPX 2.6 code. Neutronic parameters for three different thorium fuel matrices loaded separately in the modelled thermal core have been investigated. The isotopes 233U, 235U and 239Pu have been used separately as the fissile component of the thorium oxide fuel. Burn-up of the three different fuels has been calculated at 1 MW constant power. 135Xe and 149Sm concentration variations have been studied in the modelled core during 165 days of burn-up. Burn-up of thorium oxide enriched with 233U resulted in the least 149Sm and 135Xe production and a net production of 233U fissile material after 165 days. The negative fuel, coolant and void reactivity of this fuel assures safe operation of the modelled thermal core containing the (233U-Th)O2 matrix. Furthermore, utilisation of thorium breeder fuel demonstrates several advantages, such as good neutron economy, 233U production, and less production of long-lived, highly radiotoxic α-emitting wastes from the viewpoint of internal biological exposure.
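
    The 135Xe inventory tracked in such burn-up calculations follows the standard iodine-xenon production-destruction balance, which can be sketched with a simple explicit integration (textbook decay constants; the fission rate, flux, yields, and cross section are assumed illustrative values, not the MCNPX model):

```python
import numpy as np

# 135I -> 135Xe chain under constant flux (illustrative parameters).
lam_I = np.log(2) / (6.57 * 3600)    # 135I decay constant, 1/s
lam_X = np.log(2) / (9.14 * 3600)    # 135Xe decay constant, 1/s
gamma_I, gamma_X = 0.064, 0.002      # approximate cumulative fission yields
sigma_X = 2.6e6 * 1e-24              # 135Xe absorption cross section, cm^2
Sigma_f_phi = 1e12                   # fission rate density, fissions/cm^3/s
phi = 1e13                           # neutron flux, n/cm^2/s (assumed)

dt, t_end = 60.0, 72 * 3600          # integrate 72 hours of operation
I = X = 0.0
for _ in range(int(t_end / dt)):
    dI = gamma_I * Sigma_f_phi - lam_I * I
    dX = gamma_X * Sigma_f_phi + lam_I * I - (lam_X + sigma_X * phi) * X
    I += dI * dt
    X += dX * dt

# Closed-form equilibrium concentration for comparison.
X_eq = (gamma_I + gamma_X) * Sigma_f_phi / (lam_X + sigma_X * phi)
print(f"Xe-135 after 72 h: {X:.3e} /cm^3, equilibrium: {X_eq:.3e} /cm^3")
```

    After roughly three days at constant power the integrated concentration sits at its equilibrium value, which is the plateau behaviour the 165-day burn-up curves exhibit for xenon.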

  14. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n^2), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Second, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(n*chi^3), where chi is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension chi to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for chi >= 4.
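
    What distinguishes maximum likelihood decoding from minimum-weight decoding is that it sums probabilities over whole classes of stabilizer-equivalent errors. That summation can be shown exhaustively on a tiny code (the [[4,2,2]] code is used here as a toy stand-in for the surface code, chosen because its XXXX stabilizer makes every class two-fold degenerate; this is not the authors' matchgate or MPS algorithm):

```python
import itertools
import numpy as np

# Bit-flip errors on the [[4,2,2]] code: the ZZZZ stabilizer gives the
# syndrome (total flip parity), and the XXXX stabilizer acts trivially on
# the encoded information, so errors e and e ^ (1,1,1,1) are equivalent.
p = 0.08                               # independent bit-flip probability
stab = np.array([1, 1, 1, 1])          # XXXX stabilizer as a flip pattern

def prob(e):
    return float(np.prod(np.where(e == 1, p, 1 - p)))

def mld_class_probs(syndrome):
    """Total probability of each logical error class given the syndrome."""
    classes = {}
    for bits in itertools.product([0, 1], repeat=4):
        e = np.array(bits)
        if e.sum() % 2 != syndrome:    # ZZZZ syndrome: parity of flips
            continue
        rep = min(tuple(e), tuple(e ^ stab))   # canonical class label
        classes[rep] = classes.get(rep, 0.0) + prob(e)
    return classes

probs = mld_class_probs(syndrome=1)
for rep, q in sorted(probs.items()):
    print(rep, round(q, 6))
```

    The decoder then returns the class with the largest summed probability. On the surface code the number of classes stays small but each class is exponentially large, which is why the exact and MPS-based contraction algorithms above are needed in place of this brute-force enumeration.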

  15. Unfolding code for neutron spectrometry based on neural nets technology

    International Nuclear Information System (INIS)

    Ortiz R, J. M.; Vega C, H. R.

    2012-10-01

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the Robust Design of Artificial Neural Networks methodology. The code is easy to use, friendly and intuitive to the user. It was designed for a Bonner sphere system based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as entrance data, only seven count-rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
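
The interface such an unfolding network exposes can be sketched as a plain feed-forward pass. Only the 7-input/60-bin shape is taken from the abstract; the hidden-layer size, weights and activations below are hypothetical stand-ins for a trained network.

```python
import numpy as np

def unfold(counts, W1, b1, W2, b2):
    """Toy feed-forward unfolding: 7 Bonner-sphere count rates in,
    60 spectrum bins out. Real weights would come from prior training."""
    h = np.tanh(W1 @ counts + b1)          # hidden layer
    spectrum = np.maximum(W2 @ h + b2, 0)  # fluence must be non-negative
    return spectrum

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((32, 7)), np.zeros(32)   # hidden size 32 is arbitrary
W2, b2 = rng.standard_normal((60, 32)), np.zeros(60)
counts = rng.uniform(0.1, 1.0, 7)          # seven measured count rates
spectrum = unfold(counts, W1, b1, W2, b2)  # 60-bin unfolded spectrum
```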

  16. The FRAM code: Description and some comparisons with MGA

    International Nuclear Information System (INIS)

    Sampson, T.E.; Kelley, T.A.

    1994-01-01

    The authors describe the initial development of the FRAM gamma-ray spectrometry code for analyzing plutonium isotopics, discuss its methodology, and present some comparisons with MGA on identical items. They also present some of the features of a new Windows 3.1-based version (PC/FRAM) and describe some current measurement problems. Development of the FRAM code began in about 1985, growing out of the need at the Los Alamos TA-55 Plutonium Facility for an isotopic analysis code to give accurate results for the effective specific power of heterogeneous (Am/Pu) pyrochemical residues. These residues present a difficult challenge because the americium is present mostly in a low-Z salt matrix (AmCl3) with fines and small pieces of plutonium metal dispersed throughout the salt. Plutonium gamma rays suffer different attenuation than americium gamma rays of the same energy; this makes conventional analysis with a single relative efficiency function inaccurate for Am/Pu ratios and affects the analysis in other subtle ways

  17. In-depth, high-accuracy proteomics of sea urchin tooth organic matrix

    Directory of Open Access Journals (Sweden)

    Mann Matthias

    2008-12-01

    Full Text Available Abstract Background The organic matrix contained in biominerals plays an important role in regulating mineralization and in determining biomineral properties. However, most components of biomineral matrices remain unknown at present. In sea urchin tooth, which is an important model for developmental biology and biomineralization, only a few matrix components have been identified. The recent publication of the Strongylocentrotus purpuratus genome sequence rendered possible not only the identification of genes potentially coding for matrix proteins, but also the direct identification of proteins contained in matrices of skeletal elements by in-depth, high-accuracy proteomic analysis. Results We identified 138 proteins in the matrix of tooth powder. Only 56 of these proteins were previously identified in the matrices of test (shell and spine. Among the novel components was an interesting group of five proteins containing alanine- and proline-rich neutral or basic motifs separated by acidic glycine-rich motifs. In addition, four of the five proteins contained either one or two predicted Kazal protease inhibitor domains. The major components of tooth matrix were however largely identical to the set of spicule matrix proteins and MSP130-related proteins identified in test (shell and spine matrix. Comparison of the matrices of crushed teeth to intact teeth revealed a marked dilution of known intracrystalline matrix proteins and a concomitant increase in some intracellular proteins. Conclusion This report presents the most comprehensive list of sea urchin tooth matrix proteins available at present. The complex mixture of proteins identified may reflect many different aspects of the mineralization process. A comparison between intact tooth matrix, presumably containing odontoblast remnants, and crushed tooth matrix served to differentiate between matrix components and possible contributions of cellular remnants. Because LC-MS/MS-based methods directly

  18. GAMERA - The New Magnetospheric Code

    Science.gov (United States)

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high-order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations, ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere, including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.

  19. Modeling the curing process of thermosetting resin matrix composites

    Science.gov (United States)

    Loos, A. C.

    1986-01-01

    A model is presented for simulating the curing process of a thermosetting resin matrix composite. The model relates the cure temperature, the cure pressure, and the properties of the prepreg to the thermal, chemical, and rheological processes occurring in the composite during cure. The results calculated with the computer code developed on the basis of the model were compared with the experimental data obtained from autoclave-cured composite laminates. Good agreement between the two sets of results was obtained.

  20. Three-loop SM beta-functions for matrix Yukawa couplings

    Directory of Open Access Journals (Sweden)

    A.V. Bednyakov

    2014-10-01

    Full Text Available We present the extension of our previous results for three-loop Yukawa coupling beta-functions to the case of complex Yukawa matrices describing the flavour structure of the SM. The calculation is carried out in the context of the unbroken phase of the SM with the help of the MINCER program in a general linear gauge and cross-checked by means of the MATAD/BAMBA codes. In addition, ambiguities in Yukawa matrix beta-functions are studied.

  1. Short-Block Protograph-Based LDPC Codes

    Science.gov (United States)

    Divsalar, Dariush; Dolinar, Samuel; Jones, Christopher

    2010-01-01

    Short-block low-density parity-check (LDPC) codes of a special type are intended to be especially well suited for potential applications that include transmission of command and control data, cellular telephony, data communications in wireless local area networks, and satellite data communications. [In general, LDPC codes belong to a class of error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels.] The codes of the present special type exhibit low error floors, low bit and frame error rates, and low latency (in comparison with related prior codes). These codes also achieve a low maximum rate of undetected errors over all signal-to-noise ratios, without requiring the use of cyclic redundancy checks, which would significantly increase the overhead for short blocks. These codes have protograph representations; this is advantageous in that, for reasons that exceed the scope of this article, the applicability of protograph representations makes it possible to design high-speed iterative decoders that utilize belief-propagation algorithms.
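
The protograph construction mentioned here is conventionally realized by "lifting": each edge of a small base graph is expanded into a Z×Z circulant permutation block. The sketch below illustrates that general technique; the base matrix, shift values and lifting size are invented for illustration and are not the codes described in the record.

```python
import numpy as np

def lift(base, shifts, Z):
    """Expand a protograph base matrix: each nonzero entry becomes a
    Z x Z circulant permutation (cyclically shifted identity), each
    zero entry a Z x Z zero block."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for i in range(m):
        for j in range(n):
            if base[i, j]:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, shifts[i][j], axis=1)
    return H

# Made-up 2 x 4 protograph and circulant shifts, lifted by Z = 5
base = np.array([[1, 1, 1, 0],
                 [0, 1, 1, 1]])
shifts = [[0, 1, 3, 0],
          [0, 2, 0, 1]]
H = lift(base, shifts, Z=5)   # 10 x 20 quasi-cyclic parity-check matrix
```

Because every nonzero block is a permutation, each row of the lifted H inherits exactly the weight of the corresponding base-matrix row, which keeps the code sparse as Z grows.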

  2. Write up of the codes for microscopic models of NN and NA scattering

    International Nuclear Information System (INIS)

    Amos, K.

    1998-01-01

    This report documents the essential details of the NN and NA computer programs that culminate in the prediction of elastic and inelastic nucleon scattering observables from optical potentials generated by full folding of an effective NN interaction within the nuclear medium. That same (energy- and density-dependent) effective interaction is used as the transition operator in the distorted wave approximation (DWA) for inelastic (and charge exchange) nucleon scattering from nuclei. The report consists of four sections: 1) general remarks and program locations, 2) the t- and g-matrix codes and how to use them, 3) the effective interaction codes and how to use them, and 4) the NA codes, DWBA97 and DWBB97, and how to use them. (author)

  3. Light-water reactor safety analysis codes

    International Nuclear Information System (INIS)

    Jackson, J.F.; Ransom, V.H.; Ybarrondo, L.J.; Liles, D.R.

    1980-01-01

    A brief review of the evolution of light-water reactor safety analysis codes is presented. Included is a summary comparison of the technical capabilities of major system codes. Three recent codes are described in more detail to serve as examples of currently used techniques. Example comparisons between calculated results using these codes and experimental data are given. Finally, a brief evaluation of current code capability and future development trends is presented

  4. Preconditioned Krylov and Gauss-Seidel solutions of response matrix equations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Smith, M.A.; Yang, W.S.; Wollaber, A.

    2011-01-01

    The use of preconditioned Krylov methods is examined as an alternative to the partitioned matrix acceleration applied to red-black Gauss-Seidel (RBGS) iteration that is presently used in the variational nodal code, VARIANT. We employ the GMRES algorithm to treat non-symmetric response matrix equations. A preconditioner is formulated for the within-group diffusion equation which is equivalent to partitioned matrix acceleration of RBGS iterations. We employ the preconditioner, which closely parallels two-level p-multigrid, to improve the RBGS and GMRES algorithms. Of the accelerated algorithms, GMRES converges with less computational effort than RBGS and therefore is chosen for further development. The p-multigrid preconditioner requires response matrices with two or more degrees of freedom (DOF) per interface that are polynomials which are both orthogonal and hierarchical. It is therefore not directly applicable to very fine mesh calculations that are both slow to converge and often modeled with response matrices with only one DOF per interface. Orthogonal matrix aggregation (OMA) is introduced to circumvent this difficulty by combining N×N fine mesh response matrices with one DOF per interface into a coarse mesh response matrix with N orthogonal DOF per interface. Numerical results show that OMA used alone or in combination with p-multigrid preconditioning substantially accelerates GMRES solutions. (author)
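
The flavour of such a preconditioned Krylov solve can be conveyed with a minimal left-preconditioned GMRES in plain NumPy. This is a generic textbook implementation with a simple Jacobi preconditioner on an invented non-symmetric system, not the VARIANT code's solver or its p-multigrid preconditioner.

```python
import numpy as np

def pgmres(A, b, M_inv, maxiter=50, tol=1e-10):
    """Minimal left-preconditioned GMRES: Arnoldi iteration on
    M_inv @ A followed by a small least-squares solve."""
    n = len(b)
    r0 = M_inv @ b                       # initial guess x0 = 0
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, maxiter + 1))
    H = np.zeros((maxiter + 1, maxiter))
    Q[:, 0] = r0 / beta
    for k in range(maxiter):
        v = M_inv @ (A @ Q[:, k])
        for j in range(k + 1):           # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        e1 = np.zeros(k + 2)
        e1[0] = beta
        # minimize || beta*e1 - H y || over the Krylov subspace
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        if H[k + 1, k] < 1e-14 or np.linalg.norm(H[:k + 2, :k + 1] @ y - e1) < tol:
            break
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :k + 1] @ y

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 20)) + 20 * np.eye(20)  # non-symmetric, diagonally dominant
b = rng.standard_normal(20)
M_inv = np.diag(1.0 / np.diag(A))                    # Jacobi preconditioner
x = pgmres(A, b, M_inv)
```

A good preconditioner (here merely Jacobi; in the abstract, a p-multigrid scheme) clusters the spectrum of M⁻¹A, which is what reduces the number of GMRES iterations.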

  5. Preconditioned Krylov and Gauss-Seidel solutions of response matrix equations

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, E.E., E-mail: e-lewis@northwestern.edu [Department of Mechanical Engineering, Northwestern University, Evanston, IL (United States); Smith, M.A.; Yang, W.S.; Wollaber, A., E-mail: masmith@anl.gov, E-mail: wsyang@anl.gov, E-mail: wollaber@lanl.gov [Nuclear Engineering Division, Argonne National Laboratory, Argonne, IL (United States)

    2011-07-01

    The use of preconditioned Krylov methods is examined as an alternative to the partitioned matrix acceleration applied to red-black Gauss-Seidel (RBGS) iteration that is presently used in the variational nodal code, VARIANT. We employ the GMRES algorithm to treat non-symmetric response matrix equations. A preconditioner is formulated for the within-group diffusion equation which is equivalent to partitioned matrix acceleration of RBGS iterations. We employ the preconditioner, which closely parallels two-level p-multigrid, to improve the RBGS and GMRES algorithms. Of the accelerated algorithms, GMRES converges with less computational effort than RBGS and therefore is chosen for further development. The p-multigrid preconditioner requires response matrices with two or more degrees of freedom (DOF) per interface that are polynomials which are both orthogonal and hierarchical. It is therefore not directly applicable to very fine mesh calculations that are both slow to converge and often modeled with response matrices with only one DOF per interface. Orthogonal matrix aggregation (OMA) is introduced to circumvent this difficulty by combining N×N fine mesh response matrices with one DOF per interface into a coarse mesh response matrix with N orthogonal DOF per interface. Numerical results show that OMA used alone or in combination with p-multigrid preconditioning substantially accelerates GMRES solutions. (author)

  6. Computing eigenvalue sensitivity coefficients to nuclear data based on the CLUTCH method with RMC code

    International Nuclear Information System (INIS)

    Qiu, Yishu; She, Ding; Tang, Xiao; Wang, Kan; Liang, Jingang

    2016-01-01

    Highlights: • A new algorithm is proposed to reduce memory consumption for sensitivity analysis. • The fission matrix method is used to generate adjoint fission source distributions. • Sensitivity analysis is performed on a detailed 3D full-core benchmark with RMC. - Abstract: Recently, there is a need to develop advanced methods of computing eigenvalue sensitivity coefficients to nuclear data in continuous-energy Monte Carlo codes. One of these methods is the iterated fission probability (IFP) method, which is adopted by most of the Monte Carlo codes that have the capability of computing sensitivity coefficients, including the Reactor Monte Carlo code RMC. Though theoretically accurate, the IFP method faces the challenge of huge memory consumption. It may therefore sometimes produce poor sensitivity coefficients, since the number of particles in each active cycle is not sufficient due to the limitation of computer memory capacity. In this work, two algorithms of the Contribution-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) method, namely, the collision-event-based algorithm (C-CLUTCH), which is also implemented in SCALE, and the fission-event-based algorithm (F-CLUTCH), which is put forward in this work, are investigated and implemented in RMC to reduce memory requirements for computing eigenvalue sensitivity coefficients. While the C-CLUTCH algorithm requires storing the relevant reaction rates of every collision, the F-CLUTCH algorithm only stores the relevant reaction rates of every fission point. In addition, the fission matrix method is put forward to generate the adjoint fission source distribution for the CLUTCH method to compute sensitivity coefficients. These newly proposed approaches implemented in the RMC code are verified by a SF96 lattice model and the MIT BEAVRS benchmark problem. The numerical results indicate the accuracy of the F-CLUTCH algorithm is the same as the C
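
The fission-matrix route to an adjoint source can be illustrated in a few lines: the adjoint fission source is the dominant left eigenvector of the fission matrix, i.e. the dominant eigenvector of its transpose, obtainable by power iteration. The 3×3 matrix below is invented for illustration; in practice the fission matrix would be tallied region-to-region during the Monte Carlo run.

```python
import numpy as np

def adjoint_source(F, iters=500):
    """Power iteration on F.T: the dominant left eigenvector of the
    fission matrix F serves as the adjoint fission source."""
    s = np.ones(F.shape[0])
    for _ in range(iters):
        s = F.T @ s
        s /= np.linalg.norm(s)
    k = s @ (F.T @ s)   # Rayleigh quotient ~ dominant eigenvalue
    return s, k

F = np.array([[0.6, 0.3, 0.1],   # made-up region-to-region
              [0.2, 0.5, 0.2],   # fission matrix
              [0.1, 0.3, 0.6]])
s_adj, k_eff = adjoint_source(F)
```

By Perron-Frobenius, a strictly positive fission matrix has a unique positive dominant eigenvector, so the iteration converges to a physically meaningful (everywhere positive) adjoint source.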

  7. R-matrix parameters in reactor applications

    International Nuclear Information System (INIS)

    Hwang, R.N.

    1992-01-01

    The key role of the resonance phenomena in reactor applications manifests itself through the self-shielding effect. The basic issue involves the application of the microscopic cross sections in macroscopic reactor lattices consisting of many nuclides that exhibit resonance behavior. Preserving the fidelity of such an effect requires accurate calculation of the cross sections and the neutron flux in great detail. This is clearly not possible without viable resonance data. The recently released ENDF/B-VI resonance data in the resolved range especially reflect dramatic improvement in two important areas: namely, the significant extension of the resolved resonance ranges accompanied by the availability of R-matrix parameters of the Reich-Moore type. Aside from the obvious increase in computing time required for the significantly greater number of resonances, the main concern is the compatibility of the Reich-Moore representation with the existing reactor processing codes which, until now, are based on the traditional cross section formalisms. The purpose of this paper is to summarize our recent efforts to facilitate implementation of the proposed methods into the production codes at ANL

  8. Form of multicomponent Fickian diffusion coefficients matrix

    International Nuclear Information System (INIS)

    Wambui Mutoru, J.; Firoozabadi, Abbas

    2011-01-01

    Highlights: → Irreversible thermodynamics establishes the form of multicomponent diffusion coefficients. → Phenomenological coefficients and thermodynamic factors affect the sign of diffusion coefficients. → Negative diagonal elements of the diffusion coefficients matrix can occur in non-ideal mixtures. → Eigenvalues of the matrix of Fickian diffusion coefficients may not all be real. - Abstract: The form of the multicomponent Fickian diffusion coefficients matrix in thermodynamically stable mixtures is established based on the form of the phenomenological coefficients and thermodynamic factors. While the phenomenological coefficients form a symmetric positive definite matrix, the determinant of the thermodynamic factors matrix is positive. As a result, the Fickian diffusion coefficients matrix has a positive determinant, but its elements - including diagonal elements - can be negative. A comprehensive survey of reported diffusion coefficient data for ternary and quaternary mixtures confirms that, invariably, the determinant of the Fickian diffusion coefficients matrix is positive.
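
The stated form is easy to verify numerically: take any symmetric positive definite matrix L of phenomenological coefficients and any thermodynamic-factor matrix Γ with positive determinant; the product D = LΓ then has a positive determinant even when a diagonal element goes negative. The 2×2 numbers below are invented precisely to exhibit that, and are not taken from any real mixture.

```python
import numpy as np

# Symmetric positive definite phenomenological coefficients (det = 3)
L = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Thermodynamic factors: only det > 0 is required, entries may be negative
Gamma = np.array([[-1.0, 3.0],
                  [-2.0, 5.0]])   # det = (-1)(5) - (3)(-2) = 1 > 0

D = L @ Gamma                     # Fickian diffusion coefficients matrix
# det(D) = det(L) * det(Gamma) > 0, yet here D[0, 0] = -4 is negative
```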

  9. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations. Additionally

  10. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

    Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the just mentioned capabilities, the MCOR code newest enhancements focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system; from its design and development to its latest improvements and further ameliorations

  11. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  12. Lattice polytopes in coding theory

    Directory of Open Access Journals (Sweden)

    Ivan Soprunov

    2015-05-01

    Full Text Available In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.

  13. A method of non-contact reading code based on computer vision

    Science.gov (United States)

    Zhang, Chunsen; Zong, Xiaoyu; Guo, Bingxuan

    2018-03-01

    To guarantee the security of information exchange between an internal and an external network (a trusted and an un-trusted network), a non-contact code-reading method based on machine vision is proposed, which differs from existing physical network-isolation methods. Using computer monitors, a camera, and other equipment, the information to be exchanged is processed in several steps: it is encoded as an image, the standard image is generated and displayed, the actual image is captured, the homography matrix is calculated, the image distortion is corrected using this calibration, and the image is decoded. This achieves secure, non-contact, one-way transmission of computer information between the internal and external networks. The effectiveness of the proposed method is verified by experiments on real computer text data; a data-transfer speed of 24 kb/s is achieved. The experiments show that this algorithm offers high security, high speed and little loss of information, and can meet the daily needs of confidentiality departments to update data effectively and reliably. It solves the difficulty of exchanging computer information between a secret network and a non-secret network, with distinctive originality, practicability, and practical research value.
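
The homography-and-correction step of such a pipeline can be sketched with a textbook 4-point direct linear transform (DLT). This is a generic NumPy illustration, not the paper's implementation, and the point coordinates are invented: the four corners of the photographed (distorted) screen are mapped back onto the ideal image frame.

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (4 correspondences, direct linear transform via SVD)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of the 8x9 DLT system
    return H / H[2, 2]

def warp(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Corners of the photographed (distorted) screen vs. the ideal image
src = [(0.0, 0.0), (1.0, 0.1), (1.1, 1.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography(src, dst)
```

Once H is known, every captured pixel can be warped into the ideal frame before decoding, which is the "distortion correction in calibration" step the abstract describes.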

  14. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    2007-01-01

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both...... with working Matlab code and applications in speech processing....

  15. 4TH International Conference on High-Temperature Ceramic Matrix Composites

    National Research Council Canada - National Science Library

    2001-01-01

    .... Topics to be covered include fibers, interfaces, interphases, non-oxide ceramic matrix composites, oxide/oxide ceramic matrix composites, coatings, and applications of high-temperature ceramic matrix...

  16. Development of the PARVMEC Code for Rapid Analysis of 3D MHD Equilibrium

    Science.gov (United States)

    Seal, Sudip; Hirshman, Steven; Cianciosa, Mark; Wingen, Andreas; Unterberg, Ezekiel; Wilcox, Robert; ORNL Collaboration

    2015-11-01

    The VMEC three-dimensional (3D) MHD equilibrium code has been used extensively for designing stellarator experiments and analyzing experimental data in such strongly 3D systems. Recent applications of VMEC include 2D systems such as tokamaks (in particular, the D3D experiment), where application of very small (δB/B ~ 10^-3) 3D resonant magnetic field perturbations renders the underlying assumption of axisymmetry invalid. In order to facilitate the rapid analysis of such equilibria (for example, for reconstruction purposes), we have undertaken the task of parallelizing the VMEC code (PARVMEC) to produce a scalable, rapidly convergent equilibrium code for use on parallel distributed-memory platforms. The parallelization task naturally splits into three distinct parts: 1) the radial surfaces in the fixed-boundary part of the calculation; 2) the two 2D angular meshes needed to compute the Green's function integrals over the plasma boundary for the free-boundary part of the code; and 3) the block tridiagonal matrix needed to compute the full (3D) preconditioner near the final equilibrium state. Preliminary results show that scalability is achieved for tasks 1 and 3, with task 2 nearing completion. The impact of this work on the rapid reconstruction of D3D plasmas using PARVMEC in the V3FIT code will be discussed. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.

  17. The 1992 ENDF Pre-processing codes

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1992-01-01

    This document summarizes the 1992 version of the ENDF pre-processing codes which are required for processing evaluated nuclear data coded in the ENDF-4, ENDF-5, or ENDF-6 formats. Included are the codes CONVERT, MERGER, LINEAR, RECENT, SIGMA1, LEGEND, FIXUP, GROUPIE, DICTION, MIXER, VIRGIN, COMPLOT, EVALPLOT, and RELABEL. Some of the functions of these codes are: to calculate cross-sections from resonance parameters; to calculate angular distributions, group averages, mixtures of cross-sections, etc.; and to produce graphical plots and data comparisons. The codes are designed to operate on virtually any type of computer, including PCs. They are available from the IAEA Nuclear Data Section, free of charge upon request, on magnetic tape or a set of HD diskettes. (author)

  18. Tandem Mirror Reactor Systems Code (Version I)

    International Nuclear Information System (INIS)

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost

  19. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  20. The S-matrix of superstring field theory

    International Nuclear Information System (INIS)

    Konopka, Sebastian

    2015-01-01

    We show that the classical S-matrix calculated from the recently proposed superstring field theories gives the correct perturbative S-matrix. In the proof we exploit the fact that the vertices are obtained by a field redefinition in the large Hilbert space. The result extends to the NS-NS subsector of type II superstring field theory and to the recently found equations of motion for the Ramond fields. In addition, our proof implies that the S-matrix obtained from Berkovits’ WZW-like string field theory agrees with the perturbative S-matrix to all orders.

  1. Worst configurations (instantons) for compressed sensing over reals: a channel coding approach

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Vasic, Bane

    2010-01-01

    We consider the Linear Programming (LP) solution of a Compressed Sensing (CS) problem over the reals, also known as the Basis Pursuit (BasP) algorithm. BasP admits an interpretation as a channel-coding problem, and it guarantees error-free reconstruction over the reals for a properly chosen measurement matrix and sufficiently sparse error vectors. In this manuscript, we examine how BasP performs on a given measurement matrix and develop a technique to discover sparse vectors for which BasP fails. The resulting algorithm is a generalization of our previous results on finding the most probable error patterns, so-called instantons, degrading the performance of a finite-size Low-Density Parity-Check (LDPC) code in the error-floor regime. BasP fails when its output differs from the actual error pattern. We design a CS-Instanton Search Algorithm (ISA) generating a sparse vector, called a CS-instanton, such that BasP fails on the instanton, while its action on any modification of the CS-instanton that decreases a properly defined norm is successful. We also prove that, given a sufficiently dense random input for the error vector, the CS-ISA converges to an instanton in a small finite number of steps. The performance of the CS-ISA is tested on the example of a randomly generated 512 × 120 matrix, for which it outputs the shortest instanton (error-vector) pattern, of length 11.
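    For toy-sized instances, the BasP minimizer can be computed exactly by enumeration: an optimum of min ‖x‖₁ subject to Ax = y can always be taken with support on at most m columns (a basic solution of the LP), so checking every size-m support suffices. The sketch below uses this brute-force principle, not the LP solver one would use in practice, and the matrix and vectors are invented for illustration:

```python
from itertools import combinations

def solve_square(M, b):
    """Gauss-Jordan elimination with partial pivoting; returns None if singular."""
    n = len(b)
    A = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < 1e-12:
            return None
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

def basis_pursuit_toy(A, y):
    """Exact min-l1 solution of Ax = y by enumerating supports of size m."""
    m, n = len(A), len(A[0])
    best, best_norm = None, float("inf")
    for support in combinations(range(n), m):
        M = [[A[i][j] for j in support] for i in range(m)]
        xs = solve_square(M, y)
        if xs is None:
            continue
        norm = sum(abs(v) for v in xs)
        if norm < best_norm - 1e-12:
            x = [0.0] * n
            for j, v in zip(support, xs):
                x[j] = v
            best, best_norm = x, norm
    return best
```

    For example, basis_pursuit_toy([[1, 0, 1, 1], [0, 1, 1, -1]], [2.0, 2.0]) returns [0.0, 0.0, 2.0, 0.0]: the 1-sparse error pattern is recovered exactly. In this language, an instanton is a sparse vector for which such recovery fails.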

  2. Permanent phonetic identification code for radiation workers

    International Nuclear Information System (INIS)

    Khatua, R.; Somasundaram, S.; Srivastava, D.N.

    1987-01-01

    This report describes a system of self-checking short and easily memorisable 4-digit 'Permanent Phonetic Radiation Code' (PPRC) using radix 128 for Indians occupationally exposed to radiation, to facilitate entry of all radiation dose data pertaining to an individual in a single record of a file. The logic of PPRC is computer compatible. The necessary computer program has been developed in Health Physics Division for printing the PPRCs in Devanagari script through dot-matrix printers for making it understandable to the majority of the persons concerned. (author)
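    The report gives the PPRC logic only at a high level; as an assumed illustration of the arithmetic of a self-checking radix-128 code, the sketch below packs an integer ID into three base-128 data digits plus one checksum digit. The checksum rule is an assumption for the example, and the mapping of digits to Devanagari phonetic syllables is omitted (digits are shown as plain integers):

```python
RADIX = 128

def encode_pprc(worker_id):
    """Map an integer ID (< 128**3) to 3 radix-128 digits plus a check digit."""
    if not 0 <= worker_id < RADIX ** 3:
        raise ValueError("id out of range")
    digits = []
    n = worker_id
    for _ in range(3):
        digits.append(n % RADIX)
        n //= RADIX
    digits.reverse()
    digits.append(sum(digits) % RADIX)  # assumed self-check digit
    return digits

def decode_pprc(digits):
    """Recover the ID, raising on a corrupted code (the 'self-checking' property)."""
    *data, check = digits
    if sum(data) % RADIX != check:
        raise ValueError("check digit mismatch")
    n = 0
    for d in data:
        n = n * RADIX + d
    return n
```

    A round trip reproduces the ID, and any single corrupted digit is detected by the checksum.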

  3. Implementation and evaluation of a simulation curriculum for paediatric residency programs including just-in-time in situ mock codes.

    Science.gov (United States)

    Sam, Jonathan; Pierse, Michael; Al-Qahtani, Abdullah; Cheng, Adam

    2012-02-01

    To develop, implement and evaluate a simulation-based acute care curriculum in a paediatric residency program using an integrated and longitudinal approach. Curriculum framework consisting of three modular, year-specific courses and longitudinal just-in-time, in situ mock codes. Paediatric residency program at BC Children's Hospital, Vancouver, British Columbia. The three year-specific courses focused on the critical first 5 min, complex medical management and crisis resource management, respectively. The just-in-time in situ mock codes simulated the acute deterioration of an existing ward patient, prepared the actual multidisciplinary code team, and primed the surrounding crisis support systems. Each curriculum component was evaluated with surveys using a five-point Likert scale. A total of 40 resident surveys were completed after each of the modular courses, and an additional 28 surveys were completed for the overall simulation curriculum. The highest Likert scores were for hands-on skill stations, immersive simulation environment and crisis resource management teaching. Survey results also suggested that just-in-time mock codes were realistic, reinforced learning, and prepared ward teams for patient deterioration. A simulation-based acute care curriculum was successfully integrated into a paediatric residency program. It provides a model for integrating simulation-based learning into other training programs, as well as a model for any hospital that wishes to improve paediatric resuscitation outcomes using just-in-time in situ mock codes.

  4. Utility experience in code updating of equipment built to 1974 code, Section 3, Subsection NF

    International Nuclear Information System (INIS)

    Rao, K.R.; Deshpande, N.

    1990-01-01

    This paper addresses changes to ASME Code Subsection NF and reconciles the differences between the updated codes and the as-built construction code, ASME Section III, 1974, to which several nuclear plants have been built. Since Section III is revised every three years and replacement parts complying with the construction code are invariably not available from the plant stock inventory, parts must be procured from vendors who comply with the requirements of the latest codes. Aspects of the ASME Code which reflect Subsection NF are identified and compared with the later Code editions and addenda, especially up to and including the 1974 ASME Code used as the basis for the plant qualification. The concern of the regulatory agencies is that if later code allowables and provisions are adopted, it is possible to reduce the safety margins of the construction code. Areas of concern are highlighted, and the specific changes of later codes are discerned whose adoption would not sacrifice the intended safety margins of the codes to which the plants are licensed.

  5. Conceptual OOP design of Pilot Code for Two-Fluid, Three-field Model with C++ 6.0

    Energy Technology Data Exchange (ETDEWEB)

    Chung, B. D.; Lee, Y. J

    2006-09-15

    To establish the concept of object-oriented programming (OOP) design for a reactor safety analysis code, a preliminary OOP design for the PILOT code, which is based on a one-dimensional, two-fluid, three-field model, has been attempted with C++ language features. Microsoft C++ has been used since it is available as groupware in KAERI. The language can be mixed with Compaq Visual Fortran 6.6 on the Visual Studio platform. In the development platform, C++ has been used as the main language and Fortran as a mixed language called from the C++ main driver program. The mixed-language environment is a specific feature provided in Visual Studio. Existing Fortran source was utilized for the input routine that reads the steam table from a generated file and for the steam property calculation routine. The calling convention and argument passing from the C++ driver were corrected. Mathematical routines, such as matrix inversion and a tridiagonal matrix solver, have been reused as PILOT Fortran routines. The simple volumes and junctions utilized in the PILOT code can be treated as objects, since they are the basic construction elements of the code system. Other routines for the overall solution scheme have been realized as procedural C functions. The conceptual design, which consists of hydraulic loop, component, volume, and junction classes, is described in the appendix in order to give the essential OOP structure of a system safety analysis code. The attempt shows that many parts of a system analysis code can be expressed as objects, although the overall structure should be maintained as procedural functions. The encapsulation of data and functions within an object can provide many benefits in the programming of a system code.

  6. Conceptual OOP design of Pilot Code for Two-Fluid, Three-field Model with C++ 6.0

    International Nuclear Information System (INIS)

    Chung, B. D.; Lee, Y. J.

    2006-09-01

    To establish the concept of object-oriented programming (OOP) design for a reactor safety analysis code, a preliminary OOP design for the PILOT code, which is based on a one-dimensional, two-fluid, three-field model, has been attempted with C++ language features. Microsoft C++ has been used since it is available as groupware in KAERI. The language can be mixed with Compaq Visual Fortran 6.6 on the Visual Studio platform. In the development platform, C++ has been used as the main language and Fortran as a mixed language called from the C++ main driver program. The mixed-language environment is a specific feature provided in Visual Studio. Existing Fortran source was utilized for the input routine that reads the steam table from a generated file and for the steam property calculation routine. The calling convention and argument passing from the C++ driver were corrected. Mathematical routines, such as matrix inversion and a tridiagonal matrix solver, have been reused as PILOT Fortran routines. The simple volumes and junctions utilized in the PILOT code can be treated as objects, since they are the basic construction elements of the code system. Other routines for the overall solution scheme have been realized as procedural C functions. The conceptual design, which consists of hydraulic loop, component, volume, and junction classes, is described in the appendix in order to give the essential OOP structure of a system safety analysis code. The attempt shows that many parts of a system analysis code can be expressed as objects, although the overall structure should be maintained as procedural functions. The encapsulation of data and functions within an object can provide many benefits in the programming of a system code.
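    As a language-agnostic sketch of the central idea, volumes and junctions as encapsulated objects driven by a procedural solution scheme, consider the following (the names are illustrative only; the actual PILOT design is in C++ with Fortran kernels):

```python
class Volume:
    """Control volume with encapsulated state (a single pressure as a stand-in)."""
    def __init__(self, pressure):
        self.pressure = pressure

class Junction:
    """Connects two volumes; computes a flow driven by the pressure difference."""
    def __init__(self, upstream, downstream, conductance):
        self.up, self.down, self.k = upstream, downstream, conductance

    def flow(self):
        return self.k * (self.up.pressure - self.down.pressure)

def advance(junctions, dt):
    # Procedural driver: the overall solution scheme stays a plain function,
    # mirroring the paper's mix of objects and procedure-style routines.
    for j in junctions:
        q = j.flow() * dt
        j.up.pressure -= q
        j.down.pressure += q
```

    A single explicit step equilibrates two connected volumes while conserving their total, illustrating how the object boundaries (volume state, junction flux) separate cleanly from the procedural time-advance.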

  7. The algebras of large N matrix mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Halpern, M.B.; Schwartz, C.

    1999-09-16

    Extending early work, we formulate the large N matrix mechanics of general bosonic, fermionic and supersymmetric matrix models, including Matrix theory: The Hamiltonian framework of large N matrix mechanics provides a natural setting in which to study the algebras of the large N limit, including (reduced) Lie algebras, (reduced) supersymmetry algebras and free algebras. We find in particular a broad array of new free algebras which we call symmetric Cuntz algebras, interacting symmetric Cuntz algebras, symmetric Bose/Fermi/Cuntz algebras and symmetric Cuntz superalgebras, and we discuss the role of these algebras in solving the large N theory. Most important, the interacting Cuntz algebras are associated to a set of new (hidden!) local quantities which are generically conserved only at large N. A number of other new large N phenomena are also observed, including the intrinsic nonlocality of the (reduced) trace class operators of the theory and a closely related large N field identification phenomenon which is associated to another set (this time nonlocal) of new conserved quantities at large N.

  8. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Directory of Open Access Journals (Sweden)

    Aaron T L Lun

    2018-05-01

    Full Text Available Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.

  9. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types.

    Science.gov (United States)

    Lun, Aaron T L; Pagès, Hervé; Smith, Mike L

    2018-05-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set.

  10. beachmat: A Bioconductor C++ API for accessing high-throughput biological data from a variety of R matrix types

    Science.gov (United States)

    Pagès, Hervé

    2018-01-01

    Biological experiments involving genomics or other high-throughput assays typically yield a data matrix that can be explored and analyzed using the R programming language with packages from the Bioconductor project. Improvements in the throughput of these assays have resulted in an explosion of data even from routine experiments, which poses a challenge to the existing computational infrastructure for statistical data analysis. For example, single-cell RNA sequencing (scRNA-seq) experiments frequently generate large matrices containing expression values for each gene in each cell, requiring sparse or file-backed representations for memory-efficient manipulation in R. These alternative representations are not easily compatible with high-performance C++ code used for computationally intensive tasks in existing R/Bioconductor packages. Here, we describe a C++ interface named beachmat, which enables agnostic data access from various matrix representations. This allows package developers to write efficient C++ code that is interoperable with dense, sparse and file-backed matrices, amongst others. We evaluated the performance of beachmat for accessing data from each matrix representation using both simulated and real scRNA-seq data, and defined a clear memory/speed trade-off to motivate the choice of an appropriate representation. We also demonstrate how beachmat can be incorporated into the code of other packages to drive analyses of a very large scRNA-seq data set. PMID:29723188
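    The design idea of beachmat, one algorithm written once against a common access interface over several matrix representations, can be sketched with plain duck typing (the classes and method names below are invented for illustration and are not beachmat's actual C++ API):

```python
class DenseMatrix:
    """Row-major dense storage."""
    def __init__(self, rows):
        self.rows = rows
        self.shape = (len(rows), len(rows[0]))

    def get(self, i, j):
        return self.rows[i][j]

class SparseMatrix:
    """Dict-of-keys storage: only nonzero entries are kept."""
    def __init__(self, shape, entries):
        self.shape, self.entries = shape, dict(entries)

    def get(self, i, j):
        return self.entries.get((i, j), 0.0)

def column_sums(mat):
    # Written once against the common interface, oblivious to the
    # underlying representation -- the beachmat-style abstraction.
    nrow, ncol = mat.shape
    return [sum(mat.get(i, j) for i in range(nrow)) for j in range(ncol)]
```

    The same `column_sums` then works unchanged on either backend; beachmat applies the same principle in C++, where avoiding per-element virtual dispatch is what makes the abstraction fast.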

  11. Development of a large-scale general purpose two-phase flow analysis code

    International Nuclear Information System (INIS)

    Terasaka, Haruo; Shimizu, Sensuke

    2001-01-01

    A general purpose three-dimensional two-phase flow analysis code has been developed for solving large-scale problems in industrial fields. The code uses a two-fluid model to describe the conservation equations for two-phase flow in order to be applicable to various phenomena. Complicated geometrical conditions are modeled by the FAVOR method in structured grid systems, and the discretization equations are solved by a modified SIMPLEST scheme. To reduce computing time, the matrix solver for the pressure correction equation is parallelized with OpenMP. Results of numerical examples show that accurate solutions can be obtained efficiently and stably. (author)
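    As a minimal sketch of the kind of iterative kernel used for a pressure correction equation, here is Jacobi iteration on an invented 1D Poisson problem. It is far simpler than the production solver, but it shares the property that each sweep updates every cell from old values only, which is what makes an OpenMP parallelization of the sweep trivial:

```python
def jacobi_poisson_1d(f, h, iters):
    """Jacobi iterations for -p'' = f on interior points, with p = 0 at both ends."""
    n = len(f)
    p = [0.0] * n
    for _ in range(iters):
        # Each new value depends only on the previous iterate, so the
        # comprehension below could be split across threads without races.
        left = [0.0] + p[:-1]
        right = p[1:] + [0.0]
        p = [0.5 * (left[i] + right[i] + h * h * f[i]) for i in range(n)]
    return p
```

    For f ≡ 1 on the unit interval the converged solution is p(x) = x(1-x)/2, which the iteration approaches geometrically.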

  12. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the within-scan brain networks resulting from the different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior classification performance of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture the underlying source processes better than those which allow inexhaustible local processes, such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
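    Of the factorizations compared, NMF is the simplest to write down: the Lee-Seung multiplicative updates keep both factors nonnegative while never increasing the squared reconstruction error. A pure-Python sketch on an invented toy matrix (not the fMRI pipeline of the paper):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def frob_error(V, W, H):
    R = matmul(W, H)
    return sum((V[i][j] - R[i][j]) ** 2
               for i in range(len(V)) for j in range(len(V[0])))

def nmf(V, W, H, iters=100, eps=1e-9):
    """Lee-Seung multiplicative updates for V ~ W @ H with nonnegative factors."""
    for _ in range(iters):
        Wt = list(map(list, zip(*W)))
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(H[0]))]
             for i in range(len(H))]
        Ht = list(map(list, zip(*H)))
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(len(W[0]))]
             for i in range(len(W))]
    return W, H
```

    Because the updates are multiplicative, positivity of the initial factors is preserved automatically, which is exactly the property that suppresses negative BOLD signal in the NMF framework.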

  13. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross-section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tab.

  14. Description of the COMRADEX code

    International Nuclear Information System (INIS)

    Spangler, G.W.; Boling, M.; Rhoades, W.A.; Willis, C.A.

    1967-01-01

    The COMRADEX Code is discussed briefly and instructions are provided for the use of the code. The subject code was developed for calculating doses from hypothetical power reactor accidents. It permits the user to analyze four successive levels of containment with time-varying leak rates. Filtration, cleanup, fallout and plateout in each containment shell can also be analyzed. The doses calculated include the direct gamma dose from the containment building, the internal doses to as many as 14 organs including the thyroid, bone, lung, etc. from inhaling the contaminated air, and the external gamma doses from the cloud. While further improvements are needed, such as a provision for calculating doses from fallout, rainout and washout, the present code capabilities have a wide range of applicability for reactor accident analysis
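    The successive containment levels amount to a chain of compartments coupled by leak rates. A minimal sketch with constant first-order rates and explicit Euler time stepping (invented numbers; COMRADEX itself handles time-varying leak rates, filtration, cleanup, fallout and plateout):

```python
def leak_chain(q0, lambdas, dt, steps):
    """Activity in a chain of containment shells with first-order leak rates.

    q0: initial activity in the innermost shell; lambdas[i]: leak rate (1/s)
    out of shell i.  Returns per-shell activity and cumulative release to
    the environment from the outermost shell.
    """
    q = [q0] + [0.0] * (len(lambdas) - 1)
    released = 0.0
    for _ in range(steps):
        # Explicit Euler: flows are evaluated from the current state.
        flows = [lambdas[i] * q[i] * dt for i in range(len(q))]
        for i, f in enumerate(flows):
            q[i] -= f
            if i + 1 < len(q):
                q[i + 1] += f
            else:
                released += f
    return q, released
```

    By construction every unit of activity removed from one shell reappears in the next (or in the environment), so the total is conserved, a useful sanity check for any such compartment model.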

  15. Data exchange between zero dimensional code and physics platform in the CFETR integrated system code

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Guoliang [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Shi, Nan [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Zhou, Yifu; Mao, Shifeng [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Jian, Xiang [State Key Laboratory of Advanced Electromagnetic Engineering and Technology, School of Electrical and Electronics Engineering, Huazhong University of Science and Technology, Wuhan 430074 (China); Chen, Jiale [Institute of Plasma Physics, Chinese Academy of Sciences, No. 350 Shushanhu Road, Hefei (China); Liu, Li; Chan, Vincent [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China); Ye, Minyou, E-mail: yemy@ustc.edu.cn [School of Nuclear Science and Technology, University of Science and Technology of China, Hefei 230026 China (China)

    2016-11-01

    Highlights: • The workflow of the zero dimensional code and the multi-dimensional physics platform of the CFETR integrated system code is introduced. • The iteration process among the codes in the physics platform is described. • The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters, is discussed. - Abstract: The China Fusion Engineering Test Reactor (CFETR) integrated system code contains three parts: a zero dimensional code, a physics platform and an engineering platform. We use the zero dimensional code to identify a set of preliminary physics and engineering parameters for CFETR, which is used as input to initiate multi-dimensional studies using the physics and engineering platforms for design, verification and validation. Effective data exchange between the zero dimensional code and the physics platform is critical for the optimization of the CFETR design. For example, in evaluating the impact of impurity radiation on core performance, an open field line code is used to calculate the impurity transport from the first-wall boundary to the pedestal. The impurity particles in the pedestal are used as boundary conditions in a transport code for calculating impurity transport in the core plasma and the impact of core radiation on core performance. Comparison of the results from the multi-dimensional study with those from the zero dimensional code is used to further refine the controlled radiation model. The data transfer between the zero dimensional code and the physics platform, including data iteration and validation, and justification for performance parameters, will be presented in this paper.

  16. Fully-differential NNLO predictions for vector-boson pair production with MATRIX

    CERN Document Server

    Wiesemann, Marius; Kallweit, Stefan; Rathlev, Dirk

    2016-01-01

    We review the computations of the next-to-next-to-leading order (NNLO) QCD corrections to vector-boson pair production processes in proton–proton collisions and their implementation in the numerical code MATRIX. Our calculations include the leptonic decays of W and Z bosons, consistently taking into account all spin correlations, off-shell effects and non-resonant contributions. For massive vector-boson pairs we show inclusive cross sections, applying the respective mass windows chosen by ATLAS and CMS to define Z bosons from their leptonic decay products, as well as total cross sections for stable bosons. Moreover, we provide samples of differential distributions in fiducial phase-space regions inspired by typical selection cuts used by the LHC experiments. For the vast majority of measurements, the inclusion of NNLO corrections significantly improves the agreement of the Standard Model predictions with data.

  17. Development of a sandplay production analysis evaluation matrix for children with tic disorders

    Institute of Scientific and Technical Information of China (English)

    马红霞; 庞斯靖; 章小雷; 黄钢; 陈毅怡

    2017-01-01

    Objective To develop a sandplay analysis evaluation matrix for children with tic disorders. Methods 113 children were chosen as the study subjects. Based on grounded theory, open coding, correlated coding and core coding were carried out on the coding elements of the sandplay process to obtain third-level, second-level and first-level evaluation codes. The reliability and validity of the codes were tested. Results The sandplay analysis evaluation matrix for children with tic disorders was established, including 48 third-level, 17 second-level and 3 first-level evaluation codes. The codes were proved to be reliable and valid through comparison and conditional coding with query. Conclusion The sandplay analysis evaluation matrix for children with tic disorders, established on the basis of grounded theory, is reliable and operable. It can be used as a tool to dynamically assess the psychological and behavioral problems of children with tic disorders.

  18. Test and intercomparisons of data fitting with general least squares code GMA versus Bayesian code GLUCS

    International Nuclear Information System (INIS)

    Pronyaev, V.G.

    2003-01-01

    Data fitting with GMA and GLUCS gives consistent results. Differences in the evaluated central values obtained with the different formalisms can be related to the general accuracy with which fits can be done in each formalism; they have a stochastic nature and should be accounted for in the final results of the data evaluation as a small SERC uncertainty. Some shift in the central values of data evaluated with GLUCS and GMA, relative to the central values evaluated with the R-matrix model code RAC, is observed when fitting strongly varying data and is related to the PPP (Peelle's Pertinent Puzzle). An evaluation procedure free from the PPP should be elaborated. (author)
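    The PPP shift has a classic two-point illustration: when two measurements of the same quantity share a fully correlated normalization uncertainty, a generalized-least-squares (GLS) fit can land below both inputs. The numbers below are the standard textbook example of Peelle's puzzle, not values from the GMA/GLUCS comparison:

```python
def gls_single_quantity(y, V):
    """GLS estimate of one quantity from two correlated measurements y
    with 2x2 covariance V: xhat = (1' V^-1 y) / (1' V^-1 1)."""
    det = V[0][0] * V[1][1] - V[0][1] * V[1][0]
    Vinv = [[V[1][1] / det, -V[0][1] / det],
            [-V[1][0] / det, V[0][0] / det]]
    w = [Vinv[0][0] + Vinv[0][1], Vinv[1][0] + Vinv[1][1]]  # V^-1 applied to (1,1)
    return (w[0] * y[0] + w[1] * y[1]) / (w[0] + w[1])

# Two measurements of the same quantity: 10% statistical uncertainty each,
# plus a fully correlated 20% normalization uncertainty.
y = [1.0, 1.5]
V = [[0.1 ** 2 + 0.2 ** 2, 0.2 * 0.3],
     [0.2 * 0.3, 0.15 ** 2 + 0.3 ** 2]]
```

    With these inputs the GLS estimate is 0.0375/0.0425 ≈ 0.882, below both 1.0 and 1.5, which is exactly the counter-intuitive shift the abstract attributes to the PPP.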

  19. Matrix Management: An Organizational Alternative for Libraries.

    Science.gov (United States)

    Johnson, Peggy

    1990-01-01

    Describes various organizational structures and models, presents matrix management as an alternative to traditional hierarchical structures, and suggests matrix management as an appropriate organizational alternative for academic libraries. Benefits that are discussed include increased flexibility, a higher level of professional independence, and…

  20. Australasian code for reporting of mineral resources and ore reserves (the JORC code)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-06-01

    The latest revision of the Code first published in 1989 becomes effective in September 1999. It was prepared by the Joint Ores Reserves Committee of the Australasian Institute of Mining and Metallurgy, Australian Institute of Geoscientists and Minerals Council of Australia (JORC). It sets out minimum standards, recommendations and guidelines for public reporting of exploration results, mineral resources and ore reserves in Australasia. In this edition, the guidelines, which were previously separated from the Code, have been placed after the respective Code clauses. The Code is applicable to all solid minerals, including diamonds, other gemstones and coal for which public reporting is required by the Australian and New Zealand Stock Exchanges.

  1. A finite range coupled channel Born approximation code

    International Nuclear Information System (INIS)

    Nagel, P.; Koshel, R.D.

    1978-01-01

    The computer code OUKID calculates differential cross sections for direct transfer nuclear reactions in which multistep processes, arising from strongly coupled inelastic states in both the target and residual nuclei, are possible. The code is designed for heavy-ion reactions where full finite-range and recoil effects are important. Distorted wave functions for the elastic and inelastic scattering are calculated by solving sets of coupled differential equations using a Matrix Numerov integration procedure. These wave functions are then expanded into bases of spherical Bessel functions by the plane-wave expansion method. This approach allows the six-dimensional integrals for the transition amplitude to be reduced to products of two one-dimensional integrals. Thus, the inelastic scattering is treated in a coupled channel formalism while the transfer process is treated in a finite range Born approximation formalism. (Auth.)
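    The Numerov scheme underlying the integration advances equations of the form y'' + g(x)·y = 0 with fourth-order global accuracy from two starting values. A scalar sketch is shown below (the Matrix Numerov procedure in the code couples many channels, replacing the scalars by matrices; here the toy problem y'' + y = 0 reproduces sin x):

```python
import math

def numerov(g, y0, y1, h, steps):
    """Integrate y'' + g(x) y = 0 from two starting values y(0)=y0, y(h)=y1.

    Returns the solution on the grid x = 0, h, ..., steps*h.
    """
    ys = [y0, y1]
    for n in range(1, steps):
        gm, gn, gp = g((n - 1) * h), g(n * h), g((n + 1) * h)
        y_next = (2.0 * (1.0 - 5.0 * h * h * gn / 12.0) * ys[n]
                  - (1.0 + h * h * gm / 12.0) * ys[n - 1]) \
                 / (1.0 + h * h * gp / 12.0)
        ys.append(y_next)
    return ys
```

    With g ≡ 1, starting values 0 and sin(h), and 100 steps to x = π/2, the final value agrees with sin(π/2) = 1 to well below 1e-7, reflecting the method's O(h⁴) global error.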

  2. Development of an inelastic stress analysis code 'KINE-T' and its evaluations

    International Nuclear Information System (INIS)

    Kobatake, K.; Takahashi, S.; Suzuki, M.

    1977-01-01

    Referring to ASME B&PV Code Case 1592-7, inelastic stress analysis is required for the design of Class 1 components at elevated temperature if the results of the elastic stress analysis and/or simplified inelastic analysis do not satisfy the requirements. The authors programmed a two-dimensional axisymmetric inelastic analysis code, 'KINE-T', and carried out its evaluation and an application. This FEM code is based on the incremental method and the following: an elastic-plastic constitutive equation (yield condition of von Mises; flow rule of Prandtl-Reuss; Prager's hardening rule); a creep constitutive equation (equation-of-state approach; flow rule of von Mises; strain-hardening rule); the temperature dependence of the yield function is considered; the solution procedure for the assembled stiffness matrix is the 'initial stress method'. After completing the programming, the authors compared the output not only with theoretical results but also with those of the MARC code and the ANSYS code. In order to apply the code to practical designing, the authors set up a quasi-component two-dimensional axisymmetric model and a loading cycle (500 cycles). Then an inelastic analysis and its integrity evaluation were carried out.

  3. R-Matrix Evaluation of 16O neutron cross sections up to 6.3 MeV

    International Nuclear Information System (INIS)

    Sayer, R.O.; Leal, L.C.; Larson, N.M.; Spencer, R.R.; Wright, R.Q.

    2000-01-01

    In this paper the authors describe an evaluation of 16 O neutron cross sections in the resolved resonance region with the multilevel Reich-Moore R-matrix formalism. Resonance analyses were performed with the computer code SAMMY [LA98], which utilizes Bayes' method, a generalized least-squares technique.
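
    Bayes' method in this context amounts to a linearized generalized-least-squares update of the resonance parameters. A generic sketch of one such step follows; the notation is mine, not SAMMY's.

```python
import numpy as np

def bayes_step(p, M, G, d, t, V):
    """One linearized Bayes / generalized-least-squares update:
    p, M : prior parameter values and covariance
    G    : sensitivities dT/dp evaluated at p
    d, t : measured data and theory evaluated at p
    V    : data covariance
    Returns the updated parameters and covariance."""
    S = G @ M @ G.T + V              # innovation covariance
    K = M @ G.T @ np.linalg.inv(S)   # gain matrix
    return p + K @ (d - t), M - K @ G @ M

# scalar sanity check: prior 0 +/- 1, one datum 2 +/- 1, unit sensitivity
p1, M1 = bayes_step(np.zeros(1), np.eye(1), np.eye(1),
                    np.array([2.0]), np.zeros(1), np.eye(1))
```

    With equal prior and data uncertainties the posterior lands halfway between prior and datum, with halved variance.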

  4. Quantitative profiling of O-glycans by electrospray ionization- and matrix-assisted laser desorption ionization-time-of-flight-mass spectrometry after in-gel derivatization with isotope-coded 1-phenyl-3-methyl-5-pyrazolone

    International Nuclear Information System (INIS)

    Sić, Siniša; Maier, Norbert M.; Rizzi, Andreas M.

    2016-01-01

    The potential and benefits of isotope-coded labeling in the context of MS-based glycan profiling are evaluated, focusing on the analysis of O-glycans. For this purpose, a derivatization strategy using d_0/d_5-1-phenyl-3-methyl-5-pyrazolone (PMP) is employed, allowing O-glycan release and derivatization to be achieved in a single step. The paper demonstrates that this release and derivatization reaction can also be carried out in-gel with only marginal loss in sensitivity compared to in-solution derivatization. Such an effective in-gel reaction allows one to extend this release/labeling method to glycoprotein/glycoform samples pre-separated by gel electrophoresis without the need to extract the proteins/digested peptides from the gel. With highly O-glycosylated proteins (e.g. mucins), LODs of about 0.4 μg glycoprotein (100 fmol) loaded onto the electrophoresis gel can be attained; with less heavily glycosylated proteins (such as IgAs, FVII, FIX) the LODs were in the range of 80–100 μg (250 pmol–1.5 nmol) glycoprotein loaded onto the gel. As a second aspect, the potential of isotope-coded labeling as an internal standardization strategy for the reliable determination of quantitative glycan profiles via MALDI-MS is investigated. Towards this goal, a number of established and emerging MALDI matrices were tested for PMP-glycan quantitation, and their performance is compared with that of ESI-based measurements. The crystalline matrix 2,6-dihydroxyacetophenone (DHAP) and the ionic liquid matrix N,N-diisopropyl-ethyl-ammonium 2,4,6-trihydroxyacetophenone (DIEA-THAP) showed potential for MALDI-based quantitation of PMP-labeled O-glycans. We also provide a comprehensive overview of the performance of MS-based glycan quantitation approaches by comparing sensitivity, LOD, accuracy and repeatability data obtained with RP-HPLC-ESI-MS, stand-alone nano-ESI-MS with a spray-nozzle chip, and MALDI-MS. Finally, the suitability of the isotope-coded PMP labeling strategy for

  5. Quantitative profiling of O-glycans by electrospray ionization- and matrix-assisted laser desorption ionization-time-of-flight-mass spectrometry after in-gel derivatization with isotope-coded 1-phenyl-3-methyl-5-pyrazolone

    Energy Technology Data Exchange (ETDEWEB)

    Sić, Siniša; Maier, Norbert M.; Rizzi, Andreas M., E-mail: Andreas.Rizzi@univie.ac.at

    2016-09-07

    The potential and benefits of isotope-coded labeling in the context of MS-based glycan profiling are evaluated, focusing on the analysis of O-glycans. For this purpose, a derivatization strategy using d_0/d_5-1-phenyl-3-methyl-5-pyrazolone (PMP) is employed, allowing O-glycan release and derivatization to be achieved in a single step. The paper demonstrates that this release and derivatization reaction can also be carried out in-gel with only marginal loss in sensitivity compared to in-solution derivatization. Such an effective in-gel reaction allows one to extend this release/labeling method to glycoprotein/glycoform samples pre-separated by gel electrophoresis without the need to extract the proteins/digested peptides from the gel. With highly O-glycosylated proteins (e.g. mucins), LODs of about 0.4 μg glycoprotein (100 fmol) loaded onto the electrophoresis gel can be attained; with less heavily glycosylated proteins (such as IgAs, FVII, FIX) the LODs were in the range of 80–100 μg (250 pmol–1.5 nmol) glycoprotein loaded onto the gel. As a second aspect, the potential of isotope-coded labeling as an internal standardization strategy for the reliable determination of quantitative glycan profiles via MALDI-MS is investigated. Towards this goal, a number of established and emerging MALDI matrices were tested for PMP-glycan quantitation, and their performance is compared with that of ESI-based measurements. The crystalline matrix 2,6-dihydroxyacetophenone (DHAP) and the ionic liquid matrix N,N-diisopropyl-ethyl-ammonium 2,4,6-trihydroxyacetophenone (DIEA-THAP) showed potential for MALDI-based quantitation of PMP-labeled O-glycans. We also provide a comprehensive overview of the performance of MS-based glycan quantitation approaches by comparing sensitivity, LOD, accuracy and repeatability data obtained with RP-HPLC-ESI-MS, stand-alone nano-ESI-MS with a spray-nozzle chip, and MALDI-MS. Finally, the suitability of the isotope-coded PMP labeling

  6. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with the neutron transport equation and covers one-dimensional plane-geometry problems, one-dimensional spherical-geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers the problems which can be solved; eigenvalue problems; the outer iteration loop; the inner iteration loop; and finite-difference solution procedures. The input and output data for ANISN are also discussed. Two-dimensional problems, such as those treated by the DOT code, are given. Finally, an overview of Monte Carlo methods and codes is given.

  7. 24 CFR 200.925c - Model codes.

    Science.gov (United States)

    2010-04-01

    ... below. (1) Model Building Codes—(i) The BOCA National Building Code, 1993 Edition, The BOCA National..., Administration, for the Building, Plumbing and Mechanical Codes and the references to fire retardant treated wood... number 2 (Chapter 7) of the Building Code, but including the Appendices of the Code. Available from...

  8. Response matrix of a multisphere neutron spectrometer with a 3 He proportional counter

    International Nuclear Information System (INIS)

    Vega C, H.R.; Manzanares A, E.; Hernandez D, V.M.; Mercado S, G.A.

    2005-01-01

    The response matrix of a Bonner sphere spectrometer was calculated by use of the MCNP code. As thermal neutron counter, the spectrometer has a 3.2 cm-diameter 3 He-filled proportional counter which is located at the center of a set of polyethylene spheres. The response was calculated for 0, 3, 5, 6, 8, 10, 12, and 16 inch-diameter polyethylene spheres for neutrons whose energy goes from 10⁻⁹ MeV to 20 MeV. The response matrix was compared with a set of responses measured with several monoenergetic neutron sources; in this comparison the calculated matrix agrees with the experimental results. The matrix was also compared with the response matrix calculated for the PTB C spectrometer. Even though that calculation was carried out using a detailed model of the proportional counter, both matrices agree; small differences are observed in the bare case because of the difference in the models used during the calculations. Other differences occur for some spheres at 14.8 and 20 MeV, probably due to differences in the cross sections used during the two calculations. (Author) 28 refs., 1 tab., 6 figs
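
    The role of such a response matrix can be shown with a toy forward model: the count rate of sphere i is C_i = Σ_j R_ij φ_j over the energy bins. All numbers below are made up for illustration, not the MCNP-calculated responses; with fewer spheres than bins the inverse problem is underdetermined, which is why unfolding needs a-priori information.

```python
import numpy as np

rng = np.random.default_rng(0)
n_spheres, n_bins = 8, 60
R = rng.random((n_spheres, n_bins))   # response matrix (illustrative)
phi_true = rng.random(n_bins)         # "true" binned fluence spectrum
C = R @ phi_true                      # simulated count rates, C_i = sum_j R_ij phi_j
# the pseudo-inverse reproduces the measured counts exactly,
# yet does not recover the true spectrum
phi_min_norm = np.linalg.pinv(R) @ C
```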

  9. Abstracts of digital computer code packages. Assembled by the Radiation Shielding Information Center. [Radiation transport codes

    Energy Technology Data Exchange (ETDEWEB)

    McGill, B.; Maskewitz, B.F.; Anthony, C.M.; Comolander, H.E.; Hendrickson, H.R.

    1976-01-01

    The term ''code package'' is used to describe a miscellaneous grouping of materials which, when interpreted in connection with a digital computer, enables the scientist--user to solve technical problems in the area for which the material was designed. In general, a ''code package'' consists of written material--reports, instructions, flow charts, listings of data, and other useful material and IBM card decks (or, more often, a reel of magnetic tape) on which the source decks, sample problem input (including libraries of data) and the BCD/EBCDIC output listing from the sample problem are written. In addition to the main code, any available auxiliary routines are also included. The abstract format was chosen to give a potential code user several criteria for deciding whether or not he wishes to request the code package. (RWR)

  10. Unfolding code for neutron spectrometry based on neural nets technology

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J. M.; Vega C, H. R., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Unidad Academica de Ingenieria Electrica, Apdo. Postal 336, 98000 Zacatecas (Mexico)

    2012-10-15

    The most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Neural Networks have been widely investigated. In this work, a neutron spectrum unfolding code based on neural-net technology is presented. This unfolding code, called Neutron Spectrometry and Dosimetry by means of Artificial Neural Networks, was designed in a graphical interface under the LabVIEW programming environment. The core of the code is an embedded neural network architecture, previously optimized by the 'Robust Design of Artificial Neural Networks' methodology. The code is easy to use and is friendly and intuitive to the user. It was designed for a Bonner sphere system based on a 6 LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. The main feature of the code is that, as input data, only seven count-rate measurements with a Bonner sphere spectrometer are required to simultaneously unfold the 60 energy bins of the neutron spectrum and to calculate 15 dosimetric quantities for radiation protection purposes. The code generates a full report in html format with all relevant information. (Author)
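
    The following is an illustrative stand-in for such an unfolding network, not the code described in the record: a one-hidden-layer net trained with plain gradient descent to map 7 count rates to a 60-bin spectrum, on synthetic data generated through an assumed response matrix R.

```python
import numpy as np

rng = np.random.default_rng(1)
n_counts, n_bins, hidden, n_train = 7, 60, 32, 500
R = rng.random((n_counts, n_bins))           # assumed response matrix
phi = rng.random((n_train, n_bins))          # synthetic training spectra
X = phi @ R.T                                 # simulated count rates
X = (X - X.mean(0)) / X.std(0)                # normalize the inputs

W1 = rng.normal(0.0, 0.1, (n_counts, hidden))
W2 = rng.normal(0.0, 0.1, (hidden, n_bins))

def loss():
    return float(np.mean((np.tanh(X @ W1) @ W2 - phi) ** 2))

loss0 = loss()
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1)                       # hidden activations
    err = h @ W2 - phi                        # output residuals
    gW2 = h.T @ err / n_train                 # gradient w.r.t. W2
    gW1 = X.T @ ((err @ W2.T) * (1.0 - h * h)) / n_train
    W2 -= lr * gW2
    W1 -= lr * gW1
loss1 = loss()
```

    Once trained, such a net replaces the iterative unfolding step: a single forward pass maps measured count rates to a spectrum estimate.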

  11. New MoM code incorporating multiple domain basis functions

    CSIR Research Space (South Africa)

    Lysko, AA

    2011-08-01

    Full Text Available piecewise linear approximation of geometry. This often leads to an unnecessarily great number of unknowns used to model relatively small loop and spiral antennas, coils and other curved structures. This is because the program creates a dense mesh... to accelerate computation of the elements of the impedance matrix and showed an acceleration factor exceeding an order of magnitude, subject to a high accuracy requirement. 3. On Code Functionality and Application Results The package of programs was written...

  12. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  13. JaSTA-2: Second version of the Java Superposition T-matrix Application

    Science.gov (United States)

    Halder, Prithish; Das, Himadri Sekhar

    2017-12-01

    In this article, we announce the development of a new version of the Java Superposition T-matrix App (JaSTA-2), to study the light scattering properties of porous aggregate particles. It has been developed using Netbeans 7.1.2, which is a java integrated development environment (IDE). JaSTA uses the double-precision superposition T-matrix codes for multi-sphere clusters in random orientation developed by Mackowski and Mishchenko (1996). The new version offers two options for the input parameters: (i) single wavelength and (ii) multiple wavelengths. The first option (which retains the applicability of the older version of JaSTA) calculates the light scattering properties of aggregates of spheres for a single wavelength at a given instant of time, whereas the second option can execute the code for multiple wavelengths in a single run. JaSTA-2 provides convenient and quicker data analysis, which can be used in diverse fields like Planetary Science, Atmospheric Physics, Nanoscience, etc. This version of the software is developed for the Linux platform only, and it can be operated over all the cores of a processor using the multi-threading option.

  14. Implementation of burnup in FERM nodal computer code

    International Nuclear Information System (INIS)

    Yoriyaz, H.; Nakata, H.

    1986-01-01

    In this work a spatial burnup scheme and feedback effects have been implemented into the FERM [1] ('Finite Element Response Matrix') program. The spatially dependent neutronic parameters have been considered at three levels: zonewise calculation, assemblywise calculation and pointwise calculation. The results have been compared with those obtained by the CITATION [2] program and show that the processing time in the FERM code is hundreds of times shorter, with no significant difference observed in the assembly-averaged power distribution. (Author) [pt

  15. On the development of LWR fuel analysis code (1). Analysis of the FEMAXI code and proposal of a new model

    International Nuclear Information System (INIS)

    Lemehov, Sergei; Suzuki, Motoe

    2000-01-01

    This report summarizes a review of the modeling features of the FEMAXI code and proposes a new theoretical model of clad creep on the basis of irradiation-induced microstructure change. It was pointed out that plutonium build-up in the fuel matrix and the non-uniform radial power profile at high burn-up significantly affect fuel behavior through effects interconnected with such phenomena as clad irradiation-induced creep, fission gas release, fuel thermal conductivity degradation, rim porous band formation and the associated fuel swelling. Therefore, these combined effects should be properly incorporated into the models of the FEMAXI code so that the code can carry out numerical analysis at a level of accuracy and elaboration matching the modern experimental data obtained in test reactors. Also, the proposed new mechanistic clad creep model has a general formalism which allows it to be flexibly applied to clad behavior analysis under normal operating conditions and power transients, as well as to Zr-based clad materials, by use of established out-of-pile mechanical properties. The model has been tested against experimental data, though further verification is needed, with specific emphasis on power ramps and transients. (author)

  16. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter-space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
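
    A minimal sketch of one 1D electrostatic PIC cycle (deposit, field solve, push), in the spirit of ZPIC's 1D electrostatic code; the normalized units, nearest-grid-point deposition, and FFT field solve below are my simplifications, not taken from ZPIC itself.

```python
import numpy as np

ng, L, n_part, dt = 64, 2.0 * np.pi, 10000, 0.1
dx = L / ng
rng = np.random.default_rng(2)
x = rng.uniform(0.0, L, n_part)      # electron positions
v = rng.normal(0.0, 0.1, n_part)     # electron velocities
q = -L / n_part                       # macro-charge; fixed ion background = +1

def step(x, v):
    cell = (x / dx).astype(int) % ng
    # 1) nearest-grid-point charge deposition plus neutralizing ion background
    rho = np.bincount(cell, minlength=ng) * q / dx + 1.0
    # 2) solve dE/dx = rho on the periodic grid with an FFT
    k = 2.0 * np.pi * np.fft.fftfreq(ng, dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros(ng, dtype=complex)
    E_k[1:] = rho_k[1:] / (1j * k[1:])   # k = 0 mode stays zero (neutrality)
    E = np.real(np.fft.ifft(E_k))
    # 3) push: acceleration (q/m) E with q/m = -1 for electrons, then advect
    v_new = v - E[cell] * dt
    x_new = (x + v_new * dt) % L
    return x_new, v_new

for _ in range(10):
    x, v = step(x, v)
```

    Real PIC codes use higher-order shape functions and a leapfrog-staggered push, but the three-stage cycle is the same.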

  17. Global calculation of PWR reactor core using the two group energy solution by the response matrix method

    International Nuclear Information System (INIS)

    Conti, C.F.S.; Watson, F.V.

    1991-01-01

    A computational code to solve the two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. That method solves the global problem of a PWR core without using a cross-section homogenization process, and is thus equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by a first-order perturbative method and considers expansions in arbitrary-order Fourier series for the boundary fluxes and interior fluxes. (author)
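
    The two-group balance underlying such codes can be made concrete with an infinite-medium eigenvalue problem, A φ = (1/k) F φ. This is not the Response Matrix Method of the paper, only the two-group formalism it builds on, and the cross sections below are made up (roughly PWR-like, in cm⁻¹).

```python
import numpy as np

Sa = np.array([0.010, 0.080])        # group absorption cross sections
S12 = 0.020                           # group 1 -> group 2 down-scattering
nuSf = np.array([0.005, 0.135])       # nu * Sigma_f per group
A = np.array([[Sa[0] + S12, 0.0],     # removal from the fast group
              [-S12,        Sa[1]]])  # in-scatter and thermal absorption
F = np.array([[nuSf[0], nuSf[1]],     # all fission neutrons born fast
              [0.0,     0.0]])
# k_inf is the largest eigenvalue of A^-1 F
vals, _ = np.linalg.eig(np.linalg.inv(A) @ F)
k_inf = vals.real.max()
```

    Because F has rank one, the nonzero eigenvalue equals the trace of A⁻¹F, here about 1.292.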

  18. New electromagnetic particle simulation code for the analysis of spacecraft-plasma interactions

    International Nuclear Information System (INIS)

    Miyake, Yohei; Usui, Hideyuki

    2009-01-01

    A novel particle simulation code, the electromagnetic spacecraft environment simulator (EMSES), has been developed for the self-consistent analysis of spacecraft-plasma interactions on the full electromagnetic (EM) basis. EMSES includes several boundary treatments carefully coded for both longitudinal and transverse electric fields to satisfy perfect conductive surface conditions. For the longitudinal component, the following are considered: (1) the surface charge accumulation caused by impinging or emitted particles and (2) the surface charge redistribution, such that the surface becomes an equipotential. For item (1), a special treatment has been adopted for the current density calculated around the spacecraft surface, so that the charge accumulation occurs exactly on the surface. As a result, (1) is realized automatically in the updates of the charge density and the electric field through the current density. Item (2) is achieved by applying the capacity matrix method. Meanwhile, the transverse electric field is simply set to zero for components defined inside and tangential to the spacecraft surfaces. This paper also presents the validation of EMSES by performing test simulations for spacecraft charging and peculiar EM wave modes in a plasma sheath.
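
    The capacity matrix method mentioned for item (2) can be sketched as follows: given surface nodes and a potential-coefficient matrix P, the induced charges q = P⁻¹(V0 − V_ext) make the surface an equipotential. The circular node layout, Coulomb kernel normalization, and self-interaction term below are illustrative assumptions, not EMSES internals.

```python
import numpy as np

n = 8
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]        # nodes on a circular "surface"
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(d, 0.1)                         # crude self-interaction term
P = 1.0 / d                                       # point-charge potential kernel
C = np.linalg.inv(P)                              # the capacity matrix
V_ext = pts[:, 0]                                 # an external potential pattern
V0 = 0.0                                          # target surface potential
q = C @ (V0 - V_ext)                              # induced surface charges
residual = P @ q + V_ext                          # total potential at the nodes
```

    Since C is precomputed once for a fixed geometry, enforcing the equipotential condition each time step costs only a matrix-vector product.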

  19. New tools to analyze overlapping coding regions.

    Science.gov (United States)

    Bayegan, Amir H; Garcia-Martin, Juan Antonio; Clote, Peter

    2016-12-13

    Retroviruses transcribe messenger RNA for the overlapping Gag and Gag-Pol polyproteins, by using a programmed -1 ribosomal frameshift which requires a slippery sequence and an immediate downstream stem-loop secondary structure, together called frameshift stimulating signal (FSS). It follows that the molecular evolution of this genomic region of HIV-1 is highly constrained, since the retroviral genome must contain a slippery sequence (sequence constraint), code appropriate peptides in reading frames 0 and 1 (coding requirements), and form a thermodynamically stable stem-loop secondary structure (structure requirement). We describe a unique computational tool, RNAsampleCDS, designed to compute the number of RNA sequences that code two (or more) peptides p,q in overlapping reading frames, that are identical (or have BLOSUM/PAM similarity that exceeds a user-specified value) to the input peptides p,q. RNAsampleCDS then samples a user-specified number of messenger RNAs that code such peptides; alternatively, RNAsampleCDS can exactly compute the position-specific scoring matrix and codon usage bias for all such RNA sequences. Our software allows the user to stipulate overlapping coding requirements for all 6 possible reading frames simultaneously, even allowing IUPAC constraints on RNA sequences and fixing GC-content. We generalize the notion of codon preference index (CPI) to overlapping reading frames, and use RNAsampleCDS to generate control sequences required in the computation of CPI. Moreover, by applying RNAsampleCDS, we are able to quantify the extent to which the overlapping coding requirement in HIV-1 [resp. HCV] contributes to the formation of the stem-loop [resp. double stem-loop] secondary structure known as the frameshift stimulating signal. Using our software, we confirm that certain experimentally determined deleterious HCV mutations occur in positions for which our software RNAsampleCDS and RNAiFold both indicate a single possible nucleotide. We
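
    The overlapping-coding constraint that RNAsampleCDS works with can be illustrated with a minimal checker: does one RNA sequence translate to given peptides in reading frames 0 and 1 simultaneously? The codon table below is a partial, illustrative subset of the standard genetic code; RNAsampleCDS itself does far more (sampling, PSSM computation, BLOSUM/PAM similarity).

```python
# partial standard genetic code, RNA codons -> one-letter amino acids
CODON = {
    'AUG': 'M', 'UGG': 'W', 'UUU': 'F', 'UUC': 'F', 'AAA': 'K', 'AAG': 'K',
    'GCU': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A', 'GAA': 'E', 'GAG': 'E',
    'CAU': 'H', 'CAC': 'H', 'UCA': 'S', 'CUG': 'L', 'AGG': 'R', 'GGA': 'G',
}

def translate(rna, frame):
    """Translate the complete codons of rna starting at the given frame."""
    s = rna[frame:]
    return ''.join(CODON.get(s[i:i + 3], '?') for i in range(0, len(s) - 2, 3))

def codes_both(rna, p0, p1):
    """True if rna codes peptide p0 in frame 0 and p1 in frame 1."""
    return translate(rna, 0).startswith(p0) and translate(rna, 1).startswith(p1)
```

    For example, 'AUGGCU' reads 'MA' in frame 0 and 'W' in frame 1; counting or sampling all sequences satisfying such joint constraints is the combinatorial problem the tool solves.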

  20. Light-water-reactor coupled neutronic and thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Diamond, D.J.

    1982-01-01

    An overview is presented of computer codes that model light water reactor cores with coupled neutronics and thermal-hydraulics. This includes codes for transient analysis and codes for steady state analysis which include fuel depletion and fission product buildup. Applications in nuclear design, reactor operations and safety analysis are given and the major codes in use in the USA are identified. The neutronic and thermal-hydraulic methodologies and other code features are outlined for three steady state codes (PDQ7, NODE-P/B and SIMULATE) and four dynamic codes (BNL-TWIGL, MEKIN, RAMONA-3B, RETRAN-02). Speculation as to future trends with such codes is also presented

  1. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  2. Recent advances in the Poisson/superfish codes

    International Nuclear Information System (INIS)

    Ryne, R.; Barts, T.; Chan, K.C.D.; Cooper, R.; Deaven, H.; Merson, J.; Rodenz, G.

    1992-01-01

    We report on advances in the POISSON/SUPERFISH family of codes used in the design and analysis of magnets and rf cavities. The codes include preprocessors for mesh generation and postprocessors for graphical display of output and calculation of auxiliary quantities. Release 3 became available in January 1992; it contains many code corrections and physics enhancements, and it also includes support for PostScript, DISSPLA, GKS and PLOT10 graphical output. Release 4 will be available in September 1992; it is free of all bit packing, making the codes more portable and able to treat very large numbers of mesh points. Release 4 includes the preprocessor FRONT and a new menu-driven graphical postprocessor that runs on workstations under X-Windows and that is capable of producing arrow plots. We will present examples that illustrate the new capabilities of the codes. (author). 6 refs., 3 figs

  3. SAMMY, Multilevel R-Matrix Fits to Neutron and Charged-Particle Cross-Section Data Using Bayes' Equations

    International Nuclear Information System (INIS)

    Larson, Nancy M.

    2007-01-01

    1 - Description of problem or function: The purpose of the code is to analyze time-of-flight cross section data in the resolved and unresolved resonance regions, where the incident particle is either a neutron or a charged particle (p, alpha, d,...). Energy-differential cross sections and angular-distribution data are treated, as are certain forms of energy-integrated data. In the resolved resonance region (RRR), theoretical cross sections are generated using the Reich-Moore approximation to R-matrix theory (and extensions thereof). Sophisticated models are used to describe the experimental situation: Data-reduction parameters (e.g. normalization, background, sample thickness) are included. Several options are available for both resolution and Doppler broadening, including a crystal-lattice model for Doppler broadening. Self-shielding and multiple-scattering correction options are available for analysis of capture cross sections. Multiple isotopes and impurities within a sample are handled accurately. Cross sections in the unresolved resonance region (URR) can also be analyzed using SAMMY. The capability was borrowed from Froehner's FITACS code; SAMMY modifications for the URR include more exact calculation of partial derivatives, normalization options for the experimental data, increased flexibility for input of experimental data, introduction of user-friendly input options. In both energy regions, values for resonance parameters and for data-related parameters (such as normalization, sample thickness, effective temperature, resolution parameters) are determined via fits to the experimental data using Bayes' method (see below). Final results may be reported in ENDF format for inclusion in the evaluated nuclear data files. The manner in which SAMMY 7 (released in 2006) differs from the previous version (SAMMY-M6) is itemized in Section I.A of the SAMMY users' manual. Details of the 7.0.1 update are documented in an errata SAMMY 7.0.1 Errata (http

  4. Performance Analysis of Iterative Decoding Algorithms for PEG LDPC Codes in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    O. Al Rasheed

    2013-11-01

    Full Text Available In this paper we give a comparative analysis of decoding algorithms of Low Density Parity Check (LDPC codes in a channel with the Nakagami distribution of the fading envelope. We consider the Progressive Edge-Growth (PEG method and Improved PEG method for the parity check matrix construction, which can be used to avoid short girths, small trapping sets and a high error floor. A comparative analysis of several classes of LDPC codes in various propagation conditions and decoded using different decoding algorithms is also presented.
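
    The simplest of the iterative LDPC decoders such comparisons include is hard-decision bit flipping, sketched here over a hand-made parity-check matrix. H below is a toy example, not a PEG construction.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping decoding of the binary word y."""
    y = y.copy()
    for _ in range(max_iter):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                        # all parity checks satisfied
        # flip the bit participating in the most unsatisfied checks
        y[np.argmax(H.T @ syndrome)] ^= 1
    return y

codeword = np.array([1, 1, 1, 0, 0, 1])     # satisfies H c = 0 (mod 2)
received = codeword.copy()
received[0] ^= 1                             # introduce a single bit error
decoded = bit_flip_decode(H, received)
```

    Belief-propagation decoders replace the hard votes with soft log-likelihood messages, which is what makes girth and trapping-set properties of the PEG construction matter.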

  5. What to do with a Dead Research Code

    Science.gov (United States)

    Nemiroff, Robert J.

    2016-01-01

    The project has ended -- should all of the computer codes that enabled the project be deleted? No. Like research papers, research codes typically carry valuable information past project end dates. Several possible end states to the life of research codes are reviewed. Historically, codes are typically left dormant on an increasingly obscure local disk directory until forgotten. These codes will likely become any or all of: lost, impossible to compile and run, difficult to decipher, and deleted when the code's proprietor moves on or dies. It is argued here, though, that it would be better for both code authors and astronomy generally if project codes were archived after use in some way. Archiving is advantageous for code authors because archived codes might increase the author's ADS citable publications, while astronomy as a science gains transparency and reproducibility. Paper-specific codes should be included in the publication of the journal papers they support, just like figures and tables. General codes that support multiple papers, possibly written by multiple authors, including their supporting websites, should be registered with a code registry such as the Astrophysics Source Code Library (ASCL). Codes developed on GitHub can be archived with a third party service such as, currently, BackHub. An important code version might be uploaded to a web archiving service like, currently, Zenodo or Figshare, so that this version receives a Digital Object Identifier (DOI), enabling it to be found at a stable address into the future. Similar archiving services that are not DOI-dependent include perma.cc and the Internet Archive Wayback Machine at archive.org. Perhaps most simply, copies of important codes with lasting value might be kept on a cloud service like, for example, Google Drive, while activating Google's Inactive Account Manager.

  6. Code-Hopping Based Transmission Scheme for Wireless Physical-Layer Security

    Directory of Open Access Journals (Sweden)

    Liuguo Yin

    2018-01-01

    Full Text Available Due to the broadcast and time-varying natures of wireless channels, traditional communication systems that provide data encryption at the application layer suffer many challenges such as error diffusion. In this paper, we propose a code-hopping based secrecy transmission scheme that uses dynamic nonsystematic low-density parity-check (LDPC codes and automatic repeat-request (ARQ mechanism to jointly encode and encrypt source messages at the physical layer. In this scheme, secret keys at the transmitter and the legitimate receiver are generated dynamically upon the source messages that have been transmitted successfully. During the transmission, each source message is jointly encoded and encrypted by a parity-check matrix, which is dynamically selected from a set of LDPC matrices based on the shared dynamic secret key. As for the eavesdropper (Eve), the uncorrectable decoding errors prevent her from generating the same secret key as the legitimate parties. Thus she cannot select the correct LDPC matrix to recover the source message. We demonstrate that our scheme can be compatible with traditional cryptosystems and enhance the security without sacrificing the error-correction performance. Numerical results show that the bit error rate (BER of Eve approaches 0.5 as the number of transmitted source messages increases and the security gap of the system is small.
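
    The hopping idea alone can be sketched as follows: both ends derive the index of the next parity-check matrix from the history of successfully delivered messages. The function below and its use of SHA-256 are illustrative assumptions, not the key-derivation scheme of the paper.

```python
import hashlib

def next_matrix_index(delivered, n_matrices):
    """Derive the next LDPC matrix index from the delivered-message history
    (illustrative: any keyed derivation shared by both ends would do)."""
    h = hashlib.sha256()
    for msg in delivered:
        h.update(msg)
    return int.from_bytes(h.digest()[:4], 'big') % n_matrices

history = [b'msg-0', b'msg-1']
i_tx = next_matrix_index(history, 16)   # transmitter's choice
i_rx = next_matrix_index(history, 16)   # receiver, same history -> same index
```

    An eavesdropper who fails to decode even one message loses synchronization with the history and therefore with every subsequent matrix choice.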

  7. COLLAGE 2: a numerical code for radionuclide migration through a fractured geosphere in aqueous and colloidal phases

    International Nuclear Information System (INIS)

    Grindrod, P.; Cooper, N.

    1993-05-01

    In previous work, the COLLAGE code was developed to model the impacts of mobile and immobile colloidal material upon the dispersal and migration of a radionuclide species within a saturated planar fracture surrounded by porous media. The adsorption of radionuclides to colloid surfaces was treated as instantaneous and reversible. In this report we present a new version of the code, COLLAGE 2. Here the adsorption of radionuclides to the colloidal material is treated via first order kinetics. The flow and geometry of the fracture remain as in the previous model. The major effect of colloids upon the radionuclide species is to adsorb them within the fracture space and thus exclude them from the surrounding porous medium. Thus the matrix diffusion process, a strongly retarding effect, is exchanged for a colloid capture/release process by which adsorbed nuclides are also retarded. The effects of having a colloid-radionuclide kinetic interaction include the phenomena of double pulse breakthrough (the pseudo colloid population followed by the solute plume) in cases where the desorption process is slow and the pseudo colloids are highly mobile. Some example calculations are given and some verification examples are discussed. Finally a complete listing of the code is presented as an appendix, including the subroutines allowing for the numerical inversion of the Laplace transformed solution via Talbot's method. 6 figs
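
    The first-order sorption kinetics introduced in COLLAGE 2 can be sketched for a closed system: dC_s/dt = k_f C_aq − k_r C_s for the colloid-bound concentration. The rate constants and explicit Euler integration below are illustrative, not values or numerics from the code.

```python
k_f, k_r = 0.5, 0.1            # forward/reverse rate constants (1/day), illustrative
dt, n_steps = 0.01, 5000       # 50 days of explicit Euler integration
C_aq, C_s = 1.0, 0.0           # aqueous and colloid-sorbed concentrations
for _ in range(n_steps):
    rate = k_f * C_aq - k_r * C_s   # first-order sorption kinetics
    C_aq -= rate * dt
    C_s += rate * dt
# at equilibrium the ratio C_s / C_aq approaches k_f / k_r = 5
```

    When desorption (k_r) is slow relative to transport, the colloid-bound and aqueous populations separate in time, which is the double-pulse breakthrough the report describes.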

  8. COLLAGE 2: a numerical code for radionuclide migration through a fractured geosphere in aqueous and colloidal phases

    Energy Technology Data Exchange (ETDEWEB)

    Grindrod, P.; Cooper, N. [Intera Information Technologies Ltd., Henley-on-Thames (United Kingdom)

    1993-05-01

    In previous work, the COLLAGE code was developed to model the impacts of mobile and immobile colloidal material upon the dispersal and migration of a radionuclide species within a saturated planar fracture surrounded by porous media. The adsorption of radionuclides to colloid surfaces was treated as instantaneous and reversible. In this report we present a new version of the code, COLLAGE 2. Here the adsorption of radionuclides to the colloidal material is treated via first-order kinetics. The flow and geometry of the fracture remain as in the previous model. The major effect of colloids upon the radionuclide species is to adsorb them within the fracture space and thus exclude them from the surrounding porous medium. Thus the matrix diffusion process, a strongly retarding effect, is exchanged for a colloid capture/release process by which adsorbed nuclides are also retarded. The effects of having a colloid-radionuclide kinetic interaction include the phenomena of double pulse breakthrough (the pseudo colloid population followed by the solute plume) in cases where the desorption process is slow and the pseudo colloids are highly mobile. Some example calculations are given and some verification examples are discussed. Finally a complete listing of the code is presented as an appendix, including the subroutines allowing for the numerical inversion of the Laplace transformed solution via Talbot's method. 6 figs.

  9. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  10. A Simple Differential Modulation Scheme for Quasi-Orthogonal Space-Time Block Codes with Partial Transmit Diversity

    Directory of Open Access Journals (Sweden)

    Lingyang Song

    2007-04-01

    Full Text Available We report a simple differential modulation scheme for quasi-orthogonal space-time block codes. A new class of quasi-orthogonal coding structures that can provide partial transmit diversity is presented for various numbers of transmit antennas. Differential encoding and decoding can be simplified for differential Alamouti-like codes by grouping the signals in the transmitted matrix and decoupling the detection of data symbols, respectively. The new scheme can achieve constant amplitude of transmitted signals, and avoid signal constellation expansion; in addition it has a linear signal detector with very low complexity. Simulation results show that these partial-diversity codes can provide very useful results at low SNR for current communication systems. Extension to more than four transmit antennas is also considered.

  11. FINELM: a multigroup finite element diffusion code. Part I

    International Nuclear Information System (INIS)

    Davierwalla, D.M.

    1980-12-01

    The author presents a two dimensional code for multigroup diffusion using the finite element method. It was realized that the extensive connectivity, which contributes significantly to the accuracy, results in a matrix which, although symmetric and positive definite, is wide band and possesses an irregular profile. Hence, it was decided to introduce sparsity techniques into the code. The introduction of the R-Z geometry led to many changes in the code, since the rotational invariance of the removal matrices in X-Y geometry did not carry over in R-Z geometry. Rectangular elements were introduced to remedy the inability of the triangles to model essentially one dimensional problems such as slab geometry. The matter is discussed briefly in the text in the section on benchmark problems. This report is restricted to the general theory of the triangular elements and to the sparsity techniques, viz. incomplete dissection. The latter makes the size of the problem that can be handled independent of core memory and dependent only on disc storage capacity, which is virtually unlimited. (Auth.)

  12. A 3D heat conduction model for block-type high temperature reactors and its implementation into the code DYN3D

    International Nuclear Information System (INIS)

    Baier, Silvio; Kliem, Soeren; Rohde, Ulrich

    2011-01-01

    The gas-cooled high temperature reactor is a concept for producing energy at high temperatures with a high level of inherent safety. It attracts special interest due to, e.g., its high thermal efficiency and the possibility of hydrogen production. In addition to the PBMR (Pebble Bed Modular Reactor), the (V)HTR (Very High Temperature Reactor) concept has been established. The basic design of a prismatic HTR consists of the following elements. The fuel is coated with four layers of isotropic materials. These so-called TRISO particles are dispersed into compacts which are placed in a graphite block matrix. The graphite matrix additionally contains holes for the coolant gas. A one-dimensional model is sufficient to describe (the radial) heat transfer in LWRs, but temperature gradients in a prismatic HTR can occur in the axial as well as the radial direction, since regions with different heat source release and with different coolant temperature heat-up are coupled through the graphite matrix elements. Furthermore, heat transfer into reflector elements is possible. DYN3D is a code system for coupled neutronics and thermal-hydraulics core calculations developed at the Helmholtz-Zentrum Dresden-Rossendorf. Concerning neutronics, DYN3D offers two-group and multi-group diffusion approaches based on nodal expansion methods. Furthermore, a 1D thermal-hydraulics model for parallel coolant flow channels is included. The DYN3D code has been extensively verified and validated via numerous numerical and experimental benchmark problems. These include the NEA CRP benchmarks for PWR and BWR, the Three Mile Island-1 main steam line break and the Peach Bottom Turbine Trip benchmarks, as well as measurements carried out in an original-size VVER-1000 mock-up. An overview of the verification and validation activities can be found in the literature. Presently a DYN3D-HTR version is under development. It involves a 3D heat conduction model to deal with higher-(than one)-dimensional effects of heat transfer and heat conduction in

  13. Coding in Stroke and Other Cerebrovascular Diseases.

    Science.gov (United States)

    Korb, Pearce J; Jones, William

    2017-02-01

    Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.

  14. Technique for information retrieval using enhanced latent semantic analysis generating rank approximation matrix by factorizing the weighted morpheme-by-document matrix

    Science.gov (United States)

    Chew, Peter A; Bader, Brett W

    2012-10-16

    A technique for information retrieval includes parsing a corpus to identify a number of wordform instances within each document of the corpus. A weighted morpheme-by-document matrix is generated based at least in part on the number of wordform instances within each document of the corpus and based at least in part on a weighting function. The weighted morpheme-by-document matrix separately enumerates instances of stems and affixes. Additionally or alternatively, a term-by-term alignment matrix may be generated based at least in part on the number of wordform instances within each document of the corpus. At least one lower rank approximation matrix is generated by factorizing the weighted morpheme-by-document matrix and/or the term-by-term alignment matrix.
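The factorization step of this technique, a low-rank approximation of the weighted morpheme-by-document matrix, can be sketched with a truncated SVD on a toy matrix; the counts below are invented placeholders for weighted wordform-instance counts:

```python
import numpy as np

# Toy weighted morpheme-by-document matrix
# (rows: stems/affixes, columns: documents; values are stand-ins
# for weighted wordform-instance counts).
A = np.array([
    [2.0, 0.0, 1.0],
    [0.0, 3.0, 0.0],
    [1.0, 1.0, 2.0],
    [0.0, 2.0, 1.0],
])

# Rank-2 approximation via truncated SVD, the standard LSA factorization.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
print(A_k.shape)  # → (4, 3)
```

Retrieval then compares query and document vectors in the k-dimensional space spanned by the leading singular vectors rather than in the raw count space.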

  15. Recent developments of JAEA's Monte Carlo Code MVP for reactor physics applications

    International Nuclear Information System (INIS)

    Nagaya, Y.; Okumura, K.; Mori, T.

    2013-01-01

    MVP is a general-purpose continuous-energy Monte Carlo code for neutron and photon transport calculations that has been developed since the late 1980s at the Japan Atomic Energy Agency (JAEA, formerly JAERI). The MVP code is designed for nuclear reactor applications such as reactor core design/analysis, criticality safety and reactor shielding. This paper describes the MVP code and presents its latest developments. Among the new capabilities of MVP we find: -) the perturbation method has been implemented for the change in k(eff); -) the eigenvalue calculations can be performed with an explicit treatment of delayed neutrons in which their fission spectra are taken into account; -) the capability of tallying the scattering matrix (group-to-group scattering cross sections); -) the implementation of an exact model for resonance elastic scattering; and -) a Monte Carlo perturbation technique is used to calculate reactor kinetics parameters

  16. Application of the DART Code for the Assessment of Advanced Fuel Behavior

    International Nuclear Information System (INIS)

    Rest, J.; Totev, T.

    2007-01-01

    The Dispersion Analysis Research Tool (DART) code is a dispersion fuel analysis code that contains mechanistically-based fuel and reaction-product swelling models, a one dimensional heat transfer analysis, and mechanical deformation models. DART has been used to simulate the irradiation behavior of uranium oxide, uranium silicide, and uranium molybdenum aluminum dispersion fuels, as well as their monolithic counterparts. The thermal-mechanical DART code has been validated against RERTR tests performed in the ATR for irradiation data on interaction thickness, fuel, matrix, and reaction product volume fractions, and plate thickness changes. The DART fission gas behavior model has been validated against UO2 fission gas release data as well as measured fission gas-bubble size distributions. Here DART is utilized to analyze various aspects of the observed bubble growth in U-Mo/Al interaction product. (authors)

  17. Development of the Multi-Phase/Multi-Dimensional Code BUBBLEX

    International Nuclear Information System (INIS)

    Lee, Sang Yong; Kim, Shin Whan; Kim, Eun Kee

    2005-01-01

    A test version of the two-fluid program has been developed by extending the PISO algorithm. Unlike the conventional industry two-fluid codes, such as RELAP5 and TRAC, this scheme does not need to develop a pressure matrix. Instead, it adopts the iterative procedure to implement the implicitness of the pressure. In this paper, a brief introduction to the numerical scheme will be presented. Then, its application to bubble column simulation will be described. Some concluding remarks will follow

  18. HELAC-Onia: an automatic matrix element generator for heavy quarkonium physics

    CERN Document Server

    Shao, Hua-Sheng

    2013-01-01

    By virtue of the Dyson-Schwinger equations, we upgrade the published code HELAC to be able to calculate heavy quarkonium helicity amplitudes in the framework of NRQCD factorization; we dub the new code HELAC-Onia. We rewrote the original HELAC to make the new program able to calculate helicity amplitudes for the production of multiple P-wave quarkonium states at hadron colliders and electron-positron colliders, by including new P-wave off-shell currents. Therefore, besides its high efficiency in the computation of multi-leg processes within the Standard Model, HELAC-Onia is also sufficiently numerically stable in dealing with P-wave quarkonia (e.g. $h_{c,b},\chi_{c,b}$) and P-wave color-octet intermediate states. To the best of our knowledge, it is the first general-purpose automatic quarkonium matrix-element generator based on recursion relations on the market.

  19. TRACK The New Beam Dynamics Code

    CERN Document Server

    Mustapha, Brahim; Ostroumov, Peter; Schnirman-Lessner, Eliane

    2005-01-01

    The new ray-tracing code TRACK was developed* to fulfill the special requirements of the RIA accelerator systems. The RIA lattice includes an ECR ion source, a LEBT containing a MHB and a RFQ, followed by three SC linac sections separated by two stripping stations with appropriate magnetic transport systems. No available beam dynamics code meets all the necessary requirements for an end-to-end simulation of the RIA driver linac. The latest version of TRACK was used for end-to-end simulations of the RIA driver including errors and beam loss analysis.** In addition to the standard capabilities, the code includes the following new features: i) multiple charge states; ii) a realistic stripper model; iii) static and dynamic errors; iv) automatic steering to correct for misalignments; v) detailed beam-loss analysis; vi) parallel computing to perform large-scale simulations. Although primarily developed for simulations of the RIA machine, TRACK is a general beam dynamics code. Currently it is being used for the design and ...

  20. A novel construction scheme of QC-LDPC codes based on the RU algorithm for optical transmission systems

    Science.gov (United States)

    Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu

    2016-03-01

    A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4 288, 4 020) code with a high code rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4 288, 4 020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code at a bit error rate (BER) of 10^-6. The irregular QC-LDPC(4 288, 4 020) code has lower encoding/decoding complexity compared with the LDPC(32 640, 30 592) code and the irregular QC-LDPC(3 843, 3 603) code. The proposed novel QC-LDPC(4 288, 4 020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.

  1. Validation uncertainty of MATRA code for subchannel void distributions

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Dae-Hyun; Kim, S. J.; Kwon, H.; Seo, K. W. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    To extend the code capability to whole-core subchannel analysis, pre-conditioned Krylov matrix solvers such as BiCGSTAB and GMRES are implemented in the MATRA code, as well as parallel computing algorithms using MPI and OPENMP. It is coded in Fortran 90 and has some user-friendly features such as a graphical user interface. The MATRA code was approved by the Korean regulatory body for design calculation of the integral-type PWR named SMART. The major role of a subchannel code is to evaluate the core thermal margin through hot channel analysis and uncertainty evaluation for CHF predictions. In addition, it can potentially be used for best estimation of the core thermal-hydraulic field by incorporation into multi-physics and/or multi-scale code systems. In this study we examined a validation process for the subchannel code MATRA, specifically in the prediction of subchannel void distributions. The primary objective of validation is to estimate a range within which the simulation modeling error lies. The experimental data for subchannel void distributions at steady-state and transient conditions were provided in the framework of the OECD/NEA UAM benchmark program. The validation uncertainty of the MATRA code was evaluated for a specific experimental condition by comparing the simulation result and the experimental data. A validation process should be preceded by code and solution verification; however, quantification of verification uncertainty was not addressed in this study. The validation uncertainty of the MATRA code for predicting subchannel void distribution was evaluated for a single data point of void fraction measurement in a 5x5 PWR test bundle in the framework of the OECD UAM benchmark program. The validation standard uncertainties were evaluated as 4.2%, 3.9%, and 2.8% with the Monte-Carlo approach at the axial levels of 2216 mm, 2669 mm, and 3177 mm, respectively. The sensitivity coefficient approach revealed similar uncertainties but did not account for the nonlinear effects on the
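The Monte-Carlo approach to a validation standard uncertainty mentioned above can be sketched as input-uncertainty propagation through a model; the linear void-fraction model and its input uncertainties below are hypothetical stand-ins, not MATRA's:

```python
import random
import statistics

def validation_uncertainty_mc(model, nominal_inputs, input_stdevs,
                              n_samples=20000, seed=1):
    """Propagate input uncertainties through a model by Monte-Carlo sampling
    and report the mean prediction and its standard uncertainty
    (illustrative stand-in for the void-fraction case)."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        x = [rng.gauss(mu, sd) for mu, sd in zip(nominal_inputs, input_stdevs)]
        outputs.append(model(x))
    return statistics.mean(outputs), statistics.stdev(outputs)

# Hypothetical void-fraction model, linear in two uncertain inputs.
model = lambda x: 0.4 + 0.1 * x[0] + 0.05 * x[1]
mean, u_val = validation_uncertainty_mc(model, [0.0, 0.0], [1.0, 1.0])
# For this linear model the propagated std dev is sqrt(0.1^2 + 0.05^2) ≈ 0.112
```

The sensitivity-coefficient alternative mentioned in the abstract would instead combine local derivatives with input standard deviations, which agrees with sampling only when the model is nearly linear.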

  2. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  3. Calculation of hadronic matrix elements using lattice QCD

    International Nuclear Information System (INIS)

    Gupta, R.

    1993-01-01

    The author gives a brief introduction to the scope of lattice QCD calculations in his effort to extract the fundamental parameters of the standard model. This goal is illustrated by two examples. First the author discusses the extraction of CKM matrix elements from measurements of form factors for semileptonic decays of heavy-light pseudoscalar mesons such as D → Keν. Second, he presents the status of results for the kaon B parameter relevant to CP violation. He concludes the talk with a short outline of his experiences with optimizing QCD codes on the CM5

  4. Calculation of hadronic matrix elements using lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Gupta, R.

    1993-08-01

    The author gives a brief introduction to the scope of lattice QCD calculations in his effort to extract the fundamental parameters of the standard model. This goal is illustrated by two examples. First the author discusses the extraction of CKM matrix elements from measurements of form factors for semileptonic decays of heavy-light pseudoscalar mesons such as D {yields} Ke{nu}. Second, he presents the status of results for the kaon B parameter relevant to CP violation. He concludes the talk with a short outline of his experiences with optimizing QCD codes on the CM5.

  5. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text on algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely, the finite field Fourier transform and the Euclidean algorithm f...

  6. Convergence acceleration in the Monte-Carlo particle transport code TRIPOLI-4 in criticality

    International Nuclear Information System (INIS)

    Dehaye, Benjamin

    2014-01-01

    Fields such as criticality studies need to compute some values of interest in neutron physics. Two kinds of codes may be used: deterministic ones and stochastic ones. The stochastic codes do not require approximations and are thus more exact. However, they may require a lot of time to converge with sufficient precision. The work carried out during this thesis aims to build an efficient acceleration strategy in the TRIPOLI-4 code. We wish to implement the zero-variance game. To do so, the method requires the computation of the adjoint flux. The originality of this work is to compute the adjoint flux directly from a Monte-Carlo simulation, without using external codes, thanks to the fission matrix method. This adjoint flux is then used as an importance map to bias the simulation. (author) [fr
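The fission-matrix route to the adjoint flux can be sketched as a power iteration for the dominant left eigenvector of the fission matrix; the 2-region matrix below is a toy stand-in, not a real tally:

```python
def adjoint_flux_from_fission_matrix(F, iters=200):
    """Estimate the adjoint flux as the dominant *left* eigenvector of the
    fission matrix F, via power iteration v <- v F with renormalization.
    (The dominant right eigenvector would give the fission source; the left
    one gives the importance map used to bias the simulation.)"""
    n = len(F)
    v = [0.0] * n
    v[0] = 1.0                      # arbitrary nonnegative starting guess
    for _ in range(iters):
        w = [sum(v[i] * F[i][j] for i in range(n)) for j in range(n)]
        s = sum(w)
        v = [x / s for x in w]
    return v

# Hypothetical 2-region fission matrix (next-generation transfer rates).
F = [[0.8, 0.3],
     [0.1, 0.6]]
print([round(x, 3) for x in adjoint_flux_from_fission_matrix(F)])  # → [0.5, 0.5]
```

For this asymmetric F the left eigenvector (0.5, 0.5) differs from the right one, illustrating why the adjoint must be computed separately from the forward source.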

  7. Convex nonnegative matrix factorization with manifold regularization.

    Science.gov (United States)

    Hu, Wenjun; Choi, Kup-Sze; Wang, Peiliang; Jiang, Yunliang; Wang, Shitong

    2015-03-01

    Nonnegative Matrix Factorization (NMF) has been extensively applied in many areas, including computer vision, pattern recognition, text mining, and signal processing. However, nonnegative entries are usually required for the data matrix in NMF, which limits its application. Besides, while the basis and encoding vectors obtained by NMF can represent the original data in low dimension, the representations do not always reflect the intrinsic geometric structure embedded in the data. Motivated by manifold learning and Convex NMF (CNMF), we propose a novel matrix factorization method called Graph Regularized and Convex Nonnegative Matrix Factorization (GCNMF) by introducing a graph regularized term into CNMF. The proposed matrix factorization technique not only inherits the intrinsic low-dimensional manifold structure, but also allows the processing of mixed-sign data matrix. Clustering experiments on nonnegative and mixed-sign real-world data sets are conducted to demonstrate the effectiveness of the proposed method.
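The manifold (graph) regularization term at the heart of GCNMF can be illustrated on a toy graph; the adjacency and encodings below are invented, and this shows only the regularizer, not the full factorization:

```python
import numpy as np

# Adjacency of a tiny sample graph (5 samples); W[i, j] = 1 if neighbors.
W = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)
D = np.diag(W.sum(axis=1))
L = D - W                      # graph Laplacian

# Low-dimensional encodings V (one row of features, columns = samples).
# The manifold term tr(V L V^T) = 0.5 * sum_ij W_ij * (v_i - v_j)^2
# penalizes encodings that differ between neighboring samples.
V = np.array([[1.0, 1.1, 0.9, 5.0, 5.2]])
reg = float(np.trace(V @ L @ V.T))
print(round(reg, 2))  # → 0.1
```

Here the two clusters {0, 1, 2} and {3, 4} have internally similar encodings, so the penalty stays small; adding this term to the CNMF objective pulls the learned encodings toward the graph structure.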

  8. Space-Time Convolutional Codes over Finite Fields and Rings for Systems with Large Diversity Order

    Directory of Open Access Journals (Sweden)

    B. F. Uchôa-Filho

    2008-06-01

    Full Text Available We propose a convolutional encoder over the finite ring of integers modulo p^k, ℤ_{p^k}, where p is a prime number and k is any positive integer, to generate a space-time convolutional code (STCC). Under this structure, we prove three properties related to the generator matrix of the convolutional code that can be used to simplify the code search procedure for STCCs over ℤ_{p^k}. Some STCCs of large diversity order (≥4) designed under the trace criterion for n = 2, 3, and 4 transmit antennas are presented for various PSK signal constellations.
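Encoding over ℤ_{p^k} can be sketched with a rate-1/2 convolutional encoder over ℤ_4 (p = 2, k = 2); the generator taps are arbitrary illustrations, not the paper's STCCs:

```python
def conv_encode_zpk(info, gens, m, mod):
    """Rate-1/n convolutional encoder over the ring Z_mod (mod = p**k).
    `gens` lists generator tap vectors of length m+1; all arithmetic is
    taken modulo p**k. Illustrative sketch only, not a space-time design."""
    state = [0] * m                     # shift register contents
    out = []
    for u in info:
        window = [u] + state            # current input plus memory
        out.append(tuple(
            sum(g_i * w_i for g_i, w_i in zip(g, window)) % mod
            for g in gens))
        state = [u] + state[:-1]        # shift the register
    return out

# Z_4 example (p = 2, k = 2), two generators with memory m = 2.
codeword = conv_encode_zpk([1, 3, 2], gens=[[1, 0, 1], [1, 1, 1]], m=2, mod=4)
print(codeword)  # → [(1, 1), (3, 0), (3, 2)]
```

In the space-time setting, each n-tuple of ring outputs would be mapped to a PSK symbol per transmit antenna; the ring structure is what the paper's generator-matrix properties exploit to prune the code search.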

  9. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  10. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously, while others solve the equations separately from each other. The coupled codes require a large computer capacity and have thus far had limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high-quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)

  11. MIMO-OFDM Chirp Waveform Diversity Design and Implementation Based on Sparse Matrix and Correlation Optimization

    Directory of Open Access Journals (Sweden)

    Wang Wen-qin

    2015-02-01

    Full Text Available The waveforms used in Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) should have a large time-bandwidth product and good ambiguity function performance. A scheme to design multiple orthogonal MIMO SAR Orthogonal Frequency Division Multiplexing (OFDM) chirp waveforms by combinatorial sparse matrix and correlation optimization is proposed. First, the problem of MIMO SAR waveform design amounts to the associated design of hopping frequencies and amplitudes. Then an iterative exhaustive search algorithm is adopted to optimally design the code matrix, with the constraints of minimizing the block correlation coefficient of the sparse matrix and the sum of cross-correlation peaks. The amplitude matrix is then adaptively designed by minimizing the cross-correlation peaks with the genetic algorithm. Additionally, the impacts of the waveform number, the hopping frequency interval and the selectable frequency index are also analyzed. The simulation results verify that the proposed scheme can design multiple orthogonal large time-bandwidth product OFDM chirp waveforms with low cross-correlation peaks and sidelobes, and that it improves the ambiguity performance.
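The cross-correlation-peak criterion driving the optimization above can be illustrated by comparing two discrete chirps with hopped start frequencies; all parameters are arbitrary illustrations, not the paper's design:

```python
import cmath

def chirp(n_samples, f0, k):
    """Discrete baseband chirp with start frequency f0 (cycles/sample)
    and chirp rate k (cycles/sample^2)."""
    return [cmath.exp(2j * cmath.pi * (f0 * n + 0.5 * k * n * n))
            for n in range(n_samples)]

def peak_xcorr(a, b):
    """Peak magnitude of the cross-correlation over all lags, normalized
    by the sequence length (so a unit-amplitude autocorrelation peaks at 1)."""
    n = len(a)
    best = 0.0
    for lag in range(-n + 1, n):
        acc = sum(a[i] * b[i - lag].conjugate()
                  for i in range(max(0, lag), min(n, n + lag)))
        best = max(best, abs(acc) / n)
    return best

s1 = chirp(128, f0=0.05, k=0.002)
s2 = chirp(128, f0=0.25, k=0.002)    # hopped start frequency
print(round(peak_xcorr(s1, s1), 2))  # → 1.0 (autocorrelation peak)
print(peak_xcorr(s1, s2) < 0.3)      # → True (low cross-correlation peak)
```

A waveform-design search like the one in the abstract would evaluate this peak over all waveform pairs and pick the hopping-frequency code matrix minimizing the sum of such peaks.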

  12. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for another program. Therefore, this program can be used for automation of routine work in the department of radiology
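The two-stage lookup described above (organ code first, then a pathology table selected by the code's first digit) can be sketched as follows; the dictionary entries and names are invented, with only the example code '131.3661' taken from the text:

```python
# Minimal sketch of an ACR-style lookup: an organ code determines which
# pathology table applies (keyed here by its first digit), and the final
# code concatenates "organ.pathology". Entries are illustrative, not the
# actual ACR dictionary.
ORGANS = {"abdomen": "131"}              # organ name -> organ code
PATHOLOGY = {"1": {"abscess": "3661"}}   # first digit -> pathology table

def acr_code(organ_name, pathology_name):
    organ = ORGANS[organ_name]
    table = PATHOLOGY[organ[0]]          # table chosen by the first digit
    return f"{organ}.{table[pathology_name]}"

print(acr_code("abdomen", "abscess"))  # → 131.3661
```

Because the mapping is a pure function of its inputs, decoding a stored code back to its dictionary entries can reuse the same tables in reverse, as the abstract notes for the original program.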

  13. Optimal Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Kroon, I. B.; Faber, Michael Havbro

    1994-01-01

    Calibration of partial safety factors is considered in general, including classes of structures where no code exists beforehand. The partial safety factors are determined such that the difference between the reliability for the different structures in the class considered and a target reliability level is minimized. Code calibration on a decision-theoretical basis is also considered, and it is shown how target reliability indices can be calibrated. Results from code calibration for rubble mound breakwater designs are shown.
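The calibration idea, choosing a partial safety factor that minimizes the deviation of the class members' reliabilities from the target, can be sketched as a one-dimensional grid search; the linear reliability-index models below are hypothetical:

```python
def calibrate_partial_factor(betas, target, gammas):
    """Pick the partial safety factor gamma minimizing the squared deviation
    of the achieved reliability indices beta_i(gamma) from the target,
    summed over the structures in the class (simple grid search)."""
    def objective(g):
        return sum((beta(g) - target) ** 2 for beta in betas)
    return min(gammas, key=objective)

# Hypothetical reliability-index models beta_i(gamma) for three designs
# in the class (in practice these come from reliability analyses).
betas = [lambda g: 2.0 + 1.5 * g,
         lambda g: 1.8 + 1.6 * g,
         lambda g: 2.2 + 1.4 * g]
grid = [1.0 + 0.01 * i for i in range(101)]   # gamma in [1.0, 2.0]
best = calibrate_partial_factor(betas, target=4.3, gammas=grid)
print(round(best, 2))  # → 1.53
```

A single gamma cannot put every design exactly at the target, which is why the objective is a (possibly weighted) sum of squared deviations rather than an exact constraint.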

  14. ENDF utility codes version 6.8

    International Nuclear Information System (INIS)

    McLaughlin, P.K.

    1992-01-01

    Description and operating instructions are given for a package of utility codes operating on evaluated nuclear data files in the ENDF-5 and ENDF-6 formats. Included are the data checking codes CHECKER, FIZCON and PSYCHE; the code INTER for retrieving thermal cross-sections and some other data; the graphical plotting codes PLOTEF and GRALIB; the graphic-device interface subroutine library INTLIB; and the file maintenance and retrieval codes LISTEF, SETMDC, GETMAT and STANEF. This program package, which is designed for CDC, IBM, DEC and PC computers, can be obtained on magnetic tape or floppy diskette, free of charge, from the IAEA Nuclear Data Section. (author)

  15. Recursive construction of (J,L) QC LDPC codes with girth 6

    Directory of Open Access Journals (Sweden)

    Mohammad Gholami

    2016-06-01

    Full Text Available In this paper, a recursive algorithm is presented to generate some exponent matrices which correspond to Tanner graphs with girth at least 6. For a J×L exponent matrix E, the lower bound Q(E) is obtained explicitly such that (J,L) QC LDPC codes with girth at least 6 exist for any circulant permutation matrix (CPM) size m ≥ Q(E). The results show that the exponent matrices constructed with our recursive algorithm have smaller lower bounds than the ones proposed recently with girth 6.
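The girth-at-least-6 property of an exponent matrix can be checked directly: the CPM-expanded Tanner graph has no 4-cycle iff no 2×2 submatrix of E has its exponents alternately summing to zero mod m (the standard condition for the QC-LDPC setup, stated here as background rather than the paper's algorithm):

```python
from itertools import combinations

def girth_at_least_6(E, m):
    """Girth >= 6 for the QC-LDPC code expanded from exponent matrix E with
    circulant size m iff no 2x2 submatrix satisfies
    E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1] == 0 (mod m)."""
    rows, cols = len(E), len(E[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (E[i1][j1] - E[i1][j2] + E[i2][j2] - E[i2][j1]) % m == 0:
                return False
    return True

# Classic example E[i][j] = i*j: the alternating sum is (i1-i2)*(j1-j2),
# which is nonzero mod m for a sufficiently large CPM size m.
E = [[i * j for j in range(4)] for i in range(3)]
print(girth_at_least_6(E, m=7))  # → True
print(girth_at_least_6(E, m=2))  # → False (small m reintroduces 4-cycles)
```

A lower bound like the paper's Q(E) is exactly the smallest m above which this check succeeds for the constructed matrices.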

  16. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    Energy Technology Data Exchange (ETDEWEB)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa [Japan Atomic Energy Agency, Nuclear Safety Research Center, Tokai, Ibaraki (Japan); Saitou, Hiroaki [ITOCHU Techno-Solutions Corp., Tokyo (Japan)

    2012-07-15

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing the fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, which is the counterpart of this description, gives detailed explanations of operation method of FEMAXI-7 code and its related codes, methods of Input/Output, methods of source code modification, features of subroutine modules, and internal variables in a specific manner in order to facilitate users to perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  17. Input/output manual of light water reactor fuel performance code FEMAXI-7 and its related codes

    International Nuclear Information System (INIS)

    Suzuki, Motoe; Udagawa, Yutaka; Nagase, Fumihisa; Saitou, Hiroaki

    2012-07-01

    A light water reactor fuel analysis code FEMAXI-7 has been developed for the purpose of analyzing fuel behavior in normal conditions and in anticipated transient conditions. Numerous functional improvements and extensions have been incorporated in FEMAXI-7, which has been fully disclosed in the code model description published recently as JAEA-Data/Code 2010-035. The present manual, the counterpart of that description, gives detailed explanations of how to operate the FEMAXI-7 code and its related codes, input/output methods, source code modification, the features of the subroutine modules, and the internal variables, in order to help users perform a fuel analysis with FEMAXI-7. This report includes some descriptions which are modified from the original contents of JAEA-Data/Code 2010-035. A CD-ROM is attached as an appendix. (author)

  18. Code of Ethics for Electrical Engineers

    Science.gov (United States)

    Matsuki, Junya

    The Institute of Electrical Engineers of Japan (IEEJ) has recently established rules of practice for its members, based on its code of ethics enacted in 1998. In this paper, first, the characteristics of the IEEJ 1998 ethical code are explained in detail and compared with the ethical codes of other fields of engineering. Secondly, the content that should be included in a modern code of ethics for electrical engineers is discussed. Thirdly, the newly established rules of practice and the modified code of ethics are presented. Finally, results of a questionnaire on the new ethical code and rules, answered on May 23, 2007, by 51 electrical and electronic students of the University of Fukui, are shown.

  19. MARS code manual volume I: code structure, system models, and solution methods

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Kim, Kyung Doo; Bae, Sung Won; Jeong, Jae Jun; Lee, Seung Wook; Hwang, Moon Kyu; Yoon, Churl

    2010-02-01

    The Korea Atomic Energy Research Institute (KAERI) conceived and started the development of the MARS code with the main objective of producing a state-of-the-art realistic thermal hydraulic systems analysis code with multi-dimensional analysis capability. MARS achieves this objective by very tightly integrating the one-dimensional RELAP5/MOD3 with the multi-dimensional COBRA-TF code. The method of integration of the two codes is based on dynamic link library techniques, and the system pressure equation matrices of both codes are implicitly integrated and solved simultaneously. In addition, the Equation-Of-State (EOS) for light water was unified by replacing the EOS of COBRA-TF with that of RELAP5. This theory manual provides complete information on the code structure and major functions of MARS, including the code architecture, hydrodynamic model, heat structures, trip/control system and point reactor kinetics model, and should therefore be very useful for code users. The overall structure of the manual is modeled on that of the RELAP5 manual, and as such its layout is very similar to RELAP5's. This similarity to the RELAP5 input is intentional, as it allows minimum modification between the inputs of RELAP5 and MARS3.1. The MARS3.1 development team would like to express its appreciation to the RELAP5 Development Team and the USNRC for making this manual possible.

  20. Deciphering the genetic regulatory code using an inverse error control coding framework.

    Energy Technology Data Exchange (ETDEWEB)

    Rintoul, Mark Daniel; May, Elebeoba Eni; Brown, William Michael; Johnston, Anna Marie; Watson, Jean-Paul

    2005-03-01

    We have found that developing a computational framework for reconstructing error control codes for engineered data and ultimately for deciphering genetic regulatory coding sequences is a challenging and uncharted area that will require advances in computational technology for exact solutions. Although exact solutions are desired, computational approaches that yield plausible solutions would be considered sufficient as a proof of concept to the feasibility of reverse engineering error control codes and the possibility of developing a quantitative model for understanding and engineering genetic regulation. Such evidence would help move the idea of reconstructing error control codes for engineered and biological systems from the high risk high payoff realm into the highly probable high payoff domain. Additionally this work will impact biological sensor development and the ability to model and ultimately develop defense mechanisms against bioagents that can be engineered to cause catastrophic damage. Understanding how biological organisms are able to communicate their genetic message efficiently in the presence of noise can improve our current communication protocols, a continuing research interest. Towards this end, project goals include: (1) Develop parameter estimation methods for n for block codes and for n, k, and m for convolutional codes. Use methods to determine error control (EC) code parameters for gene regulatory sequence. (2) Develop an evolutionary computing computational framework for near-optimal solutions to the algebraic code reconstruction problem. Method will be tested on engineered and biological sequences.
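
    As a toy illustration of the "algebraic code reconstruction problem" named in the goals, consider its noiseless linear-algebra core: given observed codewords of an unknown [n,k] binary block code, a basis of parity checks is the GF(2) nullspace of the codeword matrix. The Python sketch below is a hedged simplification (all names invented); the project itself targets the much harder noisy and convolutional cases with evolutionary search.

```python
def gf2_rref(rows, ncols):
    """Reduced row echelon form over GF(2); rows are lists of 0/1 bits.
    Returns the nonzero rows and the list of pivot columns."""
    rows = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    return rows[:r], pivots

def reconstruct_parity_checks(codewords, n):
    """Given observed codewords of an unknown [n,k] binary linear code,
    return a nullspace basis: vectors h with h . c = 0 (mod 2) for all c."""
    basis, pivots = gf2_rref(codewords, n)
    checks = []
    for f in (c for c in range(n) if c not in pivots):
        h = [0] * n
        h[f] = 1
        for i, p in enumerate(pivots):
            h[p] = basis[i][f]
        checks.append(h)
    return checks
```

    Applied to the 16 codewords of a [7,4] Hamming-style code, this recovers three independent parity checks, i.e. a valid parity-check matrix.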

  1. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Rollstin, J.A.; Chanin, D.I.; Jow, H.N.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projections, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management.

  2. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T. (Sandia National Labs., Albuquerque, NM (USA)); Rollstin, J.A. (GRAM, Inc., Albuquerque, NM (USA)); Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  3. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Jow, H.N.; Sprung, J.L.; Ritchie, L.T.; Rollstin, J.A.; Chanin, D.I.

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previously used CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. Volume I, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems. Volume II, the Model Description, describes the underlying models that are implemented in the code, and Volume III, the Programmer's Reference Manual, describes the code's structure and database management. 59 refs., 14 figs., 15 tabs.

  4. NLTE steady-state response matrix method.

    Science.gov (United States)

    Faussurier, G.; More, R. M.

    2000-05-01

    A connection between atomic kinetics and non-equilibrium thermodynamics has recently been established by using a collisional-radiative model modified to include line absorption. The calculated net emission can be expressed as a non-local thermodynamic equilibrium (NLTE) symmetric response matrix. In this paper, the connection is extended to both the average-atom model and Busquet's model (RAdiative-Dependent IOnization Model, RADIOM). The main properties of the response matrix remain valid. The RADIOM source function found in the literature leads to a diagonal response matrix, stressing the absence of any frequency redistribution among the frequency groups at this order of calculation.

  5. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. © Society for the Experimental Analysis of Behavior.

  6. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    Energy Technology Data Exchange (ETDEWEB)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany); Schuetze, Jochen [ANSYS Germany GmbH, Darmstadt (Germany); Frank, Thomas [ANSYS Germany GmbH, Otterfing (Germany); Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo [Karlsruher Institut fuer Technologie (KIT), Eggenstein-Leopoldshafen (Germany)

    2011-07-15

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4, only the neutron kinetics module of DYN3D is used; fluid dynamics and related transport phenomena in the reactor's coolant, as well as fuel behavior, are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)
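
    The data exchange described above (the neutronics code supplies a heat release rate, the thermal hydraulics code returns temperatures that feed back on the neutronics) is, at its core, a fixed-point iteration between two solvers. The Python toy below mimics that loop with a one-number "neutronics" model with Doppler-style feedback and a one-number "thermal hydraulics" model; every constant is invented for illustration and has no connection to DYN3D's actual models.

```python
def couple(p0=100.0, alpha=-0.002, t0=300.0, r=0.5, tol=1e-8, itmax=100):
    """Toy Picard (fixed-point) iteration mimicking a two-code coupling:
    'neutronics' returns power from fuel temperature (negative feedback),
    'thermal hydraulics' returns temperature from power."""
    def neutronics(t_fuel):
        return p0 * (1.0 + alpha * (t_fuel - t0))

    def thermal_hydraulics(power):
        return t0 + r * power

    p, t = p0, t0
    for _ in range(itmax):
        t = thermal_hydraulics(p)       # exchange: power -> temperature
        p_new = neutronics(t)           # exchange: temperature -> power
        if abs(p_new - p) < tol:        # converged coupled solution
            return p_new, t
        p = p_new
    return p, t
```

    With these made-up coefficients the iteration contracts by a factor of 0.1 per sweep, settling quickly on the coupled equilibrium power p0/(1 - p0·alpha·r).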

  7. Development of multi-physics code systems based on the reactor dynamics code DYN3D

    International Nuclear Information System (INIS)

    Kliem, Soeren; Gommlich, Andre; Grahn, Alexander; Rohde, Ulrich; Schuetze, Jochen; Frank, Thomas; Gomez Torres, Armando M.; Sanchez Espinoza, Victor Hugo

    2011-01-01

    The reactor dynamics code DYN3D has been coupled with the CFD code ANSYS CFX and the 3D thermal hydraulic core model FLICA4. In the coupling with ANSYS CFX, DYN3D calculates the neutron kinetics and the fuel behavior including the heat transfer to the coolant. The physical data interface between the codes is the volumetric heat release rate into the coolant. In the coupling with FLICA4, only the neutron kinetics module of DYN3D is used; fluid dynamics and related transport phenomena in the reactor's coolant, as well as fuel behavior, are calculated by FLICA4. The correctness of the coupling of DYN3D with both thermal hydraulic codes was verified by the calculation of different test problems. These test problems were set up in such a way that comparison with the DYN3D stand-alone code was possible. This included steady-state and transient calculations of a mini-core consisting of nine real-size PWR fuel assemblies with ANSYS CFX/DYN3D, as well as mini-core and full-core steady-state calculations using FLICA4/DYN3D. (orig.)

  8. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient conditions; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow distribution among parallel channels, coupled or not by conduction across the plates, is computed for imposed pressure-drop or flowrate conditions, constant or variable in time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code; its complement, FLID, is a one-channel, two-dimensional code. (authors)

  9. Matrix groups for undergraduates

    CERN Document Server

    Tapp, Kristopher

    2016-01-01

    Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...

  10. Optimizing Sparse Matrix-Multiple Vectors Multiplication for Nuclear Configuration Interaction Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-14

    Obtaining highly accurate predictions of the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4× speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
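
    The CSB format and tuned kernels from the paper are not reproduced here, but the baseline they improve on, CSR-based SpMM together with an SpMM-T that reuses the same CSR arrays, can be sketched in plain Python. This is illustrative, not performance-oriented:

```python
def spmm(ptr, idx, val, X, nvec):
    """Y = A @ X with A in CSR form (ptr/idx/val) and X a dense matrix
    stored as a list of rows of length nvec. Each row of A gathers from X."""
    Y = [[0.0] * nvec for _ in range(len(ptr) - 1)]
    for i in range(len(ptr) - 1):
        yi = Y[i]
        for k in range(ptr[i], ptr[i + 1]):
            a, xj = val[k], X[idx[k]]
            for v in range(nvec):
                yi[v] += a * xj[v]
    return Y

def spmm_t(ptr, idx, val, X, ncols, nvec):
    """Y = A^T @ X reusing the same CSR arrays: each row of A now
    scatters into Y, the opposite access pattern to spmm."""
    Y = [[0.0] * nvec for _ in range(ncols)]
    for i in range(len(ptr) - 1):
        xi = X[i]
        for k in range(ptr[i], ptr[i + 1]):
            a, yj = val[k], Y[idx[k]]
            for v in range(nvec):
                yj[v] += a * xi[v]
    return Y
```

    Note the asymmetry: A·X gathers from X while Aᵀ·X must scatter into Y. Needing both access patterns efficiently (without storing an explicit transpose) is one motivation for the CSB format used in the paper.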

  11. Quantum Chemical Calculations Using Accelerators: Migrating Matrix Operations to the NVIDIA Kepler GPU and the Intel Xeon Phi.

    Science.gov (United States)

    Leang, Sarom S; Rendell, Alistair P; Gordon, Mark S

    2014-03-11

    Increasingly, modern computer systems comprise a multicore general-purpose processor augmented with a number of special purpose devices or accelerators connected via an external interface such as a PCI bus. The NVIDIA Kepler Graphical Processing Unit (GPU) and the Intel Xeon Phi are two examples of such accelerators. Accelerators offer peak performances that can be well above those of the host processor. How to exploit this heterogeneous environment for legacy application codes is not, however, straightforward. This paper considers how matrix operations in typical quantum chemical calculations can be migrated to the GPU and Phi systems. Double precision general matrix multiply operations are endemic in electronic structure calculations, especially methods that include electron correlation, such as density functional theory, second order perturbation theory, and coupled cluster theory. The use of approaches that automatically determine whether to use the host or an accelerator, based on problem size, is explored, with computations occurring on the accelerator and/or the host. For data transfers over PCI-e, the GPU provides the best overall performance for data sizes up to 4096 MB, with consistent upload and download rates between 5-5.6 GB/s and 5.4-6.3 GB/s, respectively. The GPU outperforms the Phi for both square and nonsquare matrix multiplications.
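
    The host-versus-accelerator decision described above can be caricatured as a crossover cost model: offload only when the accelerator's compute advantage outweighs the PCI-e transfer cost. In the Python sketch below, the throughput numbers are placeholders loosely inspired by the transfer rates quoted in the abstract, not measurements, and the function name is invented:

```python
def choose_device(n, host_gflops=50.0, gpu_gflops=1000.0,
                  upload_gbs=5.5, download_gbs=6.0):
    """Pick host or GPU for an n x n double-precision GEMM by comparing
    host compute time against GPU compute plus PCI-e transfer time.
    All throughput figures are illustrative placeholders."""
    flops = 2.0 * n ** 3                       # multiply-add count for GEMM
    t_host = flops / (host_gflops * 1e9)
    t_transfer = (2 * 8 * n * n) / (upload_gbs * 1e9) \
               + (8 * n * n) / (download_gbs * 1e9)  # 2 inputs up, 1 result down
    t_gpu = flops / (gpu_gflops * 1e9) + t_transfer
    return "gpu" if t_gpu < t_host else "host"
```

    Because GEMM does O(n³) work on O(n²) data, transfer cost dominates for small matrices and the host wins; past a crossover size the accelerator wins.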

  12. Paracantor: A two group, two region reactor code

    Energy Technology Data Exchange (ETDEWEB)

    Stone, Stuart

    1956-07-01

    Paracantor I is a two-energy-group, two-region, time-independent reactor code which obtains a closed solution for a critical reactor assembly. The code deals with cylindrical reactors of finite length and with a radial reflector of finite thickness. It is programmed for the I.B.M. Magnetic Drum Data-Processing Machine, Type 650. The limited memory space available does not permit a flux solution to be included in the basic Paracantor code. A supplementary code, Paracantor II, has been programmed which computes fluxes, including adjoint fluxes, from the output of Paracantor I.

  13. Coding theory and cryptography the essentials

    CERN Document Server

    Hankerson, DC; Leonard, DA; Phelps, KT; Rodger, CA; Wall, JR; Wall, J R

    2000-01-01

    Containing data on number theory, encryption schemes, and cyclic codes, this highly successful textbook, proven by the authors in a popular two-quarter course, presents coding theory, construction, encoding, and decoding of specific code families in an "easy-to-use" manner appropriate for students with only a basic background in mathematics, offering revised and updated material on the Berlekamp-Massey decoding algorithm and convolutional codes. Introducing the mathematics as it is needed and providing exercises with solutions, this edition includes an extensive section on cryptography, desig

  14. Bayesian decision support for coding occupational injury data.

    Science.gov (United States)

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
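
    A minimal sketch of the single-word model and the review filter may make the pipeline concrete. This is an illustrative skeleton, not the paper's implementation: class and function names are invented, the smoothing is plain add-one, and the "prediction strength" here is simply the normalized posterior of the top code.

```python
import math
from collections import Counter, defaultdict

class SingleWordNB:
    """Illustrative single-word naive Bayes coder with add-one smoothing
    (a stand-in for the paper's SW model, not its implementation)."""
    def __init__(self):
        self.word_counts = defaultdict(Counter)
        self.code_counts = Counter()
        self.vocab = set()

    def fit(self, narratives, codes):
        for text, code in zip(narratives, codes):
            self.code_counts[code] += 1
            for w in text.lower().split():
                self.word_counts[code][w] += 1
                self.vocab.add(w)

    def top_k(self, text, k=5):
        """Return the k most probable codes, plus the normalized posterior
        of the top code as a 'prediction strength'."""
        total = sum(self.code_counts.values())
        scores = {}
        for code, n in self.code_counts.items():
            s = math.log(n / total)
            denom = sum(self.word_counts[code].values()) + len(self.vocab)
            for w in text.lower().split():
                s += math.log((self.word_counts[code][w] + 1) / denom)
            scores[code] = s
        ranked = sorted(scores, key=scores.get, reverse=True)[:k]
        mx = max(scores.values())
        weights = {c: math.exp(v - mx) for c, v in scores.items()}
        return ranked, weights[ranked[0]] / sum(weights.values())

def autocode_or_flag(model, text, threshold=0.9):
    """Autocode when the top prediction is strong enough; otherwise
    return None to flag the case for manual review."""
    ranked, strength = model.top_k(text)
    return ranked[0] if strength >= threshold else None
```

    A case whose top prediction falls below the threshold returns None and would be routed to a human coder, with the top-k list shown as decision support.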

  15. Full scale seismic simulation of a nuclear reactor with parallel finite element analysis code for assembled structure

    International Nuclear Information System (INIS)

    Yamada, Tomonori

    2010-01-01

    The safety requirements of nuclear power plants attract much attention nowadays. With growing computing power, numerical simulation is one of the key technologies for meeting these requirements. The Center for Computational Science and e-Systems of the Japan Atomic Energy Agency has been developing a finite element analysis code for assembled structures to accurately evaluate the structural integrity of a nuclear power plant in its entirety under seismic events. Because a nuclear power plant is a very large assembled structure with tens of millions of mechanical components, the finite element model of each component is assembled into one structure and the non-conforming meshes of the mechanical components are bonded together inside the code. The main technique used to bond these mechanical components is a triple sparse matrix multiplication involving the multipoint constraints and the global stiffness matrix. In our code, this procedure is conducted in a component-by-component manner, so that the working memory size and computing time for this multiplication remain acceptable in the current computing environment. As an illustrative example, a seismic simulation of a real nuclear reactor, the High Temperature Engineering Test Reactor located at the O-arai research and development center of JAEA, with 80 major mechanical components was conducted. Consequently, our code successfully simulated detailed elasto-plastic deformation of the nuclear reactor, and its computational performance was investigated. (author)
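
    The "triple sparse matrix multiplication with multipoint constraints" mentioned above is the classical master-slave reduction K_r = Tᵀ K T, where T maps the retained (master) degrees of freedom to the full set. A dense Python toy (the real code does this sparsely and component by component; all names here are invented) might look like:

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def assemble_with_constraints(K, T):
    """Reduced stiffness K_r = T^T K T: T encodes the multipoint
    constraints that tie component meshes together, so the reduced
    system acts only on the independent (master) degrees of freedom."""
    return matmul(transpose(T), matmul(K, T))
```

    For example, tying the two degrees of freedom of K = [[2,-1],[-1,2]] together with T = [[1],[1]] reduces the system to the single-dof stiffness [[2]].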

  16. A code reviewer assignment model incorporating the competence differences and participant preferences

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2016-03-01

    Full Text Available A good assignment of code reviewers can effectively utilize intellectual resources, assure code quality and improve programmers' skills in software development. However, little research on reviewer assignment for code review has been found. In this study, a code reviewer assignment model is created based on participants' preferences regarding review assignments. With a constraint on the smallest size of a review group, the model is optimized to maximize review outcomes and avoid the negative impact of a “mutual admiration society”. This study shows that reviewer assignment strategies incorporating either the reviewers' preferences or the authors' preferences yield much greater improvement than a random assignment, and that the strategy incorporating the authors' preferences yields a greater improvement than the one incorporating the reviewers' preferences. However, when the reviewers' and authors' preference matrices are merged, the improvement becomes moderate. The study indicates that the majority of the participants have a strong wish to work with the reviewers and authors of the highest competence. If we want to satisfy the preferences of both reviewers and authors at the same time, the overall improvement of learning outcomes may not be the best.
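
    The abstract does not give the optimization model, but the ingredients it names (a preference matrix, a smallest review-group size, and suppression of "mutual admiration" pairs) can be sketched as a greedy assignment. Everything below is an invented illustration, not the authors' algorithm:

```python
def assign_reviewers(pref, min_group=2):
    """Greedy sketch: pref[r][a] is reviewer r's preference for reviewing
    author a's code (the diagonal is ignored). Every author receives at
    least `min_group` reviewers; reciprocal ('mutual admiration') pairs
    are avoided whenever an alternative reviewer exists."""
    n = len(pref)
    assigned = set()                       # (reviewer, author) pairs
    for a in range(n):
        ranked = sorted((r for r in range(n) if r != a),
                        key=lambda r: pref[r][a], reverse=True)
        # prefer reviewers whose own code author a is NOT already reviewing
        picked = [r for r in ranked if (a, r) not in assigned][:min_group]
        if len(picked) < min_group:        # fall back to reciprocal pairs
            extras = [r for r in ranked if r not in picked]
            picked += extras[:min_group - len(picked)]
        for r in picked:
            assigned.add((r, a))
    return sorted(assigned)
```

    Reciprocal pairs are used only as a last resort, once every non-reciprocal candidate is exhausted; the paper's actual model optimizes review outcomes globally rather than greedily.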

  17. Overview of hypersonic CFD code calibration studies

    Science.gov (United States)

    Miller, Charles G.

    1987-01-01

    The topics are presented in viewgraph form and include the following: definitions of computational fluid dynamics (CFD) code validation; the climate in hypersonics at LaRC when the first 'designed' CFD code calibration study was initiated; methodology from the experimentalist's perspective; hypersonic facilities; measurement techniques; and CFD code calibration studies.

  18. Matrix factorization-based data fusion for the prediction of lncRNA-disease associations.

    Science.gov (United States)

    Fu, Guangyuan; Wang, Jun; Domeniconi, Carlotta; Yu, Guoxian

    2018-05-01

    Long non-coding RNAs (lncRNAs) play crucial roles in complex disease diagnosis, prognosis, prevention and treatment, but only a small portion of lncRNA-disease associations have been experimentally verified. Various computational models have been proposed to identify lncRNA-disease associations by integrating heterogeneous data sources. However, existing models generally ignore the intrinsic structure of data sources or treat them as equally relevant, while they may not be. To accurately identify lncRNA-disease associations, we propose a Matrix Factorization based LncRNA-Disease Association prediction model (MFLDA in short). MFLDA decomposes data matrices of heterogeneous data sources into low-rank matrices via matrix tri-factorization to explore and exploit their intrinsic and shared structure. MFLDA can select and integrate the data sources by assigning different weights to them. An iterative solution is further introduced to simultaneously optimize the weights and low-rank matrices. Next, MFLDA uses the optimized low-rank matrices to reconstruct the lncRNA-disease association matrix and thus to identify potential associations. In 5-fold cross validation experiments to identify verified lncRNA-disease associations, MFLDA achieves an area under the receiver operating characteristic curve (AUC) of 0.7408, at least 3% higher than those given by state-of-the-art data fusion based computational models. An empirical study on identifying masked lncRNA-disease associations again shows that MFLDA can identify potential associations more accurately than competing models. A case study on identifying lncRNAs associated with breast, lung and stomach cancers shows that 38 out of 45 (84%) of the associations predicted by MFLDA are supported by recent biomedical literature, and further proves the capability of MFLDA in identifying novel lncRNA-disease associations. MFLDA is a general data fusion framework, and as such it can be adopted to predict associations between other biological
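
    MFLDA's weighted matrix tri-factorization across multiple sources is beyond a short sketch, but the underlying low-rank idea (fit latent factors to the observed entries of the association matrix, then read predictions for unobserved pairs off the reconstruction) can be illustrated with a plain two-factor toy in Python. Names and hyperparameters are invented:

```python
import random

def factorize(M, rank=2, steps=2000, lr=0.05, reg=0.01, seed=0):
    """Fit M ~ U V^T on the observed (non-None) entries by SGD with L2
    regularization. A plain two-factor toy, not MFLDA's weighted
    tri-factorization."""
    rng = random.Random(seed)
    n, m = len(M), len(M[0])
    U = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(n)]
    V = [[rng.uniform(-0.1, 0.1) for _ in range(rank)] for _ in range(m)]
    obs = [(i, j) for i in range(n) for j in range(m) if M[i][j] is not None]
    for _ in range(steps):
        for i, j in obs:
            e = M[i][j] - sum(U[i][k] * V[j][k] for k in range(rank))
            for k in range(rank):
                u, v = U[i][k], V[j][k]
                U[i][k] += lr * (e * v - reg * u)
                V[j][k] += lr * (e * u - reg * v)
    return U, V

def predict(U, V, i, j):
    """Association score for pair (i, j), read off the reconstruction."""
    return sum(U[i][k] * V[j][k] for k in range(len(U[0])))
```

    In a toy matrix where one entry is masked as None, its prediction lands near the value implied by the similar rows and columns around it, which is exactly how the reconstructed association matrix suggests novel pairs.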

  19. Put Your Ethics Code to Work.

    Science.gov (United States)

    Eveslage, Tom

    1996-01-01

    States that in order for a code of ethics to be effective, it must make a difference. Discusses some qualities and considerations found in a good code of ethics, including being in accordance with accepted professional values. (PA)

  20. Survey of coded aperture imaging

    International Nuclear Information System (INIS)

    Barrett, H.H.

    1975-01-01

    The basic principle and limitations of coded aperture imaging for x-ray and gamma cameras are discussed. Current trends include (1) use of time-varying apertures, (2) use of "dilute" apertures with transmission much less than 50%, and (3) attempts to derive transverse tomographic sections, unblurred by other planes, from coded images

  1. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  2. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulation of beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was used in simulations for the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  3. User's manual for the TMAD code

    International Nuclear Information System (INIS)

    Finfrock, S.H.

    1995-01-01

    This document serves as the User's Manual for the TMAD code system, which includes the TMAD code and the LIBMAKR code. The TMAD code was commissioned to make it easier to interpret moisture probe measurements in the Hanford Site waste tanks. In principle, the code is an interpolation routine that acts over a library of benchmark data based on two independent variables, typically anomaly size and moisture content. Two additional variables, anomaly type and detector type, also can be considered independent variables, but no interpolation is done over them. The dependent variable is detector response. The intent is to provide the code with measured detector responses from two or more detectors. The code then will interrogate (and interpolate upon) the benchmark data library and find the anomaly-type/anomaly-size/moisture-content combination that provides the closest match to the measured data
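The interpolation idea described above can be sketched as plain bilinear interpolation over a two-variable benchmark grid. The grid, values, and function below are invented for illustration; they are not TMAD's actual library, variables, or algorithm.

```python
# Minimal sketch of the kind of two-variable interpolation TMAD performs
# (hypothetical grid and values): detector response is tabulated on an
# (anomaly size, moisture content) grid and interpolated at a query point.
from bisect import bisect_right

sizes = [1.0, 2.0, 4.0]           # anomaly size grid (arbitrary units)
moistures = [0.0, 10.0, 20.0]     # moisture content grid (wt%)
# response[i][j] = benchmark detector response at (sizes[i], moistures[j])
response = [[5.0, 4.0, 3.0],
            [6.0, 5.0, 4.0],
            [8.0, 7.0, 6.0]]

def bilinear(x, y):
    """Bilinearly interpolate the tabulated response at (x, y)."""
    i = min(max(bisect_right(sizes, x) - 1, 0), len(sizes) - 2)
    j = min(max(bisect_right(moistures, y) - 1, 0), len(moistures) - 2)
    tx = (x - sizes[i]) / (sizes[i + 1] - sizes[i])
    ty = (y - moistures[j]) / (moistures[j + 1] - moistures[j])
    r00, r01 = response[i][j], response[i][j + 1]
    r10, r11 = response[i + 1][j], response[i + 1][j + 1]
    return (r00 * (1 - tx) * (1 - ty) + r10 * tx * (1 - ty)
            + r01 * (1 - tx) * ty + r11 * tx * ty)

print(bilinear(1.5, 5.0))  # response midway between the first grid nodes
```

The real code searches the library for the anomaly-type/anomaly-size/moisture-content combination best matching several measured detector responses; the sketch shows only the interpolation kernel of that search.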

  4. Graphene-Reinforced Metal and Polymer Matrix Composites

    Science.gov (United States)

    Kasar, Ashish K.; Xiong, Guoping; Menezes, Pradeep L.

    2018-06-01

    Composites have tremendous applicability due to their excellent capabilities. The performance of composites mainly depends on the reinforcing material applied. Graphene is successful as an efficient reinforcing material due to its versatile as well as superior properties. Even at very low content, graphene can dramatically improve the properties of polymer and metal matrix composites. This article reviews the fabrication followed by mechanical and tribological properties of metal and polymer matrix composites filled with different kinds of graphene, including single-layer, multilayer, and functionalized graphene. Results reported to date in literature indicate that functionalized graphene or graphene oxide-polymer composites are promising materials offering significantly improved strength and frictional properties. A similar trend of improved properties has been observed in case of graphene-metal matrix composites. However, achieving higher graphene loading with uniform dispersion in metal matrix composites remains a challenge. Although graphene-reinforced composites face some challenges, such as understanding the graphene-matrix interaction or fabrication techniques, graphene-reinforced polymer and metal matrix composites have great potential for application in various fields due to their outstanding properties.

  5. Noncoherent Spectral Optical CDMA System Using 1D Active Weight Two-Code Keying Codes

    Directory of Open Access Journals (Sweden)

    Bih-Chyun Yeh

    2016-01-01

    Full Text Available We propose a new family of one-dimensional (1D) active weight two-code keying (TCK) in spectral amplitude coding (SAC) optical code division multiple access (OCDMA) networks. We use encoding and decoding transfer functions to operate the 1D active weight TCK. The proposed structure includes an optical line terminal (OLT) and optical network units (ONUs) to produce the encoding and decoding codes of the proposed OLT and ONUs, respectively. The proposed ONU uses the modified cross-correlation to remove interferences from other simultaneous users, that is, the multiuser interference (MUI). When the phase-induced intensity noise (PIIN) is the most important noise, the modified cross-correlation suppresses the PIIN. In the numerical results, we find that the bit error rate (BER) for the proposed system using the 1D active weight TCK codes outperforms that for two other systems using the 1D M-Seq codes and 1D balanced incomplete block design (BIBD) codes. The effective source power for the proposed system can achieve −10 dBm, which is less power than that required by the other systems.

  6. Design compliance matrix waste sample container filling system for nested, fixed-depth sampling system

    International Nuclear Information System (INIS)

    BOGER, R.M.

    1999-01-01

    This design compliance matrix document provides specific design related functional characteristics, constraints, and requirements for the container filling system that is part of the nested, fixed-depth sampling system. This document addresses performance, external interfaces, ALARA, Authorization Basis, environmental and design code requirements for the container filling system. The container filling system will interface with the waste stream from the fluidic pumping channels of the nested, fixed-depth sampling system and will fill containers with waste that meet the Resource Conservation and Recovery Act (RCRA) criteria for waste that contains volatile and semi-volatile organic materials. The specifications for the nested, fixed-depth sampling system are described in a Level 2 Specification document (HNF-3483, Rev. 1). The basis for this design compliance matrix document is the Tank Waste Remediation System (TWRS) desk instructions for design Compliance matrix documents (PI-CP-008-00, Rev. 0)

  7. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  8. Fibre-Matrix Interaction in Soft Tissue

    International Nuclear Information System (INIS)

    Guo, Zaoyang

    2010-01-01

    Although the mechanical behaviour of soft tissue has been extensively studied, the interaction between the collagen fibres and the ground matrix has not been well understood and is therefore ignored by most constitutive models of soft tissue. In this paper, the human annulus fibrosus (HAF) is used as an example and the potential fibre-matrix interaction is identified by careful investigation of the experimental results of biaxial and uniaxial testing of the human annulus fibrosus. First, the uniaxial testing result of the HAF along the axial direction is analysed and it is shown that the mechanical behaviour of the ground matrix can be well simulated by the incompressible neo-Hookean model when the collagen fibres are all under contraction. If the collagen fibres are stretched, the response of the ground matrix can still be described by the incompressible neo-Hookean model, but the effective stiffness of the matrix depends on the fibre stretch ratio. This stiffness can be more than 10 times larger than the one obtained with collagen fibres under contraction. This phenomenon can only be explained by the fibre-matrix interaction. Furthermore, we find that the physical interpretation of this interaction includes the inhomogeneity of the soft tissue and the fibre orientation dispersion. The dependence of the tangent stiffness of the matrix on the first invariant of the deformation tensor can also be explained by the fibre orientation dispersion. The significant effect of the fibre-matrix interaction strain energy on mechanical behaviour of the soft tissue is also illustrated by comparing some simulation results.
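The incompressible neo-Hookean response attributed to the ground matrix above is a standard model: under uniaxial stretch λ the Cauchy stress is σ = μ(λ² − 1/λ). A minimal sketch with a hypothetical shear modulus:

```python
# Standard incompressible neo-Hookean model under uniaxial stretch
# (textbook form; the shear modulus value below is hypothetical, not a
# fitted annulus fibrosus parameter).

def neo_hookean_uniaxial_stress(stretch, mu):
    """Cauchy stress for incompressible neo-Hookean uniaxial loading."""
    return mu * (stretch**2 - 1.0 / stretch)

mu_matrix = 0.1  # hypothetical shear modulus of the ground matrix (MPa)

# Stress vanishes in the reference state and grows nonlinearly with stretch.
for lam in (1.0, 1.1, 1.2):
    print(lam, neo_hookean_uniaxial_stress(lam, mu_matrix))
```

The fibre-stretch-dependent effective stiffness reported in the abstract would enter through μ varying with the fibre stretch ratio, which this fixed-μ sketch deliberately omits.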

  9. A Monte Carlo burnup code linking MCNP and REBUS

    International Nuclear Information System (INIS)

    Hanan, N.A.; Olson, A.P.; Pond, R.B.; Matos, J.E.

    1998-01-01

    The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult analyses. The linked code will permit the use of the full capabilities of REBUS which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented. (author)

  10. A Monte Carlo burnup code linking MCNP and REBUS

    International Nuclear Information System (INIS)

    Hanan, N. A.

    1998-01-01

    The REBUS-3 burnup code, used in the ANL RERTR Program, is a very general code that uses diffusion theory (DIF3D) to obtain the fluxes required for reactor burnup analyses. Diffusion theory works well for most reactors. However, to include the effects of exact geometry and strong absorbers that are difficult to model using diffusion theory, a Monte Carlo method is required. MCNP, a general-purpose, generalized-geometry, time-dependent, Monte Carlo transport code, is the most widely used Monte Carlo code. This paper presents a linking of the MCNP code and the REBUS burnup code to perform these difficult burnup analyses. The linked code will permit the use of the full capabilities of REBUS which include non-equilibrium and equilibrium burnup analyses. Results of burnup analyses using this new linked code are also presented

  11. An Enhanced Erasure Code-Based Security Mechanism for Cloud Storage

    Directory of Open Access Journals (Sweden)

    Wenfeng Wang

    2014-01-01

    Full Text Available Cloud computing offers a wide range of luxuries, such as high performance, rapid elasticity, on-demand self-service, and low cost. However, data security continues to be a significant impediment in the promotion and popularization of cloud computing. To address the problem of data leakage caused by unreliable service providers and external cyber attacks, an enhanced erasure code-based security mechanism is proposed and elaborated in terms of four aspects: data encoding, data transmission, data placement, and data reconstruction, which ensure data security throughout its entire path into cloud storage. Based on the mechanism, we implement a secure cloud storage system (SCSS). The key design issues, including data division, construction of the generator matrix, data encoding, fragment naming, and data decoding, are also described in detail. Finally, we conduct an analysis of data availability and security, together with a performance evaluation. Experimental results and analysis demonstrate that SCSS achieves high availability, strong security, and excellent performance.
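The encoding and reconstruction roles described for SCSS can be illustrated with the simplest systematic erasure code: data fragments plus a single XOR parity fragment, tolerating one lost fragment. SCSS's actual generator-matrix construction is more general; this is only a minimal sketch.

```python
# Minimal sketch of erasure-coded storage (not the actual SCSS scheme):
# a systematic code keeps the k data fragments as-is and adds one XOR
# parity fragment, so any single lost fragment can be rebuilt.
from functools import reduce

def encode(fragments):
    """Return the data fragments plus one XOR parity fragment."""
    parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)
    return list(fragments) + [parity]

def reconstruct(stored, lost_index):
    """Rebuild the fragment at lost_index by XOR-ing the survivors."""
    survivors = [f for i, f in enumerate(stored) if i != lost_index]
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)

data = [b"clou", b"d st", b"orag"]   # k = 3 equal-size data fragments
stored = encode(data)                 # 4 fragments placed on 4 nodes
recovered = reconstruct(stored, 1)    # node 1 fails
print(recovered)  # b'd st'
```

Codes built from a full-rank generator matrix (e.g. Reed-Solomon) generalize this to tolerating multiple simultaneous fragment losses.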

  12. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    In this study, we extend the application of the interface-matrix (IM) method for reflector modeling to the Analytic Flux Expansion Nodal (AFEN) method. This includes the modifications of the surface-averaged net current continuity and the net leakage balance conditions for the IM method in accordance with the AFEN formula. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical result of the AFEN-IM method shows 1.24% maximum error and 0.42% root-mean-square error in assembly power distribution, and 0.006% Δk of neutron multiplication factor. This result proves that the interface-matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  13. Interface matrix method in AFEN framework

    Energy Technology Data Exchange (ETDEWEB)

    Pogosbekyan, Leonid; Cho, Jin Young; Kim, Young Jin [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    In this study, we extend the application of the interface-matrix (IM) method for reflector modeling to the Analytic Flux Expansion Nodal (AFEN) method. This includes the modifications of the surface-averaged net current continuity and the net leakage balance conditions for the IM method in accordance with the AFEN formula. The AFEN-interface matrix (AFEN-IM) method has been tested against the ZION-1 benchmark problem. The numerical result of the AFEN-IM method shows 1.24% maximum error and 0.42% root-mean-square error in assembly power distribution, and 0.006% Δk of neutron multiplication factor. This result proves that the interface-matrix method for reflector modeling can be useful in the AFEN method. 3 refs., 4 figs. (Author)

  14. Radionuclide daughter inventory generator code: DIG

    International Nuclear Information System (INIS)

    Fields, D.E.; Sharp, R.D.

    1985-09-01

    The Daughter Inventory Generator (DIG) code accepts a tabulation of radionuclides initially present in a waste stream, specified as amounts present either by mass or by activity, and produces a tabulation of radionuclides present after a user-specified elapsed time. This resultant radionuclide inventory characterizes wastes that have undergone daughter ingrowth during subsequent processes, such as leaching and transport, and includes daughter radionuclides that should be considered in these subsequent processes or for inclusion in a pollutant source term. Output of the DIG code also summarizes radionuclide decay constants. The DIG code was developed specifically to assist the user of the PRESTO-II methodology and code in preparing data sets and accounting for possible daughter ingrowth in wastes buried in shallow-land disposal areas. The DIG code is also useful in preparing data sets for the PRESTO-EPA code. Daughter ingrowth both in buried radionuclides and in radionuclides that have been leached from the wastes and are undergoing hydrologic transport is considered, and the quantities of daughter radionuclides are calculated. Radionuclide decay constants generated by DIG and included in the DIG output are required in the PRESTO-II code input data set. The DIG code accesses some subroutines written for use with the CRRIS system and accesses files containing radionuclide data compiled by D.C. Kocher. 11 refs
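The daughter-ingrowth computation a code like DIG performs can be sketched with the standard two-member Bateman solution. The half-lives and inventory below are illustrative, not values from DIG's nuclide library.

```python
# Sketch of daughter ingrowth via the two-member Bateman solution
# (illustrative nuclide data, not DIG's actual library): parent P decays to
# daughter D with decay constants l1 and l2, from a pure parent inventory.
import math

def parent_atoms(n1_0, l1, t):
    """Parent atoms remaining after time t."""
    return n1_0 * math.exp(-l1 * t)

def daughter_atoms(n1_0, l1, l2, t):
    """Daughter atoms at time t, with zero daughter atoms initially."""
    return n1_0 * l1 / (l2 - l1) * (math.exp(-l1 * t) - math.exp(-l2 * t))

half_life_parent = 30.0   # years (hypothetical)
half_life_daughter = 5.0  # years (hypothetical)
l1 = math.log(2) / half_life_parent
l2 = math.log(2) / half_life_daughter

n1_0 = 1.0e20             # initial parent atoms
t = 10.0                  # user-specified elapsed time, years
print(parent_atoms(n1_0, l1, t), daughter_atoms(n1_0, l1, l2, t))
```

Longer chains are handled by the general Bateman recursion over all ancestors, which is the bookkeeping a daughter-inventory generator automates.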

  15. MELCOR Accident Consequence Code System (MACCS)

    International Nuclear Information System (INIS)

    Chanin, D.I.; Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems

  16. MELCOR Accident Consequence Code System (MACCS)

    Energy Technology Data Exchange (ETDEWEB)

    Chanin, D.I. (Technadyne Engineering Consultants, Inc., Albuquerque, NM (USA)); Sprung, J.L.; Ritchie, L.T.; Jow, Hong-Nian (Sandia National Labs., Albuquerque, NM (USA))

    1990-02-01

    This report describes the MACCS computer code. The purpose of this code is to simulate the impact of severe accidents at nuclear power plants on the surrounding environment. MACCS has been developed for the US Nuclear Regulatory Commission to replace the previous CRAC2 code, and it incorporates many improvements in modeling flexibility in comparison to CRAC2. The principal phenomena considered in MACCS are atmospheric transport, mitigative actions based on dose projection, dose accumulation by a number of pathways including food and water ingestion, early and latent health effects, and economic costs. The MACCS code can be used for a variety of applications. These include (1) probabilistic risk assessment (PRA) of nuclear power plants and other nuclear facilities, (2) sensitivity studies to gain a better understanding of the parameters important to PRA, and (3) cost-benefit analysis. This report is composed of three volumes. This document, Volume 1, the User's Guide, describes the input data requirements of the MACCS code and provides directions for its use as illustrated by three sample problems.

  17. Code Team Training: Demonstrating Adherence to AHA Guidelines During Pediatric Code Blue Activations.

    Science.gov (United States)

    Stewart, Claire; Shoemaker, Jamie; Keller-Smith, Rachel; Edmunds, Katherine; Davis, Andrew; Tegtmeyer, Ken

    2017-10-16

    Pediatric code blue activations are infrequent events with a high mortality rate despite the best effort of code teams. The best method for training these code teams is debatable; however, it is clear that training is needed to assure adherence to American Heart Association (AHA) Resuscitation Guidelines and to prevent the decay that invariably occurs after Pediatric Advanced Life Support training. The objectives of this project were to train a multidisciplinary, multidepartmental code team and to measure this team's adherence to AHA guidelines during code simulation. Multidisciplinary code team training sessions were held using high-fidelity, in situ simulation. Sessions were held several times per month. Each session was filmed and reviewed for adherence to 5 AHA guidelines: chest compression rate, ventilation rate, chest compression fraction, use of a backboard, and use of a team leader. After the first study period, modifications were made to the code team including implementation of just-in-time training and alteration of the compression team. Thirty-eight sessions were completed, with 31 eligible for video analysis. During the first study period, 1 session adhered to all AHA guidelines. During the second study period, after alteration of the code team and implementation of just-in-time training, no sessions adhered to all AHA guidelines; however, there was an improvement in percentage of sessions adhering to ventilation rate and chest compression rate and an improvement in median ventilation rate. We present a method for training a large code team drawn from multiple hospital departments and a method of assessing code team performance. Despite subjective improvement in code team positioning, communication, and role completion and some improvement in ventilation rate and chest compression rate, we failed to consistently demonstrate improvement in adherence to all guidelines.

  18. The VEGA Assembly Spectrum Code

    International Nuclear Information System (INIS)

    Milosevic, M.

    1997-01-01

    VEGA is an assembly spectrum code, developed as a design tool for producing few-group averaged cross section data for a wide range of reactor types, including both thermal and fast reactors. It belongs to a class of codes characterized by separate stages for micro-group, spectrum, and macro-group assembly calculations. The theoretical foundation for the development of the VEGA code is integral transport theory in the first-flight collision probability formulation. Two versions of VEGA are now in use: VEGA-1, based on standard equivalence theory, and VEGA-2, based on a new subgroup method applicable to any geometry for which a flux solution is possible. This paper describes the features that are unique to the VEGA codes, concentrating on the basic principles and algorithms used in the proposed subgroup method. The presented validation of this method comprises results for a homogeneous uranium-plutonium mixture and a PWR cell containing recycled uranium-plutonium oxide. An example application to a realistic fuel dissolver benchmark problem, which was extensively analyzed in international calculations, is also included. (author)

  19. The neutrons flux density calculations by Monte Carlo code for the double heterogeneity fuel

    International Nuclear Information System (INIS)

    Gurevich, M.I.; Brizgalov, V.I.

    1994-01-01

    This document provides the calculation technique for fuel elements that consist of one substance as a matrix and another substance as grains embedded in it. This technique can be used in neutron flux density calculations by the universal Monte Carlo code. An estimation of accuracy is presented as well. (authors). 6 refs., 1 fig

  20. Sparse coding reveals greater functional connectivity in female brains during naturalistic emotional experience.

    Directory of Open Access Journals (Sweden)

    Yudan Ren

    Full Text Available Functional neuroimaging is widely used to examine changes in brain function associated with age, gender or neuropsychiatric conditions. FMRI (functional magnetic resonance imaging) studies employ either laboratory-designed tasks that engage the brain with abstracted and repeated stimuli, or resting state paradigms with little behavioral constraint. Recently, novel neuroimaging paradigms using naturalistic stimuli are gaining increasing attention, as they offer an ecologically valid condition to approximate brain function in real life. Wider application of naturalistic paradigms in exploring individual differences in brain function, however, awaits further advances in statistical methods for modeling dynamic and complex datasets. Here, we developed a novel data-driven strategy that employs group sparse representation to assess gender differences in brain responses during naturalistic emotional experience. Compared to independent component analysis (ICA), the sparse coding algorithm considers the intrinsic sparsity of neural coding and thus could be more suitable for modeling dynamic whole-brain fMRI signals. An online dictionary learning and sparse coding algorithm was applied to the aggregated fMRI signals from both groups, which were subsequently factorized into a common time series signal dictionary matrix and the associated weight coefficient matrix. Our results demonstrate that group sparse representation can effectively identify gender differences in functional brain networks during natural viewing, with improved sensitivity and reliability over the ICA-based method. Group sparse representation hence offers a superior data-driven strategy for examining brain function during naturalistic conditions, with great potential for clinical application in neuropsychiatric disorders.
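The sparse coding step can be sketched with matching pursuit over a fixed orthonormal dictionary, a simplified stand-in for the paper's online dictionary learning; the dictionary atoms and signal below are toy values, not fMRI data.

```python
# Sketch of sparse coding via matching pursuit (a greedy stand-in for the
# paper's online dictionary learning): a signal is represented with few
# nonzero coefficients over unit-norm dictionary atoms.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matching_pursuit(signal, atoms, n_iters):
    """Greedy sparse approximation of signal over unit-norm atoms."""
    residual = list(signal)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_iters):
        # Pick the atom most correlated with the current residual.
        k = max(range(len(atoms)), key=lambda i: abs(dot(residual, atoms[i])))
        c = dot(residual, atoms[k])
        coeffs[k] += c
        residual = [r - c * a for r, a in zip(residual, atoms[k])]
    return coeffs, residual

# Orthonormal toy dictionary (4 atoms in R^4): sparse recovery is exact.
atoms = [[0.6, 0.8, 0.0, 0.0],
         [-0.8, 0.6, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, 0.0, 1.0]]
signal = [1.2, 1.6, 1.0, 0.0]   # = 2 * atoms[0] + 1 * atoms[2], i.e. 2-sparse

coeffs, residual = matching_pursuit(signal, atoms, n_iters=2)
print(coeffs)  # coefficients recover the 2-sparse code, residual vanishes
```

In the paper, both the dictionary (time-series signal matrix) and the coefficients (spatial weight matrix) are learned jointly from the aggregated fMRI signals; fixing the dictionary isolates the coding step.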

  1. Expansion of the CHR bone code system

    International Nuclear Information System (INIS)

    Farnham, J.E.; Schlenker, R.A.

    1976-01-01

    This report describes the coding system used in the Center for Human Radiobiology (CHR) to identify individual bones and portions of bones of a complete skeletal system. It includes illustrations of various bones and bone segments with their respective code numbers. Codes are also presented for bone groups and for nonbone materials

  2. Development and assessment of best estimate integrated safety analysis code

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Lee, Young Jin; Hwang, Moon Kyu

    2007-03-01

    Improvement of the integrated safety analysis code MARS3.0 has been carried out and a multi-D safety analysis application system has been established. Iterative matrix solver and parallel processing algorithm have been introduced, and a LINUX version has been generated to enable MARS to run in cluster PCs. MARS variables and sub-routines have been reformed and modularised to simplify code maintenance. Model uncertainty analyses have been performed for THTF, FLECHT, NEPTUN, and LOFT experiments as well as APR1400 plant. Participations in international cooperation research projects such as OECD BEMUSE, SETH, PKL, BFBT, and TMI-2 have been actively pursued as part of code assessment efforts. The assessment, evaluation and experimental data obtained through international cooperation projects have been registered and maintained in the T/H Databank. Multi-D analyses of APR1400 LBLOCA, DVI Break, SLB, and SGTR have been carried out as a part of application efforts in multi-D safety analysis. GUI based 3D input generator has been developed for user convenience. Operation of the MARS Users Group (MUG) was continued and through MUG, the technology has been transferred to 24 organisations. A set of 4 volumes of user manuals has been compiled and the correction reports for the code errors reported during MARS development have been published

  3. Development and assessment of best estimate integrated safety analysis code

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Bub Dong; Lee, Young Jin; Hwang, Moon Kyu (and others)

    2007-03-15

    Improvement of the integrated safety analysis code MARS3.0 has been carried out and a multi-D safety analysis application system has been established. Iterative matrix solver and parallel processing algorithm have been introduced, and a LINUX version has been generated to enable MARS to run in cluster PCs. MARS variables and sub-routines have been reformed and modularised to simplify code maintenance. Model uncertainty analyses have been performed for THTF, FLECHT, NEPTUN, and LOFT experiments as well as APR1400 plant. Participations in international cooperation research projects such as OECD BEMUSE, SETH, PKL, BFBT, and TMI-2 have been actively pursued as part of code assessment efforts. The assessment, evaluation and experimental data obtained through international cooperation projects have been registered and maintained in the T/H Databank. Multi-D analyses of APR1400 LBLOCA, DVI Break, SLB, and SGTR have been carried out as a part of application efforts in multi-D safety analysis. GUI based 3D input generator has been developed for user convenience. Operation of the MARS Users Group (MUG) was continued and through MUG, the technology has been transferred to 24 organisations. A set of 4 volumes of user manuals has been compiled and the correction reports for the code errors reported during MARS development have been published.

  4. Unbiased minimum variance estimator of a matrix exponential function. Application to Boltzmann/Bateman coupled equations solving

    International Nuclear Information System (INIS)

    Dumonteil, E.; Diop, C. M.

    2009-01-01

    This paper derives an unbiased minimum variance estimator (UMVE) of a matrix exponential function of a normal mean. The result is then used to propose a reference scheme to solve Boltzmann/Bateman coupled equations, thanks to Monte Carlo transport codes. The last section presents numerical results for a simple example. (authors)
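The bias that such an estimator must remove is already visible in the scalar (1×1) case: for X̄ the mean of n samples from Normal(μ, σ²), E[exp(X̄)] = exp(μ + σ²/(2n)), so exp(X̄ − σ²/(2n)) is unbiased for exp(μ). A Monte Carlo sketch with hypothetical parameters follows; the paper's matrix-valued estimator is more general than this scalar illustration.

```python
# Scalar illustration of unbiased estimation of exp(mean): the naive
# plug-in estimator exp(xbar) overshoots exp(mu) because
# E[exp(Xbar)] = exp(mu + sigma^2 / (2 n)); correcting the exponent
# removes the bias. Parameters below are hypothetical.
import math
import random

def estimators(mu, sigma, n, n_trials, rng):
    """Average the naive and bias-corrected estimators over many trials."""
    naive_sum, corrected_sum = 0.0, 0.0
    for _ in range(n_trials):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        naive_sum += math.exp(xbar)
        corrected_sum += math.exp(xbar - sigma**2 / (2 * n))
    return naive_sum / n_trials, corrected_sum / n_trials

rng = random.Random(0)        # fixed seed for reproducibility
mu, sigma, n = 1.0, 1.0, 4    # hypothetical parameters
naive, corrected = estimators(mu, sigma, n, 100_000, rng)
print(naive, corrected)  # naive overshoots exp(1); corrected hovers near it
```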

  5. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...
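The classic toy instance of index coding shows the gain a linear code provides (this is the textbook example, not the paper's linear program): two receivers each want one message and hold the other as side information, so a single XOR broadcast replaces two separate transmissions.

```python
# Classic two-receiver index coding example (illustration only):
# receiver 1 wants m1 and already has m2; receiver 2 wants m2 and already
# has m1. One coded broadcast, m1 XOR m2, serves both receivers.

def broadcast(m1, m2):
    return m1 ^ m2                    # single coded transmission

def decode(coded, side_information):
    return coded ^ side_information   # XOR out what the receiver knows

m1, m2 = 0b1011, 0b0110
coded = broadcast(m1, m2)
print(decode(coded, m2) == m1, decode(coded, m1) == m2)  # True True
```

Groupcast instances generalize this: each coded symbol is a linear combination chosen so that every receiver can cancel its side information and isolate its wanted messages.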

  6. Convergent j-matrix calculation of electron-helium resonances

    International Nuclear Information System (INIS)

    Konovalov, D.A.; McCarthy, I.E.

    1994-12-01

    Resonance structures in n=2 and n=3 electron-helium excitation cross sections are calculated using the J-matrix method. The number of close-coupled helium bound and continuum states is taken to convergence, e.g. about 100 channels are coupled for each total spin and angular momentum. It is found that the present J-matrix results are in good shape agreement with recent 29-state R-matrix calculations. However the J-matrix absolute cross sections are slightly lower due to the influence of continuum channels included in the present method. Experiment and theory agree on the positions of n=2 and n=3 resonances. 22 refs., 1 tab.; 3 figs

  7. Thinking through the Issues in a Code of Ethics

    Science.gov (United States)

    Davis, Michael

    2008-01-01

    In June 2005, seven people met at the Illinois Institute of Technology (IIT) to develop a code of ethics governing all members of the university community. The initial group developed a preamble that included reasons for establishing such a code and who was to be governed by it, including the rationale for following the guidelines. From this…

  8. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    International Nuclear Information System (INIS)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E.; Tills, J.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions

  9. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    Energy Technology Data Exchange (ETDEWEB)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.; Gido, R.G.; Tadios, E.L.; Davis, F.J.; Martinez, G.M.; Washington, K.E. [Sandia National Labs., Albuquerque, NM (United States); Tills, J. [J. Tills and Associates, Inc., Sandia Park, NM (United States)

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  10. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and with real multisphere neutron measurements performed inside the containments of pressurized water reactors
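
    As an illustration of this kind of iteration (a generic multiplicative scheme in the SAND-II spirit, not TWOGO's actual pair of alternating schemes), with the stopping rule phrased as agreement between measured and calculated sphere responses:

    ```python
    import numpy as np

    def unfold(R, measured, phi0, rel_err=0.01, max_iter=500):
        """Iteratively adjust the spectrum phi until R @ phi matches measured.

        R[i, j] is the response of sphere i to unit fluence in energy bin j.
        """
        phi = phi0.copy()
        for _ in range(max_iter):
            calc = R @ phi
            # weight each bin's correction by the spheres it contributes to
            correction = (R * (measured / calc)[:, None]).sum(axis=0) / R.sum(axis=0)
            phi *= correction
            # stop once every calculated response is within rel_err of measurement
            if np.max(np.abs(R @ phi - measured) / measured) < rel_err:
                break
        return phi

    R = np.array([[1.0, 0.2], [0.3, 1.0]])   # toy 2-sphere, 2-bin response matrix
    true_phi = np.array([2.0, 5.0])
    measured = R @ true_phi                  # noiseless synthetic measurements
    phi = unfold(R, measured, phi0=np.ones(2))
    print(phi)   # approaches the true spectrum [2.0, 5.0]
    ```

    With noisy data the same stopping rules keep the iteration from over-fitting the measurement errors, which is the role the termination criteria play above.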

  11. R-matrix calculations for electron-impact excitation of C(+), N(2+), and O(3+) including fine structure

    Science.gov (United States)

    Luo, D.; Pradhan, A. K.

    1990-01-01

    The new R-matrix package for comprehensive close-coupling calculations for electron scattering from the first three ions in the boron isoelectronic sequence, the astrophysically significant C(+), N(2+), and O(3+), is presented. The collision strengths are calculated in the LS coupling approximation, as well as in the pair-coupling scheme, for the transitions among the fine-structure sublevels. Calculations are carried out at a large number of energies in order to study the detailed effects of autoionizing resonances.

  12. Application of a parallel 3-dimensional hydrogeochemistry HPF code to a proposed waste disposal site at the Oak Ridge National Laboratory

    International Nuclear Information System (INIS)

    Gwo, Jin-Ping; Yeh, Gour-Tsyh

    1997-01-01

    The objectives of this study are (1) to parallelize a 3-dimensional hydrogeochemistry code and (2) to apply the parallel code to a proposed waste disposal site at the Oak Ridge National Laboratory (ORNL). The 2-dimensional hydrogeochemistry code HYDROGEOCHEM, developed at the Pennsylvania State University for coupled subsurface solute transport and chemical equilibrium processes, was first modified to accommodate 3-dimensional problem domains. A bi-conjugate gradient stabilized linear matrix solver was then incorporated to solve the matrix equation. We chose to parallelize the 3-dimensional code on the Intel Paragons at ORNL by using an HPF (high performance FORTRAN) compiler developed at PGI. The data- and task-parallel algorithms available in the HPF compiler proved to be highly efficient for the geochemistry calculation. This calculation can be easily implemented in HPF formats and is perfectly parallel because the chemical speciation on one finite-element node is virtually independent of those on the others. The parallel code was applied to a subwatershed of the Melton Branch at ORNL. Chemical heterogeneity, in addition to physical heterogeneities of the geological formations, has been identified as one of the major factors that affect the fate and transport of contaminants at ORNL. This study demonstrated an application of the 3-dimensional hydrogeochemistry code on the Melton Branch site. A uranium tailings problem involving aqueous complexation and precipitation-dissolution was tested. Performance statistics were collected on the Intel Paragons at ORNL. Implications of these results for the further optimization of the code were discussed
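
    A bi-conjugate gradient stabilized (BiCGSTAB) solve of a small nonsymmetric system can be sketched with SciPy; the matrix here is a generic stand-in, not the HYDROGEOCHEM finite-element matrix:

    ```python
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import bicgstab

    # Small nonsymmetric tridiagonal system; BiCGSTAB handles the nonsymmetry
    # (off-diagonals -1.0 vs -1.2) that plain conjugate gradients cannot.
    n = 100
    A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x, info = bicgstab(A, b)          # info == 0 signals convergence
    print(info, np.linalg.norm(A @ x - b))
    ```

    Like the solver described above, this is an iterative Krylov method: only matrix-vector products with the sparse A are needed, which is what makes it attractive for large finite-element systems.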

  13. Intermediate coupling collision strengths from LS coupled R-matrix elements

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    1978-01-01

    Fine-structure collision strengths for transitions between two groups of states in intermediate coupling, with inclusion of configuration mixing, are obtained from LS-coupled reactance matrix elements (R-matrix elements) and a set of mixing coefficients. The LS-coupled R-matrix elements are transformed to pair coupling using Wigner 6-j coefficients. From these pair-coupled R-matrix elements, together with a set of mixing coefficients, R-matrix elements are obtained which include the intermediate-coupling and configuration-mixing effects. Finally, from the latter R-matrix elements, collision strengths for fine-structure transitions are computed (with inclusion of both intermediate coupling and configuration mixing). (Auth.)
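
    The 6-j symbols entering such a recoupling can be evaluated with SymPy's `wigner_6j`; the sketch below only computes one symbol and checks it against a standard closed form, rather than performing the full transformation sum:

    ```python
    from sympy import sqrt
    from sympy.physics.wigner import wigner_6j

    # One 6-j symbol of the kind used to recouple LS-coupled R-matrix
    # elements, checked against the standard special value
    #   {a b c; 0 c b} = (-1)**(a+b+c) / sqrt((2b+1)(2c+1))
    a, b, c = 1, 1, 1
    val = wigner_6j(a, b, c, 0, c, b)
    closed = (-1) ** (a + b + c) / sqrt((2 * b + 1) * (2 * c + 1))
    print(val, closed)   # the two values agree
    ```

    In the actual transformation each pair-coupled element is a phase- and weight-factored sum of such symbols multiplied by the LS-coupled elements.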

  14. Analyticity properties of the S-matrix: historical survey and recent results in S-matrix theory and axiomatic field theory

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1981-02-01

    An introduction to recent works, in S-matrix theory and axiomatic field theory, on the analysis and derivation of momentum-space analyticity properties of the multiparticle S matrix is presented. It includes an historical survey, which outlines the successes but also the basic difficulties encountered in the sixties in both theories, and the evolution of the subject in the seventies

  15. Effective enforcement of the forest practices code

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-12-31

    The British Columbia Forest Practices Code establishes a scheme to guide and direct forest harvesting and other forest uses in concert with other related acts. The Code is made up of the Forest Practices Code of British Columbia Act, regulations, standards, and guidebooks. This document provides information on Code enforcement. It reviews the roles of the three provincial resource ministries and the Attorney General in enforcing the code, the various activities undertaken to ensure compliance (including inspections, investigations, and responses to noncompliance), and the role of the public in helping to enforce the Code. The appendix contains a list of Ministry of Forests office locations and telephone numbers.

  16. Introduction of thermal-hydraulic analysis code and system analysis code for HTGR

    International Nuclear Information System (INIS)

    Tanaka, Mitsuhiro; Izaki, Makoto; Koike, Hiroyuki; Tokumitsu, Masashi

    1984-01-01

    Kawasaki Heavy Industries Ltd. has advanced the development and systematization of analysis codes, aiming at a complete line of codes for heat-transfer flow and control characteristics, with HTGR plants as the main object. SALE-3D, which can analyze a complex system, was developed to model the flow when shock waves propagate into heating tubes, and it is reported in this paper. For the control-characteristics analysis code, the method of sensitivity analysis in a topological space is reported, including an example of its application. The flow analysis code SALE-3D analyzes the flow of a compressible viscous fluid in a three-dimensional system over the velocity range from the incompressible limit to supersonic velocity. The fundamental equations and fundamental algorithm of SALE-3D, the calculation of cell volume, the plotting of perspective drawings, and the analysis of the three-dimensional behavior of shock waves propagating in heating tubes after a rupture accident are described. The method of sensitivity analysis was added to the analysis code for control characteristics in a topological space, and blow-down phenomena were analyzed by its application. (Kako, I.)

  17. Fibre-matrix bond strength studies of glass, ceramic, and metal matrix composites

    Science.gov (United States)

    Grande, D. H.; Mandell, J. F.; Hong, K. C. C.

    1988-01-01

    An indentation test technique for compressively loading the ends of individual fibers to produce debonding has been applied to metal, glass, and glass-ceramic matrix composites; bond strength values at debond initiation are calculated using a finite-element model. Results are correlated with composite longitudinal and interlaminar shear behavior for carbon and Nicalon fiber-reinforced glasses and glass-ceramics including the effects of matrix modifications, processing conditions, and high-temperature oxidation embrittlement. The data indicate that significant bonding to improve off-axis and shear properties can be tolerated before the longitudinal behavior becomes brittle. Residual stress and other mechanical bonding effects are important, but improved analyses and multiaxial interfacial failure criteria are needed to adequately interpret bond strength data in terms of composite performance.

  18. RSAP - A Code for Display of Neutron Cross Section Data and SAMMY Fit Results

    International Nuclear Information System (INIS)

    Sayer, R.O.

    2001-01-01

    RSAP is a computer code for display of neutron cross section data and selected SAMMY output. SAMMY is a multilevel R-matrix code for fitting neutron time-of-flight cross-section data using Bayes' method. RSAP, which runs on the Digital Unix Alpha platform, reads ORELA Data Files (ODF) created by SAMMY and uses graphics routines from the PLPLOT package. In addition, RSAP can read data and/or computed values from ASCII files with a format specified by the user. Plot output may be displayed in an X window, sent to a postscript file (rsap.ps), or sent to a color postscript file (rsap.psc). Thirteen plot types are supported, allowing the user to display cross section data, transmission data, errors, theory, Bayes fits, and residuals in various combinations. In this document the designations theory and Bayes refer to the initial and final theoretical cross sections, respectively, as evaluated by SAMMY. Special plot types include Bayes/Data, Theory--Data, and Bayes--Data. Output from two SAMMY runs may be compared by plotting the ratios Theory2/Theory1 and Bayes2/Bayes1 or by plotting the differences (Theory2-Theory1) and (Bayes2-Bayes1)

  19. LDPC Codes with Minimum Distance Proportional to Block Size

    Science.gov (United States)

    Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy

    2009-01-01

    Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low
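
    The notion that minimum distance is the weight of the lightest nonzero codeword can be checked by brute force on a toy parity-check matrix (the Hamming(7,4) code here, far too small to exhibit distance growing with block size, but enough to show the computation):

    ```python
    import numpy as np
    from itertools import product

    # Parity-check matrix of the (7,4) Hamming code: its columns are the
    # seven nonzero 3-bit vectors. A word c is a codeword iff H @ c = 0 mod 2.
    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0, 1, 0],
                  [1, 0, 1, 1, 0, 0, 1]], dtype=np.uint8)

    n = H.shape[1]
    d_min = n
    for word in product([0, 1], repeat=n):
        c = np.array(word, dtype=np.uint8)
        if c.any() and not (H @ c % 2).any():     # nonzero codeword
            d_min = min(d_min, int(c.sum()))
    print(d_min)   # 3 for the Hamming(7,4) code
    ```

    For LDPC codes of realistic block lengths this exhaustive search is infeasible, which is why the article relies on ensemble-average weight enumerators to establish the linear growth of d_min.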

  20. Reactor safety computer code development at INEL

    International Nuclear Information System (INIS)

    Johnsen, G.W.

    1985-01-01

    This report provides a brief overview of the computer code development programs being conducted at EG and G Idaho, Inc. on behalf of the US Nuclear Regulatory Commission and the Department of Energy, Idaho Operations Office. Included are descriptions of the codes being developed, their development status as of the date of this report, and resident code development expertise