WorldWideScience

Sample records for depletion codes applied

  1. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    The fuel depletion is described by more than one hundred fuel isotopes in the advanced lattice codes like HELIOS, but only a few fuel isotopes are accounted for even in the advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error if high burnup is considered. The real depletion conditions in the reactor core differ from the asymptotic ones at the stage of lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict adequately the real depletion effects in the core. A somewhat strange conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is smaller (Authors)

  2. TURTLE 24.0 diffusion depletion code

    International Nuclear Information System (INIS)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference. (auth)

  3. ISOGEN: Interactive isotope generation and depletion code

    International Nuclear Information System (INIS)

    Venkata Subbaiah, Kamatam

    2016-01-01

    ISOGEN is an interactive code for solving first-order coupled linear differential equations with constant coefficients for a large number of isotopes, which are produced or depleted by radioactive decay, neutron transmutation or fission. These coupled equations can be written in matrix notation involving radioactive decay constants and transmutation coefficients, and the eigenvalues of the resulting matrix vary widely (by several tens of orders of magnitude), so no single solution method is suitable for obtaining precise estimates of the isotope concentrations. Therefore, different solution methods are followed, namely the matrix exponential method, the Bateman series method, and the Gauss-Seidel iteration method, as in the ORIGEN-2 code. The ISOGEN code is written in a modern computer language, VB.NET (version 2013), for the Windows 7 operating system, which enables many interactive features between the user and the program. The output results depend on the input neutron database employed and the time step involved in the calculations. The program can display information about the database files, and the user has to select the one that suits the current need. The program prints a 'WARNING' message if the time step is too large, which is decided based on the built-in convergence criterion. Other salient interactive features provided are (i) inspection of the input data that goes into the calculation, (ii) viewing of the radioactive decay sequence of isotopes (daughters, precursors, photons emitted) in a graphical format, (iii) solution of parent and daughter products by the direct Bateman series solution method, (iv) a quick input method and context-sensitive prompts for guiding the novice user, (v) viewing of output tables for any parameter of interest, and (vi) the output file can be read to generate new information and can be viewed or printed, since the program stores the basic nuclide concentrations unlike other batch jobs. The sample
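
    As an illustration of the matrix exponential approach named in the abstract, the sketch below solves the coupled depletion equations dN/dt = A N for a hypothetical three-nuclide decay chain. The chain, decay constants and time step are invented for the example; this is not ISOGEN's implementation or data.

```python
# Illustrative sketch only: matrix exponential solution of dN/dt = A N
# for a made-up three-nuclide decay chain N1 -> N2 -> N3 (stable).
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1.0e-3, 5.0e-4          # hypothetical decay constants (1/s)
A = np.array([
    [-lam1,   0.0, 0.0],             # N1 is only depleted
    [ lam1, -lam2, 0.0],             # N2 is fed by N1 and decays
    [ 0.0,   lam2, 0.0],             # N3 accumulates
])

N0 = np.array([1.0e20, 0.0, 0.0])    # initial number densities (atoms)
t = 3600.0                           # time step of one hour

N_t = expm(A * t) @ N0               # N(t) = exp(A t) N(0)
print(N_t)
```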

  4. Sensibility analysis of fuel depletion using different nuclear fuel depletion codes

    International Nuclear Information System (INIS)

    Martins, F.; Velasquez, C.E.; Castro, V.F.; Pereira, C.; Silva, C. A. Mello da

    2017-01-01

    Nowadays, different nuclear codes are used to perform the depletion and criticality calculations needed to simulate nuclear reactor problems. Therefore, the goal is to analyze the sensitivity of the fuel depletion of a PWR assembly using three different nuclear fuel depletion codes. The burnup calculations are performed using the code systems MCNP5/ORIGEN2.1 (MONTEBURNS), KENO-VI/ORIGEN-S (TRITON/SCALE 6.0) and MCNPX (MCNPX/CINDER90). Each code system performs the burnup using a different depletion code, and each depletion code works with energies collapsed from a master library into 1, 3 and 63 groups, respectively. Besides, each code uses a different way of obtaining the neutron flux, which influences the depletion calculation. The results present a comparison of neutronic parameters and isotopic compositions, such as criticality and nuclide build-up; the deviations in the results are attributed to features of the depletion code in use, such as the different internal radioactive decay libraries and the numerical methods involved in solving the coupled differential depletion equations. It is also seen that the longer the period is and the more time steps are chosen, the larger the deviations become. (author)

  5. Sensibility analysis of fuel depletion using different nuclear fuel depletion codes

    Energy Technology Data Exchange (ETDEWEB)

    Martins, F.; Velasquez, C.E.; Castro, V.F.; Pereira, C.; Silva, C. A. Mello da, E-mail: felipmartins94@gmail.com, E-mail: carlosvelcab@hotmail.com, E-mail: victorfariascastro@gmail.com, E-mail: claubia@nuclear.ufmg.br, E-mail: clarysson@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Nowadays, different nuclear codes are used to perform the depletion and criticality calculations needed to simulate nuclear reactor problems. Therefore, the goal is to analyze the sensitivity of the fuel depletion of a PWR assembly using three different nuclear fuel depletion codes. The burnup calculations are performed using the code systems MCNP5/ORIGEN2.1 (MONTEBURNS), KENO-VI/ORIGEN-S (TRITON/SCALE 6.0) and MCNPX (MCNPX/CINDER90). Each code system performs the burnup using a different depletion code, and each depletion code works with energies collapsed from a master library into 1, 3 and 63 groups, respectively. Besides, each code uses a different way of obtaining the neutron flux, which influences the depletion calculation. The results present a comparison of neutronic parameters and isotopic compositions, such as criticality and nuclide build-up; the deviations in the results are attributed to features of the depletion code in use, such as the different internal radioactive decay libraries and the numerical methods involved in solving the coupled differential depletion equations. It is also seen that the longer the period is and the more time steps are chosen, the larger the deviations become. (author)

  6. Development of the point-depletion code DEPTH

    International Nuclear Information System (INIS)

    She, Ding; Wang, Kan; Yu, Ganglin

    2013-01-01

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses a data library which is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics, and it is generally done by coupling transport calculations and point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including the transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three different modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining the instantaneous quantities of radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Calculations compared with ORIGEN-2 prove the validity of DEPTH in point-depletion calculations. The accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code in coupling with the RMC Monte Carlo code.
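
    For context, the transmutation trajectory analysis (TTA) mentioned above decomposes a depletion system into linear chains, each of which has a closed-form Bateman solution. The sketch below evaluates that classic solution for one hypothetical chain with distinct decay constants; it is only a minimal illustration, not the DEPTH algorithm.

```python
# Bateman solution for a single linear chain 1 -> 2 -> ... -> n that starts
# with n1_0 atoms of the first nuclide; decay constants are assumed distinct.
# Chain and constants are invented for the example.
import math

def bateman_chain(n1_0, lambdas, t):
    """Return the amount of each chain member after time t."""
    result = []
    for i in range(len(lambdas)):
        feed = math.prod(lambdas[:i])                    # feed-in rates along the chain
        s = 0.0
        for j in range(i + 1):
            denom = math.prod(lambdas[k] - lambdas[j]
                              for k in range(i + 1) if k != j)
            s += math.exp(-lambdas[j] * t) / denom
        result.append(n1_0 * feed * s)
    return result

print(bateman_chain(1.0e20, [1.0e-3, 5.0e-4, 1.0e-5], 3600.0))
```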

  7. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the capabilities just mentioned, the newest enhancements of the MCOR code focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations. Additionally

  8. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

    Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 with the AREVA NP depletion code, KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The verification of the coupling was made by evaluating the MCOR code against similar sophisticated code systems like MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code has been further improved with important features. The MCOR code presents several valuable capabilities such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for the modeling of gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes and usage of the KORIGEN libraries for PWR and BWR typical spectra for the remaining isotopes. Besides the capabilities just mentioned, the newest enhancements of the MCOR code focus on the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test-matrix, just to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further ameliorations

  9. Hybrid microscopic depletion model in nodal code DYN3D

    International Nuclear Information System (INIS)

    Bilodid, Y.; Kotlyar, D.; Shwageraus, E.; Fridman, E.; Kliem, S.

    2016-01-01

    Highlights: • A new hybrid method of accounting for spectral history effects is proposed. • Local concentrations of over 1000 nuclides are calculated using micro depletion. • The new method is implemented in the nodal code DYN3D and verified. - Abstract: The paper presents a general hybrid method that combines the micro-depletion technique with correction of micro- and macro-diffusion parameters to account for the spectral history effects. The fuel in a core is subjected to time- and space-dependent operational conditions (e.g. coolant density), which cannot be predicted in advance. However, lattice codes assume some average conditions to generate cross sections (XS) for nodal diffusion codes such as DYN3D. Deviation of the local operational history from the average conditions leads to accumulation of errors in the XS, which is referred to as spectral history effects. Various methods to account for the spectral history effects, such as spectral index, burnup-averaged operational parameters and micro-depletion, were implemented in some nodal codes. Recently, an alternative method, which characterizes the fuel depletion state by burnup and 239Pu concentration (denoted as Pu-correction), was proposed, implemented in the nodal code DYN3D and verified for a wide range of history effects. The method is computationally efficient; however, it has applicability limitations. The current study seeks to improve the accuracy and applicability range of the Pu-correction method. The proposed hybrid method combines the micro-depletion method with an XS characterization technique similar to the Pu-correction method. The method was implemented in DYN3D and verified on multiple test cases. The results obtained with DYN3D were compared to those obtained with the Monte Carlo code Serpent, which was also used to generate the XS. The observed differences are within the statistical uncertainties.

  10. ISODEP, A Fuel Depletion Analysis Code for Predicting Isotopic ...

    African Journals Online (AJOL)

    The trend of the results was found to be consistent with those obtained by analytical and other numerical methods. Discovery and Innovation Vol. 13 no. 3/4 December (2001) pp. 184-195. KEY WORDS: depletion analysis, code, research reactor, simultaneous equations, decay of nuclides, radionuclides, isotope.

  11. Monte Carlo simulation in UWB1 depletion code

    International Nuclear Information System (INIS)

    Lovecky, M.; Prehradny, J.; Jirickova, J.; Skoda, R.

    2015-01-01

    The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) allow the compensation of the initial reactivity excess of nuclear fuel and result in an increase of fuel cycle lengths with higher enriched fuels. The paper describes depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, the rare earth oxides are divided into two equally numerous groups, suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides marked as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article

  12. Implantation of a new calculation method of fuel depletion in the CITHAM code

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1985-01-01

    The accuracy of the linear approximation method used in the CITHAM code to obtain the solution of the depletion equations is evaluated. Results are compared with the Benchmark problem. The convenience of the depletion chain treatment before criticality calculations is analysed. The depletion calculation was modified using the technique of linear combination of linear chains. (M.C.K.) [pt

  13. Depletion methodology in the 3-D whole core transport code DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog; Cho, Jin Young; Zee, Sung Quun

    2005-02-01

    The three-dimensional whole-core transport code DeCART has been developed to include the characteristics of a numerical reactor that can partly replace experiments. This code adopts the deterministic method in simulating the neutron behavior with the fewest assumptions and approximations. This neutronics code is also coupled with a thermal-hydraulic CFD code and a thermo-mechanical code to simulate combined effects. A depletion module has been implemented in the DeCART code to predict the depleted composition of the fuel. The exponential matrix method of ORIGEN-2 has been used for the depletion calculation. The library used, including decay constants, yield matrices and other data, has been greatly simplified for calculation efficiency. This report summarizes the theoretical background and includes the verification of the depletion module in DeCART by performing benchmark calculations.

  14. Monte Carlo depletion analysis of SMART core by MCNAP code

    International Nuclear Information System (INIS)

    Jung, Jong Sung; Sim, Hyung Jin; Kim, Chang Hyo; Lee, Jung Chan; Ji, Sung Kyun

    2001-01-01

    Depletion analysis of SMART, a small-sized advanced integral PWR under development by KAERI, is conducted using the Monte Carlo (MC) depletion analysis program MCNAP. The results are compared with those of the CASMO-3/MASTER nuclear analysis. The difference between MASTER and MCNAP in the k-eff prediction is observed to be about 600 pcm at BOC, and becomes smaller as the core burnup increases. The maximum difference between the two predictions of the fuel assembly (FA) normalized power distribution is about 6.6% radially and 14.5% axially, but the differences are observed to lie within the standard deviation of the MC estimations

  15. San Onofre PWR Data for Code Validation of MOX Fuel Depletion Analyses - Revision 1

    International Nuclear Information System (INIS)

    Hermann, O.W.

    2000-01-01

    The isotopic composition of mixed-oxide fuel (fabricated with both uranium and plutonium isotopes) discharged from reactors is of interest to the Fissile Material Disposition Program. The validation of depletion codes used to predict isotopic compositions of MOX fuel, similar to studies concerning uranium-only fueled reactors, thus, is very important. The EEI-Westinghouse Plutonium Recycle Demonstration Program was conducted to examine the use of MOX fuel in the San Onofre PWR, Unit I, during cycles 2 and 3. The data, usually required as input to depletion codes, either one-dimensional or lattice codes, were taken from various sources and compiled into this report. Where data were either lacking or determined inadequate, the appropriate data were supplied from other references. The scope of the reactor operations and design data, in addition to the isotopic analyses, was considered to be of sufficient quality for depletion code validation

  16. The development of depletion program coupled with Monte Carlo computer code

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan

    2015-01-01

    The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called the MCDL code (Monte Carlo Depletion for Light Water Reactor). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA Implicit Runge Kutta (IRK) method after receiving the one-group neutron flux, reaction rates and multiplication factors for a fuel pin, a fuel assembly or the whole reactor core from the calculation results of the MCNP5 code. The calculation of beryllium poisoning and cooling time is also integrated in the code. To verify and validate the MCDL code, high enriched uranium (HEU) and low enriched uranium (LEU) fuel assemblies of the VVR-M2 type, and cores of the Dalat Nuclear Research Reactor (DNRR) with 89 fresh HEU fuel assemblies and with 92 fresh LEU fuel assemblies, have been investigated and compared with the results calculated by the SRAC code and the MCNP-REBUS linkage system code. The results show good agreement between the calculated data of the MCDL code and the reference codes. (author)

  17. Verification of the depletion capabilities of the MCNPX code on a LWR MOX fuel assembly

    International Nuclear Information System (INIS)

    Cerba, S.; Hrncir, M.; Necas, V.

    2012-01-01

    The study deals with the verification of the depletion capabilities of the MCNPX code, which is a linked Monte Carlo depletion code. For this purpose the IV-B phase of the OECD NEA Burnup Credit benchmark has been chosen. The mentioned benchmark is a code-to-code comparison of the multiplication coefficient k-eff and the isotopic composition of a LWR MOX fuel assembly at three given burnup levels and after five years of cooling. The benchmark consists of 6 cases, 2 different Pu vectors and 3 geometry models; however, in this study only the fuel assembly calculations with the two Pu vectors were performed. The aim of this study was to compare the obtained results with data from the participants of the OECD NEA Burnup Credit project and confirm the burnup capability of the MCNPX code. (Authors)

  18. ORIGEN2: a revised and updated version of the Oak Ridge isotope generation and depletion code

    International Nuclear Information System (INIS)

    Croff, A.G.

    1980-07-01

    ORIGEN2 is a versatile point depletion and decay computer code for use in simulating nuclear fuel cycles and calculating the nuclide compositions of materials contained therein. This code represents a revision and update of the original ORIGEN computer code which has been distributed world-wide beginning in the early 1970s. The purpose of this report is to give a summary description of a revised and updated version of the original ORIGEN computer code, which has been designated ORIGEN2. A detailed description of the computer code ORIGEN2 is presented. The methods used by ORIGEN2 to solve the nuclear depletion and decay equations are included. Input information necessary to use ORIGEN2 that has not been documented in supporting reports is documented

  19. The pseudo-harmonics method applied to depletion calculation

    International Nuclear Information System (INIS)

    Silva, F.C. da; Amaral, J.A.C.; Thome, Z.D.

    1989-01-01

    In this paper, a new method for performing depletion calculations, based on the use of the Pseudo-Harmonics perturbation method, was developed. The fuel burnup was considered as a global perturbation and the multigroup diffusion equations were rewritten in such a way as to treat the soluble boron concentration as the eigenvalue. By doing this, the critical boron concentration can be obtained by a perturbation method. A test of the new method was performed for an H2O-cooled, D2O-moderated reactor. Comparison with direct calculation showed that this method is very accurate and efficient. (author) [pt

  20. Acceleration of the MCNP branch of the OCTOPUS depletion code system

    Energy Technology Data Exchange (ETDEWEB)

    Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J. [Section Nuclear and Reactor Physics, ECN Nuclear Research, Petten (Netherlands)

    1998-09-01

    OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster postprocessing code. This reduces the overall computing time substantially. 11 refs.

  1. Acceleration of the MCNP branch of the OCTOPUS depletion code system

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J.

    1998-09-01

    OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster postprocessing code. This reduces the overall computing time substantially. 11 refs

  2. DANDE-a linked code system for core neutronics/depletion analysis

    International Nuclear Information System (INIS)

    LaBauve, R.J.; England, T.R.; George, D.C.; MacFarlane, R.E.; Wilson, W.B.

    1986-01-01

    This report describes DANDE-a modular neutronics, depletion code system for reactor analysis. It consists of nuclear data processing, core physics, and fuel depletion modules, and allows one to use diffusion and transport methods interchangeably in core neutronics calculations. This latter capability is especially important in the design of small modular cores. Additional unique features include the capability of updating the nuclear data file during a calculation; a detailed treatment of depletion, burnable poisons as well as fuel; and the ability to make geometric changes such as control rod repositioning and fuel relocation in the course of a calculation. The detailed treatment of reactor fuel burnup, fission-product creation and decay, as well as inventories of higher-order actinides is a necessity when predicting the behavior of the reactor fuel under increased burn conditions. The operation of the code system is illustrated in this report by two actual problems

  3. DANDE: a linked code system for core neutronics/depletion analysis

    International Nuclear Information System (INIS)

    LaBauve, R.J.; England, T.R.; George, D.C.; MacFarlane, R.E.; Wilson, W.B.

    1986-01-01

    This report describes DANDE - a modular neutronics, depletion code system for reactor analysis. It consists of nuclear data processing, core physics, and fuel depletion modules, and allows one to use diffusion and transport methods interchangeably in core neutronics calculations. This latter capability is especially important in the design of small modular cores. Additional unique features include the capability of updating the nuclear data file during a calculation; a detailed treatment of depletion, burnable poisons as well as fuel; and the ability to make geometric changes such as control rod repositioning and fuel relocation in the course of a calculation. The detailed treatment of reactor fuel burnup, fission-product creation and decay, as well as inventories of higher-order actinides is a necessity when predicting the behavior of reactor fuel under increased burn conditions. The operation of the code system is illustrated in this report by two sample problems. 25 refs

  4. DANDE: a linked code system for core neutronics/depletion analysis

    International Nuclear Information System (INIS)

    LaBauve, R.J.; England, T.R.; George, D.C.; MacFarlane, R.E.; Wilson, W.B.

    1985-06-01

    This report describes DANDE - a modular neutronics, depletion code system for reactor analysis. It consists of nuclear data processing, core physics, and fuel depletion modules, and allows one to use diffusion and transport methods interchangeably in core neutronics calculations. This latter capability is especially important in the design of small modular cores. Additional unique features include the capability of updating the nuclear data file during a calculation; a detailed treatment of depletion, burnable poisons as well as fuel; and the ability to make geometric changes such as control rod repositioning and fuel relocation in the course of a calculation. The detailed treatment of reactor fuel burnup, fission-product creation and decay, as well as inventories of higher-order actinides is a necessity when predicting the behavior of reactor fuel under increased burn conditions. The operation of the code system is made clear in this report by following a sample problem

  5. NULIF: neutron spectrum generator, few-group constant calculator, and fuel depletion code

    International Nuclear Information System (INIS)

    Wittkopf, W.A.; Tilford, J.M.; Andrews, J.B. II; Kirschner, G.; Hassan, N.M.; Colpo, P.N.

    1977-02-01

    The NULIF code generates a microgroup neutron spectrum and calculates spectrum-weighted few-group parameters for use in a spatial diffusion code. A wide variety of fuel cells, non-fuel cells, and fuel lattices, typical of PWR (or BWR) lattices, are treated. A fuel depletion routine and change card capability allow a broad range of problems to be studied. Coefficient variation with fuel burnup, fuel temperature change, moderator temperature change, soluble boron concentration change, burnable poison variation, and control rod insertion are readily obtained. Heterogeneous effects, including resonance shielding and thermal flux depressions, are treated. Coefficients are obtained for one thermal group and up to three epithermal groups. A special output routine writes the few-group coefficient data in specified format on an output tape for automated fitting in the PDQ07-HARMONY system of spatial diffusion-depletion codes

  6. Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tippayakul, C.; Ivanov, K. [Pennsylvania State Univ., Univ. Park (United States); Misu, S. [AREVA NP GmbH, An AREVA and SIEMENS Company, Erlangen (Germany)

    2006-07-01

    This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k{sub inf}, fission rate distributions and isotopic contents. (authors)

  7. Citham a computer code for calculating fuel depletion-description, tests, modifications and evaluation

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1984-12-01

    The CITHAM computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections to be read in the appropriate format by the CITATION code. The problem arose from the effort to adapt the new version, denominated HAMMER-TECHION, to the routine referred to. The HAMMER-TECHION computer code was developed by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used in multigroup constant generation for neutron diffusion calculations within the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAM computer code, tests and modifications are described. (Author) [pt

  8. A perturbation-based substep method for coupled depletion Monte-Carlo codes

    International Nuclear Information System (INIS)

    Kotlyar, Dan; Aufiero, Manuele; Shwageraus, Eugene; Fratoni, Massimiliano

    2017-01-01

    Highlights: • The GPT method allows calculating the sensitivity coefficients for any perturbation. • The full Jacobian of sensitivities of cross sections (XS) to concentrations may be obtained. • Time-dependent XS are obtained by combining the GPT and substep methods. • The proposed GPT substep method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. - Abstract: Coupled Monte Carlo (MC) methods are becoming widely used in reactor physics analysis and design. Many research groups have therefore developed their own coupled MC depletion codes. Typically, in such coupled code systems, neutron fluxes and cross sections are provided to the depletion module by solving a static neutron transport problem. These fluxes and cross sections are representative only of a specific time-point. In reality, however, both quantities change through the depletion time interval. Recently, a Generalized Perturbation Theory (GPT) equivalent method that relies on a collision-history approach was implemented in the Serpent MC code. This method was used here to calculate the sensitivity of each nuclide and reaction cross section to the change in concentration of every isotope in the system. The coupling method proposed in this study also uses the substep approach, which incorporates these sensitivity coefficients to account for temporal changes in cross sections. As a result, a notable improvement in the time-dependent cross section behavior was obtained. The method was implemented in a wrapper script that couples Serpent with an external depletion solver. The performance of this method was compared with other existing methods. The results indicate that the proposed method requires substantially fewer MC transport solutions to achieve the same accuracy.
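
    A heavily simplified sketch of the ingredient described above: first-order sensitivity coefficients of one-group cross sections with respect to nuclide densities are used to update the cross sections inside a substep without an extra transport solution. The Jacobian, cross sections and densities below are invented numbers, not Serpent output or the authors' wrapper script.

```python
# Hedged sketch only: a made-up first-order sensitivity (Jacobian) of one-group
# cross sections to nuclide densities corrects the cross sections mid-step,
# avoiding another Monte Carlo transport solution within the time step.
import numpy as np

sigma_bos = np.array([50.0, 10.0])           # cross sections at the step start (barn)
N_bos     = np.array([1.0e21, 2.0e20])       # nuclide densities at the step start (1/cm^3)
S         = np.array([[-1.0e-21,  2.0e-22],  # assumed d(sigma_i)/d(N_j) sensitivities
                      [ 5.0e-22, -1.0e-21]])

def corrected_sigma(N_now):
    """First-order (GPT-style) update of the cross sections for the current densities."""
    return sigma_bos + S @ (N_now - N_bos)

# Cross sections re-estimated part-way through the step as the composition drifts.
print(corrected_sigma(np.array([9.0e20, 2.5e20])))
```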

  9. Power distribution and fuel depletion calculation for a PWR, using LEOPARD and CITATION codes

    International Nuclear Information System (INIS)

    Batista, J.L.

    1982-01-01

    By modifying LEOPARD a new program, LEOCIT, has been developed in which additional subroutines prepare cross-section libraries in 1, 2 or 4 energy groups and subsequently record these on disc or tape in a format appropriate for direct input to the CITATION code. Use of LEOCIT in conjunction with CITATION is demonstrated by simulating the first depletion cycle of Angra Unit 1. In these calculations two energy groups are used in 1/4, X - Y geometry to give the soluble boron curve, the fuel depletion and the point to point power distribution in Angra 1. Finally relevant results obtained here are compared with those published by Westinghouse, CNEN and Furnas and recommendations are made to improve the system of neutronic calculation developed in this work. (Author) [pt

  10. Sub-step methodology for coupled Monte Carlo depletion and thermal hydraulic codes

    International Nuclear Information System (INIS)

    Kotlyar, D.; Shwageraus, E.

    2016-01-01

    Highlights: • Discretization of time in coupled MC codes determines the results’ accuracy. • The error is due to lack of information regarding the time-dependent reaction rates. • The proposed sub-step method considerably reduces the time discretization error. • No additional MC transport solutions are required within the time step. • The reaction rates are varied as functions of nuclide densities and TH conditions. - Abstract: The governing procedure in coupled Monte Carlo (MC) codes relies on discretization of the simulation time into time steps. Typically, the MC transport solution at discrete points will generate reaction rates, which in most codes are assumed to be constant within the time step. This assumption can trigger numerical instabilities or result in a loss of accuracy, which, in turn, would require reducing the time steps size. This paper focuses on reducing the time discretization error without requiring additional MC transport solutions and hence with no major computational overhead. The sub-step method presented here accounts for the reaction rate variation due to the variation in nuclide densities and thermal hydraulic (TH) conditions. This is achieved by performing additional depletion and TH calculations within the analyzed time step. The method was implemented in BGCore code and subsequently used to analyze a series of test cases. The results indicate that computational speedup of up to a factor of 10 may be achieved over the existing coupling schemes.
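
    The sketch below illustrates the general sub-step idea from the abstract: a depletion time step is split into sub-steps, and the depletion matrix is rebuilt from the evolving state before each one instead of being frozen at its beginning-of-step value. The two-nuclide system and the density-dependent rate model are made up for illustration and do not represent BGCore.

```python
# Hedged sketch of the sub-step idea (not the BGCore implementation): the
# depletion matrix is rebuilt from the current nuclide densities before each
# sub-step instead of being held constant over the whole time step.
import numpy as np
from scipy.linalg import expm

def deplete_with_substeps(N0, dt, n_sub, build_matrix):
    """Advance nuclide densities N0 over dt using n_sub sub-steps."""
    N = N0.copy()
    h = dt / n_sub
    for _ in range(n_sub):
        A = build_matrix(N)      # effective rates re-evaluated from the current state
        N = expm(A * h) @ N
    return N

# Fictitious two-nuclide system: the removal rate of the first nuclide weakens
# as it depletes, a crude stand-in for flux/spectrum (or TH) feedback.
def build_matrix(N):
    removal = 1.0e-6 * (N[0] / 1.0e21)
    return np.array([[-removal, 0.0],
                     [ removal, 0.0]])

N_start = np.array([1.0e21, 0.0])
print(deplete_with_substeps(N_start, 30 * 24 * 3600.0, 1, build_matrix))   # one coarse step
print(deplete_with_substeps(N_start, 30 * 24 * 3600.0, 10, build_matrix))  # ten sub-steps
```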

  11. Utility subroutine package used by Applied Physics Division export codes

    International Nuclear Information System (INIS)

    Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.

    1983-04-01

    This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers.

  12. Requests from use experience of ORIGEN code. Activity of the working group on evaluation of nuclide generation and depletion

    International Nuclear Information System (INIS)

    Matsumura, Tetsuo

    2005-01-01

    A questionnaire survey was carried out through the committee members of the working group on evaluation of nuclide generation and depletion concerning the accuracy demanded of the ORIGEN code, which is used widely in various fields of design analysis and evaluation. The WG committee asked the ORIGEN users of each organization and obtained replies from various fields. (author)

  13. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing the spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing a consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.

  14. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

    A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. And in doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
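
    A toy illustration of the PNC mapping for a noiseless two-way relay: two nodes transmit BPSK symbols simultaneously, and the relay observes their analogue sum and maps it directly to the XOR of the two bits instead of decoding each stream separately. Noise, fading and synchronisation, which the paper treats, are ignored here.

```python
# Toy PNC mapping for a noiseless two-way relay: the relay maps the analogue sum
# of two simultaneously received BPSK symbols to the XOR of the underlying bits.
def bpsk(bit):
    return 1.0 if bit == 1 else -1.0

def pnc_relay_map(superposed):
    # The sum of two BPSK symbols is -2, 0 or +2; |sum| == 2 means equal bits (XOR = 0).
    return 0 if abs(superposed) > 1.0 else 1

for a in (0, 1):
    for b in (0, 1):
        received = bpsk(a) + bpsk(b)             # additive superposition of the EM waves
        assert pnc_relay_map(received) == (a ^ b)
        print(a, b, "-> relay broadcasts", a ^ b)
```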

  15. FORIG: a computer code for calculating radionuclide generation and depletion in fusion and fission reactors. User's manual

    International Nuclear Information System (INIS)

    Blink, J.A.

    1985-03-01

    In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2 from which it was adapted. This report is an updated and a combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs

  16. First steps towards a validation of the new burnup and depletion code TNT

    Energy Technology Data Exchange (ETDEWEB)

    Herber, S.C.; Allelein, H.J. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6); Friege, N. [RWTH Aachen (Germany). Inst. for Reactor Safety and Reactor Technology; Kasselmann, S. [Research Center Juelich (Germany). Inst. for Energy and Climate Research - Nuclear Waste Disposal and Reactor Safety (IEK-6)

    2012-11-01

    In the framework of merging the core design calculation capabilities, represented by V.S.O.P., with the accident calculation capabilities, represented by MGT(-3D), the successor of the TINTE code, difficulties were observed in defining an interface between a program backbone and the ORIGEN code or the ORIGENJUEL code, respectively. Estimating the effort of refactoring the ORIGEN code versus writing a new burnup code from scratch led to the decision that it would be more efficient to write a new code, which could benefit from existing programming and software engineering tools on the computer code side and which can use the latest knowledge of nuclear reactions, e.g. consider all documented reaction channels. Therefore a new code with an object-oriented approach was developed at IEK-6. Object-oriented programming is currently state of the art and mostly provides improved extensibility and maintainability. The new code was named TNT, which stands for Topological Nuclide Transformation, since the code makes use of the real topology of the nuclear reactions. Here we present some first validation results from code-to-code benchmarks against the codes ORIGEN V2.2 and FISPACT2005; whenever possible, analytical results are also used for the comparison. The two reference codes were chosen due to their high reputation in the fields of fission reactor analysis (ORIGEN) and fusion facilities (FISPACT). (orig.)

  17. Development of a fuel depletion sensitivity calculation module for multi-cell problems in a deterministic reactor physics code system CBZ

    International Nuclear Information System (INIS)

    Chiba, Go; Kawamoto, Yosuke; Narabayashi, Tadashi

    2016-01-01

    Highlights: • A new functionality of fuel depletion sensitivity calculations is developed in a code system CBZ. • This is based on the generalized perturbation theory for fuel depletion problems. • The theory with a multi-layer depletion step division scheme is described. • Numerical techniques employed in actual implementation are also provided. - Abstract: A new functionality of fuel depletion sensitivity calculations is developed as one module in a deterministic reactor physics code system CBZ. This is based on the generalized perturbation theory for fuel depletion problems. The theory for fuel depletion problems with a multi-layer depletion step division scheme is described in detail. Numerical techniques employed in actual implementation are also provided. Verification calculations are carried out for a 3 × 3 multi-cell problem consisting of two different types of fuel pins. It is shown that the sensitivities of nuclide number densities after fuel depletion with respect to the nuclear data calculated by the new module agree well with reference sensitivities calculated by direct numerical differentiation. To demonstrate the usefulness of the new module, fuel depletion sensitivities in different multi-cell arrangements are compared and non-negligible differences are observed. Nuclear data-induced uncertainties of nuclide number densities obtained with the calculated sensitivities are also compared.
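
    As a reminder of what the direct numerical differentiation used as the reference in the abstract amounts to, the sketch below perturbs a single fictitious nuclear-data parameter of a stand-in two-nuclide depletion model and forms a central-difference estimate of the relative sensitivity of the end-of-step number densities. It is unrelated to the CBZ implementation.

```python
# Hedged sketch of the direct-differentiation reference: perturb one fictitious
# parameter (an effective capture rate) of a toy two-nuclide depletion model and
# form a central-difference relative sensitivity of the end-of-step densities.
import numpy as np
from scipy.linalg import expm

def deplete(N0, lam_capture, t):
    """Toy model: nuclide 1 is removed by capture and feeds nuclide 2."""
    A = np.array([[-lam_capture, 0.0],
                  [ lam_capture, 0.0]])
    return expm(A * t) @ N0

N0, t = np.array([1.0e21, 0.0]), 3.0e7        # roughly a one-year step (s)
lam, h = 2.0e-9, 1.0e-12                      # parameter value and perturbation (1/s)

dN_dlam = (deplete(N0, lam + h, t) - deplete(N0, lam - h, t)) / (2.0 * h)
rel_sens = dN_dlam * lam / deplete(N0, lam, t)   # relative sensitivity d(ln N)/d(ln lam)
print(rel_sens)
```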

  18. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and γ ray spectrum. FPGS90

    International Nuclear Information System (INIS)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo

    1995-11-01

    In a nuclear reactor, radioactive nuclides are generated and depleted as the nuclear fuel burns up. The radioactive nuclides, emitting γ rays and β rays, act as the radioactive source of decay heat in a reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the amounts of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a reactor. FPGS90 is a code that calculates the nuclide inventory, decay heat and spectrum of γ rays emitted from fission products produced in a nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the library 'JNDC Nuclear Data Library of Fission Products - second version -', which is compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code has a function for processing so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on. It also has a function for making figures of the calculated results. Using the FPGS90 code it is possible to do all of the work from making the library and calculating nuclide generation and decay heat through to making figures of the calculated results. (author)

  19. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and {gamma} ray spectrum. FPGS90

    Energy Technology Data Exchange (ETDEWEB)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    In a nuclear reactor, radioactive nuclides are generated and depleted as the nuclear fuel burns up. The radioactive nuclides, emitting {gamma} rays and {beta} rays, act as the radioactive source of decay heat in a reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the amounts of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a reactor. FPGS90 is a code that calculates the nuclide inventory, decay heat and spectrum of {gamma} rays emitted from fission products produced in a nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the library 'JNDC Nuclear Data Library of Fission Products - second version -', which is compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code has a function for processing so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on. It also has a function for making figures of the calculated results. Using the FPGS90 code it is possible to do all of the work from making the library and calculating nuclide generation and decay heat through to making figures of the calculated results. (author).

  20. Girls Who Code Club | College of Engineering & Applied Science

    Science.gov (United States)

  1. Hybrid Micro-Depletion method in the DYN3D code

    Energy Technology Data Exchange (ETDEWEB)

    Bilodid, Yurii [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Div. Reactor Safety

    2016-07-01

    A new method for accounting for spectral history effects was developed and implemented in the reactor dynamics code DYN3D. Detailed nuclide content is calculated for each region of the reactor core and used to correct fuel properties. The new method demonstrates excellent results in test cases.

  2. Fast frequency hopping codes applied to SAC optical CDMA network

    Science.gov (United States)

    Tseng, Shin-Pin

    2015-06-01

    This study designed a fast frequency hopping (FFH) code family suitable for application in spectral-amplitude-coding (SAC) optical code-division multiple-access (CDMA) networks. The FFH code family can effectively suppress the effects of multiuser interference and had its origin in the frequency hopping code family. Additional codes were developed as secure codewords for enhancing the security of the network. In considering the system cost and flexibility, simple optical encoders/decoders using fiber Bragg gratings (FBGs) and a set of optical securers using two arrayed-waveguide grating (AWG) demultiplexers (DeMUXs) were also constructed. Based on a Gaussian approximation, expressions for evaluating the bit error rate (BER) and spectral efficiency (SE) of SAC optical CDMA networks are presented. The results indicated that the proposed SAC optical CDMA network exhibited favorable performance.

  3. Coding aperture applied to X-ray imaging

    International Nuclear Information System (INIS)

    Brunol, J.; Sauneuf, R.; Gex, J.P.

    1980-05-01

    We present some X-ray images of grids and plasmas. These images were obtained by using a single circular slit (annular code) as the coding aperture and a computer decoding process. The experimental resolution is better than 10 μm and it is expected to be on the order of 2 or 3 μm with the same code and an improved decoding process

  4. ACTRAN: a code for depletion calculations in PWR cores aiming the production of minor actinide for using in ADS

    International Nuclear Information System (INIS)

    Santos, Rubens Souza dos

    2009-01-01

    Despite the renewed willingness to accept nuclear power as a means of mitigating climate change, dealing with the long-lived waste, which must be kept in safe conditions for so many years, still causes some concern. A technological solution to bridge this span of time is to use a facility that burns this waste while also generating electricity. This is the idea built into accelerator driven systems (ADS). This technology is intended to use some minor actinides (MAs) as fuel. This work presents a program to assess actinide concentrations, aiming at a fertile-free fuel to be used in future ADS technology. For that, a numerical code that solves the steady-state multigroup diffusion equation in 3D to calculate the neutron fluxes was coupled with a new code, named ACTRAN, that also solves the depletion equations numerically. This paper shows the simulation of a PWR core during the three-year residence time of the nuclear fuel and afterwards, for almost four hundred years, to assess the MA production. The results give some insight into the best management scheme to obtain a minimum amount of some MAs for use in future generations of ADS. (author)

  5. 40 CFR 372.23 - SIC and NAICS codes to which this Part applies.

    Science.gov (United States)

    2010-07-01

    ... codes 20 through 39 to which this part applies. (a) SIC codes. Major group or industry code Exceptions... industry code Exceptions and/or limitations 113310 Logging 311 Food Manufacturing Except 311119—Exception is... Except facilities primarily engaged in Music copyright authorizing use, Music copyright buying and...

  6. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...

  7. ORIGEN-2.2, Isotope Generation and Depletion Code Matrix Exponential Method

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of problem or function: ORIGEN is a computer code system for calculating the buildup, decay, and processing of radioactive materials. ORIGEN2 is a revised version of ORIGEN and incorporates updates of the reactor models, cross sections, fission product yields, decay data, and decay photon data, as well as the source code. ORIGEN-2.1 replaces ORIGEN and includes additional libraries for standard and extended-burnup PWR and BWR calculations, which are documented in ORNL/TM-11018. ORIGEN2.1 was first released in August 1991 and was replaced with ORIGEN2 Version 2.2 in June 2002. Version 2.2 was the first update to ORIGEN2 in over 10 years and was stimulated by a user discovering a discrepancy in the mass of fission products calculated using ORIGEN2 V2.1. Code modifications, as well as reducing the irradiation time step to no more than 100 days/step, reduced the discrepancy from ∼10% to 0.16%. The bug does not noticeably affect the fission product mass in typical ORIGEN2 calculations involving reactor fuels because essentially all of the fissions come from actinides that have explicit fission product yield libraries. Thus, most previous ORIGEN2 calculations that were otherwise set up properly should not be affected. 2 - Method of solution: ORIGEN uses a matrix exponential method to solve a large system of coupled, linear, first-order ordinary differential equations with constant coefficients. ORIGEN2 has been variably dimensioned to allow the user to tailor the size of the executable module to the problem size and/or the available computer space. Dimensioned arrays have been set large enough to handle almost any size problem, using virtual memory capabilities available on most mainframe and 386/486-based PCs. The user is provided with much of the framework necessary to put some of the arrays to several different uses, call for the subroutines that perform the desired operations, and provide a mechanism to execute multiple ORIGEN2 problems with a single
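
    For orientation, the matrix exponential method named above amounts to evaluating N(t) = exp(At) N(0); the sketch below does so with a plainly truncated Taylor series for a toy parent-daughter system. Production codes such as ORIGEN2 add scaling, series-length control and special treatment of short-lived nuclides, all omitted here.

```python
# Hedged sketch of a truncated-series matrix exponential, the basic operation
# behind the "matrix exponential method"; constants and chain are invented.
import numpy as np

def expm_series(A, t, terms=30):
    """Approximate exp(A t) with a truncated Taylor series."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * t) / k        # accumulates (A t)^k / k!
        result = result + term
    return result

# Toy parent -> daughter system.
A = np.array([[-1.0e-3, 0.0],
              [ 1.0e-3, 0.0]])
N0 = np.array([1.0e20, 0.0])
print(expm_series(A, 3600.0) @ N0)       # N(t) = exp(A t) N(0)
```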

  8. Lattices applied to coding for reliable and secure communications

    CERN Document Server

    Costa, Sueli I R; Campello, Antonio; Belfiore, Jean-Claude; Viterbo, Emanuele

    2017-01-01

    This book provides a first course on lattices – mathematical objects pertaining to the realm of discrete geometry, which are of interest to mathematicians for their structure and, at the same time, are used by electrical and computer engineers working on coding theory and cryptography. The book presents both fundamental concepts and a wealth of applications, including coding and transmission over Gaussian channels, techniques for obtaining lattices from finite prime fields and quadratic fields, constructions of spherical codes, and hard lattice problems used in cryptography. The topics selected are covered in a level of detail not usually found in reference books. As the range of applications of lattices continues to grow, this work will appeal to mathematicians, electrical and computer engineers, and graduate or advanced undergraduate in these fields.

  9. Applying a rateless code in content delivery networks

    Science.gov (United States)

    Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan

    2017-09-01

    Content delivery networks (CDN) allow internet providers to locate their services and to map their coverage onto networks without necessarily owning them. CDN is part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have been proposed to improve CDN performance. Since accesses to social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
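    The following sketch illustrates the principle behind such a rateless (fountain) code: each coded packet is a random GF(2) combination of the k source packets, and any set of k linearly independent received packets is enough to recover the originals. It is a toy random-linear variant written for this listing, not the specific code evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def fountain_encode(source, n_coded):
    """Each coded packet is the XOR (GF(2) sum) of a random subset of the k source packets."""
    k = source.shape[0]
    coeffs = rng.integers(0, 2, size=(n_coded, k), dtype=np.uint8)
    return coeffs, (coeffs @ source) % 2

def fountain_decode(coeffs, coded, k):
    """Gaussian elimination over GF(2); returns None until k independent packets have arrived."""
    A, B = coeffs.copy(), coded.copy()
    row = 0
    for col in range(k):
        pivots = [r for r in range(row, len(A)) if A[r, col]]
        if not pivots:
            return None
        p = pivots[0]
        A[[row, p]], B[[row, p]] = A[[p, row]], B[[p, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                B[r] ^= B[row]
        row += 1
    return B[:k]

# 4 source packets of 8 bits each; the receiver keeps collecting coded packets
# (whichever survive the lossy path) until it has a decodable set.
source = rng.integers(0, 2, size=(4, 8), dtype=np.uint8)
coeffs, coded = fountain_encode(source, 20)
recovered, used = None, 4
while recovered is None and used <= 20:
    recovered = fountain_decode(coeffs[:used], coded[:used], 4)
    used += 1
print(np.array_equal(recovered, source), "decoded from", used - 1, "packets")
```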

  10. PLUTON: Three-group neutronic code for burnup analysis of isotope generation and depletion in highly irradiated LWR fuel rods

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei E; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    PLUTON is a three-group neutronic code analyzing, as functions of time and burnup, the change of radial profiles, together with average values, of power density, burnup, concentration of trans-uranium elements, plutonium buildup, depletion of fissile elements, and fission product generation in water reactor fuel rod with standard UO{sub 2}, UO{sub 2}-Gd{sub 2}O{sub 3}, inhomogeneous MOX, and UO{sub 2}-ThO{sub 2}. The PLUTON code, which has been designed to be run on Windows PC, has adopted a theoretical shape function of neutron attenuation in pellet, which enables users to perform a very fast and accurate calculation easily. The present code includes the irradiation conditions of the Halden Reactor which gives verification data for the code. The total list of trans-uranium elements included in the calculations consists of {sub 92}U{sup 233-239}, {sub 93}Np{sup 237-239}, {sub 94}Pu{sup 238-243}, {sub 95}Am{sup 241-244} (including isomers), and {sub 96}Cm{sup 242-245}. Poisoning fission products are represented by {sub 54}Xe{sup 131,133,135}, {sub 48}Cd{sup 113}, {sub 62}Sm{sup 149,151,152}, {sub 64}Gd{sup 154-160}, {sub 63}Eu{sup 153,155}, {sub 36}Kr{sup 83,85}, {sub 42}Mo{sup 95}, {sub 43}Tc{sup 99}, {sub 45}Rh{sup 103}, {sub 47}Ag{sup 109}, {sub 53}I{sup 127,129,131}, {sub 55}Cs{sup 133}, {sub 57}La{sup 139}, {sub 59}Pr{sup 141}, {sub 60}Nd{sup 143-150}, {sub 61}Pm{sup 147}. Fission gases and volatiles included in the code are {sub 36}Kr{sup 83-86}, {sub 54}Xe{sup 129-136}, {sub 52}Te{sup 125-130}, {sub 53}I{sup 127-131}, {sub 55}Cs{sup 133-137}, and {sub 56}Ba{sup 135-140}. Verification has been performed up to 83 GWd/tU, and a satisfactory agreement has been obtained. (author)

  11. Adaptive Wavelet Coding Applied in a Wireless Control System.

    Science.gov (United States)

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus E b / N 0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.
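    A toy sketch of the kind of adaptation policy described above, mapping an estimated channel quality to a (code rate, constellation) pair; the thresholds and mode pairs are invented for illustration and are not the operating points derived in the paper.

```python
# Hypothetical adaptation table: estimated Eb/N0 [dB] -> (wavelet code rate, constellation).
ADAPTATION_TABLE = [
    (20.0, (1.0, "16-QAM")),   # good channel: high rate, dense constellation
    (12.0, (0.5, "QPSK")),     # moderate fading: add redundancy
    (0.0,  (0.25, "BPSK")),    # deep fading: maximum protection
]

def select_mode(ebn0_db: float):
    """Return the (code rate, constellation) pair for the current fading level."""
    for threshold, mode in ADAPTATION_TABLE:
        if ebn0_db >= threshold:
            return mode
    return ADAPTATION_TABLE[-1][1]

print(select_mode(15.3))   # -> (0.5, 'QPSK')
```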

  12. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Full Text Available Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  13. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    Energy Technology Data Exchange (ETDEWEB)

    Kasselmann, S., E-mail: s.kasselmann@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Schitthelm, O. [Forschungszentrum Jülich, 52425 Jülich (Germany); Tantillo, F. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany); Scholthaus, S.; Rössel, C. [Forschungszentrum Jülich, 52425 Jülich (Germany); Allelein, H.-J. [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH-Aachen, 52064 Aachen (Germany)

    2016-09-15

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains and therefore speeds up the calculation scheme. Highest priority has been given to the existence of a generic software interface as well as to easy handling by making use of XML files for the user input. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach.

  14. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    International Nuclear Information System (INIS)

    Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.

    2014-01-01

    The problem of calculating the amounts of a coupled nuclide system varying with time, especially when exposed to a neutron flux, is a well-known problem and has been addressed by a number of computer codes. These codes cover a broad spectrum of applications, are based on comprehensive validation work and are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes a fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains. This makes it possible to always deal with the smallest nuclide system for the problem of interest. Highest priority has been given to the existence of a generic software interface as well as to easy handling by making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)
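    The idea of exploiting chain topology can be sketched as follows: order the nuclides so that every precursor is processed before its daughters (Kahn's algorithm), then solve each sub-chain, here with the classical Bateman solution. The decay chain and constants below are hypothetical, and the Bateman routine assumes a purely linear chain with distinct decay constants.

```python
import numpy as np
from collections import defaultdict

# Hypothetical linear decay chain A -> B -> C with made-up decay constants [1/s].
lam = {"A": 3.0e-5, "B": 8.0e-6, "C": 1.0e-6}
children = {"A": ["B"], "B": ["C"], "C": []}

def topo_order(children):
    """Kahn's algorithm: parents are always processed before their daughters."""
    indeg = defaultdict(int)
    for parent in children:
        indeg.setdefault(parent, 0)
        for child in children[parent]:
            indeg[child] += 1
    ready = [n for n, d in indeg.items() if d == 0]
    order = []
    while ready:
        n = ready.pop()
        order.append(n)
        for child in children[n]:
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)
    return order

def bateman_chain(lams, n_first, t):
    """Bateman solution for a linear chain with distinct decay constants."""
    out = []
    for k in range(len(lams)):
        coeff = n_first * np.prod(lams[:k])
        total = sum(
            np.exp(-lams[i] * t)
            / np.prod([lams[j] - lams[i] for j in range(k + 1) if j != i])
            for i in range(k + 1)
        )
        out.append(coeff * total)
    return out

order = topo_order(children)                       # ['A', 'B', 'C']
lams = [lam[n] for n in order]
print(dict(zip(order, bateman_chain(lams, 1.0e20, 3600.0 * 24))))
```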

  15. A new 3D maser code applied to flaring events

    Science.gov (United States)

    Gray, M. D.; Mason, L.; Etoka, S.

    2018-06-01

    We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. Saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases as expected with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.

  16. Lightning protection, techniques, applied codes and standards. Vol. 4

    International Nuclear Information System (INIS)

    Mahmoud, M.; Shaaban, H.; Lamey, S.

    1996-01-01

    Lightning is the only natural disaster against which protection is highly effective. Therefore, for the safety of critical installations, specifically nuclear ones, an effective lightning protection system (LPS) is required. The design and installation of LPSs have been addressed by many international codes and standards. In this paper, the various LPSs are discussed and compared, including radioactive air terminals, ionizing air terminals, and terminals equipped with electrical triggering devices. Also, the so-called dissipation array systems are discussed and compared to other systems technically and economically. Moreover, the available international codes and standards related to lightning protection are discussed. Such standards include those published by the National Fire Protection Association (NFPA), the Lightning Protection Institute (LPI), Underwriters Laboratories (UL), and British Standards. Finally, the possibility of developing an Egyptian national standard is discussed.

  17. Extension of hybrid micro-depletion model for decay heat calculation in the DYN3D code

    International Nuclear Information System (INIS)

    Bilodid, Yurii; Fridman, Emil; Shwageraus, E.

    2017-01-01

    This work extends the hybrid micro-depletion methodology, recently implemented in DYN3D, to the decay heat calculation by accounting explicitly for the heat contribution from the decay of each nuclide in the fuel.
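    The nuclide-wise decay-heat sum that such an extension evaluates can be illustrated with a minimal sketch of P(t) = sum over i of lambda_i * N_i(t) * Q_i; the three-nuclide inventory, decay constants and recoverable energies below are placeholders, not DYN3D data.

```python
import numpy as np

# Illustrative decay-heat sum: P(t) = sum_i lambda_i * N_i(t) * Q_i
lam = np.array([2.9e-7, 1.0e-5, 6.0e-9])      # decay constants [1/s] (made up)
N0  = np.array([1.0e22, 5.0e21, 8.0e23])      # inventories at shutdown [atoms] (made up)
Q   = np.array([1.2e-13, 3.0e-13, 8.0e-14])   # recoverable energy per decay [J] (made up)

def decay_heat(t):
    """Decay power [W] at time t after shutdown, neglecting further transmutation."""
    N_t = N0 * np.exp(-lam * t)                # simple exponential decay of each nuclide
    return float(np.sum(lam * N_t * Q))

for t in (0.0, 3600.0, 86400.0):
    print(f"t = {t:8.0f} s  ->  {decay_heat(t):.3e} W")
```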

  18. Extension of hybrid micro-depletion model for decay heat calculation in the DYN3D code

    Energy Technology Data Exchange (ETDEWEB)

    Bilodid, Yurii; Fridman, Emil [Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden (Germany). Reactor Safety; Kotlyar, D. [Georgia Institute of Technology, Atlanta, GA (United States); Shwageraus, E. [Cambridge Univ. (United Kingdom)

    2017-06-01

    This work extends the hybrid micro-depletion methodology, recently implemented in DYN3D, to the decay heat calculation by accounting explicitly for the heat contribution from the decay of each nuclide in the fuel.

  19. Row Reduction Applied to Decoding of Rank Metric and Subspace Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Nielsen, Johan Sebastian Rosenkilde; Li, Wenhui

    2017-01-01

    We show that decoding of ℓ-Interleaved Gabidulin codes, as well as list-ℓ decoding of Mahdavifar–Vardy (MV) codes, can be performed by row reducing skew polynomial matrices. Inspired by row reduction of F[x] matrices, we develop a general and flexible approach of transforming matrices over skew polynomial rings into a certain reduced form. We apply this to solve generalised shift register problems over skew polynomial rings which occur in decoding ℓ-Interleaved Gabidulin codes. We obtain an algorithm with complexity O(ℓμ²) where μ measures the size of the input problem and is proportional to the code length n in the case of decoding. Further, we show how to perform the interpolation step of list-ℓ-decoding MV codes in complexity O(ℓn²), where n is the number of interpolation constraints.

  20. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
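    The 50% limitation discussed in the article can be illustrated with a small calculation; the figures are hypothetical, and the 22% rate is used here only as the historical statutory percentage depletion rate for uranium.

```python
# Illustrative percentage-depletion computation (hypothetical figures, not tax advice).
# The deduction is the statutory rate applied to gross income, capped at 50% of
# taxable income from the property; staying below that cap preserves the full deduction.
def percentage_depletion(gross_income, taxable_income, rate=0.22):
    tentative = rate * gross_income
    cap = 0.50 * taxable_income
    return min(tentative, cap)

print(percentage_depletion(gross_income=1_000_000, taxable_income=380_000))
# tentative 220,000 exceeds the 190,000 cap -> 30,000 of the deduction is lost
```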

  1. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    International Nuclear Information System (INIS)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin

    2013-01-01

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute the wrong instructions or refer to the wrong data. Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply Hamming code and then inspect whether it is suitable for the memory system of the processor module. We applied Hamming code to the existing safety grade PLC (POSAFE-Q). Inspection data are collected and will be referred to for improving the soundness of the PLC. In our future work, we will try to improve the time delay caused by the Hamming calculation. This will include CPLD optimization and memory architecture or parts alteration. In addition to these Hamming-code-based works, we will explore methodologies such as mirroring for the soundness of the safety grade PLC. Hamming-code-based works can correct single-bit errors, but they have limitations with multi-bit errors
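    The single-error-correcting behaviour referred to above can be illustrated with the classic Hamming(7,4) code; this sketch is generic and does not reflect the actual POSAFE-Q memory word layout or CPLD implementation.

```python
import numpy as np

# Hamming(7,4) single-error-correcting code in systematic form.
G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix [I4 | P]
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix [P^T | I3]
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

def encode(data4):
    return (np.array(data4, dtype=np.uint8) @ G) % 2

def correct(word7):
    """Correct a single flipped bit; multi-bit errors are beyond Hamming(7,4)."""
    word = np.array(word7, dtype=np.uint8).copy()
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the position of the flipped bit.
        for col in range(7):
            if np.array_equal(H[:, col], syndrome):
                word[col] ^= 1
                break
    return word[:4]                       # corrected data bits

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[2] ^= 1                          # inject a single bit error
print(correct(received))                   # -> [1 0 1 1]
```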

  2. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin [POSCO Nuclear Technology, Seoul (Korea, Republic of)

    2013-05-15

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted. As a result, the PLC may execute the wrong instructions or refer to the wrong data. Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply Hamming code and then inspect whether it is suitable for the memory system of the processor module. We applied Hamming code to the existing safety grade PLC (POSAFE-Q). Inspection data are collected and will be referred to for improving the soundness of the PLC. In our future work, we will try to improve the time delay caused by the Hamming calculation. This will include CPLD optimization and memory architecture or parts alteration. In addition to these Hamming-code-based works, we will explore methodologies such as mirroring for the soundness of the safety grade PLC. Hamming-code-based works can correct single-bit errors, but they have limitations with multi-bit errors.

  3. Uncertainty and sensitivity analysis applied to coupled code calculations for a VVER plant transient

    International Nuclear Information System (INIS)

    Langenbuch, S.; Krzykacz-Hausmann, B.; Schmidt, K. D.

    2004-01-01

    The development of coupled codes, combining thermal-hydraulic system codes and 3D neutron kinetics, is an important step towards performing best-estimate plant transient calculations. It is generally agreed that the application of best-estimate methods should be supplemented by an uncertainty and sensitivity analysis to quantify the uncertainty of the results. The paper presents results from the application of the GRS uncertainty and sensitivity method to a VVER-440 plant transient, which had already been studied earlier for the validation of coupled codes. For this application, the main steps of the uncertainty method are described. Typical results of the method applied to the analysis of the plant transient by several working groups using different coupled codes are presented and discussed. The results demonstrate the capability of an uncertainty and sensitivity analysis. (authors)
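    The GRS method propagates input uncertainties by random sampling of the coupled-code inputs, with the number of code runs chosen from order statistics (Wilks' formula). A small sketch of that sample-size rule, assuming the standard one-sided tolerance-limit formulation:

```python
import math

def wilks_runs(coverage=0.95, confidence=0.95, order=1):
    """Smallest number of runs N such that the order-th largest result bounds the
    `coverage` quantile of the output with probability >= `confidence`."""
    n = order
    while True:
        # Probability that fewer than `order` of the N results exceed the coverage quantile
        miss = sum(math.comb(n, k) * (1 - coverage) ** k * coverage ** (n - k)
                   for k in range(order))
        if 1.0 - miss >= confidence:
            return n
        n += 1

print(wilks_runs())          # 59 runs for the familiar one-sided 95%/95% statement
print(wilks_runs(order=2))   # 93 runs when the second-largest value is used as the bound
```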

  4. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1.The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2.The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion / decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other
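    The ARP interpolation step described above can be pictured as a simple multi-dimensional table lookup; the sketch below interpolates a one-group cross section in enrichment and burnup using an invented, smoothly varying table rather than real library data.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Made-up one-group cross-section table tabulated against enrichment and burnup.
enrichment = np.array([1.5, 3.0, 4.5, 5.0])            # wt% U-235
burnup     = np.array([0.0, 15.0, 30.0, 45.0, 60.0])   # GWd/MTU
sigma = 40.0 - 2.0 * enrichment[:, None] - 0.15 * burnup[None, :]   # fake trend [barn]

# Linear interpolation in both dimensions, ARP-style.
interp = RegularGridInterpolator((enrichment, burnup), sigma)
print(interp([[4.2, 37.5]]))     # interpolated cross section for 4.2 wt%, 37.5 GWd/MTU
```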

  5. Depleted uranium

    International Nuclear Information System (INIS)

    Huffer, E.; Nifenecker, H.

    2001-02-01

    This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why do the military use depleted uranium and what are the risks for health? (A.L.B.)

  6. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

    Science.gov (United States)

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-04-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces, derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an AdaBoost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
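    A minimal sketch of the pipeline shape (dictionary learning for sparse codes, then boosting); the data here are random stand-ins with the same matrix layout, not ADNI hippocampal surface features, and the hyper-parameters are arbitrary.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are subjects, columns are surface-based features.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 2000))
y = rng.integers(0, 2, size=80)

# Learn a dictionary and replace each subject by its sparse code (dimension reduction).
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
codes = dico.fit_transform(X)            # shape (80, 32)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, codes, y, cv=5).mean())   # chance-level on random data
```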

  7. Uncertainty propagation applied to multi-scale thermal-hydraulics coupled codes. A step towards validation

    Energy Technology Data Exchange (ETDEWEB)

    Geffray, Clotaire Clement

    2017-03-20

    The work presented here constitutes an important step towards the validation of the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid metal cooled pool-type facilities. First, a set of methods suited for uncertainty and sensitivity analysis and validation activities with regards to the specific constraints of the work with coupled and expensive-to-run codes is proposed. Then, these methods are applied to the ATHLET - ANSYS CFX model of the TALL-3D facility. Several transients performed at this latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.

  8. Applying a System Dynamics Approach for Modeling Groundwater Dynamics to Depletion under Different Economical and Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Hamid Balali

    2015-09-01

    Full Text Available In recent decades, due to many different factors, including climate change effects towards warming and lower precipitation as well as structural policies such as more intensive harvesting of groundwater and the low price of irrigation water, the level of groundwater has decreased in most plains of Iran. The objective of this study is to model groundwater dynamics to depletion under different economic policies and climate change by using a system dynamics approach. For this purpose a dynamic hydro-economic model, which simultaneously simulates the farmers' economic behavior, groundwater aquifer dynamics, the climatology of the studied area and government economic policies related to groundwater, is developed using STELLA 10.0.6. The vulnerability of the groundwater balance is forecasted under three climate scenarios, the Dry, Nor and Wet, and also under different scenarios of irrigation water and energy pricing policies. Results show that the implementation of some economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. By increasing the irrigation water price along with the energy price, the exploitation of groundwater improves, insofar as in scenarios S15 and S16 the studied area's aquifer groundwater balance is positive at the end of the planning horizon, even in the Dry precipitation condition. Also, the results indicate that climate change can affect groundwater recharge. It can generally be expected that increases in precipitation would produce greater aquifer recharge rates.
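    The stock-and-flow logic of such a model can be sketched in a few lines: aquifer storage is a stock updated by recharge minus price-sensitive pumping. Every coefficient below is invented for illustration and has no relation to the STELLA model of the study.

```python
# Toy stock-and-flow sketch of an aquifer balance (all coefficients invented).
def simulate(years=30, storage=500.0, recharge=12.0, base_demand=20.0,
             water_price=1.0, energy_price=1.0, price_elasticity=0.3):
    history = []
    for _ in range(years):
        # Pumping falls as the combined water/energy price rises (constant-elasticity demand).
        pumping = base_demand * (water_price * energy_price) ** (-price_elasticity)
        storage += recharge - pumping          # annual Euler step of the stock
        history.append(storage)
    return history

print(simulate(water_price=1.0)[-1])   # business-as-usual: storage declines faster
print(simulate(water_price=2.5)[-1])   # pricing policy: slower depletion of the stock
```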

  9. Section 525(a) of the bankruptcy code plainly does not apply to Medicare provider agreements.

    Science.gov (United States)

    Sperow, E H

    2001-01-01

    Section 525(a) of the Bankruptcy Code prevents government entities from discriminating against debtors based on the debtor's bankruptcy filing. This Article analyzes how this provision is applied to healthcare providers who file for bankruptcy. Some commentators have expressed concerns that, because of Section 525, the federal government is unable to deny a bankrupt provider a new Medicare provider agreement due to the debtor's failure to pay debts discharged during bankruptcy. This Article, however, argues that Section 525 does not apply to a provider agreement because it is not a "license, permit, charter, franchise, or other similar grant" as defined by the statute. Therefore, the author concludes that debtor healthcare providers should not be allowed back into the Medicare program without first paying their statutorily required debts.

  10. EPRI depletion benchmark calculations using PARAGON

    International Nuclear Information System (INIS)

    Kucukboyaci, Vefa N.

    2015-01-01

    Highlights: • PARAGON depletion calculations are benchmarked against the EPRI reactivity decrement experiments. • Benchmarks cover a wide range of enrichments, burnups, cooling times, and burnable absorbers, and different depletion and storage conditions. • Results from PARAGON-SCALE scheme are more conservative relative to the benchmark data. • ENDF/B-VII based data reduces the excess conservatism and brings the predictions closer to benchmark reactivity decrement values. - Abstract: In order to conservatively apply burnup credit in spent fuel pool criticality analyses, code validation for both fresh and used fuel is required. Fresh fuel validation is typically done by modeling experiments from the “International Handbook.” A depletion validation can determine a bias and bias uncertainty for the worth of the isotopes not found in the fresh fuel critical experiments. Westinghouse’s burnup credit methodology uses PARAGON™ (Westinghouse 2-D lattice physics code) and its 70-group cross-section library, which have been benchmarked, qualified, and licensed both as a standalone transport code and as a nuclear data source for core design simulations. A bias and bias uncertainty for the worth of depletion isotopes, however, are not available for PARAGON. Instead, the 5% decrement approach for depletion uncertainty is used, as set forth in the Kopp memo. Recently, EPRI developed a set of benchmarks based on a large set of power distribution measurements to ascertain reactivity biases. The depletion reactivity has been used to create 11 benchmark cases for 10, 20, 30, 40, 50, and 60 GWd/MTU and 3 cooling times 100 h, 5 years, and 15 years. These benchmark cases are analyzed with PARAGON and the SCALE package and sensitivity studies are performed using different cross-section libraries based on ENDF/B-VI.3 and ENDF/B-VII data to assess that the 5% decrement approach is conservative for determining depletion uncertainty
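    The benchmark quantity, the depletion reactivity decrement, is simply the change in reactivity between two lattice states; a small sketch with invented k-values:

```python
# Reactivity decrement between fresh and depleted lattice states (illustrative k-values).
def reactivity(k):
    return (k - 1.0) / k            # reactivity in units of dk/k

k_fresh, k_depleted = 1.32, 1.05
decrement_pcm = (reactivity(k_fresh) - reactivity(k_depleted)) * 1.0e5
print(f"depletion reactivity decrement ~ {decrement_pcm:.0f} pcm")

# A flat 5% allowance on the decrement (the Kopp-memo style approach mentioned above):
print(f"5% of decrement ~ {0.05 * decrement_pcm:.0f} pcm")
```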

  11. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill the gap in the current state of the art regarding recommender systems for software reuse, for which prior works present two problems. First, recommender systems based on these works cannot learn from the collaboration of programmers; second, assessments carried out on these systems show low precision and recall, and in some of these systems these metrics have not been evaluated at all. The work presented in this paper contributes a recommendation method that solves these problems.

  12. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the

  13. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty to overcome the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  14. First Results for Fluid Dynamics, Neutronics and Fission Product Behaviour in HTR applying the HTR Code Package (HCP) Prototype

    International Nuclear Information System (INIS)

    Allelein, H.-J.; Kasselmann, S.; Xhonneux, A.; Lambertz, D.

    2014-01-01

    To simulate the different aspects of High Temperature Reactor (HTR) cores, a variety of specialized computer codes have been developed at Forschungszentrum Jülich (IEK-6) and Aachen University (LRST) in the last decades. In order to preserve knowledge, to overcome present limitations and to make these codes applicable to modern computer clusters, these individual programs are being integrated into a consistent code package. The so-called HTR code package (HCP) couples the related and recently applied physics models in a highly integrated manner and therefore allows to simulate phenomena with higher precision in space and time while at the same time applying state-of-the-art programming techniques and standards. This paper provides an overview of the status of the HCP and reports about first benchmark results for an HCP prototype which couples the fluid dynamics and time dependent neutronics code MGT-3D, the burn up code TNT and the fission product release code STACY. Due to the coupling of MGT-3D and TNT, a first step towards a new reactor operation and accident simulation code was made, where nuclide concentrations calculated by TNT are fed back into a new spectrum code of the HCP. Selected operation scenarios of the HTR-Module 200 concept plant and the HTTR were chosen to be simulated with the HCP prototype. The fission product release during normal operation conditions will be calculated with STACY based on a core status derived from SERPENT and MGT–3D. Comparisons will be shown against data generated by the legacy codes VSOP99/11, NAKURE and FRESCO-II. (author)

  15. High order depletion sensitivity analysis

    International Nuclear Information System (INIS)

    Naguib, K.; Adib, M.; Morcos, H.N.

    2002-01-01

    A high order depletion sensitivity method was applied to calculate the sensitivities of the build-up of actinides in irradiated fuel due to cross-section uncertainties. An iteration method based on a Taylor series expansion was applied to construct a stationary principle, from which all orders of perturbations were calculated. The irradiated EK-10 and MTR-20 fuels at their maximum burn-ups of 25% and 65%, respectively, were considered for the sensitivity analysis. The results of the calculation show that, in the case of the EK-10 fuel (low burn-up), the first order sensitivity was found to be enough to achieve an accuracy of 1%, while in the case of MTR-20 (high burn-up) the fifth order was found to provide 3% accuracy. A computer code, SENS, was developed to provide the required calculations
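    For the simplest one-nuclide depletion problem the first-order relative sensitivity can be written in closed form and checked against a finite difference, which illustrates what the higher-order Taylor expansion generalizes; the numbers below are illustrative only.

```python
import numpy as np

# dN/dt = -sigma*phi*N  ->  N(t) = N0*exp(-sigma*phi*t)
# Relative sensitivity: S = (sigma/N) dN/dsigma = -sigma*phi*t
sigma, phi, t, N0 = 5.0e-22, 3.0e13, 2.0e7, 1.0e21   # cm^2, n/cm^2/s, s, atoms/cm^3

def N(sig):
    return N0 * np.exp(-sig * phi * t)

analytic = -sigma * phi * t
h = 1.0e-4 * sigma
finite_diff = (sigma / N(sigma)) * (N(sigma + h) - N(sigma - h)) / (2 * h)
print(analytic, finite_diff)        # the two estimates agree to several digits
```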

  16. First results for fluid dynamics, neutronics and fission product behavior in HTR applying the HTR code package (HCP) prototype

    Energy Technology Data Exchange (ETDEWEB)

    Allelein, H.-J., E-mail: h.j.allelein@fz-juelich.de [Forschungszentrum Jülich, 52425 Jülich (Germany); Institute for Reactor Safety and Reactor Technology, RWTH Aachen University, 52064 Aachen (Germany); Kasselmann, S.; Xhonneux, A.; Tantillo, F.; Trabadela, A.; Lambertz, D. [Forschungszentrum Jülich, 52425 Jülich (Germany)

    2016-09-15

    To simulate the different aspects of High Temperature Reactor (HTR) cores, a variety of specialized computer codes have been developed at Forschungszentrum Jülich (IEK-6) and Aachen University (LRST) in the last decades. In order to preserve knowledge, to overcome present limitations and to make these codes applicable to modern computer clusters, these individual programs are being integrated into a consistent code package. The so-called HTR code package (HCP) couples the related and recently applied physics models in a highly integrated manner and therefore allows to simulate phenomena with higher precision in space and time while at the same time applying state-of-the-art programming techniques and standards. This paper provides an overview of the status of the HCP and reports about first benchmark results for an HCP prototype which couples the fluid dynamics and time dependent neutronics code MGT-3D, the burn up code TNT and the fission product release code STACY. Due to the coupling of MGT-3D and TNT, a first step towards a new reactor operation and accident simulation code was made, where nuclide concentrations calculated by TNT lead to new cross sections, which are fed back into MGT-3D. Selected operation scenarios of the HTR-Module 200 concept plant and the HTTR were chosen to be simulated with the HCP prototype. The fission product release during normal operation conditions will be calculated with STACY based on a core status derived from SERPENT and MGT-3D. Comparisons will be shown against data generated by SERPENT and the legacy codes VSOP99/11, NAKURE and FRESCO-II.

  17. Laser direct marking applied to rasterizing miniature Data Matrix Code on aluminum alloy

    Science.gov (United States)

    Li, Xia-Shuang; He, Wei-Ping; Lei, Lei; Wang, Jian; Guo, Gai-Fang; Zhang, Teng-Yun; Yue, Ting

    2016-03-01

    Precise miniaturization of 2D Data Matrix (DM) Codes on Aluminum alloy formed by raster mode laser direct part marking is demonstrated. The characteristic edge over-burn effects, which render vector mode laser direct part marking inadequate for producing precise and readable miniature codes, are minimized with raster mode laser marking. To obtain the control mechanism for the contrast and print growth of miniature DM code by raster laser marking process, the temperature field model of long pulse laser interaction with material is established. From the experimental results, laser average power and Q frequency have an important effect on the contrast and print growth of miniature DM code, and the threshold of laser average power and Q frequency for an identifiable miniature DM code are respectively 3.6 W and 110 kHz, which matches the model well within normal operating conditions. In addition, the empirical model of correlation occurring between laser marking parameters and module size is also obtained, and the optimal processing parameter values for an identifiable miniature DM code of different but certain data size are given. It is also found that an increase of the repeat scanning number effectively improves the surface finish of bore, the appearance consistency of modules, which has benefit to reading. The reading quality of miniature DM code is greatly improved using ultrasonic cleaning in water by avoiding the interference of color speckles surrounding modules.

  18. An integrated colloid fractionation approach applied to the characterisation of porewater uranium-humic interactions at a depleted uranium contaminated site

    International Nuclear Information System (INIS)

    Graham, Margaret C.; Oliver, Ian W.; MacKenzie, Angus B.; Ellam, Robert M.; Farmer, John G.

    2008-01-01

    Methods for the fractionation of aquatic colloids require careful application to ensure efficient, accurate and reproducible separations. This paper describes the novel combination of mild colloidal fractionation and characterisation methods, namely centrifugal ultrafiltration, gel electrophoresis and gel filtration along with spectroscopic (UV-visible) and elemental (Inductively Coupled Plasma-Optical Emission Spectroscopy, Inductively Coupled Plasma-Mass Spectrometry) analysis, an approach which produced highly consistent results, providing improved confidence in these methods. Application to the study of the colloidal and dissolved components of soil porewaters from one soil at a depleted uranium (DU)-contaminated site revealed uranium (U) associations with both large (100 kDa-0.2 μm) and small (3-30 kDa) humic colloids. For a nearby soil with lower organic matter content, however, association with large (100 kDa-0.2 μm) iron (Fe)-aluminium (Al) colloids in addition to an association with small (3-30 kDa) humic colloids was observed. The integrated colloid fractionation approach presented herein can now be applied with confidence to investigate U and indeed other trace metal migration in soil and aquatic systems

  19. Modification of PRETOR Code to Be Applied to Transport Simulation in Stellarators

    International Nuclear Information System (INIS)

    Fontanet, J.; Castejon, F.; Dies, J.; Fontdecaba, J.; Alejaldre, C.

    2001-01-01

    The 1.5D transport code PRETOR, which has previously been used to simulate tokamak plasmas, has been modified to perform transport analysis in stellarator geometry. The main modifications introduced in the code are related to the magnetic equilibrium and to the modelling of energy and particle transport. As a result, a PRETOR-Stellarator version has been produced and the code is suitable for performing simulations of stellarator plasmas. As an example, PRETOR-Stellarator has been used in the transport analysis of several Heliac Flexible TJ-II shots, and the results are compared with those obtained using the PROCTR code. These results are also compared with those obtained using the tokamak version of PRETOR to show the importance of the introduced changes. (Author) 18 refs

  20. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    Science.gov (United States)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    The reach of a traditional bar code scanner is limited by the length of its data cable, while the farthest transmission distance of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding a traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee was designed to meet market demands. The scanning system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, the result can be inaccurate or wrong owing to image smearing, disturbances, poor imaging conditions, signal interference and unstable system voltage; so we put forward a method that uses matrix evaluation and the Reed-Solomon algorithm to solve these problems. In order to construct the whole wireless optical bar code system and to ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed on the basis of the image acquisition system, and finally the circuit diagram linking the wireless transmitting/receiving CC2430 modules is established. By porting the embedded Linux system to the MCU, a practical wireless CMOS optical bar code scanner with multi-task support is constructed. Finally, communication performance is tested with the evaluation software SmartRF. In open space, every ZigBee node can achieve 50 m transmission with high reliability. When more ZigBee nodes are added, the transmission distance can reach several thousand meters.

  1. Development of Geometry Optimization Methodology with In-house CFD code, and Challenge in Applying to Fuel Assembly

    International Nuclear Information System (INIS)

    Jeong, J. H.; Lee, K. L.

    2016-01-01

    The wire spacer has important roles to avoid collisions between adjacent rods, to mitigate a vortex induced vibration, and to enhance convective heat transfer by wire spacer induced secondary flow. Many experimental and numerical works has been conducted to understand the thermal-hydraulics of the wire-wrapped fuel bundles. There has been enormous growth in computing capability. Recently, a huge increase of computer power allows to three-dimensional simulation of thermal-hydraulics of wire-wrapped fuel bundles. In this study, the geometry optimization methodology with RANS based in-house CFD (Computational Fluid Dynamics) code has been successfully developed in air condition. In order to apply the developed methodology to fuel assembly, GGI (General Grid Interface) function is developed for in-house CFD code. Furthermore, three-dimensional flow fields calculated with in-house CFD code are compared with those calculated with general purpose commercial CFD solver, CFX. The geometry optimization methodology with RANS based in-house CFD code has been successfully developed in air condition. In order to apply the developed methodology to fuel assembly, GGI function is developed for in-house CFD code as same as CFX. Even though both analyses are conducted with same computational meshes, numerical error due to GGI function locally occurred in only CFX solver around rod surface and boundary region between inner fluid region and outer fluid region.

  2. Comparative Analysis of VERA Depletion Problems

    International Nuclear Information System (INIS)

    Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    Each code has its own depletion solver, which can produce different depletion calculation results. In order to produce reference solutions for the comparison of depletion calculations, sensitivity studies should first be performed for each depletion solver. Sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion calculation results, usually the multiplication factors are compared as a function of burnup. In this study, new comparison methods have been introduced, using the number density of an isotope or element and the cumulative flux instead of burnup. In this paper, optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion using the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. It was noted that the depletion intra-zone sensitivity is only pin-type dependent. One and 10 depletion intra-zones for the normal UO2 pin and the gadolinia rod, respectively, are required to obtain the reference solutions. When the optimized depletion calculation options are used, the differences in Q-values are found to be a main cause of the differences between solutions. In this paper, new comparison methods were introduced for consistent code-to-code comparisons even when different kappa libraries were used in the depletion calculations
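    Comparing codes on a cumulative-flux basis requires converting burnup to fluence, which is exactly where the Q-value enters; a rough sketch with round, illustrative constants (not the values used by any of the compared codes):

```python
# Rough conversion between burnup and cumulative (time-integrated) flux.
Q_FISSION = 3.2e-11          # J released per fission (~200 MeV), illustrative
SIGMA_F   = 0.2              # one-group macroscopic fission cross section [1/cm], illustrative
RHO_HM    = 9.0              # heavy-metal density [g/cm^3], illustrative
MJ_PER_MWD = 86400.0

def cumulative_flux(burnup_mwd_per_kg):
    """Fluence [n/cm^2] that delivers the given burnup with the assumed constants."""
    energy_per_cm3 = burnup_mwd_per_kg * MJ_PER_MWD * 1.0e6 * (RHO_HM * 1.0e-3)  # J/cm^3
    fissions_per_cm3 = energy_per_cm3 / Q_FISSION
    return fissions_per_cm3 / SIGMA_F

print(f"{cumulative_flux(50.0):.2e} n/cm^2 for 50 MWd/kgHM")
```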

  3. Sticks and Stones: Why First Amendment Absolutism Fails When Applied to Campus Harassment Codes.

    Science.gov (United States)

    Lumsden, Linda

    This paper analyzes how absolutist arguments against campus harassment codes violate the spirit of the first amendment, examining in particular the United States Supreme Court ruling in "RAV v. St. Paul." The paper begins by tracing the current development of first amendment doctrine, analyzing its inadequacy in the campus hate speech…

  4. Comparison of different LMFBR primary containment codes applied to a Benchmark problem

    International Nuclear Information System (INIS)

    Benuzzi, A.

    1986-01-01

    The Cont Benchmark calculation exercise is a project sponsored by the Containment Loading and Response Group, a subgroup of the Safety Working Group of the Fast Reactor Coordinating Committee - CEC. A full-size typical Pool type LMFBR undergoing a postulated Core Disruptive Accident (CDA) has been defined by Belgonucleaire-Brussels under a study contract financed by the CEC and has been submitted to seven containment code calculations. The results of these calculations are presented and discussed in this paper

  5. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense in depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc..) are affected by unavoidable approximations that are un-predictable without the use of computational tools that account for the various sources of uncertainty. Therefore the use of best-estimate codes (BE) within the reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidences. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  6. A modular simulation code applied to pressurized water nuclear power plants

    International Nuclear Information System (INIS)

    Agnoux, D.

    1992-01-01

    Analysis of the overall operation of an installation requires taking into account all couplings between the various components and integrating all the automatic actions initiated by control and instrumentation. The tool used for this analysis must be a high performing simulation model, flexible enough to be able to be quickly adapted to varying configurations. In order to study the behaviour of PWR nuclear power stations during normal or incidental operating transients, EDF-SEPTEN has developed the ERABLE code (Etudes Reacteurs a Base LEGO), based on the LEGO software package. (author)

  7. Apply or Not to Apply, That Is The Question: Sustainable Development as Solution to the Antinomy About the Application of the New Forest Code

    Directory of Open Access Journals (Sweden)

    Rafael Antonietti Matthes

    2016-10-01

    Full Text Available Starting from the Brazilian constitutional premise according to which economic and social development should strive to maintain environmental quality for present and future generations (Article 225, heading), this study suggests a possible indicator to resolve the contradiction related to the applicability, or otherwise, of the new Forest Code (Law 12651 of May 25, 2012) to terms of adjustment of conduct signed before it came into force, whose agreed obligations were to be implemented while it is in effect. Apply or not to apply, that is the question. On the one hand, some argue in favor of the thesis barring environmental regression; on the other, there is the provision of incentives as drivers of protective behavior and the factual social effectiveness of the regulations currently in force. Using dialectical and systemic methods of approach, with empirical notes, the study presents the fundamental right to sustainable development as a response to the contrast in the legal language indicated by the methodological problem.

  8. TRIO-EF a general thermal hydraulics computer code applied to the Avlis process

    International Nuclear Information System (INIS)

    Magnaud, J.P.; Claveau, M.; Coulon, N.; Yala, P.; Guilbaud, D.; Mejane, A.

    1993-01-01

    TRIO-EF is a general purpose Fluid Mechanics 3D Finite Element Code. The system capabilities cover areas such as steady state or transient, laminar or turbulent, isothermal or temperature-dependent fluid flows; it is applicable to the study of coupled thermo-fluid problems involving heat conduction and possibly radiative heat transfer. It has been used to study the thermal behaviour of the AVLIS process separation module. In this process, a linear electron beam impinges on the free surface of a uranium ingot, generating a two-dimensional curtain emission of vapour from a water-cooled crucible. The energy transferred to the metal causes its partial melting, forming a pool where strong convective motion increases heat transfer towards the crucible. In the upper part of the separation module, the internal structures are devoted to two main functions: vapor containment and reflux, and irradiation and physical separation. They are subjected to very high temperature levels and heat transfer occurs mainly by radiation. Moreover, special attention has to be paid to electron backscattering. These two major points have been simulated numerically with TRIO-EF, and the paper presents and comments on the results of such a computation for each of them. After a brief overview of the computer code, two examples of the TRIO-EF capabilities are given: a crucible thermal hydraulics model and a thermal analysis of the internal structures

  9. Assessment of Optical Coherence Tomography Color Probability Codes in Myopic Glaucoma Eyes After Applying a Myopic Normative Database.

    Science.gov (United States)

    Seol, Bo Ram; Kim, Dong Myung; Park, Ki Ho; Jeoung, Jin Wook

    2017-11-01

    To evaluate the optical coherence tomography (OCT) color probability codes based on a myopic normative database and to investigate whether the implementation of the myopic normative database can improve the OCT diagnostic ability in myopic glaucoma. Comparative validity study. In this study, 305 eyes (154 myopic healthy eyes and 151 myopic glaucoma eyes) were included. A myopic normative database was obtained based on myopic healthy eyes. We evaluated the agreement between OCT color probability codes after applying the built-in and myopic normative databases, respectively. Another 120 eyes (60 myopic healthy eyes and 60 myopic glaucoma eyes) were included and the diagnostic performance of OCT color codes using a myopic normative database was investigated. The mean weighted kappa (Kw) coefficients for quadrant retinal nerve fiber layer (RNFL) thickness, clock-hour RNFL thickness, and ganglion cell-inner plexiform layer (GCIPL) thickness were 0.636, 0.627, and 0.564, respectively. The myopic normative database showed a higher specificity than the built-in normative database for quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness, and the color codes based on the myopic database showed better diagnostic performance than those based on the built-in database for quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P = .011, P = .004, ...). The implementation of a myopic normative database is needed to allow more precise interpretation of OCT color probability codes when used in myopic eyes. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Cervical vertebral maturation: An objective and transparent code staging system applied to a 6-year longitudinal investigation.

    Science.gov (United States)

    Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca

    2017-05-01

    To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous morphometric, objective CVM code staging system, called the "CVM code", which was applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes to find cases outside the reported norms and analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae (C) C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into CVM stages. Overall, 31 exception cases (14%) were seen, with most of them accounting for pubertal CVM stage 4. The overall durations of CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
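
    As a rough illustration of how such a transparent code staging might be reported, the hypothetical sketch below concatenates concavity flags for C2-C4 and shape classes for C3 and C4 into a single string; the labels and thresholds are invented and do not reproduce the published CVM code definitions.

        # Hypothetical encoding: one character per morphologic feature.
        # Concavity flags for C2, C3, C4: "1" if a lower-border concavity is present, else "0".
        # Shape classes for C3 and C4: "T" trapezoidal, "H" horizontal rectangular,
        # "S" square, "V" vertical rectangular (labels chosen for illustration only).

        def cvm_code(concave_c2, concave_c3, concave_c4, shape_c3, shape_c4):
            flags = "".join("1" if c else "0" for c in (concave_c2, concave_c3, concave_c4))
            return flags + shape_c3 + shape_c4

        # One annual recording of one subject.
        print(cvm_code(True, True, False, "T", "H"))   # -> "110TH"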

  11. A computational code for resolution of general compartment models applied to internal dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and the knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and the systemic distribution of an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C# programming, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)

  12. A computational code for resolution of general compartment models applied to internal dosimetry

    International Nuclear Information System (INIS)

    Claro, Thiago R.; Todo, Alberto S.

    2011-01-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and the knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and the systemic distribution of an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C# programming, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)
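
    Both records above ultimately solve a linear compartment model dN/dt = A N with constant transfer coefficients. A minimal sketch of the semi-analytical (matrix exponential) route, using an invented two-compartment system rather than any ICRP biokinetic model, is:

        import numpy as np
        from scipy.linalg import expm

        # Invented two-compartment model: transfer from compartment 1 to 2 plus decay.
        lam_decay = 1.0e-3   # 1/day, radioactive decay constant (illustrative)
        k_12 = 5.0e-2        # 1/day, transfer rate from compartment 1 to compartment 2

        # dN/dt = A N, with losses on the diagonal and transfers off the diagonal.
        A = np.array([[-(lam_decay + k_12), 0.0],
                      [k_12, -lam_decay]])

        N0 = np.array([1.0, 0.0])        # initial content (arbitrary units)
        for t in (1.0, 10.0, 100.0):     # days after intake
            Nt = expm(A * t) @ N0        # semi-analytical solution N(t) = exp(A t) N(0)
            print(t, Nt)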

  13. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    Full Text Available The adaptive multi-rate wideband (AMR-WB speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ of immittance spectral frequency (ISF coefficients takes a considerable computational load in the AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm is done through a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches are required to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to a multiple triangular inequality elimination (MTIE, a TIE with dynamic and intersection mechanisms (DI-TIE, and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS approach. With a full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, a figure far beyond 55% in the MTIE, 76% in the EEENNS approach, and 83% in the DI-TIE approach.
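
    The core idea of the search described above, assigning an input vector to a small subspace via a lookup table and searching only the codewords mapped there, can be sketched in a few lines. The bucket rule and data below are invented for illustration and are not the BSS-VQ algorithm of the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        codebook = rng.standard_normal((256, 16))   # hypothetical 256 codewords, dimension 16

        def bucket_key(vec, edges):
            # Coarse key from the first two components only (illustrative structuring rule).
            return (int(np.digitize(vec[0], edges)), int(np.digitize(vec[1], edges)))

        edges = np.linspace(-2.0, 2.0, 7)           # 8 bins per component -> 64 buckets
        table = {}
        for idx, cw in enumerate(codebook):
            table.setdefault(bucket_key(cw, edges), []).append(idx)

        def quantize(vec):
            # Search only the codewords in the input's bucket; fall back to a full search.
            candidates = table.get(bucket_key(vec, edges), range(len(codebook)))
            return min(candidates, key=lambda i: np.sum((codebook[i] - vec) ** 2))

        x = rng.standard_normal(16)
        print("selected codeword index:", quantize(x))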

  14. Numerical Analysis of Diaphragm Wall Model Executed in Poznań Clay Formation Applying Selected FEM Codes

    Directory of Open Access Journals (Sweden)

    Superczyńska M.

    2016-09-01

    Full Text Available The paper presents the results of numerical calculations of a diaphragm wall model executed in the Poznań clay formation. Two selected FEM codes were applied, Plaxis and Abaqus. A geological description of the Poznań clay formation in Poland as well as the geotechnical conditions at the construction site in the Warsaw city area are presented. The constitutive models of clay implemented both in Plaxis and Abaqus are discussed. The parameters of the Poznań clay constitutive models were assumed based on the authors' experimental tests. The results of the numerical analyses were compared taking into account the measured values of horizontal displacements.

  15. Equilibrium optimization code OPEQ and results of applying it to HT-7U

    International Nuclear Information System (INIS)

    Zha Xuejun; Zhu Sizheng; Yu Qingquan

    2003-01-01

    The plasma equilibrium configuration has a strong impact on the confinement and MHD stability in tokamaks. In designing a tokamak device, an important issue is to determine the positions and currents of the poloidal coils, which are subject to physics and engineering constraints, for a prescribed equilibrium shape of the plasma. In this paper, an effective method based on multi-variable equilibrium optimization is given. The method can optimize the poloidal coils when the previously prescribed plasma parameters are treated as an objective function. We apply it to the HT-7U equilibrium calculation and obtain good results.
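
    In a greatly simplified form, the optimization described above can be illustrated as a regularized least-squares fit of coil currents so that the flux they produce matches a prescribed target at boundary control points. The Green's-function matrix below is random placeholder data, not an HT-7U model.

        import numpy as np

        rng = np.random.default_rng(1)
        n_points, n_coils = 40, 6

        # G[i, j]: flux produced at control point i by a unit current in coil j
        # (placeholder values; a real code would evaluate the coil Green's functions).
        G = rng.standard_normal((n_points, n_coils))
        psi_target = rng.standard_normal(n_points)   # prescribed boundary flux values

        # Regularized least squares: minimize |G I - psi_target|^2 + alpha |I|^2.
        alpha = 1.0e-2
        I = np.linalg.solve(G.T @ G + alpha * np.eye(n_coils), G.T @ psi_target)
        print("coil currents  :", np.round(I, 3))
        print("residual norm  :", np.linalg.norm(G @ I - psi_target))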

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out on the example of the WIMSD-5B code. The WIMS code in its various versions is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  17. Experimental study on full-scale ZrCo and depleted uranium beds applied for fast recovery and delivery of hydrogen isotopes

    International Nuclear Information System (INIS)

    Kou, Huaqin; Huang, Zhiyong; Luo, Wenhua; Sang, Ge; Meng, Daqiao; Luo, Deli; Zhang, Guanghui; Chen, Hao; Zhou, Ying; Hu, Changwen

    2015-01-01

    Highlights: • Thin double-layered annulus beds with ZrCo and depleted uranium were fabricated. • The depleted uranium bed delivered 16.41 mol H2 at a rate of 20 Pa m3/s within 30 min. • The delivery property of the depleted uranium bed was very stable during the 10 cycles. - Abstract: The metal hydride bed is an important component for the deuterium–tritium fusion energy under development in the International Thermonuclear Experimental Reactor (ITER), in which the hydrogen recovery and delivery properties are influenced by the bed configuration, the operation conditions and the hydrogen storage materials contained in the bed. In this work, a thin double-layered annulus bed configuration was adopted, and full-scale beds loaded with ZrCo and depleted uranium (DU) for fast recovery and delivery of hydrogen isotopes were fabricated. The hydrogen recovery/delivery properties together with the inner structure variation in the fabricated beds were systematically studied. The effects of operation conditions on the performance of the beds were also investigated. It was found that both the fabricated ZrCo and DU beds were able to achieve the hydrogen storage target of 17.5 mol with a fast recovery rate. In addition, experimental results showed that operation employing an extra buffer vessel and a scroll pump could not only promote the hydrogen delivery process but also reduce the possibility of disproportionation of ZrCo. Compared with the ZrCo bed, the DU bed exhibited superior hydrogen delivery performance in terms of fast delivery rate and high hydrogen delivery amount: it could deliver over 16.4 mol H2 (93.7% of the recovery amount) within 30 min at an average delivery rate of 20 Pa m3/s. Good reversibility over as many as 10 cycles, without an obvious degradation tendency in either the hydrogen delivery amount or the delivery rate of the DU bed, was also achieved in our study. It was suggested that the fabricated thin double-layered annulus DU bed is a good candidate to rapidly deliver and recover hydrogen isotopes.

  18. Radiation physics and shielding codes and analyses applied to design-assist and safety analyses of CANDU and ACR reactors

    International Nuclear Information System (INIS)

    Aydogdu, K.; Boss, C. R.

    2006-01-01

    This paper discusses the radiation physics and shielding codes and analyses applied in the design of CANDU and ACR reactors. The focus is on the types of analyses undertaken rather than the inputs supplied to the engineering disciplines. Nevertheless, the discussion does show how these analyses contribute to the engineering design. Analyses in radiation physics and shielding can be categorized as either design-assist or safety and licensing (accident) analyses. Many of the analyses undertaken are designated 'design-assist' where the analyses are used to generate recommendations that directly influence plant design. These recommendations are directed at mitigating or reducing the radiation hazard of the nuclear power plant with engineered systems and components. Thus the analyses serve a primary safety function by ensuring the plant can be operated with acceptable radiation hazards to the workers and public. In addition to this role of design assist, radiation physics and shielding codes are also deployed in safety and licensing assessments of the consequences of radioactive releases of gaseous and liquid effluents during normal operation and gaseous effluents following accidents. In the latter category, the final consequences of accident sequences, expressed in terms of radiation dose to members of the public, and inputs to accident analysis, e.g., decay heat in fuel following a loss-of-coolant accident, are also calculated. Another role of the analyses is to demonstrate that the design of the plant satisfies the principle of ALARA (as low as reasonably achievable) radiation doses. This principle is applied throughout the design process to minimize worker and public doses. The principle of ALARA is an inherent part of all design-assist recommendations and safety and licensing assessments. The main focus of an ALARA exercise at the design stage is to minimize the radiation hazards at the source. This exploits material selection and impurity specifications and relies

  19. A preliminary neutronic evaluation and depletion study of VHTR and LS-VHTR reactors using the codes: WIMSD5 and MCNPX

    International Nuclear Information System (INIS)

    Silva, Fabiano C.; Pereira, Claubia; Costa, Antonella Lombardi; Veloso, Maria Auxiliadora Fortini

    2009-01-01

    It is expected that, in the future, besides electricity generation, reactors will also carry out secondary activities, such as hydrogen generation and seawater desalination. Generation IV reactors are expected to possess special characteristics, such as high safety, minimization of the amount of radioactive waste and the ability to use reprocessed fuel with non-proliferating designs in their cycles. Among the Generation IV reactor designs available nowadays, the High Temperature Reactors (HTR) are highlighted due to these desirable characteristics. Under such circumstances, such reactors may be able to reach significantly higher thermal power ratings to be used for hydrogen production, without loss of safety, even in an emergency. For this work, we have chosen two prismatic HTR concepts: the Very High Temperature Reactor (VHTR) and the Liquid-Salt Very High Temperature Reactor (LS-VHTR). The principal difference between them is the coolant. The VHTR uses helium gas as a coolant and has a burnup of 101,661 MWd/THM, while the LS-VHTR uses a low-pressure liquid molten fluoride salt coolant with a boiling point near 1500 °C, working at 155,946 MWd/THM. The ultimate power output is limited by the capacity of the passive decay heat removal system; this capacity is limited by the reactor vessel temperature. The goal was to evaluate the neutronic behavior and the fuel composition during burnup using the WIMSD5 (Winfrith Improved Multigroup Scheme) and MCNPX2.6 codes, the first deterministic and the second stochastic. For both reactors, burned fuel of type 'C' coming from the Angra-I nuclear plant in Brazil was used, with 3.1% initial enrichment, burned to 33,000 MWd/THM using the ORIGEN2.1 code in three steps of 11,000 MWd/THM, with an average power density of 37.75 MWd/THM and 5 years of cooling in the pool. Finally, the fuel was reprocessed by the Purex technique, extracting 99.9% of the Pu, and the desired amount of fissile material (15%) to achieve the final mixed oxide was

  20. Specification for the VERA Depletion Benchmark Suite

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kang Seog [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-12-17

    The CASL neutronics simulator MPACT is under development for the coupled neutronics and thermal-hydraulics simulation of pressurized water reactors. MPACT includes the ORIGEN-API and an internal depletion module to perform depletion calculations based upon neutron-material reactions and radioactive decay. It is a challenge to validate the depletion capability because of the insufficient measured data. One indirect method of validating it is to perform a code-to-code comparison for benchmark problems. In this study a depletion benchmark suite has been developed and a detailed guideline has been provided to obtain meaningful computational outcomes which can be used in the validation of the MPACT depletion capability.

  1. Statistical implications in Monte Carlo depletions - 051

    International Nuclear Information System (INIS)

    Xu, Zhiwen; Rhodes, J.; Smith, K.

    2010-01-01

    As a result of steady advances in computer power, continuous-energy Monte Carlo depletion analysis is attracting considerable attention for reactor burnup calculations. The typical Monte Carlo analysis is set up as a combination of a Monte Carlo neutron transport solver and a fuel burnup solver. Note that the burnup solver is a deterministic module. The statistical errors in Monte Carlo solutions are introduced into nuclide number densities and propagated along fuel burnup. This paper is aimed at understanding the statistical implications in Monte Carlo depletions, including both statistical bias and statistical variations in depleted fuel number densities. The deterministic Studsvik lattice physics code, CASMO-5, is modified to model the Monte Carlo depletion. The statistical bias in depleted number densities is found to be negligible compared to its statistical variations, which, in turn, demonstrates the correctness of the Monte Carlo depletion method. Meanwhile, the statistical variation in number densities generally increases with burnup. Several possible ways of reducing the statistical errors are discussed: 1) to increase the number of individual Monte Carlo histories; 2) to increase the number of time steps; 3) to run additional independent Monte Carlo depletion cases. Finally, a new Monte Carlo depletion methodology, called the batch depletion method, is proposed, which consists of performing a set of independent Monte Carlo depletions and is thus capable of estimating the overall statistical errors including both the local statistical error and the propagated statistical error. (authors)
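
    The batch depletion method proposed above boils down to running several independent depletion calculations and taking the spread of the resulting number densities as the overall statistical error. A toy sketch of that bookkeeping, with invented end-of-cycle densities standing in for real Monte Carlo depletion output:

        import numpy as np

        rng = np.random.default_rng(2)
        n_batches, n_nuclides = 8, 4

        # Invented end-of-cycle number densities from 8 independent depletion runs
        # (in a real batch calculation each row would come from its own Monte Carlo depletion).
        true_N = np.array([1.0e21, 5.0e19, 2.0e18, 7.0e17])
        samples = true_N * (1.0 + 0.01 * rng.standard_normal((n_batches, n_nuclides)))

        mean = samples.mean(axis=0)
        # Standard error of the batch mean, from the sample standard deviation over batches.
        std_err = samples.std(axis=0, ddof=1) / np.sqrt(n_batches)
        for m, s in zip(mean, std_err):
            print(f"{m:.4e} +/- {s:.2e}")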

  2. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  3. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  4. Feedback Codes and Action Plans: Building the Capacity of First-Year Students to Apply Feedback to a Scientific Report

    Science.gov (United States)

    Bird, Fiona L.; Yucel, Robyn

    2015-01-01

    Effective feedback can build self-assessment skills in students so that they become more competent and confident to identify and self-correct weaknesses in their work. In this study, we trialled a feedback code as part of an integrated programme of formative and summative assessment tasks, which provided feedback to first-year students on their…

  5. Applying the universal neutron transport codes to the calculation of well-logging probe response at different rock porosities

    International Nuclear Information System (INIS)

    Bogacz, J.; Loskiewicz, J.; Zazula, J.M.

    1991-01-01

    The use of universal neutron transport codes to calculate the parameters of well-logging probes represents a new approach, first tried in the USA and the UK in the eighties. This paper deals with the first such attempt in Poland. The work is based on the use of the MORSE code developed at Oak Ridge National Laboratory in the USA. Using the CG MORSE code we calculated the neutron detector response when surrounded by sandstone of 19% and 38% porosity. During the work it turned out that it was necessary to investigate different methods of estimating the neutron flux. The stochastic estimation method as used in the original MORSE code (next collision approximation) cannot be used because of the slow convergence of its variance. Using the analog type of estimation (calculation of the sum of track lengths inside the detector) we obtained results of acceptable variance (∼20%) for source-detector spacings smaller than 40 cm. The influence of porosity on the detector response is correctly described for a detector positioned at 27 cm from the source. At the moment the variances are quite large. (author). 33 refs, 8 figs, 8 tabs
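
    The difference between the two estimators discussed above is easiest to see in a toy problem: the track-length estimator scores the path length each history travels inside the detector volume. The sketch below is a deliberately simplified mono-directional, purely absorbing slab game, not the MORSE estimator itself.

        import numpy as np

        rng = np.random.default_rng(3)
        sigma_t = 0.5                # total macroscopic cross section (1/cm), pure absorber
        a, b = 10.0, 12.0            # detector region boundaries (cm)
        n_hist = 100_000

        # Mono-directional particles born at x = 0, flying in +x; the distance to
        # absorption is sampled from an exponential distribution.
        flights = rng.exponential(1.0 / sigma_t, n_hist)

        # Track-length estimator: score the path length each history spends inside [a, b].
        track = np.clip(flights, None, b) - a
        track = np.clip(track, 0.0, None)
        flux_mc = track.sum() / (n_hist * (b - a))

        # Analytic uncollided flux averaged over the detector region, for comparison.
        flux_exact = (np.exp(-sigma_t * a) - np.exp(-sigma_t * b)) / (sigma_t * (b - a))
        print(f"track-length estimate: {flux_mc:.4e}   analytic: {flux_exact:.4e}")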

  6. Microscopic to macroscopic depletion model development for FORMOSA-P

    International Nuclear Information System (INIS)

    Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.

    1996-01-01

    Microscopic depletion has been gaining popularity with regard to employment in reactor core nodal calculations, mainly attributed to the superiority of microscopic depletion in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced design efforts to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done for both fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort compared with macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic depletion level accuracy without substantial additional computing resources
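
    The 'collapsed' macroscopic cross sections mentioned above are, in essence, number-density-weighted sums of microscopic cross sections taken from the microscopic depletion. A minimal sketch with invented two-group data for three nuclides:

        import numpy as np

        # Hypothetical two-group microscopic absorption cross sections (barns) for 3 nuclides.
        sigma_a = np.array([[2.7, 45.0],     # nuclide 1
                            [1.1, 12.0],     # nuclide 2
                            [0.3,  3.5]])    # nuclide 3

        # Number densities (atoms/barn-cm) taken from a microscopic depletion step.
        N = np.array([6.0e-4, 2.0e-4, 1.0e-2])

        # Collapsed macroscopic cross section per group: Sigma_g = sum_i N_i * sigma_{i,g}.
        Sigma_a = N @ sigma_a
        print("macroscopic absorption XS (1/cm) per group:", Sigma_a)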

  7. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to those predicted by the 3-D model

  8. Deuterium-depleted water

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Steflea, Dumitru; Saros-Rogobete, Irina; Titescu, Gheorghe; Tamaian, Radu

    2001-01-01

    Deuterium-depleted water is water that has an isotopic content smaller than 145 ppm D/(D+H), which is the natural isotopic content of water. Deuterium-depleted water is produced by vacuum distillation in columns equipped with structured packing made from phosphor bronze or stainless steel. Deuterium-depleted water, the production technique and the structured packing are patents of the National Institute of Research and Development for Cryogenics and Isotopic Technologies at Rm. Valcea. Research carried out in the last few years showed that deuterium-depleted water is a biologically active product that could have many applications in medicine and agriculture. (authors)

  9. Comparisons with measured data of the simulated local core parameters by the coupled code ATHLET-BIPR-VVER applying a new enhanced model of the reactor pressure vessel

    International Nuclear Information System (INIS)

    Nikonov, S.; Pasichnyk, I.; Velkov, K.; Pautz, A.

    2011-01-01

    The paper describes the comparisons of measured and simulated local core data based on the OECD/NEA Benchmark on the Kalinin-3 NPP: 'Switching off of one of the four operating main circulation pumps at nominal reactor power'. The local measurements of in-core self-powered neutron detectors (SPND) in 64 fuel assemblies at 7 axial levels are used for comparisons of the assembly axial power distributions, and the thermocouple readings at 93 fuel assembly heads are applied for the fuel assembly coolant temperature comparisons. The analyses are based on benchmark transient calculations performed with the coupled system code ATHLET/BIPR-VVER. In order to describe the fluid mixing phenomena in the reactor pressure vessel more realistically, a new enhanced nodalization scheme is being developed. It can take into account asymmetric flow behaviour in the reactor pressure vessel structures such as the downcomer, the reactor core inlet and outlet, the control rod guide tubes, the support grids, etc. For this purpose, details of the core geometry are modelled. About 58,000 control volumes and junctions are applied. Cross connections are used to describe the interaction between the fluid objects. The performed comparisons are of great interest because they show some of the advantages of performing coupled-code production pseudo-3D analyses of NPPs applying the parallel thermo-hydraulic channel methodology (or 1D thermo-hydraulic system code modeling). (Authors)

  10. Nuclear Fuel Depletion Analysis Using Matlab Software

    Science.gov (United States)

    Faghihi, F.; Nematollahi, M. R.

    Coupled first-order IVPs are frequently used in many parts of engineering and the sciences. In this article, we present a code comprising three computer programs, used jointly with the Matlab software, to solve and plot the solutions of first-order coupled stiff or non-stiff IVPs. Some engineering and scientific problems related to IVPs are given, and the fuel depletion (production of the 239Pu isotope) in a Pressurized Water Nuclear Reactor (PWR) is computed by the present code.
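
    The record above is built around coupled first-order IVPs of exactly this kind; a stiff ODE solver reproduces the shape of the calculation in a few lines. The simplified chain below (238U capture feeding 239Pu, which is itself absorbed) and all flux and cross-section values are illustrative placeholders, not the article's data.

        import numpy as np
        from scipy.integrate import solve_ivp

        phi = 3.0e13               # neutron flux (n/cm^2/s), placeholder
        sig_c_U8 = 2.7e-24         # 238U capture cross section (cm^2), placeholder
        sig_a_Pu9 = 1.0e-21        # 239Pu absorption cross section (cm^2), placeholder

        def rhs(t, y):
            n_u8, n_pu9 = y
            dU8 = -sig_c_U8 * phi * n_u8
            # The 239Np intermediate is skipped; 238U capture is taken to feed 239Pu directly.
            dPu9 = sig_c_U8 * phi * n_u8 - sig_a_Pu9 * phi * n_pu9
            return [dU8, dPu9]

        y0 = [2.2e22, 0.0]                       # initial atoms/cm^3 of 238U and 239Pu
        t_end = 3.0e7                            # about one year of irradiation, in seconds
        sol = solve_ivp(rhs, (0.0, t_end), y0, method="LSODA", rtol=1e-8)
        print("239Pu at end of irradiation:", sol.y[1, -1], "atoms/cm^3")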

  11. Kinetics of depletion interactions

    NARCIS (Netherlands)

    Vliegenthart, G.A.; Schoot, van der P.P.A.M.

    2003-01-01

    Depletion interactions between colloidal particles dispersed in a fluid medium are effective interactions induced by the presence of other types of colloid. They are not instantaneous but built up in time. We show by means of Brownian dynamics simulations that the static (mean-field) depletion force

  12. Management of depleted uranium

    International Nuclear Information System (INIS)

    2001-01-01

    Large stocks of depleted uranium have arisen as a result of enrichment operations, especially in the United States and the Russian Federation. Countries with depleted uranium stocks are interested in assessing strategies for the use and management of depleted uranium. The choice of strategy depends on several factors, including government and business policy, alternative uses available, the economic value of the material, regulatory aspects and disposal options, and international market developments in the nuclear fuel cycle. This report presents the results of a depleted uranium study conducted by an expert group organised jointly by the OECD Nuclear Energy Agency and the International Atomic Energy Agency. It contains information on current inventories of depleted uranium, potential future arisings, long term management alternatives, peaceful use options and country programmes. In addition, it explores ideas for international collaboration and identifies key issues for governments and policy makers to consider. (authors)

  13. Benefit using reasonable regulations in USA, how to skill up on professional engineers, apply international code, standard, and regulation

    International Nuclear Information System (INIS)

    Turner, S.L.; Morokuzu, Muneo; Amano, Osamu

    2005-01-01

    The reasonable regulations in the USA consist of a graduated approach and a risk-informed approach (RIA). The RIA rationalizes the regulations on the basis of operational data and the like. PSA (Probabilistic Safety Assessment), a general method of RIA, is explained in detail. The benefits for a nuclear power plant using RIA are an increase in the rate of operation, visualization of risk, application of design standards and design, cost reduction in the nuclear fuel cycle, waste, production and operation, and improved safety. RIA is supported by field data, codes, standards, regulations and professional engineers. The effects of introducing RIA are explained. In order to introduce RIA in Japan, all the parties concerned, such as the regulatory authorities, the electric power industries, manufacturers and universities, have to understand it and work together. A part of the scientific community is also addressed. (S.Y.)

  14. Halo Star Lithium Depletion

    International Nuclear Information System (INIS)

    Pinsonneault, M. H.; Walker, T. P.; Steigman, G.; Narayanan, Vijay K.

    1999-01-01

    The depletion of lithium during the pre-main-sequence and main-sequence phases of stellar evolution plays a crucial role in the comparison of the predictions of big bang nucleosynthesis with the abundances observed in halo stars. Previous work has indicated a wide range of possible depletion factors, ranging from minimal in standard (nonrotating) stellar models to as much as an order of magnitude in models that include rotational mixing. Recent progress in the study of the angular momentum evolution of low-mass stars permits the construction of theoretical models capable of reproducing the angular momentum evolution of low-mass open cluster stars. The distribution of initial angular momenta can be inferred from stellar rotation data in young open clusters. In this paper we report on the application of these models to the study of lithium depletion in main-sequence halo stars. A range of initial angular momenta produces a range of lithium depletion factors on the main sequence. Using the distribution of initial conditions inferred from young open clusters leads to a well-defined halo lithium plateau with modest scatter and a small population of outliers. The mass-dependent angular momentum loss law inferred from open cluster studies produces a nearly flat plateau, unlike previous models that exhibited a downward curvature for hotter temperatures in the 7Li-Teff plane. The overall depletion factor for the plateau stars is sensitive primarily to the solar initial angular momentum used in the calibration for the mixing diffusion coefficients. Uncertainties remain in the treatment of the internal angular momentum transport in the models, and the potential impact of these uncertainties on our results is discussed. The 6Li/7Li depletion ratio is also examined. We find that the dispersion in the plateau and the 6Li/7Li depletion ratio scale with the absolute 7Li depletion in the plateau, and we use observational data to set bounds on the 7Li depletion in main-sequence halo

  15. Investigations of safety-related parameters applying a new multi-group diffusion code for HTR transients

    International Nuclear Information System (INIS)

    Kasselmann, S.; Druska, C.; Lauer, A.

    2010-01-01

    The energy spectra of fast and thermal neutrons from fission reactions in the FZJ code TINTE are modelled by two broad energy groups. Present demands for increased numerical accuracy led to the question of how precise the 2-group approximation is compared to a multi-group model. Therefore a new simulation program called MGT (Multi Group TINTE) has recently been developed which is able to handle up to 43 energy groups. Furthermore, an internal spectrum calculation for the determination of cross-sections can be performed for each time step and location within the reactor. In this study the multi-group energy models are compared to former calculations with only two energy groups. Different scenarios (normal operation and design-basis accidents) have been defined for a high temperature pebble bed reactor design with annular core. The effect of an increasing number of energy groups on safety-related parameters like the fuel and coolant temperature, the nuclear heat source or the xenon concentration is studied. It has been found that for the studied scenarios the use of up to 8 energy groups is a good trade-off between precision and a tolerable amount of computing time. (orig.)

  16. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  17. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work

  18. A SAS2H/KENO-V Methodology for 3D Full Core depletion analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.; Petrovic, B.

    2003-04-01

    This paper describes the use of a SAS2H/KENO-V methodology for 3D full core depletion analysis and illustrates its capabilities by applying it to burnup analysis of the IRIS core benchmarks. This new SAS2H/KENO-V sequence combines a 3D Monte Carlo full core calculation of node power distribution and a 1D Wigner-Seitz equivalent cell transport method for independent depletion calculation of each of the nodes. This approach reduces by more than an order of magnitude the time required for getting comparable results using the MOCUP code system. The SAS2H/KENO-V results for the asymmetric IRIS core benchmark are in good agreement with the results of the ALPHA/PHOENIX/ANC code system. (author)

  19. Depleted Reactor Analysis With MCNP-4B

    International Nuclear Information System (INIS)

    Caner, M.; Silverman, L.; Bettan, M.

    2004-01-01

    Monte Carlo neutronics calculations are mostly done for fresh reactor cores. There is today an ongoing activity in the development of Monte Carlo plus burnup code systems, made possible by the fast gains in computer processor speeds. In this work we investigate the use of MCNP-4B for the calculation of a depleted core of the Soreq reactor (IRR-1). The number densities as a function of burnup were taken from the WIMS-D/4 cell code calculations. This particular code coupling has been implemented before. The Monte Carlo code MCNP-4B calculates the coupled transport of neutrons and photons for complicated geometries. We have done neutronics calculations of the IRR-1 core with the WIMS and CITATION codes in the past. Also, we have developed an MCNP model of the IRR-1 standard fuel for a criticality safety calculation of a spent fuel storage pool

  20. Monte Carlo Depletion with Critical Spectrum for Assembly Group Constant Generation

    International Nuclear Information System (INIS)

    Park, Ho Jin; Joo, Han Gyu; Shim, Hyung Jin; Kim, Chang Hyo

    2010-01-01

    The conventional two-step procedure has been used in practical nuclear reactor analysis. In this procedure, a deterministic assembly transport code such as HELIOS or CASMO is normally used to generate the multigroup flux distribution to be used in few-group cross section generation. Recently, accuracy issues have arisen related to the resonance treatment or the double heterogeneity (DH) treatment for VHTR fuel blocks. In order to mitigate these accuracy issues, Monte Carlo (MC) methods can be used as an alternative way to generate few-group cross sections, because the accuracy of the MC calculations benefits from the ability to use continuous-energy nuclear data and detailed geometric information. In an earlier work, the conventional methods of obtaining multigroup cross sections and the critical spectrum were implemented in the McCARD Monte Carlo code. However, that work was not complete in that the critical spectrum was not reflected in the depletion calculation. The purpose of this study is to develop a method to apply the critical spectrum to MC depletion calculations to correct for the leakage effect in the depletion calculation, and then to examine the MC-based group constants within the two-step procedure by comparing the two-step solution with the direct whole-core MC depletion result
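
    The few-group constants discussed above are obtained by condensing multigroup cross sections with a weighting spectrum; when a critical (leakage-corrected) spectrum is available, it replaces the infinite-medium spectrum as the weight. The sketch below shows only the flux-weighted condensation step, with invented 8-group data.

        import numpy as np

        # Invented 8-group data: absorption cross sections and a weighting spectrum
        # (in practice the spectrum would be the critical spectrum from a leakage search).
        sigma_a = np.array([0.002, 0.003, 0.005, 0.010, 0.030, 0.080, 0.150, 0.300])
        phi = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02])

        # Condense 8 groups into 2 broad groups (fast = groups 0-3, thermal = groups 4-7).
        broad = [slice(0, 4), slice(4, 8)]
        sigma_2g = [np.sum(sigma_a[g] * phi[g]) / np.sum(phi[g]) for g in broad]
        print("two-group absorption XS:", np.round(sigma_2g, 5))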

  1. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)
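
    The 'general search facility which will seek criticality' amounts to adjusting one parameter until the effective multiplication factor reaches 1.0. A hedged stand-in for that logic, with a fake monotone k_eff(boron) function in place of the diffusion solution:

        def k_eff(boron_ppm):
            # Placeholder for the diffusion solver: k decreases as soluble boron increases.
            return 1.08 / (1.0 + 2.0e-4 * boron_ppm)

        def criticality_search(lo=0.0, hi=3000.0):
            # Bisection on the boron concentration until k_eff crosses 1.0.
            while hi - lo > 1.0e-3:
                mid = 0.5 * (lo + hi)
                if k_eff(mid) > 1.0:
                    lo = mid      # still supercritical: more boron is needed
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        ppm = criticality_search()
        print(f"critical boron ~ {ppm:.1f} ppm, k_eff = {k_eff(ppm):.6f}")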

  2. NOMAD: a nodal microscopic analysis method for nuclear fuel depletion

    International Nuclear Information System (INIS)

    Rajic, H.L.; Ougouag, A.M.

    1987-01-01

    Recently developed assembly homogenization techniques made possible very efficient global burnup calculations based on modern nodal methods. There are two possible ways of modeling the global depletion process: macroscopic and microscopic depletion models. Using a microscopic global depletion approach NOMAD (NOdal Microscopic Analysis Method for Nuclear Fuel Depletion), a multigroup, two- and three-dimensional, multicycle depletion code was devised. The code uses the ILLICO nodal diffusion model. The formalism of the ILLICO methodology is extended to treat changes in the macroscopic cross sections during a depletion cycle without recomputing the coupling coefficients. This results in a computationally very efficient method. The code was tested against a well-known depletion benchmark problem. In this problem a two-dimensional pressurized water reactor is depleted through two cycles. Both cycles were run with 1 x 1 and 2 x 2 nodes per assembly. It is obvious that the one node per assembly solution gives unacceptable results while the 2 x 2 solution gives relative power errors consistently below 2%

  3. Development of a computational system for radiotherapic planning with the IMRT technique applied to the MCNP computer code with 3D graphic interface for voxel models

    International Nuclear Information System (INIS)

    Fonseca, Telma Cristina Ferreira

    2009-01-01

    Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's project, a software package for simulating the IMRT protocol, named SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The computational system SOFT-RT allows the simulation of the absorbed dose of the radiotherapy treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of digitalized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and the adjacent healthy tissues for each orientation and position of the planned beam. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on a tumoral region while preserving the healthy tissues around it. It takes into account exactly where the radiation beam passes, which tissues are affected and how much dose is deposited in each of them. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing the dose and the voxel model in gray scale in a three-dimensional graphic representation. The present master's thesis presents the new computational system for radiotherapy treatment, the SOFT-RT code, which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted with the goal of running the code on an open-source and freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported, as well as some dosimetric evaluations. (author)

  4. Development of burnup methods and capabilities in Monte Carlo code RMC

    International Nuclear Information System (INIS)

    She, Ding; Liu, Yuxuan; Wang, Kan; Yu, Ganglin; Forget, Benoit; Romano, Paul K.; Smith, Kord

    2013-01-01

    Highlights: ► The RMC code has been developed aiming at large-scale burnup calculations. ► Matrix exponential methods are employed to solve the depletion equations. ► The Energy-Bin method reduces the time expense of treating ACE libraries. ► The Cell-Mapping method is efficient to handle massive amounts of tally cells. ► Parallelized depletion is necessary for massive amounts of burnup regions. -- Abstract: The Monte Carlo burnup calculation has always been a challenging problem because of its large time consumption when applied to full-scale assembly or core calculations, and thus its application in routine analysis is limited. Most existing MC burnup codes are usually external wrappers between a MC code, e.g. MCNP, and a depletion code, e.g. ORIGEN. The code RMC is a newly developed MC code with an embedded depletion module aimed at performing burnup calculations of large-scale problems with high efficiency. Several measures have been taken to strengthen the burnup capabilities of RMC. Firstly, an accurate and efficient depletion module called DEPTH has been developed and built in, which employs the rational approximation and polynomial approximation methods. Secondly, the Energy-Bin method and the Cell-Mapping method are implemented to speed up the transport calculations with large numbers of nuclides and tally cells. Thirdly, the batch tally method and the parallelized depletion module have been utilized to better handle cases with massive amounts of burnup regions in parallel calculations. Burnup cases including a PWR pin and a 5 × 5 assembly group are calculated, thereby demonstrating the burnup capabilities of the RMC code. In addition, the computational time and memory requirements of RMC are compared with other MC burnup codes.
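
    The depletion solvers named in the record above (rational and polynomial approximations) both evaluate N(t) = exp(A t) N(0) for a burnup matrix A. The sketch below contrasts a naive truncated Taylor series with SciPy's Pade-based expm on a tiny invented decay chain; it only shows the shape of the calculation, not DEPTH's algorithms.

        import numpy as np
        from scipy.linalg import expm

        # Tiny invented chain: nuclide 0 -> nuclide 1 -> removed; decay constants in 1/s.
        lam0, lam1 = 1.0e-5, 3.0e-6
        A = np.array([[-lam0, 0.0],
                      [lam0, -lam1]])
        t = 1.0e5                         # seconds, kept small so the series converges
        N0 = np.array([1.0e20, 0.0])

        def expm_taylor(M, order=20):
            # Truncated Taylor series sum_k M^k / k!  (adequate only for small ||M||).
            acc = np.eye(M.shape[0])
            term = np.eye(M.shape[0])
            for k in range(1, order + 1):
                term = term @ M / k
                acc = acc + term
            return acc

        print("Taylor:", expm_taylor(A * t) @ N0)
        print("Pade  :", expm(A * t) @ N0)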

  5. Revisiting Antarctic Ozone Depletion

    Science.gov (United States)

    Grooß, Jens-Uwe; Tritscher, Ines; Müller, Rolf

    2015-04-01

    Antarctic ozone depletion has been known for almost three decades, and it is well established that it is caused by chlorine-catalysed ozone depletion inside the polar vortex. However, there are still some details which need to be clarified. In particular, there is a current debate on the relative importance of liquid aerosol and crystalline NAT and ice particles for chlorine activation. Particles have a threefold impact on polar chlorine chemistry: temporary removal of HNO3 from the gas phase (uptake), permanent removal of HNO3 from the atmosphere (denitrification), and chlorine activation through heterogeneous reactions. We have performed simulations with the Chemical Lagrangian Model of the Stratosphere (CLaMS), employing a recently developed algorithm for saturation-dependent NAT nucleation, for the Antarctic winters 2011 and 2012. The simulation results are compared with different satellite observations. With the help of these simulations, we investigate the role of the different processes responsible for chlorine activation and ozone depletion. In particular, the sensitivity with respect to the particle type has been investigated. If temperatures are artificially forced to only allow cold binary liquid aerosol, the simulation still shows significant chlorine activation and ozone depletion. The results of the 3-D Chemical Transport Model CLaMS simulations differ from purely Lagrangian long-time trajectory box model simulations, which indicates the importance of mixing processes.

  6. Proposal to consistently apply the International Code of Nomenclature of Prokaryotes (ICNP) to names of the oxygenic photosynthetic bacteria (cyanobacteria), including those validly published under the International Code of Botanical Nomenclature (ICBN)/International Code of Nomenclature for algae, fungi and plants (ICN), and proposal to change Principle 2 of the ICNP.

    Science.gov (United States)

    Pinevich, Alexander V

    2015-03-01

    This taxonomic note was motivated by the recent proposal [Oren & Garrity (2014) Int J Syst Evol Microbiol 64, 309-310] to exclude the oxygenic photosynthetic bacteria (cyanobacteria) from the wording of General Consideration 5 of the International Code of Nomenclature of Prokaryotes (ICNP), which entails unilateral coverage of these prokaryotes by the International Code of Nomenclature for algae, fungi, and plants (ICN; formerly the International Code of Botanical Nomenclature, ICBN). On the basis of key viewpoints, approaches and rules in the systematics, taxonomy and nomenclature of prokaryotes it is reciprocally proposed to apply the ICNP to names of cyanobacteria including those validly published under the ICBN/ICN. For this purpose, a change to Principle 2 of the ICNP is proposed to enable validation of cyanobacterial names published under the ICBN/ICN rules. © 2015 IUMS.

  7. LAVENDER: A steady-state core analysis code for design studies of accelerator driven subcritical reactors

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shengcheng; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn; Huang, Kai; He, Mingtao; Li, Xunzhao

    2014-10-15

    Highlights: • A new code system for design studies of accelerator driven subcritical reactors (ADSRs) is developed. • An SN transport solver on triangular-z meshes, fine depletion analysis and multi-channel thermal-hydraulics analysis are coupled in the code. • Numerical results indicate that the code is reliable and efficient for design studies of ADSRs. - Abstract: Accelerator driven subcritical reactors (ADSRs) have been proposed and widely investigated for the transmutation of transuranics (TRUs). ADSRs have several special characteristics, such as the subcritical core driven by spallation neutrons, anisotropic neutron flux distribution and complex geometry, etc. These bring up requirements for the development or extension of analysis codes to perform design studies. A code system named LAVENDER has been developed in this paper. It couples the modules for spallation target simulation and subcritical core analysis. The neutron transport-depletion calculation scheme is used, based on homogenized cross sections from assembly calculations. A three-dimensional SN nodal transport code based on triangular-z meshes is employed and a multi-channel thermal-hydraulics analysis model is integrated. In the depletion calculation, the evolution of the isotopic composition in the core is evaluated using the transmutation trajectory analysis (TTA) algorithm and fine depletion chains. The new code is verified by several benchmarks and code-to-code comparisons. Numerical results indicate that LAVENDER is reliable and efficient to be applied for the steady-state analysis and reactor core design of ADSRs.

  8. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  9. Capital expenditure and depletion

    International Nuclear Information System (INIS)

    Rech, O.; Saniere, A.

    2003-01-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  10. Capital expenditure and depletion

    Energy Technology Data Exchange (ETDEWEB)

    Rech, O.; Saniere, A

    2003-07-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  11. CO Depletion: A Microscopic Perspective

    Energy Technology Data Exchange (ETDEWEB)

    Cazaux, S. [Faculty of Aerospace Engineering, Delft University of Technology, Delft (Netherlands); Martín-Doménech, R.; Caro, G. M. Muñoz; Díaz, C. González [Centro de Astrobiología (INTA-CSIC), Ctra. de Ajalvir, km 4, Torrejón de Ardoz, E-28850 Madrid (Spain); Chen, Y. J. [Department of Physics, National Central University, Jhongli City, 32054, Taoyuan County, Taiwan (China)

    2017-11-10

    In regions where stars form, variations in density and temperature can cause gas to freeze out onto dust grains, forming ice mantles, which influences the chemical composition of a cloud. The aim of this paper is to understand in detail the depletion (and desorption) of CO on (from) interstellar dust grains. Experimental simulations were performed under two different (astrophysically relevant) conditions. In parallel, Kinetic Monte Carlo simulations were used to mimic the experimental conditions. In our experiments, CO molecules accrete onto water ice at temperatures below 27 K, with a deposition rate that does not depend on the substrate temperature. During the warm-up phase, the desorption processes do exhibit subtle differences, indicating the presence of weakly bound CO molecules and therefore highlighting a low diffusion efficiency. IR measurements following the ice thickness during the TPD confirm that diffusion occurs at temperatures close to desorption. Applied to astrophysical conditions, in a pre-stellar core the binding energies of CO molecules, ranging between 300 and 850 K, depend on the conditions under which CO has been deposited. Because of this wide range of binding energies, the depletion of CO as a function of A_V is much less important than initially thought. The weakly bound molecules, easily released into the gas phase through evaporation, change the balance between accretion and desorption, which results in a larger abundance of CO at high extinctions. In addition, weakly bound CO molecules are also more mobile, and this could increase the reactivity within interstellar ices.

  12. Ozone-depleting Substances (ODS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...

  13. Depleted uranium management alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  14. Consequences of biome depletion

    International Nuclear Information System (INIS)

    Salvucci, Emiliano

    2013-01-01

    The human microbiome is an integral part of the superorganism together with its host, and they have co-evolved since the early days of the existence of the human species. The modification of the microbiome as a result of changes in the food and social habits of human beings throughout their life history has led to the emergence of many diseases. In contrast with the Darwinian view of nature as selfishness and competition, new holistic approaches are rising. Under these views, the reconstitution of the microbiome comes out as a fundamental therapy for emerging diseases related to biome depletion.

  15. Depleted uranium management alternatives

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication to shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  16. MO-G-BRE-05: Clinical Process Improvement and Billing in Radiation Oncology: A Case Study of Applying FMEA for CPT Code 77336 (continuing Medical Physics Consultation)

    International Nuclear Information System (INIS)

    Spirydovich, S; Huq, M

    2014-01-01

    Purpose: The improvement of quality in healthcare can be assessed by Failure Mode and Effects Analysis (FMEA). In radiation oncology, FMEA, as applied to the billing CPT code 77336, can improve both charge capture and, most importantly, the quality of the performed services. Methods: We created an FMEA table for the process performed under CPT code 77336. For a given process step, each member of the assembled team (physicist, dosimetrist, and therapist) independently assigned numerical values for: probability of occurrence (O, 1–10), severity (S, 1–10), and probability of detection (D, 1–10) for every failure mode cause and effect combination. The risk priority number, RPN, was then calculated as the product of O, S and D, from which an average RPN was calculated for each combination mentioned above. A fault tree diagram, with each process sorted into 6 categories, was created with linked RPNs. For processes with high RPN, recommended actions were assigned. Two separate R and V systems (Lantis and EMR-based ARIA) were considered. Results: We identified 9 potential failure modes and 19 corresponding potential causes of these failure modes, all resulting in an unjustified 77336 charge and compromised quality of care. In Lantis, the range of RPN was 24.5–110.8, and of S values 2–10. The highest-ranking RPN of 110.8 came from the failure mode described as "end-of-treatment check not done before the completion of treatment", and the highest S value of 10 (RPN=105) from "overrides not checked". For the same failure modes, within the ARIA electronic environment with its additional controls, RPN values were significantly lower (44.3 for the missing end-of-treatment check and 20.0 for overrides not checked). Conclusion: Our work has shown that when charge capture was missed, some of the corresponding services had also not been performed. Absence of such necessary services may result in sub-optimal quality of care rendered to patients.
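
    A minimal sketch of the scoring arithmetic described in this record; the reviewer scores below are invented for illustration and are not data from the study.

```python
# Minimal sketch of the FMEA risk-priority-number (RPN) arithmetic described
# above: each reviewer scores occurrence (O), severity (S) and detectability (D)
# on a 1-10 scale, RPN = O * S * D per reviewer, and the failure mode is ranked
# by the average RPN across the team. The scores below are illustrative only.
from statistics import mean

def average_rpn(scores):
    """scores: list of (O, S, D) tuples, one per team member."""
    return mean(o * s * d for o, s, d in scores)

if __name__ == "__main__":
    # Hypothetical scores from a physicist, dosimetrist and therapist
    # for one failure-mode cause/effect combination.
    end_of_treatment_check_missed = [(4, 9, 5), (3, 10, 4), (4, 9, 6)]
    print(f"average RPN = {average_rpn(end_of_treatment_check_missed):.1f}")
```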

  17. MOx Depletion Calculation Benchmark

    International Nuclear Information System (INIS)

    San Felice, Laurence; Eschbach, Romain; Dewi Syarifah, Ratna; Maryam, Seif-Eddine; Hesketh, Kevin

    2016-01-01

    Under the auspices of the NEA Nuclear Science Committee (NSC), the Working Party on Scientific Issues of Reactor Systems (WPRS) has been established to study the reactor physics, fuel performance, radiation transport and shielding, and the uncertainties associated with modelling of these phenomena in present and future nuclear power systems. The WPRS has different expert groups to cover a wide range of scientific issues in these fields. The Expert Group on Reactor Physics and Advanced Nuclear Systems (EGRPANS) was created in 2011 to perform specific tasks associated with reactor physics aspects of present and future nuclear power systems. EGRPANS provides expert advice to the WPRS and the nuclear community on the development needs (data and methods, validation experiments, scenario studies) for different reactor systems and also provides specific technical information regarding: core reactivity characteristics, including fuel depletion effects; core power/flux distributions; core dynamics and reactivity control. In 2013 EGRPANS published a report that investigated fuel depletion effects in a Pressurised Water Reactor (PWR). This was entitled 'International Comparison of a Depletion Calculation Benchmark on Fuel Cycle Issues' NEA/NSC/DOC(2013), and it documented a benchmark exercise for UO2 fuel rods. This report documents a complementary benchmark exercise that focused on PuO2/UO2 Mixed Oxide (MOX) fuel rods. The results are especially relevant to the back-end of the fuel cycle, including irradiated fuel transport, reprocessing, interim storage and waste repository. Saint-Laurent B1 (SLB1) was the first French reactor to use MOX assemblies. SLB1 is a 900 MWe PWR, with 30% MOX fuel loading. The standard MOX assemblies used in the Saint-Laurent B1 reactor include three zones with different plutonium enrichments: high Pu content (5.64%) in the central zone, medium Pu content (4.42%) in the intermediate zone and low Pu content (2.91%) in the peripheral zone.

  18. Riddle of depleted uranium

    International Nuclear Information System (INIS)

    Hussein, A.S.

    2005-01-01

    Depleted Uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear power ships. DU may also result from the reprocessing of spent nuclear reactor fuel. Potentially DU has both chemical and radiological toxicity, with the two important target organs being the kidney and the lungs. DU is made into a metal and, due to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Following the use of DU in recent years, reports have appeared in the press on health hazards alleged to be due to DU. In this paper the properties, applications, and potential environmental and health effects of DU are briefly reviewed.

  19. The enhancements and testing for the MCNPX depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; Hendricks, J. S.; Anghaie, S.

    2008-01-01

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model true system physics and better track the evolution of temporal nuclide inventory by simulating the actual physical process. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability in a single Monte Carlo code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. We describe here the depletion methodology dating from the original linking of MONTEBURNS and MCNP to the first public release of the integrated capability (MCNPX 2.6.B, June 2006) that has been reported previously. Then we further detail the many new depletion capability enhancements since then leading to the present capability. The H.B. Robinson benchmark calculation results are also reported. The new MCNPX depletion capability enhancements include: (1) allowing the modeling of as large a system as computer memory capacity permits; (2) tracking every fission product available in ENDF/B-VII.0; (3) enabling depletion in repeated structures geometries such as repeated arrays of fuel pins; (4) including metastable isotopes in burnup; and (5) manually changing the concentrations of key isotopes during different time steps to simulate changing reactor control conditions, such as dilution of poisons to maintain criticality during burnup. These enhancements allow better detail to model the true system physics and also improve the robustness of the capability. The H.B. Robinson benchmark calculation was completed in order to determine the accuracy of the depletion solution. Temporal nuclide computations of key actinide and fission products are compared to the results of other

  20. The Toxicity of Depleted Uranium

    OpenAIRE

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...

  1. Development of safety evaluation methods and analysis codes applied to the safety regulations for the design and construction stage of fast breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The purpose of this study is to develop the safety evaluation methods and analysis codes needed in the design and construction stage of the fast breeder reactor (FBR). In JFY 2012, the following results were obtained. As for the development of the safety evaluation methods needed for the safety examination conducted for the reactor establishment permission, development of the analysis codes, such as the core damage analysis code, was carried out following the planned schedule. As for the development of the safety evaluation method needed for risk-informed safety regulation, the quantification technique for the event tree using the Continuous Markov chain Monte Carlo method (CMMC method) was studied. (author)

  2. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

    The moderator system of CANDU, a prototype PHWR (pressurized heavy-water reactor), has been modeled in multidimension for computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested in modeled hydrothermal systems of heavy-water reactors. The commercial codes COMSOL Multiphysics and ANSYS-CFX, together with OpenFOAM, an open-source code, are introduced for the various simplified and practical problems. All the implemented computational codes are tested for a benchmark problem of the STERN laboratory experiment with a precise modeling of tubes, and compared with each other as well as with the measured data and a porous model based on the experimental correlation of pressure drop. The effect of the turbulence model is also discussed for these low Reynolds number flows. As a result, they are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  3. Adaption of the PARCS Code for Core Design Audit Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyong Chol; Lee, Young Jin; Uhm, Jae Beop; Kim, Hyunjik [Nuclear Safety Evaluation, Daejeon (Korea, Republic of); Jeong, Hun Young; Ahn, Seunghoon; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2013-05-15

    The eigenvalue calculation also includes quasi-static core depletion analyses. PARCS has implemented a variety of features and has been qualified as a regulatory audit code in conjunction with other NRC thermal-hydraulic codes such as TRACE or RELAP5. In this study, as an adaptation effort for audit applications, PARCS is applied to an audit analysis of a reload core design. The lattice physics code HELIOS is used for cross section generation. The PARCS-HELIOS code system has been established as a core analysis tool. Calculation results have been compared over a wide spectrum of calculations, such as power distribution, critical soluble boron concentration, and rod worth. Reasonable agreement between the audit calculation and the reference results has been found.

  4. Depletion optimization of lumped burnable poisons in pressurized water reactors

    International Nuclear Information System (INIS)

    Kodah, Z.H.

    1982-01-01

    Techniques were developed to construct a set of basic poison depletion curves which deplete in a monotonic manner. These curves were combined to match a required optimized depletion profile by utilizing either linear or non-linear programming methods. Three computer codes, LEOPARD, XSDRN, and EXTERMINATOR-2, were used in the analyses. A depletion routine was developed and incorporated into the XSDRN code to allow the depletion of fuel, fission products, and burnable poisons. The Three Mile Island Unit-1 reactor core was used in this work as a typical PWR core. Two fundamental burnable poison rod designs were studied. They are a solid cylindrical poison rod and an annular cylindrical poison rod with water filling the central region. These two designs have either a uniform mixture of burnable poisons or lumped spheroids of burnable poisons in the poison region. Boron and gadolinium are the two burnable poisons which were investigated in this project. Thermal self-shielding factor calculations for solid and annular poison rods were conducted. Also, expressions for overall thermal self-shielding factors for one or more than one size group of poison spheroids inside solid and annular poison rods were derived and studied. Poison spheroids deplete at a slower rate than the poison mixture because each spheroid exhibits some self-shielding effects of its own. The larger the spheroid, the higher the self-shielding effects due to the increase in poison concentration.

  5. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    Science.gov (United States)

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  6. The Toxicity of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Wayne Briner

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.

  7. Ego depletion impairs implicit learning.

    Science.gov (United States)

    Thompson, Kelsey R; Sanchez, Daniel J; Wesley, Abigail H; Reber, Paul J

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  8. The influence of ego depletion on sprint start performance in athletes without track and field experience.

    Science.gov (United States)

    Englert, Chris; Persaud, Brittany N; Oudejans, Raôul R D; Bertrams, Alex

    2015-01-01

    We tested the assumption that ego depletion would affect the sprint start in a sample of N = 38 athletes without track and field experience in an experiment by applying a mixed between- (depletion vs. non-depletion) within- (T1: before manipulation of ego depletion vs. T2: after manipulation of ego depletion) subjects design. We assumed that ego depletion would increase the possibility for a false start, as regulating the impulse to initiate the sprinting movement too soon before the starting signal requires self-control. In line with our assumption, we found a significant interaction as there was only a significant increase in the number of false starts from T1 to T2 for the depletion group while this was not the case for the non-depletion group. We conclude that ego depletion has a detrimental influence on the sprint start in athletes without track and field experience.

  9. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. The flow rate in parallel channels, coupled or not by conduction across plates, is computed for conditions of pressure drop or flow rate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code, and has as its complement FLID, a one-channel, two-dimensional code. (authors)

  10. OECD/NEA International Benchmark exercises: Validation of CFD codes applied to the nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, owing among other factors to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. The International Benchmark Exercises (IBE) sponsored by the OECD/NEA have therefore been fundamental to analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all the benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of the participation in the various IBEs is given, describing each benchmark, the CFD model created for it, and the main conclusions. (Author)

  11. CRDIAC: Coupled Reactor Depletion Instrument with Automated Control

    International Nuclear Information System (INIS)

    Logan, Steven K.

    2012-01-01

    When modeling the behavior of a nuclear reactor over time, it is important to understand how the isotopes in the reactor will change, or transmute, over that time. This is especially important in the reactor fuel itself. Many nuclear physics modeling codes model how particles interact in the system, but do not model this over time. Thus, another code is used in conjunction with the nuclear physics code to accomplish this. In our code, Monte Carlo N-Particle (MCNP) codes and the Multi Reactor Transmutation Analysis Utility (MRTAU) were chosen as the codes to use. In this way, MCNP would produce the reaction rates in the different isotopes present and MRTAU would use cross sections generated from these reaction rates to determine how the mass of each isotope is lost or gained. Between these two codes, the information must be altered and edited for use. For this, a Python 2.7 script was developed to aid the user in getting the information in the correct forms. This newly developed methodology was called the Coupled Reactor Depletion Instrument with Automated Controls (CRDIAC). As is the case in any newly developed methodology for modeling of physical phenomena, CRDIAC needed to be verified against similar methodology and validated against data taken from an experiment, in our case AFIP-3. AFIP-3 was a reduced enrichment plate type fuel tested in the ATR. We verified our methodology against the MCNP Coupled with ORIGEN2 (MCWO) method and validated our work against the Post Irradiation Examination (PIE) data. When compared to MCWO, the difference in concentration of U-235 throughout Cycle 144A was about 1%. When compared to the PIE data, the average bias for end of life U-235 concentration was about 2%. These results from CRDIAC therefore agree with the MCWO and PIE data, validating and verifying CRDIAC. CRDIAC provides an alternative to using ORIGEN-based methodology, which is useful because CRDIAC's depletion code, MRTAU, uses every available isotope in its depletion
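
    The coupling workflow described above can be pictured as a simple driver loop; the sketch below uses stand-in stub functions rather than the actual MCNP/MRTAU file interfaces, so every name, signature and number is an assumption made for illustration.

```python
# Schematic of the transport/depletion coupling loop that CRDIAC automates
# (MCNP for reaction rates, MRTAU for burnup). The function bodies below are
# stand-in stubs so the control flow can run; they are not the real MCNP/MRTAU
# interfaces, which exchange data through input and output files.
import math

def run_transport(compositions):
    """Stub: would write an MCNP input, run it, and parse one-group reaction rates."""
    return {isotope: 1.0e-9 for isotope in compositions}   # dummy rates (1/s)

def run_depletion(compositions, reaction_rates, dt):
    """Stub: would hand effective cross sections to MRTAU and return new masses."""
    return {iso: m * math.exp(-reaction_rates[iso] * dt)
            for iso, m in compositions.items()}

compositions = {"U235": 1.0, "U238": 19.0}                 # relative masses
for step, dt in enumerate([30 * 86400] * 4):               # four 30-day steps
    rates = run_transport(compositions)
    compositions = run_depletion(compositions, rates, dt)
    print(step, compositions)
```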

  12. Development of an Application Programming Interface for Depletion Analysis (APIDA)

    International Nuclear Information System (INIS)

    Lago, Daniel; Rahnema, Farzad

    2017-01-01

    Highlights: • APIDA is an Application Programming Interface tool for Depletion Analysis. • APIDA employs a matrix exponential method and a linear chain method. • A burnup solver to couple to neutron transport solvers in lattice depletion or reactor core analysis codes. - Abstract: A new utility has been developed with extensive capabilities in identifying nuclide decay and transmutation characteristics, allowing for accurate and efficient tracking of the change in isotopic concentrations in nuclear reactor fuel over time when coupled with a transport solution method. This tool, named the Application Programming Interface for Depletion Analysis (APIDA), employs both a matrix exponential method and a linear chain method to solve for the end-of-time-step nuclide concentrations for all isotopes relevant to nuclear reactors. The Chebyshev Rational Approximation Method (CRAM) was utilized to deal with the ill-conditioned matrices generated during lattice depletion calculations, and a complex linear chain solver was developed to handle isotopes removed from the burnup matrix due to either radioactive stability or a sufficiently low neutron-induced reaction cross section. The entire tool is housed in a robust but simple application programming interface (API). The development of this API allows other codes, particularly numerical neutron transport solvers, to incorporate APIDA as the burnup solver in a lattice depletion code or reactor core analysis code in memory, without the need to write to or read from the hard disk. The APIDA code was benchmarked using several decay and transmutation chains. Burnup solutions produced by APIDA were shown to provide material concentrations comparable to the analytically solved Bateman equations – well below 0.01% relative error for even the most extreme cases using isotopes with vastly different decay constants. As a first-order demonstration of the API, APIDA was coupled with the transport solver in the SERPENT code for a fuel pin
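
    A minimal sketch of the burnup-matrix idea named in this record, using a dense matrix exponential from SciPy in place of CRAM; the three-nuclide chain and its decay constants are invented for illustration.

```python
# Sketch of the burnup-matrix approach named in the abstract: build a matrix A
# of decay (and, in general, flux-weighted transmutation) rates and advance the
# nuclide vector with a matrix exponential, N(t) = expm(A * t) @ N(0). A dense
# expm is used here for brevity; APIDA is described as using CRAM, which is
# better suited to the stiff, ill-conditioned matrices of real depletion chains.
import numpy as np
from scipy.linalg import expm

# Toy 3-nuclide chain 0 -> 1 -> 2 (decay constants in 1/s, illustrative values).
lam = np.array([1.0e-5, 2.0e-6, 0.0])
A = np.array([
    [-lam[0],     0.0,     0.0],
    [ lam[0], -lam[1],     0.0],
    [    0.0,  lam[1], -lam[2]],
])

N0 = np.array([1.0e20, 0.0, 0.0])      # initial concentrations (atoms)
t = 30.0 * 24.0 * 3600.0               # one 30-day depletion step in seconds
N_end = expm(A * t) @ N0
print(N_end)
```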

  13. Ego Depletion Impairs Implicit Learning

    Science.gov (United States)

    Thompson, Kelsey R.; Sanchez, Daniel J.; Wesley, Abigail H.; Reber, Paul J.

    2014-01-01

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent. PMID:25275517

  14. Hsp90 depletion goes wild

    OpenAIRE

    Siegal, Mark L; Masel, Joanna

    2012-01-01

    Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...

  15. Ego depletion impairs implicit learning.

    Directory of Open Access Journals (Sweden)

    Kelsey R Thompson

    Implicit skill learning occurs incidentally and without conscious awareness of what is learned. However, the rate and effectiveness of learning may still be affected by decreased availability of central processing resources. Dual-task experiments have generally found impairments in implicit learning, however, these studies have also shown that certain characteristics of the secondary task (e.g., timing) can complicate the interpretation of these results. To avoid this problem, the current experiments used a novel method to impose resource constraints prior to engaging in skill learning. Ego depletion theory states that humans possess a limited store of cognitive resources that, when depleted, results in deficits in self-regulation and cognitive control. In a first experiment, we used a standard ego depletion manipulation prior to performance of the Serial Interception Sequence Learning (SISL) task. Depleted participants exhibited poorer test performance than did non-depleted controls, indicating that reducing available executive resources may adversely affect implicit sequence learning, expression of sequence knowledge, or both. In a second experiment, depletion was administered either prior to or after training. Participants who reported higher levels of depletion before or after training again showed less sequence-specific knowledge on the post-training assessment. However, the results did not allow for clear separation of ego depletion effects on learning versus subsequent sequence-specific performance. These results indicate that performance on an implicitly learned sequence can be impaired by a reduction in executive resources, in spite of learning taking place outside of awareness and without conscious intent.

  16. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; De Vet, Emely; Fennis, Bob M; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  17. Contributing to the design of accident tolerant fuels by applying the TRANSURANUS fuel performance code. Contribution to the design of ATF by means of TRANSURANUS

    International Nuclear Information System (INIS)

    Van Uffelen, Paul; Schubert, A.; Di Marcello, V.; Van De Laar, J.

    2013-01-01

    The TRANSURANUS fuel performance code is used by safety authorities, industry, research centres and universities in the EU and across the globe. Accordingly, only a very brief overview was provided of the structure of the code and the corresponding input requirements before summarising the needs for simulating new cladding materials, such as those considered in the framework of the workshop, by means of TRANSURANUS. Two concrete examples were then provided. The first deals with the implementation of material properties from the CEA for SiC-based cladding in the frame of the GoFASTR project, which is funded by the EU. The second deals with material properties for T91, which have been implemented in the framework of a collaboration agreement between Politecnico di Milano and JRC-ITU. In order to illustrate the impact of replacing one cladding by another, an example irradiation was selected, and some of the relevant cladding properties were shown as a function of irradiation time for SiC-based and T91 cladding, compared to standard Zircaloy and stainless steel cladding. Finally, it was pointed out that, although fuel performance codes may be very useful for the current scoping studies based on available material properties, there are limitations in terms of material properties under representative irradiation conditions, and in terms of representativeness for heterogeneous and anisotropic materials such as Complex Matrix Composite cladding materials (e.g. the so-called sandwiched SiCf-SiC material). More experimental data are therefore required for more refined and reliable predictions.

  18. A reflection of the coding of meaning in patient-physician interaction: Jürgen Habermas' theory of communication applied to sequence analysis.

    Science.gov (United States)

    Skirbekk, Helge

    2004-08-01

    This paper introduces parts of Jürgen Habermas' theory of communication in an attempt to understand how meaning is coded in patient-physician communication. By having a closer look at how patients and physicians make assertions with their utterances, light will be shed on difficult aspects of reaching understanding in the clinical encounter. Habermas' theory will be used to differentiate assertions into validity claims referring to truth, truthfulness and rightness. An analysis of hypothetical physician-replies to a patient suffering from back pains will substantiate the necessity for such a theory.

  19. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

    The report documents the computer code block VENTURE designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently.

  20. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

    The report documents the computer code block VENTURE designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently.

  1. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations, or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level.
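
    As an illustration of the class of problem VENTURE addresses, the sketch below solves a one-group, one-dimensional finite-difference diffusion eigenvalue problem by power iteration; the cross sections and slab size are made-up numbers, and this is not VENTURE's multigroup, three-dimensional solver.

```python
# One-group, one-dimensional finite-difference diffusion eigenvalue problem,
# solved by power iteration: -D phi'' + Sigma_a phi = (1/k) nu*Sigma_f phi on a
# bare slab with zero-flux boundaries. Only a sketch of the class of problem
# VENTURE solves; the one-group constants below are illustrative.
import numpy as np

D, sig_a, nusig_f = 1.3, 0.030, 0.032   # cm, 1/cm, 1/cm (illustrative)
L, n = 200.0, 200                       # slab width (cm), interior mesh points
h = L / (n + 1)

# Loss operator M = -D d2/dx2 + Sigma_a with zero-flux (Dirichlet) boundaries.
M = np.zeros((n, n))
for i in range(n):
    M[i, i] = 2.0 * D / h**2 + sig_a
    if i > 0:
        M[i, i - 1] = -D / h**2
    if i < n - 1:
        M[i, i + 1] = -D / h**2

phi = np.ones(n)
k = 1.0
for _ in range(200):                    # power (source) iteration
    src = nusig_f * phi / k
    phi_new = np.linalg.solve(M, src)
    k *= phi_new.sum() / phi.sum()      # update k from the fission-source ratio
    phi = phi_new
print(f"k-effective ~ {k:.5f}")
```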

  2. Enhanced Monte-Carlo-Linked Depletion Capabilities in MCNPX

    International Nuclear Information System (INIS)

    Fensin, Michael L.; Hendricks, John S.; Anghaie, Samim

    2006-01-01

    As advanced reactor concepts challenge the accuracy of current modeling technologies, a higher-fidelity depletion calculation is necessary to model time-dependent core reactivity properly for accurate cycle length and safety margin determinations. The recent integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a completely self-contained Monte-Carlo-linked depletion capability. Two advances have been made in the latest MCNPX capability based on problems observed in pre-released versions: continuous energy collision density tracking and proper fission yield selection. Pre-released versions of the MCNPX depletion code calculated the reaction rates for (n,2n), (n,3n), (n,p), (n,α), and (n,γ) by matching the MCNPX steady-state 63-group flux with 63-group cross sections inherent in the CINDER90 library and then collapsing to one-group collision densities for the depletion calculation. This procedure led to inaccuracies due to the miscalculation of the reaction rates resulting from the collapsed multi-group approach. The current version of MCNPX eliminates this problem by using collapsed one-group collision densities generated from continuous energy reaction rates determined during the MCNPX steady-state calculation. MCNPX also now explicitly determines the proper fission yield to be used by the CINDER90 code for the depletion calculation. The CINDER90 code offers a thermal, fast, and high-energy fission yield for each fissile isotope contained in the CINDER90 data file. MCNPX determines which fission yield to use for a specified problem by calculating the integral fission rate for the defined energy boundaries (thermal, fast, and high energy), determining which energy range contains the majority of fissions, and then selecting the appropriate fission yield for the energy range containing the majority of fissions. The MCNPX depletion capability enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code
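
    The fission-yield selection logic paraphrased above can be sketched as follows; the energy boundaries and the tally layout are assumptions for illustration, not the MCNPX internals.

```python
# Sketch of the fission-yield selection logic described above: integrate the
# fission rate over thermal, fast and high-energy ranges and pick the yield set
# of the range that holds the majority of fissions. The energy boundaries and
# the tally format are assumptions for illustration, not the MCNPX internals.
def select_fission_yield(fission_rate_by_bin, bin_upper_edges_eV,
                         thermal_cut=1.0, fast_cut=1.0e6):
    """fission_rate_by_bin[i] is the integral fission rate in energy bin i,
    whose upper edge is bin_upper_edges_eV[i] (monotonically increasing)."""
    totals = {"thermal": 0.0, "fast": 0.0, "high": 0.0}
    for rate, e_up in zip(fission_rate_by_bin, bin_upper_edges_eV):
        if e_up <= thermal_cut:
            totals["thermal"] += rate
        elif e_up <= fast_cut:
            totals["fast"] += rate
        else:
            totals["high"] += rate
    return max(totals, key=totals.get)   # yield library to hand to CINDER90

print(select_fission_yield([5.0, 2.0, 0.1], [0.5, 1.0e5, 2.0e7]))  # -> "thermal"
```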

  3. An investigation of the constitutive relations for intersubchannel transfer mechanisms in horizontal flows as applied in the ASSERT-4 subchannel code

    International Nuclear Information System (INIS)

    Tye, P.; Teyssedou, A.; Tapucu, A.

    1994-01-01

    In this paper, the influence that the constitutive relations used to represent some of the intersubchannel transfer mechanisms has on the predictions of the ASSERT-4 subchannel code for horizontal flows is examined. In particular the choices made in the representation of the gravity driven phase separation phenomena are analyzed. This is done by comparing the predictions of the ASSERT subchannel code with experimental data on void fraction and mass flow rate, obtained for two horizontal interconnected subchannels. ASSERT uses a drift flux model which allows the two phases to have different velocities. In particular ASSERT contains models for the buoyancy effects which cause phase separation between adjacent subchannels in horizontal flows. This feature, which is of great importance in the subchannel analysis of CANDU reactors, is implemented in the constitutive relationship for the relative velocity. In order to isolate different intersubchannel transfer mechanisms, three different subchannel orientations are analyzed. These are the two subchannels at the same elevation, the high void subchannel below the low void subchannel, and the high void subchannel above the low void subchannel. It is observed that for all three subchannel orientations ASSERT does a reasonably good job of predicting the experimental trends. However, certain modifications to the representation of the gravitational phase separation effects which seem to improve the overall predictions are suggested. ((orig.))

  4. Code Calibration Applied to the TCA High-Lift Model in the 14 x 22 Wind Tunnel (Simulation With and Without Model Post-Mount)

    Science.gov (United States)

    Lessard, Wendy B.

    1999-01-01

    The objective of this study is to calibrate a Navier-Stokes code for the TCA (30/10) baseline configuration (partial span leading edge flaps were deflected at 30 degs. and all the trailing edge flaps were deflected at 10 degs). The computational results for several angles of attack are compared with experimental forces, moments, and surface pressures. The code used in this study is CFL3D; mesh sequencing and multi-grid were used to full advantage to accelerate convergence. A multi-grid approach was used similar to that used for the Reference H configuration, allowing point-to-point matching across all the trailing-edge block interfaces. From past experience with the Reference H (i.e., good force, moment, and pressure comparisons were obtained), it was assumed that the mounting system would produce small effects; hence, it was not initially modeled. However, comparisons of lower surface pressures indicated the post mount significantly influenced the lower surface pressures, so the post geometry was inserted into the existing grid using Chimera (overset grids).

  5. CESAR5.3: Isotopic depletion for Research and Testing Reactor decommissioning

    Science.gov (United States)

    Ritter, Guillaume; Eschbach, Romain; Girieud, Richard; Soulard, Maxime

    2018-05-01

    CESAR stands in French for "simplified depletion applied to reprocessing". The current version is number 5.3, as the code started 30 years ago from a long-lasting cooperation with ORANO, co-owner of the code with CEA. This computer code can characterize several types of nuclear fuel assemblies, from the most regular PWR power plants to the most unexpected gas-cooled and graphite-moderated old-timer research facilities. Each type of fuel can also include numerous ranges of compositions like UOX, MOX, LEU or HEU. Such versatility comes from a broad catalog of cross section libraries, each corresponding to a specific reactor and fuel matrix design. CESAR goes beyond fuel characterization and can also provide an evaluation of structural material activation. The cross-section libraries are generated using the most refined assembly- or core-level transport code calculation schemes (CEA APOLLO2 or ERANOS), based on the European JEFF3.1.1 nuclear data base. Each new CESAR self-shielded cross section library benefits from all the most recent CEA recommendations for deterministic physics options. The resulting cross sections are organized as a function of burnup and initial fuel enrichment, which allows this costly process to be condensed into a series of Legendre polynomials. The final outcome is a fast, accurate and compact CESAR cross section library. Each library is fully validated, against a stochastic transport code (CEA TRIPOLI 4) if needed and against a reference depletion code (CEA DARWIN). Using CESAR does not require any of the neutron physics expertise implemented in cross section library generation. It is based on top-quality nuclear data (JEFF3.1.1 for ~400 isotopes) and includes up-to-date Bateman equation solving algorithms. However, defining a CESAR computation case can be very straightforward. Most results are only three steps away from any beginner's ambition: initial composition, in-core depletion and pool decay scenario. On top of a simple utilization architecture
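
    A minimal sketch of the library-condensation idea mentioned in this record: a cross section tabulated against burnup is compressed into a few Legendre coefficients and re-evaluated on demand. The data and polynomial order are invented; the actual CESAR library format is not described in the abstract.

```python
# Sketch of the library condensation idea mentioned above: a one-group cross
# section tabulated against burnup is compressed into a few Legendre
# coefficients and re-evaluated on demand. The data and polynomial order are
# invented for illustration.
import numpy as np
from numpy.polynomial import legendre as L

burnup = np.linspace(0.0, 60.0, 25)                 # GWd/t grid
sigma = 1.10 - 0.004 * burnup + 0.8e-4 * burnup**2  # synthetic cross section (barn)

# Map burnup onto [-1, 1], fit a degree-4 Legendre series, keep the coefficients.
x = 2.0 * burnup / burnup[-1] - 1.0
coeffs = L.legfit(x, sigma, deg=4)

def sigma_of_burnup(bu):
    return L.legval(2.0 * bu / burnup[-1] - 1.0, coeffs)

print(coeffs)                    # compact library entry
print(sigma_of_burnup(42.0))     # fast evaluation at an arbitrary burnup
```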

  6. Isotopic depletion with Monte Carlo

    International Nuclear Information System (INIS)

    Martin, W.R.; Rathkopf, J.A.

    1996-06-01

    This work considers a method to deplete isotopes during a time-dependent Monte Carlo simulation of an evolving system. The method is based on explicitly combining a conventional estimator for the scalar flux with the analytical solutions to the isotopic depletion equations. There are no auxiliary calculations; the method is an integral part of the Monte Carlo calculation. The method eliminates negative densities and reduces the variance in the estimates for the isotope densities, compared to existing methods. Moreover, existing methods are shown to be special cases of the general method described in this work, as they can be derived by combining a high variance estimator for the scalar flux with a low-order approximation to the analytical solution to the depletion equation.
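
    A single-nuclide sketch of the idea above, assuming a one-group treatment: a Monte Carlo flux estimate is combined with the analytic solution of the depletion equation over one time step. The numbers are illustrative and this is not the report's estimator.

```python
# Minimal single-nuclide sketch of the idea above: take a scalar-flux estimate
# from the Monte Carlo step and advance the number density with the analytic
# solution of dN/dt = -(sigma_a * phi + lambda) * N over the time step, instead
# of a finite-difference update. Values are illustrative, not from the report.
import math

def deplete_one_step(N, sigma_a_cm2, phi_est, decay_const, dt):
    """Analytic depletion of a single nuclide over one time step.

    N            initial number density (atoms/cm^3)
    sigma_a_cm2  microscopic absorption cross section (cm^2)
    phi_est      Monte Carlo estimate of the scalar flux (n/cm^2/s)
    decay_const  decay constant (1/s)
    dt           time-step length (s)
    """
    removal_rate = sigma_a_cm2 * phi_est + decay_const
    return N * math.exp(-removal_rate * dt)

N_end = deplete_one_step(N=1.0e21, sigma_a_cm2=600e-24,
                         phi_est=3.0e13, decay_const=0.0, dt=30 * 86400)
print(N_end)
```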

  7. Hsp90 depletion goes wild

    Directory of Open Access Journals (Sweden)

    Siegal Mark L

    2012-02-01

    Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to revealing cryptic genetic variation. See research article http://wwww.biomedcentral.com/1471-2148/12/25

  8. Depletion field focusing in semiconductors

    NARCIS (Netherlands)

    Prins, M.W.J.; Gelder, Van A.P.

    1996-01-01

    We calculate the three-dimensional depletion field profile in a semiconductor, for a planar semiconductor material with a spatially varying potential upon the surface, and for a tip-shaped semiconductor with a constant surface potential. The nonuniform electric field gives rise to focusing or

  9. Depletion interactions in lyotropic nematics

    NARCIS (Netherlands)

    Schoot, van der P.P.A.M.

    2000-01-01

    A theoretical study of depletion interactions between pairs of small, globular colloids dispersed in a lyotropic nematic of hard, rodlike particles is presented. We find that both the strength and range of the interaction crucially depends on the configuration of the spheres relative to the nematic

  10. Depleted uranium: an explosive dossier

    International Nuclear Information System (INIS)

    Barrillot, B.

    2001-01-01

    This book relates the history of depleted uranium, which is contemporaneous with that of the nuclear bomb. Initially used in nuclear weapons and in experiments linked with nuclear weapons development, this material has also been used in civil industry, in particular in aeronautics. However, its properties made it interesting for military applications throughout the 'cold war'. (J.S.)

  11. Global depletion of groundwater resources

    NARCIS (Netherlands)

    Wada, Y.; Beek, L.P.H. van; van Kempen, C.M.; Reckman, J.W.T.M.; Vasak, S.; Bierkens, M.F.P.

    2010-01-01

    In regions with frequent water stress and large aquifer systems groundwater is often used as an additional water source. If groundwater abstraction exceeds the natural groundwater recharge for extensive areas and long times, overexploitation or persistent groundwater depletion occurs. Here we

  12. Impact of mineral resource depletion

    CSIR Research Space (South Africa)

    Brent, AC

    2006-09-01

    In a letter to the editor, the authors comment on BA Steen's article on "Abiotic Resource Depletion: different perceptions of the problem with mineral deposits" published in the special issue of the International Journal of Life Cycle Assessment...

  13. Depleted depletion drives polymer swelling in poor solvent mixtures.

    Science.gov (United States)

    Mukherji, Debashish; Marques, Carlos M; Stuehn, Torsten; Kremer, Kurt

    2017-11-09

    Establishing a link between macromolecular conformation and microscopic interaction is a key to understanding properties of polymer solutions and for designing technologically relevant "smart" polymers. Here, polymer solvation in solvent mixtures strikes as a paradoxical phenomenon. For example, when adding polymers to a solvent, such that all particle interactions are repulsive, polymer chains can collapse due to increased monomer-solvent repulsion. This depletion-induced monomer-monomer attraction is well known from colloidal stability. A typical example is poly(methyl methacrylate) (PMMA) in water or small alcohols. While polymer collapse in a single poor solvent is well understood, the observed polymer swelling in mixtures of two repulsive solvents is surprising. By combining simulations and theoretical concepts known from polymer physics and colloidal science, we unveil the microscopic, generic origin of this collapse-swelling-collapse behavior. We show that this phenomenon naturally emerges at constant pressure when an appropriate balance of entropically driven depletion interactions is achieved.

  14. WWER reactor physics code applications

    International Nuclear Information System (INIS)

    Gado, J.; Kereszturi, A.; Gacs, A.; Telbisz, M.

    1994-01-01

    The coupled steady-state reactor physics and thermohydraulic code system KARATE has been developed and applied for WWER-1000 and WWER-440 operational calculations. The 3-D coupled kinetic code KIKO3D has been developed and validated for WWER-440 accident analysis applications. The coupled kinetic code SMARTA developed by VTT Helsinki has been applied for WWER-440 accident analysis. The paper gives a summary of the experience in code development and application. (authors). 10 refs., 2 tabs., 5 figs

  15. An Applied Study of Implementation of the Advanced Decommissioning Costing Methodology for Intermediate Storage Facility for Spent Fuel in Studsvik, Sweden with special emphasis to the application of the Omega code

    Energy Technology Data Exchange (ETDEWEB)

    Kristofova, Kristina; Vasko, Marek; Daniska, Vladimir; Ondra, Frantisek; Bezak, Peter [DECOM Slovakia, spol. s.r.o., J. Bottu 2, SK-917 01 Trnava (Slovakia); Lindskog, Staffan [Swedish Nuclear Power Inspectorate, Stockholm (Sweden)

    2007-01-15

    The presented study is focused on an analysis of decommissioning costs for the Intermediate Storage Facility for Spent Fuel (FA) facility in Studsvik prepared by SVAFO and a proposal of the advanced decommissioning costing methodology application. Therefore, this applied study concentrates particularly in the following areas: 1. Analysis of FA facility cost estimates prepared by SVAFO including description of FA facility in Studsvik, summarised input data, applied cost estimates methodology and summarised results from SVAFO study. 2. Discussion of results of the SVAFO analysis, proposals for enhanced cost estimating methodology and upgraded structure of inputs/outputs for decommissioning study for FA facility. 3. Review of costing methodologies with the special emphasis on the advanced costing methodology and cost calculation code OMEGA. 4. Discussion on implementation of the advanced costing methodology for FA facility in Studsvik together with: - identification of areas of implementation; - analyses of local decommissioning infrastructure; - adaptation of the data for the calculation database; - inventory database; and - implementation of the style of work with the computer code OMEGA.

  16. An Applied Study of Implementation of the Advanced Decommissioning Costing Methodology for Intermediate Storage Facility for Spent Fuel in Studsvik, Sweden with special emphasis to the application of the Omega code

    International Nuclear Information System (INIS)

    Kristofova, Kristina; Vasko, Marek; Daniska, Vladimir; Ondra, Frantisek; Bezak, Peter; Lindskog, Staffan

    2007-01-01

    The presented study is focused on an analysis of decommissioning costs for the Intermediate Storage Facility for Spent Fuel (FA) facility in Studsvik prepared by SVAFO and a proposal of the advanced decommissioning costing methodology application. Therefore, this applied study concentrates particularly in the following areas: 1. Analysis of FA facility cost estimates prepared by SVAFO including description of FA facility in Studsvik, summarised input data, applied cost estimates methodology and summarised results from SVAFO study. 2. Discussion of results of the SVAFO analysis, proposals for enhanced cost estimating methodology and upgraded structure of inputs/outputs for decommissioning study for FA facility. 3. Review of costing methodologies with the special emphasis on the advanced costing methodology and cost calculation code OMEGA. 4. Discussion on implementation of the advanced costing methodology for FA facility in Studsvik together with: - identification of areas of implementation; - analyses of local decommissioning infrastructure; - adaptation of the data for the calculation database; - inventory database; and - implementation of the style of work with the computer code OMEGA

  17. Physics of fully depleted CCDs

    International Nuclear Information System (INIS)

    Holland, S E; Bebek, C J; Kolbe, W F; Lee, J S

    2014-01-01

    In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photo-generated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully depleted substrates arising from resistivity variations inherent to the growth of the high-resistivity silicon used to fabricate the CCDs

  18. Uranium Dispersion and Dosimetry (UDAD) Code

    International Nuclear Information System (INIS)

    Momeni, M.H.; Yuan, Y.; Zielen, A.J.

    1979-05-01

    The Uranium Dispersion and Dosimetry (UDAD) Code provides estimates of potential radiation exposure to individuals and to the general population in the vicinity of a uranium processing facility. The UDAD Code incorporates the radiation dose from the airborne release of radioactive materials, and includes dosimetry of inhalation, ingestion, and external exposures. The removal of radioactive particles from a contaminated area by wind action is estimated, atmospheric concentrations of radioactivity from specific sources are calculated, and source depletion as a result of deposition, fallout, and ingrowth of radon daughters is included in a sector-averaged Gaussian plume dispersion model. The average air concentration at any given receptor location is assumed to be constant during each annual release period, but to increase from year to year because of resuspension. Surface contamination and deposition velocity are estimated. Calculation of the inhalation dose and dose rate to an individual is based on the ICRP Task Group Lung Model. Estimates of the dose to the bronchial epithelium of the lung from inhalation of radon and its short-lived daughters are calculated based on a dose conversion factor from the BEIR report. External radiation exposure includes radiation from airborne radionuclides and exposure to radiation from contaminated ground. Terrestrial food pathways include vegetation, meat, milk, poultry, and eggs. Internal dosimetry is based on ICRP recommendations. In addition, individual dose commitments, population dose commitments, and environmental dose commitments are computed. This code may also be applied to the dispersion of any other pollutant
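
    As a rough illustration of the sector-averaged Gaussian plume model referred to above, the sketch below evaluates the standard ground-level, sector-averaged concentration with ground reflection. The 16-sector geometry, dispersion parameter and release values are illustrative assumptions, not quantities taken from the UDAD code.

```python
import numpy as np

def sector_averaged_chi(Q, u, x, sigma_z, H=0.0, n_sectors=16):
    """Ground-level, sector-averaged Gaussian plume concentration (Bq/m^3),
    ground reflection included.
    Q: source strength (Bq/s); u: wind speed (m/s); x: downwind distance (m);
    sigma_z: vertical dispersion parameter at x (m); H: effective release height (m).
    """
    sector_width = 2.0 * np.pi / n_sectors        # 16 sectors -> 22.5 deg each
    return (np.sqrt(2.0 / np.pi) * Q
            / (u * sigma_z * x * sector_width)
            * np.exp(-H**2 / (2.0 * sigma_z**2)))

# Example: unit release rate, receptor 500 m downwind (all values assumed)
print(sector_averaged_chi(Q=1.0, u=3.0, x=500.0, sigma_z=30.0))
```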

  19. Issues in Stratospheric Ozone Depletion.

    Science.gov (United States)

    Lloyd, Steven Andrew

    Following the announcement of the discovery of the Antarctic ozone hole in 1985 there have arisen a multitude of questions pertaining to the nature and consequences of polar ozone depletion. This thesis addresses several of these specific questions, using both computer models of chemical kinetics and the Earth's radiation field as well as laboratory kinetic experiments. A coupled chemical kinetic-radiative numerical model was developed to assist in the analysis of in situ field measurements of several radical and neutral species in the polar and mid-latitude lower stratosphere. Modeling was used in the analysis of enhanced polar ClO, mid-latitude diurnal variation of ClO, and simultaneous measurements of OH, HO2, H2O and O3. Most importantly, such modeling was instrumental in establishing the link between the observed ClO and BrO concentrations in the Antarctic polar vortex and the observed rate of ozone depletion. The principal medical concern of stratospheric ozone depletion is that ozone loss will lead to the enhancement of ground-level UV-B radiation. Global ozone climatology (40°S to 50°N latitude) was incorporated into a radiation field model to calculate the biologically accumulated dosage (BAD) of UV-B radiation, integrated over days, months, and years. The slope of the annual BAD as a function of latitude was found to correspond to epidemiological data for non-melanoma skin cancers for 30°-50°N. Various ozone loss scenarios were investigated. It was found that a small ozone loss in the tropics can provide as much additional biologically effective UV-B as a much larger ozone loss at higher latitudes. Also, for ozone depletions of > 5%, the BAD of UV-B increases exponentially with decreasing ozone levels. An important key player in determining whether polar ozone depletion can propagate into the populated mid-latitudes is chlorine nitrate, ClONO2. As yet this molecule is only indirectly accounted for in computer models and field
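
    The claim that biologically effective UV-B grows rapidly as ozone decreases is often summarized with a power-law rule of thumb; the sketch below uses it with an assumed radiation amplification factor (RAF), which is not a value taken from this thesis.

```python
def uv_change(ozone_change_fraction, raf=1.2):
    """Approximate relative change in biologically effective UV-B for a fractional
    ozone-column change, using the rule of thumb UV_eff ~ (O3)**(-RAF).
    RAF = 1.2 is an assumed, roughly erythemal value, not a result of the thesis."""
    return (1.0 + ozone_change_fraction) ** (-raf) - 1.0

for loss in (-0.02, -0.05, -0.10, -0.20):
    print(f"ozone change {loss:+.0%} -> UV-B change {uv_change(loss):+.1%}")
```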

  20. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case where the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
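
    For readers unfamiliar with unique decipherability, the property that the coding-partition construction generalizes, the minimal Sardinas-Patterson check below decides whether a finite code is UD. It illustrates only the UD property itself, not the canonical-partition algorithm of the paper.

```python
def quotient(A, B):
    """Left quotient A^{-1}B = {w : a + w in B for some a in A}; may contain ''."""
    return {b[len(a):] for a in A for b in B if b.startswith(a)}

def is_uniquely_decodable(code):
    """Sardinas-Patterson test for unique decipherability of a finite code."""
    C = set(code)
    S = quotient(C, C) - {""}              # dangling suffixes of codeword pairs
    seen = set()
    while S and frozenset(S) not in seen:
        if "" in S or S & C:               # a factorization ambiguity exists
            return False
        seen.add(frozenset(S))
        S = quotient(C, S) | quotient(S, C)
    return True

print(is_uniquely_decodable({"a", "ab", "b"}))    # False: 'ab' = 'a' + 'b'
print(is_uniquely_decodable({"0", "10", "110"}))  # True: a prefix code
```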

  1. MCNP evaluation of top node control rod depletion below the core in KKL

    International Nuclear Information System (INIS)

    Beran, Tâm; Seltborg, Per; Lindahl, Sten-Örjan; Bieli, Roger; Ledergerber, Guido

    2014-01-01

    In previous studies, a significant discrepancy has been identified in the BWR control rod top node depletion between the two core simulator nodal codes POLCA7 and PRESTO-2, which indicates that there is a large general uncertainty in nodal codes in calculating the top node depletion of fully withdrawn control rods. In this study, the stochastic Monte Carlo code MCNP has been used to calculate the top node control rod depletion for benchmarking the nodal codes. By using the TIP signal obtained from an extended TIP campaign below the core performed in the KKL reactor, the MCNP model has been verified by comparing the axial profiles of the TIP data and the gamma flux calculated by MCNP. The MCNP results have also been compared with calculations from POLCA7, which was found to yield slightly higher depletion rates than MCNP. It was also found that the 10B depletion in the top node is very sensitive to the exact axial location of the control rod top when it is fully withdrawn. By using the MCNP results, the neutron flux model below the core in the nodal codes can be improved by implementing an exponential function for the neutron flux. (author)
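
    The exponential flux model mentioned at the end of the abstract can be pictured by fitting phi(z) = phi0 * exp(-z / lambda) to an axial profile below the core; the data points below are invented for illustration and are not KKL TIP measurements.

```python
import numpy as np

# Synthetic axial profile below the core bottom (z in cm, arbitrary units);
# the values are illustrative only, not KKL TIP data.
z   = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])
phi = np.array([1.00, 0.62, 0.40, 0.24, 0.15, 0.095])

# Least-squares fit of ln(phi) = ln(phi0) - z/lambda
slope, intercept = np.polyfit(z, np.log(phi), 1)
lam, phi0 = -1.0 / slope, np.exp(intercept)
print(f"relaxation length = {lam:.1f} cm, phi0 = {phi0:.2f}")

phi_model = phi0 * np.exp(-z / lam)     # candidate flux shape below the core
```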

  2. Exposure to nature counteracts aggression after depletion.

    Science.gov (United States)

    Wang, Yan; She, Yihan; Colarelli, Stephen M; Fang, Yuan; Meng, Hui; Chen, Qiuju; Zhang, Xin; Zhu, Hongwei

    2018-01-01

    Acts of self-control are more likely to fail after previous exertion of self-control, known as the ego depletion effect. Research has shown that depleted participants behave more aggressively than non-depleted participants, especially after being provoked. Although exposure to nature (e.g., a walk in the park) has been predicted to replenish resources common to executive functioning and self-control, the extent to which exposure to nature may counteract the depletion effect on aggression has yet to be determined. The present study investigated the effects of exposure to nature on aggression following depletion. Aggression was measured by the intensity of noise blasts participants delivered to an ostensible opponent in a competition reaction-time task. As predicted, an interaction occurred between depletion and environmental manipulations for provoked aggression. Specifically, depleted participants behaved more aggressively in response to provocation than non-depleted participants in the urban condition. However, provoked aggression did not differ between depleted and non-depleted participants in the natural condition. Moreover, within the depletion condition, participants in the natural condition had lower levels of provoked aggression than participants in the urban condition. This study suggests that a brief period of nature exposure may restore self-control and help depleted people regain control over aggressive urges. © 2017 Wiley Periodicals, Inc.

  3. The octopus burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L.; Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de

    1996-09-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  4. The OCTOPUS burnup and criticality code system

    Energy Technology Data Exchange (ETDEWEB)

    Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kuijper, J.C. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Inst.

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.).

  5. The octopus burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de.

    1996-01-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  6. The OCTOPUS burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.)

  7. Uranium, depleted uranium, biological effects

    International Nuclear Information System (INIS)

    2001-01-01

    Physicists, chemists and biologists at the CEA are developing scientific programs on the properties and uses of ionizing radiation. Since the CEA was created in 1945, a great deal of research has been carried out on the properties of natural, enriched and depleted uranium in cooperation with university laboratories and CNRS. There is a great deal of available data about uranium; thousands of analyses have been published in international reviews over more than 40 years. This presentation on uranium is a very brief summary of all these studies. (author)

  8. Efficient characterization of fuel depletion in boiling water reactor

    International Nuclear Information System (INIS)

    Kim, S.H.

    1980-01-01

    An efficient fuel depletion method for boiling water reactor (BWR) fuel assemblies has been developed for fuel cycle analysis. A computer program, HISTORY, based on this method was designed to carry out accurate and rapid fuel burnup calculations for the fuel assembly. It has been usefully employed to study the depletion characteristics of the fuel assemblies for the preparation of nodal code input data and for fuel management studies. The adequacy and effectiveness of the method used in HISTORY were demonstrated by comparing HISTORY results with more detailed CASMO results. The computing cost of HISTORY has typically been less than one dollar for fuel assembly-level depletion calculations over the full life of the assembly, in contrast to more than $1000 for CASMO. By combining CASMO and HISTORY, a large number of expensive CASMO calculations can be replaced by inexpensive HISTORY calculations. For depletion calculations via CASMO/HISTORY, CASMO calculations are required only for the reference conditions and, just at the beginning of life, for other cases such as changes in void fraction, control rod condition and temperature. The simple and inexpensive HISTORY is sufficiently accurate and fast to be used in conjunction with CASMO for fuel cycle analysis and some BWR design calculations
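
    One way to picture the CASMO/HISTORY split described above (a full reference burnup history plus beginning-of-life branch cases) is a reference-plus-offset estimate, as sketched below. This is only a plausible reading for illustration, with made-up numbers; it is not the published HISTORY algorithm.

```python
import numpy as np

# Reference-plus-offset sketch: a full k-infinity burnup history is available for
# the reference condition, and an off-reference condition (e.g. a different void
# fraction) is represented by its k-infinity offset at zero burnup.  All numbers
# are invented; this is not the actual HISTORY method.
burnup          = np.array([0.0, 5.0, 10.0, 20.0, 30.0])     # GWd/tU
kinf_ref        = np.array([1.18, 1.12, 1.07, 0.99, 0.93])   # reference case
kinf_branch_bol = 1.15                                        # branch case at 0 burnup

offset = kinf_branch_bol - kinf_ref[0]
kinf_branch_estimate = kinf_ref + offset
print(np.round(kinf_branch_estimate, 3))
```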

  9. DOUBLE-SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    International Nuclear Information System (INIS)

    OGDEN DM; KIRCH NW

    2007-01-01

    This document develops a supernatant hydroxide ion depletion model based on mechanistic principles; the mechanistic model for carbon dioxide absorption is derived in the report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed
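
    A minimal sketch of the kind of mechanistic balance involved is shown below: CO2 absorbed at the supernatant surface consumes hydroxide via CO2 + 2OH- -> CO3^2- + H2O. Every numerical value is an assumption for illustration, not Hanford tank data, and the model in the report is more detailed.

```python
import numpy as np

# Minimal hydroxide-depletion sketch: absorption taken as first order in the
# headspace CO2 partial pressure, two OH- consumed per CO2 absorbed.
k_abs  = 1.0e-3     # CO2 absorption coefficient, mol m^-2 s^-1 atm^-1 (assumed)
area   = 400.0      # supernatant surface area, m^2 (assumed)
volume = 4.0e6      # supernatant volume, L (assumed)
p_co2  = 4.0e-4     # headspace CO2 partial pressure, atm (assumed)
oh_0   = 0.5        # initial hydroxide concentration, mol/L (assumed)

rate = 2.0 * k_abs * area * p_co2 / volume        # mol L^-1 s^-1 of OH- consumed
t = np.arange(0.0, 22.0, 2.0) * 3.15576e7         # 0-20 years, in seconds
oh = np.maximum(oh_0 - rate * t, 0.0)
for yr, c in zip(t / 3.15576e7, oh):
    print(f"{yr:4.0f} y  OH- = {c:.3f} mol/L")
```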

  10. "When the going gets tough, who keeps going?" Depletion sensitivity moderates the ego-depletion effect

    NARCIS (Netherlands)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T D

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In

  11. "When the going gets tough, who keeps going?" : Depletion sensitivity moderates the ego-depletion effect

    NARCIS (Netherlands)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In

  12. When the Going Gets Tough, Who Keeps Going? Depletion Sensitivity Moderates the Ego-Depletion Effect

    Directory of Open Access Journals (Sweden)

    Stefanie J. Salmon

    2014-06-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion.

  13. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  14. VERA Pin and Fuel Assembly Depletion Benchmark Calculations by McCARD and DeCART

    Energy Technology Data Exchange (ETDEWEB)

    Park, Ho Jin; Cho, Jin Young [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    Monte Carlo (MC) codes have been developed and used to simulate neutron transport since the MC method was devised in the Manhattan project. Solving the neutron transport problem with the MC method is simple and straightforward to understand. Because MC calculations involve few essential approximations for the six-dimensional phase-space coordinates of a neutron (location, energy, and direction), highly accurate solutions can be obtained. In this work, VERA pin and fuel assembly (FA) depletion benchmark calculations are performed to examine the depletion capability of the newly generated DeCART multi-group cross section library. To obtain reference solutions, MC depletion calculations are conducted using McCARD. Moreover, to scrutinize the effect of stochastic uncertainty propagation, uncertainty propagation analyses are performed using a sensitivity and uncertainty (S/U) analysis method and a stochastic sampling (S.S.) method. It is still expensive and challenging to perform a depletion analysis with an MC code; nevertheless, many studies on MC depletion analysis have been conducted to utilize the benefits of the MC method. In this study, McCARD MC and DeCART MOC transport calculations are performed for the VERA pin and FA depletion benchmarks. The DeCART depletion calculations are conducted to examine the depletion capability of the newly generated multi-group cross section library, and they give excellent agreement with the McCARD reference results. From the McCARD results, it is observed that MC depletion results depend on how the burnup interval is split. First, to quantify only the effect of the stochastic uncertainty propagation at 40 DTS, the uncertainty propagation analyses are performed using the S/U and S.S. methods.
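
    The depletion problem both codes solve is a linear system dN/dt = A N over each burnup step; CRAM is one way of evaluating the resulting matrix exponential. The toy three-nuclide step below uses scipy's dense matrix exponential instead, with invented one-group data, purely to show the structure of such a step.

```python
import numpy as np
from scipy.linalg import expm

# Toy depletion step dN/dt = A N for a 3-nuclide chain:
# U-235 -> (capture) U-236, and a lumped fission-product nuclide.
# One-group cross sections and flux are illustrative, not VERA data.
phi      = 3.0e14                      # n/cm^2/s
barn     = 1.0e-24
sig_a_u5 = 60.0 * barn                 # total absorption, U-235 (assumed)
sig_c_u5 = 10.0 * barn                 # capture to U-236 (assumed)
lam_fp   = 1.0e-9                      # lumped FP removal constant, 1/s (assumed)

A = np.array([
    [-sig_a_u5 * phi,              0.0,     0.0],   # U-235
    [ sig_c_u5 * phi,              0.0,     0.0],   # U-236
    [(sig_a_u5 - sig_c_u5) * phi,  0.0, -lam_fp],   # lumped fission products
])

N0 = np.array([1.0e21, 0.0, 0.0])                   # atoms/cm^3
dt = 50.0 * 86400.0                                 # 50-day burnup step
N = expm(A * dt) @ N0                               # matrix-exponential solution
print(N)
```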

  15. Bond rupture between colloidal particles with a depletion interaction

    Energy Technology Data Exchange (ETDEWEB)

    Whitaker, Kathryn A.; Furst, Eric M., E-mail: furst@udel.edu [Department of Chemical and Biomolecular Engineering and Center for Molecular and Engineering Thermodynamics, University of Delaware, Newark, Delaware 19716 (United States)

    2016-05-15

    The force required to break the bonds of a depletion gel is measured by dynamically loading pairs of colloidal particles suspended in a solution of a nonadsorbing polymer. Sterically stabilized poly(methyl methacrylate) colloids 2.7 μm in diameter are brought into contact in a solvent mixture of cyclohexane-cyclohexyl bromide and polystyrene polymer depletant. The particle pairs are subjected to a tensile load at a constant loading rate over many approach-retraction cycles. The stochastic nature of the thermal rupture events results in a distribution of bond rupture forces with an average magnitude and variance that increase with increasing depletant concentration. The measured force distribution is described by the flux of particle pairs sampling the energy barrier of the bond interaction potential based on the Asakura–Oosawa depletion model. A transition state model demonstrates the significance of lubrication hydrodynamic interactions and the effect of the applied loading rate on the rupture force of bonds in a depletion gel.
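
    For reference, the Asakura-Oosawa pair potential that underlies the bond interaction has the standard overlap-volume form sketched below; the particle size, depletant size and depletant concentration in the example are assumptions, not the experimental parameters of this work.

```python
import numpy as np

def ao_potential(r, a, delta, n_dep, kT=1.0):
    """Asakura-Oosawa depletion pair potential (units of kT) between two hard
    spheres of radius a at centre-to-centre distance r, for ideal depletants of
    radius delta and number density n_dep."""
    r = np.asarray(r, dtype=float)
    R = a + delta                                   # radius of the excluded shell
    u = np.zeros_like(r)
    overlap = (r >= 2*a) & (r < 2*R)
    v_ov = (4*np.pi/3)*R**3*(1 - 3*r[overlap]/(4*R) + r[overlap]**3/(16*R**3))
    u[overlap] = -n_dep * kT * v_ov                 # -Pi_depletant * V_overlap
    u[r < 2*a] = np.inf                             # hard-core exclusion
    return u

# Example: 2.7 um spheres (a = 1.35 um), 50 nm depletant, n_dep assumed
r = np.linspace(2.70, 2.82, 7) * 1e-6
print(ao_potential(r, a=1.35e-6, delta=50e-9, n_dep=5e19))
```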

  16. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose, analytical mechanics turns out to be very useful, where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The methods of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  17. OSCAR-4 Code System Application to the SAFARI-1 Reactor

    International Nuclear Information System (INIS)

    Stander, Gerhardt; Prinsloo, Rian H.; Tomasevic, Djordje I.; Mueller, Erwin

    2008-01-01

    The OSCAR reactor calculation code system consists of a two-dimensional lattice code, the three-dimensional nodal core simulator code MGRAC and related service codes. The major difference between the new version of the OSCAR system, OSCAR-4, and its predecessor, OSCAR-3, is the new version of MGRAC which contains many new features and model enhancements. In this work some of the major improvements in the nodal diffusion solution method, history tracking, nuclide transmutation and cross section models are described. As part of the validation process of the OSCAR-4 code system (specifically the new MGRAC version), some of the new models are tested by comparing computational results to SAFARI-1 reactor plant data for a number of operational cycles and for varying applications. A specific application of the new features allows correct modeling of, amongst others, the movement of fuel-follower type control rods and dynamic in-core irradiation schedules. It is found that the effect of the improved control rod model, applied over multiple cycles of the SAFARI-1 reactor operation history, has a significant effect on in-cycle reactivity prediction and fuel depletion. (authors)

  18. Applied algebra codes, ciphers and discrete algorithms

    CERN Document Server

    Hardy, Darel W; Walker, Carol L

    2009-01-01

    This book attempts to show the power of algebra in a relatively simple setting. (Mathematical Reviews, 2010) … The book supports learning by doing. In each section we can find many examples which clarify the mathematics introduced in the section, and each section is followed by a series of exercises, of which approximately half are solved at the end of the book. Additionally, the book comes with a CD-ROM containing an interactive version of the book powered by the computer algebra system Scientific Notebook. … the mathematics in the book are developed as needed and the focus of the book lies clearly o

  19. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low …
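
    The syndrome (binning) idea behind distributed source coding can be shown with a toy (7,4) Hamming code standing in for the BCH codes of the paper: the encoder sends only the syndrome of X, and the decoder recovers X from that syndrome plus the correlated side information Y, provided X and Y differ in at most one bit. The feedback-driven rate adaptation of the paper is not modelled here.

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])   # columns are the binary numbers 1..7

def syndrome(word):
    return H @ word % 2

def decode(s_x, y):
    """Recover x from its syndrome s_x and side information y (<= 1 bit apart)."""
    e_syn = (s_x + syndrome(y)) % 2      # syndrome of the difference pattern
    x_hat = y.copy()
    if e_syn.any():
        pos = int("".join(map(str, e_syn)), 2) - 1   # matching column of H
        x_hat[pos] ^= 1
    return x_hat

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1                  # side information: one bit flipped
print(np.array_equal(decode(syndrome(x), y), x))   # True
```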

  20. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in a burst-erasure environment, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and to reduce computational complexity by removing the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.
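
    The basic XOR-parity idea behind tornado-style erasure protection is illustrated by the toy single-parity block code below, which repairs one lost packet per block; it is not the tTN or cTN construction.

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode(block):                       # block: list of equal-length packets
    """Append one XOR parity packet to a block of data packets."""
    return block + [reduce(xor, block)]

def repair(received):                    # received: list with exactly one None
    """Recover the single erased packet by XOR-ing all surviving packets."""
    lost = received.index(None)
    known = [p for p in received if p is not None]
    return reduce(xor, known), lost

data = [b"ABCD", b"EFGH", b"IJKL"]
coded = encode(data)
coded[1] = None                          # erase one packet in transit
recovered, where = repair(coded)
print(where, recovered)                  # 1 b'EFGH'
```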

  1. Monoamine depletion by reuptake inhibitors

    Directory of Open Access Journals (Sweden)

    Hinz M

    2011-10-01

    Marty Hinz,1 Alvin Stein,2 Thomas Uncini3 (1Clinical Research, NeuroResearch Clinics Inc, Cape Coral, FL; 2Stein Orthopedic Associates, Plantation, FL; 3DBS Labs Inc, Duluth, MN, USA). Background: Disagreement exists regarding the etiology of cessation of the observed clinical results with administration of reuptake inhibitors. Traditionally, when drug effects wane, it is known as tachyphylaxis. With reuptake inhibitors, the placebo effect is significantly greater than the drug effect in the treatment of depression and attention deficit hyperactivity disorder, leading some to assert that waning of drug effects is placebo relapse, not tachyphylaxis. Methods: Two groups were retrospectively evaluated. Group 1 was composed of subjects with depression and Group 2 was composed of bariatric subjects treated with reuptake inhibitors for appetite suppression. Results: In Group 1, 200 subjects with depression were treated with citalopram 20 mg per day. A total of 46.5% (n = 93) achieved relief of symptoms (Hamilton-D rating score ≤ 7), of whom 37 (39.8%) experienced recurrence of depression symptoms, at which point an amino acid precursor formula was started. Within 1–5 days, 97.3% (n = 36) experienced relief of depression symptoms. In Group 2, 220 subjects were treated with phentermine 30 mg in the morning and citalopram 20 mg at 4 pm. In this group, 90.0% (n = 198) achieved adequate appetite suppression. The appetite suppression ceased in all 198 subjects within 4–48 days. Administration of an amino acid precursor formula restored appetite suppression in 98.5% (n = 195) of subjects within 1–5 days. Conclusion: Reuptake inhibitors do not increase the total number of monoamine molecules in the central nervous system. Their mechanism of action facilitates redistribution of monoamines from one place to another. In the process, conditions are induced that facilitate depletion of monoamines. The "reuptake inhibitor monoamine depletion theory" of this paper

  2. Depleted uranium disposal options evaluation

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.; Otis, M.D.

    1994-05-01

    The Department of Energy (DOE), Office of Environmental Restoration and Waste Management, has chartered a study to evaluate alternative management strategies for depleted uranium (DU) currently stored throughout the DOE complex. Historically, DU has been maintained as a strategic resource because of uses for DU metal and potential uses for further enrichment or for uranium oxide as breeder reactor blanket fuel. This study has focused on evaluating the disposal options for DU if it were considered a waste. This report is in no way declaring these DU reserves a "waste," but is intended to provide baseline data for comparison with other management options for use of DU. Topics considered in this report include: retrievable disposal; permanent disposal; health hazards; radiation toxicity and chemical toxicity

  3. Uranium under its depleted state

    International Nuclear Information System (INIS)

    2001-01-01

    This day, organised by the SFRP with the help of the Army Health Service, the Army radiation protection service and the IPSN, was an information day intended to inform the public about the real toxicity of uranium and its fate in man and the environment, about the risks associated with the use of depleted uranium and the possible consequences of its dispersion after a conflict, to explain how the protection of (civil or military) workers is managed, and to describe the actual situation of French military personnel in these conflicts. Since recent news reports have brought cases of leukemia to the fore, it is also necessary to provide some information on the origin of this disease. (N.C.)

  4. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
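
    Stabilized linear inversion of the kind the abstract describes is commonly implemented as Tikhonov-regularized least squares; the generic sketch below uses a random forward operator for illustration and is not the INVERT source code.

```python
import numpy as np

# Stabilized linear inversion: m = argmin ||G m - d||^2 + alpha^2 ||m||^2,
# i.e. m = (G^T G + alpha^2 I)^(-1) G^T d.
rng = np.random.default_rng(0)
n_data, n_model = 40, 25
G = rng.normal(size=(n_data, n_model))             # forward operator (illustrative)
m_true = np.zeros(n_model); m_true[10:15] = 1.0    # "density anomaly topography"
d = G @ m_true + 0.05 * rng.normal(size=n_data)    # noisy Bouguer-like data

alpha = 0.5                                        # stabilization parameter
m_est = np.linalg.solve(G.T @ G + alpha**2 * np.eye(n_model), G.T @ d)
print(np.round(m_est[8:17], 2))
```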

  5. Are relative depletions altered inside diffuse clouds?

    International Nuclear Information System (INIS)

    Joseph, C.L.

    1988-01-01

    The data of Jenkins, Savage, and Spitzer (1986) were used to analyze interstellar abundances and depletions of Fe, P, Mg, and Mn toward 37 stars, spanning nearly 1.0 (dex) in mean line-of-sight depletion. It was found that the depletions of these elements are linearly correlated and do not show evidence of differences in the rates of depletion or sputtering from one element to another. For a given level of overall depletion, the sightline-to-sightline rms variance in the depletion for each of these elements was less than 0.16 (dex), which is significantly smaller than is the element-to-element variance. The results suggest that, for most diffuse lines of sight, the relative abundances of these elements are set early in the lifetime of the grains and are not altered significantly thereafter. 53 references

  6. Comparison of MCNPX-C90 and TRIPOLI-4-D for fuel depletion calculations of a Gas-cooled Fast Reactor

    International Nuclear Information System (INIS)

    Reyes-Ramirez, Ricardo; Martin-del-Campo, Cecilia; Francois, Juan-Luis; Brun, Emeric; Dumonteil, Eric; Malvagi, Fausto

    2010-01-01

    The Gas-cooled Fast Reactor is one of the reactor concepts selected by the Generation IV International Forum for the next generation of innovative nuclear energy systems. Several fuel design concepts are being investigated. The burnup of mixed uranium-plutonium fuel, cooled with gas in a fast neutron energy spectrum, must be simulated. Various codes are being developed and/or adapted to improve the quality of the results and to reduce the computing time required for the simulations. The main objective of this work is to compare the fuel depletion results obtained with the MCNPX-CINDER90 code and the new TRIPOLI-4-Depletion code (developed by the Commissariat a l'Energie Atomique) for a fuel design concept of the Gas-cooled Fast Reactor. Calculations were made for an equivalent homogeneous model of fuel rods in a hexagonal mesh assembly. Total reflection conditions were applied on the six lateral faces and the two axial faces of the assembly. The materials used in the fuel assembly are: carbide of uranium and plutonium as fuel, silicon carbide as cladding, and helium gas as coolant. JEFF libraries of effective cross sections were used in both codes. Two methods of burnup step calculation were performed with TRIPOLI-4-D, the Euler and the CSADA methods, and their results were compared with the MCNPX-CINDER90 CSADA method. A period of 300 days of irradiation time was considered, which was divided into 12 steps. Results for the infinite multiplication factor as a function of irradiation time, and the evolution of the isotope concentrations for a selected group of nuclides, were compared. The main conclusion is that very similar results were obtained for the three types of depletion calculations which were compared: (1) MCNPX-C90 CSADA; (2) TRIPOLI-4-D CSADA; and (3) TRIPOLI-4-D EULER. The best calculation time was obtained with the TRIPOLI-4-D EULER method, which needed approximately half the time of the other two. In summary, it is sufficiently good to use
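
    The difference between the burnup-step schemes compared above can be pictured with a toy nonlinear depletion problem: a beginning-of-step (Euler-like) update freezes the reaction rate over the step, while a predictor-corrector averages the beginning- and end-of-step rates. The sketch below is generic and is not the CSADA algorithm of either code.

```python
# Toy comparison of burnup time-stepping on dn/dt = -r(n)*n, where the per-atom
# removal rate r(n) grows as the absorber depletes (a crude stand-in for
# spectral feedback).  All numbers are illustrative.
def r(n):
    return 0.01 * (2.0 - n)          # 1/day, illustrative

def euler_step(n, dt):               # rate frozen at the beginning of the step
    return n - r(n) * n * dt

def pc_step(n, dt):                  # predictor-corrector: average the rate
    n_pred = n - r(n) * n * dt
    return n - 0.5 * (r(n) + r(n_pred)) * n * dt

n_e = n_pc = 1.0
for _ in range(12):                  # 12 steps of 25 days = 300 days
    n_e, n_pc = euler_step(n_e, 25.0), pc_step(n_pc, 25.0)
print(f"Euler: {n_e:.4f}   predictor-corrector: {n_pc:.4f}")
```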

  7. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  8. Is gas in the Orion nebula depleted

    International Nuclear Information System (INIS)

    Aiello, S.; Guidi, I.

    1978-01-01

    Depletion of heavy elements has been recognized to be important in the understanding of the chemical composition of the interstellar medium. This problem is also relevant to the study of H II regions. In this paper the gaseous depletion in the physical conditions of the Orion nebula is investigated. The authors reach the conclusion that very probably no depletion of heavy elements, due to sticking on dust grains, took place during the lifetime of the Orion nebula. (Auth.)

  9. Method for depleting BWRs using optimal control rod patterns

    International Nuclear Information System (INIS)

    Taner, M.S.; Levine, S.H.; Hsiao, M.Y.

    1991-01-01

    Control rod (CR) programming is an essential core management activity for boiling water reactors (BWRs). After establishing a core reload design for a BWR, CR programming is performed to develop a sequence of exposure-dependent CR patterns that assure the safe and effective depletion of the core through a reactor cycle. A time-variant target power distribution approach has been assumed in this study. The authors have developed OCTOPUS to implement a new two-step method for designing semioptimal CR programs for BWRs. The optimization procedure of OCTOPUS is based on the method of approximation programming and uses the SIMULATE-E code for nucleonics calculations

  10. Tryptophan depletion affects compulsive behaviour in rats

    DEFF Research Database (Denmark)

    Merchán, A; Navarro, S V; Klein, A B

    2017-01-01

    investigated whether 5-HT manipulation, through a tryptophan (TRP) depletion by diet in Wistar and Lister Hooded rats, modulates compulsive drinking in schedule-induced polydipsia (SIP) and locomotor activity in the open-field test. The levels of dopamine, noradrenaline, serotonin and its metabolite were......-depleted HD Wistar rats, while the LD Wistar and the Lister Hooded rats did not exhibit differences in SIP. In contrast, the TRP-depleted Lister Hooded rats increased locomotor activity compared to the non-depleted rats, while no differences were found in the Wistar rats. Serotonin 2A receptor binding...

  11. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
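
    As a small taste of the noiseless-coding material such a course covers, the snippet below computes the entropy of a four-symbol source and the average length of one prefix code for it; the example source and code are arbitrary.

```python
from math import log2

# Entropy of a small source versus the average length of a prefix (hence UD) code,
# the two quantities related by Shannon's noiseless coding theorem.
probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code  = {"a": "0", "b": "10", "c": "110", "d": "111"}

H = -sum(p * log2(p) for p in probs.values())
L = sum(probs[s] * len(code[s]) for s in probs)
print(f"H = {H:.3f} bits/symbol, average code length L = {L:.3f} bits/symbol")
# Here L = H = 1.750, so this code meets the bound L >= H with equality.
```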

  12. Modeling charge collection efficiency degradation in partially depleted GaAs photodiodes using the 1- and 2-carrier Hecht equations

    International Nuclear Information System (INIS)

    Auden, E.C.; Vizkelethy, G.; Serkland, D.K.; Bossert, D.J.; Doyle, B.L.

    2017-01-01

    The Hecht equation can be used to model the nonlinear degradation of charge collection efficiency (CCE) in response to radiation-induced displacement damage in both fully and partially depleted GaAs photodiodes. CCE degradation is measured for laser-generated photocurrent as a function of fluence and bias in Al0.3Ga0.7As/GaAs/Al0.25Ga0.75As p-i-n photodiodes which have been irradiated with 12 MeV C and 7.5 MeV Si ions. CCE is observed to degrade more rapidly with fluence in partially depleted photodiodes than in fully depleted photodiodes. When the intrinsic GaAs layer is fully depleted, the 2-carrier Hecht equation describes CCE degradation as photogenerated electrons and holes recombine at defect sites created by radiation damage in the depletion region. If the GaAs layer is partially depleted, CCE degradation is more appropriately modeled as the sum of the 2-carrier Hecht equation applied to electrons and holes generated within the depletion region and the 1-carrier Hecht equation applied to minority carriers that diffuse from the field-free (non-depleted) region into the depletion region. Enhanced CCE degradation is attributed to holes that recombine within the field-free region of the partially depleted intrinsic GaAs layer before they can diffuse into the depletion region.
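
    For reference, the 1- and 2-carrier Hecht expressions for a planar device with a uniform field are sketched below; the thickness, field and mobility-lifetime products are illustrative values, not the GaAs photodiode parameters of this work.

```python
import numpy as np

def hecht_1carrier(L, mu_tau, E):
    """Single-carrier Hecht CCE for carriers drifting across the full thickness L."""
    lam = mu_tau * E                      # drift (schubweg) length
    return lam / L * (1.0 - np.exp(-L / lam))

def hecht_2carrier(z, L, mu_tau_e, mu_tau_h, E):
    """Two-carrier Hecht CCE for an e-h pair created at depth z (0 <= z <= L),
    electrons drifting toward z = L and holes toward z = 0."""
    lam_e, lam_h = mu_tau_e * E, mu_tau_h * E
    return (lam_e / L * (1.0 - np.exp(-(L - z) / lam_e))
            + lam_h / L * (1.0 - np.exp(-z / lam_h)))

# Illustrative numbers only (cgs-like units: cm, V/cm, cm^2/V):
L, E = 2.0e-4, 5.0e4                      # 2 um active thickness, 50 kV/cm
print(hecht_1carrier(L, mu_tau=1e-9, E=E))
print(hecht_2carrier(z=1.0e-4, L=L, mu_tau_e=1e-9, mu_tau_h=5e-10, E=E))
```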

  13. Modeling charge collection efficiency degradation in partially depleted GaAs photodiodes using the 1- and 2-carrier Hecht equations

    Energy Technology Data Exchange (ETDEWEB)

    Auden, E.C., E-mail: eauden@sandia.gov; Vizkelethy, G.; Serkland, D.K.; Bossert, D.J.; Doyle, B.L.

    2017-05-15

    The Hecht equation can be used to model the nonlinear degradation of charge collection efficiency (CCE) in response to radiation-induced displacement damage in both fully and partially depleted GaAs photodiodes. CCE degradation is measured for laser-generated photocurrent as a function of fluence and bias in Al0.3Ga0.7As/GaAs/Al0.25Ga0.75As p-i-n photodiodes which have been irradiated with 12 MeV C and 7.5 MeV Si ions. CCE is observed to degrade more rapidly with fluence in partially depleted photodiodes than in fully depleted photodiodes. When the intrinsic GaAs layer is fully depleted, the 2-carrier Hecht equation describes CCE degradation as photogenerated electrons and holes recombine at defect sites created by radiation damage in the depletion region. If the GaAs layer is partially depleted, CCE degradation is more appropriately modeled as the sum of the 2-carrier Hecht equation applied to electrons and holes generated within the depletion region and the 1-carrier Hecht equation applied to minority carriers that diffuse from the field-free (non-depleted) region into the depletion region. Enhanced CCE degradation is attributed to holes that recombine within the field-free region of the partially depleted intrinsic GaAs layer before they can diffuse into the depletion region.

  14. Development Status of Diffusion Code RAST-K 2.0 at UNIST

    Energy Technology Data Exchange (ETDEWEB)

    Park, Minyong; Zheng, Youqi; Choe, Jiwon; Zhang, Peng; Lee, Deokjung [UNIST, Ulsan (Korea, Republic of); Lee, Eunki; Shin, Hocheol [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The non-linear scheme used is based on 2-group CMFD and a three-dimensional multi-group unified nodal method (UNM). To consider history effects, the main heavy isotopes are tracked by a micro-depletion module using CRAM. The simplified 1-D single-channel thermal-hydraulic solver from nTACER is implemented. The θ method was adopted for the transient calculation. To obtain detailed pin-wise power and burnup distributions, a pin power reconstruction module was implemented, together with automatic control logic to calculate MTC, FTC and control rod worth. To perform multi-cycle analysis, restart and shuffling/rotation modules have been implemented. To link CASMO-4E and RAST-K 2.0, the CATORA (CASMO TO RAST-K 2.0) code was developed. Unlike other diffusion codes, the RAST-K 2.0 depletion module uses CRAM and an extended depletion chain for fission products. Most lattice codes give the cumulative fission yield of Pm-149 without considering the Pm-148 and Pm-149 capture reactions, which will lead to an increase of the Sm-149 number density. This paper reports the status of RAST-K 2.0 code development at UNIST. The new code applies a new kernel based on the two-node UNM with CMFD and the θ method for kinetic calculations, and the micro-depletion calculation is used to consider history effects. Other modules and functions have also been implemented, such as pin power reconstruction, branch calculation, restart, multi-cycle analysis, and the 1-D single-channel T/H solver.
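
    The θ method mentioned for the transient solver is the standard one-parameter time integration for a linear system dphi/dt = A phi; a generic step is sketched below with an arbitrary 2x2 matrix, not a RAST-K kinetics operator.

```python
import numpy as np

# Theta-method step for dphi/dt = A phi:
#   (I - theta*dt*A) phi_{n+1} = (I + (1 - theta)*dt*A) phi_n
# theta = 0.5 gives Crank-Nicolson, theta = 1 gives implicit Euler.
def theta_step(phi, A, dt, theta=0.5):
    I = np.eye(A.shape[0])
    lhs = I - theta * dt * A
    rhs = (I + (1.0 - theta) * dt * A) @ phi
    return np.linalg.solve(lhs, rhs)

A = np.array([[-1.0, 0.5],
              [ 0.4, -0.8]])            # illustrative system matrix
phi = np.array([1.0, 0.0])
for _ in range(10):
    phi = theta_step(phi, A, dt=0.1, theta=0.5)
print(phi)
```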

  15. Simulation of groundwater conditions and streamflow depletion to evaluate water availability in a Freeport, Maine, watershed

    Science.gov (United States)

    Nielsen, Martha G.; Locke, Daniel B.

    2012-01-01

    , the public-supply withdrawals (105.5 million gallons per year (Mgal/yr)) were much greater than those for any other category, being almost 7 times greater than all domestic well withdrawals (15.3 Mgal/yr). Industrial withdrawals in the study area (2.0 Mgal/yr) are mostly by a company that withdraws from an aquifer at the edge of the Merrill Brook watershed. Commercial withdrawals are very small (1.0 Mgal/yr), and no irrigation or other agricultural withdrawals were identified in this study area. A three-dimensional, steady-state groundwater-flow model was developed to evaluate stream-aquifer interactions and streamflow depletion from pumping, to help refine the conceptual model, and to predict changes in streamflow resulting from changes in pumping and recharge. Groundwater levels and flow in the Freeport aquifer study area were simulated with the three-dimensional, finite-difference groundwater-flow modeling code, MODFLOW-2005. Study area hydrology was simulated with a 3-layer model, under steady-state conditions. The groundwater model was used to evaluate changes that could occur in the water budgets of three parts of the local hydrologic system (the Harvey Brook watershed, the Merrill Brook watershed, and the buried aquifer from which pumping occurs) under several different climatic and pumping scenarios. The scenarios were (1) no pumping well withdrawals; (2) current (2009) pumping, but simulated drought conditions (20-percent reduction in recharge); (3) current (2009) recharge, but a 50-percent increase in pumping well withdrawals for public supply; and (4) drought conditions and increased pumping combined. In simulated drought situations, the overall recharge to the buried valley is about 15 percent less and the total amount of streamflow in the model area is reduced by about 19 percent. Without pumping, infiltration to the buried valley aquifer around the confining unit decreased by a small amount (0.05 million gallons per day (Mgal/d)), and discharge to the

  16. Radiation survey and decontamination of cape Arza from depleted uranium

    Directory of Open Access Journals (Sweden)

    Vukotić Perko

    2003-01-01

    In the action of NATO A-10 airplanes in 1999, cape Arza (Serbia and Montenegro) was contaminated by depleted uranium. Clean-up operations were undertaken at the site, and 242 uranium projectiles and 49 of their larger fragments were removed from the cape. That is about 85% of the total number of projectiles by which Arza was contaminated. Details of the applied procedures and the results of the soil radioactivity measurements after decontamination are described here.

  17. Application of depletion perturbation theory to fuel cycle burnup analysis

    International Nuclear Information System (INIS)

    White, J.R.

    1979-01-01

    Over the past several years static perturbation theory methods have been increasingly used for reactor analysis in lieu of more detailed and costly direct computations. Recently, perturbation methods incorporating time dependence have also received attention, and several authors have demonstrated their applicability to fuel burnup analysis. The objective of the work described here is to demonstrate that a time-dependent perturbation method can be easily and accurately applied to realistic depletion problems

  18. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy the quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n, k, dz/dx]].

  19. Depleted Uranium and Human Health.

    Science.gov (United States)

    Faa, Armando; Gerosa, Clara; Fanni, Daniela; Floris, Giuseppe; Eyken, Peter V; Lachowicz, Joanna I; Nurchi, Valeria M

    2018-01-01

    Depleted uranium (DU) is generally considered an emerging pollutant, first extensively introduced into the environment in the early nineties in Iraq, during the military operation called "Desert Storm". DU has been hypothesized to represent a hazardous element both for exposed soldiers and for the inhabitants of the polluted areas in the war zones. In this review, the possible consequences for human health of DU released into the environment are critically analyzed. In the first part, the chemical properties of DU and its principal civil and military uses are summarized. A concise analysis of the mechanisms underlying absorption, blood transport, tissue distribution and excretion of DU in the human body is the subject of the second part of this article. The following sections deal with pathological conditions putatively associated with overexposure to DU. Developmental and birth defects, the Persian Gulf syndrome, and kidney diseases that have been associated with DU are the topics treated in the third section. Finally, data regarding DU exposure and cancer onset are critically analyzed, including leukemia/lymphoma, lung cancer, uterine cervix cancer, breast cancer, bladder cancer and testicular cancer. The aim of the authors is to contribute to the debate on DU and its effects on human health and disease.

  20. Ozone depletion potentials of halocarbons

    International Nuclear Information System (INIS)

    Karol, I.L.; Kiselev, A.A.

    1992-01-01

    The concept of ozone depletion potential (ODP) is widely used in the evaluation of numerous halocarbons and of their replacements for effects on ozone, but the methods, model assumptions and conditions of ODP calculation have not been analyzed adequately. In this paper, a model study of the effects on ozone after instantaneous releases of various amounts of CH3CCl3 and of CHF2Cl (HCFC-22) under several conditions of the background atmosphere is presented, aimed at understanding the main connections between ODP values and the methods of their calculation. To facilitate the ODP computation in numerous variants long after the releases, these rather short-lived gases have been used. The variation of the released gas global mass from 1 Mt to 1 Gt leads to an increase of the ODP value. The same variations are analyzed for the CFC-free atmosphere of the 1960s and for the anthropogenically loaded atmosphere of the 21st century according to the IPCC-A scenario (business as usual). Recommendations on proper ways of calculating ODPs are proposed for practically important cases
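
    For orientation, the conventional (steady-state, CFC-11-referenced) definition that the paper's release-dependent calculations are compared against is:

```latex
\mathrm{ODP}_i \;=\;
\frac{\text{global ozone loss per unit mass of gas } i \text{ emitted}}
     {\text{global ozone loss per unit mass of CFC-11 emitted}}
```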

  1. Plutonium in depleted uranium penetrators

    International Nuclear Information System (INIS)

    McLaughlin, J.P.; Leon-Vintro, L.; Smith, K.; Mitchell, P.I.; Zunic, Z.S.

    2002-01-01

    Depleted uranium (DU) penetrators used in the recent Balkan conflicts have been found to be contaminated with trace amounts of transuranic materials such as plutonium. This contamination is usually a consequence of DU fabrication being carried out in facilities also handling uranium recycled from spent military and civilian nuclear reactor fuel. Specific activities of 239+240Pu, generally in the range 1 to 12 Bq/kg, have been found in DU penetrators recovered from the attack sites of the 1999 NATO bombardment of Kosovo. A DU penetrator recovered from a May 1999 attack site at Bratoselce in southern Serbia and analysed by University College Dublin was found to contain 43.7 ± 1.9 Bq/kg of 239+240Pu. This analysis is described. An account is also given of the radiation dose implications for the general population arising both from the DU itself and from the presence of plutonium in the penetrators. According to current dosimetric models, in all scenarios considered likely, the dose from the plutonium is estimated to be much smaller than that due to the uranium isotopes present in the penetrators. (author)

  2. Benefits of the delta K of depletion benchmarks for burnup credit validation

    International Nuclear Information System (INIS)

    Lancaster, D.; Machiels, A.

    2012-01-01

    Pressurized Water Reactor (PWR) burnup credit validation is demonstrated using the benchmarks for quantifying fuel reactivity decrements, published as 'Benchmarks for Quantifying Fuel Reactivity Depletion Uncertainty,' EPRI Report 1022909 (August 2011). This demonstration uses the depletion module TRITON available in the SCALE 6.1 code system, followed by criticality calculations using KENO-Va. The difference between the predicted depletion reactivity and the benchmark's depletion reactivity is a bias for the criticality calculations, and the uncertainty in the benchmarks is the depletion reactivity uncertainty. This depletion bias and uncertainty are used with the bias and uncertainty from fresh UO2 critical experiments to determine the criticality safety limits on the neutron multiplication factor, k-eff. The analysis shows that SCALE 6.1 with the ENDF/B-VII 238-group cross section library supports the use of a depletion bias of only 0.0015 in delta k if cooling is ignored and 0.0025 if cooling is credited. The uncertainty in the depletion bias is 0.0064. Reliance on the ENDF/B-V cross section library produces much larger disagreement with the benchmarks. The analysis covers numerous combinations of depletion and criticality options. In all cases, the historical uncertainty of 5% of the delta k of depletion ('Kopp memo') was shown to be conservative for fuel with more than 30 GWD/MTU burnup. Since this historically assumed burnup uncertainty is not a function of burnup, the Kopp memo's recommended bias and uncertainty may be exceeded at low burnups, but its absolute magnitude there is small. (authors)
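
    A quick back-of-the-envelope comparison of the two allowances quoted above is sketched below; the delta-k-of-depletion values are assumed examples, and the straight sum of bias and uncertainty is only one possible combination rule, not necessarily the one used in the report.

```python
# Figures quoted in the abstract: depletion bias 0.0025 (cooling credited) and
# depletion uncertainty 0.0064, versus the Kopp-memo allowance of 5% of the
# delta-k of depletion.  The delta-k values below are assumed examples.
bias, uncertainty = 0.0025, 0.0064

for dk_dep in (0.05, 0.10, 0.20, 0.30):
    kopp = 0.05 * dk_dep
    print(f"delta-k of depletion {dk_dep:.2f}: Kopp 5% = {kopp:.4f}, "
          f"benchmark bias + uncertainty = {bias + uncertainty:.4f}")
```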

  3. Depletion sensitivity predicts unhealthy snack purchases

    NARCIS (Netherlands)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; Fennis, Bob M.; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose

  4. The Chemistry and Toxicology of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Sidney A. Katz

    2014-03-01

    Natural uranium comprises three radioactive isotopes: 238U, 235U, and 234U. Depleted uranium (DU) is a byproduct of the processes for the enrichment of the naturally occurring 235U isotope. The worldwide stockpile contains some 1½ million tons of depleted uranium. Some of it has been used to dilute weapons-grade uranium (~90% 235U) down to reactor-grade uranium (~5% 235U), and some of it has been used for heavy tank armor and for the fabrication of armor-piercing bullets and missiles. Such weapons were used by the military in the Persian Gulf, the Balkans and elsewhere. The testing of depleted uranium weapons and their use in combat have resulted in environmental contamination and human exposure. Although the chemical and the toxicological behaviors of depleted uranium are essentially the same as those of natural uranium, the respective chemical forms and isotopic compositions in which they usually occur are different. The chemical and radiological toxicity of depleted uranium can injure biological systems. Normal functioning of the kidney, liver, lung, and heart can be adversely affected by depleted uranium intoxication. The focus of this review is on the chemical and toxicological properties of depleted and natural uranium and some of the possible consequences of long-term, low-dose exposure to depleted uranium in the environment.

  5. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  6. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  7. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
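
    The sweep-and-select workflow described above is easy to picture. The following sketch is not the code from this record; it assumes an invented cost model (hypothetical functions and coefficients) simply to show how a total system cost can be tabulated over two design choices and a minimum-cost design picked out.

      # Illustrative sweep over two design choices with an invented cost model.
      import itertools

      def system_cost(rise_time_ns, aspect_ratio):
          """Hypothetical total cost (arbitrary units) of an injector/accelerator design."""
          injector = 5.0 / rise_time_ns + 0.02 * rise_time_ns      # fast rise times cost more
          cores = 3.0 * aspect_ratio + 4.0 / aspect_ratio          # ferrite core material/volume
          pulsed_power = 0.5 * rise_time_ns ** 0.5 * aspect_ratio  # drive circuitry
          return injector + cores + pulsed_power

      rise_times = [20, 40, 60, 80, 100]            # ns, injector voltage rise time
      aspect_ratios = [1.5, 2.0, 2.5, 3.0]          # ferrite induction core aspect ratio

      costs = {(rt, ar): system_cost(rt, ar)
               for rt, ar in itertools.product(rise_times, aspect_ratios)}
      for (rt, ar), c in sorted(costs.items()):
          print(f"rise time {rt:3d} ns, aspect ratio {ar:.1f} -> cost {c:6.2f}")
      best = min(costs, key=costs.get)
      print("minimum-cost design (rise time, aspect ratio):", best)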

  8. Deuterium - depleted water. Achievements and perspectives

    International Nuclear Information System (INIS)

    Titescu, Gh.; Stefanescu, I.; Saros-Rogobete, I.

    2001-01-01

    Deuterium-depleted water is water with an isotopic content lower than 145 ppm D/(D+H), the natural isotopic content of water. The research conducted at ICSI Ramnicu Valcea regarding deuterium-depleted water was complemented by the following patents: - technique and installation for deuterium-depleted water production; - distilled water with low deuterium content; - technique and installation for the production of distilled water with low deuterium content; - mineralized water with low deuterium content and technique to produce it. The gold and silver medals won at international salons for inventions confirmed the novelty of these inventions. Knowing that the deuterium content of water has a strong influence on living organisms, beginning in 1996 ICSI Ramnicu Valcea, the deuterium-depleted water producer, co-operated with specialized Romanian institutes to evaluate the biological effects of deuterium-depleted water. The role of natural deuterium in living organisms was examined by using deuterium-depleted water instead of natural water. These investigations led to the following conclusions: 1. deuterium-depleted water caused a tendency towards an increase of the basal tone, accompanied by an intensification of the vasoconstrictor effects of phenylephrine, noradrenaline and angiotensin; the increase of the basal tone and vascular reactivity produced by deuterium-depleted water persists after removal of the vascular endothelium; 2. animals treated with deuterium-depleted water showed an increase of resistance both to sublethal and to lethal gamma radiation doses, suggesting a radioprotective action through stimulation of non-specific immune defence mechanisms; 3. deuterium-depleted water stimulates immune defence reactions, represented by the opsonic, bactericidal and phagocyte capacity of the immune system, together with an increase in the numbers of polymorphonuclear neutrophils; 4. investigations regarding artificial

  9. Development, implementation, and verification of multicycle depletion perturbation theory for reactor burnup analysis

    Energy Technology Data Exchange (ETDEWEB)

    White, J.R.

    1980-08-01

    A generalized depletion perturbation formulation based on the quasi-static method for solving realistic multicycle reactor depletion problems is developed and implemented within the VENTURE/BURNER modular code system. The present development extends the original formulation derived by M.L. Williams to include nuclide discontinuities such as fuel shuffling and discharge. This theory is first described in detail with particular emphasis given to the similarity of the forward and adjoint quasi-static burnup equations. The specific algorithm and computational methods utilized to solve the adjoint problem within the newly developed DEPTH (Depletion Perturbation Theory) module are then briefly discussed. Finally, the main features and computational accuracy of this new method are illustrated through its application to several representative reactor depletion problems.

  10. Verification of the cross-section and depletion chain processing module of DRAGON 3.06

    International Nuclear Information System (INIS)

    Chambon, R.; Marleau, G.; Zkiek, A.

    2008-01-01

    In this paper we present a verification of the module of the lattice code DRAGON 3.06 used for processing microscopic cross-section libraries, including their associated depletion chain. This verification is performed by reprogramming the capabilities of DRAGON in another language (MATLAB) and testing them on different problems typical of the CANDU reactor. The verification procedure consists in first programming MATLAB m-files to read the different cross section libraries in ASCII format and to compute the reference cross-sections and depletion chains. The same information is also recovered from the output files of DRAGON (using different m-files) and the resulting cross sections and depletion chain are compared with the reference library, the differences being evaluated and tabulated. The results show that the cross-section calculations and the depletion chains are correctly processed in version 3.06 of DRAGON. (author)
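
    The final comparison step of such a verification is essentially a tabulation of relative differences. The sketch below assumes two small dictionaries of group-wise cross sections standing in for the reference library and the values recovered from the code output; the isotope names and numbers are placeholders, not DRAGON data.

      # Tabulate relative differences between "reference" and "recovered" cross sections.
      import numpy as np

      reference = {"U235": np.array([1.42, 9.80, 98.7]),   # barns, per energy group (invented)
                   "U238": np.array([0.28, 0.90, 2.10])}
      recovered = {"U235": np.array([1.42, 9.81, 98.5]),   # values read back from code output
                   "U238": np.array([0.28, 0.90, 2.11])}

      print(f"{'isotope':8s} {'group':>5s} {'reference':>10s} {'recovered':>10s} {'diff (%)':>9s}")
      for iso in reference:
          rel = 100.0 * (recovered[iso] - reference[iso]) / reference[iso]
          for g, (r, c, d) in enumerate(zip(reference[iso], recovered[iso], rel), start=1):
              print(f"{iso:8s} {g:5d} {r:10.3f} {c:10.3f} {d:9.3f}")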

  11. Interstellar depletion anomalies and ionization potentials

    International Nuclear Information System (INIS)

    Tabak, R.G.

    1979-01-01

    Satellite observations indicate that (1) most elements are depleted from the gas phase when compared to cosmic abundances, (2) some elements are several orders of magnitude more depleted than others, and (3) these depletions vary from cloud to cloud. Since the most likely possibility is that the 'missing' atoms are locked into grains, depletions occur either by accretion onto core particles in interstellar clouds or earlier, during the period of primary grain formation. If the latter mechanism is dominant, then the most important depletion parameter is the condensation temperature of the elements and their various compounds. However, this alone is not sufficient to explain all the observed anomalies. It is shown that electrostatic effects - under a wide variety of conditions- can enormously enhance the capture cross-section of the grain. It is suggested that this mechanism can also account for such anomalies as the apparent 'overabundance' of the alkali metals in the gas phase. (orig.)

  12. Gulf war depleted uranium risks.

    Science.gov (United States)

    Marshall, Albert C

    2008-01-01

    US and British forces used depleted uranium (DU) in armor-piercing rounds to disable enemy tanks during the Gulf and Balkan Wars. Uranium particulate is generated by DU shell impact and particulate entrained in air may be inhaled or ingested by troops and nearby civilian populations. As uranium is slightly radioactive and chemically toxic, a number of critics have asserted that DU exposure has resulted in a variety of adverse health effects for exposed veterans and nearby civilian populations. The study described in this paper used mathematical modeling to estimate health risks from exposure to DU during the 1991 Gulf War for both US troops and nearby Iraqi civilians. The analysis found that the risks of DU-induced leukemia or birth defects are far too small to result in an observable increase in these health effects among exposed veterans or Iraqi civilians. The analysis indicated that only a few ( approximately 5) US veterans in vehicles accidentally targeted by US tanks received significant exposure levels, resulting in about a 1.4% lifetime risk of DU radiation-induced fatal cancer (compared with about a 24% risk of a fatal cancer from all other causes). These veterans may have also experienced temporary kidney damage. Iraqi children playing for 500 h in DU-destroyed vehicles are predicted to incur a cancer risk of about 0.4%. In vitro and animal tests suggest the possibility of chemically induced health effects from DU internalization, such as immune system impairment. Further study is needed to determine the applicability of these findings for Gulf War exposure to DU. Veterans and civilians who did not occupy DU-contaminated vehicles are unlikely to have internalized quantities of DU significantly in excess of normal internalization of natural uranium from the environment.

  13. Decommissioning plan depleted uranium manufacturing facility

    International Nuclear Information System (INIS)

    Bernhardt, D.E.; Pittman, J.D.; Prewett, S.V.

    1987-01-01

    Aerojet Ordnance Tennessee, Inc. (Aerojet) is decommissioning its California depleted uranium (DU) manufacturing facility. Aerojet has conducted manufacturing and research and development activities at the facility since 1977 under a State of California Source Materials License. The decontamination is being performed by a contractor selected for technical competence through competitive bidding. Since the facility will be released for uncontrolled use, it will be decontaminated to levels as low as reasonably achievable (ALARA). In order to fully apply the principles of ALARA, and ensure the decontamination is in full compliance with appropriate guides, Aerojet has retained Rogers and Associates Engineering Corporation (RAE) to assist in the decommissioning. RAE has assisted in characterizing the facility and preparing contract bid documents and technical specifications to obtain a qualified decontamination contractor. RAE will monitor the decontamination work effort to assure the contractor's performance complies with the contract specifications and the decontamination plan. The specifications require a thorough cleaning and decontamination of the facility, not just sufficient cleaning to meet the numeric cleanup criteria.

  14. Depletion GPT-free sensitivity analysis for reactor eigenvalue problems

    International Nuclear Information System (INIS)

    Kennedy, C.; Abdel-Khalik, H.

    2013-01-01

    This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based on perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)

  15. The scale analysis sequence for LWR fuel depletion

    International Nuclear Information System (INIS)

    Hermann, O.W.; Parks, C.V.

    1991-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs
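
    The alternation that SAS2H automates - a spectrum/transport step providing cross sections, followed by a point-depletion step - can be caricatured with a toy two-nuclide chain. The sketch below is not SAS2H, XSDRNPM-S or ORIGEN-S: the one-group flux model, cross sections and the lumped "product" nuclide are all invented for illustration, and the depletion step is a plain matrix exponential.

      # Toy transport/depletion alternation: update one-group data, then advance
      # nuclide densities with a matrix-exponential (Bateman-type) solve.
      import numpy as np
      from scipy.linalg import expm

      N = np.array([1.0e21, 0.0])          # atoms/cm^3: [U235, lumped absorbing product]
      sigma_a_u235, sigma_a_prod = 600e-24, 50e-24   # cm^2, illustrative one-group values
      dt = 30 * 86400.0                    # 30-day depletion steps

      def one_group_flux(step):
          """Stand-in for the spectrum/transport calculation: flux drifts with burnup."""
          return 3.0e13 * (1.0 + 0.02 * step)        # n/cm^2/s

      for step in range(12):
          phi = one_group_flux(step)
          # Toy depletion matrix: U235 is consumed and every absorption feeds the
          # lumped product, which is itself a (weaker) absorber; no decay terms.
          A = np.array([[-sigma_a_u235 * phi, 0.0],
                        [ sigma_a_u235 * phi, -sigma_a_prod * phi]])
          N = expm(A * dt) @ N
          print(f"step {step + 1:2d}: U235 = {N[0]:.3e}  product = {N[1]:.3e}  atoms/cm^3")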

  16. Creatine pretreatment protects cortical axons from energy depletion in vitro

    Science.gov (United States)

    Shen, Hua; Goldberg, Mark P.

    2012-01-01

    Creatine is a natural nitrogenous guanidino compound involved in bioenergy metabolism. Although creatine has been shown to protect neurons of the central nervous system (CNS) from experimental hypoxia/ischemia, it remains unclear if creatine may also protect CNS axons, and if the potential axonal protection depends on glial cells. To evaluate the direct impact of creatine on CNS axons, cortical axons were cultured in a separate compartment from their somas and proximal neurites using a modified two-compartment culture device. Axons in the axon compartment were subjected to acute energy depletion, an in vitro model of white matter ischemia, by exposure to 6 mM sodium azide for 30 min in the absence of glucose and pyruvate. Energy depletion reduced axonal ATP by 65%, depolarized axonal resting potential, and damaged 75% of axons. Application of creatine (10 mM) to both compartments of the culture at 24 h prior to energy depletion significantly reduced axonal damage by 50%. In line with the role of creatine in the bioenergy metabolism, this application also alleviated the axonal ATP loss and depolarization. Inhibition of axonal depolarization by blocking sodium influx with tetrodotoxin also effectively reduced the axonal damage caused by energy depletion. Further study revealed that the creatine effect was independent of glial cells, as axonal protection was sustained even when creatine was applied only to the axon compartment (free from somas and glial cells) for as little as 2 h. In contrast, application of creatine after energy depletion did not protect axons. The data provide the first evidence that creatine pretreatment may directly protect CNS axons from energy deficiency. PMID:22521466

  17. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
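
    A minimal concrete instance of the waveform-coding family mentioned above is μ-law companding followed by uniform quantization, the idea behind classic 64 kbit/s telephone speech. The sketch below applies it to a synthetic tone; it is an illustration of the principle, not a description of any particular standard's bit-exact behaviour.

      # Waveform coding illustration: mu-law companding + 8-bit quantization.
      import numpy as np

      fs = 8000                                        # Hz, telephone-band sampling rate
      t = np.arange(0, 0.02, 1.0 / fs)
      x = 0.6 * np.sin(2 * np.pi * 440 * t)            # synthetic "speech" segment in [-1, 1]

      def mu_law_encode(x, mu=255.0, bits=8):
          y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)        # compress
          return np.round((y + 1.0) / 2.0 * (2 ** bits - 1)).astype(int)  # quantize

      def mu_law_decode(codes, mu=255.0, bits=8):
          y = codes / (2 ** bits - 1) * 2.0 - 1.0
          return np.sign(y) * ((1.0 + mu) ** np.abs(y) - 1.0) / mu        # expand

      codes = mu_law_encode(x)
      x_hat = mu_law_decode(codes)
      snr = 10.0 * np.log10(np.sum(x ** 2) / np.sum((x - x_hat) ** 2))
      print(f"{len(codes)} samples at 8 bits/sample, reconstruction SNR = {snr:.1f} dB")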

  18. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distance of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes is determined to be much larger than the result given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007) for each code length. Thus, families of new QECCs are obtained, and the constructed QECCs have larger distances than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterpart and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.
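
    The bookkeeping behind such codes rests on q-ary cyclotomic cosets: for a narrow-sense BCH code of length n over GF(4) with designed distance δ, the defining set is the union of the cosets C_1, ..., C_{δ-1} and the dimension is n minus the size of that union. The sketch below performs this standard textbook computation for an imprimitive length (n = 85, which divides 4^4 - 1 = 255); it does not reproduce the paper's dual-containing analysis or its quantum code parameters.

      # Cyclotomic cosets and dimensions of narrow-sense BCH codes over GF(4).
      def cyclotomic_coset(s, n, q=4):
          coset, x = set(), s % n
          while x not in coset:
              coset.add(x)
              x = (x * q) % n
          return coset

      def narrow_sense_bch_dimension(n, delta, q=4):
          defining_set = set()
          for b in range(1, delta):                 # consecutive roots alpha^1 .. alpha^(delta-1)
              defining_set |= cyclotomic_coset(b, n, q)
          return n - len(defining_set)              # dimension of the cyclic code

      n = 85                                        # imprimitive length: 85 divides 4**4 - 1 = 255
      for delta in range(2, 8):
          k = narrow_sense_bch_dimension(n, delta)
          print(f"n = {n}, designed distance {delta}: dimension k = {k}")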

  19. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...

  20. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  1. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  2. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases.

  3. PAPIRUS - a computer code for FBR fuel performance analysis

    International Nuclear Information System (INIS)

    Kobayashi, Y.; Tsuboi, Y.; Sogame, M.

    1991-01-01

    The FBR fuel performance analysis code PAPIRUS has been developed to design fuels for demonstration and future commercial reactors. A pellet structural model was developed to describe the generation, depletion and transport of vacancies and atomic elements in a unified fashion. PAPIRUS results, in comparison with the power-to-melt test data from HEDL, showed the validity of the code at the initial reactor startup. (author)

  4. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  5. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association...... Coding Pirates. The report was written by Mikala Hansbøl, Docent in digital learning resources and research coordinator of the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol, and Stine Ejsing-Duun, Associate Professor in learning technology, interaction design......, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg Universitet in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017...

  6. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  7. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.

  8. Network Coding

    Indian Academy of Sciences (India)

    Network Coding. K V Rashmi, Nihar B Shah and P Vijay Kumar. General Article, Resonance – Journal of Science Education, Volume 15, Issue 7, July 2010, pp 604-621. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621

  9. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each state of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  10. Expander Codes

    Indian Academy of Sciences (India)

    Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article, Resonance – Journal of Science Education, Volume 10, Issue 1. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  11. Comparison of KANEXT and SERPENT for fuel depletion calculations of a sodium fast reactor

    International Nuclear Information System (INIS)

    Lopez-Solis, R.C.; Francois, J.L.; Becker, M.; Sanchez-Espinoza, V.H.

    2014-01-01

    As most Generation-IV systems are still in development, efficient and reliable computational tools are needed to obtain accurate results in reasonable computing time. In this study, the KANEXT code system is presented and validated against the well-known Monte Carlo code SERPENT for fuel depletion calculations of a sodium fast reactor (SFR). The KArlsruhe Neutronic EXtended Tool (KANEXT) is a modular code system for deterministic reactor calculations, consisting of one kernel and several modules. Results obtained with KANEXT for the SFR core are in good agreement with those of SERPENT, e.g. for the neutron multiplication factor and the evolution of the isotope inventory with burnup. (author)

  12. A comparative study of MONTEBURNS and MCNPX 2.6.0 codes in ADS simulations

    International Nuclear Information System (INIS)

    Barros, Graiciany P.; Pereira, Claubia; Veloso, Maria A.F.; Velasquez, Carlos E.; Costa, Antonella L.

    2013-01-01

    The possible use of the MONTEBURNS and MCNPX 2.6.0 codes for describing fuel evolution in accelerator-driven system (ADS) simulations is discussed. ADSs are investigated for fuel breeding and long-lived fission product transmutation, so simulations of fuel evolution are highly relevant. A burnup/depletion capability is present in both studied codes. The MONTEBURNS code links the Monte Carlo N-Particle Transport Code (MCNP) to the radioactive decay and burnup code ORIGEN2, whereas the MCNPX depletion/burnup capability is a linked process involving steady-state flux calculations by MCNPX and nuclide depletion calculations by CINDER90. A lead-cooled accelerator-driven system fueled with thorium was simulated, and the results obtained using the MONTEBURNS code were compared with those from the MCNPX 2.6.0 code. The system criticality and the variation of the actinide inventory during burnup were evaluated, and the results of the two codes show similar behavior. (author)

  13. Fully Depleted Charge-Coupled Devices

    International Nuclear Information System (INIS)

    Holland, Stephen E.

    2006-01-01

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 µm, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications.
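
    The link between substrate resistivity, bias and depletion depth follows from the standard one-sided abrupt-junction expression w = sqrt(2·ε·V/(q·N)). The numbers below (10 kΩ·cm n-type material, 50 V bias, built-in potential neglected) are generic illustrative assumptions rather than parameters of the devices described above.

      # Depletion width of a reverse-biased junction on high-resistivity silicon.
      import math

      q      = 1.602e-19           # C
      eps_si = 11.7 * 8.854e-14    # F/cm
      mu_n   = 1350.0              # cm^2/(V*s), approximate electron mobility

      rho = 1.0e4                  # ohm*cm, assumed float-zone substrate resistivity
      N_d = 1.0 / (q * mu_n * rho) # donor density implied by the resistivity
      V   = 50.0                   # V, assumed applied bias (built-in potential neglected)

      w_cm = math.sqrt(2.0 * eps_si * V / (q * N_d))
      print(f"N_d = {N_d:.2e} cm^-3, depletion width = {w_cm * 1e4:.0f} micrometres")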

  14. Plasmonic Nanoprobes for Stimulated Emission Depletion Nanoscopy.

    Science.gov (United States)

    Cortés, Emiliano; Huidobro, Paloma A; Sinclair, Hugo G; Guldbrand, Stina; Peveler, William J; Davies, Timothy; Parrinello, Simona; Görlitz, Frederik; Dunsby, Chris; Neil, Mark A A; Sivan, Yonatan; Parkin, Ivan P; French, Paul M W; Maier, Stefan A

    2016-11-22

    Plasmonic nanoparticles influence the absorption and emission processes of nearby emitters due to local enhancements of the illuminating radiation and the photonic density of states. Here, we use the plasmon resonance of metal nanoparticles in order to enhance the stimulated depletion of excited molecules for super-resolved nanoscopy. We demonstrate stimulated emission depletion (STED) nanoscopy with gold nanorods with a long axis of only 26 nm and a width of 8 nm. These particles provide an enhancement of up to 50% of the resolution compared to fluorescent-only probes without plasmonic components irradiated with the same depletion power. The nanoparticle-assisted STED probes reported here represent a ∼2 × 10^3 reduction in probe volume compared to previously used nanoparticles. Finally, we demonstrate their application toward plasmon-assisted STED cellular imaging at low depletion powers, and we also discuss their current limitations.

  15. Depleted UF6 programmatic environmental impact statement

    International Nuclear Information System (INIS)

    1997-01-01

    The US Department of Energy has developed a program for long-term management and use of depleted uranium hexafluoride, a product of the uranium enrichment process. As part of this effort, DOE is preparing a Programmatic Environmental Impact Statement (PEIS) for the depleted UF6 management program. This report duplicates the information available at the web site (http://www.ead.anl.gov/web/newduf6) set up as a repository for the PEIS. Options for the web site include: reviewing recent additions or changes to the web site; learning more about depleted UF6 and the PEIS; browsing the PEIS and related documents, or submitting official comments on the PEIS; downloading all or part of the PEIS documents; and adding or deleting one's name from the depleted UF6 mailing list.

  16. Ecological considerations of natural and depleted uranium

    International Nuclear Information System (INIS)

    Hanson, W.C.

    1980-01-01

    Depleted 238U is a major by-product of the nuclear fuel cycle for which increasing use is being made in counterweights, radiation shielding, and ordnance applications. This paper (1) summarizes the pertinent literature on natural and depleted uranium in the environment, (2) integrates results of a series of ecological studies conducted at Los Alamos Scientific Laboratory (LASL) in New Mexico, where 70,000 kg of depleted and natural uranium has been expended to the environment over the past 34 years, and (3) synthesizes the information into an assessment of the ecological consequences of natural and depleted uranium released to the environment by various means. Results of studies of soil, plant, and animal communities exposed to this radiation and chemical environment over a third of a century provide a means of evaluating the behavior and effects of uranium in many contexts.

  17. Stimulated emission depletion following two photon excitation

    OpenAIRE

    Marsh, R. J.; Armoogum, D. A.; Bain, A. J.

    2002-01-01

    The technique of stimulated emission depletion of fluorescence (STED) from a two photon excited molecular population is demonstrated in the S1 excited state of fluorescein in ethylene glycol and methanol. Two photon excitation (pump) is achieved using the partial output of a regeneratively amplified Ti:Sapphire laser in conjunction with an optical parametric amplifier whose tuneable output provides a synchronous depletion (dump) pulse. Time resolved fluorescence intensity and anisotropy measu...

  18. Depleted uranium: A DOE management guide

    International Nuclear Information System (INIS)

    1995-10-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF6) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in a concrete-based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF6 to an oxide aggregate that is used in concrete to make dry storage casks.

  19. Depleted Bulk Heterojunction Colloidal Quantum Dot Photovoltaics

    KAUST Repository

    Barkhouse, D. Aaron R.

    2011-05-26

    The first solution-processed depleted bulk heterojunction colloidal quantum dot solar cells are presented. The architecture allows for high absorption with full depletion, thereby breaking the photon absorption/carrier extraction compromise inherent in planar devices. A record power conversion efficiency of 5.5% under simulated AM 1.5 illumination conditions is reported. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. ANATOMY OF DEPLETED INTERPLANETARY CORONAL MASS EJECTIONS

    Energy Technology Data Exchange (ETDEWEB)

    Kocher, M.; Lepri, S. T.; Landi, E.; Zhao, L.; Manchester, W. B. IV, E-mail: mkocher@umich.edu [Department of Climate and Space Sciences and Engineering, University of Michigan, 2455 Hayward Street, Ann Arbor, MI 48109-2143 (United States)

    2017-01-10

    We report a subset of interplanetary coronal mass ejections (ICMEs) containing distinct periods of anomalous heavy-ion charge state composition and peculiar ion thermal properties measured by ACE/SWICS from 1998 to 2011. We label them “depleted ICMEs,” identified by the presence of intervals where C6+/C5+ and O7+/O6+ depart from the direct correlation expected after their freeze-in heights. These anomalous intervals within the depleted ICMEs are referred to as “Depletion Regions.” We find that a depleted ICME would be indistinguishable from all other ICMEs in the absence of the Depletion Region, which has the defining property of significantly low abundances of fully charged species of helium, carbon, oxygen, and nitrogen. Similar anomalies in the slow solar wind were discussed by Zhao et al. We explore two possibilities for the source of the Depletion Region associated with magnetic reconnection in the tail of a CME, using CME simulations of the evolution of two Earth-bound CMEs described by Manchester et al.

  1. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
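
    The encode/decode cycle can be caricatured in one dimension: a pseudo-random open/closed mask encodes a synthetic backscatter profile by circular convolution, and a correlation decoder with a ±1 decoding array recovers an approximation of that profile. This is a generic coded-aperture illustration with invented data, not the authors' gamma-ray arrangement or their decoding algorithm.

      # 1-D coded-aperture illustration: encode by circular convolution with a random
      # open/closed mask, decode (approximately) by circular cross-correlation.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 255
      mask = rng.integers(0, 2, n).astype(float)    # 1 = open hole, 0 = closed
      decode = 2.0 * mask - 1.0                     # +/-1 decoding array

      scene = np.zeros(n)                           # synthetic "flaw" profile
      scene[60], scene[180] = 1.0, 0.7

      def cconv(a, b):                              # circular convolution
          return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

      def ccorr(a, b):                              # circular cross-correlation of a with b
          return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

      detector = cconv(scene, mask)                 # coded indications at the detector
      recon = ccorr(detector, decode)               # correlation decoding
      recon -= recon.mean()                         # remove the flat pedestal

      background = np.max(np.abs(np.delete(recon, [60, 180])))
      print(f"recon at flaw positions: {recon[60]:.1f}, {recon[180]:.1f}; "
            f"largest value elsewhere: {background:.1f}")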

  2. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  3. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
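
    The second lifting stage described above relies on replacing each base-graph edge with a cyclically shifted identity block. The sketch below shows that operation once, on a small hypothetical base matrix; the base matrix, shift values and lifting size are made up and are not the protographs of this work.

      # Lift a protograph base matrix into a binary LDPC parity-check matrix by
      # replacing each entry with a sum (mod 2) of cyclically shifted identity blocks.
      import numpy as np

      def circulant(shift, z):
          """z-by-z identity matrix with its columns cyclically shifted."""
          return np.roll(np.eye(z, dtype=int), shift, axis=1)

      def lift(base, shifts, z):
          """base[i][j]: number of parallel edges; shifts[i][j]: their shift values."""
          block_rows = []
          for i, row in enumerate(base):
              blocks = []
              for j, edges in enumerate(row):
                  block = np.zeros((z, z), dtype=int)
                  for s in shifts[i][j][:edges]:
                      block = (block + circulant(s, z)) % 2
                  blocks.append(block)
              block_rows.append(np.hstack(blocks))
          return np.vstack(block_rows)

      base   = [[1, 1, 1, 0],            # hypothetical 2x4 protograph
                [1, 2, 0, 1]]            # the '2' is a double edge
      shifts = [[[0], [3], [5], []],
                [[1], [2, 6], [], [4]]]
      H = lift(base, shifts, z=8)
      print("lifted parity-check matrix:", H.shape, " ones per row:", H.sum(axis=1).tolist())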

  4. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  5. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.

  6. Qualification of the nuclear reactor core model DYN3D coupled to the thermohydraulic system code ATHLET, applied as an advanced tool for accident analysis of VVER-type reactors. Final report

    International Nuclear Information System (INIS)

    Grundmann, U.; Kliem, S.; Krepper, E.; Mittag, S.; Rohde, U.; Schaefer, F.; Seidel, A.

    1998-03-01

    The nuclear reactor core model DYN3D with 3D neutron kinetics has been coupled to the thermohydraulic system code ATHLET. In the report, activities on the qualification of the coupled code complex ATHLET-DYN3D as a validated tool for the accident analysis of Russian VVER-type reactors are described. That includes: - contributions to the validation of the single codes ATHLET and DYN3D by the analysis of experiments on natural circulation behaviour in thermohydraulic test facilities and the solution of benchmark tasks on reactivity-initiated transients, - the acquisition and evaluation of measurement data on transients in nuclear power plants, the validation of ATHLET-DYN3D by calculating an accident with delayed scram and a pump trip in VVER plants, - the complementary improvement of the code DYN3D by extension of the neutron physical data base, implementation of an improved coolant mixing model, consideration of decay heat release and xenon transients, - the analysis of steam leak scenarios for VVER-440 type reactors with failure of different safety systems, investigation of different model options. The analyses showed that, with realistic coolant mixing modelling in the downcomer and the lower plenum, recriticality of the scrammed reactor due to overcooling can be reached. The application of the code complex ATHLET-DYN3D in the Czech Republic, Bulgaria and Ukraine has been started. Future work comprises the verification of ATHLET-DYN3D with a DYN3D version for the square fuel element geometry of western PWRs. (orig.) [de

  7. Uncertainty Propagation in Monte Carlo Depletion Analysis

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Kim, Yeong-il; Park, Ho Jin; Joo, Han Gyu; Kim, Chang Hyo

    2008-01-01

    A new formulation aimed at quantifying uncertainties of Monte Carlo (MC) tallies such as keff, the microscopic reaction rates of nuclides, and nuclide number densities in MC depletion analysis, and at examining their propagation behaviour as a function of depletion time step (DTS), is presented. It is shown that the variance of a given MC tally, used as a measure of its uncertainty in this formulation, arises from four sources: the statistical uncertainty of the MC tally, uncertainties of microscopic cross sections, uncertainties of nuclide number densities, and the cross correlations between them; the contribution of the latter three sources can be determined by computing the correlation coefficients between the uncertain variables. It is also shown that the variance of any given nuclide number density at the end of each DTS stems from uncertainties of the nuclide number densities (NND) and microscopic reaction rates (MRR) of nuclides at the beginning of each DTS, and that they are determined by computing correlation coefficients between these two uncertain variables. To test the viability of the formulation, we conducted MC depletion analysis for two sample depletion problems involving a simplified 7x7 fuel assembly (FA) and a 17x17 PWR FA, determined number densities of uranium and plutonium isotopes and their variances as well as k∞ and its variance as a function of DTS, and demonstrated the applicability of the new formulation for the uncertainty propagation analysis that needs to be followed in MC depletion computations. (authors)
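
    The bookkeeping the formulation calls for - variances of reaction rates and number densities plus their cross-correlation - can be mimicked with a toy sampling experiment over one depletion time step. The flux, cross section, uncertainties and correlation below are invented; the point is only the mechanics of computing the correlation coefficients that the propagation needs.

      # Toy propagation over one depletion step: N(t+dt) = N(t)*exp(-sigma*phi*dt)
      # with correlated uncertainties on sigma and N(t) (all numbers invented).
      import numpy as np

      rng = np.random.default_rng(42)
      phi, dt = 5.0e14, 30 * 86400.0           # flux (n/cm^2/s) and a 30-day step
      sigma0, N0_mean = 1.0e-21, 1.0e21        # a strong absorber (1000 b) and its density
      rel_sd = np.array([0.02, 0.01])          # 2% (sigma) and 1% (N) relative std devs
      corr = 0.3                               # assumed starting cross-correlation

      # correlated standard normal samples via Cholesky of the 2x2 correlation matrix
      L = np.linalg.cholesky(np.array([[1.0, corr], [corr, 1.0]]))
      z = L @ rng.standard_normal((2, 200_000))
      sigma = sigma0 * (1.0 + rel_sd[0] * z[0])
      N0 = N0_mean * (1.0 + rel_sd[1] * z[1])

      N1 = N0 * np.exp(-sigma * phi * dt)      # end-of-step number densities

      print(f"relative std of N(t+dt): {N1.std() / N1.mean():.3%}")
      print(f"corr(sigma, N(t+dt)) = {np.corrcoef(sigma, N1)[0, 1]:+.3f}")
      print(f"corr(N(t),  N(t+dt)) = {np.corrcoef(N0, N1)[0, 1]:+.3f}")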

  8. Groundwater Depletion Embedded in International Food Trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-01-01

    Recent hydrological modeling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world's food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world's population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.
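
    The kind of bookkeeping behind such a quantification can be sketched as follows: for each trade flow, multiply a crop- and exporter-specific non-renewable groundwater use per tonne by the tonnage traded, then sum over flows. The sketch below shows that arithmetic with invented placeholder figures (country names, intensities and tonnages are not the paper's estimates).

      # Groundwater depletion embedded in trade: sum over flows of traded tonnage
      # times the exporter's non-renewable groundwater use per tonne (invented data).
      depletion_intensity = {                  # m^3 of non-renewable groundwater per tonne
          ("CountryA", "rice"): 310.0,
          ("CountryA", "wheat"): 120.0,
          ("CountryB", "wheat"): 45.0,
      }
      trade_flows = [                          # (exporter, crop, importer, tonnes per year)
          ("CountryA", "rice",  "CountryC", 2.0e6),
          ("CountryA", "wheat", "CountryC", 1.5e6),
          ("CountryB", "wheat", "CountryC", 3.0e6),
      ]

      embedded = {}
      for exporter, crop, importer, tonnes in trade_flows:
          volume = depletion_intensity[(exporter, crop)] * tonnes      # m^3 per year
          embedded[(exporter, importer)] = embedded.get((exporter, importer), 0.0) + volume

      for (exporter, importer), volume in embedded.items():
          print(f"{exporter} -> {importer}: {volume / 1e9:.3f} km^3/yr embedded groundwater depletion")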

  9. Depletion sensitivity predicts unhealthy snack purchases.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; Fennis, Bob M; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose weight on snack purchase behavior were explored. Participants included in the study were instructed to report every snack they bought over the course of one week. The dependent variables were the number of healthy and unhealthy snacks purchased. The results of the present study demonstrate that depletion sensitivity predicts the amount of unhealthy (but not healthy) snacks bought. The more sensitive people are to depletion, the more unhealthy snacks they buy. Moreover, there was some tentative evidence that this relation is more pronounced for people with a weak as opposed to a strong goal to lose weight, suggesting that a strong goal to lose weight may function as a motivational buffer against self-control failures. All in all, these findings provide evidence for the external validity of depletion sensitivity and the relevance of this construct in the domain of eating behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Groundwater depletion embedded in international food trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-03-01

    Recent hydrological modelling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world’s food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world’s population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.

  11. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
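
    The simplified dose and shielding arithmetic such a code of practice illustrates amounts to an inverse-square point-source estimate multiplied by an exponential attenuation factor. The gamma-ray constant, source activity and lead attenuation coefficient below are rough illustrative values chosen here (roughly appropriate for a small sealed 137Cs school source); they are not figures quoted from the Code.

      # Point-source dose-rate estimate with inverse-square fall-off and lead shielding.
      import math

      gamma_constant = 0.08    # mSv*m^2/(h*GBq), rough value for Cs-137 (assumed)
      activity_GBq = 370e-6    # a 370 kBq sealed source (assumed)
      mu_lead = 1.2            # 1/cm, approximate attenuation coefficient in lead at 662 keV

      for distance_m in (0.25, 0.5, 1.0):
          for lead_cm in (0.0, 1.0, 2.0):
              dose_mSv_h = (gamma_constant * activity_GBq / distance_m ** 2
                            ) * math.exp(-mu_lead * lead_cm)
              print(f"d = {distance_m:4.2f} m, lead = {lead_cm:3.1f} cm: "
                    f"{dose_mSv_h * 1000:.3f} microSv/h")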

  12. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    International Nuclear Information System (INIS)

    Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  13. The APOLLO assembly spectrum code

    International Nuclear Information System (INIS)

    Kavenoky, A.; Sanchez, R.

    1987-04-01

    The APOLLO code was originally developed as a design tool for HTRs; later it was aimed at the calculation of PWR lattices. APOLLO is a general purpose assembly spectrum code based on the multigroup integral transport equation; refined collision probability modules allow the computation of 1D geometries with linearly anisotropic scattering and a two-term flux expansion. In 2D geometries, modules based on the substructure method provide fast and accurate design calculations, and a module based on a direct discretization is devoted to reference calculations. The SPH homogenization technique provides corrected cross sections performing an equivalence between coarse and refined calculations. The post-processing module of APOLLO generates either an APOLLIB, to be used by APOLLO, or a NEPLIB for reactor diffusion calculations. The cross section library of APOLLO contains data and self-shielding data for more than 400 isotopes. APOLLO is able to compute the depletion of any medium accounting for any heavy isotope or fission product chain. 21 refs

  14. Reverse depletion method for PWR core reload design

    International Nuclear Information System (INIS)

    Downar, T.J.; Kim, Y.J.

    1985-01-01

    Low-leakage fuel management is currently practiced in over half of all pressurized water reactor (PWR) cores. Prospects for even greater use of in-board fresh fuel loading are good as utilities seek to reduce core vessel fluence, mitigate pressurized thermal shock concerns, and extend vessel lifetime. Consequently, large numbers of burnable poison (BP) pins are being used to control the power peaking at the in-board fresh fuel positions. This has presented an additional complexity to the core reload design problem. In addition to determining the best location of each assembly in the core, the designer must concurrently determine the distribution of BP pins in the fresh fuel. A procedure was developed that utilizes the well-known Haling depletion to achieve an end-of-cycle (EOC) core state where the assembly pattern is configured in the absence of all control poison. This effectively separates the assembly assignment and BP distribution problems. Once an acceptable pattern at EOC is configured, the burnable and soluble poison required to control the power and core excess reactivity are solved for as unknown variables while depleting the cycle in reverse from the EOC exposure distribution to the beginning of cycle. The methods developed were implemented in an approved light water reactor licensing code to ensure the validity of the results obtained and provide for the maximum utility to PWR core reload design
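
    The Haling principle the method builds on - deplete with the power shape held equal to the shape the core will have at EOC - can be shown with a two-node toy model. The reactivity-versus-burnup weighting, cycle energy and starting burnups below are invented; the sketch only illustrates the fixed-point iteration, not the licensing-code implementation described above.

      # Toy Haling-principle iteration: find the constant power shape that equals
      # the EOC power shape it produces (two nodes, invented burnup model).
      import numpy as np

      B0 = np.array([0.0, 20.0])        # BOC nodal burnups (GWd/t): fresh, once-burned
      cycle_energy = 15.0               # core-average burnup added over the cycle (GWd/t)

      def shape_from_burnup(B_eoc):
          """Invented model: relative power follows a weight that falls with burnup."""
          w = np.maximum(1.5 - 0.02 * B_eoc, 0.1)
          return w / w.mean()           # normalized so the mean relative power is 1

      P = np.ones(2)                    # start from a flat shape
      for iteration in range(100):
          P_new = shape_from_burnup(B0 + P * cycle_energy)
          if np.max(np.abs(P_new - P)) < 1e-10:
              break
          P = P_new

      print(f"converged after {iteration} iterations")
      print("Haling power shape:", P.round(4), " EOC burnups:", (B0 + P * cycle_energy).round(3))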

  15. Contribution to fuel depletion study in PWR type reactors, reactor core with three and four regions of enrichment

    International Nuclear Information System (INIS)

    Teixeira, M.C.C.

    1977-03-01

    The main methods for calculating fuel depletion are reviewed and several approaches are outlined. The LEOPARD code is described in detail, subroutine by subroutine, with flow charts included, and the method the code uses to calculate fuel depletion is described. Some imperfections of the IPR version, such as the burnup calculation method for heavy isotopes, are identified and corrected. The results of calculations for a reference reactor, based on data from the Preliminary Safety Analysis Report (PSAR) for the Angra I Nuclear Power Plant, are presented and discussed. (author)

  16. V.S.O.P.-computer code system for reactor physics and fuel cycle simulation

    International Nuclear Information System (INIS)

    Teuchert, E.; Hansen, U.; Haas, K.A.

    1980-03-01

    V.S.O.P. (Very Superior Old Programs) is a system of codes linked together for the simulation of reactor life histories. It comprises neutron cross section libraries and processing routines, repeated neutron spectrum evaluation, 2-D diffusion calculation based on neutron flux synthesis with depletion and shutdown features, in-core and out-of-pile fuel management, fuel cycle cost analysis, and thermal hydraulics (at present restricted to Pebble Bed HTRs). Various techniques have been employed to accelerate the iterative processes and to optimize the internal data transfer. The storage requirement is limited to 360 K-bytes by an overlay structure. The code system has been used extensively for comparison studies of reactors, their fuel cycles, and related detailed features. Besides its use in research and development work for the high temperature reactor, the system has been applied successfully to LWRs and heavy water reactors. (orig.) [de

  17. Ego depletion in visual perception: Ego-depleted viewers experience less ambiguous figure reversal.

    Science.gov (United States)

    Wimmer, Marina C; Stirk, Steven; Hancock, Peter J B

    2017-10-01

    This study examined the effects of ego depletion on ambiguous figure perception. Adults (N = 315) received an ego depletion task and were subsequently tested on their inhibitory control abilities that were indexed by the Stroop task (Experiment 1) and their ability to perceive both interpretations of ambiguous figures that was indexed by reversal (Experiment 2). Ego depletion had a very small effect on reducing inhibitory control (Cohen's d = .15) (Experiment 1). Ego-depleted participants had a tendency to take longer to respond in Stroop trials. In Experiment 2, ego depletion had small to medium effects on the experience of reversal. Ego-depleted viewers tended to take longer to reverse ambiguous figures (duration to first reversal) when naïve of the ambiguity and experienced less reversal both when naïve and informed of the ambiguity. Together, findings suggest that ego depletion has small effects on inhibitory control and small to medium effects on bottom-up and top-down perceptual processes. The depletion of cognitive resources can reduce our visual perceptual experience.

  18. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    Science.gov (United States)

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli, and it has also been found robustly in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, participants in one group completed a visual attention regulation task and participants in the other group completed an auditory attention regulation task; all participants then completed the handgrip task again. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  19. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  20. Depleted uranium and the Gulf War syndrome

    International Nuclear Information System (INIS)

    1999-01-01

    Some military personnel involved in the 1991 Gulf War have complained of continuing stress-like symptoms for which no obvious cause has been found. These symptoms have at times been attributed to the use of depleted uranium (DU) in shell casings, which is believed to have caused toxic effects. Depleted uranium is natural uranium which is depleted in the rarer U-235 isotope. It is a heavy metal and, in common with other heavy metals, is chemically toxic. It is also slightly radioactive and could give rise to a radiological hazard if dispersed in finely divided form so that it was inhaled. In response to concerns, the possible effects of DU have been extensively studied along with other possible contributors to 'Gulf War sickness'. This article looks at the results of some of the research that has been done on DU. (author)

  1. Self-regulation, ego depletion, and inhibition.

    Science.gov (United States)

    Baumeister, Roy F

    2014-12-01

    Inhibition is a major form of self-regulation. As such, it depends on self-awareness and comparing oneself to standards and is also susceptible to fluctuations in willpower resources. Ego depletion is the state of reduced willpower caused by prior exertion of self-control. Ego depletion undermines inhibition both because restraints are weaker and because urges are felt more intensely than usual. Conscious inhibition of desires is a pervasive feature of everyday life and may be a requirement of life in civilized, cultural society, and in that sense it goes to the evolved core of human nature. Intentional inhibition not only restrains antisocial impulses but can also facilitate optimal performance, such as during test taking. Self-regulation and ego depletion may also affect less intentional forms of inhibition, even chronic tendencies to inhibit. Broadly stated, inhibition is necessary for human social life, and nearly all societies encourage and enforce it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Department of Energy depleted uranium recycle

    International Nuclear Information System (INIS)

    Kosinski, F.E.; Butturini, W.G.; Kurtz, J.J.

    1994-01-01

    With its strategic supply of depleted uranium, the Department of Energy is studying reuse of the material in nuclear radiation shields, military hardware, and commercial applications. The study is expected to warrant a more detailed uranium recycle plan, which would include consideration of a demonstration program and a program implementation decision. Such a program, if implemented, would become the largest nuclear material recycle program in the history of the Department of Energy. The bulk of the current inventory of depleted uranium is stored in 14-ton cylinders in the form of solid uranium hexafluoride (UF 6 ). The radioactive 235 U content has been reduced to a concentration of 0.2% to 0.4%. Present estimates indicate there are about 55,000 UF 6 -filled cylinders in inventory, and planned operations will provide another 2,500 cylinders of depleted uranium each year. The United States government, under the auspices of the Department of Energy, considers the depleted uranium a highly refined strategic resource of significant value. A possible use for a large portion of the depleted uranium inventory is as radiation shielding for spent reactor fuel and high-level radioactive waste. To this end, the Department of Energy study to date has included a preliminary technical review to ascertain which DOE chemical forms are useful for commercial products. The presentation summarizes this information, including preliminary cost estimates. The status of commercial uranium processing is discussed. With a shrinking market, the number of chemical conversion and fabrication plants has been reduced; however, the commercial capability does exist for chemical conversion of the UF 6 to the metal form and for the fabrication of uranium radiation shields and other uranium products. Department of Energy facilities no longer possess a capability for depleted uranium chemical conversion

  3. Applying no-depletion equilibrium sampling and full-depletion bioaccessibility extraction to 35 historically polycyclic aromatic hydrocarbon contaminated soils

    DEFF Research Database (Denmark)

    Bartolomé, Nora; Hilber, Isabel; Sosa, Dayana

    2018-01-01

    Assessing the bioaccessibility of organic pollutants in contaminated soils is considered a complement to measurements of total concentrations in risk assessment and legislation. Consequently, methods for its quantification require validation with historically contaminated soils. In this study, 35...... with polyoxymethylene was used to determine freely dissolved concentrations (Cfree) of polycyclic aromatic hydrocarbons (PAHs), while sorptive bioaccessibility extraction (SBE) with silicone rods was used to determine the bioaccessible PAH concentrations (Cbioacc) of these soils. The organic carbon partition...... Capacity Ratio (SCR); particularly for soils with very high KD. The source of contamination determined bioaccessible fractions (fbioacc). The smallest fbioacc were obtained with skeet soils (15%), followed by the pyrogenically influenced soils, rural soils, and finally, the petrogenically contaminated soil...

  4. Monte Carlo burnup codes acceleration using the correlated sampling method

    International Nuclear Information System (INIS)

    Dieudonne, C.

    2013-01-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, to simulate the neutron transport, with deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows fine 3-dimensional effects to be tracked and avoids the multigroup hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this document we present an original methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. First, we present this method and provide details on the perturbative technique used, namely correlated sampling. Second, we develop a theoretical model to study the features of the correlated sampling method and to understand its effects on depletion calculations. Third, we discuss the implementation of this method in the TRIPOLI-4 code, as well as the precise calculation scheme used to bring an important speed-up of the depletion calculation. We begin by validating and optimizing the perturbed depletion scheme with the calculation of a PWR-like fuel cell depletion. This technique is then used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After having validated the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude. (author) [fr
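
    A minimal sketch of the correlated-sampling idea behind this acceleration, under simplifying assumptions: a single set of histories sampled with a reference cross section is reused, via a likelihood-ratio weight, to estimate the response for a perturbed (depleted) composition without rerunning the transport. The one-group purely absorbing slab and the cross-section values are illustrative, not the TRIPOLI-4 implementation.

        import math, random

        # Toy illustration of correlated sampling for a depletion perturbation:
        # a purely absorbing slab of thickness T, one energy group.  The same
        # histories sampled with the reference cross section are reused, via a
        # likelihood-ratio weight, to estimate the absorption probability for a
        # perturbed (e.g. depleted) cross section without rerunning the transport.
        random.seed(1)
        T = 2.0            # slab thickness, toy value
        sig_ref = 0.50     # reference macroscopic cross section (1/cm)
        sig_per = 0.55     # perturbed cross section after a hypothetical burnup step

        n_hist = 200_000
        ref_score = 0.0    # absorption tally, reference problem
        per_score = 0.0    # absorption tally, perturbed problem (correlated)
        for _ in range(n_hist):
            x = -math.log(random.random()) / sig_ref      # flight length from the reference pdf
            if x < T:                                     # absorbed inside the slab
                ref_score += 1.0
                # reweight the same event by the ratio of perturbed to reference pdf values
                per_score += (sig_per / sig_ref) * math.exp(-(sig_per - sig_ref) * x)
            # a transmitted particle scores zero absorption in both problems

        print("reference  MC %.4f   analytic %.4f" % (ref_score / n_hist, 1 - math.exp(-sig_ref * T)))
        print("correlated MC %.4f   analytic %.4f" % (per_score / n_hist, 1 - math.exp(-sig_per * T)))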

  5. Depleted uranium hexafluoride: Waste or resource?

    International Nuclear Information System (INIS)

    Schwertz, N.; Zoller, J.; Rosen, R.; Patton, S.; Bradley, C.; Murray, A.

    1995-07-01

    The US Department of Energy is evaluating technologies for the storage, disposal, or re-use of depleted uranium hexafluoride (UF 6 ). This paper discusses the following options, and provides a technology assessment for each one: (1) conversion to UO 2 for use as mixed oxide fuel, (2) conversion to UO 2 to make DUCRETE for a multi-purpose storage container, (3) conversion to depleted uranium metal for use as shielding, (4) conversion to uranium carbide for use as high-temperature gas-cooled reactor (HTGR) fuel. In addition, conversion to U 3 O 8 as an option for long-term storage is discussed

  6. Depleted uranium processing and fluorine extraction

    International Nuclear Information System (INIS)

    Laflin, S.T.

    2010-01-01

    Since the beginning of the nuclear era, there has never been a commercial solution for the large quantities of depleted uranium hexafluoride generated from uranium enrichment. In the United States alone, there is already in excess of 1.6 billion pounds (730 million kilograms) of DUF_6 currently stored. INIS is constructing a commercial uranium processing and fluorine extraction facility. The INIS facility will convert depleted uranium hexafluoride and use it as feed material for the patented Fluorine Extraction Process to produce high purity fluoride gases and anhydrous hydrofluoric acid. The project will provide an environmentally friendly and commercially viable solution for DUF_6 tails management. (author)

  7. The Chemistry and Toxicology of Depleted Uranium

    OpenAIRE

    Sidney A. Katz

    2014-01-01

    Natural uranium is composed of three radioactive isotopes: 238U, 235U, and 234U. Depleted uranium (DU) is a byproduct of the processes for the enrichment of the naturally occurring 235U isotope. The worldwide stockpile contains some 1½ million tons of depleted uranium. Some of it has been used to dilute weapons-grade uranium (~90% 235U) down to reactor-grade uranium (~5% 235U), and some of it has been used for heavy tank armor and for the fabrication of armor-piercing bullets and missiles....

  8. ORION: a computer code for evaluating environmental concentrations and dose equivalent to human organs or tissue from airborne radionuclides

    International Nuclear Information System (INIS)

    Shinohara, K.; Nomura, T.; Iwai, M.

    1983-05-01

    The computer code ORION has been developed to evaluate the environmental concentrations and the dose equivalent to human organs or tissue from air-borne radionuclides released from multiple nuclear installations. The modified Gaussian plume model is applied to calculate the dispersion of the radionuclide. Gravitational settling, dry deposition, precipitation scavenging and radioactive decay are considered to be the causes of depletion and deposition on the ground or on vegetation. ORION is written in the FORTRAN IV language and can be run on IBM 360, 370, 303X, 43XX and FACOM M-series computers. 8 references, 6 tables
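
    For context, a minimal sketch of the kind of Gaussian plume evaluation on which such a code is built; the release rate, wind speed and dispersion parameters are illustrative assumptions, and the depletion, deposition and decay corrections that ORION applies are omitted.

        import math

        def ground_level_concentration(q, u, y, sigma_y, sigma_z, h):
            """Ground-level concentration (Bq/m^3) from a continuous point release of
            strength q (Bq/s) at effective height h (m), wind speed u (m/s) and
            crosswind offset y (m), using the standard Gaussian plume formula with
            ground reflection.  Depletion, deposition and decay corrections omitted."""
            return (q / (math.pi * u * sigma_y * sigma_z)
                    * math.exp(-y**2 / (2.0 * sigma_y**2))
                    * math.exp(-h**2 / (2.0 * sigma_z**2)))

        # Illustrative numbers only: 1 GBq/s release, 3 m/s wind, dispersion
        # parameters roughly typical of a few hundred metres downwind.
        c = ground_level_concentration(q=1.0e9, u=3.0, y=0.0, sigma_y=35.0, sigma_z=20.0, h=50.0)
        print(f"concentration on the plume axis: {c:.3e} Bq/m^3")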

  9. 17 CFR 229.406 - (Item 406) Code of ethics.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...

  10. A calculational procedure for neutronic and depletion analysis of Molten-Salt reactors based on SCALE6/TRITON

    International Nuclear Information System (INIS)

    Sheu, R.J.; Chang, J.S.; Liu, Y.-W. H.

    2011-01-01

    Molten-Salt Reactors (MSRs) represent one of the selected categories in the GEN-IV program. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which makes online refueling and salt processing possible. However, this operating characteristic also complicates the modeling and simulation of reactor core behaviour using conventional neutronic codes. The TRITON sequence in the SCALE6 code system has been designed to provide the combined capabilities of problem-dependent cross-section processing and rigorous treatment of neutron transport, coupled with ORIGEN-S depletion calculations. In order to accommodate the simulation of dynamic refueling and processing schemes, an in-house program, REFRESH, together with a run script was developed to carry out a series of stepwise TRITON calculations, which makes analyzing the neutronic properties and performance of an MSR core design easier. As a demonstration and cross check, we have applied this method to reexamine the conceptual design of the Molten Salt Actinide Recycler & Transmuter (MOSART). This paper summarizes the development of the method and preliminary results of its application to MOSART. (author)
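
    The driver described above amounts to a loop that alternates a depletion step with an online salt-processing step. The schematic sketch below illustrates that control flow only; the function names, removal efficiencies and feed rates are hypothetical placeholders and do not reflect the actual REFRESH/TRITON interface.

        # Schematic driver for stepwise depletion with online salt processing.
        # The "transport/depletion step" below is a toy stand-in for a call to an
        # external lattice/transport code; removal efficiencies and feed rates are
        # illustrative assumptions only.

        inventory = {"U233": 1000.0, "FP_soluble": 0.0, "FP_gaseous": 0.0}   # kg, toy values

        def run_depletion_step(inv, days):
            """Stand-in for one coupled transport + depletion step: burn some fissile
            material and create fission products (grossly simplified toy model)."""
            burned = 0.002 * inv["U233"] * days / 30.0
            inv["U233"] -= burned
            inv["FP_soluble"] += 0.6 * burned
            inv["FP_gaseous"] += 0.4 * burned

        def apply_salt_processing(inv, days):
            """Online processing between steps: sparge gaseous FPs quickly, remove a
            fraction of soluble FPs, and top up fissile feed to a target loading."""
            inv["FP_gaseous"] *= 0.01                        # assume nearly complete gas removal
            inv["FP_soluble"] *= (1.0 - 0.05 * days / 30.0)  # partial soluble FP removal
            inv["U233"] = 1000.0                             # hypothetical make-up feed

        step_days = 30
        for step in range(6):
            run_depletion_step(inventory, step_days)
            apply_salt_processing(inventory, step_days)
            print(f"step {step + 1}: " + ", ".join(f"{k}={v:8.2f} kg" for k, v in inventory.items()))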

  11. User's manual for a process model code

    International Nuclear Information System (INIS)

    Kern, E.A.; Martinez, D.P.

    1981-03-01

    The MODEL code has been developed for computer modeling of materials processing facilities associated with the nuclear fuel cycle. However, it can also be used in other modeling applications. This report provides sufficient information for a potential user to apply the code to specific process modeling problems. Several examples that demonstrate most of the capabilities of the code are provided

  12. Light-water-reactor coupled neutronic and thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Diamond, D.J.

    1982-01-01

    An overview is presented of computer codes that model light water reactor cores with coupled neutronics and thermal-hydraulics. This includes codes for transient analysis and codes for steady state analysis which include fuel depletion and fission product buildup. Applications in nuclear design, reactor operations and safety analysis are given and the major codes in use in the USA are identified. The neutronic and thermal-hydraulic methodologies and other code features are outlined for three steady state codes (PDQ7, NODE-P/B and SIMULATE) and four dynamic codes (BNL-TWIGL, MEKIN, RAMONA-3B, RETRAN-02). Speculation as to future trends with such codes is also presented

  13. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
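
    A small sketch of the graph construction used in this line of work, following the convention that each trinucleotide b1b2b3 of a code X contributes the edges b1 -> b2b3 and b1b2 -> b3, so that X is circular exactly when the graph is acyclic. The four-trinucleotide code below is an arbitrary toy example, not the code X identified in genes.

        # Build the directed graph G(X) associated with a trinucleotide code X:
        # every trinucleotide b1 b2 b3 contributes the edges b1 -> b2b3 and b1b2 -> b3.
        # X is circular iff G(X) is acyclic; self-complementarity means that the
        # reverse complement of every trinucleotide of X is also in X.

        COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}

        def reverse_complement(t):
            return "".join(COMP[b] for b in reversed(t))

        def graph_of(code):
            edges = set()
            for t in code:
                edges.add((t[0], t[1:]))      # b1 -> b2b3
                edges.add((t[:2], t[2]))      # b1b2 -> b3
            return edges

        def has_cycle(edges):
            adj = {}
            for u, v in edges:
                adj.setdefault(u, []).append(v)
            WHITE, GREY, BLACK = 0, 1, 2
            colour = {}
            def dfs(u):
                colour[u] = GREY
                for v in adj.get(u, []):
                    c = colour.get(v, WHITE)
                    if c == GREY or (c == WHITE and dfs(v)):
                        return True
                colour[u] = BLACK
                return False
            return any(colour.get(u, WHITE) == WHITE and dfs(u) for u in adj)

        X = {"AAC", "GTT", "GAC", "GTC"}      # toy code, closed under reverse complement
        print("self-complementary:", all(reverse_complement(t) in X for t in X))
        print("circular (acyclic graph):", not has_cycle(graph_of(X)))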

  14. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    NARCIS (Netherlands)

    Salmon, S.J.; Adriaanse, M.A.; Vet, de E.W.M.L.; Fennis, B.M.; Ridder, de D.T.D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In

  15. “When the going gets tough, who keeps going?” Depletion sensitivity moderates the ego-depletion effect

    Science.gov (United States)

    Salmon, Stefanie J.; Adriaanse, Marieke A.; De Vet, Emely; Fennis, Bob M.; De Ridder, Denise T. D.

    2014-01-01

    Self-control relies on a limited resource that can get depleted, a phenomenon that has been labeled ego-depletion. We argue that individuals may differ in their sensitivity to depleting tasks, and that consequently some people deplete their self-control resource at a faster rate than others. In three studies, we assessed individual differences in depletion sensitivity, and demonstrate that depletion sensitivity moderates ego-depletion effects. The Depletion Sensitivity Scale (DSS) was employed to assess depletion sensitivity. Study 1 employs the DSS to demonstrate that individual differences in sensitivity to ego-depletion exist. Study 2 shows moderate correlations of depletion sensitivity with related self-control concepts, indicating that these scales measure conceptually distinct constructs. Study 3 demonstrates that depletion sensitivity moderates the ego-depletion effect. Specifically, participants who are sensitive to depletion performed worse on a second self-control task, indicating a stronger ego-depletion effect, compared to participants less sensitive to depletion. PMID:25009523

  16. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue-initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups treated as thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed
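
    To illustrate the kind of eigenvalue (criticality) problem such a solver addresses, here is a minimal one-group, one-dimensional finite-difference power iteration; the slab width and cross sections are invented, and the nodal expansion method itself is not reproduced.

        import numpy as np

        # One-group, 1-D slab reactor, finite differences, zero-flux boundaries:
        #   -D phi'' + Sig_a phi = (1/k) nuSig_f phi
        # Cross sections and slab width are invented for illustration.
        D, sig_a, nu_sig_f = 1.2, 0.012, 0.015   # cm, 1/cm, 1/cm (toy values)
        width, n = 200.0, 100                    # slab width (cm), number of interior nodes
        h = width / (n + 1)

        # Loss operator A (diffusion + absorption) and production operator F
        A = np.zeros((n, n))
        for i in range(n):
            A[i, i] = 2.0 * D / h**2 + sig_a
            if i > 0:
                A[i, i - 1] = -D / h**2
            if i < n - 1:
                A[i, i + 1] = -D / h**2
        F = nu_sig_f * np.eye(n)

        phi = np.ones(n)
        k = 1.0
        for _ in range(500):                      # power iteration on A phi = (1/k) F phi
            src = F @ phi
            phi_new = np.linalg.solve(A, src / k)
            k_new = k * (F @ phi_new).sum() / src.sum()
            if abs(k_new - k) < 1e-7:
                k, phi = k_new, phi_new
                break
            k, phi = k_new, phi_new / np.max(phi_new)

        # Analytic check for this toy slab: k = nuSig_f / (Sig_a + D * (pi/width)^2)
        print("k_eff (power iteration):", round(k, 4))
        print("k_eff (analytic)       :", round(nu_sig_f / (sig_a + D * (np.pi / width)**2), 4))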

  17. Evolution of depleted mantle: The lead perspective

    Science.gov (United States)

    Tilton, George R.

    1983-07-01

    Isotopic data have established that, compared to estimated bulk earth abundances, the sources of oceanic basaltic lavas have been depleted in large ion lithophile elements for at least several billion years. Various data on the Tertiary-Mesozoic Gorgona komatiite and the Cretaceous Oka carbonatite show that those rocks also sample depleted mantle sources. This information is used by analogy to compare Pb isotopic data from 2.6-billion-year-old komatiite and carbonatite from the Suomussalmi belt of eastern Finland and Munro Township, Ontario, with data from associated granitic rocks and ores that should contain marked crustal components. Within experimental error, no differences are detected in the isotopic composition of initial Pb in either of the rock suites. These observations agree closely with Sr and Nd data from other laboratories showing that depleted mantle could not have originated in those areas more than a few tenths of a billion years before the rocks were emplaced. On a world-wide basis the Pb isotope data are consistent with production of depleted mantle by continuous differentiation processes acting over approximately the past 3 billion years. The data show that Pb evolution is more complex than the simpler models derived from the Rb-Sr and Sm-Nd systems. The nature of the complexity is still poorly understood.

  18. Poroelasticity of high porosity chalk under depletion

    DEFF Research Database (Denmark)

    Andreassen, Katrine Alling; Fabricius, Ida Lykke

    2013-01-01

    on mechanical test results is found to be lower than the pretest dynamic Biot coefficient determined from elastic wave propagation for the loading path and with less deviation under depletion. The calculated lateral stress is lower than the experimentally measured lateral stress depending on loading path...

  19. Nitrogen depletion in field red giants

    DEFF Research Database (Denmark)

    Masseron, T.; Lagarde, N.; Miglio, A.

    2017-01-01

    , the behaviour of nitrogen data along the evolution confirms the existence of non-canonical extramixing on the red giant branch (RGB) for all low-mass stars in the field. But more surprisingly, the data indicate that nitrogen has been depleted between the RGB tip and the red clump. This may suggest that some...

  20. Elephant invasion and escalated depletion of environmental ...

    African Journals Online (AJOL)

    For decades, elephant invasion has been known to be associated with severe environmental consequences leading to escalated depletion of environmental resources (plants, water, wildlife and soil). This paper examined the effects of elephants' activity on the environmental resources in Hong and Gombi Local Government areas ...

  1. Depletion mode pumping of solid state lasers

    International Nuclear Information System (INIS)

    Mundinger, D.; Solarz, R.; Beach, R.; Albrecht, G.; Krupke, W.

    1990-01-01

    Depletion mode pumping of solid state lasers is a new concept which offers features that are of interest for many practical applications. In this paper the authors discuss the physical properties and mechanisms that set the design requirements, present model calculations for a practical laser design, and discuss the results of recent experiments

  2. Global Warming: Lessons from Ozone Depletion

    Science.gov (United States)

    Hobson, Art

    2010-01-01

    My teaching and textbook have always covered many physics-related social issues, including stratospheric ozone depletion and global warming. The ozone saga is an inspiring good-news story that's instructive for solving the similar but bigger problem of global warming. Thus, as soon as students in my physics literacy course at the University of…

  3. Ozone depleting substances management inventory system

    Directory of Open Access Journals (Sweden)

    Felix Ivan Romero Rodríguez

    2018-02-01

    Full Text Available Context: Protecting the ozone layer is an activity that contributes to the planet's environmental stability. For this reason, the Montreal Protocol was created to control the emission of substances that deplete the ozone layer and to reduce their production at the organizational level. However, it is also necessary to control the substances that are already in circulation and those present in equipment that cannot yet be replaced, given the circumstances of the companies that keep it. Generally, the records used to classify the types of substances, the equipment, and the companies that own them are kept in physical files, spreadsheets and text documents, which makes it difficult to control and manage the stored data. Method: The objective of this research is to computerize the process of controlling substances that deplete the ozone layer. The whole process for managing Ozone-Depleting Substances (ODS) and their alternatives is evaluated and described. The agile development methodology SCRUM is used for the development, and free and open-source tools and technologies are used for the solution. Result: As a result of the research, a computer tool was developed that automates the control and management of substances that deplete the ozone layer and their alternatives. Conclusions: The developed computer tool makes it possible to control and manage ozone-depleting substances and the equipment that uses them. It also manages the substances that arise as alternatives to be used for the protection of the ozone layer.

  4. Application of backtracking algorithm to depletion calculations

    International Nuclear Information System (INIS)

    Wu Mingyu; Wang Shixi; Yang Yong; Zhang Qiang; Yang Jiayin

    2013-01-01

    Based on the theory of the linear chain method for analytical depletion calculations, the burnup matrix is decoupled by a divide-and-conquer strategy and linear chains with the Markov property are formed. The density, activity and decay heat of every nuclide in a chain can then be calculated from analytical solutions. Every possible reaction path of a nuclide must be considered when the linear chains are established, so an algorithm is needed that can cover all reaction paths and search them automatically according to the problem description and precision requirements. After analysis and comparison of several search algorithms, the backtracking algorithm was selected to establish and evaluate the linear chains, using depth-first search (DFS), giving an algorithm that can solve the depletion problem adaptively and with high fidelity. The space and time complexity of the solution was analyzed, taking into account the depletion process and the characteristics of the backtracking algorithm. The newly developed depletion program was coupled with the Monte Carlo program MCMG-II to calculate the benchmark burnup problem of the first core of the China Experimental Fast Reactor (CEFR), and preliminary verification and validation of the program were performed. (authors)
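
    A minimal sketch of the path-enumeration idea: depth-first search with backtracking over a small transmutation/decay graph, pruning branches whose cumulative branching ratio falls below a precision cutoff. The nuclide graph and branching ratios are invented for illustration and are not CEFR data.

        # Depth-first (backtracking) enumeration of linear chains in a small,
        # invented transmutation/decay graph.  Each edge carries a branching ratio;
        # a branch is abandoned once its cumulative ratio drops below `cutoff`,
        # which is how precision requirements prune the search space.

        GRAPH = {                       # hypothetical successors: nuclide -> [(daughter, branching ratio)]
            "U238":  [("U239", 1.0)],
            "U239":  [("Np239", 1.0)],
            "Np239": [("Pu239", 1.0)],
            "Pu239": [("Pu240", 0.64), ("FP", 0.36)],   # capture vs fission, toy numbers
            "Pu240": [("Pu241", 1.0)],
            "Pu241": [("Pu242", 0.73), ("Am241", 0.27)],
        }

        def linear_chains(start, cutoff=1e-2):
            chains = []
            def dfs(nuclide, path, weight):
                if weight < cutoff:                     # precision cutoff: prune this branch
                    return
                successors = GRAPH.get(nuclide, [])
                if not successors:
                    chains.append((path, weight))       # terminal nuclide: one linear chain
                    return
                for daughter, ratio in successors:
                    if daughter in path:                # guard against loops in the toy graph
                        continue
                    dfs(daughter, path + [daughter], weight * ratio)   # descend, then backtrack
            dfs(start, [start], 1.0)
            return chains

        for chain, weight in linear_chains("U238"):
            print(" -> ".join(chain), f"(cumulative branching {weight:.3f})")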

  5. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  6. [Acute tryptophan depletion in eating disorders].

    Science.gov (United States)

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rational bases justifying the use of the acute tryptophan depletion technique in eating disorders (ED) and the methods and design used in our studies. The tryptophan depletion technique has been described and used safely in previous studies and makes it possible to evaluate brain serotonin activity. It is therefore used in the investigation of hypotheses on serotonergic deficiency in eating disorders. Furthermore, given the relationship between dysfunctions of serotonin activity and impulsive symptoms, the technique may be useful for the biological differentiation of the different ED subtypes, that is, restrictive and bulimic. 57 female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally to patients and controls after a two-day low-tryptophan diet. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of the basic personality characteristics and impulsivity traits was also done. The relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. The test was effective in considerably reducing plasma tryptophan from baseline levels (76%) within five hours in the global sample. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value for studying the involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  7. Fuel depletion analyses for the HEU core of GHARR-1: Part II: Fission product inventory

    International Nuclear Information System (INIS)

    Anim-Sampong, S.; Akaho, E.H.K.; Boadu, H.O.; Intsiful, J.D.K.; Osae, S.

    1999-01-01

    The fission product isotopic inventories have been estimated for a 90.2% highly enriched uranium (HEU) fuel lattice cell of the Ghana Research Reactor-1 (GHARR-1) using the WIMSD/4 transport lattice code. The results indicate a gradual decrease in the Xe 135 inventory, and a saturation trend for the Sm 149 , Cs 134 and Cs 135 inventories, as the fuel is depleted to 10,000 MWd/tU. (author)

  8. Burnup calculation code system COMRAD96

    International Nuclear Information System (INIS)

    Suyama, Kenya; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu.

    1997-06-01

    COMRAD is one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables the analysis of a burnup problem considering changes in the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  9. Burnup calculation code system COMRAD96

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu

    1997-06-01

    COMRAD is one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables the analysis of a burnup problem considering changes in the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  10. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can alone be applied to whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. These separate code units include containment analysis modules and a 3-D kinetics module, which can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system; the mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate the three-dimensional reactor kinetic behavior instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. If parallelism is eventually to be incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  11. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been ported to the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can alone be applied to whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, can be applied to the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. These separate code units include containment analysis modules and a 3-D kinetics module, which can optionally be invoked and coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system; the mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate the three-dimensional reactor kinetic behavior instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate for a PC cluster system where multiple CPUs are available. If parallelism is eventually to be incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  12. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning by solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
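
    A small numerical sketch of the core trick, assuming circular (periodic) boundary conditions: convolution becomes an elementwise product in the frequency domain, so the dominant linear system can be solved frequency by frequency. Only this building block is shown, for a single dictionary filter, not the full ADMM dictionary-learning algorithm of the abstract.

        import numpy as np

        # Circular convolution via FFT, and the per-frequency (diagonal) solve that
        # replaces a large spatial-domain linear system.  Single filter (M = 1);
        # the signal and filter are random toy data.
        rng = np.random.default_rng(0)
        N = 256
        d = rng.standard_normal(16)                 # filter
        d_pad = np.zeros(N); d_pad[:16] = d         # zero-padded to the signal length
        x = rng.standard_normal(N)                  # coefficient map

        # 1) FFT convolution agrees with direct circular convolution
        conv_fft = np.real(np.fft.ifft(np.fft.fft(d_pad) * np.fft.fft(x)))
        conv_direct = np.array([sum(d_pad[k] * x[(n - k) % N] for k in range(N)) for n in range(N)])
        print("max |fft - direct| :", np.max(np.abs(conv_fft - conv_direct)))

        # 2) Frequency-domain solve of (D^T D + rho I) x = D^T s, which in Fourier
        #    space is diagonal: (|d_hat|^2 + rho) x_hat = conj(d_hat) * s_hat
        s = conv_fft + 0.01 * rng.standard_normal(N)    # noisy observation
        rho = 0.5
        d_hat, s_hat = np.fft.fft(d_pad), np.fft.fft(s)
        x_hat = np.conj(d_hat) * s_hat / (np.abs(d_hat)**2 + rho)
        x_ls = np.real(np.fft.ifft(x_hat))              # O(N log N) instead of a dense O(N^3) solve
        lhs = np.real(np.fft.ifft(np.abs(d_hat)**2 * np.fft.fft(x_ls))) + rho * x_ls
        rhs = np.real(np.fft.ifft(np.conj(d_hat) * s_hat))
        print("residual of the frequency-domain solve:", np.linalg.norm(lhs - rhs))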

  13. Health and environmental impact of depleted uranium

    International Nuclear Information System (INIS)

    Furitsu, Katsumi

    2010-01-01

    Depleted uranium (DU) is 'nuclear waste' produced by the enrichment process; it consists mostly of 238 U and is depleted in the fissionable isotope 235 U compared to natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium. Depleted uranium and natural uranium are identical in terms of chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in an aerosol and absorb them mainly through the lungs. Depleted uranium has both radiological toxicity and chemical toxicity, and a possible synergistic effect of the two kinds of toxicity has also been pointed out. Animal and cellular studies have reported carcinogenic, neurotoxic, immunotoxic and other effects of depleted uranium, including damage to the reproductive system and the foetus. In addition, health effects of micro/nano-particles similar in size to the depleted uranium aerosols produced by uranium weapons have been reported. Aerosolized DU dust can easily spread beyond the battlefield into civilian areas, sometimes even crossing international borders, so not only military personnel but also civilians can be exposed, and the contamination continues after the cessation of hostilities. Taking these aspects into account, DU weapons are illegal under international humanitarian law and are considered inhumane weapons of 'indiscriminate destruction'. The international society is now discussing the prohibition of DU weapons based on the 'precautionary principle'. The 1991 Gulf War is reportedly the first

  14. Implementation of LT codes based on chaos

    International Nuclear Information System (INIS)

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

    Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of an LT code, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences to the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the implemented LT codes based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
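
    A toy sketch of the approach: a Kent (skew tent) chaotic map serves as the pseudo-random source that picks each encoding symbol's degree from a fixed degree distribution and then its neighbours, whose XOR gives the encoding symbol. The degree distribution, map parameter and message are illustrative choices, not those of the paper.

        import functools, operator

        # Toy LT encoder driven by a Kent (skew tent) chaotic map instead of a
        # conventional PRNG.  Degree distribution, map parameter a, and the message
        # are illustrative choices only.

        def kent_map(x, a=0.7):
            """One iterate of the Kent map on (0, 1)."""
            return x / a if x < a else (1.0 - x) / (1.0 - a)

        class ChaosSource:
            def __init__(self, seed=0.123456, a=0.7):
                self.x, self.a = seed, a
            def next(self):
                self.x = kent_map(self.x, self.a)
                if not 0.0 < self.x < 1.0:        # guard against the degenerate fixed point
                    self.x = 0.61803398875
                return self.x                     # pseudo-random value in (0, 1)
            def randint(self, n):
                return int(self.next() * n)       # uniform-ish integer in [0, n)

        DEGREE_DIST = [(1, 0.10), (2, 0.50), (3, 0.25), (4, 0.15)]   # toy degree distribution

        def sample_degree(src):
            u, acc = src.next(), 0.0
            for d, p in DEGREE_DIST:
                acc += p
                if u <= acc:
                    return d
            return DEGREE_DIST[-1][0]

        def lt_encode(message, n_encoding, src):
            """Produce n_encoding symbols; each is the XOR of `degree` distinct source symbols."""
            k = len(message)
            out = []
            for _ in range(n_encoding):
                degree = sample_degree(src)
                neighbours = set()
                while len(neighbours) < min(degree, k):
                    neighbours.add(src.randint(k))
                symbol = functools.reduce(operator.xor, (message[i] for i in neighbours))
                out.append((sorted(neighbours), symbol))
            return out

        message = [0x53, 0xA7, 0x1C, 0xF0, 0x3B]            # five source symbols (bytes)
        for nbrs, sym in lt_encode(message, 8, ChaosSource()):
            print(f"neighbours {nbrs} -> encoding symbol {sym:#04x}")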

  15. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consist of 11 files, one for the organ codes and the others for the pathology codes. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first digit of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adaptation to other programs. Therefore, this program can be used for automation of routine work in the department of radiology
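
    The workflow described above is essentially two chained table look-ups: an organ code chosen by name or number, then a pathology code taken from the table selected by the organ code's first digit. The miniature tables below are invented placeholders, not the real ACR index or the authors' program.

        # Minimal illustration of the two-step ACR-style coding flow: pick an organ
        # code (by name or by number), then pick a pathology code from the table
        # keyed by the organ code's leading digit.  Tiny invented tables only.

        ORGAN_CODES = {"chest": "131", "abdomen": "7", "skull": "1"}          # hypothetical entries
        PATHOLOGY_TABLES = {
            "1": {"pneumonia": "3661", "normal": "1"},                        # table for organ codes 1xx
            "7": {"ileus": "733", "normal": "1"},                             # table for organ codes 7xx
        }

        def acr_code(organ, pathology):
            organ_code = ORGAN_CODES.get(organ.lower(), organ)                # accept a name or a code
            table = PATHOLOGY_TABLES[organ_code[0]]                           # table keyed by first digit
            return f"{organ_code}.{table[pathology.lower()]}"

        # Produces a code in the same organ.pathology format as the abstract's example
        print(acr_code("chest", "pneumonia"))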

  16. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  17. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
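
    For background, a tiny sketch of static Shannon coding, on which the dynamic scheme builds: symbols are ordered by probability, each receives a codeword of length ceil(-log2 p), and the codeword is read off the binary expansion of the cumulative probability of the preceding symbols. The probabilities are arbitrary, and the adaptive bookkeeping of the paper is not reproduced.

        import math

        # Static Shannon coding: sort symbols by decreasing probability, assign each
        # a length of ceil(-log2 p), and take the codeword from the binary expansion
        # of the cumulative probability of the preceding symbols.

        def shannon_code(probs):
            items = sorted(probs.items(), key=lambda kv: -kv[1])
            code, cumulative = {}, 0.0
            for symbol, p in items:
                length = max(1, math.ceil(-math.log2(p)))
                # first `length` bits of the binary expansion of `cumulative`
                bits, frac = "", cumulative
                for _ in range(length):
                    frac *= 2
                    bit, frac = divmod(frac, 1.0)
                    bits += str(int(bit))
                code[symbol] = bits
                cumulative += p
            return code

        for s, w in shannon_code({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}).items():
            print(s, "->", w)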

  18. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual

  19. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  20. Neutronics/Thermo-fluid Coupled Analysis of PMR-200 Equilibrium Cycle by CAPP/GAMMA+ Code System

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyun Chul; Tak, Nam-il [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The equilibrium core was obtained by performing a CAPP stand-alone multi-cycle depletion calculation with critical rod position search. In this work, a code system for coupled neutronics and thermo-fluids simulation was developed using the CAPP and GAMMA+ codes. A server program, INTCA, controls the two codes for coupled calculations and performs the mapping between the variables of the two codes based on their nodalizations. In order to extend the knowledge about the coupled behavior of a prismatic VHTR, the CAPP/GAMMA+ code system was applied to steady-state performance analysis of PMR-200. The coupled calculation was carried out for the equilibrium core of PMR-200 from BOC to EOC. The peak fuel temperature was predicted to be 1372 °C near MOC. However, the cycle-average fuel temperature was calculated as 1230 °C, which is slightly below the design target of 1250 °C. In addition, a significant impact of the bypass flow on the central reflector temperature was found. Without bypass flow, the temperature of the active core region decreased slightly while the temperature of the central and side reflector regions increased considerably. Both temperature changes increase the multiplication factor, and the total change in the multiplication factor was more than 300 pcm. On the other hand, the effect of the bypass flow on the power density profile was not significant.

  1. Recurrence formulas for evaluating expansion series of depletion functions

    International Nuclear Information System (INIS)

    Vukadin, Z.

    1991-01-01

    A high-accuracy analytical method for solving the depletion equations for chains of radioactive nuclides is based on the formulation of depletion functions. When all the arguments of the depletion function are too close to each other, series expansions of the depletion function have to be used. However, the high-accuracy series expressions for the depletion functions of high index become too complicated. Recursion relations are derived which enable an efficient high-accuracy evaluation of the depletion functions with high indices. (orig.) [de
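
    For context, the Bateman-type solution underlying such depletion functions shows why series expansions are needed when decay constants nearly coincide: the factors (lambda_k - lambda_j) in the denominators cause catastrophic cancellation as the constants approach one another. The sketch below demonstrates this numerically with arbitrary toy decay constants.

        import math

        # Bateman solution for the n-th member of a decay chain with distinct decay
        # constants, and a demonstration of the numerical cancellation that occurs
        # when the constants are nearly equal -- the situation in which the series
        # expansions of the depletion functions are needed.  Toy decay constants.

        def bateman_nth(lams, t, n1=1.0):
            """N_n(t) for a chain 1 -> 2 -> ... -> n, all initial atoms in member 1."""
            n = len(lams)
            prod_lam = math.prod(lams[:-1])
            total = 0.0
            for j in range(n):
                denom = math.prod(lams[k] - lams[j] for k in range(n) if k != j)
                total += math.exp(-lams[j] * t) / denom
            return n1 * prod_lam * total

        def equal_lambda_limit(lam, n, t, n1=1.0):
            """Confluent limit when all decay constants are equal."""
            return n1 * (lam * t) ** (n - 1) / math.factorial(n - 1) * math.exp(-lam * t)

        lam, t = 0.1, 10.0
        for eps in (1e-2, 1e-5, 1e-8, 1e-11):
            lams = [lam, lam * (1 + eps), lam * (1 + 2 * eps)]   # nearly equal constants
            approx = bateman_nth(lams, t)
            exact = equal_lambda_limit(lam, 3, t)
            print(f"eps = {eps:.0e}:  Bateman sum = {approx: .10f}   equal-lambda limit = {exact:.10f}")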

  2. Depleted uranium plasma reduction system study

    International Nuclear Information System (INIS)

    Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.

    1994-12-01

    A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted Uranium is currently stored in the form of solid UF 6 , of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessary processing equipment and facilities to perform the process. The study has identified total processing cost of approximately $3.00/kg of UF 6 processed. Based on the results of this study, the development of a laboratory-scale system (1 kg/h throughput of UF6) is warranted. Further scaling of the process to pilot scale will be determined after laboratory testing is complete

  3. Depleted uranium hexafluoride: Waste or resource?

    Energy Technology Data Exchange (ETDEWEB)

    Schwertz, N.; Zoller, J.; Rosen, R.; Patton, S. [Lawrence Livermore National Lab., CA (United States); Bradley, C. [USDOE Office of Nuclear Energy, Science, Technology, Washington, DC (United States); Murray, A. [SAIC (United States)

    1995-07-01

    The US Department of Energy is evaluating technologies for the storage, disposal, or re-use of depleted uranium hexafluoride (UF{sub 6}). This paper discusses the following options, and provides a technology assessment for each one: (1) conversion to UO{sub 2} for use as mixed oxide fuel, (2) conversion to UO{sub 2} to make DUCRETE for a multi-purpose storage container, (3) conversion to depleted uranium metal for use as shielding, (4) conversion to uranium carbide for use as high-temperature gas-cooled reactor (HTGR) fuel. In addition, conversion to U{sub 3}O{sub 8} as an option for long-term storage is discussed.

  4. Improvements in EBR-2 core depletion calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Hill, R.N.; Sakamoto, S.

    1991-01-01

    The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs

  5. The depletion of the stratospheric ozone layer

    International Nuclear Information System (INIS)

    Sabogal Nelson

    2000-01-01

    The protection of the Earth's ozone layer is of the highest importance to mankind. The dangers of its destruction are by now well known. The depletion of that layer has reached record levels, and the Antarctic ozone hole covered a record area this year. The ozone layer is predicted to begin recovery in the next one or two decades and should be restored to pre-1980 levels by 2050. This is the achievement of the regime established by the 1985 Vienna Convention for the Protection of the Ozone Layer and the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. The regime established by these two agreements has been revised and made more effective in London (1990), Copenhagen (1992), Vienna (1995), and Beijing (1999)

  6. Optical assessment of phytoplankton nutrient depletion

    DEFF Research Database (Denmark)

    Heath, M.R.; Richardson, Katherine; Kiørboe, Thomas

    1990-01-01

    The ratio of light absorption at 480 and 665 nm by 90% acetone extracts of marine phytoplankton pigments has been examined as a potential indicator of phytoplankton nutritional status in both laboratory and field studies. The laboratory studies demonstrated a clear relationship between nutritiona......-replete and nutrient-depleted cells. The field data suggest that the absorption ratio may be a useful indicator of nutritional status of natural phytoplankton populations, and can be used to augment the interpretation of other data....

  7. The ultimate disposition of depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    1990-12-01

    Significant amounts of the depleted uranium (DU) created by past uranium enrichment activities have been sold, disposed of commercially, or utilized by defense programs. In recent years, however, the demand for DU has become quite small compared to quantities available, and within the US Department of Energy (DOE) there is concern for any risks and/or cost liabilities that might be associated with the ever-growing inventory of this material. As a result, Martin Marietta Energy Systems, Inc. (Energy Systems), was asked to review options and to develop a comprehensive plan for inventory management and the ultimate disposition of DU accumulated at the gaseous diffusion plants (GDPs). An Energy Systems task team, under the chairmanship of T. R. Lemons, was formed in late 1989 to provide advice and guidance for this task. This report reviews options and recommends actions and objectives in the management of working inventories of partially depleted feed (PDF) materials and for the ultimate disposition of fully depleted uranium (FDU). Actions that should be considered are as follows. (1) Inspect UF{sub 6} cylinders on a semiannual basis. (2) Upgrade cylinder maintenance and storage yards. (3) Convert FDU to U{sub 3}O{sub 8} for long-term storage or disposal. This will include provisions for partial recovery of costs to offset those associated with DU inventory management and the ultimate disposal of FDU. Another recommendation is to drop the term "tails" in favor of "depleted uranium" or "DU" because the "tails" label implies that it is "waste." 13 refs.

  8. Ozone depletion, greenhouse effect and atomic energy

    International Nuclear Information System (INIS)

    Adzersen, K.H.

    1991-01-01

    After describing the causes and effects of ozone depletion and the greenhouse effect, the author discusses the alternative offered by the nuclear industry. In his opinion, a worldwide energy strategy of risk minimisation will not be possible unless efficient energy use is introduced immediately, efficiently and on a reliable basis. Atomic energy is not viewed as an acceptable means of preventing the threatening climate change. (DG) [de

  9. The ultimate disposition of depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    Lemons, T.R. [Uranium Enrichment Organization, Oak Ridge, TN (United States)]

    1991-12-31

    Depleted uranium (DU) is produced as a by-product of the uranium enrichment process. Over 340,000 MTU of DU in the form of UF{sub 6} have been accumulated at the US government gaseous diffusion plants and the stockpile continues to grow. An overview of issues and objectives associated with the inventory management and the ultimate disposition of this material is presented.

  10. Carbon sequestration in depleted oil shale deposits

    Science.gov (United States)

    Burnham, Alan K; Carroll, Susan A

    2014-12-02

    A method and apparatus are described for sequestering carbon dioxide underground by mineralizing the carbon dioxide with coinjected fluids and minerals remaining from the extraction of shale oil. In one embodiment, an illite-rich oil shale is heated to pyrolyze the shale underground, and carbon dioxide is provided to the remaining depleted oil shale while at an elevated temperature. Conditions are sufficient to mineralize the carbon dioxide.

  11. Applied physics

    International Nuclear Information System (INIS)

    Anon.

    1980-01-01

    The Physics Division research program that is dedicated primarily to applied research goals involves the interaction of energetic particles with solids. This applied research is carried out in conjunction with the basic research studies from which it evolved

  12. Barium depletion in hollow cathode emitters

    International Nuclear Information System (INIS)

    Polk, James E.; Mikellides, Ioannis G.; Katz, Ira; Capece, Angela M.

    2016-01-01

    Dispenser hollow cathodes rely on a consumable supply of Ba released by BaO-CaO-Al 2 O 3 source material in the pores of a tungsten matrix to maintain a low work function surface. The examination of cathode emitters from long duration tests shows deposits of tungsten at the downstream end that appear to block the flow of Ba from the interior. In addition, a numerical model of Ba transport in the cathode plasma indicates that the Ba partial pressure in the insert may exceed the equilibrium vapor pressure of the dominant Ba-producing reaction, and it was postulated previously that this would suppress Ba loss in the upstream part of the emitter. New measurements of the Ba depletion depth from a cathode insert operated for 8200 h reveal that Ba loss is confined to a narrow region near the downstream end, confirming this hypothesis. The Ba transport model was modified to predict the depletion depth with time. A comparison of the calculated and measured depletion depths gives excellent qualitative agreement, and quantitative agreement was obtained assuming an insert temperature 70 °C lower than measured beginning-of-life values

  13. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include: ENDF/B-VII.1 nuclear data libraries (CE and MG) with enhanced group structures; neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data; covariance data for fission product yields and decay constants; stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler; parallel calculations with KENO; problem-dependent temperature corrections for CE calculations; CE shielding and criticality accident alarm system analysis with MAVRIC; CE

  14. Depleted uranium residual radiological risk assessment for Kosovo sites

    International Nuclear Information System (INIS)

    Durante, Marco; Pugliese, Mariagabriella

    2003-01-01

    During the recent conflict in Yugoslavia, depleted uranium rounds were employed and were left in the battlefield. Health concern is related to the risk arising from contamination of areas in Kosovo with depleted uranium penetrators and dust. Although chemical toxicity is the most significant health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict. Uranium munitions are considered to be a source of radiological contamination of the environment. Based on measurements and estimates from the recent Balkan Task Force UNEP mission in Kosovo, we have estimated effective doses to resident populations using a well-established food-web mathematical model (RESRAD code). The UNEP mission did not find any evidence of widespread contamination in Kosovo. Rather than the actual measurements, we elected to use a desk assessment scenario (Reference Case) proposed by the UNEP group as the source term for computer simulations. Specific applications to two Kosovo sites (Planeja village and Vranovac hill) are described. Results of the simulations suggest that radiation doses from water-independent pathways are negligible (annual doses below 30 μSv). A small radiological risk is expected from contamination of the groundwater in conditions of effective leaching and low distribution coefficient of uranium metal. Under the assumptions of the Reference Case, significant radiological doses (>1 mSv/year) might be achieved after many years from the conflict through water-dependent pathways. Even in this worst-case scenario, DU radiological risk would be far overshadowed by its chemical toxicity

  15. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  16. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
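
    As an illustration of the packet processing described above, the sketch below shows an intermediate node combining two incoming packets with coding matrices. The names, the packet length, and the choice of GF(2) arithmetic are assumptions made for the example; the paper's algorithms additionally optimize the field size and the coefficient selection.

```python
# Illustrative sketch (not from the paper): an intermediate node in vector
# network coding combines incoming length-L packets y1, y2 with L x L
# coding matrices A1, A2 over GF(2), i.e. out = A1*y1 + A2*y2 (mod 2).
import numpy as np

L = 4  # packet (vector) length, an assumed value

rng = np.random.default_rng(0)
y1 = rng.integers(0, 2, size=L)          # incoming packet 1 (bits)
y2 = rng.integers(0, 2, size=L)          # incoming packet 2 (bits)
A1 = rng.integers(0, 2, size=(L, L))     # coding matrix applied to y1
A2 = rng.integers(0, 2, size=(L, L))     # coding matrix applied to y2

# The outgoing packet is a matrix-weighted combination of the inputs,
# playing the role that scalar coding coefficients play in scalar coding.
out = (A1 @ y1 + A2 @ y2) % 2
print(out)
```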

  17. Depletion interaction measured by colloidal probe atomic force microscopy

    NARCIS (Netherlands)

    Wijting, W.K.; Knoben, W.; Besseling, N.A.M.; Leermakers, F.A.M.; Cohen Stuart, M.A.

    2004-01-01

    We investigated the depletion interaction between stearylated silica surfaces in cyclohexane in the presence of dissolved polydimethylsiloxane by means of colloidal probe atomic force microscopy. We found that the range of the depletion interaction decreases with increasing concentration.

  18. Applying the computer code 'beam scanning' for obtaining the electron beam energy spectrum and monitoring the beam scanning system with a Faraday cup and edge current sensors

    International Nuclear Information System (INIS)

    Bystrov, P.A.

    2014-01-01

    The results of simulations of experiments, obtained during the development of a technique for controlling the parameters of the electron beam in a compact radiation sterilization installation, are presented. Calculations were performed with the help of the computer code 'BEAM SCANNING', developed at MRTI. A method is proposed to obtain the spectrum of the electron beam by simulating the experiments in which Faraday cup waveforms were measured. Preliminary results are presented. Results of the experiments and calculations obtained in the development of the amplitude angle sensors are also presented. The experiments in which the beam irradiated lead plates proposed as current sensors were modeled. Results are presented in comparison with experimental data. Simulation results are also presented for the device designed to control the scanning system.

  19. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: ► We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. ► We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. ► We find and classify all 2D homological stabilizer codes. ► We find optimal codes among the homological stabilizer codes.

  20. Dietary arginine depletion reduces depressive-like responses in male, but not female, mice.

    Science.gov (United States)

    Workman, Joanna L; Weber, Michael D; Nelson, Randy J

    2011-09-30

    Previous behavioral studies have manipulated nitric oxide (NO) production either by pharmacological inhibition of its synthetic enzyme, nitric oxide synthase (NOS), or by deletion of the genes that code for NOS. However manipulation of dietary intake of the NO precursor, L-arginine, has been understudied in regard to behavioral regulation. L-Arginine is a common amino acid present in many mammalian diets and is essential during development. In the brain L-arginine is converted into NO and citrulline by the enzyme, neuronal NOS (nNOS). In Experiment 1, paired mice were fed a diet comprised either of an L-arginine-depleted, L-arginine-supplemented, or standard level of L-arginine during pregnancy. Offspring were continuously fed the same diets and were tested in adulthood in elevated plus maze, forced swim, and resident-intruder aggression tests. L-Arginine depletion reduced depressive-like responses in male, but not female, mice and failed to significantly alter anxiety-like or aggressive behaviors. Arginine depletion throughout life reduced body mass overall and eliminated the sex difference in body mass. Additionally, arginine depletion significantly increased corticosterone concentrations, which negatively correlated with time spent floating. In Experiment 2, adult mice were fed arginine-defined diets two weeks prior to and during behavioral testing, and again tested in the aforementioned tests. Arginine depletion reduced depressive-like responses in the forced swim test, but did not alter behavior in the elevated plus maze or the resident intruder aggression test. Corticosterone concentrations were not altered by arginine diet manipulation in adulthood. These results indicate that arginine depletion throughout development, as well as during a discrete period during adulthood ameliorates depressive-like responses. These results may yield new insights into the etiology and sex differences of depression. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Evaluation 2 of B10 depletion in the WH PWR

    International Nuclear Information System (INIS)

    Park, Sang Won; Woo, Hae Suk; Kim, Sun Doo; Chae, Hee Dong; Myung, Sun Yup; Jang, Ju Kyung

    2001-01-01

    This paper presents the methodology to evaluate the B 10 depletion behavior in the pressurized water reactor. The B 10 depletion evaluation is performed based on the prediction program and the measured B 10 data. The result shows that B 10 depletion during normal operation is not negligible. Therefore, adjustments for this depletion effect should be made to calculate the estimated critical position (ECP) and to determine the boron concentration required to maintain the specified shutdown margin
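
    For background, the simplest single-region burnout model of this effect (an illustrative assumption, not the prediction program used in the paper) depletes the B-10 atom density under an effective one-group flux as

    $$ \frac{dN_{10}}{dt} = -\,\sigma_{a,10}\,\phi\,N_{10} \quad\Rightarrow\quad N_{10}(t) = N_{10}(0)\,e^{-\sigma_{a,10}\,\phi\, t}, $$

    so the B-10 atom fraction of the soluble boron falls with accumulated fluence; in an operating plant, makeup and letdown of boric acid partially offset this, which is why measured B-10 data are combined with the prediction program.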

  2. The depletion potential in one, two and three dimensions

    Indian Academy of Sciences (India)

    Abstract. We study the behavior of the depletion potential in binary mixtures of hard particles in one, two, and three dimensions within the framework of a general theory for depletion potential using density functional theory. By doing so we extend earlier studies of the depletion potential in three dimensions to the cases of d ...

  3. 26 CFR 1.642(e)-1 - Depreciation and depletion.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Depreciation and depletion. 1.642(e)-1 Section 1... (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(e)-1 Depreciation and depletion. An estate or trust is allowed the deductions for depreciation and depletion, but only to the extent the...

  4. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611...

  5. High-energy-ion depletion in the charge exchange spectrum of Alcator C

    International Nuclear Information System (INIS)

    Schissel, D.P.

    1982-01-01

    A three-dimensional, guiding center, Monte Carlo code is developed to study ion orbits in Alcator C. The highly peaked ripple of the magnetic field of Alcator is represented by an analytical expression for the vector potential. The analytical ripple field is compared to the resulting magnetic field generated by a current model of the toroidal plates; agreement is excellent. Ion-ion scattering is simulated by a pitch angle and an energy scattering operator. The equations of motion are integrated with a variable-time-step extrapolating integrator. The code produces collisionless banana and ripple trapped loss cones which agree well with present theory. Global energy distributions have been calculated and show a slight depletion above 8.5 keV. Particles which are ripple trapped and lost are at energies below where depletion is observed. It is found that ions pitch-angle scatter less as energy is increased. The result is that, when viewed in velocity space, ions form probability lobes shaped like mouse ears, which are fattest near the thermal energy. Therefore, particles enter the loss cone at low energies near the bottom of the cone. Recommendations for future work include improving the analytic model of the ripple field, testing the effect of ∇·B ≠ 0 on ion orbits, and improving the efficiency of the code by either using a spline fit for the magnetic fields or by creating a vectorized Monte Carlo code
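
    As an illustration of the kind of pitch-angle scattering operator described above, a minimal Monte Carlo update of the standard Boozer/Kuo-Petravic form is sketched below; the deflection frequency, time step, and random-number handling are placeholder assumptions, not the thesis implementation.

```python
# A minimal sketch of a Monte Carlo pitch-angle scattering step of the
# Boozer/Kuo-Petravic type.  The deflection frequency nu_d and the time
# step dt are placeholder values chosen for the example only.
import numpy as np

rng = np.random.default_rng(1)

def pitch_angle_scatter(lam, nu_d, dt):
    """Advance the pitch lambda = v_parallel / v by one collisional step."""
    sign = rng.choice([-1.0, 1.0])
    lam_new = lam * (1.0 - nu_d * dt) + sign * np.sqrt((1.0 - lam**2) * nu_d * dt)
    return np.clip(lam_new, -1.0, 1.0)  # keep |lambda| <= 1

# Example: follow one ion's pitch over many scattering steps.
lam, nu_d, dt = 0.5, 1.0e3, 1.0e-6      # illustrative numbers only
for _ in range(10000):
    lam = pitch_angle_scatter(lam, nu_d, dt)
print(lam)
```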

  6. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    Science.gov (United States)

    Asoka-Kumar, P.; Leung, T. C.; Lynn, K. G.; Nielsen, B.; Forcier, M. P.; Weinberg, Z. A.; Rubloff, G. W.

    1992-06-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions.
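
    A minimal sketch of the kind of implantation profile such beam-energy fits typically assume is given below. The "derivative-gaussian" profile corresponds to a Makhovian shape with m = 2, and the empirical mean-depth parameters used here are commonly quoted literature values adopted only for illustration; the fit parameters of the paper itself may differ.

```python
# Sketch of a Makhovian (m = 2, "derivative of a Gaussian") positron
# implantation profile.  A and n are commonly quoted empirical values,
# used here as illustrative assumptions.
import numpy as np

def mean_depth_nm(E_keV, rho_g_cm3, A=40.0, n=1.6):
    """Mean implantation depth z_bar ~ (A / rho) * E**n, in nm."""
    return (A / rho_g_cm3) * E_keV**n

def makhov_profile(z_nm, z_bar_nm):
    """P(z) = (2 z / z0^2) exp(-(z/z0)^2), with z0 = 2 z_bar / sqrt(pi)."""
    z0 = 2.0 * z_bar_nm / np.sqrt(np.pi)
    return (2.0 * z_nm / z0**2) * np.exp(-(z_nm / z0) ** 2)

# Example: implantation profile in silicon (rho ~ 2.33 g/cm3) at 5 keV.
z = np.linspace(0.0, 2000.0, 500)             # depth grid in nm
p = makhov_profile(z, mean_depth_nm(5.0, 2.33))
print(p.sum() * (z[1] - z[0]))                # ~1: the profile is normalized
```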

  7. Positron annihilation studies in the field induced depletion regions of metal-oxide-semiconductor structures

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Leung, T.C.; Lynn, K.G.; Nielsen, B.; Forcier, M.P.; Weinberg, Z.A.; Rubloff, G.W.

    1992-01-01

    The centroid shifts of positron annihilation spectra are reported from the depletion regions of metal-oxide-semiconductor (MOS) capacitors at room temperature and at 35 K. The centroid shift measurement can be explained using the variation of the electric field strength and depletion layer thickness as a function of the applied gate bias. An estimate for the relevant MOS quantities is obtained by fitting the centroid shift versus beam energy data with a steady-state diffusion-annihilation equation and a derivative-gaussian positron implantation profile. Inadequacy of the present analysis scheme is evident from the derived quantities and alternate methods are required for better predictions

  8. Code stroke in Asturias.

    Science.gov (United States)

    Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S

    2016-04-01

    Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but less than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We distributed the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists and providing IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  9. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  10. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  11. Automatic code generation in practice

    DEFF Research Database (Denmark)

    Adam, Marian Sorin; Kuhrmann, Marco; Schultz, Ulrik Pagh

    2016-01-01

    Mobile robots often use a distributed architecture in which software components are deployed to heterogeneous hardware modules. Ensuring the consistency with the designed architecture is a complex task, notably if functional safety requirements have to be fulfilled. We propose to use a domain-specific language to specify those requirements and to allow for generating a safety-enforcing layer of code, which is deployed to the robot. The paper at hand reports experiences in practically applying code generation to mobile robots. For two cases, we discuss how we addressed challenges, e.g., regarding weaving code generation into proprietary development environments and testing of manually written code. We find that a DSL based on the same conceptual model can be used across different kinds of hardware modules, but a significant adaptation effort is required in practical scenarios involving different kinds...

  12. Characterization of depleted uranium oxides fabricated using different processing methods

    International Nuclear Information System (INIS)

    Hastings, E.P.; Lewis, C.; FitzPatrick, J.; Rademacher, D.; Tandon, L.

    2008-01-01

    Identifying both physical and chemical characteristics of Special Nuclear Material (SNM) production processes is the cornerstone of nuclear forensics. Typically, processing markers are based on measuring an interdicted sample's bulk chemical properties, such as the elemental or isotopic composition, or focusing on the chemical and physical morphology of only a few particles. Therefore, it is imperative that known SNM processes be fully characterized from bulk to trace level for each particle size range. This report outlines a series of particle size measurements and fractionation techniques that can be applied to bulk SNM powders, categorizing both chemical and physical properties in discrete particle size fractions. This will be demonstrated by characterizing the process signatures of a series of different depleted uranium oxides prepared at increasing firing temperatures (350-1100 deg C). Results will demonstrate how each oxide's material density, particle size distribution, and morphology vary. (author)

  13. Potential For Stratospheric Ozone Depletion During Carboniferous

    Science.gov (United States)

    Bill, M.; Goldstein, A. H.

    Methyl bromide (CH3Br) constitutes the largest source of bromine atoms to the stratosphere whereas methyl chloride (CH3Cl) is the most abundant halocarbon in the troposphere. Both gases play an important role in stratospheric ozone depletion. For instance, Br coupled reactions are responsible for 30 to 50 % of total ozone loss in the polar vortex. Currently, the largest natural sources of CH3Br and CH3Cl appear to be biological production in the oceans, inorganic production during biomass burning and plant production in salt marsh ecosystems. Variations of paleofluxes of CH3Br and CH3Cl can be estimated by analyses of oceanic paleoproductivity, stratigraphic analyses of frequency and distribution of fossil charcoal indicating the occurrence of wildfires, and/or by paleoreconstruction indicating the extent of salt marshes. During the lower Carboniferous time (Tournaisian-Visean), the southern margin of the Laurasian continent was characterized by charcoal deposits. Estimation on frequency of charcoal layers indicates that wildfires occur in a range of 3-35 years (Falcon-Lang 2000). This suggests that biomass burning could be an important source of CH3Br and CH3Cl during Tournaisian-Visean time. During Tournaisian and until Meramecian time, carbon and oxygen isotope records have short term oscillations (Bruckschen et al. 1999, Mii et al. 1999). Chesterian time (mid-Carboniferous) is marked by an increase in delta18O values ( ~ 2 permil) and an increase of glacial deposit frequency suggesting lower temperatures. The occurrence of glacial deposits over the paleopole suggests polar conditions and the associated special features of polar meteorology such as strong circumpolar wind in the stratosphere (polar vortex) and polar stratospheric clouds. Thus, conditions leading to polar stratospheric ozone depletion can be found. Simultaneously an increase in delta13C values is documented. We interpret the positive shift in delta13C as a result of higher bioproductivity

  14. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Background: Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they have generally escaped detection by comparative genomics approaches. Results: We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion: Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  15. A modern depleted uranium manufacturing facility

    International Nuclear Information System (INIS)

    Zagula, T.A.

    1995-07-01

    The Specific Manufacturing Capabilities (SMC) Project located at the Idaho National Engineering Laboratory (INEL) and operated by Lockheed Martin Idaho Technologies Co. (LMIT) for the Department of Energy (DOE) manufactures depleted uranium for use in the U.S. Army M1A2 Abrams Heavy Tank Armor Program. Since 1986, SMC has fabricated more than 12 million pounds of depleted uranium (DU) products in a multitude of shapes and sizes with varying metallurgical properties while maintaining security, environmental, health and safety requirements. During initial facility design in the early 1980s, emphasis on employee safety, radiation control and environmental consciousness was gaining momentum throughout the DOE complex. This fact coupled with security and production requirements forced design efforts to focus on incorporating automation, local containment and computerized material accountability at all work stations. The result was a fully automated production facility engineered to manufacture DU armor packages with virtually no human contact while maintaining security, traceability and quality requirements. This hands-off approach to handling depleted uranium resulted in minimal radiation exposures and employee injuries. Construction of the manufacturing facility was complete in early 1986 with the first armor package certified in October 1986. Rolling facility construction was completed in 1987 with the first certified plate produced in the fall of 1988. Since 1988 the rolling and manufacturing facilities have delivered more than 2600 armor packages on schedule with 100% final product quality acceptance. During this period there was an annual average of only 2.2 lost time incidents and a single individual maximum radiation exposure of 150 mrem. SMC is an example of designing and operating a facility that meets regulatory requirements with respect to national security, radiation control and personnel safety while achieving production schedules and product quality

  16. A Critical Assessment of the Resource Depletion Potential of Current and Future Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Jens F. Peters

    2016-12-01

    Resource depletion aspects are repeatedly used as an argument for a shift towards new battery technologies. However, whether serious shortages due to the increased demand for traction and stationary batteries can actually be expected is subject to an ongoing discussion. In order to identify the principal drivers of resource depletion for battery production, we assess different lithium-ion battery types and a new lithium-free battery technology (sodium-ion) under this aspect, applying different assessment methodologies. The findings show that very different results are obtained with existing impact assessment methodologies, which hinders clear interpretation. While cobalt, nickel and copper can generally be considered as critical metals, the magnitude of their depletion impacts in comparison with that of other battery materials like lithium, aluminum or manganese differs substantially. A high importance is also found for indirect resource depletion effects caused by the co-extraction of metals from mixed ores. Remarkably, the resource depletion potential per kg of produced battery is driven only partially by the electrode materials and thus depends comparably little on the battery chemistry itself. One of the key drivers for resource depletion seems to be the metals (and co-products) in electronic parts required for the battery management system, a component rather independent from the actual battery chemistry. However, when assessing the batteries on a capacity basis (per kWh storage capacity), a high energy density also turns out to be relevant, since it reduces the mass of battery required for providing one kWh, and thus the associated resource depletion impacts.

  17. Depleted uranium concrete container feasibility study

    International Nuclear Information System (INIS)

    Haelsig, R.T.

    1994-09-01

    The purpose of this report is to consider the feasibility of using containers constructed of depleted uranium aggregate concrete (DUCRETE) to store and transport radioactive materials. The method for this study was to review the advantages and disadvantages of DUCRETE containers considering design requirements for potential applications. The author found that DUCRETE is a promising material for onsite storage containers, provided DUCRETE vessels can be certified for one-way transport to disposal sites. The author also found that DUCRETE multipurpose spent nuclear fuel storage/transport packages are technically viable, provided altered temperature acceptance limits can be developed for DUCRETE

  18. Capstone Depleted Uranium Aerosols: Generation and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Parkhurst, MaryAnn; Szrom, Fran; Guilmette, Ray; Holmes, Tom; Cheng, Yung-Sung; Kenoyer, Judson L.; Collins, John W.; Sanderson, T. Ellory; Fliszar, Richard W.; Gold, Kenneth; Beckman, John C.; Long, Julie

    2004-10-19

    In a study designed to provide an improved scientific basis for assessing possible health effects from inhaling depleted uranium (DU) aerosols, a series of DU penetrators was fired at an Abrams tank and a Bradley fighting vehicle. A robust sampling system was designed to collect aerosols in this difficult environment and continuously monitor the sampler flow rates. Aerosols collected were analyzed for uranium concentration and particle size distribution as a function of time. They were also analyzed for uranium oxide phases, particle morphology, and dissolution in vitro. The resulting data provide input useful in human health risk assessments.

  19. Pricing of Water Resources With Depletable Externality: The Effects of Pollution Charges

    Science.gov (United States)

    Kitabatake, Yoshifusa

    1990-04-01

    With an abstraction of a real-world situation, the paper views water resources as a depletable capital asset which yields a stream of services such as water supply and the assimilation of pollution discharge. The concept of the concave or convex water resource depletion function is then introduced and applied to a general two-sector, three-factor model. The main theoretical contribution is to prove that when the water resource depletion function is a concave rather than a convex function of pollution, it is more likely that gross regional income will increase with a higher pollution charge policy. The concavity of the function is meant to imply that with an increase in pollution released, the ability of supplying water at a certain minimum quality level diminishes faster and faster. A numerical example is also provided.

  20. Effects of drop testing on scale model shipping containers shielded with depleted uranium

    International Nuclear Information System (INIS)

    Butler, T.A.

    1980-02-01

    Three scale model shipping containers shielded with depleted uranium were dropped onto an essentially unyielding surface from various heights to determine their margins to failure. This report presents the results of a thorough posttest examination of the models to check for basic structural integrity, shielding integrity, and deformations. Because of unexpected behavior exhibited by the depleted uranium shielding, several tests were performed to further characterize its mechanical properties. Based on results of the investigations, recommendations are made for improved container design and for applying the results to full-scale containers. Even though the specimens incorporated specific design features, the results of this study are generally applicable to any container design using depleted uranium

  1. Applied Electromagnetics

    Energy Technology Data Exchange (ETDEWEB)

    Yamashita, H; Marinova, I; Cingoski, V [eds.]

    2002-07-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics.

  2. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  3. Depletion of elements in shock-driven gas

    International Nuclear Information System (INIS)

    Gondhalekar, P.M.

    1985-01-01

    The depletion of elements in shocked gas in supernova remnants and in interstellar bubbles is examined. It is shown that elements are depleted in varying degrees in gas filaments shocked to velocities up to 200 km s⁻¹ and that large differences in depletions are observed in gas filaments shocked to similar velocities. In the shocked gas the depletion of an element appears to be correlated with the electron density (or the neutral gas density) in the filaments. This correlation, if confirmed, is similar to the correlation between depletion and mean density of gas in the clouds in interstellar space. (author)

  4. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time averaged measure (e.g. a rate code), or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which permits to distinguish what scales of a signal are dominated by a complex temporal organization or a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect temporal scales at which a time patterns coding scheme or alternatively a rate code are present. Additionally, finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code can be established, and hence the word length of the code can be found. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna from a human patient with Parkinson’s disease, and show that a time pattern coding and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
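
    A minimal sketch of one common temporal structure function, applied to an interspike-interval series, is shown below; the surrogate data and the particular definition are illustrative assumptions rather than the authors' exact algorithm.

```python
# Sketch of a temporal structure function on a sequence of interspike
# intervals (ISIs): S_q(tau) = <|x[i+tau] - x[i]|**q>, one common
# definition used here only to illustrate scale-by-scale analysis.
import numpy as np

def structure_function(x, taus, q=2):
    """Return S_q(tau) for each lag tau (in samples) of the series x."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean(np.abs(x[tau:] - x[:-tau]) ** q) for tau in taus])

# Example with a surrogate ISI series (milliseconds); real recordings would
# come from spike-sorted data.
rng = np.random.default_rng(2)
isi = 10.0 + np.cumsum(rng.normal(0.0, 0.5, 5000)) * 0.01 + rng.normal(0.0, 1.0, 5000)
taus = np.arange(1, 200)
S2 = structure_function(isi, taus, q=2)
# A plateau of S2 versus tau suggests randomness (rate-code-like behavior)
# at those scales, while systematic growth indicates temporal structure.
print(S2[:5])
```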

  5. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  6. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  7. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  8. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
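
    As a reference point for the scheme described above, a minimal sketch of plain LT-style encoding is given below; the toy degree distribution and symbol sizes are assumptions for illustration, not the feedback-adapted distributions proposed in the paper.

```python
# Minimal sketch of LT-style rateless encoding: each coded symbol is the
# XOR of d randomly chosen source symbols, with d drawn from a degree
# distribution.  The distribution below is a toy placeholder.
import random

def lt_encode_symbol(source, degree_dist, rng):
    """Return (neighbor_indices, coded_value) for one LT output symbol."""
    degrees, probs = zip(*degree_dist)
    d = rng.choices(degrees, weights=probs, k=1)[0]
    neighbors = rng.sample(range(len(source)), d)
    value = 0
    for i in neighbors:
        value ^= source[i]          # XOR of the selected source symbols
    return neighbors, value

rng = random.Random(3)
source = [rng.randrange(256) for _ in range(16)]        # 16 one-byte symbols
degree_dist = [(1, 0.1), (2, 0.5), (3, 0.3), (4, 0.1)]  # toy distribution
coded = [lt_encode_symbol(source, degree_dist, rng) for _ in range(24)]
print(coded[0])
```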

  9. Procedure and code for calculating black control rods taking into account epithermal absorption, code CAS-1

    International Nuclear Information System (INIS)

    Martinc, R.; Trivunac, N.; Zivkovic, Z.

    1964-12-01

    This report describes the computer code CAS-1 and the calculation method and procedure applied for calculating black control rods taking into account epithermal neutron absorption. Results obtained with the supercell method applied to a regular lattice reflected in the multiplying medium are included in this report, in addition to the computer code manual

  10. GAMERA - The New Magnetospheric Code

    Science.gov (United States)

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high-order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single-processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.

  11. Deoxyribonucleoside kinases in mitochondrial DNA depletion.

    Science.gov (United States)

    Saada-Reisch, Ann

    2004-10-01

    Mitochondrial DNA (mtDNA) depletion syndromes (MDS) are a heterogeneous group of mitochondrial disorders, manifested by a decreased mtDNA copy number and respiratory chain dysfunction. Primary MDS are inherited autosomally and may affect a single organ or multiple tissues. Mutated mitochondrial deoxyribonucleoside kinases, deoxyguanosine kinase (dGK) and thymidine kinase 2 (TK2), were associated with the hepatocerebral and myopathic forms of MDS, respectively. dGK and TK2 are key enzymes in the mitochondrial nucleotide salvage pathway, providing the mitochondria with deoxyribonucleotides (dNP) essential for mtDNA synthesis. Although the mitochondrial dNP pool is physically separated from the cytosolic one, dNPs may still be imported through specific transport. Non-replicating tissues, where cytosolic dNP supply is downregulated, are thus particularly vulnerable to dGK and TK2 deficiency. The overlapping substrate specificity of deoxycytidine kinase (dCK) may explain the relative sparing of muscle in dGK deficiency, while low basal TK2 activity renders this tissue susceptible to TK2 deficiency. The precise pathophysiological mechanisms of mtDNA depletion due to dGK and TK2 deficiencies remain to be determined, though recent findings confirm that it is attributed to imbalanced dNTP pools.

  12. Bone marrow scintigraphy in hemopoietic depletion states

    International Nuclear Information System (INIS)

    Fortynova, J.; Bakos, K.; Pradacova, J.

    1981-01-01

    Bone marrow scintigraphy was performed in 29 patients with hemopoietic depletion states of various etiology. Two tracers were used for visualization, viz., sup(99m)Tc-sulfur-colloid and 111 InCl 3 ;some patients were examined using both indicators. 111 InCl 3 is bound to transferrin and is adsorbed on the surface of reticulocytes and erythroblasts. A scintillation camera PHO GAMMA SEARLE IV fitted with a moving table and computer CLINCOM were used to obtain whole-body images. The comparison of all scans and marrow puncture smears was done. In patients with aplastic anemia with both hyperplastic or hypoplastic marrow good correlation of bone marrow scans and sternal puncture smears was found. In several cases the scintigraphic examination helped to establish the diagnosis of marrow depletion. A peculiar disadvantage of the imaging method with either sup(99m)Tc-sulfur-colloid or 111 InCl 3 is that it shows the disorders in erythropoietic and reticuloendothelial cells whereas the defects in myelopoietic cell series and platelet precursors are not provable. According to literature data, great attention is paid to the prognostic value of scintigraphic examination in aplastic anemia. (author)

  13. Bone marrow scintigraphy in hemopoietic depletion states

    Energy Technology Data Exchange (ETDEWEB)

    Fortynova, J. (Ustav Hematologie a Krevni Transfuze, Prague (Czechoslovakia)); Bakos, K.; Pradacova, J. (Karlova Univ., Prague (Czechoslovakia). Biofyzikalni Ustav)

    1981-01-01

    Bone marrow scintigraphy was performed in 29 patients with hemopoietic depletion states of various etiology. Two tracers were used for visualization, viz., sup(99m)Tc-sulfur-colloid and /sup 111/InCl/sub 3/; some patients were examined using both indicators. /sup 111/InCl/sub 3/ is bound to transferrin and is adsorbed on the surface of reticulocytes and erythroblasts. A scintillation camera PHO GAMMA SEARLE IV fitted with a moving table and computer CLINCOM were used to obtain whole-body images. The comparison of all scans and marrow puncture smears was done. In patients with aplastic anemia with both hyperplastic or hypoplastic marrow good correlation of bone marrow scans and sternal puncture smears was found. In several cases the scintigraphic examination helped to establish the diagnosis of marrow depletion. A peculiar disadvantage of the imaging method with either sup(99m)Tc-sulfur-colloid or /sup 111/InCl/sub 3/ is that it shows the disorders in erythropoietic and reticuloendothelial cells whereas the defects in myelopoietic cell series and platelet precursors are not provable. According to literature data, great attention is paid to the prognostic value of scintigraphic examination in aplastic anemia.

  14. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  15. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  16. Mantle depletion and metasomatism recorded in orthopyroxene in highly depleted peridotites

    DEFF Research Database (Denmark)

    Scott, James; Liu, Jingao; Pearson, D. Graham

    2016-01-01

    Although trace element concentrations in clinopyroxene serve as a useful tool for assessing the depletion and enrichment history of mantle peridotites, this is not applicable for peridotites in which the clinopyroxene component has been consumed (~ 25% partial melting). Orthopyroxene persists in ...

  17. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec

  18. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

    The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. And second, DPT was extended to the constrained equilibrium cycle which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation
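
    A schematic of the adjoint-based sensitivity idea (generic first-order perturbation theory for a linear fixed-source problem, stated as background rather than the full coupled neutron/nuclide DPT formulation of this work) is: for a forward problem $M(\alpha)\,N = s(\alpha)$ and a response $R = \langle h, N\rangle$, solving the adjoint problem $M^{\dagger}N^{*} = h$ gives

    $$ \frac{dR}{d\alpha} \;\approx\; \frac{\partial R}{\partial \alpha} \;+\; \Big\langle N^{*},\; \frac{\partial s}{\partial \alpha} \;-\; \frac{\partial M}{\partial \alpha}\,N \Big\rangle, $$

    so the sensitivities of one response to all input parameters follow from a single forward and a single adjoint calculation, consistent with the computational cost noted above.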

  19. Solution of the isotopic depletion equation using decomposition method and analytical solution

    Energy Technology Data Exchange (ETDEWEB)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: fprata@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, describing the spatial distribution of neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within nuclear reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
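
    The building block behind such analytical depletion solutions is the Bateman-type solution of a short chain; a minimal sketch for a two-member chain A to B (with placeholder rate constants, not the paper's full chain with (n,2n) feedback) is shown below.

```python
# Sketch of the analytical (Bateman-type) solution for the simplest
# two-member depletion chain A -> B, with effective removal constants
# lam = decay constant + sigma*phi.  The numbers are placeholders.
import math

def two_member_chain(NA0, NB0, lamA, lamB, t):
    """Return (N_A(t), N_B(t)) for the chain A -> B, lamA != lamB."""
    NA = NA0 * math.exp(-lamA * t)
    NB = (NB0 * math.exp(-lamB * t)
          + NA0 * lamA / (lamB - lamA) * (math.exp(-lamA * t) - math.exp(-lamB * t)))
    return NA, NB

# Illustrative numbers only (rate constants in 1/s, time in s).
print(two_member_chain(1.0e20, 0.0, 3.0e-9, 1.0e-8, 3.0e7))
```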

  20. Solution of the isotopic depletion equation using decomposition method and analytical solution

    International Nuclear Information System (INIS)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S.

    2011-01-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of the major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that uses the nodal expansion method (NEM) to solve the neutron diffusion equation, which describes the spatial distribution of the neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)

  1. Radiological assessment of depleted uranium migration offsite from an ordnance range

    International Nuclear Information System (INIS)

    Rynders, D.G.

    1996-01-01

    The military utilizes ordnance loaded with depleted uranium in order to maximize armor penetrating capabilities. These weapons are tested on open ranges where the weapons are fired through a cloth target and impact into the soil. This paper examines the potential environmental impact from use of depleted uranium in an open setting. A preliminary pathway analysis was performed to examine potential routes of exposure to nonhuman species in the vicinity and ultimately to man. Generic data was used in the study to estimate the isotopic mix and weight of the ordnance. Key factors in the analysis included analyzing the physics of weapon impact on soil, chemical changes in material upon impact, and mechanisms of offsite transport (including atmospheric and overland transport). Non-standard exposure scenarios were investigated, including the possibility of offsite contaminant transport due to range grassfires. Two radiological assessment codes, MEPAS (Multi media Environmental Pollutant Assessment System) and RESRAD were used to help analyze the scenarios

  2. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    This paper describes a modification of a well-known image coding algorithm, Block Truncation Coding (BTC), and its application to audio signal coding. The BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to audio signals presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing quantizers better suited to audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied to audio signal coding.
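
    For orientation, here is a minimal sketch of the moment-preserving two-level quantization at the heart of classic BTC, transplanted to a 1-D audio frame; this is the textbook BTC rule, not the paper's modified quantizers, and all function names are mine.

    ```python
    import numpy as np

    def btc_encode_block(block):
        """Classic BTC side information for one block: mean, standard deviation,
        and a 1-bit map telling which samples lie at or above the mean."""
        m = block.mean()
        s = block.std()
        bitmap = block >= m
        return m, s, bitmap

    def btc_decode_block(m, s, bitmap):
        """Two reconstruction levels chosen so that the block mean and variance
        are preserved (the moment-preserving BTC rule)."""
        n = bitmap.size
        q = int(bitmap.sum())
        if q == 0 or q == n or s == 0.0:          # flat block: nothing to split
            return np.full(n, m)
        low = m - s * np.sqrt(q / (n - q))
        high = m + s * np.sqrt((n - q) / q)
        return np.where(bitmap, high, low)

    frame = np.random.randn(16)                    # a 16-sample audio frame
    reconstructed = btc_decode_block(*btc_encode_block(frame))
    ```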

  3. Deuterium-depleted water. Romanian achievements and perspective

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Saros-Rogobete, Irina; Titescu, Gheorghe

    2001-01-01

    Deuterium-depleted water has an isotopic content smaller than 145 ppm D/(D+H), which is the natural isotopic content of water. Beginning in 1996, ICSI Rm. Valcea, a producer of deuterium-depleted water, co-operated with specialized Romanian institutes to evaluate the biological effects of deuterium-depleted water. These investigations led to the following conclusions: - Deuterium-depleted water caused a tendency towards an increase of the basal tonus, accompanied by the intensification of the vasoconstrictor effects of phenylephrine, noradrenaline and angiotensin; the increase of the basal tonus and vascular reactivity produced by the deuterium-depleted water persists after the removal of the vascular endothelium; - Animals treated with deuterium-depleted water showed an increase of resistance to both sublethal and lethal gamma radiation doses, suggesting a radioprotective action; - Deuterium-depleted water stimulates immune defence reactions and increases the number of polymorphonuclear neutrophils; - Investigations regarding artificial reproduction of fish with deuterium-depleted water fecundated solutions confirmed a favourable influence on the embryo growth stage and on resistance in subsequent growth stages; - Germination, growth and the variability of quantitative characters in plants were studied; one can remark the favourable influence of deuterium-depleted water on biological processes in plants in various ontogenetic stages; - The deuterium depletion of seawater produces a diminution of the water spectral energy, related to an increased metabolism of Tetraselmis suecica. (authors)

  4. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP) and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  5. Depletion benchmarks calculation of random media using explicit modeling approach of RMC

    International Nuclear Information System (INIS)

    Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan

    2016-01-01

    Highlights: • Explicit modeling of RMC is applied to depletion benchmark for HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of adjacent burnup regions combination is proposed for full-core problems. • The combination method can reduce memory footprint, keeping the computing accuracy. - Abstract: Monte Carlo method plays an important role in accurate simulation of random media, owing to its advantages of the flexible geometry modeling and the use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods including Random Lattice Method, Chord Length Sampling and explicit modeling approach with mesh acceleration technique, have been implemented in RMC to simulate the particle transport in the dispersed fuels, in which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for HTGR fuel element, and the method of combination of adjacent burnup regions has been proposed and investigated. The results show that the explicit modeling can provide detailed burnup distribution of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.

  6. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64-bit Mac, Linux, and Windows.

  7. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Among the earliest discovered codes that approach the Shannon limit of the channel were the low-density parity-check (LDPC) codes. The term low density arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
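
    As a small illustration of the parity-check matrix mentioned above (a toy example of my own, far shorter and denser than a real LDPC code), a word is accepted exactly when every parity check is satisfied, i.e. when its syndrome is all zero over GF(2).

    ```python
    import numpy as np

    # Toy parity-check matrix H: each row is one parity check on the 6-bit word.
    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]])

    def syndrome(H, word):
        """All-zero syndrome means the word satisfies every check (is a codeword)."""
        return H @ word % 2

    codeword = np.array([1, 0, 1, 1, 1, 0])
    print(syndrome(H, codeword))                             # -> [0 0 0]
    print(syndrome(H, codeword ^ np.eye(6, dtype=int)[2]))   # single bit flip -> nonzero
    ```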

  8. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  9. Network Coding Over The 232

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter

    2013-01-01

    Creating efficient finite field implementations has been an active research topic for several decades. Many applications in areas such as cryptography, signal processing, erasure coding and now also network coding depend on this research to deliver satisfactory performance. In this paper we...... from a benchmark application written in C++. These results are finally compared to different binary and binary extension field implementations. The results show that the prime field implementation offers a large field size while maintaining a very good performance. We believe that using prime fields...

  10. Tylosin depletion from edible pig tissues.

    Science.gov (United States)

    Prats, C; El Korchi, G; Francesch, R; Arboix, M; Pérez, B

    2002-12-01

    The depletion of tylosin from edible pig tissues was studied following 5 days of intramuscular (i.m.) administration of 10 mg/kg of tylosin to 16 crossbreed pigs. Animals were slaughtered at intervals after treatment and samples of muscle, kidney, liver, skin+fat, and injection site were collected and analysed by high-performance liquid chromatography (HPLC). Seven days after the completion of treatment, the concentration of tylosin in kidney, skin+fat, and at the injection site was higher than the European Union maximal residue limit (MRL) of 100 microg/kg. Tylosin residues in all tissues were below the quantification limit (50 microg/kg) at 10 and 14 days post-treatment.

  11. Ozone depletion following future volcanic eruptions

    Science.gov (United States)

    Eric Klobas, J.; Wilmouth, David M.; Weisenstein, Debra K.; Anderson, James G.; Salawitch, Ross J.

    2017-07-01

    While explosive volcanic eruptions cause ozone loss in the current atmosphere due to an enhancement in the availability of reactive chlorine following the stratospheric injection of sulfur, future eruptions are expected to increase total column ozone as halogen loading approaches preindustrial levels. The timing of this shift in the impact of major volcanic eruptions on the thickness of the ozone layer is poorly known. Modeling four possible climate futures, we show that scenarios with the smallest increase in greenhouse gas concentrations lead to the greatest risk to ozone from heterogeneous chemical processing following future eruptions. We also show that the presence in the stratosphere of bromine from natural, very short-lived biogenic compounds is critically important for determining whether future eruptions will lead to ozone depletion. If volcanic eruptions inject hydrogen halides into the stratosphere, an effect not considered in current ozone assessments, potentially profound reductions in column ozone would result.

  12. Kinetic depletion model for pellet ablation

    International Nuclear Information System (INIS)

    Kuteev, Boris V.

    2001-11-01

    A kinetic model for the depletion effect, which determines pellet ablation when the pellet passes a rational magnetic surface, is formulated. The model predicts a moderate decrease of the ablation rate compared with the earlier considered monoenergy versions [1, 2]. For typical T-10 conditions the ablation rate is reduced by a factor of 2.5 when the 1-mm pellet penetrates through the plasma center. A substantial deceleration of the pellet, about 15% per centimeter in the low-shear rational-q region, is predicted. Penetration for Low Field Side and High Field Side injections is considered taking into account modification of the electron distribution function by the toroidal magnetic field. It is shown that the Shafranov shift and toroidal effects yield a penetration length for HFS injection that is higher by a factor of 1.5. This fact should be taken into account when plasma-shielding effects on penetration are considered. (author)

  13. ORIGEN-S: scale system module to calculate fuel depletion, actinide transmutation, fission product buildup and decay, and associated radiation source terms

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    ORIGEN-S computes time-dependent concentrations and source terms of a large number of isotopes, which are simultaneously generated or depleted through neutronic transmutation, fission, radioactive decay, input feed rates and physical or chemical removal rates. The calculations may pertain to fuel irradiation within nuclear reactors, or the storage, management, transportation or subsequent chemical processing of removed fuel elements. The matrix exponential expansion model of the ORIGEN code is unaltered in ORIGEN-S. Essentially all features of ORIGEN were retained, expanded or supplemented within new computations. The primary objective of ORIGEN-S, as requested by the Nuclear Regulatory Commission, is that the calculations may utilize the multi-energy-group cross sections from any currently processed standardized ENDF/B data base. This purpose has been implemented through the prior execution of codes within either the SCALE System or the AMPX System, developed at the Oak Ridge National Laboratory. These codes compute flux-weighted cross sections, simulating conditions within any given reactor fuel assembly, and convert the data into a library that can be input to ORIGEN-S. Time-dependent libraries may be produced, reflecting fuel composition variations during irradiation. Presented in the document are: detailed and condensed input instructions, model theory, features available, range of applicability, brief subroutine descriptions, sample input, and I/O requirements. Presently the code is operable on IBM 360/370 computers and may be converted for CDC computers. ORIGEN-S is a functional module in the SCALE System and will be one of the modules invoked in the SAS2 Control Module, presently being developed, or may be applied as a stand-alone program. It can be used in nuclear reactor and processing plant design studies, radiation safety analyses, and environmental assessments
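
    The matrix exponential step that ORIGEN-S inherits from ORIGEN can be sketched in a few lines; the 3-nuclide transition matrix below is purely hypothetical, and the use of SciPy's expm is my own choice rather than the code's internal series expansion.

    ```python
    import numpy as np
    from scipy.linalg import expm

    # Hypothetical transition matrix A (1/s): diagonal terms are total removal
    # rates (decay + transmutation), off-diagonal terms are production rates.
    A = np.array([[-2.0e-9,  0.0,      0.0],
                  [ 1.5e-9, -5.0e-10,  0.0],
                  [ 0.5e-9,  5.0e-10, -1.0e-11]])

    N0 = np.array([1.0e24, 0.0, 0.0])      # initial number densities (1/cm^3)
    dt = 30 * 86400.0                      # one 30-day irradiation step (s)

    N = expm(A * dt) @ N0                  # N(t + dt) = exp(A dt) N(t)
    print(N)
    ```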

  14. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  15. Pathogenic lysosomal depletion in Parkinson's disease.

    Science.gov (United States)

    Dehay, Benjamin; Bové, Jordi; Rodríguez-Muela, Natalia; Perier, Celine; Recasens, Ariadna; Boya, Patricia; Vila, Miquel

    2010-09-15

    Mounting evidence suggests a role for autophagy dysregulation in Parkinson's disease (PD). The bulk degradation of cytoplasmic proteins (including α-synuclein) and organelles (such as mitochondria) is mediated by macroautophagy, which involves the sequestration of cytosolic components into autophagosomes (AP) and its delivery to lysosomes. Accumulation of AP occurs in postmortem brain samples from PD patients, which has been widely attributed to an induction of autophagy. However, the cause and pathogenic significance of these changes remain unknown. Here we found in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine mouse model of PD that AP accumulation and dopaminergic cell death are preceded by a marked decrease in the amount of lysosomes within dopaminergic neurons. Lysosomal depletion was secondary to the abnormal permeabilization of lysosomal membranes induced by increased mitochondrial-derived reactive oxygen species. Lysosomal permeabilization resulted in a defective clearance and subsequent accumulation of undegraded AP and contributed directly to neurodegeneration by the ectopic release of lysosomal proteases into the cytosol. Lysosomal breakdown and AP accumulation also occurred in PD brain samples, where Lewy bodies were strongly immunoreactive for AP markers. Induction of lysosomal biogenesis by genetic or pharmacological activation of lysosomal transcription factor EB restored lysosomal levels, increased AP clearance and attenuated 1-methyl-4-phenylpyridinium-induced cell death. Similarly, the autophagy-enhancer compound rapamycin attenuated PD-related dopaminergic neurodegeneration, both in vitro and in vivo, by restoring lysosomal levels. Our results indicate that AP accumulation in PD results from defective lysosomal-mediated AP clearance secondary to lysosomal depletion. Restoration of lysosomal levels and function may thus represent a novel neuroprotective strategy in PD.

  16. Multi-scale entropic depletion phenomena in polymer liquids

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, Debapriya [Department of Materials Science, University of Illinois, Urbana, Illinois 61801 (United States); Schweizer, Kenneth S., E-mail: kschweiz@illinois.edu [Department of Materials Science, University of Illinois, Urbana, Illinois 61801 (United States); Department of Chemistry, University of Illinois, Urbana, Illinois 61801 (United States); Frederick Seitz Materials Research Laboratory, University of Illinois, Urbana, Illinois 61801 (United States)

    2015-06-07

    We apply numerical polymer integral equation theory to study the entropic depletion problem for hard spheres dissolved in flexible chain polymer melts and concentrated solutions over an exceptionally wide range of polymer radius of gyration to particle diameter ratios (R{sub g}/D), particle-monomer diameter ratios (D/d), and chain lengths (N) including the monomer and oligomer regimes. Calculations are performed based on a calibration of the effective melt packing fraction that reproduces the isobaric dimensionless isothermal compressibility of real polymer liquids. Three regimes of the polymer-mediated interparticle potential of mean force (PMF) are identified and analyzed in depth. (i) The magnitude of the contact attraction that dominates thermodynamic stability scales linearly with D/d and exhibits a monotonic and nonperturbative logarithmic increase with N ultimately saturating in the long chain limit. (ii) A close to contact repulsive barrier emerges that grows linearly with D/d and can attain values far in excess of thermal energy for experimentally relevant particle sizes and chain lengths. This raises the possibility of kinetic stabilization of particles in nanocomposites. The barrier grows initially logarithmically with N, attains a maximum when 2R{sub g} ∼ D/2, and then decreases towards its asymptotic long chain limit as 2R{sub g} ≫ D. (iii) A long range (of order R{sub g}) repulsive, exponentially decaying component of the depletion potential emerges when polymer coils are smaller than, or of order, the nanoparticle diameter. Its amplitude is effectively constant for 2R{sub g} ≤ D. As the polymer becomes larger than the particle, the amplitude of this feature decreases extremely rapidly and becomes negligible. A weak long range and N-dependent component of the monomer-particle pair correlation function is found which is suggested to be the origin of the long range repulsive PMF. Implications of our results for thermodynamics and miscibility are

  17. Bacterial reduction by cell salvage washing and leukocyte depletion filtration.

    Science.gov (United States)

    Waters, Jonathan H; Tuohy, Marion J; Hobson, Donna F; Procop, Gary

    2003-09-01

    Blood conservation techniques are being increasingly used because of the increased cost and lack of availability of allogeneic blood. Cell salvage offers great blood savings opportunities but is thought to be contraindicated in a number of areas (e.g., blood contaminated with bacteria). Several outcome studies have suggested the safety of this technique in trauma and colorectal surgery, but many practitioners are still hesitant to apply cell salvage in the face of frank bacterial contamination. This study was undertaken to assess the efficacy of bacterial removal when cell salvage was combined with leukocyte depletion filtration. Expired packed erythrocytes were obtained and inoculated with a fixed amount of a stock bacteria (Escherichia coli American Type Culture Collections [ATCC] 25922, Pseudomonas aeruginosa ATCC 27853, Staphylococcus aureus ATCC 29213, or Bacteroides fragilis ATCC 25285) in amounts ranging from 2,000 to 4,000 colony forming units/ml. The blood was processed via a cell salvage machine. The washed blood was then filtered using a leukocyte reduction filter. The results for blood taken during each step of processing were compared using a repeated-measures design. Fifteen units of blood were contaminated with each of the stock bacteria. From the prewash sample to the postfiltration sample, 99.0%, 99.6%, 100%, and 97.6% of E. coli, S. aureus, P. aeruginosa, and B. fragilis were removed, respectively. Significant but not complete removal of contaminating bacteria was seen. An increased level of patient safety may be added to cell salvage by including a leukocyte depletion filter when salvaging blood that might be grossly contaminated with bacteria.

  18. Concatenated quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.

    1996-07-01

    One main problem for the future of practical quantum computing is to stabilize the computation against unwanted interactions with the environment and imperfections in the applied operations. Existing proposals for quantum memories and quantum channels require gates with asymptotically zero error to store or transmit an input quantum state for arbitrarily long times or distances with fixed error. This report gives a method which has the property that to store or transmit a qubit with maximum error {epsilon} requires gates with errors at most {ital c}{epsilon} and storage or channel elements with error at most {epsilon}, independent of how long we wish to store the state or how far we wish to transmit it. The method relies on using concatenated quantum codes and hierarchically implemented recovery operations. The overhead of the method is polynomial in the time of storage or the distance of the transmission. Rigorous and heuristic lower bounds for the constant {ital c} are given.
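
    The practical appeal of concatenation is often summarized by the textbook double-exponential error suppression below; this heuristic, with a threshold-like constant roughly of order 1/c, is standard background and not a bound taken from the report itself.

    ```latex
    p_{L} \;\approx\; p_{\mathrm{th}}\left(\frac{p}{p_{\mathrm{th}}}\right)^{2^{L}},
    \qquad p < p_{\mathrm{th}}
    ```

    Each added level of concatenation squares the rescaled error rate while the overhead grows only polynomially, consistent with the polynomial-overhead claim in the abstract.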

  19. Applied number theory

    CERN Document Server

    Niederreiter, Harald

    2015-01-01

    This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas.  Number theory, which Carl-Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc.  Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...

  20. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat

  1. Uncertainty analysis of the FRAP code

    International Nuclear Information System (INIS)

    Peck, S.O.

    1978-01-01

    A user oriented, automated uncertainty analysis capability has been built into the FRAP code (Fuel Rod Analysis Program) and applied to a PWR fuel rod undergoing a LOCA. The method of uncertainty analysis is the Response Surface Method (RSM). (author)

  2. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  3. Critical lengths of error events in convolutional codes

    DEFF Research Database (Denmark)

    Justesen, Jørn

    1994-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  4. Critical Lengths of Error Events in Convolutional Codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Andersen, Jakob Dahl

    1998-01-01

    If the calculation of the critical length is based on the expurgated exponent, the length becomes nonzero for low error probabilities. This result applies to typical long codes, but it may also be useful for modeling error events in specific codes.

  5. Is Ego Depletion Real? An Analysis of Arguments.

    Science.gov (United States)

    Friese, Malte; Loschelder, David D; Gieseler, Karolin; Frankenbach, Julius; Inzlicht, Michael

    2018-03-01

    An influential line of research suggests that initial bouts of self-control increase the susceptibility to self-control failure (ego depletion effect). Despite seemingly abundant evidence, some researchers have suggested that evidence for ego depletion was the sole result of publication bias and p-hacking, with the true effect being indistinguishable from zero. Here, we examine (a) whether the evidence brought forward against ego depletion will convince a proponent that ego depletion does not exist and (b) whether arguments that could be brought forward in defense of ego depletion will convince a skeptic that ego depletion does exist. We conclude that despite several hundred published studies, the available evidence is inconclusive. Both additional empirical and theoretical works are needed to make a compelling case for either side of the debate. We discuss necessary steps for future work toward this aim.

  6. Depletion-induced biaxial nematic states of boardlike particles

    International Nuclear Information System (INIS)

    Belli, S; Van Roij, R; Dijkstra, M

    2012-01-01

    With the aim of investigating the stability conditions of biaxial nematic liquid crystals, we study the effect of adding a non-adsorbing ideal depletant on the phase behavior of colloidal hard boardlike particles. We take into account the presence of the depletant by introducing an effective depletion attraction between a pair of boardlike particles. At fixed depletant fugacity, the stable liquid-crystal phase is determined through a mean-field theory with restricted orientations. Interestingly, we predict that for slightly elongated boardlike particles a critical depletant density exists, where the system undergoes a direct transition from an isotropic liquid to a biaxial nematic phase. As a consequence, by tuning the depletant density, an easy experimental control parameter, one can stabilize states of high biaxial nematic order even when these states are unstable for pure systems of boardlike particles. (paper)
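
    For orientation, the classic Asakura-Oosawa depletion attraction between two hard spheres of radius R_c in an ideal depletant of radius r_d and number density rho_d (a simpler geometry than the boardlike particles treated in the paper, given here only as a point of reference) reads

    ```latex
    \frac{U_{\mathrm{AO}}(r)}{k_{\mathrm{B}}T}
    \;=\; -\,\rho_{d}\,\frac{4\pi}{3}\,(R_{c}+r_{d})^{3}
    \left[1-\frac{3r}{4(R_{c}+r_{d})}+\frac{r^{3}}{16(R_{c}+r_{d})^{3}}\right],
    \qquad 2R_{c}\le r\le 2(R_{c}+r_{d}),
    ```

    and vanishes for larger separations; the attraction strength is thus tuned directly by the depletant density, the control parameter highlighted in the abstract.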

  7. Applied Enzymology.

    Science.gov (United States)

    Manoharan, Asha; Dreisbach, Joseph H.

    1988-01-01

    Describes some examples of chemical and industrial applications of enzymes. Includes a background, a discussion of structure and reactivity, enzymes as therapeutic agents, enzyme replacement, enzymes used in diagnosis, industrial applications of enzymes, and immobilizing enzymes. Concludes that applied enzymology is an important factor in…

  8. Arabic Natural Language Processing System Code Library

    Science.gov (United States)

    2014-06-01

    This technical note provides a brief description of a Java library for Arabic (and also English) natural language processing (NLP), containing code for training and applying the Arabic NLP system described in Stephen Tratz's paper "A Cross-Task Flexible Transition Model for Arabic Tokenization, Affix..."

  9. Control of Single Molecule Fluorescence Dynamics by Stimulated Emission Depletion

    OpenAIRE

    Marsh, R. J.; Osborne, M. A.; Bain, A. J.

    2003-01-01

    The feasibility of manipulating the single molecule absorption-emission cycle using picosecond stimulated emission depletion (STED) is investigated using a stochastic computer simulation. In the simulation the molecule is subjected to repeated excitation and depletion events using time delayed pairs of excitation (PUMP) and depletion (DUMP) pulses derived from a high repetition rate pulsed laser system. The model is used to demonstrate that a significant and even substantial reduction in the ...

  10. Halftone Coding with JBIG2

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    2000-01-01

    The emerging international standard for compression of bi-level images and bi-level documents, JBIG2, provides a mode dedicated for lossy coding of halftones. The encoding procedure involves descreening of the bi-level image into gray-scale, encoding of the gray-scale image, and construction of a halftone pattern dictionary. The decoder first decodes the gray-scale image; then, for each gray-scale pixel, it looks up the corresponding halftone pattern in the dictionary and places it in the reconstruction bitmap at the position corresponding to the gray-scale pixel. The coding method is inherently lossy and care must be taken to avoid introducing artifacts in the reconstructed image. We describe how to apply this coding method for halftones created by periodic ordered dithering, by clustered-dot screening (offset printing), and by techniques which in effect dither with blue noise, e.g., error diffusion.
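
    A minimal sketch of the decoder-side pattern lookup described above; the toy 2x2 patterns and function names are mine, and in real JBIG2 the dictionary is built and transmitted by the encoder.

    ```python
    import numpy as np

    def reconstruct_halftone(gray, patterns):
        """Replace every decoded gray-scale pixel by its dictionary pattern,
        tiling the patterns into the output bi-level bitmap."""
        ph, pw = patterns[0].shape
        h, w = gray.shape
        bitmap = np.zeros((h * ph, w * pw), dtype=np.uint8)
        for i in range(h):
            for j in range(w):
                bitmap[i*ph:(i+1)*ph, j*pw:(j+1)*pw] = patterns[gray[i, j]]
        return bitmap

    # toy dictionary: five gray levels rendered as 2x2 clustered-dot patterns
    patterns = [np.array(p, dtype=np.uint8) for p in (
        [[0, 0], [0, 0]], [[1, 0], [0, 0]], [[1, 0], [0, 1]],
        [[1, 1], [0, 1]], [[1, 1], [1, 1]])]
    gray = np.array([[0, 2], [4, 1]])
    print(reconstruct_halftone(gray, patterns))
    ```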

  11. Generalized perturbation theory for LWR depletion analysis and core design applications

    International Nuclear Information System (INIS)

    White, J.R.; Frank, B.R.

    1986-01-01

    A comprehensive time-dependent perturbation theory formulation that includes macroscopic depletion, thermal-hydraulic and poison feedback effects, and a criticality reset mechanism is developed. The methodology is compatible with most current LWR design codes. This new development allows GPT/DPT methods to be used quantitatively in a variety of realistic LWR physics applications that were not possible prior to this work. A GPT-based optimization technique for in-core fuel management analyses is addressed as a promising application of the new formulation

  12. Glutathione depletion in tissues after administration of buthionine sulphoximine

    International Nuclear Information System (INIS)

    Minchinton, A.I.; Rojas, A.; Smith, A.; Soranson, J.A.; Shrieve, D.C.; Jones, N.R.; Bremner, J.C.

    1984-01-01

    Buthionine sulphoximine (BSO) an inhibitor of glutathione (GSH) biosynthesis, was administered to mice in single and repeated doses. The resultant pattern of GSH depletion was studied in liver, kidney, skeletal muscle and three types of murine tumor. Liver and kidney exhibited a rapid depletion of GSH. Muscle was depleted to a similar level, but at a slower rate after a single dose. All three tumors required repeated administration of BSO over several days to obtain a similar degree of depletion to that shown in the other tissues

  13. Research on using depleted uranium as nuclear fuel for HWR

    International Nuclear Information System (INIS)

    Zhang Jiahua; Chen Zhicheng; Bao Borong

    1999-01-01

    The purpose of our work is to find a way for application of depleted uranium in CANDU reactor by using MOX nuclear fuel of depleted U and Pu instead of natural uranium. From preliminary evaluation and calculation, it was shown that MOX nuclear fuel consisting of depleted uranium enrichment tailings (0.25% 235 U) and plutonium (their ratio 99.5%:0.5%) could replace natural uranium in CANDU reactor to sustain chain reaction. The prospects of application of depleted uranium in nuclear energy field are also discussed

  14. Producing, Importing, and Exporting Ozone-Depleting Substances

    Science.gov (United States)

    Overview page provides links to information on producing, importing, and exporting ozone-depleting substances, including information about the HCFC allowance system, importing, labeling, recordkeeping and reporting.

  15. Gas generation matrix depletion quality assurance project plan

    International Nuclear Information System (INIS)

    1998-01-01

    The Los Alamos National Laboratory (LANL) is to provide the necessary expertise, experience, equipment and instrumentation, and management structure to: Conduct the matrix depletion experiments using simulated waste for quantifying matrix depletion effects; and Conduct experiments on 60 cylinders containing simulated TRU waste to determine the effects of matrix depletion on gas generation for transportation. All work for the Gas Generation Matrix Depletion (GGMD) experiment is performed according to the quality objectives established in the test plan and under this Quality Assurance Project Plan (QAPjP)

  16. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].
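
    For context, the standard Construction-A recipe builds an n-dimensional lattice from a linear code C over Z_p; this is the textbook definition and not a claim about the specific ensemble analyzed in the paper.

    ```latex
    \Lambda_{A}(\mathcal{C}) \;=\;
    \{\,\mathbf{x}\in\mathbb{Z}^{n} \;:\; \mathbf{x}\bmod p \in \mathcal{C}\,\}
    \;=\; \mathcal{C} + p\,\mathbb{Z}^{n}
    ```

    The prime p and the code dimension are exactly the construction parameters that the averaging argument in the abstract characterizes as functions of the block-length.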

  17. Description of the THYDE-P code

    International Nuclear Information System (INIS)

    Asahi, Yoshiro

    1978-07-01

    This paper is a preliminary report about the methods and the models applied to a computer code named THYDE-P which is concerned with thermal-hydraulic transients of a PWR plant following a large or small area break of a primary coolant system pipe, generally referred to as a loss-of-coolant accident (LOCA). The THYDE-P code deals not only with blowdown phase, but also with reflooding phase. What characterizes the THYDE-P code is its entirely new model for the primary loop network. The code user information and the programming detail are not included in this report, but in a future documentation. (auth.)

  18. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR code, which is short for quick response code, system was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  19. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  20. Regret causes ego-depletion and finding benefits in the regrettable events alleviates ego-depletion.

    Science.gov (United States)

    Gao, Hongmei; Zhang, Yan; Wang, Fang; Xu, Yan; Hong, Ying-Yi; Jiang, Jiang

    2014-01-01

    This study tested the hypotheses that experiencing regret would result in ego-depletion, while finding benefits (i.e., "silver linings") in the regret-eliciting events counteracted the ego-depletion effect. Using a modified gambling paradigm (Experiments 1, 2, and 4) and a retrospective method (Experiments 3 and 5), five experiments were conducted to induce regret. Results revealed that experiencing regret undermined performance on subsequent tasks, including a paper-and-pencil calculation task (Experiment 1), a Stroop task (Experiment 2), and a mental arithmetic task (Experiment 3). Furthermore, finding benefits in the regret-eliciting events improved subsequent performance (Experiments 4 and 5), and this improvement was mediated by participants' perceived vitality (Experiment 4). This study extended the depletion model of self-regulation by considering emotions with self-conscious components (in our case, regret). Moreover, it provided a comprehensive understanding of how people felt and performed after experiencing regret and after finding benefits in the events that caused the regret.

  1. How Ego Depletion Affects Sexual Self-Regulation: Is It More Than Resource Depletion?

    Science.gov (United States)

    Nolet, Kevin; Rouleau, Joanne-Lucine; Benbouriche, Massil; Carrier Emond, Fannie; Renaud, Patrice

    2015-12-21

    Rational thinking and decision making are impacted when in a state of sexual arousal. The inability to self-regulate arousal can be linked to numerous problems, like sexual risk taking, infidelity, and sexual coercion. Studies have shown that most men are able to exert voluntary control over their sexual excitation with various levels of success. Both situational and dispositional factors can influence self-regulation achievement. The goal of this research was to investigate how ego depletion, a state of low self-control capacity, interacts with personality traits-propensities for sexual excitation and inhibition-and cognitive absorption, to cause sexual self-regulation failure. The sexual responses of 36 heterosexual males were assessed using penile plethysmography. They were asked to control their sexual arousal in two conditions, with and without ego depletion. Results suggest that ego depletion has opposite effects based on the trait sexual inhibition, as individuals moderately inhibited showed an increase in performance while highly inhibited ones showed a decrease. These results challenge the limited resource model of self-regulation and point to the importance of considering how people adapt to acute and high challenging conditions.

  2. Applied optics

    International Nuclear Information System (INIS)

    Orszag, A.; Antonetti, A.

    1988-01-01

    The 1988 progress report, of the Applied Optics laboratory, of the (Polytechnic School, France), is presented. The optical fiber activities are focused on the development of an optical gyrometer, containing a resonance cavity. The following domains are included, in the research program: the infrared laser physics, the laser sources, the semiconductor physics, the multiple-photon ionization and the nonlinear optics. Investigations on the biomedical, the biological and biophysical domains are carried out. The published papers and the congress communications are listed [fr

  3. Recent trends in coding theory and its applications

    CERN Document Server

    Li, Wen-Ching Winnie

    2007-01-01

    Coding theory draws on a remarkable selection of mathematical topics, both pure and applied. The various contributions in this volume introduce coding theory and its most recent developments and applications, emphasizing both mathematical and engineering perspectives on the subject. This volume covers four important areas in coding theory: algebraic geometry codes, graph-based codes, space-time codes, and quantum codes. Both students and seasoned researchers will benefit from the extensive and self-contained discussions of the development and recent progress in these areas.

  4. A semi-empirical model for the formation and depletion of the high burnup structure in UO{sub 2}

    Energy Technology Data Exchange (ETDEWEB)

    Pizzocri, D. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany); Politecnico di Milano, Department of Energy, Nuclear Engineering Division, Via La Masa 34, 20156, Milan (Italy); Cappia, F. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany); Technische Universität München, Boltzmannstraße 15, 85747, Garching bei München (Germany); Luzzi, L., E-mail: lelio.luzzi@polimi.it [Politecnico di Milano, Department of Energy, Nuclear Engineering Division, Via La Masa 34, 20156, Milan (Italy); Pastore, G. [Idaho National Laboratory, Fuel Modeling and Simulation Department, 2525 Fremont Avenue, 83415, Idaho Falls (United States); Rondinella, V.V.; Van Uffelen, P. [European Commission, Joint Research Centre, Directorate for Nuclear Safety and Security, PO Box 2340, 76125, Karlsruhe (Germany)

    2017-04-15

    In the rim zone of UO{sub 2} nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. For this purpose, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Based on these new experimental data, we infer an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes. - Highlights: •Development of a new model for the formation and depletion of the high burnup structure. •New average grain-size measurements to support model development. •Formation threshold of the high burnup structure based on the concept of effective burnup. •Coupled description of grain recrystallization/polygonisation and depletion of intra-granular fission gas. •Model suitable for application in fuel performance codes.
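
    As a purely illustrative reading of the abstract's "exponential reduction of the average grain size with local effective burnup" (not the authors' fitted expression, whose parameters are not given here), the restructuring trend can be written as

    ```latex
    \bar{d}(\mathrm{bu}_{\mathrm{eff}}) \;\approx\; d_{0}\,
    \exp\!\left(-\alpha\,\mathrm{bu}_{\mathrm{eff}}\right)
    ```

    with d_0 the as-fabricated average grain size and alpha a fitting constant, the intra-granular gas inventory being depleted simultaneously by diffusion to the newly formed boundaries.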

  5. Theoretical analysis of nuclear reactors (Phase I), I-V, Part IV, Nuclear fuel depletion

    International Nuclear Information System (INIS)

    Pop-Jordanov, J.

    1962-07-01

    Nuclear fuel depletion is analyzed in order to estimate the qualitative and quantitative fuel property changes during irradiation and the influence of these changes on the reactivity during long-term reactor operation. The changes of fuel properties are described by changes of neutron absorption and fission cross sections. Part one of this report covers the economic significance of fuel burnup and the review of fuel isotopic changes during depletion. Part two contains the analysis of the U 235 chain, with analytical expressions for the concentrations of U 235 , U 236 and Np 237 as a function of burnup. Part three contains the analysis of the neutron spectrum influence on the Westcott method for calculating the cross sections. Part four contains the calculation method applied to a Calder Hall type reactor. The results were obtained by applying the ZUSE-22 R digital computer
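
    As a hedged illustration of the kind of analytical expressions derived in Part two, the constant-flux solutions for the first two members of the U 235 chain can be written as follows; the symbols and the assumptions (constant flux phi, no U 236 at start of irradiation) are mine and not necessarily those of the report.

    ```latex
    N_{25}(t) \;=\; N_{25}(0)\,e^{-\sigma_{a}^{25}\phi t},
    \qquad
    N_{26}(t) \;=\; N_{25}(0)\,
    \frac{\sigma_{\gamma}^{25}}{\sigma_{a}^{26}-\sigma_{a}^{25}}
    \left(e^{-\sigma_{a}^{25}\phi t}-e^{-\sigma_{a}^{26}\phi t}\right)
    ```

    Here sigma_a and sigma_gamma are the absorption and capture cross sections of the indicated nuclide, so burnup enters through the fluence phi*t.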

  6. Metallographic Characterization of Wrought Depleted Uranium

    Energy Technology Data Exchange (ETDEWEB)

    Forsyth, Robert Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hill, Mary Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-14

    Metallographic characterization was performed on wrought depleted uranium (DU) samples taken from the longitudinal and transverse orientations from specific locations on two specimens. Characterization of the samples included general microstructure, inclusion analysis, grain size analysis, and microhardness testing. Comparisons of the characterization results were made to determine any differences based on specimen, sample orientation, or sample location. In addition, the characterization results for the wrought DU samples were also compared with data obtained from the metallographic characterization of cast DU samples previously characterized. No differences were observed in microstructure, inclusion size, morphology, and distribution, or grain size in regard to specimen, location, or orientation for the wrought depleted uranium samples. However, a small difference was observed in average hardness with regard to orientation at the same locations within the same specimen. The longitudinal samples were slightly harder than the transverse samples from the same location of the same specimen. This was true for both wrought DU specimens. Comparing the wrought DU sample data with the previously characterized cast DU sample data, distinct differences in microstructure, inclusion size, morphology and distribution, grain size, and microhardness were observed. As expected, the microstructure of the wrought DU samples consisted of small recrystallized grains which were uniform, randomly oriented, and equiaxed with minimal twinning observed in only a few grains. In contrast, the cast DU microstructure consisted of large irregularly shaped grains with extensive twinning observed in most grains. Inclusions in the wrought DU samples were elongated, broken and cracked and light and dark phases were observed in some inclusions. The mean inclusion area percentage for the wrought DU samples ranged from 0.08% to 0.34% and the average density from all wrought DU samples was 1.62E+04/cm

  7. PENBURN - A 3-D Zone-Based Depletion/Burnup Solver

    International Nuclear Information System (INIS)

    Manalo, Kevin; Plower, Thomas; Rowe, Mireille; Mock, Travis; Sjoden, Glenn E.

    2008-01-01

    PENBURN (Parallel Environment Burnup) is a general depletion/burnup solver which, when provided with zone-based reaction rates, computes time-dependent isotope concentrations for a set of actinides and fission products. Burnup analysis in PENBURN is performed with a direct Bateman-solver chain solution technique. Specifically, PENBURN is used in tandem with PENTRAN, a parallel multi-group anisotropic Sn code for 3-D Cartesian geometries. In PENBURN, the linear chain method is actively used to solve individual isotope chains, which are then fully attributed by the burnup code to yield integrated isotope concentrations for each nuclide specified. In addition to the discussion of code features, a single PWR fuel-pin calculation with the burnup code is performed and detailed with a benchmark comparison to PIE (Post-Irradiation Examination) data within the SFCOMPO (Spent Fuel Composition / NEA) database, and also with burnup codes in SCALE5.1. The conclusions detail PENBURN's accuracy for the major actinides, flux-profile behavior as a function of burnup, and criticality calculations for the PWR fuel-pin model. (authors)

  8. Simulations and observations of plasma depletion, ion composition, and airglow emissions in two auroral ionospheric depletion experiments

    International Nuclear Information System (INIS)

    Yau, A.W.; Whalen, B.A.; Harris, F.R.; Gattinger, R.L.; Pongratz, M.B.; Bernhardt, P.A.

    1985-01-01

    In an ionospheric depletion experiment where chemically reactive vapors such as H 2 O and CO 2 are injected into the O + dominant F region to accelerate the plasma recombination rate and to reduce the plasma density, the ion composition in the depleted region is modified, and photometric emissions are produced. We compare in situ ion composition, density, and photometric measurements from two ionospheric depletion experiments with predictions from chemical modeling. The two injections, Waterhole I and III, were part of an auroral perturbation experiment and occurred in different ambient conditions. In both injections a core region of greater than fivefold plasma depletion was observed over a ≈5-km diameter within seconds of the injection, surrounded by an outer region of less drastic and slower depletion. In Waterhole I the plasma density was depleted tenfold over a 30-km diameter region after 2 min. The ambient O + density was drastically reduced, and the molecular O + 2 abundance was enhanced fivefold in the depletion region. OH airglow emission associated with the depletion was observed with a peak emission intensity of ≈1 kR. In Waterhole III the ambient density was a decade lower, and the plasma depletion was less drastic, being twofold over 30 km after 2 min. The airglow emissions were also much less intense and below measurement sensitivity (30 R for the OH 306.4-nm emission; 50 R for the 630.0-nm emission)

  9. Transient Treg depletion enhances therapeutic anti‐cancer vaccination

    Science.gov (United States)

    Aston, Wayne J.; Chee, Jonathan; Khong, Andrea; Cleaver, Amanda L.; Solin, Jessica N.; Ma, Shaokang; Lesterhuis, W. Joost; Dick, Ian; Holt, Robert A.; Creaney, Jenette; Boon, Louis; Robinson, Bruce; Lake, Richard A.

    2016-01-01

    Abstract Introduction Regulatory T cells (Treg) play an important role in suppressing anti‐tumor immunity and their depletion has been linked to improved outcomes. To better understand the role of Treg in limiting the efficacy of anti‐cancer immunity, we used a Diphtheria toxin (DTX) transgenic mouse model to specifically target and deplete Treg. Methods Tumor bearing BALB/c FoxP3.dtr transgenic mice were subjected to different treatment protocols, with or without Treg depletion, and tumor growth and survival were monitored. Results DTX specifically depleted Treg in a transient, dose‐dependent manner. Treg depletion correlated with delayed tumor growth, increased effector T cell (Teff) activation, and enhanced survival in a range of solid tumors. Tumor regression was dependent on Teffs as depletion of both CD4 and CD8 T cells completely abrogated any survival benefit. Severe morbidity following Treg depletion was only observed when consecutive doses of DTX were given during peak CD8 T cell activation, demonstrating that Treg can be depleted on multiple occasions, but only when CD8 T cell activation has returned to baseline levels. Finally, we show that even minimal Treg depletion is sufficient to significantly improve the efficacy of tumor‐peptide vaccination. Conclusions BALB/c.FoxP3.dtr mice are an ideal model to investigate the full therapeutic potential of Treg depletion to boost anti‐tumor immunity. DTX‐mediated Treg depletion is transient, dose‐dependent, and leads to strong anti‐tumor immunity and complete tumor regression at high doses, while enhancing the efficacy of tumor‐specific vaccination at low doses. Together these data highlight the importance of Treg manipulation as a useful strategy for enhancing current and future cancer immunotherapies. PMID:28250921

  10. Tylosin depletion in edible tissues of turkeys.

    Science.gov (United States)

    Montesissa, C; De Liguoro, M; Santi, A; Capolongo, F; Biancotto, G

    1999-10-01

    The depletion of tylosin residues in edible turkey tissues was followed after 3 days of administration of tylosin tartrate at 500 mg l-1 in drinking water, to 30 turkeys. Immediately after the end of the treatment (day 0) and at day 1, 3, 5 and 10 of withdrawal, six turkeys (three males and three females) per time were sacrificed and samples of edible tissues were collected. Tissue homogenates were extracted, purified and analysed by HPLC according to a method previously published for the analysis of tylosin residues in pig tissues. In all tissues, tylosin residues were already below the detection limits of 50 micrograms kg-1 at time zero. However, in several samples of tissues (skin + fat, liver, kidney, muscle), from the six turkeys sacrificed at that time, one peak corresponding to an unknown tylosin equivalent was detected at measurable concentrations. The identification of this unknown compound was performed by LC-MS/MS analysis of the extracts from incurred samples. The mass fragmentation of the compound was consistent with the structure of tylosin D (the alcoholic derivative of tylosin A), the major metabolite of tylosin previously recovered and identified in tissues and/or excreta from treated chickens, cattle and pigs.

  11. Recirculating cooling water solute depletion models

    International Nuclear Information System (INIS)

    Price, W.T.

    1990-01-01

    Chromates have been used for years to inhibit copper corrosion in the plant Recirculating Cooling Water (RCW) system. However, chromates have become an environmental problem in recent years both in the chromate removal plant (X-616) operation and from cooling tower drift. In response to this concern, PORTS is replacing chromates with Betz Dianodic II, a combination of phosphates, BZT, and a dispersant. This changeover started with the X-326 system in 1989. In order to control chemical concentrations in X-326 and in systems linked to it, we needed to be able to predict solute concentrations in advance of the changeover. Failure to predict and control these concentrations can result in wasted chemicals, equipment fouling, or increased corrosion. Consequently, Systems Analysis developed two solute concentration models. The first simulation represents the X-326 RCW system by itself; and models the depletion of a solute once the feed has stopped. The second simulation represents the X-326, X-330, and the X-333 systems linked together by blowdown. This second simulation represents the concentration of a solute in all three systems simultaneously. 4 figs

  12. Depleted uranium human health risk assessment, Jefferson Proving Ground, Indiana

    International Nuclear Information System (INIS)

    Ebinger, M.H.; Hansen, W.R.

    1994-01-01

    The risk to human health from fragments of depleted uranium (DU) at Jefferson Proving Ground (JPG) was estimated using two types of ecosystem pathway models. A steady-state model of the JPG area was developed to examine the effects of DU in soils, water, and vegetation on deer that were hunted and consumed by humans. The RESRAD code was also used to estimate the effects of farming the impact area and consuming the products derived from the farm. The steady-state model showed that minimal doses to humans are expected from consumption of deer that inhabit the impact area. Median values for doses to humans range from about 1 mrem (±2.4) to 0.04 mrem (±0.13) and translate to less than 1 x 10^-6 detriments (excess cancers) in the population. Monte Carlo simulation of the steady-state model was used to derive the probability distributions from which the median values were drawn. Sensitivity analyses of the steady-state model showed that the amount of DU in airborne dust and, therefore, the amount of DU on the vegetation surface, controlled the amount of DU ingested by deer and by humans. Human doses from the RESRAD estimates ranged from less than 1 mrem/y to about 6.5 mrem/y in a hunting scenario and a subsistence farming scenario, respectively. The human doses exceeded the 100 mrem/y dose limit when drinking water for the farming scenario was obtained from the on-site aquifer that was presumably contaminated with DU. The two farming scenarios were unrealistic land uses because the additional risk to humans due to unexploded ordnance in the impact area was not included in the risk estimate. The doses estimated with RESRAD translated to less than 1 x 10^-6 detriments to about 1 x 10^-3 detriments. The higher risks were associated only with the farming scenario in which drinking water was obtained on-site.
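
    The Monte Carlo step mentioned above can be sketched generically in Python (purely illustrative; the actual pathway structure, parameter distributions and RESRAD calculations are not reproduced here, and every distribution below is an assumed placeholder): the dose is written as a product of uncertain transfer factors, each factor is sampled from a lognormal distribution, and the median and an upper percentile of the resulting dose distribution are reported.

        # Illustrative Monte Carlo propagation through a pathway model:
        # dose ~ soil_conc * dust_loading * plant-to-deer transfer * human intake
        import random, statistics

        def sample_dose(rng):
            soil = rng.lognormvariate(0.0, 0.5)       # relative DU concentration in soil
            dust = rng.lognormvariate(-2.0, 0.8)      # dust loading on vegetation
            transfer = rng.lognormvariate(-1.0, 0.6)  # vegetation-to-deer transfer
            intake = rng.lognormvariate(0.0, 0.4)     # human consumption of venison
            return soil * dust * transfer * intake    # arbitrary dose units

        rng = random.Random(42)
        doses = [sample_dose(rng) for _ in range(10000)]
        print("median:", statistics.median(doses))
        print("95th percentile:", sorted(doses)[int(0.95 * len(doses))])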

  13. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork, Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN..., Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  14. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  15. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)

  16. X-ray CCD image sensor with a thick depletion region

    International Nuclear Information System (INIS)

    Saito, Hirobumi; Watabe, Hiroshi.

    1984-01-01

    To develop a solid-state image sensor for high-energy X-rays above 1-2 keV, basic studies have been made on CCDs (charge coupled devices) with a thick depletion region. A method of superimposing a high DC bias voltage on low-voltage signal pulses was newly proposed. The characteristics of both SCCD and BCCD were investigated, and their suitability as X-ray sensors was compared. It was found that a depletion region 60 μm thick can be obtained with an ordinary doping density of 10^20 /m^3, and that an even thicker depletion region, over 1 mm, can be obtained with a doping density of about 10^18 /m^3 and an applied bias voltage above 1 kV. It is suggested that CCD image sensors for 8 keV or 24 keV X-rays can be realized, since the absorption lengths of these X-rays in Si are about 60 μm and 1 mm, respectively. As for characteristics other than the depletion thickness, the BCCD is preferable to the SCCD for the present purpose because of its lower noise and dark current. As for the transfer method, the frame-transfer method is recommended. (Aoki, K.)
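
    The quoted trade-off between doping density, bias voltage and depletion depth can be checked with the standard one-sided abrupt-junction approximation, W = sqrt(2*eps_Si*V/(q*Nd)); the short Python sketch below is only a back-of-the-envelope restatement of that textbook formula (the bias voltages are assumed), not the analysis of the paper.

        # Depletion width of a one-sided abrupt junction in silicon
        import math

        Q = 1.602e-19               # elementary charge, C
        EPS_SI = 11.7 * 8.854e-12   # permittivity of silicon, F/m

        def depletion_width_m(bias_V, doping_per_m3):
            return math.sqrt(2.0 * EPS_SI * bias_V / (Q * doping_per_m3))

        # ~1e20 m^-3 needs a few hundred volts for ~60 um, while ~1e18 m^-3
        # with ~1 kV reaches beyond 1 mm, consistent with the abstract.
        print(depletion_width_m(270.0, 1e20) * 1e6, "um")   # ~59 um
        print(depletion_width_m(1000.0, 1e18) * 1e3, "mm")  # ~1.1 mm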

  17. Development of a computational system for radiotherapic planning with the IMRT technique applied to the MCNP computer code with 3D graphic interface for voxel models; Desenvolvimento de um sistema computacional para o planejamento radioterapico com a tecnica IMRT aplicado ao codigo MCNP com interface grafica 3D para modelos de voxel

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Telma Cristina Ferreira

    2009-07-01

    The Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's work, a software package for simulating the IMRT protocol, named SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The SOFT-RT computational system simulates the absorbed dose of the radiotherapy treatment on a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, is used to build the voxel model of the region of interest from a set of digitized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the radiation transport towards the tumor and the adjacent healthy tissues for each beam orientation and position in the plan. Through three-dimensional visualization of the patient's voxel model, it is possible to focus on the tumoral region while preserving the surrounding healthy tissues, taking into account exactly where the radiation beam passes, which tissues are affected, and how much dose is deposited in each. The Out module of SOFT-RT imports the results and expresses the dose response by superimposing the dose on the gray-scale voxel model in a three-dimensional graphical representation. The SOFT-RT code was developed in the robust, multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted so that the code runs on an open-source, freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported, together with some dosimetric evaluations. (author)

  19. Environmental performance of green building code and certification systems.

    Science.gov (United States)

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database from NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life cycle. Overall, GBCC-compliant building models generated 0% to 25% lower environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models, measured as life-cycle impact reduction, were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).

  20. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  1. Sensitivity Analysis of Depletion Parameters for Heat Load Evaluation of PWR Spent Fuel Storage Pool

    International Nuclear Information System (INIS)

    Kim, In Young; Lee, Un Chul

    2011-01-01

    As the necessity of safety re-evaluation for spent fuel storage facilities has been emphasized since the Fukushima accident, improving the accuracy of heat load evaluation has become more important for obtaining reliable thermal-hydraulic evaluation results. As groundwork, parametric and sensitivity analyses of various storage conditions for the Kori Unit 4 spent fuel storage pool and of spent fuel depletion parameters such as the axial burnup effect, operation history, and specific power are conducted using the ORIGEN2 code. According to the heat load evaluation and the parametric sensitivity analyses, the decay heat of the last discharged fuel comprises up to 80.42% of the total heat load of the storage facility, and there is a negative correlation between the effect of the depletion parameters and the cooling period. Specific power is determined to be the most influential parameter, with operation history the second most influential. The decay heat of just-discharged fuel varies from 0.34 to 1.66 times the average value, and the decay heat of fuel cooled for 1 year varies from 0.55 to 1.37 times the average value, depending on the specific power. In other words, depletion parameters can cause large variations in the decay heat calculation of short-term cooled fuel. Therefore, the use of real operation data instead of user-selected values is needed to improve evaluation accuracy. It is expected that these results could be used to improve the accuracy of heat load assessments and to evaluate the uncertainty of calculated heat loads.

  2. Three-dimensional modeling of the neutral gas depletion effect in a helicon discharge plasma

    Science.gov (United States)

    Kollasch, Jeffrey; Schmitz, Oliver; Norval, Ryan; Reiter, Detlev; Sovinec, Carl

    2016-10-01

    Helicon discharges provide an attractive radio-frequency driven regime for plasma, but neutral-particle dynamics present a challenge to extending performance. A neutral gas depletion effect occurs when neutrals in the plasma core are not replenished at a sufficient rate to sustain a higher plasma density. The Monte Carlo neutral particle tracking code EIRENE was set up for the MARIA helicon experiment at UW Madison to study its neutral particle dynamics. Prescribed plasma temperature and density profiles similar to those in the MARIA device are used in EIRENE to investigate the main causes of the neutral gas depletion effect. The most dominant plasma-neutral interactions are included so far, namely electron impact ionization of neutrals, charge exchange interactions of neutrals with plasma ions, and recycling at the wall. Parameter scans show how the neutral depletion effect depends on parameters such as the Knudsen number, plasma density and temperature, and gas-surface interaction accommodation coefficients. Results are compared to similar analytic studies in the low Knudsen number limit. Plans to incorporate a similar Monte Carlo neutral model into a larger helicon modeling framework are discussed. This work is funded by the NSF CAREER Award PHY-1455210.
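
    A rough, hedged illustration in Python of why core neutrals can be depleted (this is not an EIRENE calculation; the rate coefficient and plasma parameters are assumed, order-of-magnitude values): the distance a thermal neutral travels before electron-impact ionization, lambda = v_n / (n_e * <sigma v>), can be compared with the plasma radius to judge how strongly the core burns out incoming neutrals.

        # Order-of-magnitude neutral penetration length versus plasma size
        import math

        K_B = 1.381e-23  # Boltzmann constant, J/K

        def thermal_speed(mass_kg, temp_K):
            """Mean thermal speed of a neutral atom."""
            return math.sqrt(8.0 * K_B * temp_K / (math.pi * mass_kg))

        def penetration_length(n_e_m3, rate_coeff_m3s, mass_kg, temp_K):
            return thermal_speed(mass_kg, temp_K) / (n_e_m3 * rate_coeff_m3s)

        m_ar = 6.63e-26  # argon atom mass, kg
        lam = penetration_length(n_e_m3=1e19,          # assumed core density
                                 rate_coeff_m3s=1e-14, # assumed <sigma v> at a few eV
                                 mass_kg=m_ar, temp_K=400.0)
        print("penetration length ~", round(lam * 1000, 1), "mm")  # mm scale << device radius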

  3. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  4. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is placed on the proper treatment of uncertainties and correlations and on providing quantitative uncertainty estimates. Documentation includes a review of the method, the structure of the code, input formats, and examples
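
    The kind of adjustment such an evaluation code performs can be sketched as a generalized least-squares update (a minimal illustration in Python/NumPy, not the FERRET formulation itself; the sensitivities, prior and measurements below are hypothetical): prior parameters x0 with covariance M are combined with measurements y of covariance V through a linear sensitivity matrix G.

        # Generalized least-squares adjustment with prior covariance:
        #   x' = x0 + M G^T (G M G^T + V)^-1 (y - G x0)
        #   M' = M  - M G^T (G M G^T + V)^-1 G M
        import numpy as np

        def gls_update(x0, M, G, y, V):
            S = G @ M @ G.T + V             # covariance of predicted measurements
            K = M @ G.T @ np.linalg.inv(S)  # gain matrix
            return x0 + K @ (y - G @ x0), M - K @ G @ M

        x0 = np.array([1.0, 2.0])               # prior parameters
        M = np.diag([0.04, 0.09])               # prior variances
        G = np.array([[1.0, 0.5], [0.2, 1.0]])  # sensitivities
        y = np.array([2.1, 2.3])                # measurements
        V = np.diag([0.01, 0.01])               # measurement variances
        print(gls_update(x0, M, G, y, V))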

  5. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  6. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response code opens possibility to convey data in a unique way yet insufficient prevention and protection might lead into QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of background and stating two problems regarding QR code security, which followed by a comprehensive research on both QR code itself and related issues. From the research a solution taking advantages of cloud and cryptography together with an implementation come af...

  7. Optimal Allocation of Sampling Effort in Depletion Surveys

    Science.gov (United States)

    We consider the problem of designing a depletion or removal survey as part of estimating animal abundance for populations with imperfect capture or detection rates. In a depletion survey, animals are captured from a given area, counted, and withheld from the population. This proc...

  8. Podocyte Depletion in Thin GBM and Alport Syndrome.

    Science.gov (United States)

    Wickman, Larysa; Hodgin, Jeffrey B; Wang, Su Q; Afshinnia, Farsad; Kershaw, David; Wiggins, Roger C

    2016-01-01

    The proximate genetic cause of both Thin GBM and Alport Syndrome (AS) is abnormal α3, 4 and 5 collagen IV chains resulting in abnormal glomerular basement membrane (GBM) structure/function. We previously reported that podocyte detachment rate measured in urine is increased in AS, suggesting that podocyte depletion could play a role in causing progressive loss of kidney function. To test this hypothesis podometric parameters were measured in 26 kidney biopsies from 21 patients aged 2-17 years with a clinic-pathologic diagnosis including both classic Alport Syndrome with thin and thick GBM segments and lamellated lamina densa [n = 15] and Thin GBM cases [n = 6]. Protocol biopsies from deceased donor kidneys were used as age-matched controls. Podocyte depletion was present in AS biopsies prior to detectable histologic abnormalities. No abnormality was detected by light microscopy at 70% podocyte depletion. Low level proteinuria was an early event at about 25% podocyte depletion and increased in proportion to podocyte depletion. These quantitative data parallel those from model systems where podocyte depletion is the causative event. This result supports a hypothesis that in AS podocyte adherence to the GBM is defective resulting in accelerated podocyte detachment causing progressive podocyte depletion leading to FSGS-like pathologic changes and eventual End Stage Kidney Disease. Early intervention to reduce podocyte depletion is projected to prolong kidney survival in AS.

  9. Effect of greenhouse gas emissions on stratospheric ozone depletion

    NARCIS (Netherlands)

    Velders GJM; LLO

    1997-01-01

    The depletion of the ozone layer is caused mainly by the increase in emissions of chlorine- and bromine-containing compounds like CFCs, halons, carbon tetrachloride, methyl chloroform and methyl bromide. Emissions of greenhouse gases can affect the depletion of the ozone layer through atmospheric

  10. Ego Depletion Does Not Interfere With Working Memory Performance.

    Science.gov (United States)

    Singh, Ranjit K; Göritz, Anja S

    2018-01-01

    Ego depletion happens if exerting self-control reduces a person's capacity to subsequently control themselves. Previous research has suggested that ego depletion not only interferes with subsequent self-control but also with working memory. However, recent meta-analytical evidence casts doubt onto this. The present study tackles the question if ego depletion does interfere with working memory performance. We induced ego depletion in two ways: using an e-crossing task and using a Stroop task. We then measured working memory performance using the letter-number sequencing task. There was no evidence of ego depletion interfering with working memory performance. Several aspects of our study render this null finding highly robust. We had a large and heterogeneous sample of N = 1,385, which provided sufficient power. We deployed established depletion tasks from two task families (e-crossing task and Stroop), thus making it less likely that the null finding is due to a specific depletion paradigm. We derived several performance scores from the working memory task and ran different analyses to maximize the chances of finding an effect. Lastly, we controlled for two potential moderators, the implicit theories about willpower and dispositional self-control capacity, to ensure that a possible effect on working memory is not obscured by an interaction effect. In sum, this experiment strengthens the position that ego depletion works but does not affect working memory performance.

  11. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  12. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  13. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  14. Analysis and Application of Whey Protein Depleted Skim Milk Systems

    DEFF Research Database (Denmark)

    Sørensen, Hanne

    homogenisation (UHPH). The microfiltration will result in a milk fraction more or less depleted from whey protein, and could probably in combination with UHPH treatment contribute to milk fractions and cheeses with novel micro and macrostructures. These novel fractions could be used as new ingredients to improve......-destructive methods for this purpose. A significant changed structure was observed in skim milk depleted or partly depleted for whey protein, acidified and UHPH treated. Some of the properties of the UHPH treated skim milk depleted from whey protein observed in this study support the idea, that UHPH treatment has...... this. LF-NMR relaxation were utilised to obtain information about the water mobility (relaxation time), in diluted skim milk systems depleted from whey protein. Obtained results indicate that measuring relaxation times with LF-NMR could be difficult to utilize, since no clear relationship between...

  15. Barium depletion study on impregnated cathodes and lifetime prediction

    International Nuclear Information System (INIS)

    Roquais, J.M.; Poret, F.; Doze, R. le; Ricaud, J.L.; Monterrin, A.; Steinbrunn, A.

    2003-01-01

    In the thermionic cathodes used in cathode ray-tubes (CRTs), barium is the key element for the electronic emission. In the case of dispenser cathodes made of a porous tungsten pellet impregnated with Ba, Ca aluminates, the evaporation of Ba determines the cathode lifetime with respect to emission performance in the CRT. The Ba evaporation results in progressive depletion of the impregnating material inside the pellet. In the present work, the Ba depletion with time has been extensively characterized over a large range of cathode temperatures. Calculations using the depletion data allowed modeling of the depletion as a function of key parameters. The link between measured depletion and emission in tubes has been established, from which an end-of-life criterion was deduced. Taking the modeling into account, predictive accelerated life-tests were performed using high-density maximum emission current (MIK)

  16. The Abiotic Depletion Potential: Background, Updates, and Future

    Directory of Open Access Journals (Sweden)

    Lauran van Oers

    2016-03-01

    Depletion of abiotic resources is a much disputed impact category in life cycle assessment (LCA). The reason is that the problem can be defined in different ways. Furthermore, within a specified problem definition, many choices can still be made regarding which parameters to include in the characterization model and which data to use. This article gives an overview of the problem definition and the choices that have been made when defining the abiotic depletion potentials (ADPs) for a characterization model for abiotic resource depletion in LCA. Updates of the ADPs since 2002 are also briefly discussed. Finally, some possible new developments of the impact category of abiotic resource depletion are suggested, such as redefining the depletion problem as a dilution problem. This means taking the reserves in the environment and the economy into account in the reserve parameter and using leakage from the economy, instead of extraction rate, as a dilution parameter.
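
    For orientation, the ADP of a resource i is commonly written (in the CML-style characterization model discussed here) as its extraction rate divided by the square of its reserve, normalized to the reference resource antimony. The Python sketch below only restates that formula; the reserve and extraction figures are hypothetical and not taken from the article.

        # Abiotic Depletion Potential (characterization factor, kg Sb-eq per kg):
        #   ADP_i = (DR_i / R_i**2) / (DR_Sb / R_Sb**2)

        def adp(extraction_i, reserve_i, extraction_ref, reserve_ref):
            """DR = extraction rate, R = reserve; consistent units cancel out."""
            return (extraction_i / reserve_i**2) / (extraction_ref / reserve_ref**2)

        # Hypothetical figures for a metal versus the antimony reference
        print(adp(extraction_i=2.0e10, reserve_i=6.3e11,
                  extraction_ref=5.0e7, reserve_ref=4.6e9))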

  17. Applied geodesy

    International Nuclear Information System (INIS)

    Turner, S.

    1987-01-01

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc

  18. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics center (Polytechnic School, France) is presented. The research fields of the Center are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and the development of image synthesis techniques for education and research programs. The published papers, the conference communications and the theses are listed [fr

  19. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  20. Human podocyte depletion in association with older age and hypertension.

    Science.gov (United States)

    Puelles, Victor G; Cullen-McEwen, Luise A; Taylor, Georgina E; Li, Jinhua; Hughson, Michael D; Kerr, Peter G; Hoy, Wendy E; Bertram, John F

    2016-04-01

    Podocyte depletion plays a major role in the development and progression of glomerulosclerosis. Many kidney diseases are more common in older age and often coexist with hypertension. We hypothesized that podocyte depletion develops in association with older age and is exacerbated by hypertension. Kidneys from 19 adult Caucasian American males without overt renal disease were collected at autopsy in Mississippi. Demographic data were obtained from medical and autopsy records. Subjects were categorized by age and hypertension as potential independent and additive contributors to podocyte depletion. Design-based stereology was used to estimate individual glomerular volume and total podocyte number per glomerulus, which allowed the calculation of podocyte density (number per volume). Podocyte depletion was defined as a reduction in podocyte number (absolute depletion) or podocyte density (relative depletion). The cortical location of glomeruli (outer or inner cortex) and presence of parietal podocytes were also recorded. Older age was an independent contributor to both absolute and relative podocyte depletion, featuring glomerular hypertrophy, podocyte loss, and thus reduced podocyte density. Hypertension was an independent contributor to relative podocyte depletion by exacerbating glomerular hypertrophy, mostly in glomeruli from the inner cortex. However, hypertension was not associated with podocyte loss. Absolute and relative podocyte depletion were exacerbated by the combination of older age and hypertension. The proportion of glomeruli with parietal podocytes increased with age but not with hypertension alone. These findings demonstrate that older age and hypertension are independent and additive contributors to podocyte depletion in white American men without kidney disease. Copyright © 2016 the American Physiological Society.

  1. Rate-adaptive BCH coding for Slepian-Wolf coding of highly correlated sources

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Salmistraro, Matteo; Larsen, Knud J.

    2012-01-01

    This paper considers using BCH codes for distributed source coding using feedback. The focus is on coding using short block lengths for a binary source, X, having a high correlation between each symbol to be coded and a side information, Y, such that the marginal probability of each symbol, Xi in X..., given Y is highly skewed. In the analysis, noiseless feedback and noiseless communication are assumed. A rate-adaptive BCH code is presented and applied to distributed source coding. Simulation results for a fixed error probability show that rate-adaptive BCH achieves better performance than LDPCA (Low-Density Parity-Check Accumulate) codes for high correlation between the source symbols and the side information....
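
    To make the "highly skewed" correlation concrete, a brief Python sketch (illustrative, not taken from the paper): in Slepian-Wolf coding with side information Y available at the decoder, the rate needed for X is lower-bounded by the conditional entropy H(X|Y); for a binary source whose symbols differ from the side information with small crossover probability p, this bound is the binary entropy h(p), far below 1 bit per symbol.

        # Slepian-Wolf rate bound with side information: R >= H(X|Y) = h(p)
        import math

        def binary_entropy(p):
            if p in (0.0, 1.0):
                return 0.0
            return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

        for p in (0.01, 0.05, 0.1):  # crossover probability between X and Y
            print(f"p = {p}: minimum rate ~ {binary_entropy(p):.3f} bits/symbol")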

  2. Barium Depletion in Hollow Cathode Emitters

    Science.gov (United States)

    Polk, James E.; Capece, Angela M.; Mikellides, Ioannis G.; Katz, Ira

    2009-01-01

    The effect of tungsten erosion, transport and redeposition on the operation of dispenser hollow cathodes was investigated in detailed examinations of the discharge cathode inserts from an 8200 hour and a 30,352 hour ion engine wear test. Erosion and subsequent re-deposition of tungsten in the electron emission zone at the downstream end of the insert reduces the porosity of the tungsten matrix, preventing the flow of barium from the interior. This inhibits the interfacial reactions of the barium-calcium-aluminate impregnant with the tungsten in the pores. A numerical model of barium transport in the internal xenon discharge plasma shows that the barium required to reduce the work function in the emission zone can be supplied from upstream through the gas phase. Barium that flows out of the pores of the tungsten insert is rapidly ionized in the xenon discharge and pushed back to the emitter surface by the electric field and drag from the xenon ion flow. This barium ion flux is sufficient to maintain a barium surface coverage at the downstream end greater than 0.6, even if local barium production at that point is inhibited by tungsten deposits. The model also shows that the neutral barium pressure exceeds the equilibrium vapor pressure of the impregnant decomposition reaction over much of the insert length, so the reactions are suppressed. Only a small region upstream of the zone blocked by tungsten deposits is active and supplies the required barium. These results indicate that hollow cathode failure models based on barium depletion rates in vacuum dispenser cathodes are very conservative.

  3. Biological effects of deuterium - depleted water

    International Nuclear Information System (INIS)

    Stefanescu, I.; Titescu, Gh.; Croitoru, Cornelia; Saros-Rogobete, Irina

    2000-01-01

    Deuterium-depleted water (DDW) is water with a deuterium isotopic content lower than 145 ppm D/(D + H). The DDW production technique consists of the separation of deuterium from water by a continuous distillation process at a pressure of about 133.3 mbar. The water used as raw material has an isotopic content of 145 ppm D/(D + H) and can be demineralized water, distilled water or condensed steam. DDW results as a distillate with an isotopic deuterium content of 15-80 ppm, depending on the level to be achieved. Beginning in 1996, the Institute of Cryogenics and Isotopic Technologies, the DDW producer, cooperated with specialized Romanian institutes to study the biological effects of DDW. The role of naturally occurring D in living organisms was examined by using DDW instead of natural water. These investigations led to the following conclusions: - DDW caused a tendency towards an increase of the basal tone, accompanied by the intensification of the vasoconstrictor effects of phenylephrine, noradrenaline and angiotensin; the increase of the basal tone and vascular reactivity produced by DDW persists after the removal of the vascular endothelium; - Animals treated with DDW showed an increase of resistance to both sublethal and lethal gamma radiation doses, suggesting a radioprotective action through the stimulation of non-specific immune defense mechanisms; - DDW stimulates immuno-defense reactions represented by the opsonic, bactericidal and phagocyte capacity of the immune system, together with an increase in the number of polymorphonuclear neutrophils; - Investigations regarding artificial reproduction of fish with DDW-fecundated solutions confirmed a favorable influence on the embryo growth stage and on resistance in the following growth stages; - Germination, growth and quantitative character variability in plants were studied; a favorable influence of DDW on biological processes in plants in various ontogenetic stages can be remarked. (authors)

  4. Interstellar Silicon Depletion and the Ultraviolet Extinction

    Science.gov (United States)

    Mishra, Ajay; Li, Aigen

    2018-01-01

    Spinning small silicate grains were recently invoked to account for the Galactic foreground anomalous microwave emission. These grains, if present, will absorb starlight in the far ultraviolet (UV). There is also renewed interest in attributing the enigmatic 2175 Å interstellar extinction bump to small silicates. To probe the role of silicon in the UV extinction, we explore the relations between the amount of silicon required to be locked up in silicates [Si/H]dust and the 2175 Å bump or the far-UV extinction rise, based on an analysis of the extinction curves along 46 Galactic sightlines for which the gas-phase silicon abundance [Si/H]gas is known. We derive [Si/H]dust either from [Si/H]ISM - [Si/H]gas or from the Kramers-Kronig relation which relates the wavelength-integrated extinction to the total dust volume, where [Si/H]ISM is the interstellar silicon reference abundance and taken to be that of the proto-Sun or B stars. We also derive [Si/H]dust from fitting the observed extinction curves with a mixture of amorphous silicates and graphitic grains. We find that in all three cases [Si/H]dust shows no correlation with the 2175 Å bump, while the carbon depletion [C/H]dust tends to correlate with the 2175 Å bump. This supports carbon grains instead of silicates as the possible carrier of the 2175 Å bump. We also find that neither [Si/H]dust nor [C/H]dust alone correlates with the far-UV extinction, suggesting that the far-UV extinction is a combined effect of small carbon grains and silicates.

  5. Deuterium depleted water. Romanian achievements and prospects

    International Nuclear Information System (INIS)

    Stefanescu, Ioan; Steflea, Dumitru; Titescu, Gheorghe; Tamaian, Radu

    2002-01-01

    The deuterium-depleted water (DDW) is microbiologically pure distilled water with a deuterium content lower than that of natural waters, which amounts to 140-150 ppm D/(D+H); variations depend on geographical zone and altitude. The procedure for obtaining DDW is based on isotopic separation of natural water by vacuum distillation. The isotope concentration can be chosen within 20 to 120 ppm D/(D+H). The ICSI at Rm. Valcea has patented the procedure and equipment for the production of DDW. According to the document SF-01-2002/INC-DTCI - ICSI Rm. Valcea, the product has a D/(D+H) isotope concentration of 25 ± 5. Studies and research to find the effects and methods of application in different fields were initiated and developed in collaboration with different institutes in Romania. The following important results obtained so far can be mentioned: - absence of toxicity upon organisms; - activation of vascular reactivity; - enhancement of the defence capacity of the organism through non-specific immunity activation; - increase of salmonid reproduction capacity and enhancement of the adaptability of alevins to the environmental conditions; - radioprotective effect against ionizing radiation; - maintaining meat freshness through osmotic shock; - stimulation of growth of aquatic macrophytes; - enhancement of culture plant development in certain ontogenetic stages. Most of the results and practical applications of the research were patented and awarded gold medals at international invention fairs. At present, research and development programmes are under way to investigate the biologically active features of DDW in fighting cancer, on the one hand, and its applicability as a food additive for pets or performing animals, on the other hand

  6. Clinical case of Mitochondrial DNA Depletion

    Directory of Open Access Journals (Sweden)

    A. V. Degtyareva

    2017-01-01

    The article reports a clinical case of early neonatal manifestation of a rare genetic disease – mitochondrial DNA depletion syndrome, confirmed in a laboratory in Russia. Mutations of FBXL4, which encodes an orphan mitochondrial F-box protein involved in the maintenance of mitochondrial DNA (mtDNA), ultimately lead to disruption of mtDNA replication and decreased activity of the mitochondrial respiratory chain complexes. This causes abnormalities in clinically affected tissues, above all the muscular system and the brain. In our case, hydronephrosis on the right, subependymal cysts of the brain, and partial intestinal obstruction accompanied by polyhydramnios were diagnosed antenatally. The baby's condition at birth was satisfactory and worsened dramatically towards the end of the first day of life. The clinical presentation included a sepsis-like symptom complex, neonatal depression, muscular hypotonia, persistent decompensated lactic acidosis, an increase in the concentration of mitochondrial markers in blood plasma and urine, and changes in the basal ganglia of the brain. Imaging of the brain by magnetic resonance imaging (MRI) demonstrated global volume loss, particularly of the subcortical and periventricular white matter, with significantly abnormal signal in the bilateral basal ganglia and brainstem and associated delayed myelination. Differential diagnosis was carried out against hereditary diseases that present as a «sepsis-like» symptom complex accompanied by lactic acidosis: a group of metabolic disorders of amino acids and organic acids, defects of fatty acid β-oxidation, mitochondrial respiratory chain disorders and glycogen storage disease. The diagnosis was confirmed after sequencing analysis of 62 mitochondrial genes by NGS (Next Generation Sequencing). The reported disease has an unfavorable prognosis; however, accurate diagnosis is very important for genetic counseling and helps prevent the birth of another affected child in the family.

  7. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  8. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  9. Phyto remediation of Depleted Uranium from Contaminated Soil and Sediments

    International Nuclear Information System (INIS)

    Al-Saad, K.A.; Amr, M.A.

    2012-01-01

    Seedlings of sunflower (Helianthus annuus L.) were used to test the effect of pH, citric acid, phosphoric acid, and ethylene-diamine-tetraacetic acid (EDTA) on the uptake and translocation of depleted uranium (DU). The experiments were performed in hydroponic cultures and on environmental soil samples collected from Qatar. The results of the hydroponic experiment indicated that DU accumulated more in the roots than in the leaves of plants grown in contaminated water. The presence of phosphoric acid, citric acid, or EDTA showed different patterns of DU uptake. A higher transfer factor was observed when phosphoric acid was added. When EDTA was added, higher DU uptake was observed; the data suggested that DU was mostly retained in the roots in this case. The experiments were also applied to environmental soil samples collected from Qatar. The presence of phosphoric acid, citric acid, or EDTA showed different patterns of DU uptake for the three different soil samples. The addition of EDTA increased the DU uptake of the sunflowers planted in the three types of soil. The results indicated that, generally, DU accumulated more in the roots than in the leaves and stems, except when the soil was spiked with phosphoric acid. The translocation ratio was limited but highest (~1.4) in the sunflower planted in soil S2705 when spiked with phosphoric acid. In the three soils tested, the results suggested higher DU translocation in sunflower in the presence of phosphoric acid.

  10. Study on methodology to estimate isotope generation and depletion for core design of HTGR

    International Nuclear Information System (INIS)

    Fukaya, Yuji; Ueta, Shohei; Goto, Minoru; Shimakawa, Satoshi

    2013-12-01

    An investigation of the methodology to estimate isotope generation and depletion was performed in order to improve the accuracy of HTGR core design. The technical problem of isotope generation and depletion can be divided into three major parts: solving the burn-up equations, generating effective cross sections, and employing nuclide data. Especially with regard to generating effective cross sections, the core burn-up calculation shares a technological problem with the point burn-up calculation. Thus, the investigation was also performed for the core burn-up calculation, with a view to developing a new code system in the future. As a result, it was found that a cross-section set with a 108-energy-group structure, extended from the SRAC 107-group structure up to 20 MeV, together with cross-section collapsing using the flux obtained by the deterministic code SRAC, is appropriate for this use. In addition, the needs for nuclear data became clear from an investigation of the preparation conditions of nuclear data for safety analysis and fuel design. (author)
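
    The point burn-up problem underlying this discussion can be sketched in a few lines of Python (a toy illustration, not the methodology of the report: the chain, cross sections and flux are all assumed values): the nuclide vector obeys dN/dt = A N, where A collects decay constants and flux-weighted one-group cross sections, and a constant-flux step is advanced with the matrix exponential.

        # Point depletion of a toy 3-nuclide chain: N(t) = expm(A t) N0
        import numpy as np
        from scipy.linalg import expm

        BARN = 1.0e-24            # cm^2
        phi = 3.0e14              # assumed one-group flux, n/cm^2/s
        sig1 = 2.7 * BARN         # assumed capture cross section of nuclide 1
        sig2 = 100.0 * BARN       # assumed absorption cross section of nuclide 2
        lam2 = 1.0e-9             # assumed decay constant of nuclide 2, 1/s

        # Nuclide 1 --capture--> Nuclide 2 --decay/absorption--> Nuclide 3
        A = np.array([
            [-sig1 * phi,                 0.0, 0.0],
            [ sig1 * phi, -(lam2 + sig2 * phi), 0.0],
            [        0.0,   lam2 + sig2 * phi,  0.0],
        ])

        N0 = np.array([1.0e22, 0.0, 0.0])   # initial number densities, 1/cm^3
        t = 30 * 24 * 3600.0                # 30 days of irradiation
        print(expm(A * t) @ N0)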

  11. Cross-site comparison of ribosomal depletion kits for Illumina RNAseq library construction.

    Science.gov (United States)

    Herbert, Zachary T; Kershner, Jamie P; Butty, Vincent L; Thimmapuram, Jyothi; Choudhari, Sulbha; Alekseyev, Yuriy O; Fan, Jun; Podnar, Jessica W; Wilcox, Edward; Gipson, Jenny; Gillaspy, Allison; Jepsen, Kristen; BonDurant, Sandra Splinter; Morris, Krystalynne; Berkeley, Maura; LeClerc, Ashley; Simpson, Stephen D; Sommerville, Gary; Grimmett, Leslie; Adams, Marie; Levine, Stuart S

    2018-03-15

    Ribosomal RNA (rRNA) comprises at least 90% of total RNA extracted from mammalian tissue or cell line samples. Informative transcriptional profiling using massively parallel sequencing technologies requires either enrichment of mature poly-adenylated transcripts or targeted depletion of the rRNA fraction. The latter method is of particular interest because it is compatible with degraded samples such as those extracted from FFPE and also captures transcripts that are not poly-adenylated such as some non-coding RNAs. Here we provide a cross-site study that evaluates the performance of ribosomal RNA removal kits from Illumina, Takara/Clontech, Kapa Biosystems, Lexogen, New England Biolabs and Qiagen on intact and degraded RNA samples. We find that all of the kits are capable of performing significant ribosomal depletion, though there are differences in their ease of use. All kits were able to remove ribosomal RNA to below 20% with intact RNA and identify ~ 14,000 protein coding genes from the Universal Human Reference RNA sample at >1FPKM. Analysis of differentially detected genes between kits suggests that transcript length may be a key factor in library production efficiency. These results provide a roadmap for labs on the strengths of each of these methods and how best to utilize them.

  12. Estimates of radiological risk from depleted uranium weapons in war scenarios.

    Science.gov (United States)

    Durante, Marco; Pugliese, Mariagabriella

    2002-01-01

    Several weapons used during the recent conflict in Yugoslavia contain depleted uranium, including missiles and armor-piercing incendiary rounds. Health concern is related to the use of these weapons because of the heavy-metal toxicity and radioactivity of uranium. Although chemical toxicity is considered the more important source of health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict, and uranium munitions are a possible source of contamination in the environment. Actual measurements of radioactive contamination are needed to assess the risk. In this paper, a computer simulation is proposed to estimate the radiological risk related to different exposure scenarios. The dose caused by inhalation of radioactive aerosols and the ground contamination induced by a Tomahawk missile impact are simulated using a Gaussian plume model (HOTSPOT code). Environmental contamination and the committed dose to the population resident in contaminated areas are predicted by a food-web model (RESRAD code). Small values of the committed effective dose equivalent (50-y CEDE) appear to be associated with missile impacts, corresponding to low radiological risk. These computer simulations suggest that little radiological risk is associated with the use of depleted uranium weapons.
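
    The plume-model step can be illustrated with the textbook Gaussian plume expression for the ground-level, centerline air concentration downwind of a continuous elevated release (a hedged Python sketch, not the HOTSPOT code; the source strength, wind speed, dispersion parameters and release height below are assumed).

        # Ground-level centerline concentration from a continuous elevated release:
        #   C = Q / (pi * sigma_y * sigma_z * u) * exp(-H**2 / (2 * sigma_z**2))
        import math

        def plume_concentration(Q_Bq_s, u_m_s, sigma_y_m, sigma_z_m, H_m):
            """Air concentration (Bq/m^3) at ground level on the plume centerline."""
            return (Q_Bq_s / (math.pi * sigma_y_m * sigma_z_m * u_m_s)
                    * math.exp(-H_m**2 / (2.0 * sigma_z_m**2)))

        # Hypothetical values: 1e9 Bq/s source, 3 m/s wind, dispersion parameters
        # at roughly 1 km downwind, effective release height 20 m.
        print(plume_concentration(1e9, 3.0, 80.0, 40.0, 20.0), "Bq/m^3")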

  13. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data are retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys each list the complete set of currently defined codes for the NAGRADATA system, namely the codes with appropriate definitions, arranged: 1. according to subject matter (thematically), 2. with the codes listed alphabetically, and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage memory requirements; and standardisation of terminology. The thesaurus-like nature of this 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring binder which can be updated by an organised (updating) service. (author)

  14. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  15. Opportunistic Adaptive Transmission for Network Coding Using Nonbinary LDPC Codes

    Directory of Open Access Journals (Sweden)

    Cocco Giuseppe

    2010-01-01

    Network coding allows spatial diversity naturally present in mobile wireless networks to be exploited and can be seen as an example of cooperative communication at the link layer and above. Such a promising technique needs to rely on a suitable physical layer in order to achieve its best performance. In this paper, we present an opportunistic packet scheduling method based on physical layer considerations. We extend the channel adaptation proposed for the broadcast phase of asymmetric two-way bidirectional relaying to a generic number of sinks and apply it to a network context. The method consists of adapting the information rate for each receiving node according to its channel status and independently of the other nodes. In this way, a higher network throughput can be achieved at the expense of a slightly higher complexity at the transmitter. This configuration allows rate adaptation to be performed while fully preserving the benefits of channel and network coding. We carry out an information-theoretical analysis of this approach and of that typically used in network coding. Numerical results based on nonbinary LDPC codes confirm the effectiveness of our approach with respect to previously proposed opportunistic scheduling techniques.

  16. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated with the example of the WIMSD code, which belongs to the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes a basic knowledge of the reactor lattice, at the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  17. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
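
    The coupling pattern described above (write an input file from a list of input values, run the external application, read the results back) can be sketched briefly. The Python example below is a language-neutral illustration, not the actual DLL interface or its instructions-file format; the file names and the external command are placeholders.

        # Generic external-code coupling: inputs -> input file -> run -> outputs
        import subprocess
        from pathlib import Path

        def run_external_code(inputs, workdir="run", command=("external_app", "case.inp")):
            """Write inputs, run the external application, return parsed outputs."""
            work = Path(workdir)
            work.mkdir(exist_ok=True)

            # 1. Create the input file expected by the external application.
            with open(work / "case.inp", "w") as f:
                for name, value in inputs.items():
                    f.write(f"{name} = {value}\n")

            # 2. Run the external code and wait for it to finish.
            subprocess.run(command, cwd=work, check=True)

            # 3. Read outputs from the file the external application produced.
            outputs = {}
            for line in (work / "case.out").read_text().splitlines():
                name, value = line.split("=")
                outputs[name.strip()] = float(value)
            return outputs

        # Hypothetical use with GoldSim-style named inputs:
        # print(run_external_code({"flow_rate": 1.5, "temperature": 300.0}))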

  18. The influence of fog parameters on aerosol depletion measured in the KAEVER experiments

    International Nuclear Information System (INIS)

    Poss, G.; Weber, D.; Fritsche, B.

    1995-01-01

    The release of radioactive aerosols into the environment is one of the most serious hazards in the case of an accident in a nuclear power plant. Many efforts have been made in the past in numerous experimental programs such as NSPP, DEMONA, VANAM, LACE and MARVIKEN, and others are still underway, to improve the knowledge of aerosol behavior and depletion in a reactor containment in order to estimate the possible source term and to validate computer codes. In the German single-compartment KAEVER facility the influence of size distribution, morphology, composition and solubility on aerosol behavior is investigated. One of the more specific aims is to learn about 'wet depletion', that is, the aerosol depletion behavior in condensing atmospheres. No experiments are known in which fog parameters such as droplet size distribution, volume concentration and airborne liquid water content have been measured in-line and on-line explicitly. To the authors' knowledge, the use of the Battelle FASP photometer, which was developed especially for this purpose, for the first time gives insight into condensation behavior under accident-typical thermal-hydraulic conditions. It provides a basis for code validation in terms of a real comparison of measurements and calculations. The paper presents results from 'wet depletion' aerosol experiments demonstrating how the depletion velocity depends on the fog parameters, and where critical fog parameters evidently seem to change the regime from a 'pseudo dry depletion' at a relative humidity of 100% but with virtually no airborne liquid water content, to a real 'wet depletion' in the presence of fogs of varying densities. It is also outlined how soluble and insoluble particles as well as aerosol mixtures behave under condensing conditions.

  19. Motion-adaptive intraframe transform coding of video signals

    NARCIS (Netherlands)

    With, de P.H.N.

    1989-01-01

    Spatial transform coding has been widely applied for image compression because of its high coding efficiency. However, in many intraframe systems, in which every TV frame is independently processed, coding of moving objects in the case of interlaced input signals is not addressed. In this paper, we

  20. Quality assurance requirements in various codes and standards

    International Nuclear Information System (INIS)

    Shaaban, H.I.; EL-Sayed, A.; Aly, A.E.

    1987-01-01

    The quality assurance requirements in various countries and according to various international codes and standards are presented, compared and critically discussed. The cases of developing countries are also discussed, and the use of the IAEA code of practice and other codes for quality assurance in these countries is reviewed. Recommendations are made regarding the quality assurance system to be applied to Egypt's nuclear power plants.

  1. Test Code Quality and Its Relation to Issue Handling Performance

    NARCIS (Netherlands)

    Athanasiou, D.; Nugroho, A.; Visser, J.; Zaidman, A.

    2014-01-01

    Automated testing is a basic principle of agile development. Its benefits include early defect detection, defect cause localization and removal of fear to apply changes to the code. Therefore, maintaining high quality test code is essential. This study introduces a model that assesses test code

  2. Applying radiation

    International Nuclear Information System (INIS)

    Mallozzi, P.J.; Epstein, H.M.; Jung, R.G.; Applebaum, D.C.; Fairand, B.P.; Gallagher, W.J.; Uecker, R.L.; Muckerheide, M.C.

    1979-01-01

    The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction

  3. Synthetic liquid fuels development: assessment of critical factors. Volume III. Coal resource depletion

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, E.M.; Yabroff, I.W.; Kroll, C.A.; White, R.K.; Walton, B.L.; Ivory, M.E.; Fullen, R.E.; Weisbecker, L.W.; Hays, R.L.

    1977-01-01

    While US coal resources are known to be vast, their rate of depletion in a future based predominantly on coal has not heretofore been examined analytically. The Coal Depletion Model inventories the coal resource on a regional basis and calculates the cost of coal extraction by three technologies: strip mining, underground mining and in-situ combustion. A plausible coal demand scenario extending from 1975 to the year 2050 is used as a basis for applying the model. In the year 2050, plants in operation include 285 syncrude plants, each producing 100,000 B/D; 312 SNG plants, each producing 250 million SCF/D; and 722 coal-fired electric power plants, each of 1000 MW capacity. In addition, there are 890 million tons per year of industrial coal consumption. Such a high level of coal use would deplete US coal resources much more rapidly than most people appreciate. Of course, the actual amount of US coal is unknown, and if the coal in the hypothetical reliability category is included, depletion is delayed. Coal in this category, however, has not been mapped; it is only presumed to exist on the basis of geological theory. The coal resource depletion model shows that unilateral imposition of a severance tax by a state tends to shift production to other coal-producing regions; boom and bust cycles are both delayed and reduced in magnitude. When several states simultaneously impose severance taxes, the effect of each is weakened. Key policy issues that emerge from this analysis concern the need to reduce the uncertainty in the magnitude and geographic distribution of the US coal resource, and the need to stimulate interaction among the interested parties to work out equitable and acceptable coal conversion plant location strategies capable of coping with the challenges of a high-coal future.

  4. Long-term groundwater depletion in the United States

    Science.gov (United States)

    Konikow, Leonard F.

    2015-01-01

    The volume of groundwater stored in the subsurface in the United States decreased by almost 1000 km3 during 1900–2008. The aquifer systems with the three largest volumes of storage depletion include the High Plains aquifer, the Mississippi Embayment section of the Gulf Coastal Plain aquifer system, and the Central Valley of California. Depletion rates accelerated during 1945–1960, averaging 13.6 km3/year during the last half of the century, and after 2000 increased again to about 24 km3/year. Depletion intensity is a new parameter, introduced here, to provide a more consistent basis for comparing storage depletion problems among various aquifers by factoring in time and areal extent of the aquifer. During 2001–2008, the Central Valley of California had the largest depletion intensity. Groundwater depletion in the United States can explain 1.4% of observed sea-level rise during the 108-year study period and 2.1% during 2001–2008. Groundwater depletion must be confronted on local and regional scales to help reduce demand (primarily in irrigated agriculture) and/or increase supply.
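
    As a back-of-the-envelope check of the sea-level figure quoted above (a rough sketch that assumes a global ocean area of about 3.6 x 10^8 km2 and a mean observed 20th-century sea-level rise of roughly 1.7 mm/year, neither of which is given in the record itself):

# Rough consistency check; the ocean area and observed sea-level rise rate are
# assumed values and are not given in the record above.
depleted_volume_km3 = 1000.0        # total US groundwater depletion, 1900-2008
years = 108                         # 1900-2008
ocean_area_km2 = 3.6e8              # assumed global ocean area
observed_rise_mm_per_year = 1.7     # assumed mean observed sea-level rise

sea_level_equivalent_mm = depleted_volume_km3 / ocean_area_km2 * 1.0e6  # km -> mm
observed_rise_mm = observed_rise_mm_per_year * years
print(sea_level_equivalent_mm / observed_rise_mm)  # about 0.015, i.e. close to the 1.4% quoted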

  5. Associative Interactions in Crowded Solutions of Biopolymers Counteract Depletion Effects.

    Science.gov (United States)

    Groen, Joost; Foschepoth, David; te Brinke, Esra; Boersma, Arnold J; Imamura, Hiromi; Rivas, Germán; Heus, Hans A; Huck, Wilhelm T S

    2015-10-14

    The cytosol of Escherichia coli is an extremely crowded environment, containing high concentrations of biopolymers which occupy 20-30% of the available volume. Such conditions are expected to yield depletion forces, which strongly promote macromolecular complexation. However, crowded macromolecule solutions, like the cytosol, are very prone to nonspecific associative interactions that can potentially counteract depletion. It remains unclear how the cytosol balances these opposing interactions. We used a FRET-based probe to systematically study depletion in vitro in different crowded environments, including a cytosolic mimic, E. coli lysate. We also studied bundle formation of FtsZ protofilaments under identical crowded conditions as a probe for depletion interactions at much larger overlap volumes of the probe molecule. The FRET probe showed a more compact conformation in synthetic crowding agents, suggesting strong depletion interactions. However, depletion was completely negated in cell lysate and other protein crowding agents, where the FRET probe even occupied slightly more volume. In contrast, bundle formation of FtsZ protofilaments proceeded as readily in E. coli lysate and other protein solutions as in synthetic crowding agents. Our experimental results and model suggest that, in crowded biopolymer solutions, associative interactions counterbalance depletion forces for small macromolecules. Furthermore, the net effects of macromolecular crowding will be dependent on both the size of the macromolecule and its associative interactions with the crowded background.

  6. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.
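
    For reference, the optimisation problem that CSC solvers address is commonly written in the following standard form (generic notation, not the paper's; the consensus approach summarised above reformulates and distributes this objective):

        \min_{\{d_k\},\{z_k\}} \ \tfrac{1}{2}\Big\lVert x - \sum_{k=1}^{K} d_k * z_k \Big\rVert_2^2 + \lambda \sum_{k=1}^{K} \lVert z_k \rVert_1 \quad \text{subject to } \lVert d_k \rVert_2 \le 1,

    where x is the image, d_k are the convolutional filters, z_k the sparse coefficient maps, * denotes convolution and \lambda controls the sparsity of the representation.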

  7. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  8. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  9. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bostelmann, F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high-fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, and modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained.) SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on stochastic sampling methods or on derivative-based methods such as generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties), extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well-validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on
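
    The stochastic sampling approach mentioned above can be illustrated with a minimal Monte Carlo propagation sketch (generic and hypothetical, not tied to any HTGR code): input parameters are sampled from their uncertainty distributions, the model is evaluated for each sample, and the spread of the outputs estimates the output uncertainty.

import random
import statistics

# Minimal illustration of stochastic-sampling uncertainty propagation; the toy
# "model" and its input uncertainty are hypothetical placeholders.
def model(k_inf_nominal: float, capture_bias: float) -> float:
    return k_inf_nominal * (1.0 - capture_bias)  # toy response to a perturbed input

samples = []
for _ in range(10000):
    capture_bias = random.gauss(0.0, 0.005)      # assumed 0.5% (1 sigma) input uncertainty
    samples.append(model(1.30, capture_bias))

print(statistics.mean(samples), statistics.stdev(samples))  # propagated mean and uncertainty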

  10. Challenges dealing with depleted uranium in Germany - Reuse or disposal

    International Nuclear Information System (INIS)

    Moeller, Kai D.

    2007-01-01

    During enrichment, large amounts of depleted uranium are produced; in Germany about 2,800 tons of depleted uranium are generated every year. In Germany depleted uranium is not classified as radioactive waste but as a resource for further enrichment, and since 1996 it has therefore been sent to ROSATOM in Russia. However, the second generation of depleted uranium still has to be dealt with. To evaluate the alternative courses of action in case a solution has to be found in Germany, several studies have been initiated by the Federal Ministry of the Environment. The work that has been carried out evaluated various possibilities for dealing with depleted uranium; the international studies in this field and the situation in Germany have been analyzed. If no further enrichment is planned, the depleted uranium has to be stored. In the enrichment process UF6 is generated, and there is international consensus that for storage it should be converted to U3O8; the necessary technique is well established. If the depleted uranium had to be classified as radioactive waste, final disposal would become necessary. For the planned Konrad repository - a repository for non-heat-generating radioactive waste - the amount of uranium is limited by the licensing authority, and the existing license would not allow the final disposal of large amounts of depleted uranium there. The potential effect on the safety case has not yet been analyzed in detail. As a result it may be necessary to think about alternatives. Several possibilities for the use of depleted uranium in industry have been identified; studies indicate that its properties would make it useful in some industrial fields, although many practical and legal questions remain open. One further option may be its use as shielding, e.g. in casks for transport or disposal. Possible techniques for using depleted uranium as shielding are the use of the metallic uranium as well as its inclusion in concrete. Another

  11. Depletion interaction of casein micelles and an exocellular polysaccharide

    Science.gov (United States)

    Tuinier, R.; Ten Grotenhuis, E.; Holt, C.; Timmins, P. A.; de Kruif, C. G.

    1999-07-01

    Casein micelles become mutually attractive when an exocellular polysaccharide produced by Lactococcus lactis subsp. cremoris NIZO B40 (hereafter called EPS) is added to skim milk. The attraction can be explained as a depletion interaction between the casein micelles induced by the nonadsorbing EPS. We used three scattering techniques (small-angle neutron scattering, turbidity measurements, and dynamic light scattering) to measure the attraction. In order to connect the theory of depletion interaction with experiment, we calculated structure factors of hard spheres interacting by a depletion pair potential. Theoretical predictions and all the experiments showed that casein micelles became more attractive upon increasing the EPS concentration.
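
    The depletion pair potential referred to above is commonly modelled in the Asakura-Oosawa form, quoted here as the standard textbook expression rather than the exact potential used in the paper. For hard spheres of radius R in a solution of non-adsorbing polymer with effective depletant radius \delta and osmotic pressure \Pi,

        U_{\mathrm{dep}}(r) = -\Pi\,\frac{4\pi}{3}(R+\delta)^3\left[1 - \frac{3r}{4(R+\delta)} + \frac{r^3}{16(R+\delta)^3}\right], \qquad 2R \le r \le 2(R+\delta),

    and U_{\mathrm{dep}}(r) = 0 for larger separations; the attraction deepens as the depletant (here EPS) concentration, and hence \Pi, increases, consistent with the reported behaviour.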

  12. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can easily be utilized for any number of messages. The importance of this work lies mostly in the use of the presented coding in the groupcast index coding ...

  13. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectroscopy measurements
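
    As general background only (the depletion-approximation textbook relations, not the analytical model of the paper), capacitance-voltage carrier profiling rests on

        W = \frac{\varepsilon_s A}{C}, \qquad N(W) = \frac{2}{q\,\varepsilon_s A^{2}\,\left|\mathrm{d}(1/C^{2})/\mathrm{d}V\right|},

    where W is the depletion width, C the measured capacitance, A the effective contact area, \varepsilon_s the semiconductor permittivity and N the carrier concentration at the depletion edge; under a rapidly swept bias the depletion layer can extend beyond its equilibrium limit, which is the deep-depletion effect referred to above.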

  14. PLUTON, Isotope Generation and Depletion in Highly Irradiated LWR Fuel Rods

    International Nuclear Information System (INIS)

    Lemehov, Sergei; Motoe, Suzuki

    2003-01-01

    1 - Description of program or function: The PLUTON-PC is a three-group neutronic code analyzing, as functions of time and burnup, the change of radial profiles, together with average values, of power density, burnup, concentration of trans-uranium elements, plutonium buildup, depletion of fissile elements, and fission product generation in water reactor fuel rods with standard UO2, UO2-Gd2O3, inhomogeneous MOX, and UO2-ThO2. The PLUTON-PC code, which has been designed to run on a Windows PC, has adopted a theoretical shape function of neutron attenuation in the pellet, which enables users to perform very fast and accurate calculations easily. The code includes the irradiation conditions of the Halden Reactor, which provides verification data for the code. Verification has been performed up to 83 GWd/tU, and satisfactory agreement has been obtained. 2 - Methods: Based upon cumulative yields, the PLUTON-PC code calculates, as functions of radial position and local burnup, the concentrations of fission products, the macroscopic scattering cross-sections and the self-shielding effect, which is important for standard fuel (mainly for Pu-242) and even more so for homogeneous and inhomogeneous MOX fuel because of the higher concentrations of fissile and fertile plutonium isotopes. The code yields burnup-dependent fission rate density profiles throughout the in-reactor irradiation of LWR fuel rods. The isotopes included in the calculations have been extended to cover all trans-uranium groups (plutonium plus higher actinides) of fissile and fertile isotopes. The self-shielding problem and scattering effects have been revised and solved for all isotopes in the calculations to ensure adequacy at high burnup and for different irradiation conditions and cladding materials.
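
    As a generic illustration of the kind of buildup/depletion balance such codes solve (a minimal two-nuclide sketch with made-up one-group data, not PLUTON's method or data), the concentration of a fertile nuclide and of the fissile nuclide bred from it under a constant flux can be stepped in time as dN1/dt = -sigma1*phi*N1 and dN2/dt = sigma1*phi*N1 - sigma2*phi*N2:

# Minimal two-nuclide buildup/depletion sketch with explicit time stepping;
# the cross sections, flux and initial densities are illustrative placeholders,
# not PLUTON data.
sigma_capture_1 = 2.7e-24   # cm^2, capture cross section of the fertile nuclide (assumed)
sigma_abs_2     = 1.0e-21   # cm^2, absorption cross section of the bred fissile nuclide (assumed)
flux            = 3.0e13    # n/cm^2/s, constant one-group flux (assumed)
dt              = 3600.0    # s, time step
n1, n2          = 2.2e22, 0.0   # atoms/cm^3, initial number densities (assumed)

for _ in range(24 * 365):   # one year of irradiation in hourly steps
    dn1 = -sigma_capture_1 * flux * n1 * dt
    dn2 = (sigma_capture_1 * flux * n1 - sigma_abs_2 * flux * n2) * dt
    n1 += dn1
    n2 += dn2

print(n1, n2)               # remaining fertile and accumulated fissile number densities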

  15. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation...

  16. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.
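
    For orientation, each ordinary fermionic mode c_k can be split into two Majorana operators (standard definitions, not specific to this paper):

        \gamma_{2k-1} = c_k + c_k^{\dagger}, \qquad \gamma_{2k} = i\,(c_k^{\dagger} - c_k), \qquad \gamma_a^{\dagger} = \gamma_a, \qquad \{\gamma_a, \gamma_b\} = 2\delta_{ab};

    a low-weight fermionic error in the sense above is an operator supported on only a few of these \gamma_a.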

  17. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  18. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    The DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow rapid processing of sensitivity analyses. All input data entered through the user interface are stored in text format. The implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)
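
    As general background (the record does not state DISP1's exact model), simple dispersion tools of this kind are typically built around the Gaussian plume expression

        \chi(x,y,z) = \frac{Q}{2\pi\,\sigma_y\,\sigma_z\,u}\,\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)\left[\exp\!\left(-\frac{(z-h)^2}{2\sigma_z^2}\right) + \exp\!\left(-\frac{(z+h)^2}{2\sigma_z^2}\right)\right],

    where \chi is the air concentration, Q the release rate, u the wind speed, h the release height and \sigma_y, \sigma_z the horizontal and vertical dispersion parameters at downwind distance x.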

  19. Code of ethics for dental researchers.

    Science.gov (United States)

    2014-01-01

    The International Association for Dental Research, in 2009, adopted a code of ethics. The code applies to members of the association and is enforceable by sanction, with the stated requirement that members are expected to inform the association in cases where they believe misconduct has occurred. The IADR code goes beyond the Belmont and Helsinki statements by virtue of covering animal research. It also addresses issues of sponsorship of research and conflicts of interest, international collaborative research, duty of researchers to be informed about applicable norms, standards of publication (including plagiarism), and the obligation of "whistleblowing" for the sake of maintaining the integrity of the dental research enterprise as a whole. The code is organized, like the ADA code, into two sections. The IADR principles are stated, but not defined, and number 12, instead of the ADA's five. The second section consists of "best practices," which are specific statements of expected or interdicted activities. The short list of definitions is useful.

  20. Survey Of Lossless Image Coding Techniques

    Science.gov (United States)

    Melnychuck, Paul W.; Rabbani, Majid

    1989-04-01

    Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy plus residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to a greater removal of image redundancy.
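
    As a minimal illustration of the predictive step mentioned above (a generic DPCM-style sketch, not any specific scheme from the survey), each pixel is predicted from its left neighbour and only the residual is passed to the entropy coder; the decoder reverses the process exactly, so the reconstruction is lossless.

from typing import List

# Generic previous-pixel predictor; the residuals would normally be entropy
# coded (e.g. with an arithmetic coder), which is omitted here.
def encode_row(pixels: List[int]) -> List[int]:
    residuals, prediction = [], 0
    for p in pixels:
        residuals.append(p - prediction)   # store only the prediction error
        prediction = p
    return residuals

def decode_row(residuals: List[int]) -> List[int]:
    pixels, prediction = [], 0
    for r in residuals:
        prediction += r                    # exact inverse of the encoder
        pixels.append(prediction)
    return pixels

row = [120, 121, 123, 123, 200, 201]
assert decode_row(encode_row(row)) == row  # lossless round trip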