WorldWideScience

Sample records for depletion codes applied

  1. Real depletion in nodal diffusion codes

    International Nuclear Information System (INIS)

    Petkov, P.T.

    2002-01-01

    Fuel depletion is described by more than one hundred fuel isotopes in advanced lattice codes such as HELIOS, but only a few fuel isotopes are accounted for even in advanced steady-state diffusion codes. The general assumption that the number densities of the majority of the fuel isotopes depend only on the fuel burnup is seriously in error at high burnup. The real depletion conditions in the reactor core differ from the asymptotic ones assumed at the stage of lattice depletion calculations. This study reveals which fuel isotopes should be explicitly accounted for in the diffusion codes in order to predict the real depletion effects in the core adequately. A somewhat surprising conclusion is that if the real number densities of the main fissionable isotopes are not explicitly accounted for in the diffusion code, then Sm-149 should not be accounted for either, because the net error in k-inf is then smaller. (Authors)

  2. TURTLE 24.0 diffusion depletion code

    International Nuclear Information System (INIS)

    Altomare, S.; Barry, R.F.

    1971-09-01

    TURTLE is a two-group, two-dimensional (x-y, x-z, r-z) neutron diffusion code featuring a direct treatment of the nonlinear effects of xenon, enthalpy, and Doppler. Fuel depletion is allowed. TURTLE was written for the study of azimuthal xenon oscillations, but the code is useful for general analysis. The input is simple, fuel management is handled directly, and a boron criticality search is allowed. Ten thousand space points are allowed (over 20,000 with diagonal symmetry). TURTLE is written in FORTRAN IV and is tailored for the present CDC-6600. The program is core-contained. Provision is made to save data on tape for future reference. (auth)

  3. Development of depletion perturbation theory for a reactor nodal code

    Energy Technology Data Exchange (ETDEWEB)

    Bowman, S.M.

    1981-09-01

    A generalized depletion perturbation theory (DPT) formulation for light water reactor (LWR) depletion problems is developed and implemented in the three-dimensional LWR nodal code SIMULATE. This development applies the principles of the original derivation by M.L. Williams to the nodal equations solved by SIMULATE. The present formulation is first described in detail, and the nodal coupling methodology in SIMULATE is used to determine partial derivatives of the coupling coefficients. The modifications to the original code and the new DPT options available to the user are discussed. Finally, the accuracy and the applicability of the new DPT capability to LWR design analysis are examined for several LWR depletion test cases. The cases range from simple static cases to a realistic PWR model for an entire fuel cycle. Responses of interest included k-eff, nodal peaking, and peak nodal exposure. The nonlinear behavior of responses with respect to perturbations of the various types of cross sections was also investigated. The time dependence of the sensitivity coefficients for different responses was examined and compared. Comparison of DPT results for these examples with direct calculations reveals the limited applicability of depletion perturbation theory to LWR design calculations at present. The reasons for these restrictions are discussed, and several methods that might improve the computational accuracy of DPT are proposed for future research.
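
    As an aside on what perturbation theory buys in this context: the sketch below (plain Python, with an invented one-group infinite-medium model and made-up cross-section values, not the SIMULATE/DPT formulation) computes k-inf sensitivities by direct recalculation, the brute-force alternative that DPT is meant to avoid when many parameters are perturbed.

```python
# Toy illustration (not the SIMULATE/DPT formulation): sensitivity of an
# infinite-medium k-inf to cross-section perturbations by direct recalculation.
# All numbers are invented for demonstration.

def k_inf(nu_sigma_f, sigma_a):
    """Infinite-medium multiplication factor for a one-group model."""
    return nu_sigma_f / sigma_a

base = {"nu_sigma_f": 0.0085, "sigma_a": 0.0100}   # hypothetical macroscopic data, 1/cm
k0 = k_inf(**base)

# Direct (forward) sensitivities: one extra calculation per perturbed parameter.
# Depletion perturbation theory replaces this loop by a single adjoint solution
# when the number of parameters is large.
for name in base:
    delta = 0.01 * base[name]                      # 1% perturbation
    perturbed = dict(base, **{name: base[name] + delta})
    dk = k_inf(**perturbed) - k0
    rel_sens = (dk / k0) / (delta / base[name])    # relative sensitivity coefficient
    print(f"S(k_inf; {name}) = {rel_sens:+.3f}")
```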

  4. ISOGEN: Interactive isotope generation and depletion code

    International Nuclear Information System (INIS)

    Venkata Subbaiah, Kamatam

    2016-01-01

    ISOGEN is an interactive code for solving the first-order coupled linear differential equations with constant coefficients for a large number of isotopes, which are produced or depleted by radioactive decay or by neutron transmutation or fission. These coupled equations can be written in matrix notation involving radioactive decay constants and transmutation coefficients. The eigenvalues of the resulting matrix vary widely (by several tens of orders of magnitude), so no single solution method is suitable for obtaining precise estimates of the isotope concentrations. Therefore, different solution methods are used, namely the matrix exponential method, the Bateman series method, and the Gauss-Seidel iteration method, as in the ORIGEN-2 code. The ISOGEN code is written in a modern computer language, VB.NET (2013), for the Windows 7 operating system, which enables many interactive features between the user and the program. The output results depend on the input neutron database employed and the time step used in the calculations. The program can display information about the available database files, and the user selects the one that suits the current need. The program prints a 'WARNING' message if the time step is too large, as judged by a built-in convergence criterion. Other salient interactive features are (i) inspection of the input data that go into the calculation, (ii) viewing of the radioactive decay sequence of isotopes (daughters, precursors, photons emitted) in a graphical format, (iii) solution for parent and daughter products by the direct Bateman series method, (iv) a quick input method and context-sensitive prompts for guiding the novice user, (v) viewing of output tables for any parameter of interest, and (vi) the output file can be reread to generate new information and can be viewed or printed, since the program stores the basic nuclide concentrations, unlike other batch codes.
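
    To make the matrix formulation above concrete, here is a minimal sketch (not ISOGEN's implementation) of the matrix exponential method for a hypothetical three-nuclide chain under decay plus a constant one-group flux; all decay constants, cross sections and the flux value are invented for illustration.

```python
# Minimal sketch of the matrix exponential approach described above, for a
# hypothetical three-nuclide chain A -> B -> C under decay plus a constant
# one-group flux. Decay constants, cross sections and the flux are invented.
import numpy as np
from scipy.linalg import expm

lam = np.array([1.0e-5, 5.0e-6, 0.0])      # decay constants (1/s)
sig = np.array([1.0e-24, 5.0e-25, 0.0])    # transmutation cross sections (cm^2)
phi = 1.0e14                               # neutron flux (n/cm^2/s)

# Build the burnup matrix: diagonal losses, off-diagonal production terms.
loss = lam + sig * phi
A = np.diag(-loss)
A[1, 0] = loss[0]          # everything lost from A feeds B (simplified chain)
A[2, 1] = loss[1]          # everything lost from B feeds C

N0 = np.array([1.0e24, 0.0, 0.0])          # initial number densities (atoms)
t = 30 * 24 * 3600.0                       # 30 days

N_t = expm(A * t) @ N0
print(dict(zip("ABC", N_t)))
```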

  5. Sensibility analysis of fuel depletion using different nuclear fuel depletion codes

    Energy Technology Data Exchange (ETDEWEB)

    Martins, F.; Velasquez, C.E.; Castro, V.F.; Pereira, C.; Silva, C. A. Mello da, E-mail: felipmartins94@gmail.com, E-mail: carlosvelcab@hotmail.com, E-mail: victorfariascastro@gmail.com, E-mail: claubia@nuclear.ufmg.br, E-mail: clarysson@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departamento de Engenharia Nuclear

    2017-07-01

    Nowadays, different nuclear codes are commonly used to perform the depletion and criticality calculations needed to simulate nuclear reactor problems. The goal here is to analyze the sensitivity of the fuel depletion of a PWR assembly obtained with three different nuclear fuel depletion codes. The burnup calculations are performed with MCNP5/ORIGEN2.1 (MONTEBURNS), KENO-VI/ORIGEN-S (TRITON, SCALE 6.0) and MCNPX (MCNPX/CINDER90). Each code system performs the burnup with a different depletion code, and each depletion code works with cross sections collapsed from a master library into 1, 3 and 63 groups, respectively. In addition, each code obtains the neutron flux in a different way, which influences the depletion calculation. The results compare neutronic parameters and isotopic compositions, such as criticality and nuclide build-up; deviations in the results are attributed to features of the depletion code in use, such as the internal radioactive decay libraries and the numerical method used to solve the coupled differential depletion equations. It is also seen that the longer the period and the more time steps chosen, the larger the deviations become. (author)

  6. Development of the point-depletion code DEPTH

    Energy Technology Data Exchange (ETDEWEB)

    She, Ding [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Wang, Kan, E-mail: wangkan@mail.tsinghua.edu.cn [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Yu, Ganglin [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China)

    2013-05-15

    Highlights: ► The DEPTH code has been developed for large-scale depletion systems. ► DEPTH uses a data library that is convenient to couple with MC codes. ► TTA and matrix exponential methods are implemented and compared. ► DEPTH is able to calculate integral quantities based on the matrix inverse. ► Code-to-code comparisons prove the accuracy and efficiency of DEPTH. -- Abstract: Burnup analysis is an important aspect of reactor physics and is generally done by coupling transport calculations with point-depletion calculations. DEPTH is a newly developed point-depletion code for handling large burnup depletion systems and detailed depletion chains. For better coupling with Monte Carlo transport codes, DEPTH uses data libraries based on the combination of ORIGEN-2 and ORIGEN-S and allows users to assign problem-dependent libraries for each depletion step. DEPTH implements various algorithms for treating stiff depletion systems, including transmutation trajectory analysis (TTA), the Chebyshev Rational Approximation Method (CRAM), the Quadrature-based Rational Approximation Method (QRAM) and the Laguerre Polynomial Approximation Method (LPAM). Three modes are supported by DEPTH to execute decay, constant-flux and constant-power calculations. In addition to obtaining instantaneous quantities such as radioactivity, decay heat and reaction rates, DEPTH is able to calculate integral quantities with a time-integrated solver. Comparisons with ORIGEN-2 demonstrate the validity of DEPTH for point-depletion calculations, and the accuracy and efficiency of the depletion algorithms are also discussed. In addition, an actual pin-cell burnup case is calculated to illustrate the performance of the DEPTH code when coupled with the RMC Monte Carlo code.
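
    The TTA approach mentioned above rests on the analytic Bateman solution for a single linear chain. A minimal sketch of that building block is given below; the chain length and the effective removal constants (decay plus flux times cross section) are hypothetical, and real TTA implementations add chain enumeration, branching ratios and handling of nearly equal constants.

```python
# Sketch of the analytic Bateman solution for a single linear chain, the
# building block of transmutation trajectory analysis (TTA). Effective removal
# constants (decay + flux*cross-section) are hypothetical.
import math

def bateman(N1_0, lambdas, t):
    """Number density of the last member of a linear chain at time t,
    assuming only the first member is present initially and all lambdas differ."""
    n = len(lambdas)
    prod_lam = math.prod(lambdas[:-1])     # production path strength
    total = 0.0
    for i in range(n):
        denom = math.prod(lambdas[j] - lambdas[i] for j in range(n) if j != i)
        total += math.exp(-lambdas[i] * t) / denom
    return N1_0 * prod_lam * total

# Example chain with three members and invented effective constants (1/s).
print(bateman(1.0e24, [3.0e-6, 1.0e-6, 2.0e-7], t=1.0e6))
```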

  7. MCOR - Monte Carlo depletion code for reference LWR calculations

    Energy Technology Data Exchange (ETDEWEB)

    Puente Espel, Federico, E-mail: fup104@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Tippayakul, Chanatip, E-mail: cut110@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Ivanov, Kostadin, E-mail: kni1@psu.edu [Department of Mechanical and Nuclear Engineering, Pennsylvania State University (United States); Misu, Stefan, E-mail: Stefan.Misu@areva.com [AREVA, AREVA NP GmbH, Erlangen (Germany)

    2011-04-15

    Research highlights: > Introduction of a reference Monte Carlo based depletion code with extended capabilities. > Verification and validation results for MCOR. > Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 and the AREVA NP depletion code KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code was further improved with important features. The MCOR code offers several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for modeling gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements include the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements.
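
    The predictor-corrector depletion algorithm listed under (a) can be summarized by the following schematic loop. It is only a sketch of the general scheme, not MCOR's actual implementation: run_transport and deplete are hypothetical placeholders standing in for the MCNP5 transport step and the KORIGEN depletion step.

```python
# Schematic predictor-corrector burnup loop of the kind described above.
# run_transport and deplete are placeholders standing in for the Monte Carlo
# transport step and the point-depletion step; they are not MCOR/MCNP5/KORIGEN APIs.

def predictor_corrector_step(N_bos, dt, run_transport, deplete):
    """Advance nuclide densities over one burnup step of length dt."""
    # Predictor: flux/cross sections at beginning of step, deplete to end of step.
    xs_bos = run_transport(N_bos)
    N_pred = deplete(N_bos, xs_bos, dt)

    # Corrector: re-evaluate flux/cross sections with predicted end-of-step densities.
    xs_eos = run_transport(N_pred)
    N_corr = deplete(N_bos, xs_eos, dt)

    # Average predictor and corrector results (a common choice for this scheme).
    return {nuc: 0.5 * (N_pred[nuc] + N_corr[nuc]) for nuc in N_bos}
```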

  8. MCOR - Monte Carlo depletion code for reference LWR calculations

    International Nuclear Information System (INIS)

    Puente Espel, Federico; Tippayakul, Chanatip; Ivanov, Kostadin; Misu, Stefan

    2011-01-01

    Research highlights: → Introduction of a reference Monte Carlo based depletion code with extended capabilities. → Verification and validation results for MCOR. → Utilization of MCOR for benchmarking deterministic lattice physics (spectral) codes. - Abstract: The MCOR (MCnp-kORigen) code system is a Monte Carlo based depletion system for reference fuel assembly and core calculations. The MCOR code is designed as an interfacing code that provides depletion capability to the LANL Monte Carlo code by coupling two codes: MCNP5 and the AREVA NP depletion code KORIGEN. The physical quality of both codes is unchanged. The MCOR code system has been maintained and continuously enhanced since it was initially developed and validated. The coupling was verified by evaluating the MCOR code against similarly sophisticated code systems such as MONTEBURNS, OCTOPUS and TRIPOLI-PEPIN. After its validation, the MCOR code was further improved with important features. The MCOR code offers several valuable capabilities, such as: (a) a predictor-corrector depletion algorithm, (b) utilization of KORIGEN as the depletion module, (c) individual depletion calculation of each burnup zone (no burnup zone grouping is required, which is particularly important for modeling gadolinium rings), and (d) on-line burnup cross-section generation by the Monte Carlo calculation for 88 isotopes, with the KORIGEN libraries for typical PWR and BWR spectra used for the remaining isotopes. Beyond these capabilities, the newest MCOR enhancements include the possibility of executing the MCNP5 calculation in sequential or parallel mode, a user-friendly automatic re-start capability, a modification of the burnup step size evaluation, and a post-processor and test matrix, to name the most important. The article describes the capabilities of the MCOR code system, from its design and development to its latest improvements and further enhancements.

  9. ISODEP, A Fuel Depletion Analysis Code for Predicting Isotopic ...

    African Journals Online (AJOL)

    The trend of the results was found to be consistent with those obtained by analytical and other numerical methods. Discovery and Innovation Vol. 13 no. 3/4 December (2001) pp. 184-195. KEY WORDS: depletion analysis, code, research reactor, simultaneous equations, decay of nuclides, radionuclides, isotope.

  10. Monte Carlo simulation in UWB1 depletion code

    International Nuclear Information System (INIS)

    Lovecky, M.; Prehradny, J.; Jirickova, J.; Skoda, R.

    2015-01-01

    The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. In order to achieve higher precision, the newly developed code was extended by adding a Monte Carlo solver. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers (BA) compensate for the initial reactivity excess of nuclear fuel and allow an increase of fuel cycle lengths with higher-enriched fuels. The paper describes depletion calculations of VVER nuclear fuel doped with rare earth oxides as burnable absorbers. Based on the performed depletion calculations, the rare earth oxides are divided into two equally numerous groups, suitable burnable absorbers and poisoning absorbers. According to residual poisoning and BA reactivity worth, the rare earth oxides marked as suitable burnable absorbers are Nd, Sm, Eu, Gd, Dy, Ho and Er, while the poisoning absorbers include Sc, La, Lu, Y, Ce, Pr and Tb. The presentation slides have been added to the article.

  11. San Onofre PWR Data for Code Validation of MOX Fuel Depletion Analyses - Revision 1

    International Nuclear Information System (INIS)

    Hermann, O.W.

    2000-01-01

    The isotopic composition of mixed-oxide (MOX) fuel (fabricated with both uranium and plutonium isotopes) discharged from reactors is of interest to the Fissile Material Disposition Program. The validation of depletion codes used to predict isotopic compositions of MOX fuel is therefore very important, as it is for studies of uranium-only fueled reactors. The EEI-Westinghouse Plutonium Recycle Demonstration Program was conducted to examine the use of MOX fuel in the San Onofre PWR, Unit I, during cycles 2 and 3. The data usually required as input to depletion codes, either one-dimensional or lattice codes, were taken from various sources and compiled into this report. Where data were lacking or determined to be inadequate, the appropriate data were supplied from other references. The reactor operations and design data, in addition to the isotopic analyses, were considered to be of sufficient quality for depletion code validation

  12. The development of depletion program coupled with Monte Carlo computer code

    International Nuclear Information System (INIS)

    Nguyen Kien Cuong; Huynh Ton Nghiem; Vuong Huu Tan

    2015-01-01

    The paper presents the development of a depletion code for light water reactors coupled with the MCNP5 code, called MCDL (Monte Carlo Depletion for Light Water Reactors). The first-order differential depletion equations for 21 actinide isotopes and 50 fission product isotopes are solved by the Radau IIA implicit Runge-Kutta (IRK) method, after the one-group neutron flux, reaction rates and multiplication factors for a fuel pin, fuel assembly or the whole reactor core are obtained from MCNP5 calculations. Beryllium poisoning and cooling time are also treated in the code. To verify and validate the MCDL code, VVR-M2 type high-enriched uranium (HEU) and low-enriched uranium (LEU) fuel assemblies, as well as Dalat Nuclear Research Reactor (DNRR) cores with 89 fresh HEU and 92 fresh LEU fuel assemblies, have been investigated and compared with results calculated by the SRAC code and the MCNP-REBUS linkage system code. The results show good agreement between the MCDL code and the reference codes. (author)
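
    The stiff depletion system described above can be illustrated with SciPy's implementation of the Radau IIA implicit Runge-Kutta method. The sketch below is not the MCDL code itself: the 3x3 rate matrix and initial densities are invented stand-ins for the 71-nuclide system.

```python
# Illustration of solving a stiff depletion system with an implicit Radau IIA
# Runge-Kutta method as provided by SciPy; the 3x3 burnup matrix below is a
# stand-in, not the actinide/fission-product system of the MCDL code.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[-1.0e-4,  0.0,     0.0],
              [ 1.0e-4, -2.0e-9,  0.0],
              [ 0.0,      2.0e-9, 0.0]])   # hypothetical rate matrix (1/s)
N0 = np.array([1.0e24, 0.0, 0.0])          # initial number densities

sol = solve_ivp(lambda t, N: A @ N, t_span=(0.0, 1.0e7), y0=N0,
                method="Radau", rtol=1e-8, atol=1.0)
print(sol.y[:, -1])        # end-of-step number densities
```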

  13. Acceleration of the MCNP branch of the OCTOPUS depletion code system

    International Nuclear Information System (INIS)

    Pijlgroms, B.J.; Hogenbirk, A.; Oppe, J.

    1998-09-01

    OCTOPUS depletion calculations using the 3D Monte Carlo spectrum code MCNP (Monte Carlo Code for Neutron and Photon Transport) require much computing time. In a former implementation, the time required by OCTOPUS to perform multi-zone calculations increased roughly in proportion to the number of burnable zones. By using a different method, the situation has improved considerably. In the new implementation described here, the dependence of the computing time on the number of zones has been moved from the MCNP code to a faster post-processing code, which reduces the overall computing time substantially. 11 refs

  14. DANDE: a linked code system for core neutronics/depletion analysis

    International Nuclear Information System (INIS)

    LaBauve, R.J.; England, T.R.; George, D.C.; MacFarlane, R.E.; Wilson, W.B.

    1986-01-01

    This report describes DANDE - a modular neutronics, depletion code system for reactor analysis. It consists of nuclear data processing, core physics, and fuel depletion modules, and allows one to use diffusion and transport methods interchangeably in core neutronics calculations. This latter capability is especially important in the design of small modular cores. Additional unique features include the capability of updating the nuclear data file during a calculation; a detailed depletion treatment of burnable poisons as well as fuel; and the ability to make geometric changes such as control rod repositioning and fuel relocation in the course of a calculation. The detailed treatment of reactor fuel burnup, fission-product creation and decay, as well as inventories of higher-order actinides is a necessity when predicting the behavior of reactor fuel under increased burnup conditions. The operation of the code system is illustrated in this report by two sample problems. 25 refs

  15. DANDE: a linked code system for core neutronics/depletion analysis

    International Nuclear Information System (INIS)

    LaBauve, R.J.; England, T.R.; George, D.C.; MacFarlane, R.E.; Wilson, W.B.

    1985-06-01

    This report describes DANDE - a modular neutronics, depletion code system for reactor analysis. It consists of nuclear data processing, core physics, and fuel depletion modules, and allows one to use diffusion and transport methods interchangeably in core neutronics calculations. This latter capability is especially important in the design of small modular cores. Additional unique features include the capability of updating the nuclear data file during a calculation; a detailed depletion treatment of burnable poisons as well as fuel; and the ability to make geometric changes such as control rod repositioning and fuel relocation in the course of a calculation. The detailed treatment of reactor fuel burnup, fission-product creation and decay, as well as inventories of higher-order actinides is a necessity when predicting the behavior of reactor fuel under increased burnup conditions. The operation of the code system is made clear in this report by following a sample problem

  16. ASSESSMENT OF BURNABLE ABSORBER FUEL DESIGN BY UWB1 DEPLETION CODE

    Directory of Open Access Journals (Sweden)

    Martin Lovecky

    2016-12-01

    The UWB1 depletion code is being developed as a fast computational tool for the study of burnable absorbers at the University of West Bohemia in Pilsen, Czech Republic. Research on fuel depletion aims at the development and introduction of advanced types of burnable absorbers in nuclear fuel. Burnable absorbers compensate for the initial excess reactivity and consequently allow lower power peaking factors and longer fuel cycles with higher fuel enrichments. The paper describes depletion calculations of CANDU, PWR and SFR nuclear fuel doped with rare earth oxides as burnable absorbers. A uniform distribution of the burnable absorber in the fuel is assumed. Based on the performed depletion calculations, the rare earth oxides are divided into two groups, suitable burnable absorbers and poisoning absorbers. Moreover, a basic economic comparison is performed based on actual stock prices.

  17. ORPHEE research reactor: 3D core depletion calculation using Monte-Carlo code TRIPOLI-4®

    Science.gov (United States)

    Damian, F.; Brun, E.

    2014-06-01

    ORPHEE is a research reactor located at CEA Saclay. It aims at producing neutron beams for experiments. It is a pool-type reactor (heavy water), and the core is cooled by light water. Its thermal power is 14 MW. The ORPHEE core is 90 cm high and has a cross section of 27x27 cm2. It is loaded with eight fuel assemblies characterized by varying numbers of fuel plates. The fuel plates are composed of aluminium and High Enriched Uranium (HEU). It is a once-through core with a fuel cycle length of approximately 100 Equivalent Full Power Days (EFPD) and a maximum burnup of 40%. Various analyses under way at CEA concern the determination of the core neutronic parameters during irradiation. Given the geometrical complexity of the core and the quasi-absence of thermal feedback in nominal operation, the 3D core depletion calculations are performed with the Monte Carlo code TRIPOLI-4® [1,2,3]. A preliminary validation of the depletion calculation was performed on a 2D core configuration by comparison with the deterministic transport code APOLLO2 [4]. The analysis showed the reliability of TRIPOLI-4® in calculating a complex core configuration using a large number of depleting regions with a high level of confidence.

  18. Depletion of Shine-Dalgarno Sequences Within Bacterial Coding Regions Is Expression Dependent

    Science.gov (United States)

    Yang, Chuyue; Hockenberry, Adam J.; Jewett, Michael C.; Amaral, Luís A. N.

    2016-01-01

    Efficient and accurate protein synthesis is crucial for organismal survival in competitive environments. Translation efficiency (the number of proteins translated from a single mRNA in a given time period) is the combined result of differential translation initiation, elongation, and termination rates. Previous research identified the Shine-Dalgarno (SD) sequence as a modulator of translation initiation in bacterial genes, while codon usage biases are frequently implicated as a primary determinant of elongation rate variation. Recent studies have suggested that SD sequences within coding sequences may negatively affect translation elongation speed, but this claim remains controversial. Here, we present a metric to quantify the prevalence of SD sequences in coding regions. We analyze hundreds of bacterial genomes and find that the coding sequences of highly expressed genes systematically contain fewer SD sequences than expected, yielding a robust correlation between the normalized occurrence of SD sites and protein abundances across a range of bacterial taxa. We further show that depletion of SD sequences within ribosomal protein genes is correlated with organismal growth rates, supporting the hypothesis of strong selection against the presence of these sequences in coding regions and suggesting their association with translation efficiency in bacteria. PMID:27605518
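
    As a rough illustration of quantifying SD-site prevalence in a coding sequence (the published metric is more elaborate, weighting sites by predicted anti-SD binding strength rather than simple motif counts), a naive motif-density sketch might look like this; the motif list and example sequence are illustrative only.

```python
# Simplified illustration of counting putative Shine-Dalgarno-like sites inside
# a coding sequence. The published metric is more sophisticated; here we only
# count occurrences of a few core SD motifs, normalized by sequence length.

SD_MOTIFS = ("AGGAGG", "GGAGG", "AGGAG")   # core motifs, for illustration only

def sd_density(cds: str) -> float:
    """Naive density of SD-like motif occurrences per nucleotide (overlaps counted)."""
    cds = cds.upper()
    hits = sum(cds.startswith(m, i) for m in SD_MOTIFS for i in range(len(cds)))
    return hits / max(len(cds), 1)

print(sd_density("ATGGCTAGGAGGTTTAAAGGAGGCTGA"))
```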

  19. Depletion of Shine-Dalgarno Sequences Within Bacterial Coding Regions Is Expression Dependent

    Directory of Open Access Journals (Sweden)

    Chuyue Yang

    2016-11-01

    Efficient and accurate protein synthesis is crucial for organismal survival in competitive environments. Translation efficiency (the number of proteins translated from a single mRNA in a given time period) is the combined result of differential translation initiation, elongation, and termination rates. Previous research identified the Shine-Dalgarno (SD) sequence as a modulator of translation initiation in bacterial genes, while codon usage biases are frequently implicated as a primary determinant of elongation rate variation. Recent studies have suggested that SD sequences within coding sequences may negatively affect translation elongation speed, but this claim remains controversial. Here, we present a metric to quantify the prevalence of SD sequences in coding regions. We analyze hundreds of bacterial genomes and find that the coding sequences of highly expressed genes systematically contain fewer SD sequences than expected, yielding a robust correlation between the normalized occurrence of SD sites and protein abundances across a range of bacterial taxa. We further show that depletion of SD sequences within ribosomal protein genes is correlated with organismal growth rates, supporting the hypothesis of strong selection against the presence of these sequences in coding regions and suggesting their association with translation efficiency in bacteria.

  20. Development of SSUBPIC code for modeling the neutral gas depletion effect in helicon discharges

    Science.gov (United States)

    Kollasch, Jeffrey; Sovenic, Carl; Schmitz, Oliver

    2017-10-01

    The SSUBPIC (steady-state unstructured-boundary particle-in-cell) code is being developed to model helicon plasma devices. The envisioned modeling framework incorporates (1) a kinetic neutral particle model, (2) a kinetic ion model, (3) a fluid electron model, and (4) an RF power deposition model. The models are loosely coupled and iterated until convergence to steady state. Of the four required solvers, the kinetic ion and neutral particle simulations can now be done within the SSUBPIC code. Recent SSUBPIC modifications include the implementation and testing of a Coulomb collision model (Lemons et al., JCP, 228(5), pp. 1391-1403) allowing efficient coupling of kinetically treated ions to fluid electrons, and the implementation of a neutral particle tracking mode with charge-exchange and electron-impact ionization physics. These new simulation capabilities are demonstrated working independently and coupled to "dummy" profiles for RF power deposition to converge on steady-state plasma and neutral profiles. The geometry and conditions considered are similar to those of the MARIA experiment at UW-Madison. Initial results qualitatively show the expected neutral gas depletion effect, in which neutrals in the plasma core are not replenished at a rate sufficient to sustain a higher plasma density. This work is funded by the NSF CAREER award PHY-1455210 and NSF Grant PHY-1206421.

  1. Utility subroutine package used by Applied Physics Division export codes

    International Nuclear Information System (INIS)

    Adams, C.H.; Derstine, K.L.; Henryson, H. II; Hosteny, R.P.; Toppel, B.J.

    1983-04-01

    This report describes the current state of the utility subroutine package used with codes being developed by the staff of the Applied Physics Division. The package provides a variety of useful functions for BCD input processing, dynamic core-storage allocation and management, binary I/O and data manipulation. The routines were written to conform to coding standards which facilitate the exchange of programs between different computers

  2. Burnable absorbers in CANDU fuel bundle depletion with UWB1 code

    Energy Technology Data Exchange (ETDEWEB)

    Lovecky, M., E-mail: lovecky@rice.zcu.cz [Univ. of West Bohemia, Pilsen (Czech Republic); Skoda, R., E-mail: radek.skoda@fs.cvut.cz [Czech Technical Univ., Prague (Czech Republic); Hussein, M.; Song, J.; Chan, P., E-mail: mohamed.hussein@rmc.ca, E-mail: jae.song@rmc.ca, E-mail: paul.chan@rmc.ca [Royal Military College of Canada, Kingston, ON (Canada)

    2015-07-01

    The UWB1 nuclear fuel depletion code is being developed by Lovecky et al. to conduct burnable neutron-absorber research for fast and thermal reactor designs. The use of neutron absorbers in CANDU to gain operating margin was proposed by Chan et al. The development of UWB1 and the use of neutron absorbers in CANDU were published in 2014; research and development are still ongoing. This paper describes the simulation of CANDU fuel bundle depletion. The accuracy and speed of the code are compared to the WIMS, Serpent and MCNP6 reference codes. The results show that UWB1 is suitable for use as a depletion code to study the removal of the initial transient and the suppression of the plutonium peaks in CANDU fuel reactivity. The UWB1 code offers an advantage in depletion calculation time. (author)

  3. APPLYING COGNITIVE CODE TOWARDS INDONESIAN EFL LEARNERS’ WRITING COMPETENCE IMPROVEMENT

    Directory of Open Access Journals (Sweden)

    Ita Juita

    2014-06-01

    This classroom action research (CAR) addresses students' problems in a writing class using two cycles of the Kemmis and McTaggart model. Three instruments are used: students' learning journals (to capture the students' thinking in relation to the cognitive code approach and the writing material), the researcher's journal, and a questionnaire. The students' writing problems occur in one class of the English Department of the University of Kuningan, West Java, Indonesia, where the learners find it difficult to turn words into sentences. Applying the cognitive code approach is the strategy of this CAR: by asking students to use tools such as learning journals, they are able to describe their difficulties based on their learning experiences in class. The cognitive code approach views learners as thinking beings who learn from their learning experience. The students' average writing score at the beginning of the research was 40; after applying the cognitive code approach in the teaching-learning process, the class average reached 64.5 in the post-test. The normalized gain measuring the students' writing development is 0.7, meaning the improvement is moderate. The students' attitude toward the cognitive code approach, taken from rating scales, is 82%. Based on these data, it can be concluded that the cognitive code approach is an effective method for teaching writing.

  4. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Soung Chang Liew

    2010-01-01

    A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to “straightforward” network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. In doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.
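
    The core PNC idea for BPSK can be shown in a few lines: when two nodes transmit simultaneously, the relay maps the superposed amplitude directly to the XOR of the transmitted bits. The noiseless toy demo below illustrates only that mapping, not the paper's full scheme (which also addresses synchronization and modulation details).

```python
# Toy demonstration of the physical-layer network coding (PNC) mapping for BPSK:
# when nodes A and B transmit simultaneously, the relay observes the superposed
# amplitude and maps it directly to the XOR of the two bits (noise ignored here).

def bpsk(bit: int) -> float:
    return 1.0 if bit else -1.0

def relay_pnc_map(superposed_amplitude: float) -> int:
    # |sum| == 2 means the bits were equal (XOR = 0); sum == 0 means they differed.
    return 0 if abs(superposed_amplitude) > 1.0 else 1

for a in (0, 1):
    for b in (0, 1):
        received = bpsk(a) + bpsk(b)          # idealized, noiseless superposition
        assert relay_pnc_map(received) == a ^ b
        print(f"A={a} B={b} relay sees {received:+.0f} -> broadcasts {a ^ b}")
```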

  5. Applying Physical-Layer Network Coding in Wireless Networks

    Directory of Open Access Journals (Sweden)

    Liew SoungChang

    2010-01-01

    A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes, simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the broadcast property into a capacity-boosting advantage in wireless ad hoc networks. Specifically, we propose a physical-layer network coding (PNC) scheme to coordinate transmissions among nodes. In contrast to "straightforward" network coding, which performs coding arithmetic on digital bit streams after they have been received, PNC makes use of the additive nature of simultaneously arriving electromagnetic (EM) waves for equivalent coding operation. In doing so, PNC can potentially achieve 100% and 50% throughput increases compared with traditional transmission and straightforward network coding, respectively, in 1D regular linear networks with multiple random flows. The throughput improvements are even larger in 2D regular networks: 200% and 100%, respectively.

  6. Domain Decomposition strategy for pin-wise full-core Monte Carlo depletion calculation with the reactor Monte Carlo Code

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Jingang; Wang, Kan; Qiu, Yishu [Dept. of Engineering Physics, LiuQing Building, Tsinghua University, Beijing (China); Chai, Xiao Ming; Qiang, Sheng Long [Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu (China)

    2016-06-15

    Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.

  7. FORIG: a computer code for calculating radionuclide generation and depletion in fusion and fission reactors. User's manual

    International Nuclear Information System (INIS)

    Blink, J.A.

    1985-03-01

    In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2 from which it was adapted. This report is an updated and a combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs

  8. Description of WWER-440 fuel assembly 3D depletion model developed with TRITON6 code (SCALE 6.0)

    International Nuclear Information System (INIS)

    2014-07-01

    One of the key components strongly influencing the accuracy of burnup credit methodology is the precision of the spent nuclear fuel isotopic composition prediction. To enhance this accuracy, a 3D depletion model of WWER-440 nuclear fuel was developed with TRITON6. TRITON6 couples the ORIGEN-S depletion code with the 3D neutron transport solver KENO-VI. This kind of coupling allows the ORIGEN-S cross-section libraries to be updated with 3D problem-dependent neutron fluxes computed by KENO-VI, hence significantly improving the accuracy of the isotopic composition predictions. WWER-440 fuel assembly destructive experimental results from RIAR Dimitrovgrad are planned to be used for verification and validation of the developed model. Therefore, the WWER-440 fuel assembly model was developed taking into account the peculiarities of the destructive experimental results.

  9. Fast frequency hopping codes applied to SAC optical CDMA network

    Science.gov (United States)

    Tseng, Shin-Pin

    2015-06-01

    This study designed a fast frequency hopping (FFH) code family suitable for application in spectral-amplitude-coding (SAC) optical code-division multiple-access (CDMA) networks. The FFH code family can effectively suppress the effects of multiuser interference and had its origin in the frequency hopping code family. Additional codes were developed as secure codewords for enhancing the security of the network. In considering the system cost and flexibility, simple optical encoders/decoders using fiber Bragg gratings (FBGs) and a set of optical securers using two arrayed-waveguide grating (AWG) demultiplexers (DeMUXs) were also constructed. Based on a Gaussian approximation, expressions for evaluating the bit error rate (BER) and spectral efficiency (SE) of SAC optical CDMA networks are presented. The results indicated that the proposed SAC optical CDMA network exhibited favorable performance.

  10. Lattices applied to coding for reliable and secure communications

    CERN Document Server

    Costa, Sueli I R; Campello, Antonio; Belfiore, Jean-Claude; Viterbo, Emanuele

    2017-01-01

    This book provides a first course on lattices – mathematical objects pertaining to the realm of discrete geometry, which are of interest to mathematicians for their structure and, at the same time, are used by electrical and computer engineers working on coding theory and cryptography. The book presents both fundamental concepts and a wealth of applications, including coding and transmission over Gaussian channels, techniques for obtaining lattices from finite prime fields and quadratic fields, constructions of spherical codes, and hard lattice problems used in cryptography. The topics selected are covered in a level of detail not usually found in reference books. As the range of applications of lattices continues to grow, this work will appeal to mathematicians, electrical and computer engineers, and graduate or advanced undergraduate students in these fields.

  11. Applying a rateless code in content delivery networks

    Science.gov (United States)

    Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan

    2017-09-01

    A content delivery network (CDN) allows internet providers to place their services and map their coverage onto networks without necessarily owning them. CDNs are part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have been proposed to improve CDN performance. Since accesses to social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
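
    For readers unfamiliar with rateless codes, the sketch below shows the basic idea with a toy LT-style encoder and peeling decoder; the degree distribution, block sizes and packet count are placeholders (a practical implementation would use a robust soliton distribution), and this is not the code evaluated in the paper.

```python
# Minimal sketch of a rateless (LT-style) code over packet payloads: each coded
# packet XORs a random subset of source blocks, and the receiver decodes by
# peeling. The degree distribution here is a crude placeholder.
import os
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, rng):
    """Yield an endless stream of (indices, payload) coded packets."""
    while True:
        degree = rng.choice([1, 1, 2, 2, 3, 4])          # placeholder distribution
        idx = rng.sample(range(len(blocks)), k=min(degree, len(blocks)))
        payload = blocks[idx[0]]
        for i in idx[1:]:
            payload = xor(payload, blocks[i])
        yield set(idx), payload

def decode(packets, n_blocks):
    """Peeling decoder: repeatedly resolve packets covering a single unknown block."""
    known = {}
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for idx, payload in packets:
            unknown = idx - known.keys()
            if len(unknown) == 1:
                for i in idx & known.keys():
                    payload = xor(payload, known[i])
                known[unknown.pop()] = payload
                progress = True
    return [known.get(i) for i in range(n_blocks)]

rng = random.Random(1)
source = [os.urandom(8) for _ in range(4)]
stream = encode(source, rng)
received = [next(stream) for _ in range(10)]   # as if 10 coded packets arrived
print(decode(received, len(source)) == source)  # True if all blocks were recovered
```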

  12. Grounded Theorising Applied to IS Research - Developing a Coding Strategy

    Directory of Open Access Journals (Sweden)

    Bruce Rowlands

    2005-05-01

    This paper provides an example of developing a coding strategy to build theory of the roles of methods in IS development. The research seeks to identify and understand how system development methods are used in an IS department within a large Australian bank. The paper details a theoretical framework, particulars of data collection, and documents an early phase of analysis – data reduction and the generation of an initial coding scheme. Guided by a framework to study the use of methods, the analysis demonstrates the framework’s plausibility in order to develop theoretical relationships with which to develop a grounded theory.

  13. C code generation applied to nonlinear model predictive control for an artificial pancreas

    DEFF Research Database (Denmark)

    Boiroux, Dimitri; Jørgensen, John Bagterp

    2017-01-01

    This paper presents a method to generate C code from MATLAB code applied to a nonlinear model predictive control (NMPC) algorithm. The C code generation uses the MATLAB Coder Toolbox. It can drastically reduce the time required for development compared to a manual porting of code from MATLAB to C...... of glucose regulation for people with type 1 diabetes as a case study. The average computation time when using generated C code is 0.21 s (MATLAB: 1.5 s), and the maximum computation time when using generated C code is 0.97 s (MATLAB: 5.7 s). Compared to the MATLAB implementation, generated C code can run...

  14. Adaptive Wavelet Coding Applied in a Wireless Control System

    Science.gov (United States)

    Gama, Felipe O. S.; O. Salazar, Andrés

    2017-01-01

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop. PMID:29236048

  15. Adaptive Wavelet Coding Applied in a Wireless Control System

    Directory of Open Access Journals (Sweden)

    Felipe O. S. Gama

    2017-12-01

    Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  16. PLUTON: Three-group neutronic code for burnup analysis of isotope generation and depletion in highly irradiated LWR fuel rods

    Energy Technology Data Exchange (ETDEWEB)

    Lemehov, Sergei E; Suzuki, Motoe [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-08-01

    PLUTON is a three-group neutronic code analyzing, as functions of time and burnup, the change of radial profiles, together with average values, of power density, burnup, concentration of trans-uranium elements, plutonium buildup, depletion of fissile elements, and fission product generation in water reactor fuel rods with standard UO2, UO2-Gd2O3, inhomogeneous MOX, and UO2-ThO2 fuel. The PLUTON code, which has been designed to run on Windows PCs, has adopted a theoretical shape function of neutron attenuation in the pellet, which enables users to perform very fast and accurate calculations easily. The code includes the irradiation conditions of the Halden Reactor, which provides verification data for the code. The trans-uranium elements included in the calculations are U-233 to U-239, Np-237 to Np-239, Pu-238 to Pu-243, Am-241 to Am-244 (including isomers), and Cm-242 to Cm-245. Poisoning fission products are represented by Xe-131/133/135, Cd-113, Sm-149/151/152, Gd-154 to Gd-160, Eu-153/155, Kr-83/85, Mo-95, Tc-99, Rh-103, Ag-109, I-127/129/131, Cs-133, La-139, Pr-141, Nd-143 to Nd-150, and Pm-147. Fission gases and volatiles included in the code are Kr-83 to Kr-86, Xe-129 to Xe-136, Te-125 to Te-130, I-127 to I-131, Cs-133 to Cs-137, and Ba-135 to Ba-140. Verification has been performed up to 83 GWd/tU, and satisfactory agreement has been obtained. (author)

  17. Development of a new nuclide generation and depletion code using a topological solver based on graph theory

    International Nuclear Information System (INIS)

    Kasselmann, S.; Scholthaus, S.; Rössel, C.; Allelein, H.-J.

    2014-01-01

    Calculating the time-dependent amounts of a coupled nuclide system, especially when exposed to a neutron flux, is a well-known problem that has been addressed by a number of computer codes. These codes cover a broad spectrum of applications and are based on comprehensive validation work, and they are therefore justifiably renowned among their users. However, due to their long development history, they lack a modern interface, which impedes fast and robust internal coupling to other codes applied in the field of nuclear reactor physics. Therefore, a project has been initiated to develop a new object-oriented nuclide transmutation code. It comprises an innovative solver based on graph theory, which exploits the topology of nuclide chains and thus always deals with the smallest nuclide system relevant to the problem of interest. Highest priority has been given to a generic software interface as well as easy handling, making use of XML files for input and output. In this paper we report on the status of the code development and present first benchmark results, which prove the applicability of the selected approach. (author)
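
    The graph-theoretic idea of always working with the smallest relevant nuclide system can be illustrated in a few lines of Python: represent production paths as a directed graph and keep only the nuclides reachable from those initially present. The tiny chain below is invented and far simpler than a real transmutation library.

```python
# Sketch of the graph-theoretic idea described above: represent the nuclide
# system as a directed graph of production paths and keep only the nuclides
# reachable from the initially present ones, so the solver always works on the
# smallest relevant sub-system. The tiny chain data are invented.
from collections import deque

# edges: parent -> daughters (decay or neutron-induced production paths)
EDGES = {
    "U-238":   ["U-239"],
    "U-239":   ["Np-239"],
    "Np-239":  ["Pu-239"],
    "Pu-239":  ["Pu-240"],
    "Cs-137":  ["Ba-137m"],         # present in the library but unreachable here
    "Ba-137m": ["Ba-137"],
}

def reachable(initial):
    """Breadth-first search over the production graph."""
    seen, queue = set(initial), deque(initial)
    while queue:
        nuc = queue.popleft()
        for daughter in EDGES.get(nuc, []):
            if daughter not in seen:
                seen.add(daughter)
                queue.append(daughter)
    return seen

print(sorted(reachable({"U-238"})))   # only the U-238 branch is retained
```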

  18. A New 3D Maser Code Applied to Flaring Events

    Science.gov (United States)

    Gray, M. D.; Mason, L.; Etoka, S.

    2018-03-01

    We set out the theory and discretization scheme for a new finite-element computer code, written specifically for the simulation of maser sources. The code was used to compute fractional inversions at each node of a 3D domain for a range of optical thicknesses. The saturation behaviour of the nodes with regard to location and optical depth was broadly as expected. We have demonstrated via formal solutions of the radiative transfer equation that the apparent size of the model maser cloud decreases, as expected, with optical depth as viewed by a distant observer. Simulations of rotation of the cloud allowed the construction of light curves for a number of observable quantities. Rotation of the model cloud may be a reasonable model for quasi-periodic variability, but cannot explain periodic flaring.

  19. Domain Decomposition Strategy for Pin-wise Full-Core Monte Carlo Depletion Calculation with the Reactor Monte Carlo Code

    Directory of Open Access Journals (Sweden)

    Jingang Liang

    2016-06-01

    Because of prohibitive data storage requirements in large-scale simulations, the memory problem is an obstacle for Monte Carlo (MC) codes in accomplishing pin-wise three-dimensional (3D) full-core calculations, particularly for whole-core depletion analyses. Various kinds of data are evaluated and quantitative total memory requirements are analyzed based on the Reactor Monte Carlo (RMC) code, showing that tally data, material data, and isotope densities in depletion are three major parts of memory storage. The domain decomposition method is investigated as a means of saving memory, by dividing spatial geometry into domains that are simulated separately by parallel processors. For the validity of particle tracking during transport simulations, particles need to be communicated between domains. In consideration of efficiency, an asynchronous particle communication algorithm is designed and implemented. Furthermore, we couple the domain decomposition method with the MC burnup process, under a strategy of utilizing consistent domain partition in both transport and depletion modules. A numerical test of 3D full-core burnup calculations is carried out, indicating that the RMC code, with the domain decomposition method, is capable of pin-wise full-core burnup calculations with millions of depletion regions.

  20. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin [POSCO Nuclear Technology, Seoul (Korea, Republic of)

    2013-05-15

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted, and as a result the PLC may execute the wrong instructions or refer to the wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code and then inspect whether it is suitable for the memory system of the processor module. We applied the Hamming code to the existing safety grade PLC (POSAFE-Q). Inspection data are collected and will be used as a reference for improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation, including CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety grade PLC. Hamming code-based approaches can correct bit errors, but they are limited in handling multi-bit errors.
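
    As a concrete illustration of the error-correction principle being applied (the PLC memory in the paper uses a wider code word; the textbook Hamming(7,4) example below is only a minimal sketch), encoding, single-bit corruption and syndrome-based correction look like this:

```python
# Sketch of single-error-correcting Hamming(7,4) encoding/decoding, illustrating
# the kind of protection discussed above (the actual PLC memory word is wider;
# this is a minimal textbook example).

def encode(d):                     # d: 4 data bits, e.g. [1, 0, 1, 1]
    c = [0] * 8                    # c[1..7] used; parity bits at positions 1, 2, 4
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]                   # 7-bit codeword

def decode(word):                  # word: 7 received bits
    c = [0] + list(word)
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:                   # non-zero syndrome points at the flipped bit
        c[syndrome] ^= 1
    return [c[3], c[5], c[6], c[7]], syndrome

data = [1, 0, 1, 1]
codeword = encode(data)
codeword[4] ^= 1                   # inject a single-bit memory error (position 5)
corrected, pos = decode(codeword)
print(corrected == data, "error at position", pos)
```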

  1. Applying Hamming Code to Memory System of Safety Grade PLC (POSAFE-Q) Processor Module

    International Nuclear Information System (INIS)

    Kim, Taehee; Hwang, Sungjae; Park, Gangmin

    2013-01-01

    If errors such as inverted bits occur in the memory, instructions and data will be corrupted, and as a result the PLC may execute the wrong instructions or refer to the wrong data. A Hamming code can be considered as a solution for mitigating this misoperation. In this paper, we apply a Hamming code and then inspect whether it is suitable for the memory system of the processor module. We applied the Hamming code to the existing safety grade PLC (POSAFE-Q). Inspection data are collected and will be used as a reference for improving the soundness of the PLC. In future work, we will try to reduce the time delay caused by the Hamming calculation, including CPLD optimization and alteration of the memory architecture or parts. In addition to these Hamming code-based works, we will explore other methodologies, such as mirroring, for the soundness of the safety grade PLC. Hamming code-based approaches can correct bit errors, but they are limited in handling multi-bit errors.

  2. SAS6. User's guide. A two-dimensional depletion and criticality analysis code package based on the SCALE-4 system

    International Nuclear Information System (INIS)

    Leege, P.F.A. de; Li, J.M.; Kloosterman, J.L.

    1995-04-01

    This users' guide describes the functionality and the required input of the SAS6 code sequence, which can be used to perform burnup and criticality calculations based on functional modules from the SCALE-4 code system and libraries. The input file for the SAS6 control module is very similar to that of the other SAS and CSAS control modules available in the SCALE-4 system; in particular, the geometry input of SAS6 is quite similar to that of SAS2H. However, the functionality of SAS6 differs from that of SAS2H. The geometry of the reactor lattice can be treated in more detail because use is made of the two-dimensional lattice code WIMS-D/IRI (an adapted version of WIMS-D/4) instead of the one-dimensional transport code XSDRNPM-S. The neutron absorption and production rates of nuclides not explicitly specified in the input can also be accounted for by six pseudo nuclides. Furthermore, the centre pin can be subdivided into at most 10 zones to improve the burnup calculation of the centre pin and to obtain more accurate k-infinity values for the assembly. The time step specification is also more flexible than in the SAS2H sequence. (orig.)

  3. ORIGEN-ARP 2.00, Isotope Generation and Depletion Code System-Matrix Exponential Method with GUI and Graphics Capability

    International Nuclear Information System (INIS)

    2002-01-01

    1 - Description of program or function: ORIGEN-ARP was developed for the Nuclear Regulatory Commission and the Department of Energy to satisfy a need for an easy-to-use standardized method of isotope depletion/decay analysis for spent fuel, fissile material, and radioactive material. It can be used to solve for spent fuel characterization, isotopic inventory, radiation source terms, and decay heat. This release of ORIGEN-ARP is a standalone code package that contains an updated version of the SCALE-4.4a ORIGEN-S code. It contains a subset of the modules, data libraries, and miscellaneous utilities in SCALE-4.4a. This package is intended for users who do not need the entire SCALE package. ORIGEN-ARP 2.00 (2-12-2002) differs from the previous release ORIGEN-ARP 1.0 (July 2001) in the following ways: 1. The neutron source and energy spectrum routines were replaced with computational algorithms and data from the SOURCES-4B code (RSICC package CCC-661) to provide more accurate spontaneous fission and (alpha,n) neutron sources, and a delayed neutron source capability was added. 2. The printout of the fixed energy group structure photon tables was removed. Gamma sources and spectra are now printed for calculations using the Master Photon Library only. 2 - Methods: ORIGEN-ARP is an automated sequence to perform isotopic depletion/decay calculations using the ARP and ORIGEN-S codes of the SCALE system. The sequence includes the OrigenArp for Windows graphical user interface (GUI) that prepares input for ARP (Automated Rapid Processing) and ORIGEN-S. ARP automatically interpolates cross sections for the ORIGEN-S depletion/decay analysis using enrichment, burnup, and, optionally, moderator density, from a set of libraries generated with the SCALE SAS2 depletion sequence. Library sets for four LWR fuel assembly designs (BWR 8 x 8, PWR 14 x 14, 15 x 15, 17 x 17) are included. The libraries span enrichments from 1.5 to 5 wt% U-235 and burnups of 0 to 60,000 MWD/MTU. Other
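
    The matrix exponential method named in the package title can be illustrated, in a heavily reduced form, by solving the depletion/decay balance dN/dt = A N for a toy three-nuclide chain with scipy's expm; the chain, half-lives, and time points below are invented for the sketch and are unrelated to the ORIGEN-S data libraries.

```python
import numpy as np
from scipy.linalg import expm

# Toy decay chain A -> B -> C (stable); dN/dt = M @ N, so N(t) = expm(M * t) @ N(0).
lam_a = np.log(2) / 5.0          # hypothetical 5-day half-life of nuclide A
lam_b = np.log(2) / 30.0         # hypothetical 30-day half-life of nuclide B
M = np.array([[-lam_a,    0.0, 0.0],
              [ lam_a, -lam_b, 0.0],
              [   0.0,  lam_b, 0.0]])

n0 = np.array([1.0e20, 0.0, 0.0])            # initial atom inventory
for t in (1.0, 10.0, 100.0):                 # days
    n_t = expm(M * t) @ n0
    print(f"t = {t:6.1f} d  A = {n_t[0]:.3e}  B = {n_t[1]:.3e}  C = {n_t[2]:.3e}")

# Columns of M sum to zero, so the total number of atoms is conserved.
assert np.isclose((expm(M * 100.0) @ n0).sum(), n0.sum())
```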

  4. Experimental scrambling and noise reduction applied to the optical encryption of QR codes.

    Science.gov (United States)

    Barrera, John Fredy; Vélez, Alejandro; Torroba, Roberto

    2014-08-25

    In this contribution, we implement two techniques to reinforce optical encryption, which we restrict in particular to the QR codes, but could be applied in a general encoding situation. To our knowledge, we present the first experimental-positional optical scrambling merged with an optical encryption procedure. The inclusion of an experimental scrambling technique in an optical encryption protocol, in particular dealing with a QR code "container", adds more protection to the encoding proposal. Additionally, a nonlinear normalization technique is applied to reduce the noise over the recovered images besides increasing the security against attacks. The opto-digital techniques employ an interferometric arrangement and a joint transform correlator encrypting architecture. The experimental results demonstrate the capability of the methods to accomplish the task.
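
    A purely digital analogue of the positional scrambling step (the paper implements it optically with an interferometric joint transform correlator, which is not reproduced here) is a keyed pseudo-random permutation of pixel positions; the image size and seed below are arbitrary placeholders.

```python
import numpy as np

def scramble(img, seed):
    """Keyed positional scrambling: permute pixel positions with a seeded permutation."""
    perm = np.random.default_rng(seed).permutation(img.size)
    return img.ravel()[perm].reshape(img.shape), perm

def descramble(scrambled, perm):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.ravel()           # invert the permutation
    return flat.reshape(scrambled.shape)

# Stand-in for a QR "container": a random 21x21 binary image.
qr_like = (np.random.default_rng(0).random((21, 21)) > 0.5).astype(np.uint8)
scrambled, key = scramble(qr_like, seed=1234)
restored = descramble(scrambled, key)
assert np.array_equal(restored, qr_like)     # only a holder of the key recovers the positions
```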

  5. Photovoltaic and solar-thermal technologies in residential building codes, tackling building code requirements to overcome the impediments to applying new technologies

    Energy Technology Data Exchange (ETDEWEB)

    Wortman, D.; Echo-Hawk, L. [authors] and Wiechman, J.; Hayter, S.; Gwinner, D. [eds.]

    1999-10-04

    This report describes the building code requirements and impediments to applying photovoltaic (PV) and solar-thermal technologies in residential buildings (one- or two-family dwellings). It reviews six modern model building codes that represent the codes to be adopted by most locations in the coming years: International Residential Code, First Draft (IRC), International Energy Conservation Code (IECC), International Mechanical Code (IMC), International Plumbing Code (IPC), International Fuel Gas Code (IFGC), and National Electrical Code (NEC). The IRC may become the basis for many of the building codes in the United States after it is released in 2000, and it references the other codes that will also likely become applicable at that time. These codes are reviewed as they apply to photovoltaic systems in buildings and building-integrated photovoltaic systems and to active-solar domestic hot-water and space-heating systems. The first discussion is on general code issues that impact these technologies, for example, solar access and sustainability. Second, the discussion investigates the relationship of the technologies to the codes, providing examples, while keeping two major issues in mind: How do the codes treat these technologies as building components? and Do the IECC and other codes allow reasonable credit for the energy impacts of the technologies? The codes can impact the implementation of the above technologies in several ways: (1) The technology is not mentioned in the codes. This may be an obstacle to implementing the technology, and the solution is to develop appropriate explicit sections or language in the codes. (2) The technology is discussed by the codes, but the language is confusing or ambiguous. The solution is to clarify the language. (3) The technology is discussed in the codes, but the discussion is spread over several sections or different codes. Practitioners may not easily find all of the relevant material that should be considered. The

  6. Depleted uranium

    International Nuclear Information System (INIS)

    Huffer, E.; Nifenecker, H.

    2001-02-01

    This document deals with the physical, chemical and radiological properties of depleted uranium. What is depleted uranium? Why do the military use depleted uranium, and what are the risks for health? (A.L.B.)

  7. APPLYING SPARSE CODING TO SURFACE MULTIVARIATE TENSOR-BASED MORPHOMETRY TO PREDICT FUTURE COGNITIVE DECLINE.

    Science.gov (United States)

    Zhang, Jie; Stonnington, Cynthia; Li, Qingyang; Shi, Jie; Bauer, Robert J; Gutman, Boris A; Chen, Kewei; Reiman, Eric M; Thompson, Paul M; Ye, Jieping; Wang, Yalin

    2016-04-01

    Alzheimer's disease (AD) is a progressive brain disease. Accurate diagnosis of AD and its prodromal stage, mild cognitive impairment, is crucial for clinical trial design. There is also growing interest in identifying brain imaging biomarkers that help evaluate AD risk presymptomatically. Here, we applied a recently developed multivariate tensor-based morphometry (mTBM) method to extract features from hippocampal surfaces derived from anatomical brain MRI. For such surface-based features, the feature dimension is usually much larger than the number of subjects. We used dictionary learning and sparse coding to effectively reduce the feature dimensions. With the new features, an AdaBoost classifier was employed for binary group classification. In tests on publicly available data from the Alzheimer's Disease Neuroimaging Initiative, the new framework outperformed several standard imaging measures in classifying different stages of AD. The new approach combines the efficiency of sparse coding with the sensitivity of surface mTBM, and boosts classification performance.
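
    A minimal sketch of the pipeline described (dictionary learning and sparse coding to reduce a feature dimension much larger than the number of subjects, followed by an AdaBoost classifier), run on synthetic data with scikit-learn rather than on ADNI hippocampal surface features:

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_features = 80, 500             # feature dimension >> number of subjects
X = rng.normal(size=(n_subjects, n_features))
y = rng.integers(0, 2, size=n_subjects)      # synthetic binary group labels
X[y == 1, :25] += 0.8                        # inject a weak group difference

# Sparse coding: learn a small dictionary and represent each subject by its sparse codes.
dico = DictionaryLearning(n_components=20, transform_algorithm="lasso_lars",
                          transform_alpha=0.1, max_iter=20, random_state=0)
codes = dico.fit_transform(X)                # shape (n_subjects, 20)

clf = AdaBoostClassifier(n_estimators=100, random_state=0)
print("CV accuracy on sparse codes:", cross_val_score(clf, codes, y, cv=5).mean())
```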

  8. Uncertainty propagation applied to multi-scale thermal-hydraulics coupled codes. A step towards validation

    Energy Technology Data Exchange (ETDEWEB)

    Geffray, Clotaire Clement

    2017-03-20

    The work presented here constitutes an important step towards the validation of the use of coupled system thermal-hydraulics and computational fluid dynamics codes for the simulation of complex flows in liquid metal cooled pool-type facilities. First, a set of methods suited for uncertainty and sensitivity analysis and validation activities with regards to the specific constraints of the work with coupled and expensive-to-run codes is proposed. Then, these methods are applied to the ATHLET - ANSYS CFX model of the TALL-3D facility. Several transients performed at this latter facility are investigated. The results are presented, discussed and compared to the experimental data. Finally, assessments of the validity of the selected methods and of the quality of the model are offered.

  9. Applying a System Dynamics Approach for Modeling Groundwater Dynamics to Depletion under Different Economical and Climate Change Scenarios

    Directory of Open Access Journals (Sweden)

    Hamid Balali

    2015-09-01

    Full Text Available In recent decades, due to many different factors, including climate change effects towards warming and lower precipitation, as well as structural policies such as more intensive harvesting of groundwater and the low price of irrigation water, the groundwater level has decreased in most plains of Iran. The objective of this study is to model groundwater dynamics to depletion under different economic policies and climate change by using a system dynamics approach. For this purpose, a dynamic hydro-economic model which simultaneously simulates the farmers' economic behavior, groundwater aquifer dynamics, the climatology factors of the studied area and the government's economic policies related to groundwater is developed using STELLA 10.0.6. The vulnerability of the groundwater balance is forecast under three climate scenarios (Dry, Normal and Wet) and also different scenarios of irrigation water and energy pricing policies. Results show that implementation of some economic policies on irrigation water and energy pricing can significantly affect groundwater exploitation and its volume balance. By increasing the irrigation water price along with the energy price, groundwater exploitation will improve, insofar as in scenarios S15 and S16 the studied area's aquifer groundwater balance is positive at the end of the planning horizon, even in the Dry precipitation condition. Also, results indicate that climate change can affect groundwater recharge. It can generally be expected that increases in precipitation would produce greater aquifer recharge rates.
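
    The stock-and-flow structure of such a system dynamics model can be sketched independently of STELLA: the aquifer volume is a stock, recharge and pumping are flows, and pricing policies act on the pumping term. All coefficients and scenario values below are illustrative placeholders, not parameters of the studied Iranian plain.

```python
# Minimal stock-and-flow sketch of an aquifer balance; every coefficient is illustrative.
def simulate(years=30, precip_mm=250.0, water_price=1.0, energy_price=1.0):
    storage = 500.0                 # aquifer stock, million m3
    recharge_per_mm = 0.4           # recharge per mm of annual precipitation, million m3/mm
    base_pumping = 80.0             # million m3/yr pumped at reference prices
    price_elasticity = -0.3         # pumping response to the combined price index
    history = []
    for _ in range(years):
        recharge = recharge_per_mm * precip_mm                                     # inflow
        pumping = base_pumping * (water_price * energy_price) ** price_elasticity  # outflow
        storage = max(0.0, storage + recharge - pumping)                           # 1-year Euler step
        history.append(storage)
    return history

dry_reference = simulate(precip_mm=180.0)                                   # dry climate, reference prices
dry_priced = simulate(precip_mm=180.0, water_price=2.0, energy_price=1.5)   # dry climate, higher prices
print("final storage, dry / reference prices:", round(dry_reference[-1], 1))
print("final storage, dry / higher prices   :", round(dry_priced[-1], 1))
```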

  10. Numerical simulations of hydrodynamic instabilities: perturbation codes Pansy, Perle, and 2D code Chic applied to a realistic LIL target

    Energy Technology Data Exchange (ETDEWEB)

    Hallo, L.; Olazabal-Loume, M.; Maire, P.H.; Breil, J.; Schurtz, G. [CELIA, 33 - Talence (France); Morse, R.L. [Arizona Univ., Dept. of Nuclear Engineering, Tucson (United States)

    2006-06-15

    This paper deals with simulations of ablation front instabilities in the context of direct-drive inertial confinement fusion. A simplified deuterium-tritium target, representative of a realistic target on LIL (laser integration line at the Megajoule laser facility), is considered. We describe here two numerical approaches: the linear perturbation method using the perturbation codes Perle (planar) and Pansy (spherical), and the direct simulation method using our two-dimensional hydrodynamic code Chic. Our work shows good behaviour of all methods, even for large wavenumbers, during the acceleration phase of the ablation front. We also point out good agreement between model and numerical predictions at the ablation front during the shock wave transit.

  11. STH-CFD Codes Coupled Calculations Applied to HLM Loop and Pool Systems

    Directory of Open Access Journals (Sweden)

    M. Angelucci

    2017-01-01

    Full Text Available This work describes the coupling methodology between a modified version of RELAP5/Mod3.3 and ANSYS Fluent CFD code developed at the University of Pisa. The described coupling procedure can be classified as “two-way,” nonoverlapping, “online” coupling. In this work, a semi-implicit numerical scheme has been implemented, giving greater stability to the simulations. A MATLAB script manages both the codes, oversees the reading and writing of the boundary conditions at the interfaces, and handles the exchange of data. A new tool was used to control the Fluent session, allowing a reduction of the time required for the exchange of data. The coupling tool was used to simulate a loop system (NACIE facility and a pool system (CIRCE facility, both working with Lead Bismuth Eutectic and located at ENEA Brasimone Research Centre. Some modifications in the coupling procedure turned out to be necessary to apply the methodology in the pool system. In this paper, the comparison between the obtained coupled numerical results and the experimental data is presented. The good agreement between experiments and calculations evinces the capability of the coupled calculation to model correctly the involved phenomena.
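
    The exchange logic of a "two-way", non-overlapping, online coupling with a semi-implicit (iterated, under-relaxed) scheme can be sketched generically; the two solver stand-ins below are trivial functions rather than RELAP5 or Fluent, and the relaxation and convergence settings are invented.

```python
# Generic sketch of a semi-implicit, two-way coupling: within each time step the two
# solvers exchange interface values until the update is smaller than a tolerance.
def system_solver(t_interface):
    return 0.9 * t_interface + 30.0      # stand-in for the system thermal-hydraulics domain

def cfd_solver(t_interface):
    return 0.95 * t_interface + 10.0     # stand-in for the CFD domain

def advance_one_step(t_interface, relax=0.5, tol=1e-8, max_iter=50):
    for it in range(1, max_iter + 1):
        t_from_sth = system_solver(t_interface)   # STH passes its boundary value to the CFD side
        t_from_cfd = cfd_solver(t_from_sth)       # CFD returns the value seen at the STH boundary
        t_new = (1.0 - relax) * t_interface + relax * t_from_cfd   # under-relaxed update
        if abs(t_new - t_interface) < tol:
            return t_new, it
        t_interface = t_new
    return t_interface, max_iter

t = 300.0                                         # arbitrary initial interface value
for step in range(3):
    t, iters = advance_one_step(t)
    print(f"step {step}: interface value {t:.3f} after {iters} coupling iterations")
```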

  12. CodeRAnts: A recommendation method based on collaborative searching and ant colonies, applied to reusing of open source code

    Directory of Open Access Journals (Sweden)

    Isaac Caicedo-Castro

    2014-01-01

    Full Text Available This paper presents CodeRAnts, a new recommendation method based on a collaborative searching technique and inspired by the ant colony metaphor. This method aims to fill a gap in the current state of the art regarding recommender systems for software reuse, in which prior works present two problems. The first is that recommender systems based on these works cannot learn from the collaboration of programmers; the second is that the outcomes of assessments carried out on these systems show low precision and recall measures, and in some of these systems these metrics have not been evaluated. The work presented in this paper contributes a recommendation method which solves these problems.

  13. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments.

    Science.gov (United States)

    Santos, José; Monteagudo, Angel

    2011-02-21

    As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the fact that the best possible codes show the patterns of the
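
    The structure of such a search can be sketched with a toy genetic algorithm: a code is a map from the 64 codons to amino-acid labels, the fitness is the mean squared change of an amino-acid property over all single-point mutations, and mutation acts as codon reassignment. The property values here are random placeholders, so the numbers say nothing about the real canonical code; only the shape of the computation is illustrated.

```python
import random

BASES = "UCAG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]

# Placeholder setup: 21 labels (20 amino acids + stop) with a random "property" value each.
# Real studies use the canonical codon table and measured properties such as polar requirement.
random.seed(1)
LABELS = list(range(21))
PROPERTY = {lab: random.uniform(0.0, 10.0) for lab in LABELS}

def random_code():
    return {codon: random.choice(LABELS) for codon in CODONS}

def fitness(code):
    """Mean squared property change over all single-point mutations (lower = better adapted)."""
    total, count = 0.0, 0
    for codon in CODONS:
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                diff = PROPERTY[code[codon]] - PROPERTY[code[mutant]]
                total += diff * diff
                count += 1
    return total / count

def evolve(generations=50, pop_size=30, mut_rate=0.05):
    population = [random_code() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness)
        survivors = population[: pop_size // 2]          # truncation selection
        children = []
        for parent in survivors:
            child = dict(parent)
            for codon in CODONS:                         # mutation = codon reassignment
                if random.random() < mut_rate:
                    child[codon] = random.choice(LABELS)
            children.append(child)
        population = survivors + children
    return min(fitness(code) for code in population)

print("fitness of one random code  :", round(fitness(random_code()), 3))
print("best fitness after evolution:", round(evolve(), 3))
```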

  14. Simulated evolution applied to study the genetic code optimality using a model of codon reassignments

    Directory of Open Access Journals (Sweden)

    Monteagudo Ángel

    2011-02-01

    Full Text Available Abstract Background As the canonical code is not universal, different theories about its origin and organization have appeared. The optimization or level of adaptation of the canonical genetic code was measured taking into account the harmful consequences resulting from point mutations leading to the replacement of one amino acid for another. There are two basic theories to measure the level of optimization: the statistical approach, which compares the canonical genetic code with many randomly generated alternative ones, and the engineering approach, which compares the canonical code with the best possible alternative. Results Here we used a genetic algorithm to search for better adapted hypothetical codes and as a method to guess the difficulty in finding such alternative codes, allowing us to clearly situate the canonical code in the fitness landscape. This novel proposal of the use of evolutionary computing provides a new perspective in the open debate between the use of the statistical approach, which postulates that the genetic code conserves amino acid properties far better than expected from a random code, and the engineering approach, which tends to indicate that the canonical genetic code is still far from optimal. We used two models of hypothetical codes: one that reflects the known examples of codon reassignment and the model most used in the two approaches which reflects the current genetic code translation table. Although the standard code is far from a possible optimum considering both models, when the more realistic model of the codon reassignments was used, the evolutionary algorithm had more difficulty overcoming the efficiency of the canonical genetic code. Conclusions Simulated evolution clearly reveals that the canonical genetic code is far from optimal regarding its optimization. Nevertheless, the efficiency of the canonical code increases when mistranslations are taken into account with the two models, as indicated by the

  15. Modification of PRETOR Code to Be Applied to Transport Simulation in Stellarators

    Energy Technology Data Exchange (ETDEWEB)

    Fontanet, J.; Castejon, F.; Dies, J.; Fontdecaba, J.; Alejaldre, C.

    2001-07-01

    The 1.5D transport code PRETOR, which has previously been used to simulate tokamak plasmas, has been modified to perform transport analysis in stellarator geometry. The main modifications introduced in the code are related to the magnetic equilibrium and to the modelling of energy and particle transport. As a result, a PRETOR-Stellarator version has been produced and the code is suitable for performing simulations of stellarator plasmas. As an example, PRETOR-Stellarator has been used in the transport analysis of several shots of the TJ-II Flexible Heliac, and the results are compared with those obtained using the PROCTR code. These results are also compared with those obtained using the tokamak version of PRETOR to show the importance of the introduced changes. (Author) 18 refs.

  16. Modification of PRETOR Code to Be Applied to Transport Simulation in Stellarators

    International Nuclear Information System (INIS)

    Fontanet, J.; Castejon, F.; Dies, J.; Fontdecaba, J.; Alejaldre, C.

    2001-01-01

    The 1.5D transport code PRETOR, which has previously been used to simulate tokamak plasmas, has been modified to perform transport analysis in stellarator geometry. The main modifications introduced in the code are related to the magnetic equilibrium and to the modelling of energy and particle transport. As a result, a PRETOR-Stellarator version has been produced and the code is suitable for performing simulations of stellarator plasmas. As an example, PRETOR-Stellarator has been used in the transport analysis of several shots of the TJ-II Flexible Heliac, and the results are compared with those obtained using the PROCTR code. These results are also compared with those obtained using the tokamak version of PRETOR to show the importance of the introduced changes. (Author) 18 refs.

  17. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000

    Energy Technology Data Exchange (ETDEWEB)

    Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division

    2016-04-29

    This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations.
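
    A generic, code-agnostic sketch of the forward uncertainty propagation workflow that Dakota automates (sample the uncertain inputs, run the model once per sample, collect output statistics) is shown below; the model is a cheap analytic stand-in rather than Nek5000, and the parameter ranges are invented.

```python
import numpy as np
from scipy.stats import qmc

def model(inlet_velocity, wall_heat_flux):
    """Cheap analytic stand-in for an expensive CFD run (e.g. a peak temperature response)."""
    return 350.0 + 12.0 * wall_heat_flux / (1.0 + inlet_velocity)

# Latin hypercube sample of the two uncertain inputs over assumed ranges.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=64)
samples = qmc.scale(unit_samples, [1.0, 5.0], [3.0, 15.0])   # velocity [m/s], heat flux [a.u.]

outputs = np.array([model(v, q) for v, q in samples])        # one "code run" per sample
print(f"mean = {outputs.mean():.2f}, std = {outputs.std(ddof=1):.2f}, "
      f"95th percentile = {np.percentile(outputs, 95):.2f}")
```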

  18. The design of the CMOS wireless bar code scanner applying optical system based on ZigBee

    Science.gov (United States)

    Chen, Yuelin; Peng, Jian

    2008-03-01

    The traditional bar code scanner is limited by the length of its data line, while the farthest range of the wireless bar code scanners currently on the market is generally between 30 m and 100 m. By rebuilding the traditional CCD optical bar code scanner, a CMOS code scanner based on ZigBee is designed to meet market demands. The scan system consists of a CMOS image sensor and the embedded chip S3C2401X. When a two-dimensional bar code is read, inaccurate or wrong readings can result from image defilement, interference, poor imaging conditions, signal interference, or unstable system voltage, so we put forward a method that uses matrix evaluation and Reed-Solomon arithmetic to solve these problems. In order to construct the whole wireless optical bar code system and to ensure its ability to transmit bar code image signals digitally over long distances, ZigBee is used to transmit data to the base station; this module is designed around the image acquisition system, and the wireless transmitting/receiving CC2430 module circuit diagram is established. By porting the embedded RTOS LINUX to the MCU, a practical wireless CMOS optical bar code scanner and multi-task system is constructed. Finally, communication performance is tested with the evaluation software Smart RF. In open space, every ZigBee node can achieve 50 m transmission with high reliability. When more ZigBee nodes are added, the transmission distance can reach several thousand meters.

  19. Sticks and Stones: Why First Amendment Absolutism Fails When Applied to Campus Harassment Codes.

    Science.gov (United States)

    Lumsden, Linda

    This paper analyzes how absolutist arguments against campus harassment codes violate the spirit of the First Amendment, examining in particular the United States Supreme Court ruling in "RAV v. St. Paul." The paper begins by tracing the current development of First Amendment doctrine, analyzing its inadequacy in the campus hate speech…

  20. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    Science.gov (United States)

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte Carlo simulation of neutron coded imaging based on an encoding aperture for a Z-pinch with a large field of view of 5 mm radius has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark", which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. By using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture. The reconstruction result is close to that obtained by using the PSF of the actual aperture.

  1. Deterministic sensitivity and uncertainty methodology for best estimate system codes applied in nuclear technology

    International Nuclear Information System (INIS)

    Petruzzi, A.; D'Auria, F.; Cacuci, D.G.

    2009-01-01

    Nuclear Power Plant (NPP) technology has been developed based on the traditional defense in depth philosophy supported by deterministic and overly conservative methods for safety analysis. In the 1970s [1], conservative hypotheses were introduced for safety analyses to address existing uncertainties. Since then, intensive thermal-hydraulic experimental research has resulted in a considerable increase in knowledge and consequently in the development of best-estimate codes able to provide more realistic information about the physical behaviour and to identify the most relevant safety issues, allowing the evaluation of the existing actual margins between the results of the calculations and the acceptance criteria. However, the best-estimate calculation results from complex thermal-hydraulic system codes (like Relap5, Cathare, Athlet, Trace, etc.) are affected by unavoidable approximations that are unpredictable without the use of computational tools that account for the various sources of uncertainty. Therefore, the use of best-estimate codes (BE) within the reactor technology, either for design or safety purposes, implies understanding and accepting the limitations and the deficiencies of those codes. Taking into consideration the above framework, a comprehensive approach for utilizing quantified uncertainties arising from Integral Test Facilities (ITFs, [2]) and Separate Effect Test Facilities (SETFs, [3]) in the process of calibrating complex computer models for the application to NPP transient scenarios has been developed. The methodology proposed is capable of accommodating multiple SETFs and ITFs to learn as much as possible about uncertain parameters, allowing for the improvement of the computer model predictions based on the available experimental evidence. The proposed methodology constitutes a major step forward with respect to the generally used expert judgment and statistical methods as it permits a) to establish the uncertainties of any parameter

  2. Comparative Analysis of VERA Depletion Problems

    International Nuclear Information System (INIS)

    Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung

    2016-01-01

    Each code has its own solver for depletion, which can produce different depletion calculation results. In order to produce reference solutions for depletion calculation comparison, sensitivity studies should first be performed for each depletion solver. The sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion calculation results, usually the multiplication factors are compared as a function of burnup. In this study, new comparison methods have been introduced, using the number density of an isotope or element and a cumulative flux instead of burnup. In this paper, optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion using the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. It was noted that the depletion intra-zone sensitivity is only pin-type dependent. One and ten depletion intra-zones are required for the normal UO2 pin and the gadolinia rod, respectively, to obtain the reference solutions. When the optimized depletion calculation options are used, the differences in Q-values are found to be a main cause of the differences in solutions. In this paper, new comparison methods were introduced for consistent code-to-code comparisons even when different kappa libraries were used in the depletion calculations.

  3. Comparative Analysis of VERA Depletion Problems

    Energy Technology Data Exchange (ETDEWEB)

    Park, Jinsu; Kim, Wonkyeong; Choi, Sooyoung; Lee, Hyunsuk; Lee, Deokjung [Ulsan National Institute of Science and Technology, Ulsan (Korea, Republic of)

    2016-10-15

    Each code has its own solver for depletion, which can produce different depletion calculation results. In order to produce reference solutions for depletion calculation comparison, sensitivity studies should first be performed for each depletion solver. The sensitivity tests for the burnup interval, the number of depletion zones, and the recoverable energy per fission (Q-value) were performed in this paper. For the comparison of depletion calculation results, usually the multiplication factors are compared as a function of burnup. In this study, new comparison methods have been introduced, using the number density of an isotope or element and a cumulative flux instead of burnup. In this paper, optimum depletion calculation options are determined through the sensitivity study of the burnup intervals and the number of depletion intra-zones. Because depletion using the CRAM solver performs well for large burnup intervals, a smaller number of burnup steps can be used to produce converged solutions. It was noted that the depletion intra-zone sensitivity is only pin-type dependent. One and ten depletion intra-zones are required for the normal UO2 pin and the gadolinia rod, respectively, to obtain the reference solutions. When the optimized depletion calculation options are used, the differences in Q-values are found to be a main cause of the differences in solutions. In this paper, new comparison methods were introduced for consistent code-to-code comparisons even when different kappa libraries were used in the depletion calculations.

  4. Apply or Not to Apply, That Is The Question: Sustainable Development as Solution to the Antinomy About the Application of the New Forest Code

    Directory of Open Access Journals (Sweden)

    Rafael Antonietti Matthes

    2016-10-01

    Full Text Available Starting from the Brazilian constitutional premise according to which economic and social development should strive to maintain environmental quality for present and future generations (Article 225, heading), this study suggests a possible indicator to resolve the contradiction concerning the applicability or otherwise of the new Forest Code (Law 12651 of May 25, 2012) to terms of adjustment of conduct signed before it came into force, whose agreed obligations were to be implemented after it took effect. To apply or not to apply, that is the question. On the one hand, some argue in favor of the thesis barring environmental regression; on the other, there is the provision of incentives as a driver of protective behavior and the factual social effectiveness of the regulations now in force. Using dialectical and systemic methods of approach, with empirical notes, the study identifies the fundamental right to sustainable development as a response to the contrast in the legal language indicated in the methodological problem.

  5. Thermal-hydraulic Analysis of LOCA to Apply PSA Using MAAP and MARS codes

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Yun Je; Kim, Tae Jin; Park, Goon Cherl [Seoul National University, Seoul (Korea, Republic of); Lim, Ho Gon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2007-10-15

    Thermal-hydraulic analysis in Probabilistic Safety Assessment (PSA) is performed to produce basic data, which are needed to analyze accident sequences, construct system fault trees, and evaluate operator errors. Through the thermal-hydraulic analysis, the reactor power level, the pressure and temperature of the primary side, and the pressure, temperature, and water level of the secondary side are calculated. From these data, the system success criteria for construction of the event tree and the allowable outage time for human reliability analysis are determined. Until now, system codes such as MAAP, RELAP, MELCOR, and RETRAN have been widely used for thermal-hydraulic analysis in PSA. The adequacy of the analysis depends on the type of accident and the models of the codes. As a first step of the 'Study on Best-Estimate Thermal-Hydraulic Analysis Methodology Applicable to Probabilistic Safety Assessment', a part of the National Nuclear Technology Program of the Ministry of Science and Technology, it is required to compare the results of the MARS analysis with those of the MAAP analysis previously performed, and to evaluate its applicability to PSA.

  6. Assessment of Optical Coherence Tomography Color Probability Codes in Myopic Glaucoma Eyes After Applying a Myopic Normative Database.

    Science.gov (United States)

    Seol, Bo Ram; Kim, Dong Myung; Park, Ki Ho; Jeoung, Jin Wook

    2017-11-01

    To evaluate the optical coherence tomography (OCT) color probability codes based on a myopic normative database and to investigate whether the implementation of the myopic normative database can improve the OCT diagnostic ability in myopic glaucoma. Comparative validity study. In this study, 305 eyes (154 myopic healthy eyes and 151 myopic glaucoma eyes) were included. A myopic normative database was obtained based on myopic healthy eyes. We evaluated the agreement between OCT color probability codes after applying the built-in and myopic normative databases, respectively. Another 120 eyes (60 myopic healthy eyes and 60 myopic glaucoma eyes) were included and the diagnostic performance of OCT color codes using a myopic normative database was investigated. The mean weighted kappa (Kw) coefficients for quadrant retinal nerve fiber layer (RNFL) thickness, clock-hour RNFL thickness, and ganglion cell-inner plexiform layer (GCIPL) thickness were 0.636, 0.627, and 0.564, respectively. The myopic normative database showed a higher specificity than did the built-in normative database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P database in quadrant RNFL thickness, clock-hour RNFL thickness, and GCIPL thickness (P = .011, P = .004, P database. The implementation of a myopic normative database is needed to allow more precise interpretation of OCT color probability codes when used in myopic eyes. Copyright © 2017 Elsevier Inc. All rights reserved.
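
    The agreement statistic quoted (weighted kappa, Kw, between the color codes produced with the two normative databases) can be reproduced on synthetic ratings with scikit-learn; the four-level codes and the 20% disagreement rate below are assumptions for illustration, not the study data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(7)
built_in = rng.integers(0, 4, size=150)          # ordinal color codes from the built-in database
myopic = built_in.copy()
changed = rng.random(150) < 0.2                  # assume 20% of eyes are recoded by the myopic database
myopic[changed] = rng.integers(0, 4, size=changed.sum())

# Linearly weighted kappa penalises distant ordinal disagreements more than adjacent ones.
kw = cohen_kappa_score(built_in, myopic, weights="linear")
print(f"weighted kappa (Kw) = {kw:.3f}")
```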

  7. Cervical vertebral maturation: An objective and transparent code staging system applied to a 6-year longitudinal investigation.

    Science.gov (United States)

    Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca

    2017-05-01

    To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous morphometric objective CVM code staging system, called the "CVM code," which was applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes to find cases outside the reported norms and analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae (C) C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into the CVM stages. Overall, 31 exception cases (14%) were seen, with most of them accounting for pubertal CVM stage 4. The overall durations of the CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  8. An Efficient VQ Codebook Search Algorithm Applied to AMR-WB Speech Coding

    Directory of Open Access Journals (Sweden)

    Cheng-Yu Yeh

    2017-04-01

    Full Text Available The adaptive multi-rate wideband (AMR-WB speech codec is widely used in modern mobile communication systems for high speech quality in handheld devices. Nonetheless, a major disadvantage is that vector quantization (VQ of immittance spectral frequency (ISF coefficients takes a considerable computational load in the AMR-WB coding. Accordingly, a binary search space-structured VQ (BSS-VQ algorithm is adopted to efficiently reduce the complexity of ISF quantization in AMR-WB. This search algorithm is done through a fast locating technique combined with lookup tables, such that an input vector is efficiently assigned to a subspace where relatively few codeword searches are required to be executed. In terms of overall search performance, this work is experimentally validated as a superior search algorithm relative to a multiple triangular inequality elimination (MTIE, a TIE with dynamic and intersection mechanisms (DI-TIE, and an equal-average equal-variance equal-norm nearest neighbor search (EEENNS approach. With a full search algorithm as a benchmark for overall search load comparison, this work provides an 87% search load reduction at a threshold of quantization accuracy of 0.96, a figure far beyond 55% in the MTIE, 76% in the EEENNS approach, and 83% in the DI-TIE approach.
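
    The BSS-VQ lookup tables themselves are not reproduced here, but the kind of saving the paper targets can be illustrated with a simpler, well-known pruning idea: partial-distance elimination, which abandons a codeword as soon as its running distance exceeds the best complete distance found so far, while returning the same winner as a full search. The codebook size and dimension below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
codebook = rng.normal(size=(256, 16))    # 256 codewords of dimension 16 (sizes are arbitrary)
x = rng.normal(size=16)                  # input vector, e.g. an ISF sub-vector

def full_search(x, codebook):
    return int(((codebook - x) ** 2).sum(axis=1).argmin())

def partial_distance_search(x, codebook):
    """Drop a codeword as soon as its running distance exceeds the best complete distance."""
    best_idx, best_d = 0, float("inf")
    for i, cw in enumerate(codebook):
        d = 0.0
        for xj, cj in zip(x, cw):
            d += (xj - cj) ** 2
            if d >= best_d:              # early exit: this codeword cannot win
                break
        else:                            # loop finished: a new best complete distance
            best_idx, best_d = i, d
    return best_idx

assert full_search(x, codebook) == partial_distance_search(x, codebook)
print("nearest codeword index:", full_search(x, codebook))
```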

  9. A computational code for resolution of general compartment models applied to internal dosimetry

    International Nuclear Information System (INIS)

    Claro, Thiago R.; Todo, Alberto S.

    2011-01-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and for systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)
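
    A minimal sketch of what resolving a compartment model means numerically: a set of first-order transfer equations integrated in time. The two-compartment layout and rate constants below are invented for the example; they are not the ICRP models the code implements.

```python
from scipy.integrate import solve_ivp

# Hypothetical two-compartment biokinetic model: intake compartment -> blood -> excretion.
k_12 = 0.8    # transfer rate, compartment 1 to 2 (1/day), illustrative
k_2x = 0.1    # elimination rate from compartment 2 (1/day), illustrative

def dndt(t, n):
    n1, n2 = n
    return [-k_12 * n1,
             k_12 * n1 - k_2x * n2]

sol = solve_ivp(dndt, (0.0, 60.0), [1.0, 0.0], t_eval=[1, 7, 30, 60])
for t, n1, n2 in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"day {t:4.0f}: compartment 1 = {n1:.4f}, compartment 2 = {n2:.4f}")
```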

  10. A computational code for resolution of general compartment models applied to internal dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Claro, Thiago R.; Todo, Alberto S., E-mail: claro@usp.br, E-mail: astodo@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The dose resulting from internal contamination can be estimated with the use of biokinetic models combined with experimental results obtained from bioanalysis and knowledge of the incorporation time. The biokinetic models can be represented by a set of compartments expressing the transport, retention and elimination of radionuclides from the body. ICRP Publications 66, 78 and 100 present compartmental models for the respiratory tract, the gastrointestinal tract and for systemic distribution for an array of radionuclides of interest for radiological protection. The objective of this work is to develop a computational code for the design, visualization and resolution of compartmental models of any nature. Four different techniques are available for the resolution of the system of differential equations, including semi-analytical and numerical methods. The software was developed in C#, using a Microsoft Access database and XML standards for file exchange with other applications. Compartmental models for uranium, thorium and iodine radionuclides were generated for the validation of the CBT software. The models were subsequently solved by the SSID software and the results compared with the values published in ICRP Publication 78. In all cases the system is in accordance with the values published by the ICRP. (author)

  11. SAS6. User's guide. A two-dimensional depletion and criticality analysis code package based on the SCALE-4 system

    Energy Technology Data Exchange (ETDEWEB)

    Leege, P.F.A. de; Li, J.M.; Kloosterman, J.L.

    1995-04-01

    This users' guide describes the functionality and the required input of the SAS6 code sequence, which can be used to perform burnup and criticality calculations based on functional modules from the SCALE-4 code system and libraries. The input file for the SAS6 control module is very similar to that of the other SAS and CSAS control modules available in the SCALE-4 system. In particular, the geometry input of SAS6 is quite similar to that of SAS2H. However, the functionality of SAS6 is different from that of SAS2H. The geometry of the reactor lattice can be treated in more detail because use is made of the two-dimensional lattice code WIMS-D/IRI (an adapted version of WIMS-D/4) instead of the one-dimensional transport code XSDRNPM-S. Also, the neutron absorption and production rates of nuclides not explicitly specified in the input can be accounted for by six pseudo nuclides. Furthermore, the centre pin can be subdivided into a maximum of 10 zones to improve the burnup calculation of the centre pin and to obtain more accurate k-infinity values for the assembly. Also, the time step specification is more flexible than in the SAS2H sequence. (orig.)

  12. Numerical Analysis of Diaphragm Wall Model Executed in Poznań Clay Formation Applying Selected Fem Codes

    Directory of Open Access Journals (Sweden)

    Superczyńska M.

    2016-09-01

    Full Text Available The paper presents results of numerical calculations of a diaphragm wall model executed in Poznań clay formation. Two selected FEM codes were applied, Plaxis and Abaqus. Geological description of Poznań clay formation in Poland as well as geotechnical conditions on construction site in Warsaw city area were presented. The constitutive models of clay implemented both in Plaxis and Abaqus were discussed. The parameters of the Poznań clay constitutive models were assumed based on authors’ experimental tests. The results of numerical analysis were compared taking into account the measured values of horizontal displacements.

  13. Thermal-hydraulic calculations using MARS code applied to low power and shutdown probabilistic safety assessment in a PWR

    International Nuclear Information System (INIS)

    Son, Young-Seok; Shin, Jee-Young; Lim, Ho-Gon; Park, Jin-Hee; Jang, Seung-Cheol

    2005-01-01

    The methods developed for full-power probabilistic safety assessment, including thermal-hydraulic methods, have been widely applied to low power and shutdown conditions. Experience from current low power and shutdown probabilistic safety assessments, however, indicates that the thermal-hydraulic methods developed for full-power probabilistic safety assessments are not always reliable when applied to low power and shutdown conditions and consequently may yield misleading and inaccurate risk insights. To increase the usefulness of the low power and shutdown risk insights, the current methods and tools used for thermal-hydraulic calculations should be examined to ascertain whether they function effectively for low power and shutdown conditions. In this study, a platform for relatively detailed thermal-hydraulic calculations applied to low power and shutdown conditions in a pressurized water reactor was developed based on the best estimate thermal-hydraulic analysis code, MARS2.1. To confirm the applicability of the MARS platform to low power and shutdown conditions, many thermal-hydraulic analyses were performed for the selected topic, i.e. the loss of shutdown cooling events for various plant operating states at the Korean standard nuclear power plant. The platform developed in this study can deal effectively with low power and shutdown conditions, as well as assist the accident sequence analysis in low power and shutdown probabilistic safety assessments by providing fundamental data. Consequently, the resulting analyses may yield more realistic and accurate low power and shutdown risk insights

  14. The new MCNP6 depletion capability

    International Nuclear Information System (INIS)

    Fensin, M. L.; James, M. R.; Hendricks, J. S.; Goorley, J. T.

    2012-01-01

    The first MCNP based in-line Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology. (authors)

  15. The New MCNP6 Depletion Capability

    International Nuclear Information System (INIS)

    Fensin, Michael Lorne; James, Michael R.; Hendricks, John S.; Goorley, John T.

    2012-01-01

    The first MCNP based inline Monte Carlo depletion capability was officially released from the Radiation Safety Information and Computational Center as MCNPX 2.6.0. Both the MCNP5 and MCNPX codes have historically provided a successful combinatorial geometry based, continuous energy, Monte Carlo radiation transport solution for advanced reactor modeling and simulation. However, due to separate development pathways, useful simulation capabilities were dispersed between both codes and not unified in a single technology. MCNP6, the next evolution in the MCNP suite of codes, now combines the capability of both simulation tools, as well as providing new advanced technology, in a single radiation transport code. We describe here the new capabilities of the MCNP6 depletion code dating from the official RSICC release MCNPX 2.6.0, reported previously, to the now current state of MCNP6. NEA/OECD benchmark results are also reported. The MCNP6 depletion capability enhancements beyond MCNPX 2.6.0 reported here include: (1) new performance enhancing parallel architecture that implements both shared and distributed memory constructs; (2) enhanced memory management that maximizes calculation fidelity; and (3) improved burnup physics for better nuclide prediction. MCNP6 depletion enables complete, relatively easy-to-use depletion calculations in a single Monte Carlo code. The enhancements described here help provide a powerful capability as well as dictate a path forward for future development to improve the usefulness of the technology.

  16. Feedback Codes and Action Plans: Building the Capacity of First-Year Students to Apply Feedback to a Scientific Report

    Science.gov (United States)

    Bird, Fiona L.; Yucel, Robyn

    2015-01-01

    Effective feedback can build self-assessment skills in students so that they become more competent and confident to identify and self-correct weaknesses in their work. In this study, we trialled a feedback code as part of an integrated programme of formative and summative assessment tasks, which provided feedback to first-year students on their…

  17. Comparison of the results of several heat transfer computer codes when applied to a hypothetical nuclear waste repository

    International Nuclear Information System (INIS)

    Claiborne, H.C.; Wagner, R.S.; Just, R.A.

    1979-12-01

    A direct comparison of transient thermal calculations was made with the heat transfer codes HEATING5, THAC-SIP-3D, ADINAT, SINDA, TRUMP, and TRANCO for a hypothetical nuclear waste repository. With the exception of TRUMP and SINDA (actually closer to the earlier CINDA3G version), the other codes agreed to within ±5% for the temperature rises as a function of time. The TRUMP results agreed within ±5% up to about 50 years, where the maximum temperature occurs, and then began an oscillatory behavior with up to 25% deviations at longer times. This could have resulted from time steps that were too large or from some unknown system problems. The available version of the SINDA code was not compatible with the IBM compiler without using an alternative method for handling a variable thermal conductivity. The results were about 40% low, but a reasonable agreement was obtained by assuming a uniform thermal conductivity; however, a programming error was later discovered in the alternative method. Some work is required on the IBM version to make it compatible with the system and still use the recommended method of handling variable thermal conductivity. TRANCO can only be run as a 2-D model, and TRUMP and CINDA apparently required longer running times and did not agree in the 2-D case; therefore, only HEATING5, THAC-SIP-3D, and ADINAT were used for the 3-D model calculations. The codes agreed within ±5%; at distances of about 1 ft from the waste canister edge, temperature rises were also close to those predicted by the 3-D model.

  18. Development of the MCNPX depletion capability: A Monte Carlo linked depletion method that automates the coupling between MCNPX and CINDER90 for high fidelity burnup calculations

    Science.gov (United States)

    Fensin, Michael Lorne

    Monte Carlo-linked depletion methods have gained recent interest due to the ability to more accurately model complex 3-dimensional geometries and better track the evolution of temporal nuclide inventory by simulating the actual physical process utilizing continuous energy coefficients. The integration of CINDER90 into the MCNPX Monte Carlo radiation transport code provides a high-fidelity completely self-contained Monte-Carlo-linked depletion capability in a well established, widely accepted Monte Carlo radiation transport code that is compatible with most nuclear criticality (KCODE) particle tracking features in MCNPX. MCNPX depletion tracks all necessary reaction rates and follows as many isotopes as cross section data permits in order to achieve a highly accurate temporal nuclide inventory solution. This work chronicles relevant nuclear history, surveys current methodologies of depletion theory, details the methodology applied in MCNPX and provides benchmark results for three independent OECD/NEA benchmarks. Relevant nuclear history, from the Oklo reactor two billion years ago to the current major United States nuclear fuel cycle development programs, is addressed in order to supply the motivation for the development of this technology. A survey of current reaction rate and temporal nuclide inventory techniques is then provided to offer justification for the depletion strategy applied within MCNPX. The MCNPX depletion strategy is then dissected and each code feature is detailed, chronicling the methodology development from the original linking of MONTEBURNS and MCNP to the most recent public release of the integrated capability (MCNPX 2.6.F). Calculation results of the OECD/NEA Phase IB benchmark, H. B. Robinson benchmark and OECD/NEA Phase IVB are then provided. The acceptable results of these calculations offer sufficient confidence in the predictive capability of the MCNPX depletion method. This capability sets up a significant foundation, in a well established

  19. Microscopic to macroscopic depletion model development for FORMOSA-P

    International Nuclear Information System (INIS)

    Noh, J.M.; Turinsky, P.J.; Sarsour, H.N.

    1996-01-01

    Microscopic depletion has been gaining popularity with regard to employment in reactor core nodal calculations, mainly attributed to the superiority of microscopic depletion in treating spectral history effects during depletion. Another trend is the employment of loading pattern optimization computer codes in support of reload core design. Use of such optimization codes has significantly reduced design efforts to optimize reload core loading patterns associated with increasingly complicated lattice designs. A microscopic depletion model has been developed for the FORMOSA-P pressurized water reactor (PWR) loading pattern optimization code. This was done for both fidelity improvements and to make FORMOSA-P compatible with microscopic-based nuclear design methods. Needless to say, microscopic depletion requires more computational effort compared with macroscopic depletion. This implies that microscopic depletion may be computationally restrictive if employed during the loading pattern optimization calculation because many loading patterns are examined during the course of an optimization search. Therefore, the microscopic depletion model developed here uses combined models of microscopic and macroscopic depletion. This is done by first performing microscopic depletions for a subset of possible loading patterns from which 'collapsed' macroscopic cross sections are obtained. The collapsed macroscopic cross sections inherently incorporate spectral history effects. Subsequently, the optimization calculations are done using the collapsed macroscopic cross sections. Using this approach allows maintenance of microscopic depletion level accuracy without substantial additional computing resources
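
    In its simplest form, the collapsing step described amounts to building a macroscopic cross section from number densities and microscopic cross sections, Sigma = sum_i N_i * sigma_i, so that the subsequent macroscopic depletion carries the spectral-history information embedded in the N_i. The nuclides and one-group values below are placeholders.

```python
# Collapse microscopic data into a macroscopic cross section: Sigma = sum_i N_i * sigma_i.
# Number densities (atoms/barn-cm) and one-group microscopic cross sections (barns) are placeholders.
number_density = {"U235": 7.2e-4, "U238": 2.2e-2, "Pu239": 1.1e-4}
sigma_absorption = {"U235": 98.0, "U238": 2.7, "Pu239": 270.0}

sigma_macro = sum(number_density[n] * sigma_absorption[n] for n in number_density)
print(f"collapsed macroscopic absorption cross section: {sigma_macro:.4f} 1/cm")
```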

  20. Encoded recoupling and decoupling: An alternative to quantum error-correcting codes applied to trapped-ion quantum computation

    International Nuclear Information System (INIS)

    Lidar, D.A.; Wu, L.-A.

    2003-01-01

    A recently developed theory for eliminating decoherence and design constraints in quantum computers, 'encoded recoupling and decoupling', is shown to be fully compatible with a promising proposal for an architecture enabling scalable ion-trap quantum computation [D. Kielpinski et al., Nature (London) 417, 709 (2002)]. Logical qubits are encoded into pairs of ions. Logic gates are implemented using the Soerensen-Moelmer (SM) scheme applied to pairs of ions at a time. The encoding offers continuous protection against collective dephasing. Decoupling pulses, that are also implemented using the SM scheme directly to the encoded qubits, are capable of further reducing various other sources of qubit decoherence, such as due to differential dephasing and due to decohered vibrational modes. The feasibility of using the relatively slow SM pulses in a decoupling scheme quenching the latter source of decoherence follows from the observed 1/f spectrum of the vibrational bath

  1. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  2. A SAS2H/KENO-V Methodology for 3D Full Core depletion analysis

    International Nuclear Information System (INIS)

    Milosevic, M.; Greenspan, E.; Vujic, J.; Petrovic, B.

    2003-04-01

    This paper describes the use of a SAS2H/KENO-V methodology for 3D full core depletion analysis and illustrates its capabilities by applying it to burnup analysis of the IRIS core benchmarks. This new SAS2H/KENO-V sequence combines a 3D Monte Carlo full core calculation of node power distribution and a 1D Wigner-Seitz equivalent cell transport method for independent depletion calculation of each of the nodes. This approach reduces, by more than an order of magnitude, the time required to obtain results comparable to those of the MOCUP code system. The SAS2H/KENO-V results for the asymmetric IRIS core benchmark are in good agreement with the results of the ALPHA/PHOENIX/ANC code system. (author)
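
    The hand-off between the full-core Monte Carlo power solution and the independent per-node depletion runs is essentially a normalization step: each node's share of the core power, divided by its heavy-metal loading, sets the specific power used in that node's 1D depletion calculation. A small sketch with invented numbers (not the IRIS benchmark data):

        def node_specific_powers(core_power_mw, power_fractions, hm_mass_kg):
            """Specific power (MW per kg heavy metal) for each depletion node."""
            return {node: core_power_mw * frac / hm_mass_kg[node]
                    for node, frac in power_fractions.items()}

        fractions = {"node_1": 0.28, "node_2": 0.35, "node_3": 0.37}      # from the 3D solve
        hm_mass = {"node_1": 8000.0, "node_2": 8000.0, "node_3": 8000.0}  # kg HM per node

        for node, sp in sorted(node_specific_powers(1000.0, fractions, hm_mass).items()):
            print(f"{node}: {sp:.4f} MW/kgHM")   # input to that node's depletion run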

  3. Reactor fuel depletion benchmark of TINDER

    International Nuclear Information System (INIS)

    Martin, W.J.; Oliveira, C.R.E. de; Hecht, A.A.

    2014-01-01

    Highlights: • A reactor burnup benchmark of TINDER, coupling MCNP6 to CINDER2008, was performed. • TINDER is a poor candidate for fuel depletion calculations using its current libraries. • Data library modification is necessary if fuel depletion is desired from TINDER. - Abstract: Accurate burnup calculations are key to proper nuclear reactor design, fuel cycle modeling, and disposal estimations. The TINDER code, originally designed for activation analyses, has been modified to handle full burnup calculations, including the widely used predictor–corrector feature. In order to properly characterize the performance of TINDER for this application, a benchmark calculation was performed. Although the results followed the trends of past benchmarked codes for a UO2 PWR fuel sample from the Takahama-3 reactor, there were obvious deficiencies in the final result, likely in the nuclear data library that was used. Isotopic comparisons versus experiment and past code benchmarks are given, as well as hypothesized areas of deficiency and future work
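
    The predictor-corrector feature mentioned in the highlights is the standard trick of burning the step twice: once with beginning-of-step reaction rates, and again with rates averaged between the beginning and the predicted end of step. A generic sketch follows, using a hypothetical burnup-matrix helper rather than the TINDER interface, with purely illustrative numbers.

        import numpy as np
        from scipy.linalg import expm

        def predictor_corrector_step(n_bos, dt_s, make_burnup_matrix):
            """Advance the nuclide vector n_bos over one burnup step of dt_s seconds."""
            a_bos = make_burnup_matrix(n_bos)                  # rates at beginning of step
            n_pred = expm(a_bos * dt_s) @ n_bos                # predictor
            a_eos = make_burnup_matrix(n_pred)                 # rates at predicted end of step
            return expm(0.5 * (a_bos + a_eos) * dt_s) @ n_bos  # corrector with averaged rates

        def toy_matrix(n):
            """Parent -> daughter chain whose removal rate mimics constant power."""
            removal = 3.9e13 / max(n[0], 1.0)                  # 1/s, illustrative
            return np.array([[-removal, 0.0], [removal, 0.0]])

        n0 = np.array([1.0e21, 0.0])
        print(predictor_corrector_step(n0, 30 * 86400.0, toy_matrix))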

  4. Addressing Ozone Layer Depletion

    Science.gov (United States)

    Access information on EPA's efforts to address ozone layer depletion through regulations, collaborations with stakeholders, international treaties, partnerships with the private sector, and enforcement actions under Title VI of the Clean Air Act.

  5. Development of a computational system for radiotherapic planning with the IMRT technique applied to the MCNP computer code with 3D graphic interface for voxel models

    International Nuclear Information System (INIS)

    Fonseca, Telma Cristina Ferreira

    2009-01-01

    The Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in the oncology branch of medicine. In this master's work, a software package for simulating the IMRT protocol, named SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The computational system SOFT-RT allows simulating the absorbed dose of the radiotherapic treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of digitalized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and adjacent healthy tissues for each orientation and position of the planned beam. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on a tumoral region while preserving the whole tissues around it. It takes into account where exactly the radiation beam passes through, which tissues are affected and how much dose is delivered to each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing the dose on the voxel model in gray scale in a three-dimensional graphic representation. The present master's thesis presents the new computational system for radiotherapic treatment, the SOFT-RT code, which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics packages. The Linux operating system was adopted with the goal of running it on an open-source, freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported as well as some dosimetric evaluations. (author)

  6. Proposal to consistently apply the International Code of Nomenclature of Prokaryotes (ICNP) to names of the oxygenic photosynthetic bacteria (cyanobacteria), including those validly published under the International Code of Botanical Nomenclature (ICBN)/International Code of Nomenclature for algae, fungi and plants (ICN), and proposal to change Principle 2 of the ICNP.

    Science.gov (United States)

    Pinevich, Alexander V

    2015-03-01

    This taxonomic note was motivated by the recent proposal [Oren & Garrity (2014) Int J Syst Evol Microbiol 64, 309-310] to exclude the oxygenic photosynthetic bacteria (cyanobacteria) from the wording of General Consideration 5 of the International Code of Nomenclature of Prokaryotes (ICNP), which entails unilateral coverage of these prokaryotes by the International Code of Nomenclature for algae, fungi, and plants (ICN; formerly the International Code of Botanical Nomenclature, ICBN). On the basis of key viewpoints, approaches and rules in the systematics, taxonomy and nomenclature of prokaryotes it is reciprocally proposed to apply the ICNP to names of cyanobacteria including those validly published under the ICBN/ICN. For this purpose, a change to Principle 2 of the ICNP is proposed to enable validation of cyanobacterial names published under the ICBN/ICN rules. © 2015 IUMS.

  7. LAVENDER: A steady-state core analysis code for design studies of accelerator driven subcritical reactors

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Shengcheng; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi, E-mail: yqzheng@mail.xjtu.edu.cn; Huang, Kai; He, Mingtao; Li, Xunzhao

    2014-10-15

    Highlights: • A new code system for design studies of accelerator driven subcritical reactors (ADSRs) is developed. • An S_N transport solver in triangular-z meshes, fine depletion analysis and multi-channel thermal-hydraulics analysis are coupled in the code. • Numerical results indicate that the code is reliable and efficient for design studies of ADSRs. - Abstract: Accelerator driven subcritical reactors (ADSRs) have been proposed and widely investigated for the transmutation of transuranics (TRUs). ADSRs have several special characteristics, such as a subcritical core driven by spallation neutrons, an anisotropic neutron flux distribution and complex geometry. These bring up requirements for development or extension of analysis codes to perform design studies. A code system named LAVENDER has been developed in this paper. It couples the modules for spallation target simulation and subcritical core analysis. The neutron transport-depletion calculation scheme is used, based on homogenized cross sections from assembly calculations. A three-dimensional S_N nodal transport code based on triangular-z meshes is employed and a multi-channel thermal-hydraulics analysis model is integrated. In the depletion calculation, the evolution of isotopic composition in the core is evaluated using the transmutation trajectory analysis algorithm (TTA) and fine depletion chains. The new code is verified by several benchmarks and code-to-code comparisons. Numerical results indicate that LAVENDER is reliable and efficient for the steady-state analysis and reactor core design of ADSRs.
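
    The transmutation trajectory analysis (TTA) referenced above decomposes the burnup problem into linear chains and evaluates each chain analytically with the Bateman solution. A compact sketch of that chain solution is given below, assuming distinct effective removal constants; the degenerate case needs the generalized treatment used in production TTA implementations, and the constants shown are illustrative.

        import math

        def bateman_chain(n1_0, removal_constants, t_s):
            """Inventory of each member of a linear chain at time t_s, starting
            from n1_0 atoms of the first member and zero of all the others."""
            densities = []
            for k in range(len(removal_constants)):
                lams = removal_constants[: k + 1]
                feed = math.prod(lams[:-1])          # lambda_1 * ... * lambda_(k-1)
                total = sum(
                    math.exp(-lam_i * t_s)
                    / math.prod(lam_j - lam_i for j, lam_j in enumerate(lams) if j != i)
                    for i, lam_i in enumerate(lams)
                )
                densities.append(n1_0 * feed * total)
            return densities

        # Three-member toy chain with illustrative effective removal constants (1/s).
        print(bateman_chain(1.0e20, [3.0e-8, 1.0e-8, 5.0e-9], t_s=30 * 86400.0))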

  8. A definition of depletion of fish stocks

    Science.gov (United States)

    Van Oosten, John

    1949-01-01

    Attention was focused on the need for a common and better understanding of the term depletion as applied to the fisheries in order to eliminate if possible the existing inexactness of thought on the subject. Depletion has been confused at various times with at least ten different ideas associated with it but which, as has been pointed out, are not synonymous at all. In defining depletion we must recognize that the term represents a condition and must not be confounded with the cause (overfishing) that leads to this condition or with the symptoms that identify it. Depletion was defined as a reduction, through overfishing, in the level of abundance of the exploitable segment of a stock that prevents the realization of the maximum productive capacity.

  9. Revisiting The Depleted Self.

    Science.gov (United States)

    Abraham, Reggie

    2018-04-01

    This article revisits Donald Capps's book The Depleted Self (The depleted self: sin in a narcissistic age. Fortress Press, Minneapolis, 1993), which grew out of his 1990 Schaff Lectures at Pittsburgh Theological Seminary. In these lectures Capps proposed that the theology of guilt had dominated much of post-Reformation discourse. But with the growing prevalence of the narcissistic personality in the late twentieth century, the theology of guilt no longer adequately expressed humanity's sense of "wrongness" before God. Late twentieth-century persons sense this disjunction between God and self through shame dynamics. Narcissists are not "full" of themselves, as popular perspectives might indicate. Instead, they are empty, depleted selves. Psychologists suggest this stems from lack of emotional stimulation and the absence of mirroring in the early stages of life. The narcissist's search for attention and affirmation takes craving, paranoid, manipulative, or phallic forms and is essentially a desperate attempt to fill the internal emptiness. Capps suggests that two narratives from the Gospels are helpful here: the story of the woman with the alabaster jar and the story of Jesus's dialogue with Mary and John at Calvary. These stories provide us with clues as to how depleted selves experienced mirroring and the potential for internal peace in community with Jesus.

  10. Ozone depletion update.

    Science.gov (United States)

    Coldiron, B M

    1996-03-01

    Stratospheric ozone depletion due to chlorofluorocarbons and increased ultraviolet radiation penetration has long been predicted. The objective was to determine whether predictions of ozone depletion are correct and, if so, the significance of this depletion. The English literature regarding ozone depletion and solar ultraviolet radiation was reviewed. The ozone layer is showing definite thinning. Recently, significantly increased ultraviolet radiation transmission has been detected at ground level at several metering stations. It appears that man-made aerosols (air pollution) block increased UVB transmission in urban areas. Recent satellite measurements of stratospheric fluorine levels more directly implicate chlorofluorocarbons as a major source of catalytic stratospheric chlorine, although natural sources may account for up to 40% of stratospheric chlorine. Stratospheric chlorine concentrations, and resultant increased ozone destruction, will be enhanced for at least the next 70 years. The potential for increased transmission of ultraviolet radiation will exist for the next several hundred years. While little damage due to increased ultraviolet radiation has occurred so far, the potential for long-term problems is great.

  11. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    OpenAIRE

    Kim, Hyoung Tae; Chang, Se-Myong; Shin, Jong-Hyeon; Kim, Yong Gwon

    2016-01-01

    The moderator system of CANDU, a prototype of PHWR (pressurized heavy-water reactor), has been modeled in multidimension for the computation based on CFD (computational fluid dynamics) technique. Three CFD codes are tested in modeled hydrothermal systems of heavy-water reactors. Commercial codes, COMSOL Multiphysics and ANSYS-CFX with OpenFOAM, an open-source code, are introduced for the various simplified and practical problems. All the implemented computational codes are tested for a benchm...

  12. Intrinsic Depletion or Not

    DEFF Research Database (Denmark)

    Klösgen, Beate; Bruun, Sara; Hansen, Søren

    The presence of a depletion layer of water along extended hydrophobic interfaces, and a possibly related formation of nanobubbles, is an ongoing discussion. The phenomenon was initially reported when we, years ago, chose thick films (~300-400Å) of polystyrene as cushions between a crystalline carrier and biomimetic membranes deposited thereupon and exposed to bulk water. While monitoring the sequential build-up of the sandwiched composite structure by continuous neutron reflectivity experiments, the formation of an unexpected additional layer was detected (1). Located at the polystyrene surface, in between the polymer cushion and bulk water, the layer was attributed to water of reduced density and was called "depletion layer". Impurities or preparative artefacts were excluded as its origin. Later on, the formation of nanobubbles from this vapour-like water phase was initiated by tipping the surface...

  13. Investigations of space-dependent safety-related parameters of a PBMR-like HTR in transient operating conditions applying a multi-group diffusion code

    Energy Technology Data Exchange (ETDEWEB)

    Druska, C. [Institute for Energy Research, Safety Research and Reactor Technology (IEF-6), Forschungszentrum Juelich (Germany); Kasselmann, St. [Institute for Energy Research, Safety Research and Reactor Technology (IEF-6), Forschungszentrum Juelich (Germany)], E-mail: s.kasselmann@fz-juelich.de; Lauer, A. [Institute for Energy Research, Safety Research and Reactor Technology (IEF-6), Forschungszentrum Juelich (Germany)

    2009-03-15

    So far, the two-dimensional reactor dynamics code TINTE (time-dependent nucleonics and temperatures) was applied for simulations of high-temperature gas cooled reactors. One limitation of TINTE is that the neutron energy spectrum is modeled by only two energy groups, namely a thermal and a fast group. Present demands for increased numerical accuracy lead to the question of how precise the two-group approximation is compared to a multi-group approach. The recently developed multi-group derivative of TINTE called MGT (multi-group TINTE) is able to handle up to 43 neutron energy groups. In this study, different scenarios (normal operation and design-basis accidents) have been simulated for a PBMR-like HTR reactor design with MGT. The effect of an increasing number of energy groups on time- and space-dependent safety-related parameters such as the fuel and coolant temperatures, the nuclear heat source or the xenon concentration is studied. Different ways of calculating the material cross-sections are compared as well.
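
    The two-group versus multi-group question above is, at bottom, a question about group condensation: fine-group cross sections are collapsed to a coarse structure by flux weighting, and a two-group collapse can wash out spectral detail that a 43-group model retains. A minimal flux-weighting sketch with an invented 8-group set (not MGT/TINTE data):

        import numpy as np

        def collapse_groups(sigma_fine, flux_fine, group_map):
            """Flux-weighted condensation: Sigma_G = sum(sigma_g*phi_g)/sum(phi_g)."""
            return np.array([
                float(np.sum(sigma_fine[idx] * flux_fine[idx]) / np.sum(flux_fine[idx]))
                for idx in group_map
            ])

        sigma = np.array([2.1, 2.3, 2.8, 3.5, 5.0, 9.0, 20.0, 45.0])       # barns, illustrative
        phi = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.04, 0.03])   # relative flux
        two_group = collapse_groups(sigma, phi, [np.arange(0, 4), np.arange(4, 8)])
        print(two_group)   # two coarse-group values condensed from the 8-group set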

  14. Capital expenditure and depletion

    International Nuclear Information System (INIS)

    Rech, O.; Saniere, A.

    2003-01-01

    In the future, the increase in oil demand will be covered for the most part by non conventional oils, but conventional sources will continue to represent a preponderant share of the world oil supply. Their depletion represents a complex challenge involving technological, economic and political factors. At the same time, there is reason for concern about the decrease in exploration budgets at the major oil companies. (author)

  15. MO-G-BRE-05: Clinical Process Improvement and Billing in Radiation Oncology: A Case Study of Applying FMEA for CPT Code 77336 (continuing Medical Physics Consultation)

    International Nuclear Information System (INIS)

    Spirydovich, S; Huq, M

    2014-01-01

    Purpose: The improvement of quality in healthcare can be assessed by Failure Mode and Effects Analysis (FMEA). In radiation oncology, FMEA, as applied to the billing CPT code 77336, can improve both charge capture and, most importantly, the quality of the performed services. Methods: We created an FMEA table for the process performed under CPT code 77336. For a given process step, each member of the assembled team (physicist, dosimetrist, and therapist) independently assigned numerical values for: probability of occurrence (O, 1–10), severity (S, 1–10), and probability of detection (D, 1–10) for every failure mode cause and effect combination. The risk priority number, RPN, was then calculated as the product of O, S and D, from which an average RPN was calculated for each combination mentioned above. A fault tree diagram, with each process sorted into 6 categories, was created with linked RPNs. For processes with high RPN, recommended actions were assigned. Two separate R and V systems (Lantis and EMR-based ARIA) were considered. Results: We identified 9 potential failure modes and 19 corresponding potential causes of these failure modes, all resulting in an unjustified 77336 charge and compromised quality of care. In Lantis, the range of RPN was 24.5–110.8, and of S values 2–10. The highest-ranking RPN of 110.8 came from the failure mode described as “end-of-treatment check not done before the completion of treatment”, and the highest S value of 10 (RPN=105) from “overrides not checked”. For the same failure modes, within the ARIA electronic environment with its additional controls, RPN values were significantly lower (44.3 for the missing end-of-treatment check and 20.0 for overrides not checked). Conclusion: Our work has shown that when charge capture was missed, some services had also not been performed. Absence of such necessary services may result in sub-optimal quality of care rendered to patients
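
    The scoring arithmetic described in the Methods section is simple enough to show directly: each team member scores occurrence, severity and detectability on 1-10 scales, the RPN is the product O*S*D per member, and the failure-mode cause/effect combination is ranked by the average RPN. The scores below are invented for illustration.

        def average_rpn(member_scores):
            """member_scores: list of (O, S, D) tuples, one per team member."""
            rpns = [o * s * d for o, s, d in member_scores]
            return sum(rpns) / len(rpns)

        # e.g. physicist, dosimetrist and therapist scoring one cause/effect combination:
        scores = [(4, 9, 3), (5, 8, 2), (4, 10, 3)]
        print(f"average RPN = {average_rpn(scores):.1f}")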

  16. Ozone-depleting Substances (ODS)

    Data.gov (United States)

    U.S. Environmental Protection Agency — This site includes all of the ozone-depleting substances (ODS) recognized by the Montreal Protocol. The data include ozone depletion potentials (ODP), global warming...

  17. Consequences of biome depletion

    International Nuclear Information System (INIS)

    Salvucci, Emiliano

    2013-01-01

    The human microbiome is an integral part of the superorganism together with its host, and they have co-evolved since the early days of the existence of the human species. The modification of the microbiome as a result of changes in the food and social habits of human beings throughout their life history has led to the emergence of many diseases. In contrast with the Darwinian view of nature as selfishness and competition, new holistic approaches are arising. Under these views, the reconstitution of the microbiome stands out as a fundamental therapy for emerging diseases related to biome depletion.

  18. Depleted uranium management alternatives

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.

    1994-08-01

    This report evaluates two management alternatives for Department of Energy depleted uranium: continued storage as uranium hexafluoride, and conversion to uranium metal and fabrication into shielding for spent nuclear fuel containers. The results will be used to compare the costs with other alternatives, such as disposal. Cost estimates for the continued storage alternative are based on a life-cycle of 27 years through the year 2020. Cost estimates for the recycle alternative are based on existing conversion process costs and capital costs for fabricating the containers. Additionally, the recycle alternative accounts for costs associated with intermediate product resale and secondary waste disposal for materials generated during the conversion process.

  19. The Feasibility of Multidimensional CFD Applied to Calandria System in the Moderator of CANDU-6 PHWR Using Commercial and Open-Source Codes

    Directory of Open Access Journals (Sweden)

    Hyoung Tae Kim

    2016-01-01

    Full Text Available The moderator system of CANDU, a prototype of PHWR (pressurized heavy-water reactor), has been modeled in multidimension for the computation based on the CFD (computational fluid dynamics) technique. Three CFD codes are tested in modeled hydrothermal systems of heavy-water reactors. Commercial codes, COMSOL Multiphysics and ANSYS-CFX, along with OpenFOAM, an open-source code, are introduced for the various simplified and practical problems. All the implemented computational codes are tested for a benchmark problem of the STERN laboratory experiment with a precise modeling of tubes, compared with each other as well as with the measured data and a porous model based on the experimental correlation of pressure drop. Also the effect of the turbulence model is discussed for these low Reynolds number flows. As a result, they are shown to be successful for the analysis of three-dimensional numerical models related to the calandria system of CANDU reactors.

  20. Riddle of depleted uranium

    International Nuclear Information System (INIS)

    Hussein, A.S.

    2005-01-01

    Depleted Uranium (DU) is the waste product of uranium enrichment from the manufacturing of fuel rods for nuclear reactors in nuclear power plants and nuclear-powered ships. DU may also result from the reprocessing of spent nuclear reactor fuel. Potentially DU has both chemical and radiological toxicity, with two important target organs being the kidney and the lungs. DU is made into a metal and, due to its availability, low price, high specific weight, density and melting point, as well as its pyrophoricity, it has a wide range of civilian and military applications. Due to the use of DU over recent years, reports have appeared in the press on health hazards that are alleged to be due to DU. In this paper the properties, applications, and potential environmental and health effects of DU are briefly reviewed

  1. Depletion analysis of the UMLRR reactor core using MCNP6

    Science.gov (United States)

    Odera, Dim Udochukwu

    Accurate knowledge of the neutron flux and temporal nuclide inventory in reactor physics calculations is necessary for a variety of applications in nuclear engineering such as criticality safety, safeguards, and spent fuel storage. The Monte Carlo N-Particle (MCNP6) code with the integrated buildup/depletion code CINDER90 provides a high-fidelity tool that can be used to perform 3D, full-core simulations to evaluate fissile material utilization and nuclide inventory as a function of burnup. The University of Massachusetts Lowell Research Reactor (UMLRR) has been modeled with the deterministic code VENTURE and with an older version of MCNP (MCNP5). The MIT-developed MCODE (MCNP ORIGEN DEPLETION CODE) was used previously to perform some limited depletion calculations. This work chronicles the use of MCNP6, released in June 2013, to perform coupled neutronics and depletion calculations. The results are compared to previously benchmarked results. Furthermore, the code is used to determine the ratio of the fission products 134Cs and 137Cs (burnup indicators), and the resultant ratio is compared to the burnup of the UMLRR.

  2. The Toxicity of Depleted Uranium

    OpenAIRE

    Briner, Wayne

    2010-01-01

    Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a c...

  3. OECD/NEA International Benchmark exercises: Validation of CFD codes applied to the nuclear industry; OECD/NEA internatiion Benchmark exercices: La validacion de los codigos CFD aplicados a la industria nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Pena-Monferrer, C.; Miquel veyrat, A.; Munoz-Cobo, J. L.; Chiva Vicent, S.

    2016-08-01

    In recent years, due among other factors to the slowdown of the nuclear industry, investment in the development and validation of CFD codes applied specifically to the problems of the nuclear industry has been seriously hampered. Thus the International Benchmark Exercises (IBE) sponsored by the OECD/NEA have been fundamental to analyzing the use of CFD codes in the nuclear industry, because although these codes are mature in many fields, doubts still exist about them in critical aspects of thermal-hydraulic calculations, even in single-phase scenarios. The Polytechnic University of Valencia (UPV) and the Universitat Jaume I (UJI), sponsored by the Nuclear Safety Council (CSN), have actively participated in all benchmarks proposed by the NEA, as well as in the expert meetings. In this paper, a summary of this participation in the various IBEs is presented, describing each benchmark, the CFD model created for it, and the main conclusions. (Author)

  4. Too Depleted to Try? Testing the Process Model of Ego Depletion in the Context of Unhealthy Snack Consumption.

    Science.gov (United States)

    Haynes, Ashleigh; Kemps, Eva; Moffitt, Robyn

    2016-11-01

    The process model proposes that the ego depletion effect is due to (a) an increase in motivation toward indulgence, and (b) a decrease in motivation to control behaviour following an initial act of self-control. In contrast, the reflective-impulsive model predicts that ego depletion results in behaviour that is more consistent with desires, and less consistent with motivations, rather than influencing the strength of desires and motivations. The current study sought to test these alternative accounts of the relationships between ego depletion, motivation, desire, and self-control. One hundred and fifty-six undergraduate women were randomised to complete a depleting e-crossing task or a non-depleting task, followed by a lab-based measure of snack intake, and self-report measures of motivation and desire strength. In partial support of the process model, ego depletion was related to higher intake, but only indirectly via the influence of lowered motivation. Motivation was more strongly predictive of intake for those in the non-depletion condition, providing partial support for the reflective-impulsive model. Ego depletion did not affect desire, nor did depletion moderate the effect of desire on intake, indicating that desire may be an appropriate target for reducing unhealthy behaviour across situations where self-control resources vary. © 2016 The International Association of Applied Psychology.

  5. Depletable resources and the economy

    NARCIS (Netherlands)

    Heijman, W.J.M.

    1991-01-01

    The subject of this thesis is the depletion of scarce resources. The main question to be answered is how to avoid future resource crises. After dealing with the complex relation between nature and economics, three important concepts in relation with resource depletion are discussed: steady state,

  6. The Toxicity of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Wayne Briner

    2010-01-01

    Full Text Available Depleted uranium (DU) is an emerging environmental pollutant that is introduced into the environment primarily by military activity. While depleted uranium is less radioactive than natural uranium, it still retains all the chemical toxicity associated with the original element. In large doses the kidney is the target organ for the acute chemical toxicity of this metal, producing potentially lethal tubular necrosis. In contrast, chronic low dose exposure to depleted uranium may not produce a clear and defined set of symptoms. Chronic low-dose, or subacute, exposure to depleted uranium alters the appearance of milestones in developing organisms. Adult animals that were exposed to depleted uranium during development display persistent alterations in behavior, even after cessation of depleted uranium exposure. Adult animals exposed to depleted uranium demonstrate altered behaviors and a variety of alterations to brain chemistry. Despite its reduced level of radioactivity, evidence continues to accumulate that depleted uranium, if ingested, may pose a radiologic hazard. The current state of knowledge concerning DU is discussed.

  7. A reflection of the coding of meaning in patient-physician interaction: Jürgen Habermas' theory of communication applied to sequence analysis.

    Science.gov (United States)

    Skirbekk, Helge

    2004-08-01

    This paper introduces parts of Jürgen Habermas' theory of communication in an attempt to understand how meaning is coded in patient-physician communication. By having a closer look at how patients and physicians make assertions with their utterances, light will be shed on difficult aspects of reaching understanding in the clinical encounter. Habermas' theory will be used to differentiate assertions into validity claims referring to truth, truthfulness and rightness. An analysis of hypothetical physician-replies to a patient suffering from back pains will substantiate the necessity for such a theory.

  8. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

    The report documents the computer code block VENTURE designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently

  9. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport, version II. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1977-11-01

    The report documents the computer code block VENTURE designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry. It uses and generates interface data files adopted in the cooperative effort sponsored by the Reactor Physics Branch of the Division of Reactor Research and Development of the Energy Research and Development Administration. Several different data handling procedures have been incorporated to provide considerable flexibility; it is possible to solve a wide variety of problems on a variety of computer configurations relatively efficiently.

  10. VENTURE: a code block for solving multigroup neutronics problems applying the finite-difference diffusion-theory approximation to neutron transport

    International Nuclear Information System (INIS)

    Vondy, D.R.; Fowler, T.B.; Cunningham, G.W.

    1975-10-01

    The computer code block VENTURE, designed to solve multigroup neutronics problems with application of the finite-difference diffusion-theory approximation to neutron transport (or alternatively simple P1) in up to three-dimensional geometry, is described. A variety of types of problems may be solved: the usual eigenvalue problem, a direct criticality search on the buckling, on a reciprocal velocity absorber (prompt mode), or on nuclide concentrations, or an indirect criticality search on nuclide concentrations, or on dimensions. First-order perturbation analysis capability is available at the macroscopic cross section level
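
    The 'usual eigenvalue problem' that such finite-difference diffusion codes solve can be written down compactly in one group and one dimension: assemble the loss operator (leakage plus absorption) and the fission source operator, then iterate on the fission source. A minimal power-iteration sketch with illustrative slab data follows; nothing here is a VENTURE input or algorithm detail.

        import numpy as np

        # One-group, 1D slab, zero-flux boundaries; material data are illustrative.
        diff_coef, sig_a, nu_sig_f = 1.2, 0.025, 0.030   # cm, 1/cm, 1/cm
        width_cm, n_mesh = 200.0, 100
        h = width_cm / n_mesh

        # Loss operator M (finite-difference leakage + absorption) and fission operator F.
        main = np.full(n_mesh, 2.0 * diff_coef / h**2 + sig_a)
        off = np.full(n_mesh - 1, -diff_coef / h**2)
        M = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        F = nu_sig_f * np.eye(n_mesh)

        phi, k_eff = np.ones(n_mesh), 1.0
        for _ in range(200):                     # power iteration on M*phi = (1/k)*F*phi
            phi_new = np.linalg.solve(M, F @ phi / k_eff)
            k_eff *= (F @ phi_new).sum() / (F @ phi).sum()
            phi = phi_new

        print(f"k-effective ~ {k_eff:.4f}")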

  11. Code Calibration Applied to the TCA High-Lift Model in the 14 x 22 Wind Tunnel (Simulation With and Without Model Post-Mount)

    Science.gov (United States)

    Lessard, Wendy B.

    1999-01-01

    The objective of this study is to calibrate a Navier-Stokes code for the TCA (30/10) baseline configuration (partial-span leading-edge flaps were deflected at 30 deg and all the trailing-edge flaps were deflected at 10 deg). The computational results for several angles of attack are compared with experimental forces, moments, and surface pressures. The code used in this study is CFL3D; mesh sequencing and multi-grid were used to full advantage to accelerate convergence. A multi-grid approach was used similar to that used for the Reference H configuration, allowing point-to-point matching across all the trailing-edge block interfaces. From past experience with the Reference H (i.e., good force, moment, and pressure comparisons were obtained), it was assumed that the mounting system would produce small effects; hence, it was not initially modeled. However, comparisons of lower surface pressures indicated that the post mount significantly influenced the lower surface pressures, so the post geometry was inserted into the existing grid using Chimera (overset grids).

  12. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking

  13. Hsp90 depletion goes wild

    OpenAIRE

    Siegal, Mark L; Masel, Joanna

    2012-01-01

    Abstract Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to r...

  14. Hsp90 depletion goes wild

    Directory of Open Access Journals (Sweden)

    Siegal Mark L

    2012-02-01

    Full Text Available Abstract Hsp90 reveals phenotypic variation in the laboratory, but is Hsp90 depletion important in the wild? Recent work from Chen and Wagner in BMC Evolutionary Biology has discovered a naturally occurring Drosophila allele that downregulates Hsp90, creating sensitivity to cryptic genetic variation. Laboratory studies suggest that the exact magnitude of Hsp90 downregulation is important. Extreme Hsp90 depletion might reactivate transposable elements and/or induce aneuploidy, in addition to revealing cryptic genetic variation. See research article http://wwww.biomedcentral.com/1471-2148/12/25

  15. Modeling of precipitation and Cr depletion profiles of Inconel 600 during heat treatments and LSM procedure

    Energy Technology Data Exchange (ETDEWEB)

    Bao Gang [Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima (Japan); Shinozaki, Kenji [Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima (Japan)]. E-mail: kshino@hiroshima-u.ac.jp; Inkyo, Muneyuki [Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima (Japan); Miyoshi, Tomohisa [Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima (Japan); Yamamoto, Motomichi [Department of Mechanical System Engineering, Hiroshima University, 1-4-1 Higashi-Hiroshima, Hiroshima (Japan); Mahara, Yoichi [Babcock-Hitachi K.K., 3-36 Takara-machi, Kure, Hiroshima (Japan); Watanabe, Hiroshi [Babcock-Hitachi K.K., 3-36 Takara-machi, Kure, Hiroshima (Japan)

    2006-08-10

    A model based on thermodynamic and kinetic principles was constructed to simulate the Cr depletion profiles near the grain boundary in Inconel 600 during heat treatments and the laser surface melting (LSM) process, using the Thermo-Calc and Dictra codes. Based on the good agreement with the experimentally measured Cr concentration distributions during heat treatments, the microsegregation of Cr induced by the cellular microstructure formed during the LSM process was also modeled. The Cr depletion profile was evaluated using the Cr depletion area below the critical Cr concentration for intergranular cracking/intergranular stress corrosion cracking (IGC/IGSCC) susceptibility (8 mass%). Compared with the results of the Streicher test, the calculated Cr depletion area correlated well with the IGC/IGSCC susceptibility. The sample after SR + LTS treatment, with the largest Cr depletion area, showed the worst IGC/IGSCC resistance, while the sample after the LSM process, with the smallest Cr depletion area, showed excellent IGC/IGSCC resistance.
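
    The depletion-area metric used above can be stated as a small calculation: integrate, over distance from the grain boundary, the amount by which the Cr concentration falls below the 8 mass% IGC/IGSCC threshold. A sketch with an invented profile (not the measured or Thermo-Calc/Dictra profiles):

        import numpy as np

        CR_THRESHOLD = 8.0   # mass%, critical Cr concentration for IGC/IGSCC susceptibility

        def cr_depletion_area(distance_nm, cr_mass_pct, threshold=CR_THRESHOLD):
            """Trapezoidal integral of max(threshold - Cr(x), 0) over distance."""
            deficit = np.clip(threshold - np.asarray(cr_mass_pct, dtype=float), 0.0, None)
            dx = np.diff(np.asarray(distance_nm, dtype=float))
            return float(np.sum(0.5 * (deficit[1:] + deficit[:-1]) * dx))

        x = [0.0, 5.0, 10.0, 20.0, 40.0, 80.0]     # nm from the grain boundary
        cr = [5.5, 6.2, 7.1, 7.9, 8.8, 16.0]       # mass% Cr, illustrative profile
        print(f"Cr depletion area ~ {cr_depletion_area(x, cr):.1f} mass%*nm")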

  16. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Full Text Available Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of a UD code and, for codes that are not UD, allows one to recover the "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads to introducing the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover we conjecture that the canonical partition satisfies such a hypothesis. Finally we consider also some relationships between coding partitions and varieties of codes.

  17. An Applied Study of Implementation of the Advanced Decommissioning Costing Methodology for Intermediate Storage Facility for Spent Fuel in Studsvik, Sweden with special emphasis to the application of the Omega code

    International Nuclear Information System (INIS)

    Kristofova, Kristina; Vasko, Marek; Daniska, Vladimir; Ondra, Frantisek; Bezak, Peter; Lindskog, Staffan

    2007-01-01

    The presented study focuses on an analysis of the decommissioning costs for the Intermediate Storage Facility for Spent Fuel (FA) in Studsvik prepared by SVAFO, and on a proposal for applying the advanced decommissioning costing methodology. This applied study therefore concentrates on the following areas: 1. Analysis of the FA facility cost estimates prepared by SVAFO, including a description of the FA facility in Studsvik, the summarised input data, the applied cost estimating methodology and the summarised results from the SVAFO study. 2. Discussion of the results of the SVAFO analysis, proposals for an enhanced cost estimating methodology and an upgraded structure of inputs/outputs for the decommissioning study of the FA facility. 3. Review of costing methodologies with special emphasis on the advanced costing methodology and the cost calculation code OMEGA. 4. Discussion of the implementation of the advanced costing methodology for the FA facility in Studsvik, together with: - identification of areas of implementation; - analyses of the local decommissioning infrastructure; - adaptation of the data for the calculation database; - inventory database; and - implementation of the style of work with the computer code OMEGA

  18. Impact of mineral resource depletion

    CSIR Research Space (South Africa)

    Brent, AC

    2006-09-01

    Full Text Available In a letter to the editor, the authors comment on BA Steen's article on "Abiotic Resource Depletion: different perceptions of the problem with mineral deposits" published in the special issue of the International Journal of Life Cycle Assessment...

  19. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    Summary of the most important points from the main report: Documentation and Evaluation of Coding Class

  20. The octopus burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de.

    1996-01-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (author)

  1. The OCTOPUS burnup and criticality code system

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Kuijper, J.C.; Leege, P.F.A. de

    1996-06-01

    The OCTOPUS burnup and criticality code system is described. This system links the spectrum codes from the SCALE4.1, WIMS7 and MCNP4A packages to the ORIGEN-S and FISPACT4.2 fuel depletion and activation codes, which enables us to perform very accurate burnup calculations in complicated three-dimensional geometries. The data used by all codes are consistently based on the JEF2.2 evaluated nuclear data file. Some special features of OCTOPUS not available in other codes are described, as well as the validation of the system. (orig.)

  2. Network Coding Over The 2^32

    DEFF Research Database (Denmark)

    Pedersen, Morten Videbæk; Heide, Janus; Vingelmann, Peter

    2013-01-01

    Creating efficient finite field implementations has been an active research topic for several decades. Many applications in areas such as cryptography, signal processing, erasure coding and now also network coding depend on this research to deliver satisfactory performance. In this paper we...... will be useful in many network coding applications where large field sizes are required....

  3. Evaluation of measured LWR spent fuel composition data for use in code validation end-user manual

    International Nuclear Information System (INIS)

    Hermann, O.W.; DeHart, M.D.; Murphy, B.D.

    1998-02-01

    Burnup credit (BUC) is a concept applied in the criticality safety analysis of spent nuclear fuel in which credit or partial credit is taken for the reduced reactivity worth of the fuel due to both fissile depletion and the buildup of actinides and fission products that act as net neutron absorbers. Typically, a two-step process is applied in BUC analysis: first, depletion calculations are performed to estimate the isotopic content of spent fuel based on its burnup history; second, three-dimensional (3-D) criticality calculations are performed based on specific spent fuel packaging configurations. In seeking licensing approval of any BUC approach (e.g., disposal, transportation, or storage) both of these two computational procedures must be validated. This report was prepared in support of the validation process for depletion methods applied in the analysis of spent fuel from commercial light-water-reactor (LWR) designs. Such validation requires the comparison of computed isotopic compositions with those measured via radiochemical assay to assess the ability of a computer code to predict the contents of spent fuel samples. The purpose of this report is to address the availability and appropriateness of measured data for use in the validation of isotopic depletion methods. Although validation efforts to date at ORNL have been based on calculations using the SAS2H depletion sequence of the SCALE code system, this report has been prepared as an overview of potential sources of validation data independent of the code system used. However, data that are identified as in use in this report refer to earlier validation work performed using SAS2H in support of BUC. This report is the result of a study of available assay data, using the experience gained in spent fuel isotopic validation and with a consideration of the validation issues described earlier. This report recommends the suitability of each set of data for validation work similar in scope to the earlier work

  4. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environment and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which is able to improve the burst-erasure protection capability by applying the convolution property to the tTN code, and reduce computational complexity by abrogating the multi-level structure. The simulation results show that cTN code can provide a better packet loss protection performance with lower computation complexity than tTN code.

  5. Doubled Color Codes

    Science.gov (United States)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLG) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLG are highly desirable since they introduce no overhead and do not spread errors. It has been known before that a quantum code can have only a finite number of TLGs which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π / 2 phase shift. The second code that we call a doubled color code provides a transversal T-gate, where T is the π / 4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on a joint work with Andrew Cross.

  6. OSCAR-4 Code System Application to the SAFARI-1 Reactor

    International Nuclear Information System (INIS)

    Stander, Gerhardt; Prinsloo, Rian H.; Tomasevic, Djordje I.; Mueller, Erwin

    2008-01-01

    The OSCAR reactor calculation code system consists of a two-dimensional lattice code, the three-dimensional nodal core simulator code MGRAC and related service codes. The major difference between the new version of the OSCAR system, OSCAR-4, and its predecessor, OSCAR-3, is the new version of MGRAC which contains many new features and model enhancements. In this work some of the major improvements in the nodal diffusion solution method, history tracking, nuclide transmutation and cross section models are described. As part of the validation process of the OSCAR-4 code system (specifically the new MGRAC version), some of the new models are tested by comparing computational results to SAFARI-1 reactor plant data for a number of operational cycles and for varying applications. A specific application of the new features allows correct modeling of, amongst others, the movement of fuel-follower type control rods and dynamic in-core irradiation schedules. It is found that the effect of the improved control rod model, applied over multiple cycles of the SAFARI-1 reactor operation history, has a significant effect on in-cycle reactivity prediction and fuel depletion. (authors)

  7. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
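
    'Stabilized linear inverse theory' of the kind applied here typically means a damped (Tikhonov-style) least-squares solution of the linearized problem d = A m, trading data misfit against solution size so that the inversion does not amplify noise. A generic sketch with synthetic data follows; the matrix, noise level and damping factor are invented, and this is not the INVERT algorithm itself.

        import numpy as np

        def stabilized_inverse(a_matrix, data, damping):
            """Solve min ||A m - d||^2 + damping*||m||^2 via the normal equations."""
            n_params = a_matrix.shape[1]
            lhs = a_matrix.T @ a_matrix + damping * np.eye(n_params)
            return np.linalg.solve(lhs, a_matrix.T @ data)

        rng = np.random.default_rng(0)
        a_matrix = rng.normal(size=(30, 10))                    # placeholder sensitivity matrix
        m_true = rng.normal(size=10)                            # "true" anomaly parameters
        data = a_matrix @ m_true + 0.05 * rng.normal(size=30)   # noisy synthetic gravity data
        m_est = stabilized_inverse(a_matrix, data, damping=0.1)
        print(np.round(m_est - m_true, 3))                      # recovery error per parameter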

  8. Network Coding

    Indian Academy of Sciences (India)

    message symbols downstream, network coding achieves vast performance gains by permitting intermediate nodes to carry out algebraic operations on the incoming data. In this article we present a tutorial introduction to network coding as well as an application to the efficient operation of distributed data-storage networks.

  9. Applied dynamics

    CERN Document Server

    Schiehlen, Werner

    2014-01-01

    Applied Dynamics is an important branch of engineering mechanics widely applied to mechanical and automotive engineering, aerospace and biomechanics as well as control engineering and mechatronics. The computational methods presented are based on common fundamentals. For this purpose analytical mechanics turns out to be very useful, where D’Alembert’s principle in the Lagrangian formulation proves to be most efficient. The methods of multibody systems, finite element systems and continuous systems are treated consistently. Thus, students get a much better understanding of dynamical phenomena, and engineers in design and development departments using computer codes may check the results more easily by choosing models of different complexity for vibration and stress analysis.

  10. Uranium, depleted uranium, biological effects

    International Nuclear Information System (INIS)

    2001-01-01

    Physicists, chemists and biologists at the CEA are developing scientific programs on the properties and uses of ionizing radiation. Since the CEA was created in 1945, a great deal of research has been carried out on the properties of natural, enriched and depleted uranium in cooperation with university laboratories and CNRS. There is a great deal of available data about uranium; thousands of analyses have been published in international reviews over more than 40 years. This presentation on uranium is a very brief summary of all these studies. (author)

  11. Efficient characterization of fuel depletion in boiling water reactor

    International Nuclear Information System (INIS)

    Kim, S.H.

    1980-01-01

    An efficient fuel depletion method for boiling water reactor (BWR) fuel assemblies has been developed for fuel cycle analysis. A computer program, HISTORY, based on this method was designed to carry out accurate and rapid fuel burnup calculations for the fuel assembly. It has been usefully employed to study the depletion characteristics of fuel assemblies, to prepare nodal code input data, and for fuel management studies. The adequacy and effectiveness of the method used in HISTORY were demonstrated by comparing HISTORY results with more detailed CASMO results. The computing cost of HISTORY typically has been less than one dollar for fuel assembly-level depletion calculations over the full life of the assembly, in contrast to more than $1000 for CASMO. By combining CASMO and HISTORY, a large number of expensive CASMO calculations can be replaced by inexpensive HISTORY runs. For depletion calculations via CASMO/HISTORY, CASMO calculations are required only for the reference conditions and, just at the beginning of life, for other cases such as changes in void fraction, control rod condition and temperature. The simple and inexpensive HISTORY is sufficiently accurate and fast to be used in conjunction with CASMO for fuel cycle analysis and some BWR design calculations

  12. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.

  13. DOUBLE-SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    International Nuclear Information System (INIS)

    OGDEN DM; KIRCH NW

    2007-01-01

    This document generates a supernatant hydroxide ion depletion model based on mechanistic principles. The carbon dioxide absorption mechanistic model is developed in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed
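
    A mechanistic hydroxide-depletion balance of the kind described can be marched in time from a CO2 absorption flux: CO2 taken up across the waste surface consumes two moles of hydroxide per mole of CO2 (CO2 + 2 OH- -> CO3^2- + H2O). The sketch below uses an invented mass-transfer coefficient, surface area, vapor-space CO2 concentration and starting inventory; none of these are values from the report.

        def hydroxide_inventory(oh_mol, k_mass_transfer_m_s, area_m2, co2_mol_m3,
                                years, dt_days=30.0):
            """March the OH- inventory forward assuming a constant CO2 absorption flux."""
            elapsed_s, dt_s = 0.0, dt_days * 86400.0
            while elapsed_s < years * 365.25 * 86400.0 and oh_mol > 0.0:
                co2_absorbed_mol = k_mass_transfer_m_s * area_m2 * co2_mol_m3 * dt_s
                oh_mol = max(oh_mol - 2.0 * co2_absorbed_mol, 0.0)   # 2 OH- per CO2
                elapsed_s += dt_s
            return oh_mol

        # e.g. 1e-6 m/s mass-transfer coefficient, 400 m^2 surface, 0.015 mol/m^3 CO2:
        remaining = hydroxide_inventory(5.0e5, 1.0e-6, 400.0, 0.015, years=10.0)
        print(f"remaining hydroxide ~ {remaining:.3e} mol")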

  14. DOUBLE SHELL TANK (DST) HYDROXIDE DEPLETION MODEL FOR CARBON DIOXIDE ABSORPTION

    Energy Technology Data Exchange (ETDEWEB)

    OGDEN DM; KIRCH NW

    2007-10-31

    This document generates a supernatant hydroxide ion depletion model based on mechanistic principles. The carbon dioxide absorption mechanistic model is developed in this report. The report also benchmarks the model against historical tank supernatant hydroxide data and vapor space carbon dioxide data. A comparison of the newly generated mechanistic model with previously applied empirical hydroxide depletion equations is also performed.

  15. Some Families of Asymmetric Quantum MDS Codes Constructed from Constacyclic Codes

    Science.gov (United States)

    Huang, Yuanyuan; Chen, Jianzhang; Feng, Chunhui; Chen, Riqing

    2018-02-01

    Quantum maximal-distance-separable (MDS) codes that satisfy quantum Singleton bound with different lengths have been constructed by some researchers. In this paper, seven families of asymmetric quantum MDS codes are constructed by using constacyclic codes. We weaken the case of Hermitian-dual containing codes that can be applied to construct asymmetric quantum MDS codes with parameters [[n,k,dz/dx

  16. Development and verification of multicycle depletion perturbation theory

    International Nuclear Information System (INIS)

    White, J.R.; Burns, T.J.

    1980-01-01

    Recently, Williams has developed a coupled neutron/nuclide depletion perturbation theory (DPT) applicable to multidimensional and multigroup reactor analysis problems. This theoretical framework has been verified using the newly developed DEPTH module within the context of the VENTURE modular code system. The accuracy and usefulness of this alternate calculational method for burnup analyses has been demonstrated for a variety of final-time response functionals. However, these examples were restricted to single-cycle depletion analyses due to the theoretical assumption that the nuclide density field was continuous in time. Clearly, in multicycle problems, the nuclide concentrations must vary discontinuously with time to model refueling and shuffling operations or discrete control rod movements. Thus, the purpose of this work is to generalize the original DPT framework to include nuclide discontinuities and to verify that this generalization can be employed in realistic multicycle applications

  17. New constructions of MDS codes with complementary duals

    OpenAIRE

    Chen, Bocong; Liu, Hongwei

    2017-01-01

    Linear complementary-dual (LCD for short) codes are linear codes that intersect with their duals trivially. LCD codes have been used in certain communication systems. It is recently found that LCD codes can be applied in cryptography. This application of LCD codes renewed the interest in the construction of LCD codes having a large minimum distance. MDS codes are optimal in the sense that the minimum distance cannot be improved for given length and code size. Constructing LCD MDS codes is thu...

  18. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL) and the volunteer association ..., design thinking and design pedagogy, Stine Ejsing-Duun from Forskningslab: It og Læringsdesign (ILD-LAB) at the Department of Communication and Psychology, Aalborg University in Copenhagen. We followed and carried out the evaluation and documentation of the Coding Class project in the period November 2016 to May 2017. ... The Coding Class project is a pilot project in which a number of schools in the Copenhagen and Vejle municipalities have started teaching activities focused on coding and programming in school. The evaluation and documentation of the project comprise qualitative spot checks of selected teaching interventions in the autumn of...

  19. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  20. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs
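
    A hedged sketch of the design-scan idea described above: evaluate a total system cost over a grid of the two free design choices and pick the minimum. The cost model, parameter names, and values below are entirely hypothetical and are not the code's actual models.

```python
# Toy design scan over the two stated design choices (injector voltage rise
# time and ferrite-core aspect ratio) with a made-up cost model.
import itertools

def system_cost(rise_time_ns, aspect_ratio):
    """Hypothetical cost model: pulsed-power cost falls with rise time,
    ferrite-core material cost grows with core size (aspect ratio)."""
    pulsed_power = 5.0e6 / rise_time_ns          # $ -- faster switching costs more
    core_material = 2.0e4 * aspect_ratio ** 2    # $ -- bigger cores cost more
    return pulsed_power + core_material

grid = itertools.product([20, 40, 60, 80], [1.5, 2.0, 2.5, 3.0])
best = min(grid, key=lambda d: system_cost(*d))
print("cheapest design (rise time ns, aspect ratio):", best, system_cost(*best))
```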

  1. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  2. Depleted uranium disposal options evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Hertzler, T.J.; Nishimoto, D.D.; Otis, M.D. [Science Applications International Corp., Idaho Falls, ID (United States). Waste Management Technology Div.

    1994-05-01

    The Department of Energy (DOE), Office of Environmental Restoration and Waste Management, has chartered a study to evaluate alternative management strategies for depleted uranium (DU) currently stored throughout the DOE complex. Historically, DU has been maintained as a strategic resource because of uses for DU metal and potential uses for further enrichment or for uranium oxide as breeder reactor blanket fuel. This study has focused on evaluating the disposal options for DU if it were considered a waste. This report is in no way declaring these DU reserves a "waste," but is intended to provide baseline data for comparison with other management options for use of DU. Topics considered in this report include: retrievable disposal; permanent disposal; health hazards; radiation toxicity and chemical toxicity.

  3. Depleted uranium disposal options evaluation

    International Nuclear Information System (INIS)

    Hertzler, T.J.; Nishimoto, D.D.; Otis, M.D.

    1994-05-01

    The Department of Energy (DOE), Office of Environmental Restoration and Waste Management, has chartered a study to evaluate alternative management strategies for depleted uranium (DU) currently stored throughout the DOE complex. Historically, DU has been maintained as a strategic resource because of uses for DU metal and potential uses for further enrichment or for uranium oxide as breeder reactor blanket fuel. This study has focused on evaluating the disposal options for DU if it were considered a waste. This report is in no way declaring these DU reserves a "waste," but is intended to provide baseline data for comparison with other management options for use of DU. Topics considered in this report include: retrievable disposal; permanent disposal; health hazards; radiation toxicity and chemical toxicity.

  4. Spatial resolution enhancement residual coding using hybrid ...

    Indian Academy of Sciences (India)

    Traditional video coding uses classical predictive coding techniques, where a signal is initially approximated by taking advantage of the various redundancies present. Most of the video coding standards, including the latest HEVC, use the well-accepted procedure of applying transform coding on self-contained (intra) and ...

  5. High-voltage-compatible, fully depleted CCDs

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Stephen E.; Bebek, Chris J.; Dawson, Kyle S.; Emes, JohnE.; Fabricius, Max H.; Fairfield, Jessaym A.; Groom, Don E.; Karcher, A.; Kolbe, William F.; Palaio, Nick P.; Roe, Natalie A.; Wang, Guobin

    2006-05-15

    We describe charge-coupled device (CCD) development activities at the Lawrence Berkeley National Laboratory (LBNL). Back-illuminated CCDs fabricated on 200-300 µm thick, fully depleted, high-resistivity silicon substrates are produced in partnership with a commercial CCD foundry. The CCDs are fully depleted by the application of a substrate bias voltage. Spatial resolution considerations require operation of thick, fully depleted CCDs at high substrate bias voltages. We have developed CCDs that are compatible with substrate bias voltages of at least 200 V. This improves spatial resolution for a given thickness, and allows for full depletion of thicker CCDs than previously considered. We have demonstrated full depletion of 650-675 µm thick CCDs, with potential applications in direct x-ray detection. In this work we discuss the issues related to high-voltage operation of fully depleted CCDs, as well as experimental results on high-voltage-compatible CCDs.

  6. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal became corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding usually refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
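
    As a concrete, hedged example of waveform-domain speech coding (a classical technique, not one singled out by this abstract), the sketch below applies µ-law companding of the kind used in G.711-style telephony to compress the dynamic range of a sample before quantization.

```python
# Mu-law companding of a speech sample in [-1, 1] and its inverse.
import math

MU = 255.0

def mu_law_encode(x):
    """Compress a sample x in [-1, 1] with the mu-law characteristic."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_decode(y):
    """Expand a compressed value y in [-1, 1] back to the linear domain."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

sample = 0.01                                # a quiet speech sample
coded = mu_law_encode(sample)
print(round(coded, 4), round(mu_law_decode(coded), 4))   # ~0.2285, ~0.01
```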

  7. Is gas in the Orion nebula depleted

    International Nuclear Information System (INIS)

    Aiello, S.; Guidi, I.

    1978-01-01

    Depletion of heavy elements has been recognized to be important in the understanding of the chemical composition of the interstellar medium. This problem is also relevant to the study of H II regions. In this paper the gaseous depletion in the physical conditions of the Orion nebula is investigated. The authors reach the conclusion that very probably no depletion of heavy elements, due to sticking on dust grains, took place during the lifetime of the Orion nebula. (Auth.)

  8. [Nutritional depletion in chronic obstructive pulmonary disease].

    Science.gov (United States)

    Chen, Yan; Yao, Wan-zhen

    2004-10-01

    Chronic obstructive pulmonary disease (COPD) is one of the major diseases worldwide. Nutritional depletion is a common problem in COPD patients and also an independent predictor of survival in these patients. Several kinds of data are helpful for determining nutritional depletion, including anthropometric measurements, laboratory markers, body composition analysis (fat-free mass and lean mass), and body weight. The mechanism of nutritional depletion in patients with COPD is still uncertain. It may be associated with energy/metabolism imbalance, tissue hypoxia, systemic inflammation, and leptin/orexin disorders. In patients with nutritional depletion, growth hormone and testosterone can be used for nutritional therapy in addition to nutrition supplementation.

  9. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault, their subsequent transport via the groundwater and surface water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a User's manual, describing simulation procedures, input data preparation, output and example test cases.

  10. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and softwa...... expression in the public realm. The book’s line of argument defends language against its invasion by economics, arguing that speech continues to underscore the human condition, however paradoxical this may seem in an era of pervasive computing....

  11. High pressure elasticity and thermal properties of depleted uranium

    International Nuclear Information System (INIS)

    Jacobsen, M. K.; Velisavljevic, N.

    2016-01-01

    Studies of the phase diagram of uranium have revealed a wealth of high pressure and temperature phases. Under ambient conditions the crystal structure is well defined up to 100 gigapascals (GPa), but very little information on thermal conduction or elasticity is available over this same range. This work has applied ultrasonic interferometry to determine the elasticity, mechanical, and thermal properties of depleted uranium to 4.5 GPa. Results show general strengthening with applied load, including an overall increase in acoustic thermal conductivity. Further implications are discussed within. This work presents the first high pressure studies of the elasticity and thermal properties of depleted uranium metal and the first real-world application of a previously developed containment system for making such measurements.
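
    The standard elasticity relations one can apply to ultrasonic interferometry data of this kind are shown below as a hedged sketch: bulk and shear moduli from longitudinal and shear sound speeds and density. The formulas are the usual isotropic-elasticity relations; the input numbers are hypothetical illustrations, not the paper's measurements.

```python
# Bulk (K) and shear (G) moduli from sound speeds and density.
def elastic_moduli(rho, v_p, v_s):
    """rho in kg/m^3, velocities in m/s; returns (K, G) in GPa."""
    G = rho * v_s ** 2                    # shear modulus
    K = rho * v_p ** 2 - 4.0 * G / 3.0    # bulk modulus
    return K / 1e9, G / 1e9

# Hypothetical illustrative inputs (not measured values from this work)
K, G = elastic_moduli(rho=19100.0, v_p=3400.0, v_s=2000.0)
print(round(K, 1), round(G, 1))           # bulk and shear moduli in GPa
```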

  12. Three-dimensional depletion analysis of the axial end of a Takahama fuel rod

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Gauld, Ian C.; Suyama, Kenya

    2008-01-01

    Recent developments in spent fuel characterization methods have involved the development of several three-dimensional depletion algorithms based on Monte Carlo methods for the transport solution. However, most validation done to-date has been based on radiochemical assay data for spent fuel samples selected from locations in fuel assemblies that can be easily analyzed using two-dimensional depletion methods. The development of a validation problem that has a truly three-dimensional nature is desirable to thoroughly test the full capabilities of advanced three-dimensional depletion tools. This paper reports on the results of three-dimensional depletion calculations performed using the T6-DEPL depletion sequence of the SCALE 5.1 code system, which couples the KENO-VI Monte Carlo transport solver with the ORIGEN-S depletion and decay code. Analyses are performed for a spent fuel sample that was extracted from within the last two centimeters of the fuel pellet stack. Although a three-dimensional behavior is clearly seen in the results of a number of calculations performed under different assumptions, the uncertainties associated with the position of the sample and its local surroundings render this sample of little value as a validation data point. (authors)

  13. Simulation of groundwater conditions and streamflow depletion to evaluate water availability in a Freeport, Maine, watershed

    Science.gov (United States)

    Nielsen, Martha G.; Locke, Daniel B.

    2012-01-01

    , the public-supply withdrawals (105.5 million gallons per year (Mgal/yr)) were much greater than those for any other category, being almost 7 times greater than all domestic well withdrawals (15.3 Mgal/yr). Industrial withdrawals in the study area (2.0 Mgal/yr) are mostly by a company that withdraws from an aquifer at the edge of the Merrill Brook watershed. Commercial withdrawals are very small (1.0 Mgal/yr), and no irrigation or other agricultural withdrawals were identified in this study area. A three-dimensional, steady-state groundwater-flow model was developed to evaluate stream-aquifer interactions and streamflow depletion from pumping, to help refine the conceptual model, and to predict changes in streamflow resulting from changes in pumping and recharge. Groundwater levels and flow in the Freeport aquifer study area were simulated with the three-dimensional, finite-difference groundwater-flow modeling code, MODFLOW-2005. Study area hydrology was simulated with a 3-layer model, under steady-state conditions. The groundwater model was used to evaluate changes that could occur in the water budgets of three parts of the local hydrologic system (the Harvey Brook watershed, the Merrill Brook watershed, and the buried aquifer from which pumping occurs) under several different climatic and pumping scenarios. The scenarios were (1) no pumping well withdrawals; (2) current (2009) pumping, but simulated drought conditions (20-percent reduction in recharge); (3) current (2009) recharge, but a 50-percent increase in pumping well withdrawals for public supply; and (4) drought conditions and increased pumping combined. In simulated drought situations, the overall recharge to the buried valley is about 15 percent less and the total amount of streamflow in the model area is reduced by about 19 percent. Without pumping, infiltration to the buried valley aquifer around the confining unit decreased by a small amount (0.05 million gallons per day (Mgal/d)), and discharge to the
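
    The withdrawal comparison and the drought assumption quoted above reduce to simple arithmetic; the sketch below reproduces only the numbers stated in the abstract (withdrawal categories and the 20-percent recharge reduction) and is not part of the MODFLOW-2005 simulation itself.

```python
# Withdrawal figures (Mgal/yr) and the drought-scenario recharge factor,
# taken directly from the abstract above.
withdrawals_mgal_yr = {"public supply": 105.5, "domestic wells": 15.3,
                       "industrial": 2.0, "commercial": 1.0}

ratio = withdrawals_mgal_yr["public supply"] / withdrawals_mgal_yr["domestic wells"]
print(f"public supply is {ratio:.1f}x domestic withdrawals")   # ~6.9x

recharge_nominal = 1.0                       # normalized recharge
recharge_drought = recharge_nominal * 0.80   # stated 20-percent reduction
print("drought recharge fraction:", recharge_drought)
```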

  14. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  15. Radiation survey and decontamination of cape Arza from depleted uranium

    Directory of Open Access Journals (Sweden)

    Vukotić Perko

    2003-01-01

    In the action of NATO A-10 airplanes in 1999, cape Arza (Serbia and Montenegro) was contaminated by depleted uranium. Clean-up operations were undertaken at the site, and 242 uranium projectiles and 49 of their larger fragments were removed from the cape, about 85% of the total number of projectiles with which Arza was contaminated. Details of the applied procedures and the results of the soil radioactivity measurements after decontamination are described.

  16. Plutonium in depleted uranium penetrators

    International Nuclear Information System (INIS)

    McLaughlin, J.P.; Leon-Vintro, L.; Smith, K.; Mitchell, P.I.; Zunic, Z.S.

    2002-01-01

    Depleted uranium (DU) penetrators used in the recent Balkan conflicts have been found to be contaminated with trace amounts of transuranic materials such as plutonium. This contamination is usually a consequence of DU fabrication being carried out in facilities that also handle uranium recycled from spent military and civilian nuclear reactor fuel. Specific activities of 239+240 Pu, generally in the range 1 to 12 Bq/kg, have been found in DU penetrators recovered from the attack sites of the 1999 NATO bombardment of Kosovo. A DU penetrator recovered from a May 1999 attack site at Bratoselce in southern Serbia and analysed by University College Dublin was found to contain 43.7 ± 1.9 Bq/kg of 239+240 Pu. This analysis is described. An account is also given of the radiation dose implications for the general population arising both from the DU itself and from the presence of plutonium in the penetrators. According to current dosimetric models, in all scenarios considered likely, the dose from the plutonium is estimated to be much smaller than that due to the uranium isotopes present in the penetrators. (author)

  17. ANIMAL code

    Energy Technology Data Exchange (ETDEWEB)

    Lindemuth, I.R.

    1979-02-28

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. The temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.

  18. Network Coding

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 7. Network Coding. K V Rashmi Nihar B Shah P Vijay Kumar. General Article Volume 15 Issue 7 July 2010 pp 604-621. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621 ...

  19. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. The temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined.

  20. Expander Codes

    Indian Academy of Sciences (India)

    Codes and Channels. A noisy communication channel is illustrated in Fig. ... Suppose we want to transmit a message over the unreliable communication channel so that even if the channel corrupts some of the bits we are able to recover ... is d-regular, meaning thereby that every vertex has degree d.

  1. Expander Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 10; Issue 1. Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article Volume 10 ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science Bangalore 560 012, India.

  2. Network Coding

    Indian Academy of Sciences (India)

    Network coding is a technique to increase the amount of information flow in a network by making the key observation that information flow is fundamentally different from commodity flow. Whereas, under traditional methods of operation of data networks, intermediate nodes are restricted to simply forwarding their incoming.

  3. The Chemistry and Toxicology of Depleted Uranium

    Directory of Open Access Journals (Sweden)

    Sidney A. Katz

    2014-03-01

    Natural uranium comprises three radioactive isotopes: 238U, 235U, and 234U. Depleted uranium (DU) is a byproduct of the processes for the enrichment of the naturally occurring 235U isotope. The worldwide stockpile contains some 1.5 million tons of depleted uranium. Some of it has been used to dilute weapons grade uranium (~90% 235U) down to reactor grade uranium (~5% 235U), and some of it has been used for heavy tank armor and for the fabrication of armor-piercing bullets and missiles. Such weapons were used by the military in the Persian Gulf, the Balkans and elsewhere. The testing of depleted uranium weapons and their use in combat have resulted in environmental contamination and human exposure. Although the chemical and toxicological behaviors of depleted uranium are essentially the same as those of natural uranium, the respective chemical forms and isotopic compositions in which they usually occur are different. The chemical and radiological toxicity of depleted uranium can injure biological systems. Normal functioning of the kidney, liver, lung, and heart can be adversely affected by depleted uranium intoxication. The focus of this review is on the chemical and toxicological properties of depleted and natural uranium and some of the possible consequences of long term, low dose exposure to depleted uranium in the environment.

  4. Development, implementation, and verification of multicycle depletion perturbation theory for reactor burnup analysis

    Energy Technology Data Exchange (ETDEWEB)

    White, J.R.

    1980-08-01

    A generalized depletion perturbation formulation based on the quasi-static method for solving realistic multicycle reactor depletion problems is developed and implemented within the VENTURE/BURNER modular code system. The present development extends the original formulation derived by M.L. Williams to include nuclide discontinuities such as fuel shuffling and discharge. This theory is first described in detail with particular emphasis given to the similarity of the forward and adjoint quasi-static burnup equations. The specific algorithm and computational methods utilized to solve the adjoint problem within the newly developed DEPTH (Depletion Perturbation Theory) module are then briefly discussed. Finally, the main features and computational accuracy of this new method are illustrated through its application to several representative reactor depletion problems.

  5. The scale analysis sequence for LWR fuel depletion

    International Nuclear Information System (INIS)

    Hermann, O.W.; Parks, C.V.

    1991-01-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs
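
    A schematic of the alternating transport/depletion procedure described above is sketched below. It is a generic predictor loop meant only to illustrate how spectrum-updated cross sections feed a point-depletion step; the physics, functions, and numbers are simplified stand-ins, not the SAS2H implementation.

```python
# Generic transport/point-depletion loop (illustrative only).
import math

def transport_update(number_densities):
    """Stand-in for the 1-D lattice transport solve (the XSDRNPM-S role):
    returns burnup-dependent one-group cross sections. Hypothetical values."""
    n_u235 = number_densities["U235"]
    return {"U235": 45.0 + 5.0 * n_u235, "U238": 0.9}   # barns, illustrative

def depletion_step(number_densities, xs, flux, dt):
    """Stand-in for the point-depletion solve (the ORIGEN-S role):
    simple exponential burnout of U-235, with U-238 held fixed."""
    new = dict(number_densities)
    new["U235"] *= math.exp(-xs["U235"] * 1e-24 * flux * dt)
    return new

densities = {"U235": 0.04, "U238": 0.96}      # atoms/(barn*cm), illustrative
flux, dt = 3e14, 100 * 86400                  # n/cm^2/s, 100-day time step
for step in range(5):
    xs = transport_update(densities)          # spectral/cross-section update
    densities = depletion_step(densities, xs, flux, dt)
    print(step, round(densities["U235"], 5))
```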

  6. Some improvements in the microscopic depletion of assemblies with gadolinium rods

    International Nuclear Information System (INIS)

    Hoareau, F.; Couyras, D.; Girardi, E.

    2012-01-01

    EDF R&D is developing a new calculation scheme based on the transport-simplified Pn (SPn) approach. The lattice code used is the deterministic code APOLLO2, developed at CEA with the support of EDF and AREVA-NP. The core code is COCAGNE, developed at EDF R&D. The latter can take advantage of a microscopic depletion solver which improves the treatment of spectral history effects. However, comparisons with reference calculations show that the microscopic mode used in COCAGNE gives slightly less accurate results when used to simulate the depletion of assemblies with gadolinium rods. This study aims at determining whether specific models can be used to improve the results of the depletion of assemblies containing burnable poisons. Three possible models are considered in this paper. The first model consists in explicitly describing the gadolinium isotopic chain in the microscopic model implemented within COCAGNE. The second model also uses an explicit description of the gadolinium chain; in addition, the local concentration of gadolinium isotopes is used as the interpolation parameter instead of the local burnup when evaluating microscopic cross sections. The last model consists in using the concentration of Pu239 as a spectral indicator: microscopic cross sections are then corrected according to the local concentration of this nuclide. Comparisons with APOLLO2 depletion calculations were performed to validate these models in COCAGNE. These APOLLO2 calculations consisted in depleting the fuel from 0 GWd/t to 60 GWd/t while keeping perturbed thermal-hydraulic conditions. COCAGNE used as input the neutronic libraries generated via a depletion performed in nominal conditions, followed by a branch case corresponding to the perturbed thermal-hydraulic conditions of the reference APOLLO2 calculations. These tests show that the microscopic model using Pu239 as a spectral indicator improves the treatment of spectral effects in COCAGNE.
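
    A hedged sketch of the spectral-indicator idea is given below: a microscopic cross section tabulated against a nominal depletion is corrected by interpolating on the local Pu-239 concentration instead of the local burnup. This is only an illustration of the principle, not the COCAGNE implementation, and all tabulated numbers are made up.

```python
# Interpolate a microscopic cross section on the Pu-239 spectral indicator.
import numpy as np

# Nominal-conditions library (hypothetical): Pu-239 concentration and a
# microscopic cross section tabulated versus burnup.
burnup      = np.array([0.0, 20.0, 40.0, 60.0])         # GWd/t
n_pu239_nom = np.array([0.0, 1.0e-4, 1.6e-4, 2.0e-4])   # atoms/(barn*cm)
sigma_nom   = np.array([50.0, 46.0, 43.0, 41.0])        # barns

def sigma_from_indicator(n_pu239_local):
    """Look up the cross section at the point of the nominal depletion that
    has the same Pu-239 concentration as the local perturbed depletion."""
    return np.interp(n_pu239_local, n_pu239_nom, sigma_nom)

# A perturbed history that has built up more Pu-239 than the nominal one
# at the same burnup picks up a correspondingly harder-spectrum cross section:
print(sigma_from_indicator(1.8e-4))   # falls between the 40 and 60 GWd/t values
```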

  7. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
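
    A minimal one-dimensional illustration of the encoding/decoding principle follows: the detector records the scene shifted and added through the mask's open holes, and correlating the record with the known mask pattern recovers the anomaly position. This is a generic sketch of coded-aperture decoding, not the paper's gamma-ray backscatter setup.

```python
# 1-D coded-aperture encode/decode by correlation (illustrative only).
import numpy as np

mask = np.array([1, 0, 1, 1, 0, 1, 0])         # open (1) / closed (0) holes
scene = np.zeros(30)
scene[12] = 5.0                                 # a single "flaw" at position 12

# Detector sees the scene convolved with the mask pattern (shift-and-add).
detector = np.convolve(scene, mask, mode="full")

# Decoding: correlate the recorded pattern with the known mask; the peak
# of the correlation indicates the flaw position.
decoded = np.correlate(detector, mask, mode="valid")
print("estimated flaw position:", int(np.argmax(decoded)))   # -> 12
```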

  8. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of an increased feedback and receiver complexity.

  9. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved code modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
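
    As a hedged sketch of the lifting idea mentioned above, the snippet below performs a single circulant lifting of a small base (proto)matrix: each nonzero entry is replaced by a cyclically shifted identity block, producing a larger parity-check matrix. The base matrix and shift values are arbitrary illustrations, not the invention's actual two-stage design.

```python
# Circulant lifting of a protograph base matrix (illustrative shifts).
import numpy as np

def circulant_lift(base, shifts, Z):
    """Replace base[i][j] == 1 by a Z-by-Z identity cyclically shifted by
    shifts[i][j]; zero entries become Z-by-Z all-zero blocks."""
    rows = []
    for i, brow in enumerate(base):
        blocks = []
        for j, b in enumerate(brow):
            if b:
                blocks.append(np.roll(np.eye(Z, dtype=int), shifts[i][j], axis=1))
            else:
                blocks.append(np.zeros((Z, Z), dtype=int))
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

base   = [[1, 1, 0], [0, 1, 1]]     # 2x3 protograph
shifts = [[0, 2, 0], [0, 1, 3]]     # circulant shift per edge
H = circulant_lift(base, shifts, Z=4)
print(H.shape)                       # (8, 12) lifted parity-check matrix
```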

  10. Extracting information from partially depleted Si detectors with digital sampling electronics

    Directory of Open Access Journals (Sweden)

    Pastore G.

    2015-01-01

    A study of the identification properties and of the energy response of a Si-Si-CsI(Tl) ΔE-E telescope exploiting a partially depleted second Si stage has been performed. Five different bias voltages have been applied to the second stage of the telescope, one corresponding to full depletion, the others associated with a depleted layer ranging from 60% to 90% of the detector thickness. Fragment identification has been obtained using either the ΔE-E technique or Pulse Shape Analysis (PSA). Charge collection efficiency has been evaluated. The ΔE-E performance is not affected by incomplete depletion. Isotopic separation capability improves at lower bias voltages with respect to full depletion, though charge identification thresholds increase.

  11. Orthogonal spectral coding of entangled photons.

    Science.gov (United States)

    Lukens, Joseph M; Dezfooliyan, Amir; Langrock, Carsten; Fejer, Martin M; Leaird, Daniel E; Weiner, Andrew M

    2014-04-04

    We extend orthogonal optical coding, previously applied to multiuser classical communication networks, to entangled photons. Using a pulse shaper and sum-frequency generation for ultrafast coincidence detection, we demonstrate encoding and decoding of biphoton wave packets. Applying one code to the signal photon spreads the wave packet in time and creates a null at zero delay; filtering the idler with the matched code recovers a narrow correlation peak, whereas applying any other code leaves the wave packet spread. Our results could prove useful in the development of code-based quantum communication networks.
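
    A conceptual sketch of the matched-code behavior described above, using classical orthogonal Walsh-Hadamard codes: decoding with the matched code restores a correlation peak, while a mismatched code yields a null. This illustrates only the coding principle, not the biphoton optics or pulse-shaper hardware.

```python
# Matched vs. mismatched orthogonal-code correlation (illustrative only).
import numpy as np

H = np.array([[1,  1,  1,  1],
              [1, -1,  1, -1],
              [1,  1, -1, -1],
              [1, -1, -1,  1]])                  # rows are orthogonal codes

signal_code, matched_code, wrong_code = H[1], H[1], H[2]

encoded = signal_code                             # "spread" wave packet
peak_matched = np.dot(encoded, matched_code)      # decode with the same code
peak_wrong   = np.dot(encoded, wrong_code)        # decode with another code

print(peak_matched, peak_wrong)                   # -> 4 0 : peak vs. null
```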

  12. Fully Depleted Charge-Coupled Devices

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Stephen E.

    2006-05-15

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications.

  13. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  14. Depleted UF6 programmatic environmental impact statement

    International Nuclear Information System (INIS)

    1997-01-01

    The US Department of Energy has developed a program for long-term management and use of depleted uranium hexafluoride, a product of the uranium enrichment process. As part of this effort, DOE is preparing a Programmatic Environmental Impact Statement (PEIS) for the depleted UF6 management program. This report duplicates the information available at the web site (http://www.ead.anl.gov/web/newduf6) set up as a repository for the PEIS. Options for the web site include: reviewing recent additions or changes to the web site; learning more about depleted UF6 and the PEIS; browsing the PEIS and related documents, or submitting official comments on the PEIS; downloading all or part of the PEIS documents; and adding or deleting one's name from the depleted UF6 mailing list

  15. Fully Depleted Charge-Coupled Devices

    International Nuclear Information System (INIS)

    Holland, Stephen E.

    2006-01-01

    We have developed fully depleted, back-illuminated CCDs that build upon earlier research and development efforts directed towards technology development of silicon-strip detectors used in high-energy-physics experiments. The CCDs are fabricated on the same type of high-resistivity, float-zone-refined silicon that is used for strip detectors. The use of high-resistivity substrates allows for thick depletion regions, on the order of 200-300 um, with corresponding high detection efficiency for near-infrared and soft x-ray photons. We compare the fully depleted CCD to the p-i-n diode upon which it is based, and describe the use of fully depleted CCDs in astronomical and x-ray imaging applications

  16. Ecological considerations of natural and depleted uranium

    International Nuclear Information System (INIS)

    Hanson, W.C.

    1980-01-01

    Depleted 238 U is a major by-product of the nuclear fuel cycle for which increasing use is being made in counterweights, radiation shielding, and ordnance applications. This paper (1) summarizes the pertinent literature on natural and depleted uranium in the environment, (2) integrates results of a series of ecological studies conducted at Los Alamos Scientific Laboratory (LASL) in New Mexico where 70,000 kg of depleted and natural uranium has been expended to the environment over the past 34 years, and (3) synthesizes the information into an assessment of the ecological consequences of natural and depleted uranium released to the environment by various means. Results of studies of soil, plant, and animal communities exposed to this radiation and chemical environment over a third of a century provide a means of evaluating the behavior and effects of uranium in many contexts

  17. Plasmonic Nanoprobes for Stimulated Emission Depletion Nanoscopy.

    Science.gov (United States)

    Cortés, Emiliano; Huidobro, Paloma A; Sinclair, Hugo G; Guldbrand, Stina; Peveler, William J; Davies, Timothy; Parrinello, Simona; Görlitz, Frederik; Dunsby, Chris; Neil, Mark A A; Sivan, Yonatan; Parkin, Ivan P; French, Paul M W; Maier, Stefan A

    2016-11-22

    Plasmonic nanoparticles influence the absorption and emission processes of nearby emitters due to local enhancements of the illuminating radiation and the photonic density of states. Here, we use the plasmon resonance of metal nanoparticles in order to enhance the stimulated depletion of excited molecules for super-resolved nanoscopy. We demonstrate stimulated emission depletion (STED) nanoscopy with gold nanorods with a long axis of only 26 nm and a width of 8 nm. These particles provide an enhancement of up to 50% of the resolution compared to fluorescent-only probes without plasmonic components irradiated with the same depletion power. The nanoparticle-assisted STED probes reported here represent a ~2 × 10^3 reduction in probe volume compared to previously used nanoparticles. Finally, we demonstrate their application toward plasmon-assisted STED cellular imaging at low depletion powers, and we also discuss their current limitations.

  18. A Comparative Depletion Analysis using MCNP6 and REBUS-3 for Advanced SFR Burner Core

    Energy Technology Data Exchange (ETDEWEB)

    You, Wu Seung; Hong, Ser Gi [Kyung Hee University, Yongin (Korea, Republic of)

    2016-05-15

    In this paper, we evaluated the accuracy of fast reactor design codes by comparing MCNP6-based Monte Carlo simulation with REBUS-3-based nodal transport theory for the initial cycle of an advanced uranium-free fueled SFR burner core having large heterogeneities. It was shown that the nodal diffusion calculation in REBUS-3 gave a large difference in the initial k-effective value of 2132 pcm when compared with the MCNP6 depletion calculation using a heterogeneous model. The code system validation for fast reactor design is one of the important research topics. In our previous studies, depletion analysis and physics parameter evaluation of the fast reactor core were done with the REBUS-3 and DIF3D codes, respectively. In particular, the depletion analysis was done with lumped fission products. However, the accuracy of these calculation methodologies needs to be verified using Monte Carlo neutron transport calculations coupled with explicit treatment of fission products. In this study, the accuracy of fast reactor design codes and procedures was evaluated using the MCNP6 code and the VARIANT nodal transport calculation for an initial cycle of an advanced sodium-cooled burner core loaded with uranium-free fuels. It was concluded that the REBUS-3 nodal diffusion option cannot be used to accurately estimate the depletion calculations, and that the VARIANT nodal transport or VARIANT SP3 options are required for this kind of heterogeneous burner core loaded with uranium-free fuel. The control rod worths with the nodal diffusion and transport options were estimated with discrepancies of less than 12%, while these methods gave large discrepancies of 12.2% and 16.9%, respectively, for the sodium void worth at BOC. It is considered that these large discrepancies in sodium void worth result from the inaccurate treatment of the spectrum change in the multi-group cross sections.
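
    For reference, a k-effective discrepancy is usually converted to pcm with the standard reactivity-difference formula shown below; the eigenvalues used here are hypothetical illustrations, not the paper's actual results.

```python
# Reactivity difference between two k-effective values, expressed in pcm.
def reactivity_diff_pcm(k_ref, k_test):
    """Reactivity of k_test relative to k_ref, in pcm (1 pcm = 1e-5)."""
    return 1.0e5 * (1.0 / k_ref - 1.0 / k_test)

# Hypothetical illustrative eigenvalues:
print(round(reactivity_diff_pcm(1.00000, 1.02177)))   # ~2131 pcm
```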

  19. Depleted uranium: A DOE management guide

    International Nuclear Information System (INIS)

    1995-10-01

    The U.S. Department of Energy (DOE) has a management challenge and financial liability in the form of 50,000 cylinders containing 555,000 metric tons of depleted uranium hexafluoride (UF 6 ) that are stored at the gaseous diffusion plants. The annual storage and maintenance cost is approximately $10 million. This report summarizes several studies undertaken by the DOE Office of Technology Development (OTD) to evaluate options for long-term depleted uranium management. Based on studies conducted to date, the most likely use of the depleted uranium is for shielding of spent nuclear fuel (SNF) or vitrified high-level waste (HLW) containers. The alternative to finding a use for the depleted uranium is disposal as a radioactive waste. Estimated disposal costs, utilizing existing technologies, range between $3.8 and $11.3 billion, depending on factors such as applicability of the Resource Conservation and Recovery Act (RCRA) and the location of the disposal site. The cost of recycling the depleted uranium in a concrete based shielding in SNF/HLW containers, although substantial, is comparable to or less than the cost of disposal. Consequently, the case can be made that if DOE invests in developing depleted uranium shielded containers instead of disposal, a long-term solution to the UF 6 problem is attained at comparable or lower cost than disposal as a waste. Two concepts for depleted uranium storage casks were considered in these studies. The first is based on standard fabrication concepts previously developed for depleted uranium metal. The second converts the UF 6 to an oxide aggregate that is used in concrete to make dry storage casks

  20. Seasonal total methane depletion in limestone caves

    OpenAIRE

    Waring Chris L; Hankin Stuart I; Griffith David W T; Kertesz Michael A; Kobylski Victoria; Wilson Neil L; Coleman Nicholas V; Kettlewell Graham; Zlot Robert; Bosse Michael; Bell Graham

    2017-01-01

    Methane concentration in caves is commonly much lower than the external atmosphere, yet the cave CH4 depletion causal mechanism is contested and dynamic links to external diurnal and seasonal temperature cycles unknown. Here, we report a continuous 3-year record of cave methane and other trace gases in Jenolan Caves, Australia which shows a seasonal cycle of extreme CH4 depletion, from ambient ~1,775 ppb to near zero during summer and to ~800 ppb in winter. Methanotrophic bacteria, some newly...

  1. Depleted Bulk Heterojunction Colloidal Quantum Dot Photovoltaics

    KAUST Repository

    Barkhouse, D. Aaron R.

    2011-05-26

    The first solution-processed depleted bulk heterojunction colloidal quantum dot solar cells are presented. The architecture allows for high absorption with full depletion, thereby breaking the photon absorption/carrier extraction compromise inherent in planar devices. A record power conversion of 5.5% under simulated AM 1.5 illumination conditions is reported. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. 17 CFR 229.406 - (Item 406) Code of ethics.

    Science.gov (United States)

    2010-04-01

    ... 17 Commodity and Securities Exchanges 2 2010-04-01 2010-04-01 false (Item 406) Code of ethics. 229... 406) Code of ethics. (a) Disclose whether the registrant has adopted a code of ethics that applies to... code of ethics, explain why it has not done so. (b) For purposes of this Item 406, the term code of...

  3. The Construction and Performance of a Novel Intergroup Complementary Code

    Directory of Open Access Journals (Sweden)

    Huang Wenzhun

    2013-09-01

    On the basis of analyses of the intergroup complementary (IGC) code and the zero correlation zone complementary code, a novel IGC code is proposed to suit M-ary orthogonal code spread-spectrum systems or quasi-synchronous CDMA systems. The definition and construction methods of the new IGC codes are presented and an applied example is given in this paper. Theoretical research and simulation results show the main advantages of the novel IGC code to be the following: the novel IGC code provides more code sets than the IGC code for the same code length; the zero correlation zone length is longer than the intergroup IGC code, but shorter than the intergroup IGC code; and, for the same code length, the auto-correlation performance of the novel IGC code is better than that of the IGC code, while both have similar cross-correlation performance.

  4. Molecularly imprinted composite cryogel for albumin depletion from human serum.

    Science.gov (United States)

    Andaç, Müge; Baydemir, Gözde; Yavuz, Handan; Denizli, Adil

    2012-11-01

    A new composite protein-imprinted macroporous cryogel was prepared for depletion of albumin from human serum prior to use in proteomic applications. A poly(hydroxyethyl methacrylate)-based molecularly imprinted polymer (MIP) composite cryogel was prepared with high gel fraction yields of up to 83%, and its morphology and porosity were characterized by Fourier transform infrared spectroscopy, scanning electron microscopy, swelling studies, flow dynamics, and surface area measurements. Selective binding experiments were performed in the presence of the competitive proteins human transferrin (HTR) and myoglobin (MYB). The MIP composite cryogel exhibited a high binding capacity and selectivity for human serum albumin (HSA) in the presence of HTR and MYB. The competitive adsorption amount for HSA in the MIP composite cryogel was 722.1 mg/dL in the presence of the competitive proteins (HTR and MYB). The MIP composite cryogel column was successfully applied in a fast protein liquid chromatography system for selective depletion of albumin from human serum. The depletion ratio was greatly increased (to 85%) by embedding beads into the cryogel. Finally, the MIP composite cryogel can be reused many times with no apparent decrease in HSA adsorption capacity. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Light-water-reactor coupled neutronic and thermal-hydraulic codes

    International Nuclear Information System (INIS)

    Diamond, D.J.

    1982-01-01

    An overview is presented of computer codes that model light water reactor cores with coupled neutronics and thermal-hydraulics. This includes codes for transient analysis and codes for steady state analysis which include fuel depletion and fission product buildup. Applications in nuclear design, reactor operations and safety analysis are given and the major codes in use in the USA are identified. The neutronic and thermal-hydraulic methodologies and other code features are outlined for three steady state codes (PDQ7, NODE-P/B and SIMULATE) and four dynamic codes (BNL-TWIGL, MEKIN, RAMONA-3B, RETRAN-02). Speculation as to future trends with such codes is also presented

  6. Light-water-reactor coupled neutronic and thermal-hydraulic codes

    Energy Technology Data Exchange (ETDEWEB)

    Diamond, D.J.

    1982-01-01

    An overview is presented of computer codes that model light water reactor cores with coupled neutronics and thermal-hydraulics. This includes codes for transient analysis and codes for steady state analysis which include fuel depletion and fission product buildup. Applications in nuclear design, reactor operations and safety analysis are given and the major codes in use in the USA are identified. The neutronic and thermal-hydraulic methodologies and other code features are outlined for three steady state codes (PDQ7, NODE-P/B and SIMULATE) and four dynamic codes (BNL-TWIGL, MEKIN, RAMONA-3B, RETRAN-02). Speculation as to future trends with such codes is also presented.

  7. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups treated as thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed
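
    As a hedged orientation to the kind of eigenvalue problem such a solver treats, the sketch below solves a toy one-group, one-dimensional finite-difference diffusion slab by power iteration. It is deliberately far simpler than a nodal expansion method solver like NESTLE, and all material data are made up.

```python
# Toy 1-D, one-group diffusion eigenvalue problem via power iteration.
import numpy as np

D, sigma_a, nu_sigma_f, h, n = 1.0, 0.06, 0.07, 5.0, 20   # cm, 1/cm -- toy data

# Finite-difference loss operator with zero-flux boundaries.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + sigma_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi, k = np.ones(n), 1.0
for _ in range(200):                       # power (source) iteration
    source = nu_sigma_f * phi / k
    phi_new = np.linalg.solve(A, source)
    k *= phi_new.sum() / phi.sum()         # eigenvalue update
    phi = phi_new
print("k-effective of the toy slab:", round(k, 5))
```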

  8. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  9. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  10. SCALE Continuous-Energy Monte Carlo Depletion with Parallel KENO in TRITON

    International Nuclear Information System (INIS)

    Goluoglu, Sedat; Bekar, Kursat B.; Wiarda, Dorothea

    2012-01-01

    The TRITON sequence of the SCALE code system is a powerful and robust tool for performing multigroup (MG) reactor physics analysis using either the 2-D deterministic solver NEWT or the 3-D Monte Carlo transport code KENO. However, as with all MG codes, the accuracy of the results depends on the accuracy of the MG cross sections that are generated and/or used. While SCALE resonance self-shielding modules provide rigorous resonance self-shielding, they are based on 1-D models and therefore 2-D or 3-D effects such as heterogeneity of the lattice structures may render final MG cross sections inaccurate. Another potential drawback to MG Monte Carlo depletion is the need to perform resonance self-shielding calculations at each depletion step for each fuel segment that is being depleted. The CPU time and memory required for self-shielding calculations can often eclipse the resources needed for the Monte Carlo transport. This summary presents the results of the new continuous-energy (CE) calculation mode in TRITON. With the new capability, accurate reactor physics analyses can be performed for all types of systems using the SCALE Monte Carlo code KENO as the CE transport solver. In addition, transport calculations can be performed in parallel mode on multiple processors.

  11. Depletion sensitivity predicts unhealthy snack purchases.

    Science.gov (United States)

    Salmon, Stefanie J; Adriaanse, Marieke A; Fennis, Bob M; De Vet, Emely; De Ridder, Denise T D

    2016-01-01

    The aim of the present research is to examine the relation between depletion sensitivity - a novel construct referring to the speed or ease by which one's self-control resources are drained - and snack purchase behavior. In addition, interactions between depletion sensitivity and the goal to lose weight on snack purchase behavior were explored. Participants included in the study were instructed to report every snack they bought over the course of one week. The dependent variables were the number of healthy and unhealthy snacks purchased. The results of the present study demonstrate that depletion sensitivity predicts the amount of unhealthy (but not healthy) snacks bought. The more sensitive people are to depletion, the more unhealthy snacks they buy. Moreover, there was some tentative evidence that this relation is more pronounced for people with a weak as opposed to a strong goal to lose weight, suggesting that a strong goal to lose weight may function as a motivational buffer against self-control failures. All in all, these findings provide evidence for the external validity of depletion sensitivity and the relevance of this construct in the domain of eating behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Taurine depletion alters vascular reactivity in rats.

    Science.gov (United States)

    Abebe, Worku; Mozaffari, Mahmood S

    2003-09-01

    We recently showed that chronic taurine supplementation is associated with attenuation of contractile responses of rat aorta to norepinephrine and potassium chloride. However, the potential involvement of endogenous taurine in modulation of vascular reactivity is not known. Therefore, we examined the effect of beta-alanine-induced taurine depletion on the in vitro reactivity of rat aorta to selected vasoactive agents. The data indicate that both norepinephrine- and potassium-chloride-induced maximum contractile responses of endothelium-denuded aortae were enhanced in taurine-depleted rats compared with control animals. However, taurine depletion did not affect tissue sensitivity to either norepinephrine or potassium chloride. By contrast, sensitivity of the endothelium-denuded aortae to sodium nitroprusside was attenuated by taurine depletion. Similarly, taurine deficiency reduced the relaxant responses of endothelium-intact aortic rings elicited by submaximal concentrations of acetylcholine, and this effect was associated with decreased nitric oxide production. Taken together, the data suggest that taurine depletion augments contractility but attenuates relaxation of vascular smooth muscle in a nonspecific manner. Impairment of endothelium-dependent responses, which is at least in part associated with reduced nitric oxide generation, may contribute to the attenuation of the vasorelaxant responses. These vascular alterations could be of potential consequence in pathological conditions associated with taurine deficiency.

  13. Groundwater depletion embedded in international food trade

    Science.gov (United States)

    Dalin, Carole; Wada, Yoshihide; Kastner, Thomas; Puma, Michael J.

    2017-03-01

    Recent hydrological modelling and Earth observations have located and quantified alarming rates of groundwater depletion worldwide. This depletion is primarily due to water withdrawals for irrigation, but its connection with the main driver of irrigation, global food consumption, has not yet been explored. Here we show that approximately eleven per cent of non-renewable groundwater use for irrigation is embedded in international food trade, of which two-thirds are exported by Pakistan, the USA and India alone. Our quantification of groundwater depletion embedded in the world’s food trade is based on a combination of global, crop-specific estimates of non-renewable groundwater abstraction and international food trade data. A vast majority of the world’s population lives in countries sourcing nearly all their staple crop imports from partners who deplete groundwater to produce these crops, highlighting risks for global food and water security. Some countries, such as the USA, Mexico, Iran and China, are particularly exposed to these risks because they both produce and import food irrigated from rapidly depleting aquifers. Our results could help to improve the sustainability of global food production and groundwater resource management by identifying priority regions and agricultural products at risk as well as the end consumers of these products.

  14. Efficient convolutional sparse coding

    Energy Technology Data Exchange (ETDEWEB)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(MN log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
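
    As an illustration of the frequency-domain ADMM solve described above, the minimal Python sketch below handles the single-filter case (M = 1), where the linear system is diagonal in the Fourier domain and each iteration costs O(N log N). The toy data and parameter values are assumptions for illustration only, not the patented algorithm itself.

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def csc_admm_single_filter(s, d, lam=0.1, rho=1.0, iters=100):
          # ADMM for min_x 0.5*||d * x - s||^2 + lam*||x||_1 (circular convolution)
          n = s.size
          D = np.fft.rfft(d, n)              # filter spectrum
          S = np.fft.rfft(s, n)              # signal spectrum
          z = np.zeros(n)
          u = np.zeros(n)
          denom = np.abs(D) ** 2 + rho       # diagonal system in the Fourier domain
          for _ in range(iters):
              rhs = np.conj(D) * S + rho * np.fft.rfft(z - u, n)
              x = np.fft.irfft(rhs / denom, n)     # FFT-domain linear solve, O(N log N)
              z = soft_threshold(x + u, lam / rho)
              u = u + x - z
          return z

      # toy usage: recover a sparse spike train convolved with a smooth filter
      rng = np.random.default_rng(0)
      d = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
      d /= np.linalg.norm(d)
      x_true = np.zeros(256)
      x_true[rng.choice(256, 5, replace=False)] = rng.normal(size=5)
      s = np.fft.irfft(np.fft.rfft(d, 256) * np.fft.rfft(x_true, 256), 256)
      x_hat = csc_admm_single_filter(s, d, lam=0.05)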

  15. Fuel depletion calculation in MTR-LEU NUR reactor

    Directory of Open Access Journals (Sweden)

    Zeggar Foudil

    2008-01-01

    Full Text Available In this article, we present the results of few-energy-group calculations for the NUR reactor fuel depletion analysis up to 45,000 MWd/tU, taken as the maximum fuel burnup. The WIMSD-4 cell code has been employed as the calculation tool. In this study, we are interested in actinides such as the uranium and plutonium isotopes, as well as the fission products Xe-135, Sm-149, Sm-151, Eu-155, and Gd-157. Calculation results with five energy groups are in good agreement with those obtained with only two energy groups, which can therefore be used in all subsequent calculations. The calculation results presented in this article can be used as a microscopic database for estimating the amount of radioactive sources randomly dispersed in the environment. They can also be used to monitor the fuel assembly inventory at the core level.
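
    The record above concerns isotope build-up and depletion with burnup. A minimal Python sketch of one such depletion chain, the I-135/Xe-135 fission-product pair mentioned in the abstract, is given below; the one-group data and flux are illustrative assumptions, not WIMSD-4 output.

      import numpy as np

      # Illustrative one-group data (assumed values, not WIMSD-4 results):
      GAMMA_I, GAMMA_XE = 0.0639, 0.00237       # fission yields of I-135 and Xe-135
      LAM_I = np.log(2) / (6.57 * 3600)         # I-135 decay constant, 1/s
      LAM_XE = np.log(2) / (9.14 * 3600)        # Xe-135 decay constant, 1/s
      SIG_XE = 2.65e6 * 1e-24                   # Xe-135 absorption cross-section, cm^2
      SIGMA_F = 0.05                            # macroscopic fission cross-section, 1/cm
      PHI = 3.0e13                              # thermal flux, n/cm^2/s

      def xenon_chain(t_end_h=72.0, dt=10.0):
          # Integrate the I-135 -> Xe-135 chain at constant flux (forward Euler).
          n_i = n_xe = 0.0
          for _ in range(int(t_end_h * 3600 / dt)):
              prod = SIGMA_F * PHI
              dn_i = GAMMA_I * prod - LAM_I * n_i
              dn_xe = GAMMA_XE * prod + LAM_I * n_i - (LAM_XE + SIG_XE * PHI) * n_xe
              n_i += dn_i * dt
              n_xe += dn_xe * dt
          return n_i, n_xe

      print(xenon_chain())   # approaches equilibrium number densities (atoms/cm^3)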

  16. Edge equilibrium code for tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xujing [Institute of Computational Mathematics and Scientific/Engineering Computing, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, P.O. Box 2719, Beijing 100190 (China); Zakharov, Leonid E. [Princeton Plasma Physics Laboratory Princeton, MS-27 P.O. Box 451, New Jersey (United States); Drozdov, Vladimir V. [Euratom/CCFE Fusion Association, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  17. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. The automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the FoxBASE language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist Hospital, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one that were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by the same program, and incorporation of this program into other data-processing programs was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology

  18. Implementation of LT codes based on chaos

    International Nuclear Information System (INIS)

    Zhou Qian; Li Liang; Chen Zengqiang; Zhao Jiaxiang

    2008-01-01

    Fountain codes provide an efficient way to transfer information over erasure channels like the Internet. LT codes are the first codes fully realizing the digital fountain concept. They are asymptotically optimal rateless erasure codes with highly efficient encoding and decoding algorithms. In theory, for each encoding symbol of LT codes, its degree is randomly chosen according to a predetermined degree distribution, and the neighbours used to generate that encoding symbol are chosen uniformly at random. Practical implementations of LT codes usually realize the randomness through a pseudo-random number generator such as the linear congruential method. This paper applies the pseudo-randomness of chaotic sequences in the implementation of LT codes. Two Kent chaotic maps are used to determine the degree and neighbour(s) of each encoding symbol. It is shown that the LT codes implemented based on chaos perform better than LT codes implemented with a traditional pseudo-random number generator. (general)
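
    The scheme described above replaces the conventional pseudo-random number generator with chaotic sequences. The following minimal Python sketch mirrors that idea: two Kent chaotic maps drive the degree and neighbour choices of an LT encoder. The degree distribution, initial map states and map parameter are placeholder assumptions, not the values used in the paper.

      import random  # only used to make example source data

      def kent_map(x, m=0.7):
          # One iteration of the Kent (skew tent) chaotic map on (0, 1).
          return x / m if x < m else (1.0 - x) / (1.0 - m)

      def lt_encode(source, n_encoded, degree_dist, x_deg=0.123456, x_nbr=0.654321):
          # Generate LT encoding symbols, drawing degree and neighbours from two
          # Kent-map sequences instead of a conventional pseudo-random generator.
          k = len(source)
          cdf, acc = [], 0.0
          for p in degree_dist:               # cumulative degree distribution
              acc += p
              cdf.append(acc)
          encoded = []
          for _ in range(n_encoded):
              x_deg = kent_map(x_deg)
              degree = next((d + 1 for d, c in enumerate(cdf) if x_deg <= c), len(cdf))
              neighbours = set()
              while len(neighbours) < degree:
                  x_nbr = kent_map(x_nbr)
                  neighbours.add(int(x_nbr * k) % k)
              symbol = 0
              for i in neighbours:
                  symbol ^= source[i]          # XOR of the chosen source symbols
              encoded.append((sorted(neighbours), symbol))
          return encoded

      # toy usage with k = 16 one-byte source symbols and an ideal soliton distribution
      src = [random.randrange(256) for _ in range(16)]
      dist = [1 / 16] + [1 / (d * (d - 1)) for d in range(2, 17)]
      print(lt_encode(src, 24, dist)[:3])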

  19. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field. Highlights include: two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding; Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes; distance properties of convolutional codes; and a downloadable solutions manual.

  20. Ozone depletion and chlorine loading potentials

    Science.gov (United States)

    Pyle, John A.; Wuebbles, Donald J.; Solomon, Susan; Zvenigorodsky, Sergei; Connell, Peter; Ko, Malcolm K. W.; Fisher, Donald A.; Stordal, Frode; Weisenstein, Debra

    1991-01-01

    The recognition of the roles of chlorine and bromine compounds in ozone depletion has led to the regulation of their source gases. Some source gases are expected to be more damaging to the ozone layer than others, so that scientific guidance regarding their relative impacts is needed for regulatory purposes. Parameters used for this purpose include the steady-state and time-dependent chlorine loading potential (CLP) and the ozone depletion potential (ODP). Chlorine loading potentials depend upon the estimated value and accuracy of atmospheric lifetimes and are subject to significant (approximately 20-50 percent) uncertainties for many gases. Ozone depletion potentials depend on the same factors, as well as the evaluation of the release of reactive chlorine and bromine from each source gas and corresponding ozone destruction within the stratosphere.

  1. Self-regulation, ego depletion, and inhibition.

    Science.gov (United States)

    Baumeister, Roy F

    2014-12-01

    Inhibition is a major form of self-regulation. As such, it depends on self-awareness and comparing oneself to standards and is also susceptible to fluctuations in willpower resources. Ego depletion is the state of reduced willpower caused by prior exertion of self-control. Ego depletion undermines inhibition both because restraints are weaker and because urges are felt more intensely than usual. Conscious inhibition of desires is a pervasive feature of everyday life and may be a requirement of life in civilized, cultural society, and in that sense it goes to the evolved core of human nature. Intentional inhibition not only restrains antisocial impulses but can also facilitate optimal performance, such as during test taking. Self-regulation and ego depletion may also affect less intentional forms of inhibition, even chronic tendencies to inhibit. Broadly stated, inhibition is necessary for human social life and nearly all societies encourage and enforce it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Burnup calculation code system COMRAD96

    Energy Technology Data Exchange (ETDEWEB)

    Suyama, Kenya [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering a change of the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  3. Burnup calculation code system COMRAD96

    International Nuclear Information System (INIS)

    Suyama, Kenya; Masukawa, Fumihiro; Ido, Masaru; Enomoto, Masaki; Takyu, Shuiti; Hara, Toshiharu.

    1997-06-01

    COMRAD was one of the burnup code systems developed by JAERI. COMRAD96 is a version of COMRAD transferred to engineering workstations. It is divided into several functional modules: 'Cross Section Treatment', 'Generation and Depletion Calculation', and 'Post Process'. It enables us to analyze a burnup problem considering a change of the neutron spectrum using UNITBURN. It can also display the γ spectrum on a terminal. This report is the general description and user's manual of COMRAD96. (author)

  4. Applying no-depletion equilibrium sampling and full-depletion bioaccessibility extraction to 35 historically polycyclic aromatic hydrocarbon contaminated soils

    DEFF Research Database (Denmark)

    Bartolomé, Nora; Hilber, Isabel; Sosa, Dayana

    2018-01-01

    Assessing the bioaccessibility of organic pollutants in contaminated soils is considered a complement to measurements of total concentrations in risk assessment and legislation. Consequently, methods for its quantification require validation with historically contaminated soils. In this study, 35 such soils were obtained from various locations in Switzerland and Cuba. They were exposed to different pollution sources (e.g., pyrogenic and petrogenic) at various distances (i.e., urban to rural) and were subject to different land uses (e.g., urban gardening and forest). Passive equilibrium sampling... coefficients of the soils were highest for skeet soils, followed by traffic, urban garden and rural soils. Lowest values were obtained from soil exposed to petrogenic sources. Applicability of SBE to quantify Cbioacc was restricted by silicone rod sorption capacity, as expressed quantitatively by the Sorption...

  5. A calculational procedure for neutronic and depletion analysis of Molten-Salt reactors based on SCALE6/TRITON

    International Nuclear Information System (INIS)

    Sheu, R.J.; Chang, J.S.; Liu, Y.-W. H.

    2011-01-01

    Molten-Salt Reactors (MSRs) represent one of the selected categories in the GEN-IV program. This type of reactor is distinguished by the use of liquid fuel circulating in and out of the core, which makes online refueling and salt processing possible. However, this operating characteristic also complicates the modeling and simulation of reactor core behaviour using conventional neutronic codes. The TRITON sequence in the SCALE6 code system has been designed to provide the combined capabilities of problem-dependent cross-section processing, rigorous treatment of neutron transport, and coupled ORIGEN-S depletion calculations. In order to accommodate the simulation of the dynamic refueling and processing scheme, an in-house program, REFRESH, together with a run script, has been developed to carry out a series of stepwise TRITON calculations, which makes the work of analyzing the neutronic properties and performance of an MSR core design easier. As a demonstration and cross-check, we have applied this method to reexamine the conceptual design of the Molten Salt Actinide Recycler & Transmuter (MOSART). This paper summarizes the development of the method and preliminary results of its application to MOSART. (author)
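
    REFRESH and its run script are in-house tools, so their interface is not reproduced here; the Python sketch below only illustrates the general shape of such a stepwise driver loop, and every file name, command and composition value in it is a hypothetical placeholder.

      import shutil
      import subprocess
      from pathlib import Path

      # All paths, commands and keywords below are placeholders for illustration;
      # the actual REFRESH program and TRITON input format are not reproduced here.
      N_STEPS = 10                                 # number of chained depletion steps
      TEMPLATE = Path("mosart_step.inp.template")  # hypothetical input template
      RUN_TRITON = ["run_scale", "{input}"]        # hypothetical SCALE/TRITON launcher

      def write_step_input(step, compositions):
          # Render a TRITON input for this step from the template and the current
          # salt composition (updated for feed/removal between steps).
          text = TEMPLATE.read_text().format(step=step, compositions=compositions)
          path = Path(f"mosart_step_{step:03d}.inp")
          path.write_text(text)
          return path

      def update_compositions(step, compositions):
          # Apply the online refuelling/processing scheme between depletion steps
          # (placeholder: real logic would edit nuclide number densities here).
          return compositions

      compositions = {"fissile": 1.0e-3}           # illustrative only
      for step in range(N_STEPS):
          inp = write_step_input(step, compositions)
          cmd = [arg.format(input=inp) for arg in RUN_TRITON]
          subprocess.run(cmd, check=True)          # one TRITON/ORIGEN-S depletion step
          shutil.copy(f"{inp.stem}.out", f"archive_{step:03d}.out")
          compositions = update_compositions(step, compositions)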

  6. Depleted uranium hexafluoride: Waste or resource?

    International Nuclear Information System (INIS)

    Schwertz, N.; Zoller, J.; Rosen, R.; Patton, S.; Bradley, C.; Murray, A.

    1995-07-01

    The US Department of Energy is evaluating technologies for the storage, disposal, or re-use of depleted uranium hexafluoride (UF6). This paper discusses the following options, and provides a technology assessment for each one: (1) conversion to UO2 for use as mixed oxide fuel, (2) conversion to UO2 to make DUCRETE for a multi-purpose storage container, (3) conversion to depleted uranium metal for use as shielding, (4) conversion to uranium carbide for use as high-temperature gas-cooled reactor (HTGR) fuel. In addition, conversion to U3O8 as an option for long-term storage is discussed

  7. Elasticity and clustering in concentrated depletion gels.

    Science.gov (United States)

    Ramakrishnan, S; Chen, Y-L; Schweizer, K S; Zukoski, C F

    2004-10-01

    X-ray scattering and rheology are employed to study the volume fraction dependence of the collective structure and elastic moduli of concentrated nanoparticle-polymer depletion gels. The nonequilibrium gel structure consists of locally densified nonfractal clusters and narrow random interfaces. The elastic moduli display a power law dependence on volume fraction with effective exponents that decrease with increasing depletion attraction strength. A microscopic theory that combines local structural information with a dynamic treatment of gelation is in good agreement with the observations.

  8. Physical Layer Network Coding

    OpenAIRE

    Shengli, Zhang; Liew, Soung-Chang; Lam, Patrick P. K.

    2007-01-01

    A main distinguishing feature of a wireless network compared with a wired network is its broadcast nature, in which the signal transmitted by a node may reach several other nodes, and a node may receive signals from several other nodes simultaneously. Rather than a blessing, this feature is treated more as an interference-inducing nuisance in most wireless networks today (e.g., IEEE 802.11). This paper shows that the concept of network coding can be applied at the physical layer to turn the b...

  9. Serial reversal learning and acute tryptophan depletion

    NARCIS (Netherlands)

    van der Plasse, G.; Feenstra, M.G.P.

    2008-01-01

    Cognitive flexibility (i.e. the ability to adapt goal-directed behaviour in response to changed environmental demands) has repeatedly been shown to depend on the prefrontal cortex (PFC). Recent data from primate studies moreover show that depletion of prefrontal 5-HT impairs reversal learning of

  10. Global Warming: Lessons from Ozone Depletion

    Science.gov (United States)

    Hobson, Art

    2010-01-01

    My teaching and textbook have always covered many physics-related social issues, including stratospheric ozone depletion and global warming. The ozone saga is an inspiring good-news story that's instructive for solving the similar but bigger problem of global warming. Thus, as soon as students in my physics literacy course at the University of…

  11. Seasonal total methane depletion in limestone caves.

    Science.gov (United States)

    Waring, Chris L; Hankin, Stuart I; Griffith, David W T; Kertesz, Michael A; Kobylski, Victoria; Wilson, Neil L; Coleman, Nicholas V; Kettlewell, Graham; Zlot, Robert; Bosse, Michael; Bell, Graham

    2017-08-16

    Methane concentration in caves is commonly much lower than in the external atmosphere, yet the causal mechanism of cave CH4 depletion is contested and dynamic links to external diurnal and seasonal temperature cycles are unknown. Here, we report a continuous 3-year record of cave methane and other trace gases in Jenolan Caves, Australia, which shows a seasonal cycle of extreme CH4 depletion, from ambient ~1,775 ppb to near zero during summer and to ~800 ppb in winter. Methanotrophic bacteria, some newly discovered, rapidly consume methane on cave surfaces and in external karst soils, with lifetimes in the cave of a few hours. Extreme bacterial selection due to the absence of alternate carbon sources for growth in the cave environment has resulted in an extremely high proportion, 2-12%, of methanotrophs in the total bacteria present. The unexpected seasonal bias in our cave CH4 depletion record is explained by a three-step process involving methanotrophy in aerobic karst soil above the cave, summer transport of soil gas into the cave through epikarst, followed by further cave CH4 depletion. Disentangling cause and effect of cave gas variations by tracing sources and sinks has identified seasonal speleothem growth bias, with implied palaeo-climate record bias.

  12. Ozone depleting substances management inventory system

    Directory of Open Access Journals (Sweden)

    Felix Ivan Romero Rodríguez

    2018-02-01

    Full Text Available Context: The care of the ozone layer is an activity that contributes to the planet's environmental stability. For this reason, the Montreal Protocol was created to control the emission of substances that deplete the ozone layer and to reduce their production from an organizational point of view. However, it is also necessary to have control of those that are already circulating and of those present in equipment that cannot yet be replaced because of the context of the companies that keep it. Generally, the control mechanisms for classifying the types of substances, the equipment and the companies that own them are kept in physical files, spreadsheets and text documents, which makes it difficult to control and manage the data stored in them. Method: The objective of this research is to computerize the process of controlling substances that deplete the ozone layer. An evaluation and description of the whole process of managing Ozone-Depleting Substances (ODS) and their alternatives is done. For the computerization, the agile development methodology SCRUM is used, and for the technological solution, free and open-source tools and technologies are used. Result: As a result of the research, a computer tool was developed that automates the process of control and management of substances that deplete the ozone layer and their alternatives. Conclusions: The developed computer tool allows the control and management of ozone-depleting substances and the equipment that uses them. It also manages the substances that arise as alternatives to be used for the protection of the ozone layer.

  13. Nitrogen depletion in field red giants

    DEFF Research Database (Denmark)

    Masseron, T.; Lagarde, N.; Miglio, A.

    2017-01-01

    , the behaviour of nitrogen data along the evolution confirms the existence of non-canonical extramixing on the red giant branch (RGB) for all low-mass stars in the field. But more surprisingly, the data indicate that nitrogen has been depleted between the RGB tip and the red clump. This may suggest that some...

  14. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite the breakthroughs in the areas of mini-app development, portable performance, and cache-oblivious algorithms, the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  15. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE's graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  16. [Acute tryptophan depletion in eating disorders].

    Science.gov (United States)

    Díaz-Marsa, M; Lozano, C; Herranz, A S; Asensio-Vegas, M J; Martín, O; Revert, L; Saiz-Ruiz, J; Carrasco, J L

    2006-01-01

    This work describes the rational bases justifying the use of acute tryptophan depletion technique in eating disorders (ED) and the methods and design used in our studies. Tryptophan depletion technique has been described and used in previous studies safely and makes it possible to evaluate the brain serotonin activity. Therefore it is used in the investigation of hypotheses on serotonergic deficiency in eating disorders. Furthermore, and given the relationship of the dysfunctions of serotonin activity with impulsive symptoms, the technique may be useful in biological differentiation of different subtypes, that is restrictive and bulimic, of ED. 57 female patients with DSM-IV eating disorders and 20 female controls were investigated with the tryptophan depletion test. A tryptophan-free amino acid solution was administered orally after a two-day low tryptophan diet to patients and controls. Free plasma tryptophan was measured at two and five hours following administration of the drink. Eating and emotional responses were measured with specific scales for five hours following the depletion. A study of the basic characteristics of the personality and impulsivity traits was also done. Relationship of the response to the test with the different clinical subtypes and with the temperamental and impulsive characteristics of the patients was studied. The test was effective in considerably reducing plasma tryptophan in five hours from baseline levels (76%) in the global sample. The test was well tolerated and no severe adverse effects were reported. Two patients withdrew from the test due to gastric intolerance. The tryptophan depletion test could be of value to study involvement of serotonin deficits in the symptomatology and pathophysiology of eating disorders.

  17. Model Children's Code.

    Science.gov (United States)

    New Mexico Univ., Albuquerque. American Indian Law Center.

    The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…

  18. Affine Grassmann codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Beelen, Peter; Ghorpade, Sudhir Ramakant

    2010-01-01

    We consider a new class of linear codes, called affine Grassmann codes. These can be viewed as a variant of generalized Reed-Muller codes and are closely related to Grassmann codes.We determine the length, dimension, and the minimum distance of any affine Grassmann code. Moreover, we show that af...

  19. Health and environmental impact of depleted uranium

    International Nuclear Information System (INIS)

    Furitsu, Katsumi

    2010-01-01

    Depleted Uranium (DU) is 'nuclear waste' produced from the enrichment process and is mostly made up of U-238 and is depleted in the fissionable isotope U-235 compared to natural uranium (NU). Depleted uranium has about 60% of the radioactivity of natural uranium. Depleted uranium and natural uranium are identical in terms of chemical toxicity. Uranium's high density gives depleted uranium shells increased range and penetrative power. This density, combined with uranium's pyrophoric nature, results in a high-energy kinetic weapon that can punch and burn through armour plating. Striking a hard target, depleted uranium munitions create extremely high temperatures. The uranium immediately burns and vaporizes into an aerosol, which is easily diffused in the environment. People can inhale the micro-particles of uranium oxide in an aerosol and absorb them mainly through the lungs. Depleted uranium has both aspects of radiological toxicity and chemical toxicity. The possible synergistic effect of both kinds of toxicities is also pointed out. Animal and cellular studies have reported the carcinogenic, neurotoxic, immuno-toxic and some other effects of depleted uranium, including damage to the reproductive system and foetus. In addition, the health effects of micro/nano-particles, similar in size to depleted uranium aerosols produced by uranium weapons, have been reported. Aerosolized DU dust can easily spread over the battlefield, spreading over civilian areas, sometimes even crossing international borders. Therefore, not only military personnel but also civilians can be exposed. The contamination continues after the cessation of hostilities. Taking these aspects into account, the DU weapon is illegal under international humanitarian laws and is considered as one of the inhumane weapons of 'indiscriminate destruction'. The international society is now discussing the prohibition of DU weapons based on the 'precautionary principle'. The 1991 Gulf War is reportedly the first

  20. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background: Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they are coding for a protein, they generally escape detection by comparative genomics approaches. Results: We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing enhancer is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion: Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like that proposed here are likely to become increasingly powerful at detecting such elements.

  1. Mantle depletion and metasomatism recorded in orthopyroxene in highly depleted peridotites

    DEFF Research Database (Denmark)

    Scott, James; Liu, Jingao; Pearson, D. Graham

    2016-01-01

    Although trace element concentrations in clinopyroxene serve as a useful tool for assessing the depletion and enrichment history of mantle peridotites, this is not applicable for peridotites in which the clinopyroxene component has been consumed (~ 25% partial melting). Orthopyroxene persists...

  2. A pin by pin microscopic depletion scheme using an homogeneous core calculation with pin-power reconstruction

    International Nuclear Information System (INIS)

    Hoareau, F.; Fliscounakis, M.; Couyras, D.; Guillo, M.; Pora, Y.

    2009-01-01

    EDF R&D is currently developing a new calculation scheme based on the transport-simplified Pn (SPn) approach. The lattice code used is the CEA deterministic code APOLLO2, while the core code COCAGNE is currently under development at EDF R&D. This paper presents a new calculation scheme aimed at computing the pin-by-pin isotopic concentrations of fuel assemblies. This new scheme is based on homogeneous calculations and the use of the pin-power reconstruction method already implemented in COCAGNE. Indeed, this dehomogenization technique can supply the core code with an approximate pin-by-pin flux, which is a necessary input for the microscopic depletion solver of COCAGNE. In order to validate this new microscopic depletion scheme, two kinds of tests were performed. Firstly, direct comparisons with APOLLO2 results were made on the depletion calculation of a fuel assembly. It is shown that the isotopic concentrations and the multiplication factor (keff) obtained by COCAGNE are consistent with APOLLO2 results. Secondly, the new scheme was compared with the existing calculation procedure of COCAGNE, that is, heterogeneous calculations coupled with the microscopic depletion solver. The second test indicates that the new procedure also gives fairly accurate results. (author)

  3. Context adaptive coding of bi-level images

    DEFF Research Database (Denmark)

    Forchhammer, Søren

    2008-01-01

    for the modern paradigm of data compression based on a modelling and a coding stage. One advantage of contexts is their flexibility, e.g. choosing a two-dimensional (2-D) context facilitates efficient image coding. The area of image coding has greatly been influenced by context adaptive coding, applied e...
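
    As a minimal illustration of the 2-D context idea mentioned in this record, the Python sketch below models each pixel of a binary image with a four-pixel causal context and adaptive per-context probabilities, reporting the ideal code length; the context shape and estimator are generic textbook choices, not those of the methods surveyed above.

      import numpy as np

      def context_adaptive_codelength(img):
          # Ideal code length (bits) for a binary image using a 4-pixel causal 2-D
          # context (W, NW, N, NE) and per-context adaptive (Laplace) probabilities.
          # The arithmetic coder itself is omitted; only the model is illustrated.
          h, w = img.shape
          counts = np.ones((16, 2))           # Laplace-smoothed counts per context
          bits = 0.0
          for y in range(h):
              for x in range(w):
                  west = img[y, x - 1] if x > 0 else 0
                  nw = img[y - 1, x - 1] if x > 0 and y > 0 else 0
                  north = img[y - 1, x] if y > 0 else 0
                  ne = img[y - 1, x + 1] if y > 0 and x + 1 < w else 0
                  ctx = west | (nw << 1) | (north << 2) | (ne << 3)
                  pixel = img[y, x]
                  p = counts[ctx, pixel] / counts[ctx].sum()
                  bits += -np.log2(p)
                  counts[ctx, pixel] += 1     # adapt the model as pixels are coded
          return bits

      # toy usage: structured images compress far below 1 bit/pixel, noise does not
      rng = np.random.default_rng(3)
      stripes = np.tile(np.array([0, 0, 1, 1], dtype=int), (64, 16))
      noise = rng.integers(0, 2, size=(64, 64))
      print(context_adaptive_codelength(stripes) / stripes.size,
            context_adaptive_codelength(noise) / noise.size)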

  4. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.

  5. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes, by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback to LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.

  6. Multiplexed coding in the human basal ganglia

    Science.gov (United States)

    Andres, D. S.; Cerquetti, D.; Merello, M.

    2016-04-01

    A classic controversy in neuroscience is whether information carried by spike trains is encoded by a time-averaged measure (e.g. a rate code), or by complex time patterns (i.e. a time code). Here we apply a tool to quantitatively analyze the neural code. We make use of an algorithm based on the calculation of the temporal structure function, which permits us to distinguish which scales of a signal are dominated by a complex temporal organization and which by a randomly generated process. In terms of the neural code, this kind of analysis makes it possible to detect temporal scales at which a time-pattern coding scheme or alternatively a rate code is present. Additionally, by finding the temporal scale at which the correlation between interspike intervals fades, the length of the basic information unit of the code, and hence the word length of the code, can be established. We apply this algorithm to neuronal recordings obtained from the Globus Pallidus pars interna from a human patient with Parkinson’s disease, and show that a time-pattern coding and a rate coding scheme co-exist at different temporal scales, offering a new example of multiplexed neuronal coding.
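
    A minimal Python sketch of a temporal structure function applied to an inter-spike-interval series is given below as a generic illustration of the kind of analysis described; the order q, the lag range and the toy spike trains are assumptions, not the authors' estimator or data.

      import numpy as np

      def structure_function(isi, max_lag=100, q=2.0):
          # q-th order temporal structure function of an inter-spike-interval series:
          # S_q(k) = < |ISI(i+k) - ISI(i)|^q >, evaluated for lags k = 1..max_lag.
          isi = np.asarray(isi, dtype=float)
          lags = np.arange(1, max_lag + 1)
          s = np.array([np.mean(np.abs(isi[k:] - isi[:-k]) ** q) for k in lags])
          return lags, s

      # toy usage: a Poisson-like train (rate code) versus a patterned train
      rng = np.random.default_rng(1)
      poisson_isi = rng.exponential(0.05, size=5000)                # no temporal structure
      patterned_isi = 0.05 + 0.02 * np.sin(np.arange(5000) / 5.0)   # slow ISI modulation
      for name, isi in [("poisson", poisson_isi), ("patterned", patterned_isi)]:
          lags, s = structure_function(isi, max_lag=50)
          print(name, s[:3], s[-1])   # flat S(k) -> random; lag-dependent S(k) -> structure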

  7. Depleted uranium hexafluoride: Waste or resource?

    Energy Technology Data Exchange (ETDEWEB)

    Schwertz, N.; Zoller, J.; Rosen, R.; Patton, S. [Lawrence Livermore National Lab., CA (United States); Bradley, C. [USDOE Office of Nuclear Energy, Science, Technology, Washington, DC (United States); Murray, A. [SAIC (United States)

    1995-07-01

    The US Department of Energy is evaluating technologies for the storage, disposal, or re-use of depleted uranium hexafluoride (UF{sub 6}). This paper discusses the following options, and provides a technology assessment for each one: (1) conversion to UO{sub 2} for use as mixed oxide fuel, (2) conversion to UO{sub 2} to make DUCRETE for a multi-purpose storage container, (3) conversion to depleted uranium metal for use as shielding, (4) conversion to uranium carbide for use as high-temperature gas-cooled reactor (HTGR) fuel. In addition, conversion to U{sub 3}O{sub 8} as an option for long-term storage is discussed.

  8. The depletion of the stratospheric ozone layer

    International Nuclear Information System (INIS)

    Sabogal Nelson

    2000-01-01

    The protection of the Earth's ozone layer is of the highest importance to mankind. The dangers of its destruction are by now well known. The depletion of that layer has reached record levels. The Antarctic ozone hole covered this year a record area. The ozone layer is predicted to begin recovery in the next one or two decades and should be restored to pre-1980 levels by 2050. This is the achievement of the regime established by the 1985 Vienna Convention for the Protection of the Ozone Layer and the 1987 Montreal Protocol on Substances that Deplete the Ozone Layer. The regime established by these two agreements has been revised, and made more effective in London (1990), Copenhagen (1992), Vienna (1995), and Beijing (1999)

  9. Depleted uranium plasma reduction system study

    International Nuclear Information System (INIS)

    Rekemeyer, P.; Feizollahi, F.; Quapp, W.J.; Brown, B.W.

    1994-12-01

    A system life-cycle cost study was conducted of a preliminary design concept for a plasma reduction process for converting depleted uranium to uranium metal and anhydrous HF. The plasma-based process is expected to offer significant economic and environmental advantages over present technology. Depleted uranium is currently stored in the form of solid UF6, of which approximately 575,000 metric tons is stored at three locations in the U.S. The proposed system is preconceptual in nature, but includes all necessary processing equipment and facilities to perform the process. The study has identified a total processing cost of approximately $3.00/kg of UF6 processed. Based on the results of this study, the development of a laboratory-scale system (1 kg/h throughput of UF6) is warranted. Further scaling of the process to pilot scale will be determined after laboratory testing is complete

  10. Improvements in EBR-2 core depletion calculations

    International Nuclear Information System (INIS)

    Finck, P.J.; Hill, R.N.; Sakamoto, S.

    1991-01-01

    The need for accurate core depletion calculations in Experimental Breeder Reactor No. 2 (EBR-2) is discussed. Because of the unique physics characteristics of EBR-2, it is difficult to obtain accurate and computationally efficient multigroup flux predictions. This paper describes the effect of various conventional and higher order schemes for group constant generation and for flux computations; results indicate that higher-order methods are required, particularly in the outer regions (i.e. the radial blanket). A methodology based on Nodal Equivalence Theory (N.E.T.) is developed which allows retention of the accuracy of a higher order solution with the computational efficiency of a few group nodal diffusion solution. The application of this methodology to three-dimensional EBR-2 flux predictions is demonstrated; this improved methodology allows accurate core depletion calculations at reasonable cost. 13 refs., 4 figs., 3 tabs

  11. Depletion zones and crystallography on pinched spheres

    Science.gov (United States)

    Chen, Jingyuan; Xing, Xiangjun; Yao, Zhenwei

    2018-03-01

    Understanding the interplay between ordered structures and substrate curvature is an interesting problem with versatile applications, including functionalization of charged supramolecular surfaces and modern microfluidic technologies. In this work, we investigate the two-dimensional packing structures of charged particles confined on a pinched sphere. By continuously pinching the sphere, we observe cleavage of elongated scars into pleats, proliferation of disclinations, and subsequently, emergence of a depletion zone at the negatively curved waist that is completely void of particles. We systematically study the geometrics and energetics of the depletion zone, and reveal its physical origin as a finite size effect, due to the interplay between Coulomb repulsion and concave geometry of the pinched sphere. These results further our understanding of crystallography on curved surfaces, and have implications in design and manipulation of charged, deformable interfaces in various applications.

  12. GAMERA - The New Magnetospheric Code

    Science.gov (United States)

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure its long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high-order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure - multi-threaded OpenMP with an overarching MPI layer for large-scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single-processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.

  13. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  14. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  15. Molten-Salt Depleted-Uranium Reactor

    OpenAIRE

    Dong, Bao-Guo; Dong, Pei; Gu, Ji-Yuan

    2015-01-01

    The supercritical, reactor core melting and nuclear fuel leaking accidents have troubled fission reactors for decades, and greatly limit their extensive applications. Now these troubles are still open. Here we first show a possible perfect reactor, the Molten-Salt Depleted-Uranium Reactor, which has none of the above accident troubles. We found this reactor could be realized in practical applications in terms of all of the scientific principles, principles of operation, technology, and engineering. Our results...

  16. The ultimate disposition of depleted uranium

    Energy Technology Data Exchange (ETDEWEB)

    Lemons, T.R. [Uranium Enrichment Organization, Oak Ridge, TN (United States)

    1991-12-31

    Depleted uranium (DU) is produced as a by-product of the uranium enrichment process. Over 340,000 MTU of DU in the form of UF{sub 6} have been accumulated at the US government gaseous diffusion plants and the stockpile continues to grow. An overview of issues and objectives associated with the inventory management and the ultimate disposition of this material is presented.

  17. Optical assessment of phytoplankton nutrient depletion

    DEFF Research Database (Denmark)

    Heath, M.R.; Richardson, Katherine; Kiørboe, Thomas

    1990-01-01

    The ratio of light absorption at 480 and 665 nm by 90% acetone extracts of marine phytoplankton pigments has been examined as a potential indicator of phytoplankton nutritional status in both laboratory and field studies. The laboratory studies demonstrated a clear relationship between nutritiona......-replete and nutrient-depleted cells. The field data suggest that the absorption ratio may be a useful indicator of nutritional status of natural phytoplankton populations, and can be used to augment the interpretation of other data....

  18. Heatstroke Pathophysiology: The Energy Depletion Model

    Science.gov (United States)

    1989-06-12

    metabolic acidosis suggested preexisting whole-body K+ deficiency. In K+-depleted dogs (49), muscle weakness occurred when animals had lost... and A.C. Issekutz. Lactate metabolism in resting and exercising dogs. J. Appl. Physiol. 40:312-319, 1976. 43. Jacobs, I. Blood lactate: Implications...31 50. Kreisberg, R.A., L.F. Pennington, and B.R. Boshell. Lactate turnover and gluconeogenesis in normal and obese humans. Diabetes 19:53-63, 1970

  19. Carbon sequestration in depleted oil shale deposits

    Science.gov (United States)

    Burnham, Alan K; Carroll, Susan A

    2014-12-02

    A method and apparatus are described for sequestering carbon dioxide underground by mineralizing the carbon dioxide with co-injected fluids and minerals remaining from the extraction of shale oil. In one embodiment, an illite-rich oil shale is heated to pyrolyze the shale underground, and carbon dioxide is provided to the remaining depleted oil shale while at an elevated temperature. Conditions are sufficient to mineralize the carbon dioxide.

  20. Depleted uranium residual radiological risk assessment for Kosovo sites

    International Nuclear Information System (INIS)

    Durante, Marco; Pugliese, Mariagabriella

    2003-01-01

    During the recent conflict in Yugoslavia, depleted uranium rounds were employed and were left in the battlefield. Health concern is related to the risk arising from contamination of areas in Kosovo with depleted uranium penetrators and dust. Although chemical toxicity is the most significant health risk related to uranium, radiation exposure has been allegedly related to cancers among veterans of the Balkan conflict. Uranium munitions are considered to be a source of radiological contamination of the environment. Based on measurements and estimates from the recent Balkan Task Force UNEP mission in Kosovo, we have estimated effective doses to resident populations using a well-established food-web mathematical model (RESRAD code). The UNEP mission did not find any evidence of widespread contamination in Kosovo. Rather than the actual measurements, we elected to use a desk assessment scenario (Reference Case) proposed by the UNEP group as the source term for computer simulations. Specific applications to two Kosovo sites (Planeja village and Vranovac hill) are described. Results of the simulations suggest that radiation doses from water-independent pathways are negligible (annual doses below 30 μSv). A small radiological risk is expected from contamination of the groundwater in conditions of effective leaching and low distribution coefficient of uranium metal. Under the assumptions of the Reference Case, significant radiological doses (>1 mSv/year) might be achieved after many years from the conflict through water-dependent pathways. Even in this worst-case scenario, DU radiological risk would be far overshadowed by its chemical toxicity

  1. Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell

    OpenAIRE

    Hein, Birka; Willig, Katrin I.; Hell, Stefan W.

    2008-01-01

    We demonstrate far-field optical imaging with subdiffraction resolution of the endoplasmic reticulum (ER) in the interior of a living mammalian cell. The diffraction barrier is overcome by applying stimulated emission depletion (STED) on a yellow fluorescent protein tag. Imaging individual structural elements of the ER revealed a focal plane (x, y) resolution of

  2. Modified BTC Algorithm for Audio Signal Coding

    Directory of Open Access Journals (Sweden)

    TOMIC, S.

    2016-11-01

    Full Text Available This paper describes a modification of a well-known image coding algorithm, named Block Truncation Coding (BTC), and its application in audio signal coding. The BTC algorithm was originally designed for black and white image coding. Since black and white images and audio signals have different statistical characteristics, the application of this image coding algorithm to an audio signal presents a novelty and a challenge. Several implementation modifications are described in this paper, while the original idea of the algorithm is preserved. The main modifications are performed in the area of signal quantization, by designing more adequate quantizers for audio signal processing. The result is a novel audio coding algorithm, whose performance is presented and analyzed in this research. The performance analysis indicates that this novel algorithm can be successfully applied in audio signal coding.
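
    For reference, the Python sketch below applies the original moment-preserving BTC quantizer to 1-D audio blocks; it illustrates the baseline algorithm that the paper modifies, with block length and test signal chosen arbitrarily, and does not reproduce the paper's modified quantizers.

      import numpy as np

      def btc_encode_block(block):
          # Moment-preserving BTC of one block: a 1-bit map plus two output levels.
          n = block.size
          m, sigma = block.mean(), block.std()
          bitmap = block >= m
          q = int(bitmap.sum())
          if q == 0 or q == n or sigma == 0.0:      # flat block: single level
              return bitmap, m, m
          low = m - sigma * np.sqrt(q / (n - q))    # levels chosen so that the block
          high = m + sigma * np.sqrt((n - q) / q)   # mean and variance are preserved
          return bitmap, low, high

      def btc_decode_block(bitmap, low, high):
          return np.where(bitmap, high, low)

      # toy usage on a synthetic audio signal, 16-sample blocks
      t = np.arange(0, 1, 1 / 8000.0)
      audio = 0.6 * np.sin(2 * np.pi * 440 * t)
      blocks = audio[: len(audio) // 16 * 16].reshape(-1, 16)
      decoded = np.concatenate([btc_decode_block(*btc_encode_block(b)) for b in blocks])
      err = audio[: decoded.size] - decoded
      print("SNR [dB]:", 10 * np.log10(np.sum(audio[: decoded.size] ** 2) / np.sum(err ** 2)))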

  3. High-energy-ion depletion in the charge exchange spectrum of Alcator C

    International Nuclear Information System (INIS)

    Schissel, D.P.

    1982-01-01

    A three-dimensional, guiding center, Monte Carlo code is developed to study ion orbits in Alcator C. The highly peaked ripple of the magnetic field of Alcator is represented by an analytical expression for the vector potential. The analytical ripple field is compared to the magnetic field generated by a current model of the toroidal plates; agreement is excellent. Ion-ion scattering is simulated by a pitch-angle and an energy scattering operator. The equations of motion are integrated with a variable-time-step, extrapolating integrator. The code produces collisionless banana and ripple-trapped loss cones which agree well with present theory. Global energy distributions have been calculated and show a slight depletion above 8.5 keV. Particles which are ripple trapped and lost are at energies below where depletion is observed. It is found that ions pitch-angle scatter less as energy is increased. The result is that, when viewed in velocity space, ions form probability lobes the shape of mouse ears, which are fat near the thermal energy. Therefore, particles enter the loss cone at low energies near the bottom of the core. Recommendations for future work include improving the analytic model of the ripple field, testing the effect of ∇·B ≠ 0 on ion orbits, and improving the efficiency of the code by either using a spline fit for the magnetic fields or by creating a vectorized Monte Carlo code
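
    The pitch-angle scattering operator mentioned above is commonly implemented as a small random kick applied to the pitch variable at each collisional time step. The Python fragment below sketches one standard textbook Monte Carlo form of such an operator; it is a generic operator, not the actual Alcator C code, and the collision frequency and time step values are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)

      def pitch_angle_step(lam, nu, dt):
          """One Monte Carlo pitch-angle scattering step for lam = v_parallel / v.

          Generic Lorentz-operator form: deterministic drag toward lam = 0 plus
          a random kick whose variance matches the pitch-angle diffusion coefficient.
          """
          kick = rng.choice([-1.0, 1.0]) * np.sqrt((1.0 - lam**2) * nu * dt)
          lam_new = lam * (1.0 - nu * dt) + kick
          return np.clip(lam_new, -1.0, 1.0)

      # follow one ion's pitch for a few collision times (illustrative values only)
      lam, nu, dt = 0.9, 1.0e4, 1.0e-6
      for _ in range(1000):
          lam = pitch_angle_step(lam, nu, dt)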

  4. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks installed and is provided for 64-bit on Mac, Linux and Windows.

  5. Error Correcting Codes

    Indian Academy of Sciences (India)

    sound quality is, in essence, obtained by accurate waveform coding and decoding of the audio signals. In addition, the coded audio information is protected against disc errors by the use of a Cross Interleaved Reed-Solomon Code (CIRC). Reed-Solomon codes were discovered by Irving Reed and Gus Solomon in 1960.

  6. National Workshop on Coding Theory and Cryptography

    Indian Academy of Sciences (India)

    Coding theory and cryptography are two inter-related branches of applied algebra that find increasing applications in communication theory, data security and many other areas of information technology. This workshop will discuss the basics and applications of algebraic coding theory and cryptography, public key ...

  7. 26 CFR 1.642(e)-1 - Depreciation and depletion.

    Science.gov (United States)

    2010-04-01

    ... 26 Internal Revenue 8 2010-04-01 2010-04-01 false Depreciation and depletion. 1.642(e)-1 Section 1... (CONTINUED) INCOME TAXES Estates, Trusts, and Beneficiaries § 1.642(e)-1 Depreciation and depletion. An estate or trust is allowed the deductions for depreciation and depletion, but only to the extent the...

  8. Characterizing the transcriptome upon depletion of RNA processing factors

    DEFF Research Database (Denmark)

    Herudek, Jan

    consequences of protein depletion. Hence, immediate depletion phenotypes might be shielded due to complementing mechanisms. Here I adopted an auxin inducible degron approach for the rapid protein depletion in mammalian cells, which results in robust protein reduction in a few hours. Moreover, I combined...

  9. 26 CFR 1.613-1 - Percentage depletion; general rule.

    Science.gov (United States)

    2010-04-01

    ... TAX (CONTINUED) INCOME TAXES (CONTINUED) Natural Resources § 1.613-1 Percentage depletion; general rule. (a) In general. In the case of a taxpayer computing the deduction for depletion under section 611... 26 Internal Revenue 7 2010-04-01 2010-04-01 true Percentage depletion; general rule. 1.613-1...

  10. The depletion potential in one, two and three dimensions

    Indian Academy of Sciences (India)

    Abstract. We study the behavior of the depletion potential in binary mixtures of hard particles in one, two, and three dimensions within the framework of a general theory for depletion potential using density functional theory. By doing so we extend earlier studies of the depletion potential in three dimensions to the cases of d ...

  11. Children's Models of the Ozone Layer and Ozone Depletion.

    Science.gov (United States)

    Christidou, Vasilia; Koulaidis, Vasilis

    1996-01-01

    The views of 40 primary students on ozone and its depletion were recorded through individual, semi-structured interviews. The data analysis resulted in the formation of a limited number of models concerning the distribution and role of ozone in the atmosphere, the depletion process, and the consequences of ozone depletion. Identifies five target…

  12. The depletion potential in one, two and three dimensions

    Indian Academy of Sciences (India)

    We study the behavior of the depletion potential in binary mixtures of hard particles in one, two, and three dimensions within the framework of a general theory for depletion potential using density functional theory. By doing so we extend earlier studies of the depletion potential in three dimensions to the cases of d = 1 and 2 ...

  13. Two-Layer Coding Rate Optimization in Relay-Aided Systems

    DEFF Research Database (Denmark)

    Sun, Fan

    2011-01-01

    A two-layer coding scheme is proposed, where physical layer channel coding is utilized within each packet for error-correction and random network coding is applied on top of channel coding for network error-control. There is a natural tradeoff between the physical layer coding rate and the network coding rate given...... requirement. Numerical results are also provided to show the optimized physical layer coding and network coding rate pairs in different system scenarios....

  14. Network Coding Taxonomy

    OpenAIRE

    Adamson , Brian; Adjih , Cédric; Bilbao , Josu; Firoiu , Victor; Fitzek , Frank; Samah , Ghanem ,; Lochin , Emmanuel; Masucci , Antonia; Montpetit , Marie-Jose; Pedersen , Morten V.; Peralta , Goiuri; Roca , Vincent; Paresh , Saxena; Sivakumar , Senthil

    2017-01-01

    Internet Research Task Force - Working document of the Network Coding Research Group (NWCRG), draft-irtf-nwcrg-network-coding-taxonomy-05 (work in progress), https://datatracker.ietf.org/doc/draft-irtf-nwcrg-network-coding-taxonomy/; This document summarizes a recommended terminology for Network Coding concepts and constructs. It provides a comprehensive set of terms with unique names in order to avoid ambiguities in future Network Coding IRTF and IETF documents. This document is intended to ...

  15. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  16. Applied Macroeconomics

    NARCIS (Netherlands)

    Heijman, W.J.M.

    2000-01-01

    This book contains a course in applied macroeconomics. Macroeconomic theory is applied to real world cases. Students are expected to compute model results with the help of a spreadsheet program. To that end the book also contains descriptions of the spreadsheet applications used, such as linear

  17. Applied Electromagnetics

    International Nuclear Information System (INIS)

    Yamashita, H.; Marinova, I.; Cingoski, V.

    2002-01-01

    These proceedings contain papers relating to the 3rd Japanese-Bulgarian-Macedonian Joint Seminar on Applied Electromagnetics. Included are the following groups: Numerical Methods I; Electrical and Mechanical System Analysis and Simulations; Inverse Problems and Optimizations; Software Methodology; Numerical Methods II; Applied Electromagnetics

  18. Uncertainty analysis of the FRAP code

    International Nuclear Information System (INIS)

    Peck, S.O.

    1978-01-01

    A user oriented, automated uncertainty analysis capability has been built into the FRAP code (Fuel Rod Analysis Program) and applied to a PWR fuel rod undergoing a LOCA. The method of uncertainty analysis is the Response Surface Method (RSM). (author)

  19. A modern depleted uranium manufacturing facility

    International Nuclear Information System (INIS)

    Zagula, T.A.

    1995-07-01

    The Specific Manufacturing Capabilities (SMC) Project located at the Idaho National Engineering Laboratory (INEL) and operated by Lockheed Martin Idaho Technologies Co. (LMIT) for the Department of Energy (DOE) manufactures depleted uranium for use in the U.S. Army M1A2 Abrams Heavy Tank Armor Program. Since 1986, SMC has fabricated more than 12 million pounds of depleted uranium (DU) products in a multitude of shapes and sizes with varying metallurgical properties while maintaining security, environmental, health and safety requirements. During initial facility design in the early 1980s, emphasis on employee safety, radiation control and environmental consciousness was gaining momentum throughout the DOE complex. This fact, coupled with security and production requirements, forced design efforts to focus on incorporating automation, local containment and computerized material accountability at all work stations. The result was a fully automated production facility engineered to manufacture DU armor packages with virtually no human contact while maintaining security, traceability and quality requirements. This hands-off approach to handling depleted uranium resulted in minimal radiation exposures and employee injuries. Construction of the manufacturing facility was complete in early 1986 with the first armor package certified in October 1986. Rolling facility construction was completed in 1987 with the first certified plate produced in the fall of 1988. Since 1988 the rolling and manufacturing facilities have delivered more than 2600 armor packages on schedule with 100% final product quality acceptance. During this period there was an annual average of only 2.2 lost time incidents and a single individual maximum radiation exposure of 150 mrem. SMC is an example of designing and operating a facility that meets regulatory requirements with respect to national security, radiation control and personnel safety while achieving production schedules and product quality

  20. Ozone depletion: implications for the veterinarian.

    Science.gov (United States)

    Kopecky, K E

    1978-09-15

    Man has inadvertently modified the stratosphere. There is a good possibility that the ozone layer is being depleted by the use of jet aircraft (SST), chlorofluoromethane propellants, and nitrogen fertilizers. Under unpolluted conditions, the production of ozone equals its destruction. By man's intervention, however, the destruction may exceed the production. The potential outcome is increased intensity of solar ultraviolet (280-400 nm) radiation and penetration to the earth's surface of previously absorbed wavelengths below about 280 nm. The increased ultraviolet radiation would increase the likelihood of skin cancer in man and ocular squamous cell carcinoma in cattle. The climate also might be modified, possibly in an undesirable way.

  1. Capstone Depleted Uranium Aerosols: Generation and Characterization

    Energy Technology Data Exchange (ETDEWEB)

    Parkhurst, MaryAnn; Szrom, Fran; Guilmette, Ray; Holmes, Tom; Cheng, Yung-Sung; Kenoyer, Judson L.; Collins, John W.; Sanderson, T. Ellory; Fliszar, Richard W.; Gold, Kenneth; Beckman, John C.; Long, Julie

    2004-10-19

    In a study designed to provide an improved scientific basis for assessing possible health effects from inhaling depleted uranium (DU) aerosols, a series of DU penetrators was fired at an Abrams tank and a Bradley fighting vehicle. A robust sampling system was designed to collect aerosols in this difficult environment and continuously monitor the sampler flow rates. Aerosols collected were analyzed for uranium concentration and particle size distribution as a function of time. They were also analyzed for uranium oxide phases, particle morphology, and dissolution in vitro. The resulting data provide input useful in human health risk assessments.

  2. Dopamine Depletion Impairs Bilateral Sensory Processing in the Striatum in a Pathway-Dependent Manner.

    Science.gov (United States)

    Ketzef, Maya; Spigolon, Giada; Johansson, Yvonne; Bonito-Oliva, Alessandra; Fisone, Gilberto; Silberberg, Gilad

    2017-05-17

    Parkinson's disease (PD) is a movement disorder caused by the loss of dopaminergic innervation, particularly to the striatum. PD patients often exhibit sensory impairments, yet the underlying network mechanisms are unknown. Here we examined how dopamine (DA) depletion affects sensory processing in the mouse striatum. We used the optopatcher for online identification of direct and indirect pathway projection neurons (MSNs) during in vivo whole-cell recordings. In control mice, MSNs encoded the laterality of sensory inputs with larger and earlier responses to contralateral than ipsilateral whisker deflection. This laterality coding was lost in DA-depleted mice due to adaptive changes in the intrinsic and synaptic properties, mainly, of direct pathway MSNs. L-DOPA treatment restored laterality coding by increasing the separation between ipsilateral and contralateral responses. Our results show that DA depletion impairs bilateral tactile acuity in a pathway-dependent manner, thus providing unexpected insights into the network mechanisms underlying sensory deficits in PD. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A concatenation scheme of LDPC codes and source codes for flash memories

    Science.gov (United States)

    Huang, Qin; Pan, Song; Zhang, Mu; Wang, Zulin

    2012-12-01

    Recently, low-density parity-check (LDPC) codes have been applied in flash memories to correct errors. However, as verified in this article, their performance degrades rapidly as the number of stuck cells increases. Thus, this paper presents a concatenated reliability scheme of LDPC codes and source codes, which aims to improve the performance of LDPC codes for flash memories with stuck cells. In this scheme, the locations of stuck cells are recorded by source codes in the write process, such that erasures rather than wrong log-likelihood ratios on these cells are given in the read process. Then, LDPC codes correct these erasures and the soft errors caused by cell-to-cell interference. The analyses of channel capacity and of the compression rates of source codes with side information show that the memory cost of the proposed scheme is moderately low. Simulation results verify that the proposed scheme outperforms the traditional scheme with only LDPC codes.
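
    The key step described above, handing the decoder erasures rather than misleading log-likelihood ratios at the recorded stuck-cell positions, can be sketched in a few lines. The snippet below only shows that LLR-preparation step under an assumed AWGN-style read channel; the source coding of the stuck-cell map and the LDPC decoder itself are outside its scope, and all names are illustrative.

      import numpy as np

      def channel_llrs(read_values, noise_var, stuck_positions):
          """Compute per-cell LLRs, marking recorded stuck cells as erasures.

          read_values: soft read-back values (assumed +/-1 signalling plus noise)
          stuck_positions: indices recovered from the side information written
                           in the write process (assumed already decoded)
          """
          llr = 2.0 * read_values / noise_var      # usual AWGN-style LLR
          llr[list(stuck_positions)] = 0.0         # erasure: no reliability either way
          return llr

      reads = np.array([0.9, -1.1, 0.2, -0.8, 1.3])
      llrs = channel_llrs(reads, noise_var=0.5, stuck_positions=[2])
      # llrs[2] == 0.0, so an LDPC decoder would treat cell 2 as erased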

  4. A Critical Assessment of the Resource Depletion Potential of Current and Future Lithium-Ion Batteries

    Directory of Open Access Journals (Sweden)

    Jens F. Peters

    2016-12-01

    Full Text Available Resource depletion aspects are repeatedly used as an argument for a shift towards new battery technologies. However, whether serious shortages due to the increased demand for traction and stationary batteries can actually be expected is subject to an ongoing discussion. In order to identify the principal drivers of resource depletion for battery production, we assess different lithium-ion battery types and a new lithium-free battery technology (sodium-ion) under this aspect, applying different assessment methodologies. The findings show that very different results are obtained with existing impact assessment methodologies, which hinders clear interpretation. While cobalt, nickel and copper can generally be considered as critical metals, the magnitude of their depletion impacts in comparison with that of other battery materials like lithium, aluminum or manganese differs substantially. A high importance is also found for indirect resource depletion effects caused by the co-extraction of metals from mixed ores. Remarkably, the resource depletion potential per kg of produced battery is driven only partially by the electrode materials and thus depends comparatively little on the battery chemistry itself. One of the key drivers for resource depletion seems to be the metals (and co-products) in electronic parts required for the battery management system, a component rather independent from the actual battery chemistry. However, when assessing the batteries on a capacity basis (per kWh storage capacity), a high energy density also turns out to be relevant, since it reduces the mass of battery required for providing one kWh, and thus the associated resource depletion impacts.

  5. Efficient Coding of Information: Huffman Coding

    Indian Academy of Sciences (India)

    1. Introduction. Shannon's landmark paper 'A Mathematical Theory of Communication' [1] laid the foundation for communication ... coding theory, codes over graphs and iterative techniques, and information theory. ... An important consequence of independence is that if {X1, X2, ..., Xn} are independent random variables, each.
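
    Since the snippet above only gestures at the construction, a minimal Huffman-coding sketch may help: repeatedly merge the two least probable symbols until a single tree remains, then read each codeword off the merge path. This is the textbook greedy algorithm written in Python for illustration; the symbol frequencies are made up.

      import heapq

      def huffman_code(freqs):
          """Return a prefix code (dict symbol -> bitstring) for the given frequencies."""
          # Each heap entry: (weight, tie-breaker, {symbol: codeword-so-far})
          heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
          heapq.heapify(heap)
          if len(heap) == 1:                        # degenerate single-symbol case
              return {s: "0" for s in freqs}
          counter = len(heap)
          while len(heap) > 1:
              w1, _, c1 = heapq.heappop(heap)
              w2, _, c2 = heapq.heappop(heap)
              merged = {s: "0" + code for s, code in c1.items()}
              merged.update({s: "1" + code for s, code in c2.items()})
              heapq.heappush(heap, (w1 + w2, counter, merged))
              counter += 1
          return heap[0][2]

      print(huffman_code({"a": 0.45, "b": 0.25, "c": 0.20, "d": 0.10}))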

  6. Turbo Codes Extended with Outer BCH Code

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl

    1996-01-01

    The "error floor" observed in several simulations with the turbo codes is verified by calculation of an upper bound to the bit error rate for the ensemble of all interleavers. Also an easy way to calculate the weight enumerator used in this bound is presented. An extended coding scheme is proposed...

  7. Construction of Capacity Achieving Lattice Gaussian Codes

    KAUST Repository

    Alghamdi, Wael

    2016-04-01

    We propose a new approach to proving results regarding channel coding schemes based on construction-A lattices for the Additive White Gaussian Noise (AWGN) channel that yields new characterizations of the code construction parameters, i.e., the primes and dimensions of the codes, as functions of the block-length. The approach we take introduces an averaging argument that explicitly involves the considered parameters. This averaging argument is applied to a generalized Loeliger ensemble [1] to provide a more practical proof of the existence of AWGN-good lattices, and to characterize suitable parameters for the lattice Gaussian coding scheme proposed by Ling and Belfiore [3].

  8. QR codes: next level of social media.

    Science.gov (United States)

    Gottesman, Wesley; Baum, Neil

    2013-01-01

    The QR code, which is short for quick response code, was invented in Japan for the auto industry. Its purpose was to track vehicles during manufacture; it was designed to allow high-speed component scanning. Now the scanning can be easily accomplished via cell phone, making the technology useful and within reach of your patients. There are numerous applications for QR codes in the contemporary medical practice. This article describes QR codes and how they might be applied for marketing and practice management.

  9. Revealing water's secrets: deuterium depleted water.

    Science.gov (United States)

    Goncharuk, Vladyslav V; Kavitskaya, Alina A; Romanyukina, Iryna Yu; Loboda, Oleksandr A

    2013-06-17

    The anomalous properties of water have been of great interest for generations of scientists. However, the impact of the small amount of deuterium that is always present in water has never been explored before. For the first time, the fundamental properties of deuterium-depleted (light) water at 4°C and 20°C are presented here. The obtained results show the important role of deuterium in the properties of bulk water. At 4°C the lowest value of the kinematic viscosity (1.46 mm²/s) was found for a 96.5 ppm D/H ratio. A significant deviation in surface tension values was observed in deuterium-depleted water samples at both temperature regimes. The experimental data provide direct evidence that the density, surface tension and viscosity anomalies of water are caused by the presence of variable concentrations of deuterium, which lead to the formation of water clusters of different size and quantity. The investigated properties of light water reveal the origin of the water anomalies. A new theoretical model of cluster formation taking account of the isotope effect is proposed.

  10. Estimation of the Fuel Depletion Code Bias and Uncertainty in Burnup-Credit Criticality Analysis

    International Nuclear Information System (INIS)

    Kim, Jong Woon; Cho, Nam Zin; Lee, Sang Jin; Bae, Chang Yeal

    2006-01-01

    In the past, criticality safety analyses for commercial light-water-reactor (LWR) spent nuclear fuel (SNF) storage and transportation canisters assumed the spent fuel to be fresh (unirradiated) fuel with uniform isotopic compositions. This fresh-fuel assumption provides a well-defined, bounding approach to the criticality safety analysis that eliminates concerns related to the fuel operating history, and thus considerably simplifies the safety analysis. However, because this assumption ignores the inherent decrease in reactivity as a result of irradiation, it is very conservative. The concept of taking credit for the reduction in reactivity due to fuel burnup is commonly referred to as burnup credit. Implementation of burnup credit requires the computational prediction of the nuclide inventories (compositions) for the dominant fissile and absorbing nuclide species in spent fuel. In addition, the bias and uncertainty in the predicted concentration of all nuclides used in the analysis must be established by comparisons of calculated and measured radiochemical assay data. In this paper, three methods for considering the bias and uncertainty are reviewed, and the bias and uncertainty estimated with the third method are presented

  11. Applied superconductivity

    CERN Document Server

    Newhouse, Vernon L

    1975-01-01

    Applied Superconductivity, Volume II, is part of a two-volume series on applied superconductivity. The first volume dealt with electronic applications and radiation detection, and contains a chapter on liquid helium refrigeration. The present volume discusses magnets, electromechanical applications, accelerators, and microwave and rf devices. The book opens with a chapter on high-field superconducting magnets, covering applications and magnet design. Subsequent chapters discuss superconductive machinery such as superconductive bearings and motors; rf superconducting devices; and future prospec

  12. Recent trends in coding theory and its applications

    CERN Document Server

    Li, Wen-Ching Winnie

    2007-01-01

    Coding theory draws on a remarkable selection of mathematical topics, both pure and applied. The various contributions in this volume introduce coding theory and its most recent developments and applications, emphasizing both mathematical and engineering perspectives on the subject. This volume covers four important areas in coding theory: algebraic geometry codes, graph-based codes, space-time codes, and quantum codes. Both students and seasoned researchers will benefit from the extensive and self-contained discussions of the development and recent progress in these areas.

  13. Depletion Calculations Based on Perturbations. Application to the Study of a Rep-Like Assembly at Beginning of Cycle with TRIPOLI-4®.

    Science.gov (United States)

    Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh

    2014-06-01

    For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in such a way allows one to track fine 3-dimensional effects and to get rid of the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them by perturbation calculations: indeed, the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a REP-like assembly, studied at the beginning of its cycle. After having validated the method with a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
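
    The correlated-sampling idea named above can be illustrated outside of any transport code: histories are sampled once from a reference density, and a perturbed expectation is estimated by reweighting those same histories with the ratio of perturbed to reference densities, so the reference and perturbed estimates share their statistical noise. The toy Python example below applies this to a simple expectation; it is only a schematic of the technique, not of the TRIPOLI-4 implementation, and the densities and tally are made up.

      import numpy as np

      rng = np.random.default_rng(1)

      # Reference exponential density with rate a0; perturbed rate a1 (a "burnup step")
      a0, a1 = 1.0, 1.1
      x = rng.exponential(scale=1.0 / a0, size=100_000)   # histories sampled once

      def score(x):
          return x**2                                       # some tally of interest

      w = (a1 * np.exp(-a1 * x)) / (a0 * np.exp(-a0 * x))   # likelihood-ratio weights

      ref_estimate = score(x).mean()            # tally under the reference density
      pert_estimate = (w * score(x)).mean()     # same histories, reweighted
      # Exact values are 2/a0**2 and 2/a1**2; because both estimates reuse the same
      # histories, their difference has far lower variance than two independent runs.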

  14. Monte Carlo Depletion Analysis of a PWR Integral Fuel Burnable Absorber by MCNAP

    Science.gov (United States)

    Shim, H. J.; Jang, C. S.; Kim, C. H.

    The MCNAP is a personal computer-based continuous energy Monte Carlo (MC) neutronics analysis program written in the C++ language. For the purpose of examining its qualification, a comparison of the depletion analysis of three integral burnable fuel assemblies of the pressurized water reactor (PWR) by the MCNAP and by deterministic fuel assembly (FA) design vendor codes is presented. It is demonstrated that the continuous energy MC calculation by the MCNAP can provide a very accurate neutronics analysis method for the burnable absorber FAs. It is also demonstrated that parallel MC computation by adoption of multiple PCs enables one to complete the lifetime depletion analysis of the FAs within the order of hours instead of the order of days otherwise.

  15. Sensitivity theory for reactor burnup analysis based on depletion perturbation theory

    International Nuclear Information System (INIS)

    Yang, Wonsik.

    1989-01-01

    The large computational effort involved in the design and analysis of advanced reactor configurations motivated the development of Depletion Perturbation Theory (DPT) for general fuel cycle analysis. The work here focused on two important advances in the current methods. First, the adjoint equations were developed for using the efficient linear flux approximation to decouple the neutron/nuclide field equations. And second, DPT was extended to the constrained equilibrium cycle which is important for the consistent comparison and evaluation of alternative reactor designs. Practical strategies were formulated for solving the resulting adjoint equations and a computer code was developed for practical applications. In all cases analyzed, the sensitivity coefficients generated by DPT were in excellent agreement with the results of exact calculations. The work here indicates that for a given core response, the sensitivity coefficients to all input parameters can be computed by DPT with a computational effort similar to a single forward depletion calculation

  16. Solution of the isotopic depletion equation using decomposition method and analytical solution

    Energy Technology Data Exchange (ETDEWEB)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: fprata@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@lmp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, which describes the spatial distribution of the neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within nuclear reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)
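
    For a feel of what an analytical depletion solution looks like, consider the simplest two-nuclide chain without feedback, dN1/dt = -lam1*N1 and dN2/dt = lam1*N1 - lam2*N2, whose closed-form (Bateman-type) solution is compared below against a crude numerical integration. This is only a generic illustration of solving depletion equations analytically, not the decomposition method of the paper; the rates are arbitrary.

      import numpy as np

      lam1, lam2 = 2.0, 0.5          # effective removal rates (illustrative units)
      N1_0, N2_0 = 1.0, 0.0
      t = np.linspace(0.0, 5.0, 501)

      # Closed-form Bateman solution of the two-nuclide chain
      N1 = N1_0 * np.exp(-lam1 * t)
      N2 = (N2_0 * np.exp(-lam2 * t)
            + N1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t)))

      # Simple explicit-Euler check of the same chain
      n1, n2, dt = N1_0, N2_0, t[1] - t[0]
      for _ in t[1:]:
          n1, n2 = n1 + dt * (-lam1 * n1), n2 + dt * (lam1 * n1 - lam2 * n2)
      # n1, n2 should now be close to N1[-1], N2[-1]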

  17. Solution of the isotopic depletion equation using decomposition method and analytical solution

    International Nuclear Information System (INIS)

    Prata, Fabiano S.; Silva, Fernando C.; Martinez, Aquilino S.

    2011-01-01

    In this paper an analytical calculation of the isotopic depletion equations is proposed, featuring a chain of major isotopes found in a typical PWR reactor. Part of this chain allows feedback reactions of the (n,2n) type. The method is based on decoupling the equations describing feedback from the rest of the chain by using the decomposition method, with analytical solutions for the other isotopes present in the chain. The method was implemented in a PWR reactor simulation code that makes use of the nodal expansion method (NEM) to solve the neutron diffusion equation, which describes the spatial distribution of the neutron flux inside the reactor core. Because the isotopic depletion calculation module is the most computationally intensive process within nuclear reactor core simulation systems, it is justified to look for a method that is both efficient and fast, with the objective of evaluating a larger number of core configurations in a short amount of time. (author)

  18. Applied number theory

    CERN Document Server

    Niederreiter, Harald

    2015-01-01

    This textbook effectively builds a bridge from basic number theory to recent advances in applied number theory. It presents the first unified account of the four major areas of application where number theory plays a fundamental role, namely cryptography, coding theory, quasi-Monte Carlo methods, and pseudorandom number generation, allowing the authors to delineate the manifold links and interrelations between these areas.  Number theory, which Carl-Friedrich Gauss famously dubbed the queen of mathematics, has always been considered a very beautiful field of mathematics, producing lovely results and elegant proofs. While only very few real-life applications were known in the past, today number theory can be found in everyday life: in supermarket bar code scanners, in our cars’ GPS systems, in online banking, etc.  Starting with a brief introductory course on number theory in Chapter 1, which makes the book more accessible for undergraduates, the authors describe the four main application areas in Chapters...

  19. Manual and Fast C Code Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Fadle Abdulla

    2010-01-01

    Full Text Available Developing an application with high performance through code optimization places a great responsibility on the programmers. While most existing compilers attempt to optimize the program code automatically, manual techniques remain the predominant method for performing optimization. Deciding where to try to optimize code is difficult, especially for large complex applications. For manual optimization, programmers can draw on their experience in writing the code and then use a software profiler to collect and analyze performance data from the code. In this work, we have gathered the practices that can best be applied to improve the style of writing programs in the C language, and we present an implementation of manual optimization of the codes using the Intel VTune profiler. The paper includes two case studies to illustrate our optimization on the Heap Sort and Factorial functions.

  20. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code is related to the statistics of extremes: extreme winds, extreme precipitation and flooding hazard risk analysis. (A.C.A.S.)

  1. Model and code development

    International Nuclear Information System (INIS)

    Anon.

    1977-01-01

    Progress in model and code development for reactor physics calculations is summarized. The codes included CINDER-10, PHROG, RAFFLE GAPP, DCFMR, RELAP/4, PARET, and KENO. Kinetics models for the PBF were developed

  2. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  3. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    The Quick Response code opens the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to the QR code being exploited on behalf of attackers. This thesis starts by presenting a general introduction of the background and stating two problems regarding QR code security, followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  4. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, the Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visually pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  5. Deuterium-depleted water. Romanian achievements and perspective

    International Nuclear Information System (INIS)

    Stefanescu, Ion; Saros-Rogobete, Irina; Titescu, Gheorghe

    2001-01-01

    Deuterium-depleted water has an isotopic content smaller than 145 ppm D/(D+H), which is the natural isotopic content of water. Beginning in 1996, ICSI Rm. Valcea, a producer of deuterium-depleted water, co-operated with specialized Romanian institutes to evaluate the biological effects of deuterium-depleted water. These investigations led to the following conclusions: - Deuterium-depleted water caused a tendency towards an increase of the basal tonus, accompanied by an intensification of the vasoconstrictor effects of phenylefrine, noradrenaline and angiotensin; the increase of the basal tonus and vascular reactivity produced by the deuterium-depleted water persists after the removal of the vascular endothelium; - Animals treated with deuterium-depleted water showed an increase of resistance both to sublethal and to lethal gamma radiation doses, suggesting a radioprotective action; - Deuterium-depleted water stimulates immune defence reactions and increases the number of polymorphonuclear neutrophils; - Investigations regarding the artificial reproduction of fish with deuterium-depleted water fecundated solutions confirmed a favourable influence in the embryo growth stage and on resistance in subsequent growth stages; - Germination, growth and the variability of quantitative characters in plants were studied; one can remark the favourable influence of deuterium-depleted water on biological processes in plants in various ontogenetic stages; - Deuterium depletion in seawater produces a diminution of the water spectral energy related to an increased metabolism of Tetraselmis suecica. (authors)

  6. Applied mathematics

    CERN Document Server

    Logan, J David

    2013-01-01

    Praise for the Third Edition"Future mathematicians, scientists, and engineers should find the book to be an excellent introductory text for coursework or self-study as well as worth its shelf space for reference." -MAA Reviews Applied Mathematics, Fourth Edition is a thoroughly updated and revised edition on the applications of modeling and analyzing natural, social, and technological processes. The book covers a wide range of key topics in mathematical methods and modeling and highlights the connections between mathematics and the applied and nat

  7. Tryptophan depletion affects compulsive behaviour in rats

    DEFF Research Database (Denmark)

    Merchán, A; Navarro, S V; Klein, A B

    2017-01-01

    RATIONALE: Compulsive behaviour, present in different psychiatric disorders, such as obsessive-compulsive disorder, schizophrenia and drug abuse, is associated with altered levels of monoamines, particularly serotonin (5-hydroxytryptamine) and its receptor system. OBJECTIVES: The present study...... evaluated, as well as the 5-HT2A and 5-HT1A receptor binding, in different brain regions. METHODS: Wistar rats were selected as high (HD) or low (LD) drinkers according to their SIP behaviour, while Lister hooded rats did not show SIP acquisition. Both strains were fed for 14 days with either a TRP...... in the striatum was significantly reduced in the TRP-depleted HD Wistar rats. CONCLUSIONS: These results suggest that alterations of the serotonergic system could be involved in compulsive behaviour in vulnerable populations....

  8. Anxiety, ego depletion, and sports performance.

    Science.gov (United States)

    Englert, Chris; Bertrams, Alex

    2012-10-01

    In the present article, we analyzed the role of self-control strength and state anxiety in sports performance. We tested the hypothesis that self-control strength and state anxiety interact in predicting sports performance on the basis of two studies, each using a different sports task (Study 1: performance in a basketball free throw task, N = 64; Study 2: performance in a dart task, N = 79). The patterns of results were as expected in both studies: Participants with depleted self-control strength performed worse in the specific tasks as their anxiety increased, whereas there was no significant relation for participants with fully available self-control strength. Furthermore, different degrees of available self-control strength did not predict performance in participants who were low in state anxiety, but did in participants who were high in state anxiety. Thus increasing self-control strength could reduce the negative anxiety effects in sports and improve athletes' performance under pressure.

  9. Directional depletion interactions in shaped particles

    Directory of Open Access Journals (Sweden)

    A. Scala

    2014-09-01

    Full Text Available Entropic forces in colloidal suspensions and in polymer-colloid systems are of long-standing and continuing interest. Experiments show how entropic forces can be used to control the self-assembly of colloidal particles. Significant advances in colloidal synthesis made in the past two decades have enabled the preparation of high quality nano-particles with well-controlled sizes, shapes, and compositions, indicating that such particles can be utilized as "artificial atoms" to build new materials. To elucidate the effects of the shape of particles upon the magnitude of entropic interaction, we analyse the entropic interactions of two cut-spheres. We show that the solvent induces a strong directional depletion attraction among flat faces of the cut-spheres. Such an effect highlights the possibility of using the shape of particles to control directionality and strength of interaction.

  10. Design optimization using depletion perturbation theory

    International Nuclear Information System (INIS)

    Worley, B.A.

    1984-06-01

    Analysis of the fuel cycle performance of a reactor requires knowledge of the entire fuel burnup history. The optimal design depends upon the desired performance parameter or combination of parameters to be minimized (or maximized). The emphasis to date has been to use some combination of iterations involving a number of direct calculations, static perturbation theory, binary exchange methods, and empirical relationships. The object of this study is to demonstrate an approach to optimization based upon Depletion Perturbation Theory (DPT). The DPT equations directly couple the nuclide burnup equations and the neutron balance equations. The equations require the calculation of forward and adjoint solutions for the neutron flux and nuclide transmutations. The application is for analysis of a modular HTGR. The reactor has axially dependent fuel loadings in order to achieve an axial power shape that keeps fuel temperatures below a specified maximum

  11. Kinetic depletion model for pellet ablation

    Energy Technology Data Exchange (ETDEWEB)

    Kuteev, Boris V. [State Technical Univ., St. Petersburg (Russian Federation)

    2001-11-01

    A kinetic model for the depletion effect, which determines pellet ablation when the pellet passes a rational magnetic surface, is formulated. The model predicts a moderate decrease of the ablation rate compared with the earlier considered monoenergy versions [1, 2]. For typical T-10 conditions the ablation rate is reduced by a factor of 2.5 when the 1-mm pellet penetrates through the plasma center. A substantial deceleration of pellets - about 15% per centimeter of the low-shear rational-q region - is predicted. Penetration for Low Field Side and High Field Side injections is considered, taking into account the modification of the electron distribution function by the toroidal magnetic field. It is shown that the Shafranov shift and toroidal effects yield a penetration length for HFS injection higher by a factor of 1.5. This fact should be taken into account when plasma-shielding effects on penetration are considered. (author)

  12. ARC Code TI: CODE Software Framework

    Data.gov (United States)

    National Aeronautics and Space Administration — CODE is a software framework for control and observation in distributed environments. The basic functionality of the framework allows a user to observe a distributed...

  13. ARC Code TI: ROC Curve Code Augmentation

    Data.gov (United States)

    National Aeronautics and Space Administration — ROC (Receiver Operating Characteristic) curve Code Augmentation was written by Rodney Martin and John Stutz at NASA Ames Research Center and is a modification of ROC...

  14. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  15. Error Correcting Codes

    Indian Academy of Sciences (India)

    the reading of data from memory is the receiving process. Protecting data in computer memories was one of the earliest applications of Hamming codes. We now describe the clever scheme invented by Hamming in 1948. To keep things simple, we describe the binary length-7 Hamming code. Encoding in the Hamming Code.
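
    For concreteness, the binary length-7 Hamming code mentioned above can be written out directly: four message bits are multiplied by a generator matrix to give seven code bits, and any single bit error is located by the three-bit syndrome. The Python sketch below uses one conventional systematic choice of generator and parity-check matrices; the exact bit ordering in the article may differ.

      import numpy as np

      # Systematic Hamming(7,4): codeword = [d1 d2 d3 d4 p1 p2 p3]
      G = np.array([[1, 0, 0, 0, 1, 1, 0],
                    [0, 1, 0, 0, 1, 0, 1],
                    [0, 0, 1, 0, 0, 1, 1],
                    [0, 0, 0, 1, 1, 1, 1]])
      H = np.array([[1, 1, 0, 1, 1, 0, 0],
                    [1, 0, 1, 1, 0, 1, 0],
                    [0, 1, 1, 1, 0, 0, 1]])

      def encode(bits4):
          return np.array(bits4) @ G % 2

      def correct(word7):
          syndrome = H @ np.array(word7) % 2
          if syndrome.any():                     # non-zero syndrome: locate the error
              err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
              word7 = np.array(word7) ^ np.eye(7, dtype=int)[err]
          return word7

      cw = encode([1, 0, 1, 1])
      noisy = cw ^ np.eye(7, dtype=int)[2]       # flip one bit
      assert np.array_equal(correct(noisy), cw)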

  16. Morse Code Activity Packet.

    Science.gov (United States)

    Clinton, Janeen S.

    This activity packet offers simple directions for setting up a Morse Code system appropriate to interfacing with any of several personal computer systems. Worksheets are also included to facilitate teaching Morse Code to persons with visual or other disabilities including blindness, as it is argued that the code is best learned auditorily. (PB)

  17. Modelling chemical depletion profiles in regolith

    Science.gov (United States)

    Brantley, S.L.; Bandstra, J.; Moore, J.; White, A.F.

    2008-01-01

    Chemical or mineralogical profiles in regolith display reaction fronts that document depletion of leachable elements or minerals. A generalized equation employing lumped parameters was derived to model such ubiquitously observed patterns: C = C0 / { [(C0 - C_{x=0}) / C_{x=0}] exp(λ_ini k̂ x) + 1 }. Here C, C_{x=0}, and C0 are the concentrations of an element at a given depth x, at the top of the reaction front, and in the parent, respectively. λ_ini is the roughness of the dissolving mineral in the parent and k̂ is a lumped kinetic parameter. This kinetic parameter is an inverse function of the porefluid advective velocity and a direct function of the dissolution rate constant times mineral surface area per unit volume regolith. This model equation fits profiles of concentration versus depth for albite in seven weathering systems and is consistent with the interpretation that the surface area (m² mineral per m³ bulk regolith) varies linearly with the concentration of the dissolving mineral across the front. Dissolution rate constants can be calculated from the lumped fit parameters for these profiles using observed values of weathering advance rate, the proton driving force, the geometric surface area per unit volume regolith and the parent concentration of albite. These calculated values of the dissolution rate constant compare favorably to literature values. The model equation, useful for reaction fronts in both steady-state erosional and quasi-stationary non-erosional systems, incorporates the variation of reaction affinity using pH as a master variable. Use of this model equation to fit depletion fronts for soils highlights the importance of buffering of pH in the soil system. Furthermore, the equation should allow better understanding of the effects of important environmental variables on weathering rates. © 2008.
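
    Read as a function of depth, the fitted profile above is a logistic-type curve in x, which makes it simple to evaluate or fit numerically. A small Python helper, written directly from the lumped-parameter form as reconstructed above and using made-up parameter values purely for illustration, is shown below; the depth convention and units follow the paper, not this sketch.

      import numpy as np

      def depletion_profile(x, C0, C_top, lam_ini_khat):
          """Concentration C(x) across a reaction front (lumped-parameter form).

          C0           -- parent concentration
          C_top        -- concentration at the top of the front (x = 0)
          lam_ini_khat -- product of initial roughness and lumped kinetic parameter
          """
          return C0 / ((C0 - C_top) / C_top * np.exp(lam_ini_khat * x) + 1.0)

      x = np.linspace(0.0, 10.0, 101)                       # illustrative depth grid
      profile = depletion_profile(x, C0=0.20, C_top=0.01, lam_ini_khat=1.5)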

  18. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  19. Applied Macroeconometrics

    OpenAIRE

    Nektarios Aslanidis

    2017-01-01

    This book treats econometric methods for applied analysis, with a particular focus on applications in macroeconomics. Topics include macroeconomic data, panel data models, unobserved heterogeneity, model comparison, endogeneity, dynamic econometric models, vector autoregressions, forecast evaluation, and structural identification. The book provides undergraduate students with the necessary knowledge to be able to undertake econometric analysis in modern macroeconomic research.

  20. Accurate discrimination of conserved coding and non-coding regions through multiple indicators of evolutionary dynamics

    Directory of Open Access Journals (Sweden)

    Pesole Graziano

    2009-09-01

    Full Text Available Abstract Background The conservation of sequences between related genomes has long been recognised as an indication of functional significance, and recognition of sequence homology is one of the principal approaches used in the annotation of newly sequenced genomes. In the context of recent findings that the number of non-coding transcripts in higher organisms is likely to be much higher than previously imagined, discrimination between conserved coding and non-coding sequences is a topic of considerable interest. Additionally, it should be considered desirable to discriminate between coding and non-coding conserved sequences without recourse to sequence similarity searches of protein databases, as such approaches exclude the identification of novel conserved proteins without characterized homologs and may be influenced by the presence in databases of sequences which are erroneously annotated as coding. Results Here we present a machine learning-based approach for the discrimination of conserved coding sequences. Our method calculates various statistics related to the evolutionary dynamics of two aligned sequences. These features are considered by a Support Vector Machine which designates the alignment coding or non-coding with an associated probability score. Conclusion We show that our approach is both sensitive and accurate with respect to comparable methods and illustrate several situations in which it may be applied, including the identification of conserved coding regions in genome sequences and the discrimination of coding from non-coding cDNA sequences.
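
    The classification step described in the Results paragraph, evolutionary statistics computed from a pairwise alignment fed to a Support Vector Machine that outputs a coding/non-coding label with a probability, looks roughly like the sketch below. The feature names and synthetic values are invented placeholders; the real method's features and training data come from the paper.

      import numpy as np
      from sklearn.svm import SVC

      # Hypothetical per-alignment features, e.g. substitution rates by codon
      # position or indel statistics; values here are synthetic placeholders.
      rng = np.random.default_rng(0)
      X_coding = rng.normal([0.10, 0.80, 0.05], 0.02, size=(20, 3))
      X_noncod = rng.normal([0.50, 0.30, 0.20], 0.05, size=(20, 3))
      X_train = np.vstack([X_coding, X_noncod])
      y_train = np.array([1] * 20 + [0] * 20)      # 1 = coding, 0 = non-coding

      clf = SVC(probability=True).fit(X_train, y_train)

      X_new = np.array([[0.11, 0.78, 0.06]])
      label = clf.predict(X_new)[0]
      p_coding = clf.predict_proba(X_new)[0][list(clf.classes_).index(1)]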

  1. The MC21 Monte Carlo Transport Code

    International Nuclear Information System (INIS)

    Sutton TM; Donovan TJ; Trumbull TH; Dobreff PS; Caro E; Griesheimer DP; Tyburski LJ; Carpenter DC; Joo H

    2007-01-01

    MC21 is a new Monte Carlo neutron and photon transport code currently under joint development at the Knolls Atomic Power Laboratory and the Bettis Atomic Power Laboratory. MC21 is the Monte Carlo transport kernel of the broader Common Monte Carlo Design Tool (CMCDT), which is also currently under development. The vision for CMCDT is to provide an automated, computer-aided modeling and post-processing environment integrated with a Monte Carlo solver that is optimized for reactor analysis. CMCDT represents a strategy to push the Monte Carlo method beyond its traditional role as a benchmarking tool or ''tool of last resort'' and into a dominant design role. This paper describes various aspects of the code, including the neutron physics and nuclear data treatments, the geometry representation, and the tally and depletion capabilities

  2. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  3. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    , Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important...... to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  4. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  5. Development of a computational system for radiotherapic planning with the IMRT technique applied to the MCNP computer code with 3D graphic interface for voxel models; Desenvolvimento de um sistema computacional para o planejamento radioterapico com a tecnica IMRT aplicado ao codigo MCNP com interface grafica 3D para modelos de voxel

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Telma Cristina Ferreira

    2009-07-01

    The Intensity Modulated Radiation Therapy - IMRT is an advanced treatment technique used worldwide in the oncology branch of medicine. In this master's work a software package for simulating the IMRT protocol, namely SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' - NRI at UFMG. The computational system SOFT-RT allows producing the absorbed dose simulation of the radiotherapic treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of CT or MRI digitalized images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and adjacent healthy tissues for each orientation and position of the beam planning. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on the tumoral region while preserving the healthy tissues around it. It takes into account where exactly the radiation beam passes through, which tissues are affected and how much dose is applied to each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing dose and voxel model in gray scale in a three-dimensional graphic representation. The present master thesis presents the new computational system for radiotherapic treatment - the SOFT-RT code - which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics packages. The Linux operational system was adopted with the goal of running it on an open source and freely accessible platform. Preliminary simulation results for a cerebral tumor case are reported, as well as some dosimetric evaluations. (author)

  6. Environmental performance of green building code and certification systems.

    Science.gov (United States)

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database of NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoule (TJ) of primary energy and 328 million liter of water over its life-cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models measured in life-cycle impact reduction were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to the behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).

  7. Failure to Replicate Depletion of Self-Control

    Science.gov (United States)

    Xu, Xiaomeng; Demos, Kathryn E.; Leahey, Tricia M.; Hart, Chantelle N.; Trautvetter, Jennifer; Coward, Pamela; Middleton, Kathryn R.; Wing, Rena R.

    2014-01-01

    The limited resource or strength model of self-control posits that the use of self-regulatory resources leads to depletion and poorer performance on subsequent self-control tasks. We conducted four studies (two with community samples, two with young adult samples) utilizing a frequently used depletion procedure (crossing out letters protocol) and the two most frequently used dependent measures of self-control (handgrip perseverance and modified Stroop). In each study, participants completed a baseline self-control measure, a depletion or control task (randomized), and then the same measure of self-control a second time. There was no evidence for significant depletion effects in any of these four studies. The null results obtained in four attempts to replicate using strong methodological approaches may indicate that depletion has more limited effects than implied by prior publications. We encourage further efforts to replicate depletion (particularly among community samples) with full disclosure of positive and negative results. PMID:25333564

  8. Failure to replicate depletion of self-control.

    Directory of Open Access Journals (Sweden)

    Xiaomeng Xu

    Full Text Available The limited resource or strength model of self-control posits that the use of self-regulatory resources leads to depletion and poorer performance on subsequent self-control tasks. We conducted four studies (two with community samples, two with young adult samples) utilizing a frequently used depletion procedure (crossing out letters protocol) and the two most frequently used dependent measures of self-control (handgrip perseverance and modified Stroop). In each study, participants completed a baseline self-control measure, a depletion or control task (randomized), and then the same measure of self-control a second time. There was no evidence for significant depletion effects in any of these four studies. The null results obtained in four attempts to replicate using strong methodological approaches may indicate that depletion has more limited effects than implied by prior publications. We encourage further efforts to replicate depletion (particularly among community samples) with full disclosure of positive and negative results.

  9. Depletion-induced biaxial nematic states of boardlike particles

    International Nuclear Information System (INIS)

    Belli, S; Van Roij, R; Dijkstra, M

    2012-01-01

    With the aim of investigating the stability conditions of biaxial nematic liquid crystals, we study the effect of adding a non-adsorbing ideal depletant on the phase behavior of colloidal hard boardlike particles. We take into account the presence of the depletant by introducing an effective depletion attraction between a pair of boardlike particles. At fixed depletant fugacity, the stable liquid-crystal phase is determined through a mean-field theory with restricted orientations. Interestingly, we predict that for slightly elongated boardlike particles a critical depletant density exists, where the system undergoes a direct transition from an isotropic liquid to a biaxial nematic phase. As a consequence, by tuning the depletant density, an easy experimental control parameter, one can stabilize states of high biaxial nematic order even when these states are unstable for pure systems of boardlike particles. (paper)

  10. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  11. Generalized perturbation theory for LWR depletion analysis and core design applications

    International Nuclear Information System (INIS)

    White, J.R.; Frank, B.R.

    1986-01-01

    A comprehensive time-dependent perturbation theory formulation that includes macroscopic depletion, thermal-hydraulic and poison feedback effects, and a criticality reset mechanism is developed. The methodology is compatible with most current LWR design codes. This new development allows GTP/DTP methods to be used quantitatively in a variety of realistic LWR physics applications that were not possible prior to this work. A GTP-based optimization technique for incore fuel management analyses is addressed as a promising application of the new formulation

  12. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
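
    The record does not include source code; the following is a minimal Python sketch of the generic write-inputs / run / read-outputs workflow the abstract describes. The executable name, file names, and file formats are hypothetical placeholders, and the real component is a compiled DLL driven by GoldSim rather than a script.

    import subprocess
    from pathlib import Path

    def run_external_code(inputs, workdir="run"):
        """Illustrative wrapper: write an input file, run an external code, read its outputs."""
        work = Path(workdir)
        work.mkdir(exist_ok=True)

        # 1. Create an input file for the external application (hypothetical format).
        with open(work / "model.inp", "w") as f:
            for name, value in inputs.items():
                f.write(f"{name} = {value}\n")

        # 2. Run the external code and wait for it to finish ("external_code" is a placeholder).
        subprocess.run(["external_code", "model.inp"], cwd=work, check=True)

        # 3. Read the outputs written by the external application and return them.
        outputs = {}
        with open(work / "model.out") as f:
            for line in f:
                name, value = line.split("=")
                outputs[name.strip()] = float(value)
        return outputs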

  13. Technological change, population dynamics, and natural resource depletion

    OpenAIRE

    Schaefer, Andreas

    2014-01-01

    In this paper, we integrate fertility and educational choices into a scale-invariant model of directed technological change with non-renewable natural resources, in order to reveal the interaction between population dynamics, technological change, and natural resource depletion. In line with empirical regularities, skill-biased technological change induces a decline in population growth and a transitory increase in the depletion rate of natural resources. In the long-run, the depletion rate a...

  14. Analytical Modeling of Unsteady Aluminum Depletion in Thermal Barrier Coatings

    OpenAIRE

    YEŞİLATA, Bülent

    2014-01-01

    The oxidation behavior of thermal barrier coatings (TBCs) in aircraft turbines is studied. A simple, unsteady, one-dimensional diffusion model based on aluminum depletion from a bond coat to form an oxide layer of Al2O3 is introduced. The model is employed for a case study with currently available experimental data. The diffusion coefficient of the depleted aluminum in the alloy, the concentration profiles at different oxidation times, and the thickness of the Al-depleted region are...
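
    The abstract does not reproduce the governing equation; as context, a one-dimensional unsteady diffusion model of this kind is generally built on Fick's second law (notation assumed here; the paper's specific initial and boundary conditions are not given in the record):

    \frac{\partial C}{\partial t} = D \, \frac{\partial^{2} C}{\partial x^{2}},

    where C(x, t) is the aluminum concentration in the bond coat and D the diffusion coefficient of aluminum in the alloy.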

  15. Shifting Codes: Education or Regulation? Trainee Teachers and the Code of Conduct and Practice in England

    Science.gov (United States)

    Spendlove, David; Barton, Amanda; Hallett, Fiona; Shortt, Damien

    2012-01-01

    In 2009, the General Teaching Council for England (GTCE) introduced a revised Code of Conduct and Practice (2009) for registered teachers. The code also applies to all trainee teachers who are provisionally registered with the GTCE and who could be liable to a charge of misconduct during their periods of teaching practice. This paper presents the…

  16. Applied optics

    International Nuclear Information System (INIS)

    Orszag, A.; Antonetti, A.

    1988-01-01

    The 1988 progress report of the Applied Optics Laboratory of the Polytechnic School (France) is presented. The optical fiber activities are focused on the development of an optical gyrometer containing a resonance cavity. The following domains are included in the research program: infrared laser physics, laser sources, semiconductor physics, multiple-photon ionization, and nonlinear optics. Investigations in the biomedical, biological, and biophysical domains are carried out. The published papers and the congress communications are listed [fr

  17. Quality assurance requirements in various codes and standards

    International Nuclear Information System (INIS)

    Shaaban, H.I.; EL-Sayed, A.; Aly, A.E.

    1987-01-01

    The quality assurance requirements in various countries and according to various international codes and standards are presented, compared, and critically discussed. The cases of developing countries are also discussed, and the use of the IAEA code of practice and other codes for quality assurance in these countries is reviewed. Recommendations are made regarding the quality assurance system to be applied to Egypt's nuclear power plants.

  18. Test Code Quality and Its Relation to Issue Handling Performance

    NARCIS (Netherlands)

    Athanasiou, D.; Nugroho, A.; Visser, J.; Zaidman, A.

    2014-01-01

    Automated testing is a basic principle of agile development. Its benefits include early defect detection, defect cause localization, and reduced fear of applying changes to the code. Therefore, maintaining high-quality test code is essential. This study introduces a model that assesses test code

  19. Raptor Codes for Use in Opportunistic Error Correction

    NARCIS (Netherlands)

    Zijnge, T.; Goseling, Jasper; Weber, Jos H.; Schiphorst, Roelof; Shao, X.; Slump, Cornelis H.

    2010-01-01

    In this paper a Raptor code is developed and applied in an opportunistic error correction (OEC) layer for Coded OFDM systems. Opportunistic error correction [3] tries to recover information when it is available with the least effort. This is achieved by using Fountain codes in a COFDM system, which
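
    The record does not describe the specific Raptor construction used; as background, Raptor codes build on LT-style fountain coding, in which each output symbol is the XOR of a randomly chosen set of source symbols whose number is drawn from a degree distribution. The Python sketch below illustrates that encoding step only; the degree distribution and block contents are toy values, and the precoding stage that distinguishes Raptor codes from plain LT codes is omitted.

    import random

    def lt_encode_symbol(source_blocks, degree_dist, rng=random):
        """Produce one LT-style fountain-coded output symbol (illustration only)."""
        degrees, probs = zip(*degree_dist)
        d = rng.choices(degrees, weights=probs, k=1)[0]      # draw a degree from the distribution
        chosen = rng.sample(range(len(source_blocks)), d)    # pick d distinct source blocks
        symbol = bytes(len(source_blocks[0]))                # all-zero block of the same length
        for i in chosen:
            symbol = bytes(a ^ b for a, b in zip(symbol, source_blocks[i]))
        return chosen, symbol

    # Toy example: four 4-byte source blocks and a small degree distribution.
    blocks = [b"\x01\x02\x03\x04", b"\x05\x06\x07\x08",
              b"\x09\x0a\x0b\x0c", b"\x0d\x0e\x0f\x10"]
    dist = [(1, 0.2), (2, 0.5), (3, 0.3)]
    print(lt_encode_symbol(blocks, dist))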

  20. ADORAVA - A computer code to sum random variables

    International Nuclear Information System (INIS)

    Fleming, P.V.; Oliveira, L.F.S. de; Senna, V.; Salles, M.R.

    1985-06-01

    The ADORAVA computer code was developed to determine the moments of the distribution of a sum of random variables when the moments of the individual variables are known. It was developed for application in probabilistic safety analysis, more specifically for uncertainty propagation in fault trees. A description of the ADORAVA algorithm, its input, examples, and the output of the compiled code are presented. (M.C.K.) [pt
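
    The abstract does not state ADORAVA's algorithm. As a generic illustration of the underlying task, for independent random variables the raw moments of a sum follow from the binomial expansion E[(X+Y)^n] = sum_k C(n,k) E[X^k] E[Y^(n-k)]; the Python sketch below applies that identity (the interface is hypothetical, and independence is assumed).

    from math import comb

    def sum_moments(mx, my):
        """Raw moments of X + Y for independent X and Y.

        mx, my: lists of raw moments [E[X^0], E[X^1], ..., E[X^n]] with mx[0] == my[0] == 1.
        Returns the raw moments of the sum up to the same order.
        """
        n = min(len(mx), len(my)) - 1
        return [sum(comb(r, k) * mx[k] * my[r - k] for k in range(r + 1))
                for r in range(n + 1)]

    # Example: two independent variables, each with mean 1 and variance 1
    # (second raw moment 2); the sum has mean 2 and second raw moment 6.
    print(sum_moments([1, 1, 2], [1, 1, 2]))  # -> [1, 2, 6]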

  1. Gas generation matrix depletion quality assurance project plan

    International Nuclear Information System (INIS)

    1998-01-01

    The Los Alamos National Laboratory (LANL) is to provide the necessary expertise, experience, equipment and instrumentation, and management structure to: Conduct the matrix depletion experiments using simulated waste for quantifying matrix depletion effects; and Conduct experiments on 60 cylinders containing simulated TRU waste to determine the effects of matrix depletion on gas generation for transportation. All work for the Gas Generation Matrix Depletion (GGMD) experiment is performed according to the quality objectives established in the test plan and under this Quality Assurance Project Plan (QAPjP)

  2. Producing, Importing, and Exporting Ozone-Depleting Substances

    Science.gov (United States)

    Overview page provides links to information on producing, importing, and exporting ozone-depleting substances, including information about the HCFC allowance system, importing, labeling, recordkeeping and reporting.

  3. Gas generation matrix depletion quality assurance project plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-05-01

    The Los Alamos National Laboratory (LANL) is to provide the necessary expertise, experience, equipment and instrumentation, and management structure to: Conduct the matrix depletion experiments using simulated waste for quantifying matrix depletion effects; and Conduct experiments on 60 cylinders containing simulated TRU waste to determine the effects of matrix depletion on gas generation for transportation. All work for the Gas Generation Matrix Depletion (GGMD) experiment is performed according to the quality objectives established in the test plan and under this Quality Assurance Project Plan (QAPjP).

  4. How Ego Depletion Affects Sexual Self-Regulation: Is It More Than Resource Depletion?

    Science.gov (United States)

    Nolet, Kevin; Rouleau, Joanne-Lucine; Benbouriche, Massil; Carrier Emond, Fannie; Renaud, Patrice

    2015-12-21

    Rational thinking and decision making are impacted when in a state of sexual arousal. The inability to self-regulate arousal can be linked to numerous problems, like sexual risk taking, infidelity, and sexual coercion. Studies have shown that most men are able to exert voluntary control over their sexual excitation with various levels of success. Both situational and dispositional factors can influence self-regulation achievement. The goal of this research was to investigate how ego depletion, a state of low self-control capacity, interacts with personality traits (propensities for sexual excitation and inhibition) and cognitive absorption to cause sexual self-regulation failure. The sexual responses of 36 heterosexual males were assessed using penile plethysmography. They were asked to control their sexual arousal in two conditions, with and without ego depletion. Results suggest that ego depletion has opposite effects based on trait sexual inhibition, as moderately inhibited individuals showed an increase in performance while highly inhibited ones showed a decrease. These results challenge the limited resource model of self-regulation and point to the importance of considering how people adapt to acute and highly challenging conditions.

  5. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  6. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.
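
    For reference, neither record reproduces the optimization problem; the standard convolutional sparse coding objective that such solvers address can be written as follows (notation assumed here, not quoted from the paper):

    \min_{\{d_k\},\{z_k\}} \; \tfrac{1}{2} \Big\| x - \sum_{k=1}^{K} d_k * z_k \Big\|_2^2 + \lambda \sum_{k=1}^{K} \| z_k \|_1 \quad \text{s.t.} \quad \| d_k \|_2 \le 1 \;\; \forall k,

    where x is the image data, d_k are the convolutional filters, z_k the sparse coefficient maps, * denotes convolution, and \lambda controls sparsity; a consensus formulation splits such a problem over subsets of the data while constraining the local filter copies to agree.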

  7. Propagation of Neutron Cross Section, Fission Yield, and Decay Data Uncertainties in Depletion Calculations

    Science.gov (United States)

    Martinez, J. S.; Zwermann, W.; Gallner, L.; Puente-Espel, F.; Cabellos, O.; Velkov, K.; Hannstein, V.

    2014-04-01

    Propagation of nuclear data uncertainties in reactor calculations is of interest for design purposes and library evaluation. Previous versions of the GRS XSUSA library propagated only neutron cross section uncertainties. We have extended the XSUSA uncertainty assessment capabilities by including propagation of fission yield and decay data uncertainties due to their relevance in depletion simulations. We apply this extended methodology to the UAM6 PWR Pin-Cell Burnup Benchmark, which involves uncertainty propagation through burnup.
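
    The abstract does not spell out the propagation scheme; XSUSA-type assessments are sampling-based, so a generic illustration is a simple Monte Carlo loop: perturb the nuclear data, rerun the depletion calculation, and take statistics over the responses. The Python sketch below assumes independent normal perturbations for brevity, whereas an actual XSUSA analysis samples from evaluated covariance data; all names and values are hypothetical.

    import random
    import statistics

    def propagate_uncertainty(run_depletion, nominal, rel_sigma, n_samples=200, rng=random):
        """Sampling-based uncertainty propagation (generic illustration).

        run_depletion: function mapping a dict of nuclear-data parameters to a
                       scalar response of interest (e.g., a nuclide density or k-inf).
        nominal:       dict of nominal parameter values.
        rel_sigma:     dict of relative standard deviations per parameter.
        """
        responses = []
        for _ in range(n_samples):
            # Perturb each parameter independently and evaluate the model.
            sample = {name: value * (1.0 + rng.gauss(0.0, rel_sigma[name]))
                      for name, value in nominal.items()}
            responses.append(run_depletion(sample))
        return statistics.mean(responses), statistics.stdev(responses)

    # Toy stand-in for a depletion code: an analytic response to two "cross sections".
    def toy_depletion(p):
        return 10.0 * p["fission_xs"] + 2.0 * p["capture_xs"]

    print(propagate_uncertainty(toy_depletion,
                                {"fission_xs": 1.0, "capture_xs": 0.5},
                                {"fission_xs": 0.02, "capture_xs": 0.05}))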

  8. Observed and simulated depletion layers with southward IMF

    Directory of Open Access Journals (Sweden)

    N. C. Maynard

    2004-06-01

    Full Text Available We present observations from the Polar satellite that confirm the existence of two types of depletion layers predicted under southward interplanetary magnetic field (IMF) conditions in magnetohydrodynamic simulations. The first depletion type occurs along the stagnation line when IMF BX and/or dipole tilt are/is present. Magnetic merging occurred away from the equator (Maynard et al., 2003) and flux pile-ups developed while the field lines drape to the high-latitude merging sites. This high-shear type of depletion is consistent with the depletion layer model suggested by Zwan and Wolf (1976) for low-shear northward IMF conditions. Expected sites for depletion layers are associated with places where IMF tubes of force first impinge upon the magnetopause. The second depletion type develops poleward of the cusp. Under strongly driven conditions, magnetic fields from Region 1 current closure over the lobes (Siscoe et al., 2002c) cause the high-latitude magnetopause to bulge outward, creating a shoulder above the cusp. These shoulders present the initial obstacle with which the IMF interacts. Flow is impeded, causing local flux pile-ups and low-shear depletion layers to form poleward of the cusps. Merging at the high-shear dayside magnetopause is consequently delayed. In both low- and high-shear cases, we show that the depletion layer structure is part of a slow mode wave standing in front of the magnetopause. As suggested by Southwood and Kivelson (1995), the depletions are rarefactions on the magnetopause side of slow-mode density compressions. While highly sheared magnetic fields are often used as proxies for ongoing local magnetic merging, depletion layers are prohibited at merging locations. Therefore, the existence of a depletion layer is evidence that the location of merging must be remote relative to the observation.

  9. Observed and simulated depletion layers with southward IMF

    Directory of Open Access Journals (Sweden)

    N. C. Maynard

    2004-06-01

    Full Text Available We present observations from the Polar satellite that confirm the existence of two types of depletion layers predicted under southward interplanetary magnetic field (IMF) conditions in magnetohydrodynamic simulations. The first depletion type occurs along the stagnation line when IMF BX and/or dipole tilt are/is present. Magnetic merging occurred away from the equator (Maynard et al., 2003) and flux pile-ups developed while the field lines drape to the high-latitude merging sites. This high-shear type of depletion is consistent with the depletion layer model suggested by Zwan and Wolf (1976) for low-shear northward IMF conditions. Expected sites for depletion layers are associated with places where IMF tubes of force first impinge upon the magnetopause. The second depletion type develops poleward of the cusp. Under strongly driven conditions, magnetic fields from Region 1 current closure over the lobes (Siscoe et al., 2002c) cause the high-latitude magnetopause to bulge outward, creating a shoulder above the cusp. These shoulders present the initial obstacle with which the IMF interacts. Flow is impeded, causing local flux pile-ups and low-shear depletion layers to form poleward of the cusps. Merging at the high-shear dayside magnetopause is consequently delayed. In both low- and high-shear cases, we show that the depletion layer structure is part of a slow mode wave standing in front of the magnetopause. As suggested by Southwood and Kivelson (1995), the depletions are rarefactions on the magnetopause side of slow-mode density compressions. While highly sheared magnetic fields are often used as proxies for ongoing local magnetic merging, depletion layers are prohibited at merging locations. Therefore, the existence of a depletion layer is evidence that the location of merging must be remote relative to the observation.

  10. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination with code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation from the computer's materiality. Cramer is thus the voice of a new 'code avant-garde'. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation

  11. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  12. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow the transversal implementation of a universal set of gates by gauge fixing, while error-detecting measurements involve only four or six qubits.

  13. Phonological coding during reading

    Science.gov (United States)

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eyetracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Order, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  14. Sensitivity Analysis of Depletion Parameters for Heat Load Evaluation of PWR Spent Fuel Storage Pool

    International Nuclear Information System (INIS)

    Kim, In Young; Lee, Un Chul

    2011-01-01

    As the necessity of safety re-evaluation for spent fuel storage facilities has been emphasized after the Fukushima accident, improving the accuracy of heat load evaluation has become more important for obtaining reliable thermal-hydraulic evaluation results. As groundwork, parametric and sensitivity analyses of various storage conditions for the Kori Unit 4 spent fuel storage pool and of spent fuel depletion parameters such as axial burnup effect, operation history, and specific power are conducted using the ORIGEN2 code. According to the heat load evaluation and the parametric sensitivity analyses, the decay heat of the last discharged fuel comprises up to 80.42% of the total heat load of the storage facility, and there is a negative correlation between the effect of the depletion parameters and the cooling period. Specific power is found to be the most influential parameter, with operation history the second most influential. The decay heat of just-discharged fuel varies from 0.34 to 1.66 times the average value, and the decay heat of fuel cooled for one year varies from 0.55 to 1.37 times the average value, depending on the specific power; that is, the depletion parameters can cause large variations in the calculated decay heat of short-term cooled fuel. Therefore, application of real operation data instead of user-selected values is needed to improve evaluation accuracy. It is expected that these results could be used to improve the accuracy of heat load assessment and to evaluate the uncertainty of the calculated heat load.
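
    The abstract reports variations but not how a sensitivity coefficient would be formed; a common convention, shown in the hedged Python sketch below, is the normalized one-at-a-time coefficient S = (dR/R)/(dp/p) estimated by a central finite difference around the nominal parameter set. The response model, parameter names, and values here are purely illustrative and are not taken from the study.

    def sensitivity_coefficient(model, params, name, rel_step=0.01):
        """Normalized one-at-a-time sensitivity coefficient (generic sketch)."""
        base = model(params)
        up = dict(params); up[name] = params[name] * (1.0 + rel_step)
        dn = dict(params); dn[name] = params[name] * (1.0 - rel_step)
        return (model(up) - model(dn)) / (2.0 * rel_step * base)

    # Toy response: decay heat assumed proportional to specific power and a weak
    # power of burnup (illustrative only).
    def decay_heat(p):
        return 2.0e5 * p["specific_power"] * p["burnup"] ** 0.3

    print(sensitivity_coefficient(decay_heat,
                                  {"specific_power": 38.0, "burnup": 45.0},
                                  "specific_power"))  # close to 1.0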

  15. Analysis of visual coding variables on CRT generated displays

    International Nuclear Information System (INIS)

    Blackman, H.S.; Gilmore, W.E.

    1985-01-01

    Cathode-ray-tube-generated safety parameter display systems in nuclear power plant control rooms have been found to be more effective when color coding is employed. Research has indicated strong support for graphic coding techniques, particularly in redundant coding schemes. In addition, findings on pictographs, as applied in coding schemes, indicate the need for careful application and for further research on the development of a standardized set of symbols

  16. MORSE Monte Carlo code

    Energy Technology Data Exchange (ETDEWEB)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  17. Bar Code Labels

    Science.gov (United States)

    1988-01-01

    American Bar Codes, Inc. developed special bar code labels for inventory control of space shuttle parts and other space system components. ABC labels are made in a company-developed anodized aluminum process and consecutively marked with bar code symbology and human-readable numbers. They offer extreme abrasion resistance and indefinite resistance to ultraviolet radiation, capable of withstanding 700 degree temperatures without deterioration and up to 1400 degrees with special designs. They offer high resistance to salt spray, cleaning fluids and mild acids. ABC is now producing these bar code labels commercially for industrial customers who also need labels to resist harsh environments.

  18. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown

  19. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  20. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  1. ARC Code TI: ACCEPT

    Data.gov (United States)

    National Aeronautics and Space Administration — ACCEPT consists of an overall software infrastructure framework and two main software components. The software infrastructure framework consists of code written to...

  2. Applied mathematics

    International Nuclear Information System (INIS)

    Nedelec, J.C.

    1988-01-01

    The 1988 progress report of the Applied Mathematics Center (Polytechnic School, France) is presented. The research fields of the Center are scientific computing, probability and statistics, and video image synthesis. The research topics developed are: the analysis of numerical methods, the mathematical analysis of fundamental models in physics and mechanics, the numerical solution of complex models related to industrial problems, stochastic calculus and Brownian motion, stochastic partial differential equations, the identification of adaptive filtering parameters, discrete element systems, statistics, stochastic control, and the development of image synthesis techniques for education and research programs. The published papers, the congress communications and the theses are listed [fr

  3. Depletion-induced biaxial nematic states of boardlike particles

    NARCIS (Netherlands)

    Belli, S; Dijkstra, M.; van Roij, R.H.H.G.

    2012-01-01

    With the aim of investigating the stability conditions of biaxial nematic liquid crystals, we study the effect of adding a non-adsorbing ideal depletant on the phase behavior of colloidal hard boardlike particles. We take into account the presence of the depletant by introducing an effective

  4. Effect of greenhouse gas emissions on stratospheric ozone depletion

    NARCIS (Netherlands)

    Velders GJM; LLO

    1997-01-01

    The depletion of the ozone layer is caused mainly by the increase in emissions of chlorine- and bromine-containing compounds like CFCs, halons, carbon tetrachloride, methyl chloroform and methyl bromide. Emissions of greenhouse gases can affect the depletion of the ozone layer through atmospheric

  5. Optimizing the Benefits of Conversion of Depleted Oil Reservoirs for ...

    African Journals Online (AJOL)

    Optimizing the Benefits of Conversion of Depleted Oil Reservoirs for Underground Natural Gas Storage in Nigeria. ... (1) utilization of the abandoned oil wells of known production histories; (2) recovery of substantial quantities of oil that otherwise might not have been recovered; (3) converting partially depleted oil reservoirs ...

  6. Edge Equilibrium Code (EEC) For Tokamaks

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xujling

    2014-02-24

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.
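
    For reference (the abstract does not reproduce it), the Grad-Shafranov equation solved by such equilibrium codes takes the standard form in cylindrical (R, Z) coordinates, with notation assumed here:

    \Delta^{*}\psi \equiv R \, \frac{\partial}{\partial R}\!\left( \frac{1}{R} \frac{\partial \psi}{\partial R} \right) + \frac{\partial^{2}\psi}{\partial Z^{2}} = -\mu_{0} R^{2} \frac{dp}{d\psi} - F \frac{dF}{d\psi},

    where \psi is the poloidal flux function, p(\psi) the plasma pressure, and F(\psi) = R B_{\phi} the poloidal current function.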

  7. QR Code: An Interactive Mobile Advertising Tool

    OpenAIRE

    Ela Sibel Bayrak Meydanoglu

    2013-01-01

    Easy and rapid interaction between consumers and marketers enabled by mobile technology has prompted an increase in the usage of mobile media as an interactive marketing tool in recent years. One of the mobile technologies that can be used in interactive marketing for advertising is the QR code (Quick Response Code). Interactive advertising brings some advantages for the companies that apply it. For example, interaction with consumers provides significant information about consumers' preference...

  8. Model code for energy conservation in new building construction

    Energy Technology Data Exchange (ETDEWEB)

    None

    1977-12-01

    In response to the recognized lack of existing consensus standards directed to the conservation of energy in building design and operation, the preparation and publication of such a standard was accomplished with the issuance of ASHRAE Standard 90-75 ''Energy Conservation in New Building Design,'' by the American Society of Heating, Refrigerating and Air-Conditioning Engineers, Inc., in 1975. This standard addressed itself to recommended practices for energy conservation, using both depletable and non-depletable sources. A model code for energy conservation in building construction has been developed, setting forth the minimum regulations found necessary to mandate such conservation. The code addresses itself to the administration, design criteria, systems elements, controls, service water heating and electrical distribution and use, both for depletable and non-depletable energy sources. The technical provisions of the document are based on ASHRAE 90-75 and it is intended for use by state and local building officials in the implementation of a statewide energy conservation program.

  9. On σ-LCD codes

    OpenAIRE

    Carlet, Claude; Mesnager, Sihem; Tang, Chunming; Qi, Yanfeng

    2017-01-01

    Linear complementary pairs (LCP) of codes play an important role in armoring implementations against side-channel attacks and fault injection attacks. One of the most common ways to construct LCP of codes is to use Euclidean linear complementary dual (LCD) codes. In this paper, we first introduce the concept of linear codes with σ complementary dual (σ-LCD), which includes known Euclidean LCD codes, Hermitian LCD codes, and Galois LCD codes. As Euclidean LCD codes, σ-LCD ...

  10. Latest improvements on TRACPWR six-equations thermohydraulic code

    International Nuclear Information System (INIS)

    Rivero, N.; Batuecas, T.; Martinez, R.; Munoz, J.; Lenhardt, G.; Serrano, P.

    1999-01-01

    The paper presents the latest improvements on TRACPWR aimed at adapting the code to present trends in computer platforms, architectures and training requirements, as well as extending the scope of the code itself and its applicability to technologies other than the Westinghouse PWR. First, the major features of TRACPWR as a best-estimate and real-time simulation code are summarized; then the areas where TRACPWR is being improved are presented. These areas comprise: (1) Architecture: integrating the TRACPWR and RELAP5 codes, (2) Code scope enhancement: modelling Mid-Loop operation, (3) Code speed-up: applying parallelization techniques, (4) Code platform downsizing: porting to the Windows NT platform, (5) On-line performance: allowing simulation initialisation from a Plant Process Computer, and (6) Code scope extension: using the code for modelling VVER and PHWR technology. (author)

  11. Dynamic Reverse Code Generation for Backward Execution

    DEFF Research Database (Denmark)

    Lee, Jooyong

    2007-01-01

    The need for backward execution in debuggers has been raised a number of times. Backward execution helps a user naturally think backwards and, in turn, easily locate the cause of a bug. Backward execution has been implemented mostly by state-saving or checkpointing, which are inherently not scalable. In this paper, we present a method to generate reverse code, so that backtracking can be performed by executing reverse code. The novelty of our work is that we generate reverse code on-the-fly, while running a debugger, which makes it possible to apply the method even to debugging multi-threaded programs.

  12. Electrical, instrumentation, and control codes and standards

    International Nuclear Information System (INIS)

    Kranning, A.N.

    1978-01-01

    During recent years numerous documents in the form of codes and standards have been developed and published to provide design, fabrication and construction rules and criteria applicable to instrumentation, control and power distribution facilities for nuclear power plants. The contents of this LTR were prepared by NUS Corporation under Subcontract K5108 and provide a consolidated index and listing of the documents selected for their application to procurement of materials and design of modifications and new construction at the LOFT facility. These codes and standards should be applied together with the National Electrical Code, the ID Engineering Standards and LOFT Specifications to all LOFT instrument and electrical design activities

  13. Possible Application of Wavefront Coding to the LSST

    Energy Technology Data Exchange (ETDEWEB)

    Langeveld, Willy; /SLAC

    2006-06-30

    Wavefront Coding has been applied as a means to increase the effective depth of focus of optical systems. In this note I discuss the potential for this technique to increase the depth of focus of the LSST and the resulting advantages for the construction and operation of the facility, as well as possible drawbacks. It may be possible to apply Wavefront Coding without changing the current LSST design, in which case Wavefront Coding might merit further study as a risk mitigation strategy.

  14. Analysis and Application of Whey Protein Depleted Skim Milk Systems

    DEFF Research Database (Denmark)

    Sørensen, Hanne

    homogenisation (UHPH). The microfiltration will result in a milk fraction more or less depleted of whey protein, and could probably, in combination with UHPH treatment, contribute to milk fractions and cheeses with novel micro- and macrostructures. These novel fractions could be used as new ingredients to improve … -destructive methods for this purpose. A significantly changed structure was observed in skim milk depleted or partly depleted of whey protein, acidified and UHPH treated. Some of the properties of the UHPH-treated skim milk depleted of whey protein observed in this study support the idea that UHPH treatment has … this. LF-NMR relaxation was utilised to obtain information about the water mobility (relaxation time) in diluted skim milk systems depleted of whey protein. The obtained results indicate that measuring relaxation times with LF-NMR could be difficult to utilise, since no clear relationship between …

  15. The Abiotic Depletion Potential: Background, Updates, and Future

    Directory of Open Access Journals (Sweden)

    Lauran van Oers

    2016-03-01

    Full Text Available Depletion of abiotic resources is a much disputed impact category in life cycle assessment (LCA). The reason is that the problem can be defined in different ways. Furthermore, within a specified problem definition, many choices can still be made regarding which parameters to include in the characterization model and which data to use. This article gives an overview of the problem definition and the choices that have been made when defining the abiotic depletion potentials (ADPs) for a characterization model for abiotic resource depletion in LCA. Updates of the ADPs since 2002 are also briefly discussed. Finally, some possible new developments of the impact category of abiotic resource depletion are suggested, such as redefining the depletion problem as a dilution problem. This means taking the reserves in the environment and the economy into account in the reserve parameter and using leakage from the economy, instead of extraction rate, as a dilution parameter.
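
    As context for the characterization model the article discusses, the abiotic depletion potential of a resource i is conventionally defined in the CML method as the ratio below (formula recalled from that method and its usual notation, not quoted from this article); antimony is commonly used as the reference resource:

    \mathrm{ADP}_i = \frac{DR_i / R_i^{2}}{DR_{\mathrm{ref}} / R_{\mathrm{ref}}^{2}},

    where R_i is the (ultimate) reserve of resource i and DR_i its extraction rate, with the same quantities for the reference resource in the denominator.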

  16. Enhanced viral activity and dark CO2 fixation rates under oxygen depletion: the case study of the marine Lake Rogoznica.

    Science.gov (United States)

    Rastelli, Eugenio; Corinaldesi, Cinzia; Petani, Bruna; Dell'Anno, Antonio; Ciglenečki, Irena; Danovaro, Roberto

    2016-12-01

    Global change is determining the expansion of marine oxygen-depleted zones, which are hot spots of microbial-driven biogeochemical processes. However, information on the functioning of the microbial assemblages and the role of viruses in such low-oxygen systems remains largely unknown. Here, we used the marine Rogoznica Lake as a natural model to investigate the possible consequences of oxygen depletion on virus-prokaryote interactions and prokaryotic metabolism in pelagic and benthic ecosystems. We found higher bacterial and archaeal abundances in oxygen-depleted than in oxic conditions, associated with higher heterotrophic carbon production, enzymatic activities and dark inorganic carbon fixation (DCF) rates. The oxygen-depleted systems were also characterized by higher viral abundance, production and virus-induced prokaryotic mortality. The highest relative contribution of DCF to total carbon production (> 30%) was found in oxygen-depleted systems, at the highest virus-induced prokaryotic mortality values (> 90%). Our results suggest that the higher rates of viral lysis in oxygen-depleted conditions can significantly enhance DCF by accelerating heterotrophic processes, organic matter cycling, and hence the supply of inorganic reduced compounds fuelling chemosynthesis. These findings suggest that the expansion of low-oxygen zones can trigger higher viral impacts on prokaryotic heterotrophic and chemoautotrophic metabolism, with cascading effects, neglected so far, on biogeochemical processes. © 2016 Society for Applied Microbiology and John Wiley & Sons Ltd.

  17. Clinical case of Mitochondrial DNA Depletion

    Directory of Open Access Journals (Sweden)

    A. V. Degtyareva

    2017-01-01

    Full Text Available The article reports a clinical case of early neonatal manifestation of a rare genetic disease – mitochondrial DNA depletion syndrome – confirmed in a laboratory in Russia. Mutations of FBXL4, which encodes an orphan mitochondrial F-box protein involved in the maintenance of mitochondrial DNA (mtDNA), ultimately lead to disruption of mtDNA replication and decreased activity of the mitochondrial respiratory chain complexes. This causes abnormalities in clinically affected tissues, above all the muscular system and the brain. In our case, hydronephrosis on the right, subependymal cysts of the brain, and partial intestinal obstruction accompanied by polyhydramnios were diagnosed antenatally. The baby's condition at birth was satisfactory but worsened dramatically towards the end of the first day of life. The clinical presentation included a sepsis-like symptom complex, neonatal depression, muscular hypotonia, persistent decompensated lactic acidosis, an increase in the concentration of mitochondrial markers in blood plasma and urine, and changes in the basal ganglia of the brain. Imaging of the brain by magnetic resonance imaging (MRI) demonstrated global volume loss, particularly of the subcortical and periventricular white matter, with significant abnormal signal in the bilateral basal ganglia and brainstem and associated delayed myelination. Differential diagnosis was carried out with hereditary diseases that present as a «sepsis-like» symptom complex accompanied by lactic acidosis: metabolic disorders of amino acids and organic acids, defects of fatty acid β-oxidation, mitochondrial respiratory chain disorders and glycogen storage disease. The diagnosis was confirmed after sequencing analysis of 62 mitochondrial genes by NGS (Next Generation Sequencing). The reported disease has an unfavorable prognosis; however, accurate diagnosis is very important for genetic counseling and helps prevent the birth of another affected child in the family.

  18. Interstellar Silicon Depletion and the Ultraviolet Extinction

    Science.gov (United States)

    Mishra, Ajay; Li, Aigen

    2018-01-01

    Spinning small silicate grains were recently invoked to account for the Galactic foreground anomalous microwave emission. These grains, if present, will absorb starlight in the far ultraviolet (UV). There is also renewed interest in attributing the enigmatic 2175 Å interstellar extinction bump to small silicates. To probe the role of silicon in the UV extinction, we explore the relations between the amount of silicon required to be locked up in silicates [Si/H]dust and the 2175 Å bump or the far-UV extinction rise, based on an analysis of the extinction curves along 46 Galactic sightlines for which the gas-phase silicon abundance [Si/H]gas is known. We derive [Si/H]dust either from [Si/H]ISM - [Si/H]gas or from the Kramers-Kronig relation which relates the wavelength-integrated extinction to the total dust volume, where [Si/H]ISM is the interstellar silicon reference abundance and taken to be that of the proto-Sun or B stars. We also derive [Si/H]dust from fitting the observed extinction curves with a mixture of amorphous silicates and graphitic grains. We find that in all three cases [Si/H]dust shows no correlation with the 2175 Å bump, while the carbon depletion [C/H]dust tends to correlate with the 2175 Å bump. This supports carbon grains instead of silicates as the possible carrier of the 2175 Å bump. We also find that neither [Si/H]dust nor [C/H]dust alone correlates with the far-UV extinction, suggesting that the far-UV extinction is a combined effect of small carbon grains and silicates.

  19. Barium Depletion in Hollow Cathode Emitters

    Science.gov (United States)

    Polk, James E.; Capece, Angela M.; Mikellides, Ioannis G.; Katz, Ira

    2009-01-01

    The effect of tungsten erosion, transport and redeposition on the operation of dispenser hollow cathodes was investigated in detailed examinations of the discharge cathode inserts from an 8200 hour and a 30,352 hour ion engine wear test. Erosion and subsequent re-deposition of tungsten in the electron emission zone at the downstream end of the insert reduces the porosity of the tungsten matrix, preventing the flow of barium from the interior. This inhibits the interfacial reactions of the barium-calcium-aluminate impregnant with the tungsten in the pores. A numerical model of barium transport in the internal xenon discharge plasma shows that the barium required to reduce the work function in the emission zone can be supplied from upstream through the gas phase. Barium that flows out of the pores of the tungsten insert is rapidly ionized in the xenon discharge and pushed back to the emitter surface by the electric field and drag from the xenon ion flow. This barium ion flux is sufficient to maintain a barium surface coverage at the downstream end greater than 0.6, even if local barium production at that point is inhibited by tungsten deposits. The model also shows that the neutral barium pressure exceeds the equilibrium vapor pressure of the impregnant decomposition reaction over much of the insert length, so the reactions are suppressed. Only a small region upstream of the zone blocked by tungsten deposits is active and supplies the required barium. These results indicate that hollow cathode failure models based on barium depletion rates in vacuum dispenser cathodes are very conservative.

  20. Deuterium depleted water. Romanian achievements and prospects

    International Nuclear Information System (INIS)

    Stefanescu, Ioan; Steflea, Dumitru; Titescu, Gheorghe; Tamaian, Radu

    2002-01-01

    Deuterium depleted water (DDW) is microbiologically pure distilled water with a deuterium content lower than that of natural waters, which amounts to 140 - 150 ppm D/(D+H); variations depend on geographical zone and altitude. The procedure for obtaining DDW is based on isotopic separation of natural water by vacuum distillation. The isotope concentration can be chosen within 20 to 120 ppm D/(D+H). The ICSI at Rm. Valcea has patented the procedure and equipment for the production of DDW. According to the document SF-01-2002/INC-DTCI - ICSI Rm. Valcea, the product has a D/(D+H) isotope concentration of 25 ± 5 ppm. Studies and research for finding the effects and methods of application in different fields were initiated and developed in collaboration with different institutes in Romania. The following important results obtained so far can be mentioned: - absence of toxicity to organisms; - activation of vascular reactivity; - enhancement of the defence capacity of the organism through non-specific immunity activation; - increase of salmonid reproduction capacity and enhancement of the adaptability of alevins to environmental conditions; - radioprotective effect against ionizing radiation; - maintaining meat freshness through osmotic shock; - stimulation of the growth of aquatic macrophytes; - enhancement of culture plant development in certain ontogenetic stages. Most of the results and practical applications of the research were patented and awarded gold medals at international invention fairs. At present, research and development programmes are under way to investigate the active biological features of DDW in fighting cancer, on the one hand, and its applicability as a food additive for pets or performing animals, on the other hand.

  1. Development of authentication code for multi-access optical code division multiplexing based quantum key distribution

    Science.gov (United States)

    Taiwo, Ambali; Alnassar, Ghusoon; Bakar, M. H. Abu; Khir, M. F. Abdul; Mahdi, Mohd Adzir; Mokhtar, M.

    2018-05-01

    A one-weight authentication code for multi-user quantum key distribution (QKD) is proposed. The code is developed for an Optical Code Division Multiplexing (OCDMA) based QKD network. A unique address assigned to each individual user, coupled with the diminishing probability of predicting the source of a qubit transmitted in the channel, offers an excellent security mechanism against any form of channel attack on an OCDMA-based QKD network. Flexibility in design as well as ease of modifying the number of users are equally exceptional qualities presented by the code, in contrast to the Optical Orthogonal Code (OOC) earlier implemented for the same purpose. The code was successfully applied to eight simultaneous users at an effective key rate of 32 bps over a 27 km transmission distance.

  2. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    Science.gov (United States)

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing the reconstruction of the current data point against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
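
    As a concrete illustration of the locality property discussed above, the Python sketch below computes kernel-based anchor weights in the spirit of local Gaussian coding: weights decay with squared distance to each anchor, so faraway anchors contribute little. The interface, parameter names, and normalization are assumptions made for the example, not the paper's exact scheme.

    import numpy as np

    def local_gaussian_coding(x, anchors, beta=1.0):
        """Kernel weights of anchors for a data point (illustrative sketch).

        x: (d,) data point; anchors: (m, d) anchor points; beta: decay parameter.
        Returns normalized weights of shape (m,).
        """
        d2 = np.sum((anchors - x) ** 2, axis=1)   # squared distances to anchors
        w = np.exp(-beta * d2)                    # Gaussian kernel: far anchors get tiny weight
        return w / w.sum()

    # Example: code a 2-D point against four anchors; nearby anchors dominate.
    anchors = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
    x = np.array([0.2, 0.3])
    w = local_gaussian_coding(x, anchors, beta=4.0)
    print(w)              # largest weight on the closest anchor
    print(anchors.T @ w)  # weighted combination of anchors used to represent x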

  3. Phyto remediation of Depleted Uranium from Contaminated Soil and Sediments

    International Nuclear Information System (INIS)

    Al-Saad, K.A.; Amr, M.A.

    2012-01-01

    Seedlings of sunflower (Helianthus annuus L.) were used to test the effect of pH, citric acid, phosphoric acid, and ethylene-diamine-tetraacetic acid (EDTA) on the uptake and translocation of depleted uranium (DU). The experiments were performed in hydroponic cultures and on environmental soil samples collected from Qatar. The results of the hydroponic experiment indicated that DU accumulated more in the roots than in the leaves of plants grown in contaminated water. The presence of phosphoric acid, citric acid, or EDTA resulted in different patterns of DU uptake. A higher transfer factor was observed when phosphoric acid was added. When EDTA was added, higher DU uptake was observed, and the data suggested that DU was mostly retained in the roots. The experiments were also applied to environmental soil samples collected from Qatar. The presence of phosphoric acid, citric acid, or EDTA showed different patterns of DU uptake for the three different soil samples. The addition of EDTA increased the DU uptake of the sunflowers planted in the three types of soil. The results indicated that, generally, DU accumulated more in the roots compared to the leaves and stems, except when the soil was spiked with phosphoric acid. The translocation ratio was limited but highest (1.4) in the sunflower planted in soil S2705 when spiked with phosphoric acid. In the three soils tested, the results suggested higher DU translocation of sunflower in the presence of phosphoric acid.

  4. Applying radiation

    International Nuclear Information System (INIS)

    Mallozzi, P.J.; Epstein, H.M.; Jung, R.G.; Applebaum, D.C.; Fairand, B.P.; Gallagher, W.J.; Uecker, R.L.; Muckerheide, M.C.

    1979-01-01

    The invention discloses a method and apparatus for applying radiation by producing X-rays of a selected spectrum and intensity and directing them to a desired location. Radiant energy is directed from a laser onto a target to produce such X-rays at the target, which is so positioned adjacent to the desired location as to emit the X-rays toward the desired location; or such X-rays are produced in a region away from the desired location, and are channeled to the desired location. The radiant energy directing means may be shaped (as with bends; adjustable, if desired) to circumvent any obstruction between the laser and the target. Similarly, the X-ray channeling means may be shaped (as with fixed or adjustable bends) to circumvent any obstruction between the region where the X-rays are produced and the desired location. For producing a radiograph in a living organism the X-rays are provided in a short pulse to avoid any blurring of the radiograph from movement of or in the organism. For altering tissue in a living organism the selected spectrum and intensity are such as to affect substantially the tissue in a preselected volume without injuring nearby tissue. Typically, the selected spectrum comprises the range of about 0.1 to 100 keV, and the intensity is selected to provide about 100 to 1000 rads at the desired location. The X-rays may be produced by stimulated emission thereof, typically in a single direction

  5. The influence of fog parameters on aerosol depletion measured in the KAEVER experiments

    International Nuclear Information System (INIS)

    Poss, G.; Weber, D.; Fritsche, B.

    1995-01-01

    The release of radioactive aerosols into the environment is one of the most serious hazards in the case of an accident in a nuclear power plant. Many efforts have been made in the past in numerous experimental programs such as NSPP, DEMONA, VANAM, LACE and MARVIKEN, and others are still underway, to improve the knowledge of aerosol behavior and depletion in a reactor containment in order to estimate the possible source term and to validate computer codes. In the German single-compartment KAEVER facility the influence of size distribution, morphology, composition and solubility on aerosol behavior is investigated. One of the more specific aims is to learn about "wet depletion", meaning the aerosol depletion behavior in condensing atmospheres. There are no known experiments in which the fog parameters, such as droplet size distribution, volume concentration and airborne liquid water content, have been measured in-line and on-line explicitly. To the authors' knowledge, the use of the Battelle FASP photometer, which was developed especially for this purpose, for the first time gives insight into condensation behavior under accident-typical thermal-hydraulic conditions. It delivers a basis for code validation in terms of a real comparison of measurements and calculations. The paper presents results from "wet depletion" aerosol experiments demonstrating how the depletion velocity depends on the fog parameters, and where critical fog parameters evidently seem to change the regime from a "pseudo dry depletion" at a relative humidity of 100% but virtually no or very low airborne liquid water content to a real "wet depletion" in the presence of fogs of varying densities. Characteristics are outlined of how soluble and insoluble particles as well as aerosol mixtures behave under condensing conditions.

  6. Quantitative evaluation of radiation dose rates for depleted uranium in PRIDE facility

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Il Je; Sim, Jee Hyung; Kim, Yong Soo [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2016-12-15

    Radiation dose rates in the PRIDE facility are evaluated quantitatively to assess the radiation safety of workers, because large amounts of depleted uranium are handled in the PRIDE facility, even though the direct radiation from depleted uranium is very low and will not normally expose a worker to significant amounts of external radiation. The ORIGEN-ARP code was used for calculating the neutron and gamma source terms generated from depleted uranium (DU), and the MCNP5 code was used for calculating the neutron and gamma fluxes and dose rates. The neutron and gamma fluxes and dose rates due to DU on a spherical surface of 30 cm radius were calculated with variation of the DU mass and density. In this calculation, an imaginary case in which the DU density is zero was added to check the self-shielding effect of DU; in this case, the DU sphere was modeled as a point. For DU mixed with 50-250 g of molten salt, the neutron and gamma fluxes were calculated respectively. It was found that the molten salt content in the DU had little effect on the neutron and gamma fluxes. The neutron and gamma fluxes, under the respective conditions of 1 and 5 kg DU mass and 5 and 19.1 g·cm⁻³ DU density, were calculated with the molten salt (LiCl+KCl) fixed at 50 g, and compared with the source term. As a result, a similar tendency with the variation of DU mass and density was found in the neutron and gamma fluxes when compared with the source spectra, except for their magnitudes. For a DU mass over 5 kg, the dose rate was shown to be higher than the environmental dose rate. From these results, it is concluded that if a worker performs an experiment with more than 5 kg of DU, the worker should be careful not to be exposed to the radiation.

  7. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    We welcome Tanya Stivers’s discussion (Stivers, 2015/this issue) of coding social interaction and find that her descriptions of the processes of coding open up important avenues for discussion, among other things of the precise ad hoc considerations that researchers need to bear in mind, both when...

  8. Error Correcting Codes - The Hamming Codes

    Indian Academy of Sciences (India)

    Priti Shankar, Indian Institute of Science, Bangalore; her interests are in Theoretical Computer Science. Series article: Error Correcting Codes 2 - The Hamming Codes. In the first article of this series we showed how redundancy introduced into a message transmitted over a noisy channel could improve the reliability of transmission.
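
    As a concrete illustration of the idea summarized above (this sketch is not taken from the Resonance article itself), the following minimal Python example encodes 4 data bits into a 7-bit Hamming codeword and corrects a single bit flip by syndrome decoding.

```python
import numpy as np

# Generator and parity-check matrices of one standard (7,4) Hamming code.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """Encode 4 data bits into a 7-bit codeword (mod-2 arithmetic)."""
    return (np.array(bits4) @ G) % 2

def correct(word7):
    """Correct at most one bit error using the syndrome and return the corrected word."""
    w = np.array(word7).copy()
    syndrome = (H @ w) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position.
        for pos in range(7):
            if np.array_equal(H[:, pos], syndrome):
                w[pos] ^= 1
                break
    return w

codeword = encode([1, 0, 1, 1])
received = codeword.copy()
received[2] ^= 1                        # flip one bit to simulate channel noise
assert np.array_equal(correct(received), codeword)
print("corrected:", correct(received))
```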

  9. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    set up a well defined goal - that of achieving a performance bound set by the noisy channel coding theorem, proved in the paper. Whereas the goal appeared elusive twenty-five years ago, today there are practical codes and decoding algorithms that come close to achieving it. It is interesting to note that all known...

  10. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March 1997, pp. 33-47. Permanent link: http://www.ias.ac.in/article/fulltext/reso/002/03/0033-0047

  11. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  12. Codes of Conduct

    Science.gov (United States)

    Million, June

    2004-01-01

    Most schools have a code of conduct, pledge, or behavioral standards, set by the district or school board with the school community. In this article, the author features some schools that created a new vision of instilling codes of conduct in students based on work quality, respect, safety and courtesy. She suggests that communicating the code…

  13. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Department of Computer Science and Automation, IISc. Their research addresses ... The fifty-five-year-old history of error correcting codes began with Claude Shannon's path-breaking paper entitled 'A ... given the limited computing power available then, Gallager's codes were not considered practical. A landmark...

  14. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 3, March 1997. Author affiliation: Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India.

  15. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Decoding Codes on Graphs - Low Density Parity Check Codes. A S Madhu, Aditya Nori. General Article, Resonance – Journal of Science Education, Volume 8, Issue 9, September 2003, pp. 49-59.

  16. READING A NEURAL CODE

    NARCIS (Netherlands)

    BIALEK, W; RIEKE, F; VANSTEVENINCK, RRD; WARLAND, D

    1991-01-01

    Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task - extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from

  17. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully verified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  18. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. "agreement is within 10%". A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)

  19. BEAVRS full core burnup calculation in hot full power condition by RMC code

    International Nuclear Information System (INIS)

    Liu, Shichang; Liang, Jingang; Wu, Qu; Guo, JuanJuan; Huang, Shanfang; Tang, Xiao; Li, Zeguang; Wang, Kan

    2017-01-01

    Highlights: • TMS and thermal scattering interpolation were developed to treat cross sections OTF. • A hybrid coupling system was developed for HFP burnup calculation of the BEAVRS benchmark. • Domain decomposition was applied to handle the memory problem of full-core burnup. • Critical boron concentration with burnup by RMC agrees with the benchmark results. • RMC is capable of multi-physics coupling for simulations of nuclear reactors at HFP. - Abstract: The Monte Carlo method can provide high-fidelity neutronics analysis of different types of nuclear reactors, owing to its advantages of flexible geometry modeling and the use of continuous-energy nuclear cross sections. However, nuclear reactors are complex systems with multi-physics interactions and coupling. MC codes can be coupled with a depletion solver and thermal-hydraulics (T/H) codes simultaneously for "transport-burnup-thermal-hydraulics" coupling calculations. MIT BEAVRS is a typical "transport-burnup-thermal-hydraulics" coupling benchmark. In this paper, RMC was coupled with the sub-channel code COBRA, equipped with on-the-fly temperature-dependent cross section treatment and large-scale detailed burnup calculation based on domain decomposition. RMC was then applied to the full-core burnup calculation of the BEAVRS benchmark in the hot full power (HFP) condition. The numerical tests show that the domain decomposition method achieves results consistent with the original version of RMC while enlarging the tractable number of burnup regions. The HFP results from RMC agree well with the reference values of the BEAVRS benchmark and also agree well with those of MC21. This work proves the feasibility and accuracy of RMC in multi-physics coupling and lifecycle simulations of nuclear reactors.
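
    The kind of neutronics/thermal-hydraulics feedback loop described above can be pictured with a deliberately simplified fixed-point (Picard) iteration; everything here (the one-node Doppler-style feedback model, the coefficients, the convergence tolerance) is an illustrative assumption, not the actual RMC/COBRA coupling.

```python
# Toy Picard iteration between a "neutronics" solve and a "thermal-hydraulics" update.
# All models and constants below are assumptions chosen only to illustrate the loop.

P_NOMINAL = 1.0      # normalized power
T_REF = 580.0        # reference fuel temperature (K), assumed
ALPHA = -2.0e-4      # Doppler-style power feedback coefficient per K, assumed
GAIN = 300.0         # fuel temperature rise (K) per unit normalized power, assumed
T_COOLANT = 560.0    # coolant temperature (K), assumed

def neutronics(temperature):
    """Mock transport solve: power decreases as fuel temperature rises."""
    return P_NOMINAL * (1.0 + ALPHA * (temperature - T_REF))

def thermal_hydraulics(power):
    """Mock T/H solve: fuel temperature from a simple heat balance."""
    return T_COOLANT + GAIN * power

def coupled_solve(tol=1e-8, max_iter=100):
    temperature = T_REF
    for it in range(max_iter):
        power = neutronics(temperature)
        new_temperature = thermal_hydraulics(power)
        if abs(new_temperature - temperature) < tol:
            return power, new_temperature, it + 1
        temperature = new_temperature
    raise RuntimeError("Picard iteration did not converge")

power, temperature, iters = coupled_solve()
print(f"converged in {iters} iterations: power={power:.6f}, T_fuel={temperature:.2f} K")
```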

  20. Synthetic liquid fuels development: assessment of critical factors. Volume III. Coal resource depletion

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, E.M.; Yabroff, I.W.; Kroll, C.A.; White, R.K.; Walton, B.L.; Ivory, M.E.; Fullen, R.E.; Weisbecker, L.W.; Hays, R.L.

    1977-01-01

    While US coal resources are known to be vast, their rate of depletion in a future based predominantly on coal has not been examined analytically heretofore. The Coal Depletion Model inventories the coal resource on a regional basis and calculates the cost of coal extraction for three technologies - strip mining, underground mining, and in-situ combustion. A plausible coal demand scenario extending from 1975 to the year 2050 is used as a basis in applying the model. In the year 2050, plants in operation include 285 syncrude plants, each producing 100,000 B/D; 312 SNG plants, each producing 250 million SCF/D; and 722 coal-fired electric power plants, each of 1000 MW capacity. In addition, there is 890 million tons per year of industrial coal consumption. Such a high level of coal use would deplete US coal resources much more rapidly than most people appreciate. Of course, the actual amount of US coal is unknown, and if the coal in the hypothetical reliability category is included, depletion is delayed. Coal in this category, however, has not been mapped; it is only presumed to exist on the basis of geological theory. The coal resource depletion model shows that unilateral imposition of a severance tax by a state tends to shift production to other coal producing regions. Boom and bust cycles are both delayed and reduced in their magnitude. When several states simultaneously impose severance taxes, the effect of each is weakened. Key policy issues that emerge from this analysis concern the need to reduce the uncertainty in the magnitude and geographic distribution of the US coal resource, and the need to stimulate interaction among the parties at interest to work out equitable and acceptable coal conversion plant location strategies capable of coping with the challenges of a high-coal future.

  1. GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis

    Science.gov (United States)

    Kennedy, Christopher Brandon

    Introduced in this dissertation is a novel approach that forms a reduced-order model (ROM), based on subspace methods, that allows for the generation of response sensitivity profiles without the need to set up or solve the inhomogeneous generalized perturbation theory (GPT) equations. The new approach, denoted hereinafter as the generalized perturbation theory free (GPT-Free) approach, computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error associated with the ROM is quantified by means of a Wilks' order statistics error metric denoted by the kappa-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g., when forward sensitivity analyses are computationally overwhelming. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT (inhomogeneous) capabilities unless envisioned during code development. Additionally, codes that use a stochastic algorithm, i.e., Monte Carlo methods, may have difficult or undefined GPT equations. When GPT calculations are available through software, the aforementioned efficiency gained from the GPT approach diminishes when the model has both many output responses and many input parameters. The GPT-Free approach addresses these limitations, first by only requiring the ability to compute the fundamental adjoint from perturbation theory, and second by constructing a ROM from fundamental adjoint calculations, constraining input parameters to a subspace. This approach bypasses the requirement to perform GPT calculations while simultaneously reducing the number of simulations required. In addition to the reduction of simulations, a major benefit of the GPT-Free approach is explicit control of the reduced order
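
    To make the subspace idea concrete, here is a hedged sketch (a toy model with assumed dimensions, not the dissertation's implementation) of building a reduced-order input subspace from a handful of adjoint-style gradient evaluations and then representing new sensitivity profiles inside that subspace.

```python
import numpy as np

rng = np.random.default_rng(1)

n_params = 200    # size of the input-parameter space (e.g., cross sections), assumed
n_samples = 30    # number of gradient evaluations used to build the subspace, assumed
rank = 5          # assumed effective dimensionality of the toy model

# Toy model: responses depend on the parameters only through a low-dimensional subspace.
A_true = rng.normal(size=(rank, n_params))
def response_gradient(x):
    """Stand-in for a fundamental-adjoint-based gradient of a response at parameter point x."""
    return A_true.T @ (A_true @ x)

# 1) Sample gradients at random parameter points and stack them as columns.
G = np.column_stack([response_gradient(rng.normal(size=n_params)) for _ in range(n_samples)])

# 2) SVD of the gradient matrix; the leading left singular vectors span the active subspace.
U, s, _ = np.linalg.svd(G, full_matrices=False)
r = int(np.sum(s > 1e-10 * s[0]))   # numerical rank (recovers `rank` for this toy model)
U_r = U[:, :r]

# 3) Any new sensitivity profile can now be represented by r coefficients instead of n_params.
g_new = response_gradient(rng.normal(size=n_params))
coeffs = U_r.T @ g_new
reconstruction_error = np.linalg.norm(U_r @ coeffs - g_new) / np.linalg.norm(g_new)
print(f"subspace rank = {r}, relative reconstruction error = {reconstruction_error:.2e}")
```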

  2. IAEA Coordinated Research Project on HTGR Reactor Physics, Thermal-hydraulics and Depletion Uncertainty Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Strydom, Gerhard [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bostelmann, F. [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on

  3. Economic aspects and models for building codes

    DEFF Research Database (Denmark)

    Bonke, Jens; Pedersen, Dan Ove; Johnsen, Kjeld

    It is the purpose of this bulletin to present an economic model for estimating the consequence of new or changed building codes. The object is to allow comparative analysis in order to improve the basis for decisions in this field. The model is applied in a case study.

  4. NET IBK Computer code package for the needs of planning, construction and operation of nuclear power plants

    International Nuclear Information System (INIS)

    Matausek, M.V.; Kocic, A.; Marinkovic, N.; Milosevic, M.; Stancic, V.

    1978-01-01

    Within the Nuclear Engineering Laboratory of the Boris Kidric Institute of Nuclear Sciences (NET IBK), systematic work has been performed on collecting nuclear data for reactor calculation needs, on developing in-house methods and computer programs for reactor calculations, as well as on adapting and applying foreign methods and codes. In this way a complete library of computer programs was formed for precise prediction of nuclear fuel burnup and depletion, for evaluation of the power distribution variations with irradiation, for computing the amount of produced plutonium and its number densities, etc. Programs for siting evaluation and for different types of safety and economic analysis have been developed as well. The aim of this paper is to present our ability to perform the complex computations needed for planning, constructing and operating nuclear power plants, by describing the NET IBK computer program package. (author)

  5. The TESS [Tandem Experiment Simulation Studies] computer code user's manual

    International Nuclear Information System (INIS)

    Procassini, R.J.

    1990-01-01

    TESS (Tandem Experiment Simulation Studies) is a one-dimensional, bounded particle-in-cell (PIC) simulation code designed to investigate the confinement and transport of plasma in a magnetic mirror device, including tandem mirror configurations. Mirror plasmas may be modeled in a system which includes an applied magnetic field and/or a self-consistent or applied electrostatic potential. The PIC code TESS is similar to the PIC code DIPSI (Direct Implicit Plasma Surface Interactions) which is designed to study plasma transport to and interaction with a solid surface. The codes TESS and DIPSI are direct descendants of the PIC code ES1 that was created by A. B. Langdon. This document provides the user with a brief description of the methods used in the code and a tutorial on the use of the code. 10 refs., 2 tabs
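
    For orientation only, the following self-contained sketch shows the basic particle-in-cell cycle (charge deposition, field solve, particle push) for a 1-D periodic electrostatic plasma in assumed normalized units; it is a generic textbook-style illustration, not the TESS code, which additionally handles bounded mirror geometries, applied magnetic fields, and applied or self-consistent potentials.

```python
import numpy as np

# Generic 1-D periodic electrostatic PIC in normalized units. All values are assumptions.
NG, L = 64, 2.0 * np.pi        # grid cells, domain length
NP = 20000                     # number of macro-particles (electrons)
DT, STEPS = 0.1, 200           # time step, number of steps
dx = L / NG
weight = L / NP                # macro-particle weight so the mean electron density is 1

rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, NP)    # positions
v = rng.normal(0.0, 1.0, NP)   # velocities
qm = -1.0                      # electron charge-to-mass ratio in normalized units

k = np.fft.fftfreq(NG, d=dx) * 2.0 * np.pi
k[0] = 1.0                     # placeholder to avoid division by zero (mode zeroed below)

def deposit(x):
    """Cloud-in-cell deposition: electron density plus a uniform neutralizing ion background."""
    g = x / dx
    i = np.floor(g).astype(int) % NG
    frac = g - np.floor(g)
    n_e = np.zeros(NG)
    np.add.at(n_e, i, 1.0 - frac)
    np.add.at(n_e, (i + 1) % NG, frac)
    n_e *= weight / dx
    return 1.0 - n_e           # net charge density (ions +1, electrons -n_e)

def solve_field(rho):
    """Solve dE/dx = rho with FFTs on the periodic grid."""
    rho_k = np.fft.fft(rho)
    E_k = rho_k / (1j * k)
    E_k[0] = 0.0
    return np.real(np.fft.ifft(E_k))

for _ in range(STEPS):
    E = solve_field(deposit(x))
    g = x / dx
    i = np.floor(g).astype(int) % NG
    frac = g - np.floor(g)
    E_p = E[i] * (1.0 - frac) + E[(i + 1) % NG] * frac   # gather field at particles
    v += qm * E_p * DT                                   # accelerate
    x = (x + v * DT) % L                                 # move with periodic wrap

print("mean kinetic energy per particle:", 0.5 * np.mean(v * v))
```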

  6. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  7. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  8. It is chloride depletion alkalosis, not contraction alkalosis.

    Science.gov (United States)

    Luke, Robert G; Galla, John H

    2012-02-01

    Maintenance of metabolic alkalosis generated by chloride depletion is often attributed to volume contraction. In balance and clearance studies in rats and humans, we showed that chloride repletion in the face of persisting alkali loading, volume contraction, and potassium and sodium depletion completely corrects alkalosis by a renal mechanism. Nephron segment studies strongly suggest the corrective response is orchestrated in the collecting duct, which has several transporters integral to acid-base regulation, the most important of which is pendrin, a luminal Cl⁻/HCO₃⁻ exchanger. Chloride depletion alkalosis should replace the notion of contraction alkalosis.

  9. Depletion and Development: Natural Resource Supply with Endogenous Field Opening

    OpenAIRE

    Anthony J. Venables

    2014-01-01

    Supply of a nonrenewable resource adjusts through two margins: the rate at which new fields are opened and the rate of depletion of open fields. The paper combines these margins in a model in which there is a continuum of fields with varying capital costs. Opening a new field involves sinking a capital cost, and the date of opening is chosen to maximize the present value of the field. Depletion of each open field follows a Hotelling rule, modified by the fact that faster depletion reduces the...

  10. Depletion interaction of casein micelles and an exocellular polysaccharide

    Science.gov (United States)

    Tuinier, R.; Ten Grotenhuis, E.; Holt, C.; Timmins, P. A.; de Kruif, C. G.

    1999-07-01

    Casein micelles become mutually attractive when an exocellular polysaccharide produced by Lactococcus lactis subsp. cremoris NIZO B40 (hereafter called EPS) is added to skim milk. The attraction can be explained as a depletion interaction between the casein micelles induced by the nonadsorbing EPS. We used three scattering techniques (small-angle neutron scattering, turbidity measurements, and dynamic light scattering) to measure the attraction. In order to connect the theory of depletion interaction with experiment, we calculated structure factors of hard spheres interacting by a depletion pair potential. Theoretical predictions and all the experiments showed that casein micelles became more attractive upon increasing the EPS concentration.
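
    The depletion attraction invoked above can be illustrated with the classical Asakura-Oosawa form of the pair potential between two hard spheres in a dilute solution of non-adsorbing polymer; the numerical values below (micelle radius, depletant radius, concentration) are illustrative assumptions, not the EPS/casein parameters of the study.

```python
import numpy as np

# Illustrative parameters (assumed, not fitted to the casein/EPS system).
R = 100e-9               # hard-sphere (casein micelle) radius, m
delta = 20e-9            # depletant (polymer) radius, m
n_dep = 5e21             # depletant number density, 1/m^3
kT = 1.380649e-23 * 298  # thermal energy at 298 K, J

def overlap_volume(r):
    """Overlap volume of the two depletion layers for centre-to-centre distance r."""
    Rd = R + delta
    v = (4.0 / 3.0) * np.pi * Rd**3 * (1.0 - 3.0 * r / (4.0 * Rd) + r**3 / (16.0 * Rd**3))
    return np.where((r >= 2 * R) & (r <= 2 * Rd), v, 0.0)

def ao_potential(r):
    """Asakura-Oosawa depletion potential: -(ideal osmotic pressure) x overlap volume."""
    u = -n_dep * kT * overlap_volume(r)
    return np.where(r < 2 * R, np.inf, u)   # hard-core repulsion inside contact

r = np.linspace(2 * R, 2 * (R + delta), 5)
for ri, ui in zip(r, ao_potential(r)):
    print(f"r = {ri * 1e9:6.1f} nm   U/kT = {ui / kT:7.3f}")
```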

  11. Laser propagation code study

    OpenAIRE

    Rockower, Edward B.

    1985-01-01

    A number of laser propagation codes have been assessed as to their suitability for modeling Army High Energy Laser (HEL) weapons used in an anti-sensor mode. We identify a number of areas in which systems analysis HEL codes are deficient. Most notably, available HEL scaling law codes model the laser aperture as circular, possibly with a fixed (e.g. 10%) obscuration. However, most HELs have rectangular apertures with up to 30% obscuration. We present a beam-quality/aperture shape scaling rela...

  12. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with a neutron transport equation which includes the one dimensional plane geometry problems, the one dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers problems which can be solved; eigenvalue problems; outer iteration loop; inner iteration loop; and finite difference solution procedures. The input and output data for ANISN is also discussed. Two dimensional problems such as the DOT code are given. Finally, an overview of the Monte-Carlo methods and codes are elaborated on

  13. Decoding the productivity code

    DEFF Research Database (Denmark)

    Hansen, David

    , that is, the productivity code of the 21st century, is dissolved. Today, organizations are pressured for operational efficiency, often in terms of productivity, due to increased global competition, demographical changes, and use of natural resources. Taylor’s principles for rationalization founded...... that swing between rationalization and employee development. The productivity code is the lack of alternatives to this ineffective approach. This thesis decodes the productivity code based on the results from a 3-year action research study at a medium-sized manufacturing facility. During the project period...

  14. CALIPSOS code report

    International Nuclear Information System (INIS)

    Fanselau, R.W.; Thakkar, J.G.; Hiestand, J.W.; Cassell, D.S.

    1980-04-01

    CALIPSOS is a steady-state three-dimensional flow distribution code which predicts the fluid dynamics and heat transfer interactions of the secondary two-phase flow in a steam generator. The mathematical formulation is sufficiently general to accommodate two fluid models described by separate gas and liquid momentum equations. However, if the user selects the homogeneous flow option, the code automatically equates the gas and liquid phase velocities (thereby reducing the number of momentum equations solved to three) and utilizes a homogeneous density mixture. This report presents the basic features of the CALIPSOS code and includes assumptions, equations solved, the finite-difference grid, and highlights of the solution procedure

  15. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption-essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  16. Deep-depletion physics-based analytical model for scanning capacitance microscopy carrier profile extraction

    International Nuclear Information System (INIS)

    Wong, K. M.; Chim, W. K.

    2007-01-01

    An approach for fast and accurate carrier profiling using deep-depletion analytical modeling of scanning capacitance microscopy (SCM) measurements is shown for an ultrashallow p-n junction with a junction depth of less than 30 nm and a profile steepness of about 3 nm per decade change in carrier concentration. In addition, the analytical model is also used to extract the SCM dopant profiles of three other p-n junction samples with different junction depths and profile steepnesses. The deep-depletion effect arises from rapid changes in the bias applied between the sample and probe tip during SCM measurements. The extracted carrier profile from the model agrees reasonably well with the more accurate carrier profile from inverse modeling and the dopant profile from secondary ion mass spectroscopy measurements
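
    To give a feel for the bias-dependent "deep-depletion" behavior underlying such SCM modeling, the following sketch evaluates an idealized MOS deep-depletion capacitance-voltage curve in the depletion approximation (uniform doping, no inversion layer); all parameter values are assumptions for illustration and this is not the analytical model of the paper.

```python
import numpy as np

# Physical constants and assumed device parameters.
q = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
eps_s = 11.7 * eps0    # silicon permittivity
eps_ox = 3.9 * eps0    # oxide permittivity
t_ox = 5e-9            # oxide thickness (m), assumed
N_a = 1e24             # acceptor doping (1/m^3, i.e. 1e18 cm^-3), assumed

C_ox = eps_ox / t_ox   # oxide capacitance per unit area

def deep_depletion_capacitance(V):
    """Series oxide + depletion capacitance per unit area (depletion approximation).

    V is the band-bending part of the gate bias (V_g - V_fb), V >= 0. The depletion
    width W follows from V = q*N_a*W^2/(2*eps_s) + q*N_a*W/C_ox.
    """
    V = np.asarray(V, dtype=float)
    W = (eps_s / C_ox) * (np.sqrt(1.0 + 2.0 * C_ox**2 * V / (q * eps_s * N_a)) - 1.0)
    return 1.0 / (1.0 / C_ox + W / eps_s)

V = np.linspace(0.0, 3.0, 7)
C = deep_depletion_capacitance(V)
dCdV = np.gradient(C, V)      # the quantity an SCM measurement is sensitive to
for v, c, s in zip(V, C, dCdV):
    print(f"V = {v:4.1f} V   C = {c * 1e3:6.3f} mF/m^2   dC/dV = {s * 1e3:8.4f} mF/(m^2 V)")
```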

  17. Network coding at different layers in wireless networks

    CERN Document Server

    2016-01-01

    This book focuses on how to apply network coding at different layers in wireless networks – including MAC, routing, and TCP – with special focus on cognitive radio networks. It discusses how to select parameters in network coding (e.g., coding field, number of packets involved, and redundant information ratio) in order to suit varying wireless environments. The book explores how to deploy network coding in MAC to improve network performance and examines joint network coding with opportunistic routing to improve the success rate of routing. In regards to TCP and network coding, the text considers transport layer protocols working with network coding to overcome the transmission error rate, particularly with how to use the ACK feedback of TCP to enhance the efficiency of network coding. The book pertains to researchers and postgraduate students, especially those whose interests are in opportunistic routing and TCP in cognitive radio networks.

  18. Locality-preserving logical operators in topological stabilizer codes

    Science.gov (United States)

    Webster, Paul; Bartlett, Stephen D.

    2018-01-01

    Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.

  19. Time-on-task effects in children with and without ADHD: depletion of executive resources or depletion of motivation?

    Science.gov (United States)

    Dekkers, Tycho J; Agelink van Rentergem, Joost A; Koole, Alette; van den Wildenberg, Wery P M; Popma, Arne; Bexkens, Anika; Stoffelsen, Reino; Diekmann, Anouk; Huizenga, Hilde M

    2017-12-01

    Children with attention-deficit/hyperactivity disorder (ADHD) are characterized by deficits in their executive functioning and motivation. In addition, these children are characterized by a decline in performance as time-on-task increases (i.e., time-on-task effects). However, it is unknown whether these time-on-task effects should be attributed to deficits in executive functioning or to deficits in motivation. Some studies in typically developing (TD) adults indicated that time-on-task effects should be interpreted as depletion of executive resources, but other studies suggested that they represent depletion of motivation. We, therefore, investigated, in children with and without ADHD, whether there were time-on-task effects on executive functions, such as inhibition and (in)attention, and whether these were best explained by depletion of executive resources or depletion of motivation. The stop-signal task (SST), which generates both indices of inhibition (stop-signal reaction time) and attention (reaction time variability and errors), was administered in 96 children (42 ADHD, 54 TD controls; aged 9-13). To differentiate between depletion of resources and depletion of motivation, the SST was administered twice. Half of the participants was reinforced during second task performance, potentially counteracting depletion of motivation. Multilevel analyses indicated that children with ADHD were more affected by time-on-task than controls on two measures of inattention, but not on inhibition. In the ADHD group, reinforcement only improved performance on one index of attention (i.e., reaction time variability). The current findings suggest that time-on-task effects in children with ADHD occur specifically in the attentional domain, and seem to originate in both depletion of executive resources and depletion of motivation. Clinical implications for diagnostics, psycho-education, and intervention are discussed.

  20. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code which is capable of determining structural loads on a flexible, teetering, horizontal axis wind turbine is described and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  1. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase the number of dimensions seen by the network using a linear mapping. Receivers can trade off computational effort against network delay, decoding in the high field size, the low field size, or a combination thereof.

  2. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
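
    A levelized package set, as described above, is simply one whose "uses" relationships form a directed acyclic graph; the short sketch below (the package names are made up for illustration) checks that property and assigns each package its level.

```python
from graphlib import TopologicalSorter, CycleError

# Hypothetical package -> set of lower-level packages it uses (illustrative names only).
uses = {
    "utilities": set(),
    "mesh": {"utilities"},
    "materials": {"utilities"},
    "hydro": {"mesh", "materials"},
    "driver": {"hydro", "mesh"},
}

def levelize(uses):
    """Return {package: level}, where level = 1 + max level of the packages it uses."""
    order = TopologicalSorter(uses).static_order()   # raises CycleError if not a DAG
    level = {}
    for pkg in order:
        level[pkg] = 1 + max((level[d] for d in uses[pkg]), default=0)
    return level

try:
    for pkg, lvl in sorted(levelize(uses).items(), key=lambda kv: kv[1]):
        print(f"level {lvl}: {pkg}")
except CycleError as err:
    print("package set is not levelized:", err)
```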

  3. Coded Random Access

    DEFF Research Database (Denmark)

    Paolini, Enrico; Stefanovic, Cedomir; Liva, Gianluigi

    2015-01-01

    The rise of machine-to-machine communications has rekindled the interest in random access protocols as a support for a massive number of uncoordinatedly transmitting devices. The legacy ALOHA approach is developed under a collision model, where slots containing collided packets are considered as waste. However, if the common receiver (e.g., base station) is capable of storing the collision slots and using them in a transmission recovery process based on successive interference cancellation, the design space for access protocols is radically expanded. We present the paradigm of coded random access, in which the structure of the access protocol can be mapped to the structure of an erasure-correcting code defined on a graph. This opens the possibility to use coding theory and tools for designing efficient random access protocols, offering markedly better performance than ALOHA. Several instances of coded...
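
    The core idea, that collided slots become useful once successive interference cancellation is applied, can be seen in a tiny simulation of regular degree-2 coded slotted ALOHA (each user repeats its packet in two random slots); the frame size, user count, and idealized cancellation below are assumptions for illustration only.

```python
import random

def simulate(n_users=50, n_slots=100, repetitions=2, seed=0):
    """Fraction of users resolved by iterative SIC decoding within one frame."""
    rng = random.Random(seed)
    # Each user places `repetitions` replicas of its packet in distinct random slots.
    slots = [set() for _ in range(n_slots)]
    placement = []
    for user in range(n_users):
        chosen = rng.sample(range(n_slots), repetitions)
        placement.append(chosen)
        for s in chosen:
            slots[s].add(user)

    resolved = set()
    progress = True
    while progress:
        progress = False
        for s in range(n_slots):
            if len(slots[s]) == 1:              # singleton slot: decode this user
                user = next(iter(slots[s]))
                resolved.add(user)
                for s2 in placement[user]:      # cancel all of its replicas
                    slots[s2].discard(user)
                progress = True
    return len(resolved) / n_users

print("resolved fraction:", simulate())
```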

  4. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  5. Code de conduite

    International Development Research Centre (IDRC) Digital Library (Canada)

    irocca

    compliance with such standards. In doing so, we contribute to the good reputation and integrity of the Centre and act in keeping with the Government of Canada's Values and Ethics Code for the Public Sector. I invite you to read this new version of the Code of Conduct and to apply its principles ...

  6. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  7. Aphasia for Morse code.

    Science.gov (United States)

    Wyler, A R; Ray, M W

    1986-03-01

    The ability to communicate by Morse code at high speed has, to our knowledge, not been localized within the cerebral cortex, but might be suspected as residing within the left (dominant) hemisphere. We report a case of a 54-year-old male who suffered a left temporal tip intracerebral hematoma and who temporarily lost his ability to communicate in Morse code, but who was minimally aphasic.

  8. Discretized screening to apply EOR process in Western field, Venezuela

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez, E.; Rodriguez, T.; Gonzalez, O. [PDVSA Petroleos de Venezuela SA, Caracas (Venezuela, Bolivarian Republic of). INTEVEP; Lara, V. [PDVSA Petroleos de Venezuela SA, Caracas (Venezuela, Bolivarian Republic of). CVP

    2009-07-01

    Increases in oil recovery factors through enhanced oil recovery (EOR) technologies have become an important issue in the petroleum industry because of depleting reserves of conventional fossil fuels and the low mobility of extra-heavy oils. Methodologies with different approaches have been developed to define the most suitable technology for specific reservoirs. The purpose of this paper was to determine which EOR technologies were the most appropriate for the entire Urdaneta reservoir in Venezuela, and to determine where the technologies could be applied in terms of reservoir volume. Specifically, the paper described the discretized screening methodology and showed an example of its application in the Urdaneta field. The processing of the static model of this field was described, since this is an input requirement for the EOR screening methodology. Screening results were also analysed and shown as colour-coded maps. The EOR screening methodology demonstrates that it is possible to evaluate the reservoir using very detailed input information. 4 refs., 3 tabs., 10 figs.
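
    A discretized screening of this kind can be pictured as applying per-technology criteria cell by cell in the static model; the criteria, property names, and values below are simplified placeholders, not the actual screening tables used in the paper.

```python
# Toy cell-by-cell EOR screening. The criteria below are illustrative placeholders only.
criteria = {
    "steam_injection": {"viscosity_cp": (200, 200000), "depth_m": (0, 1400),   "porosity": (0.18, 1.0)},
    "polymer_flood":   {"viscosity_cp": (10, 5000),    "depth_m": (0, 2700),   "porosity": (0.15, 1.0)},
    "miscible_co2":    {"viscosity_cp": (0, 15),       "depth_m": (800, 4000), "porosity": (0.10, 1.0)},
}

cells = [  # a few grid cells from a hypothetical static model
    {"id": 1, "viscosity_cp": 1500.0,  "depth_m": 900.0,  "porosity": 0.24},
    {"id": 2, "viscosity_cp": 12.0,    "depth_m": 1800.0, "porosity": 0.20},
    {"id": 3, "viscosity_cp": 40000.0, "depth_m": 600.0,  "porosity": 0.30},
]

def screen(cell):
    """Return the EOR methods whose every criterion is satisfied by this cell."""
    passed = []
    for method, ranges in criteria.items():
        if all(lo <= cell[prop] <= hi for prop, (lo, hi) in ranges.items()):
            passed.append(method)
    return passed

for cell in cells:
    print(f"cell {cell['id']}: {screen(cell) or ['no method passes']}")
```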

  9. Macrophage Depletion in Hypertensive Rats Accelerates Development of Cardiomyopathy

    NARCIS (Netherlands)

    Zandbergen, H.R.; Sharma, U.C.; Gupta, S.; Verjans, J.W.H.; van den Borne, S.; Pokharel, S.; Brakel, T.; Duijvestijn, A.; van Rooijen, N.; Maessen, J.G.; Reutelingsperger, C.P.M.; Pinto, Y.; Narula, J.; Hofstra, L.

    2009-01-01

    Inflammation contributes to the process of ventricular remodeling after acute myocardial injury. To investigate the role of macrophages in the chronic process of cardiac remodeling, they were selectively depleted by intravenous administration of liposomal clodronate in heart failure-prone

  10. AFSC/REFM: Pacific cod Localized Depletion Study

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Data from Localized Depletion study for Pacific cod 2001-2005. Study was conducted using cod pot gear to measure localized abundance of Pacific cod inside and...

  11. Adding trend data to Depletion-Based Stock Reduction Analysis

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — A Bayesian model of Depletion-Based Stock Reduction Analysis (DB-SRA), informed by a time series of abundance indexes, was developed, using the Sampling Importance...

  12. Image Coding using Markov Models with Hidden States

    DEFF Research Database (Denmark)

    Forchhammer, Søren Otto

    1999-01-01

    The Cylinder Partially Hidden Markov Model (CPH-MM) is applied to lossless coding of bi-level images. The original CPH-MM is relaxed for the purpose of coding by not imposing stationarity, but otherwise the model description is the same.

  13. Strength evaluation code STEP for brittle materials

    International Nuclear Information System (INIS)

    Ishihara, Masahiro; Futakawa, Masatoshi.

    1997-12-01

    In structural design using brittle materials such as graphite and/or ceramics, it is necessary to evaluate the strength of a component under complex stress conditions. The strength of ceramic materials is said to be influenced by the stress distribution. However, the structural design criteria had adopted simplified stress limits without taking account of the change of strength with the stress distribution. It is therefore important to evaluate the strength of a component on the basis of a fracture model for brittle materials. Consequently, the strength evaluation program STEP, which evaluates the brittle fracture of ceramic materials on the basis of competing risk theory, was developed. Two different brittle fracture modes, a surface-layer fracture mode dominated by surface flaws and an internal fracture mode dominated by internal flaws, are treated in the STEP code in order to evaluate the brittle fracture strength. The STEP code uses stress calculation results, including those for structures of complex shape, analyzed by the general-purpose FEM stress analysis code ABAQUS, so that the brittle fracture strength of structures with complicated shapes can be evaluated. This code is therefore useful for evaluating the structural integrity of components of arbitrary shape, such as the core graphite components in the HTTR, heat exchanger components made of ceramic materials, etc. This paper describes the basic equations applied in the STEP code, the code system combining the STEP and ABAQUS codes, and the results of the verification analysis. (author)
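
    The competing-risk idea (surface and volume flaw populations each described by a weakest-link Weibull law, with the component surviving only if it survives both) can be sketched as below; the Weibull parameters and element data are invented for illustration and are not the STEP code's material models.

```python
import math

# Illustrative two-mode Weibull parameters (assumed, not STEP's material data).
M_SURF, SIGMA0_SURF, A0 = 10.0, 300.0, 1.0e-4   # surface mode: modulus, scale (MPa), reference area (m^2)
M_VOL,  SIGMA0_VOL,  V0 = 8.0,  400.0, 1.0e-6   # volume mode: modulus, scale (MPa), reference volume (m^3)

def failure_probability(elements):
    """Competing-risk (weakest-link) failure probability from per-element FEM stresses.

    `elements` is a list of dicts with the tensile stress (MPa) and the surface area and
    volume exposed to that stress. Compressive elements contribute nothing.
    """
    risk = 0.0
    for e in elements:
        sigma = e["stress_mpa"]
        if sigma <= 0.0:
            continue
        risk += (e["area_m2"] / A0) * (sigma / SIGMA0_SURF) ** M_SURF    # surface-flaw risk
        risk += (e["volume_m3"] / V0) * (sigma / SIGMA0_VOL) ** M_VOL    # internal-flaw risk
    return 1.0 - math.exp(-risk)

elements = [
    {"stress_mpa": 120.0, "area_m2": 2.0e-4, "volume_m3": 1.5e-6},
    {"stress_mpa": 180.0, "area_m2": 0.5e-4, "volume_m3": 0.4e-6},
    {"stress_mpa": -50.0, "area_m2": 1.0e-4, "volume_m3": 1.0e-6},   # compressive: no contribution
]
print(f"failure probability: {failure_probability(elements):.3e}")
```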

  14. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  15. QR Code: An Interactive Mobile Advertising Tool

    Directory of Open Access Journals (Sweden)

    Ela Sibel Bayrak Meydanoglu

    2013-10-01

    Easy and rapid interaction between consumers and marketers enabled by mobile technology has prompted an increase in the usage of mobile media as an interactive marketing tool in recent years. One of the mobile technologies that can be used in interactive marketing for advertising is the QR code (Quick Response Code). Interactive advertising offers several advantages to the companies that apply it. For example, interaction with consumers provides significant information about consumers' preferences. Marketers can use the information obtained from consumers for various marketing activities such as customizing advertisement messages, determining the target audience, and improving future products and services. QR codes used in marketing campaigns can provide links to specific websites in which, through various tools (e.g., questionnaires, voting), information about the needs and wants of customers is collected. The aim of this basic research is to illustrate the contribution of QR codes to the realization of the advantages gained by interactive advertising.
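
    For instance, a campaign QR code pointing at a feedback questionnaire can be generated in a few lines with the widely used third-party Python package qrcode (assumed to be installed; the URL below is a placeholder, not a real campaign address).

```python
import qrcode

# Placeholder campaign URL; in practice this would point to the questionnaire or landing page.
campaign_url = "https://example.com/campaign/feedback-survey"

img = qrcode.make(campaign_url)   # returns an image object containing the QR code
img.save("campaign_qr.png")
print("QR code written to campaign_qr.png")
```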

  16. Algebraic coding theory over finite commutative rings

    CERN Document Server

    Dougherty, Steven T

    2017-01-01

    This book provides a self-contained introduction to algebraic coding theory over finite Frobenius rings. It is the first to offer a comprehensive account on the subject. Coding theory has its origins in the engineering problem of effective electronic communication where the alphabet is generally the binary field. Since its inception, it has grown as a branch of mathematics, and has since been expanded to consider any finite field, and later also Frobenius rings, as its alphabet. This book presents a broad view of the subject as a branch of pure mathematics and relates major results to other fields, including combinatorics, number theory and ring theory. Suitable for graduate students, the book will be of interest to anyone working in the field of coding theory, as well as algebraists and number theorists looking to apply coding theory to their own work.

  17. APR1400 Containment Simulation with CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Chung, Bub Dong [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2010-05-15

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large-break loss-of-coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied to the coupled code calculation of MARS-KS/CONTAIN.

  18. APR1400 Containment Simulation with CONTAIN code

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Chung, Bub Dong

    2010-01-01

    The more realistic containment pressure variation predicted by the CONTAIN code through the coupled analysis during a large-break loss-of-coolant accident in the nuclear power plant is expected to provide a more accurate prediction of the plant behavior than a standalone MARS-KS calculation. The input deck has been generated based on the already available APR1400 input for the CONTEMPT code. Similarly to the CONTEMPT input deck, a simple two-cell model was adopted to model the containment behavior: one cell for the containment inner volume and another cell for the environment condition. The developed input for the CONTAIN code is to be eventually applied to the coupled code calculation of MARS-KS/CONTAIN.

  19. Large-Scale Physical Separation of Depleted Uranium from Soil

    Science.gov (United States)

    2012-09-01

    unweathered depleted uranium rods illustrating the formation of uranyl oxides and salts. Unfired penetrator rods can range from 10 to 50 cm in length... specific area ratio (as thin sections, fine particles, or molten states). Uranium in finely divided form is prone to ignition. Uranium also has an... [ERDC/EL TR-12-25, Army Range Technology Program: Large-Scale Physical Separation of Depleted Uranium from Soil, Environmental...]

  20. Metabolite Depletion Affects Flux Profiling of Cell Lines

    DEFF Research Database (Denmark)

    Nilsson, A.; Haanstra, J. R.; Teusink, B.

    2018-01-01

    Quantifying the rate of consumption and release of metabolites (i.e., flux profiling) has become integral to the study of cancer. The fluxes as well as the growth of the cells may be affected by metabolite depletion during cultivation.

  1. Cholinergic depletion and basal forebrain volume in primary progressive aphasia

    Directory of Open Access Journals (Sweden)

    Jolien Schaeverbeke

    2017-01-01

    In the PPA group, only LV cases showed decreases in AChE activity levels compared to controls. Surprisingly, a substantial number of SV cases showed significant AChE activity increases compared to controls. BF volume did not correlate with AChE activity levels in PPA. To conclude, in our sample of PPA patients, LV but not SV was associated with cholinergic depletion. BF atrophy in PPA does not imply cholinergic depletion.

  2. Retrieval of buried depleted uranium from the T-1 trench

    International Nuclear Information System (INIS)

    Burmeister, M.; Castaneda, N.; Hull, C.; Barbour, D.; Quapp, W.J.

    1998-01-01

    The Trench 1 remediation project will be conducted this year to retrieve depleted uranium and other associated materials from a trench at Rocky Flats Environmental Technology Site. The excavated materials will be segregated and stabilized for shipment. The depleted uranium will be treated at an offsite facility which utilizes a novel approach for waste minimization and disposal through utilization of a combination of uranium recycling and volume efficient uranium stabilization

  3. NKT cell depletion in humans during early HIV infection.

    Science.gov (United States)

    Fernandez, Caroline S; Kelleher, Anthony D; Finlayson, Robert; Godfrey, Dale I; Kent, Stephen J

    2014-08-01

    Natural killer T (NKT) cells bridge across innate and adaptive immune responses and have an important role in chronic viral infections such as human immunodeficiency virus (HIV). NKT cells are depleted during chronic HIV infection, but the timing, drivers and implications of this NKT cell depletion are poorly understood. We studied human peripheral blood NKT cell levels, phenotype and function in 31 HIV-infected subjects not on antiretroviral treatment from a mean of 4 months to 2 years after HIV infection. We found that peripheral CD4(+) NKT cells were substantially depleted and dysfunctional by 4 months after HIV infection. The depletion of CD4(+) NKT cells was more marked than the depletion of total CD4(+) T cells. Further, the early depletion of NKT cells correlated with CD4(+) T-cell decline, but not HIV viral levels. Levels of activated CD4(+) T cells correlated with the loss of NKT cells. Our studies suggest that the early loss of NKT cells is associated with subsequent immune destruction during HIV infection.

  4. Inositol depletion restores vesicle transport in yeast phospholipid flippase mutants.

    Science.gov (United States)

    Yamagami, Kanako; Yamamoto, Takaharu; Sakai, Shota; Mioka, Tetsuo; Sano, Takamitsu; Igarashi, Yasuyuki; Tanaka, Kazuma

    2015-01-01

    In eukaryotic cells, type 4 P-type ATPases function as phospholipid flippases, which translocate phospholipids from the exoplasmic leaflet to the cytoplasmic leaflet of the lipid bilayer. Flippases function in the formation of transport vesicles, but the mechanism remains unknown. Here, we isolate an arrestin-related trafficking adaptor, ART5, as a multicopy suppressor of the growth and endocytic recycling defects of flippase mutants in budding yeast. Consistent with a previous report that Art5p downregulates the inositol transporter Itr1p by endocytosis, we found that flippase mutations were also suppressed by the disruption of ITR1, as well as by depletion of inositol from the culture medium. Interestingly, inositol depletion suppressed the defects in all five flippase mutants. Inositol depletion also partially restored the formation of secretory vesicles in a flippase mutant. Inositol depletion caused changes in lipid composition, including a decrease in phosphatidylinositol and an increase in phosphatidylserine. A reduction in phosphatidylinositol levels caused by partially depleting the phosphatidylinositol synthase Pis1p also suppressed a flippase mutation. These results suggest that inositol depletion changes the lipid composition of the endosomal/TGN membranes, which results in vesicle formation from these membranes in the absence of flippases.

  5. Applicability of Transactional Memory to Modern Codes

    Science.gov (United States)

    Bihari, Barna L.

    2010-09-01

    In this paper we illustrate the features and study the applicability of transactional memory (TM) as an efficient and easy-to-use alternative for handling memory conflicts in multi-threaded physics simulations that use shared memory. The tool used for our preliminary analysis of this novel construct is IBM's freely available Software Transactional Memory (STM) system. Instead of attempting to apply it to a production-grade simulation code, we developed a much simpler test code that exhibits most of the salient features of modern unstructured mesh algorithms, but without the complicated physical models. We apply STM to two frequently used algorithms in realistic multi-physics codes. Our computational experiments indicate a good fit between these application scenarios and the TM features.

  6. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes...... that are subcodes of the binary Reed-Muller codes and can be very simply instrumented, 3) a new class of constacyclic codes that are subcodes of the p-ary "Reed-Muller codes," 4) two new classes of binary convolutional codes with large "free distance" derived from known binary cyclic codes, 5) two new classes...... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm....
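
    As a concrete side note to the Reed-Muller codes mentioned above, the following sketch builds the generator matrix of the first-order binary Reed-Muller code RM(1, m) in the standard way; this is an illustrative example only, not the paper's polynomial-weight construction.

        import numpy as np

        def reed_muller_1(m):
            """Generator matrix of the first-order binary Reed-Muller code RM(1, m):
            the all-ones row plus the m coordinate-bit rows, giving a
            [2**m, m + 1, 2**(m - 1)] linear code."""
            n = 2 ** m
            rows = [np.ones(n, dtype=int)]
            for i in range(m):
                # i-th address bit of each column index 0 .. n-1
                rows.append(np.array([(c >> i) & 1 for c in range(n)], dtype=int))
            return np.vstack(rows)

        G = reed_muller_1(3)          # [8, 4, 4] code
        msg = np.array([1, 0, 1, 1])
        print(msg @ G % 2)            # one codeword of RM(1, 3)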

  7. Computer codes in particle transport physics

    International Nuclear Information System (INIS)

    Pesic, M.

    2004-01-01

    is given. The importance of validation and verification of data and computer codes is underlined briefly. Examples of applications of the MCNPX, FLUKA and SHIELD codes to the simulation of some processes in nature, from reactor physics, ion medical therapy, cross-section calculations and the design of accelerator-driven sub-critical systems, to astrophysics and the shielding of spaceships, are shown. More reliable and more frequent cross-section data in the intermediate- and high-energy range for particle transport and interactions with matter are expected in the near future, as a result of new experimental investigations that are under way with the aim to validate the theoretical models currently applied in the codes. These new data libraries are expected to be much larger and more comprehensive than existing ones, requiring more computer memory and faster CPUs. Updated versions of the codes to be developed in the future, besides sequential computation versions, will also include MPI or PVM options to allow faster running of the code at acceptable cost for an end-user. A new option to be implemented in the codes is expected too - an end-user-written application for a particular problem could be added relatively simply to the general source code script. Initial work on full implementation of a graphical user interface for preparing input and analysing output of the codes, and the ability to interrupt and/or continue code running, should be upgraded to a user-friendly level. (author)

  8. Symbol synchronization in convolutionally coded systems

    Science.gov (United States)

    Baumert, L. D.; Mceliece, R. J.; Van Tilborg, H. C. A.

    1979-01-01

    Alternate symbol inversion is sometimes applied to the output of convolutional encoders to guarantee sufficient richness of symbol transition for the receiver symbol synchronizer. A bound is given for the length of the transition-free symbol stream in such systems, and those convolutional codes are characterized in which arbitrarily long transition free runs occur.

  9. Parallel processing Monte Carlo radiation transport codes

    International Nuclear Information System (INIS)

    McKinney, G.W.

    1994-01-01

    Issues related to distributed-memory multiprocessing as applied to Monte Carlo radiation transport are discussed. Measurements of communication overhead are presented for the radiation transport code MCNP which employs the communication software package PVM, and average efficiency curves are provided for a homogeneous virtual machine

  10. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    A new code structure for spectral amplitude coding optical code division multiple access systems based on the double-weight (DW) code family is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is another variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. Much better performance can be provided by using the EDW code compared with existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Theoretical analysis and simulation show that the EDW code gives much better performance than the Hadamard and MFH codes.

  11. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
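
    To make the $[n,k,d]_q$ notation above concrete, the following small sketch checks the parameters of a toy ternary code by enumerating all $q^k$ codewords; the generator matrix is an arbitrary example, not one of the 22 new codes of the paper.

        import itertools
        import numpy as np

        # Toy generator matrix over GF(3); illustrative only.
        G = np.array([[1, 0, 1, 2],
                      [0, 1, 1, 1]])
        q, k, n = 3, G.shape[0], G.shape[1]

        weights = []
        for msg in itertools.product(range(q), repeat=k):
            c = np.dot(msg, G) % q
            if c.any():                        # skip the zero codeword
                weights.append(int(np.count_nonzero(c)))

        d = min(weights)                       # minimum distance = minimum nonzero weight
        print(f"[{n},{k},{d}]_{q} code")       # -> [4,2,3]_3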

  12. Radiochemical Analysis Methodology for uranium Depletion Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Scatena-Wachel DE

    2007-01-09

    This report provides sufficient material for a test sponsor with little or no radiochemistry background to understand and follow physics irradiation test program execution. Most irradiation test programs employ similar techniques and the general details provided here can be applied to the analysis of other irradiated sample types. Aspects of program management directly affecting analysis quality are also provided. This report is not an in-depth treatise on the vast field of radiochemical analysis techniques and related topics such as quality control. Instrumental technology is a very fast growing field and dramatic improvements are made each year, thus the instrumentation described in this report is no longer cutting edge technology. Much of the background material is still applicable and useful for the analysis of older experiments and also for subcontractors who still retain the older instrumentation.

  13. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  14. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.

  15. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  16. Modular ORIGEN-S for multi-physics code systems

    International Nuclear Information System (INIS)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack

    2011-01-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including
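
    As background, the depletion, decay and transmutation problems that ORIGEN-S solves are systems of coupled first-order equations dN/dt = M N. The toy sketch below illustrates that structure with a hypothetical two-step decay chain solved by a matrix exponential; it uses made-up constants and is not ORIGEN-S code or its API.

        import numpy as np
        from scipy.linalg import expm

        # Hypothetical chain A -> B -> C (stable); dN/dt = M @ N.
        lam_a, lam_b = 1.0e-3, 5.0e-4            # assumed decay constants [1/s]
        M = np.array([[-lam_a,    0.0, 0.0],
                      [ lam_a, -lam_b, 0.0],
                      [   0.0,  lam_b, 0.0]])

        N0 = np.array([1.0e20, 0.0, 0.0])        # initial number densities
        t = 3600.0                               # one hour
        N = expm(M * t) @ N0                     # solution of the linear system
        print(N)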

  17. Design of variable-weight quadratic congruence code for optical CDMA

    Science.gov (United States)

    Feng, Gang; Cheng, Wen-Qing; Chen, Fu-Jun

    2015-09-01

    A variable-weight code family referred to as variable-weight quadratic congruence code (VWQCC) is constructed by algebraic transformation for incoherent synchronous optical code division multiple access (OCDMA) systems. Compared with quadratic congruence code (QCC), VWQCC doubles the code cardinality and provides the multiple code-sets with variable code-weight. Moreover, the bit-error rate (BER) performance of VWQCC is superior to those of conventional variable-weight codes by removing or padding pulses under the same chip power assumption. The experiment results show that VWQCC can be well applied to the OCDMA with quality of service (QoS) requirements.

  18. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: what do we want to achieve, and especially why is this goal important? Resource Information: what information is available, and how can it be useful? Resource Platform: what kinds of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of the devices used. The book goes on to address Solutions: which solutions have been proposed, and how can they be used to reach the stated goals, and ...

  19. Serotonin depletion induces pessimistic-like behavior in a cognitive bias paradigm in pigs.

    Science.gov (United States)

    Stracke, Jenny; Otten, Winfried; Tuchscherer, Armin; Puppe, Birger; Düpjan, Sandra

    2017-05-15

    Cognitive and affective processes are highly interrelated. This has implications for neuropsychiatric disorders such as major depressive disorder in humans but also for the welfare of non-human animals. The brain serotonergic system might play a key role in mediating the relationship between cognitive functions and affective regulation. The aim of our study was to examine the influence of serotonin depletion on the affective state and cognitive processing in pigs, an important farm animal species but also a potential model species for biomedical research in humans. For this purpose, we modified a serotonin depletion model using para-chlorophenylalanine (pCPA) to decrease serotonin levels in brain areas involved in cognitive and affective processing (part 1). The consequences of serotonin depletion were then measured in two behavioral tests (part 2): the spatial judgement task (SJT), providing information about the effects of the affective state on cognitive processing, and the open field/novel object (OFNO) test, which measures behavioral reactions to novelty that are assumed to reflect affective state. In part 1, 40 pigs were treated with either pCPA or saline for six consecutive days. Serotonin levels were assessed in seven different brain regions 4, 5, 6, 11 and 13 days after the first injection. Serotonin was significantly depleted in all analyzed brain regions up to 13 days after the first application. In part 2, the pCPA model was applied to 48 animals in behavioral testing. The behavioral tests, the OFNO test and the SJT, were conducted both before and after pCPA/saline injections. While results from the OFNO tests were inconclusive, an effect of treatment as well as an effect of phase (before vs. after treatment) was observed in the SJT. Animals treated with pCPA showed more pessimistic-like behavior, suggesting a more negative affective state due to serotonin depletion. Thus, our results confirm that the serotonergic system is a key player in cognitive

  20. Ego depletion decreases trust in economic decision making

    Science.gov (United States)

    Ainsworth, Sarah E.; Baumeister, Roy F.; Vohs, Kathleen D.; Ariely, Dan

    2014-01-01

    Three experiments tested the effects of ego depletion on economic decision making. Participants completed a task either requiring self-control or not. Then participants learned about the trust game, in which senders are given an initial allocation of $10 to split between themselves and another person, the receiver. The receiver receives triple the amount given and can send any, all, or none of the tripled money back to the sender. Participants were assigned the role of the sender and decided how to split the initial allocation. Giving less money, and therefore not trusting the receiver, is the safe, less risky response. Participants who had exerted self-control and were depleted gave the receiver less money than those in the non-depletion condition (Experiment 1). This effect was replicated and moderated in two additional experiments. Depletion again led to lower amounts given (less trust), but primarily among participants who were told they would never meet the receiver (Experiment 2) or who were given no information about how similar they were to the receiver (Experiment 3). Amounts given did not differ for depleted and non-depleted participants who either expected to meet the receiver (Experiment 2) or were led to believe that they were very similar to the receiver (Experiment 3). Decreased trust among depleted participants was strongest among neurotics. These results imply that self-control facilitates behavioral trust, especially when no other cues signal decreased social risk in trusting, such as if an actual or possible relationship with the receiver were suggested. PMID:25013237
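
    The payoff arithmetic of the trust game described above can be written out in a few lines; the amounts below are illustrative only.

        def trust_game_payoffs(sent, returned, endowment=10, multiplier=3):
            """Sender keeps endowment - sent + returned; the receiver keeps
            the tripled transfer minus whatever is sent back."""
            received = multiplier * sent
            assert 0 <= sent <= endowment and 0 <= returned <= received
            return endowment - sent + returned, received - returned

        # Example: send $4, the receiver gets $12 and returns $6.
        print(trust_game_payoffs(sent=4, returned=6))   # -> (12, 6)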

  1. Global storm time depletion of the outer electron belt.

    Science.gov (United States)

    Ukhorskiy, A Y; Sitnov, M I; Millan, R M; Kress, B T; Fennell, J F; Claudepierre, S G; Barnes, R J

    2015-04-01

    The outer radiation belt consists of relativistic (>0.5 MeV) electrons trapped on closed trajectories around Earth where the magnetic field is nearly dipolar. During increased geomagnetic activity, electron intensities in the belt can vary by orders of magnitude at different spatial and temporal scales. The main phase of geomagnetic storms often produces deep depletions of electron intensities over broad regions of the outer belt. Previous studies identified three possible processes that can contribute to the main-phase depletions: adiabatic inflation of electron drift orbits caused by the ring current growth, electron loss into the atmosphere, and electron escape through the magnetopause boundary. In this paper we investigate the relative importance of the adiabatic effect and magnetopause loss to the rapid depletion of the outer belt observed at the Van Allen Probes spacecraft during the main phase of the 17 March 2013 storm. The intensities of >1 MeV electrons were depleted by more than an order of magnitude over the entire radial extent of the belt in less than 6 h after the sudden storm commencement. For the analysis we used three-dimensional test particle simulations of the global evolution of the outer belt in the Tsyganenko-Sitnov (TS07D) magnetic field model with an inductive electric field. Comparison of the simulation results with electron measurements from the Magnetic Electron Ion Spectrometer experiment shows that magnetopause loss accounts for most of the observed depletion at L > 5, while at lower L shells the depletion is adiabatic. Both magnetopause loss and the adiabatic effect are controlled by the change in global configuration of the magnetic field due to storm time development of the ring current; a simulation of electron evolution without a ring current produces a much weaker depletion.

  2. Basolateral cholesterol depletion alters Aquaporin-2 post-translational modifications and disrupts apical plasma membrane targeting.

    Science.gov (United States)

    Moeller, Hanne B; Fuglsang, Cecilia Hvitfeldt; Pedersen, Cecilie Nøhr; Fenton, Robert A

    2018-01-01

    Apical plasma membrane accumulation of the water channel Aquaporin-2 (AQP2) in kidney collecting duct principal cells is critical for body water homeostasis. Posttranslational modification (PTM) of AQP2 is important for regulating AQP2 trafficking. The aim of this study was to determine the role of cholesterol in regulation of AQP2 PTM and in apical plasma membrane targeting of AQP2. Cholesterol depletion from the basolateral plasma membrane of a collecting duct cell line (mpkCCD14) using methyl-beta-cyclodextrin (MBCD) increased AQP2 ubiquitylation. Forskolin, cAMP or dDAVP-mediated AQP2 phosphorylation at Ser269 (pS269-AQP2) was prevented by cholesterol depletion from the basolateral membrane. None of these effects on pS269-AQP2 were observed when cholesterol was depleted from the apical side of cells, or when MBCD was applied subsequent to dDAVP stimulation. Basolateral, but not apical, MBCD application prevented cAMP-induced apical plasma membrane accumulation of AQP2. These studies indicate that manipulation of the cholesterol content of the basolateral plasma membrane interferes with AQP2 PTM and subsequently regulated apical plasma membrane targeting of AQP2. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. A semi-empirical model for the formation and depletion of the high burnup structure in UO2

    Science.gov (United States)

    Pizzocri, D.; Cappia, F.; Luzzi, L.; Pastore, G.; Rondinella, V. V.; Van Uffelen, P.

    2017-04-01

    In the rim zone of UO2 nuclear fuel pellets, the combination of high burnup and low temperature drives a microstructural change, leading to the formation of the high burnup structure (HBS). In this work, we propose a semi-empirical model to describe the formation of the HBS, which embraces the polygonisation/recrystallization process and the depletion of intra-granular fission gas, describing them as inherently related. For this purpose, we performed grain-size measurements on samples at radial positions in which the restructuring was incomplete. Based on these new experimental data, we infer an exponential reduction of the average grain size with local effective burnup, paired with a simultaneous depletion of intra-granular fission gas driven by diffusion. The comparison with currently used models indicates the applicability of the herein developed model within integral fuel performance codes.
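
    The functional form inferred in the abstract (an exponential reduction of average grain size with local effective burnup) can be sketched as below; the initial grain size and the characteristic burnup are placeholder values, not the parameters fitted in the paper.

        import numpy as np

        def hbs_grain_size(burnup_GWd_tHM, d0_um=10.0, b_char=20.0):
            """Average grain size vs. local effective burnup, assuming an
            exponential decrease; d0_um and b_char are placeholders."""
            return d0_um * np.exp(-burnup_GWd_tHM / b_char)

        for bu in (0.0, 20.0, 60.0, 100.0):
            print(bu, round(hbs_grain_size(bu), 2))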

  4. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  5. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  6. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  7. Graph Codes with Reed-Solomon Component Codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2006-01-01

    We treat a specific case of codes based on bipartite expander graphs coming from finite geometries. The code symbols are associated with the branches and the symbols connected to a given node are restricted to be codewords in a Reed-Solomon code. We give results on the parameters of the codes...

  8. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  9. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication-including voice-will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlines key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks. Offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  10. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Pries-Heje, Lene; Dahlgaard, Bente

    2013-01-01

    A classic way to choose a supplier is through a bidding process where tenders from competing companies are evaluated in relation to the customer’s requirements. If the customer wants to hire an agile software developing team instead of buying a software product, a new approach for comparing tenders...... is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  11. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.
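
    For orientation, the usual unsupervised CSC objective referred to above, with a schematic supervised term appended, can be written as

    $$\min_{\{d_k\},\{z_k\}} \; \tfrac{1}{2}\Big\|x - \sum_k d_k * z_k\Big\|_2^2 + \lambda \sum_k \|z_k\|_1 + \gamma\, \mathcal{L}_{\mathrm{sup}}(\{z_k\}, y),$$

    where $*$ denotes convolution, the first two terms are the standard reconstruction and sparsity terms, and the last term stands only schematically for the supervised (discriminative) regularizer on the codes given labels $y$; the abstract does not give its exact form, so it is indicated here as an assumption.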

  12. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated

  13. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    Energy Technology Data Exchange (ETDEWEB)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver, E-mail: jasmina@physics.ucf.edu [Planetary Sciences Group, Department of Physics, University of Central Florida, Orlando, FL 32816-2385 (United States)

    2016-07-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.
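
    As a rough illustration of the Gibbs free-energy minimization under elemental mass-balance constraints that TEA performs, the following sketch solves a toy three-species system with scipy; the species set, free-energy values and abundances are placeholders, and this is not TEA's code or data.

        import numpy as np
        from scipy.optimize import minimize

        species = ["H2", "O2", "H2O"]
        g_rt = np.array([0.0, 0.0, -30.0])   # placeholder mu_i/(RT) values
        A = np.array([[2, 0, 2],             # H atoms per species
                      [0, 2, 1]])            # O atoms per species
        b = np.array([2.0, 1.0])             # elemental abundances of H and O

        def gibbs(n):
            n = np.clip(n, 1e-12, None)
            return float(np.sum(n * (g_rt + np.log(n / n.sum()))))

        res = minimize(gibbs, x0=np.full(3, 0.3), method="SLSQP",
                       bounds=[(1e-12, None)] * 3,
                       constraints={"type": "eq", "fun": lambda n: A @ n - b})
        print(dict(zip(species, np.round(res.x, 4))))   # mostly H2O for these values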

  14. TEA: A CODE CALCULATING THERMOCHEMICAL EQUILIBRIUM ABUNDANCES

    International Nuclear Information System (INIS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, M. Oliver

    2016-01-01

    We present an open-source Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. The code is based on the methodology of White et al. and Eriksson. It applies Gibbs free-energy minimization using an iterative, Lagrangian optimization scheme. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature–pressure pairs. We tested the code against the method of Burrows and Sharp, the free thermochemical equilibrium code Chemical Equilibrium with Applications (CEA), and the example given by Burrows and Sharp. Using their thermodynamic data, TEA reproduces their final abundances, but with higher precision. We also applied the TEA abundance calculations to models of several hot-Jupiter exoplanets, producing expected results. TEA is written in Python in a modular format. There is a start guide, a user manual, and a code document in addition to this theory paper. TEA is available under a reproducible-research, open-source license via https://github.com/dzesmin/TEA.

  15. AAV-CRISPR/Cas9-Mediated Depletion of VEGFR2 Blocks Angiogenesis In Vitro.

    Science.gov (United States)

    Wu, Wenyi; Duan, Yajian; Ma, Gaoen; Zhou, Guohong; Park-Windhol, Cindy; D'Amore, Patricia A; Lei, Hetian

    2017-12-01

    Pathologic angiogenesis is a component of many diseases, including neovascular age-related macular degeneration and proliferative diabetic retinopathy, as well as tumor growth and metastasis. The purpose of this project was to examine whether the system of adeno-associated viral (AAV)-mediated CRISPR (clustered regularly interspaced short palindromic repeats)-associated endonuclease (Cas)9 can be used to deplete expression of VEGF receptor 2 (VEGFR2) in human vascular endothelial cells in vitro and thus suppress its downstream signaling events. The dual AAV system of CRISPR/Cas9 from Streptococcus pyogenes (AAV-SpGuide and -SpCas9) was adapted to edit genomic VEGFR2 in primary human retinal microvascular endothelial cells (HRECs). In this system, the endothelial-specific promoter for intercellular adhesion molecule 2 (ICAM2) was cloned into the dual AAV vectors of SpGuide and SpCas9 for driving expression of green fluorescent protein (GFP) and SpCas9, respectively. These two AAV vectors were applied to the production of recombinant AAV serotype 5 (rAAV5), which was used to infect HRECs for depletion of VEGFR2. Protein expression was determined by Western blot, and cell proliferation, migration, and tube formation were examined. AAV5 effectively infected vascular endothelial cells (ECs) and retinal pigment epithelial (RPE) cells; the ICAM2 promoter drove expression of GFP and SpCas9 in HRECs, but not in RPE cells. The results showed that rAAV5-CRISPR/Cas9 depleted VEGFR2 by 80% and completely blocked VEGF-induced activation of Akt, as well as proliferation, migration and tube formation of HRECs. AAV-CRISPR/Cas9-mediated depletion of VEGFR2 is a potential therapeutic strategy for pathologic angiogenesis.

  16. Lignin depletion enhances the digestibility of cellulose in cultured xylem cells.

    Directory of Open Access Journals (Sweden)

    Catherine I Lacayo

    Plant lignocellulose constitutes an abundant and sustainable source of polysaccharides that can be converted into biofuels. However, the enzymatic digestion of native plant cell walls is inefficient, presenting a considerable barrier to cost-effective biofuel production. In addition to the insolubility of cellulose and hemicellulose, the tight association of lignin with these polysaccharides intensifies the problem of cell wall recalcitrance. To determine the extent to which lignin influences the enzymatic digestion of cellulose, specifically in secondary walls that contain the majority of cellulose and lignin in plants, we used a model system consisting of cultured xylem cells from Zinnia elegans. Rather than using purified cell wall substrates or plant tissue, we have applied this system to study cell wall degradation because it predominantly consists of homogeneous populations of single cells exhibiting large deposits of lignocellulose. We depleted lignin in these cells by treating with an oxidative chemical or by inhibiting lignin biosynthesis, and then examined the resulting cellulose digestibility and accessibility using a fluorescent cellulose-binding probe. Following cellulase digestion, we measured a significant decrease in relative cellulose content in lignin-depleted cells, whereas cells with intact lignin remained essentially unaltered. We also observed a significant increase in probe binding after lignin depletion, indicating that decreased lignin levels improve cellulose accessibility. These results indicate that lignin depletion considerably enhances the digestibility of cellulose in the cell wall by increasing the susceptibility of cellulose to enzymatic attack. Although other wall components are likely to contribute, our quantitative study exploits cultured Zinnia xylem cells to demonstrate the dominant influence of lignin on the enzymatic digestion of the cell wall. This system is simple enough for quantitative image analysis

  17. Depletion of the Complex Multiple Aquifer System of Jordan

    Science.gov (United States)

    Rödiger, T.; Siebert, C.; Geyer, S.; Merz, R.

    2017-12-01

    In many countries worldwide, water scarcity poses a significant risk to the environment and the socio-economy. Particularly in countries where the available water resources are strongly limited by climatic conditions, an accurate determination of the available water resources is of high priority, especially when water supply relies predominantly on groundwater resources and their recharge. If groundwater abstraction exceeds the natural groundwater recharge in heavily used well-field areas, overexploitation or persistent groundwater depletion occurs. This is the case in the Kingdom of Jordan, where a multi-layer aquifer complex forms the eastern subsurface catchment of the Dead Sea basin. Since the beginning of the industrial and agricultural development of the country, dramatically falling groundwater levels, the disappearance of springs and saltwater intrusions from deeper aquifers have been documented nationwide. The total water budget is influenced by (i) a high climatic gradient from hyperarid to semiarid and (ii) intense anthropogenic abstraction. For this multi-layered aquifer system we developed a methodology to evaluate groundwater depletion by linking a hydrological model and a numerical flow model, including estimates of groundwater abstraction. Hence, we define groundwater depletion as the rate of groundwater abstraction in excess of the natural recharge rate. Restricting our analysis, we calculated a range of groundwater depletion from 0% in the eastern Hamad basin to around 40% in the central part of Jordan and to extreme values of 100% in the Azraq and Disi basins.
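
    One plausible reading of the depletion percentages quoted above (abstraction in excess of natural recharge, expressed as a share of abstraction) is sketched below with invented numbers; this is an interpretation for illustration, not the paper's calculation.

        def depletion_fraction(abstraction, recharge):
            """Share of abstraction not covered by natural recharge."""
            if abstraction <= 0.0:
                return 0.0
            return max(0.0, abstraction - recharge) / abstraction

        # Hypothetical basin: 60 Mm3/yr abstracted, 36 Mm3/yr recharged -> 40 %.
        print(f"{depletion_fraction(60.0, 36.0):.0%}")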

  18. Examining depletion theories under conditions of within-task transfer.

    Science.gov (United States)

    Brewer, Gene A; Lau, Kevin K H; Wingert, Kimberly M; Ball, B Hunter; Blais, Chris

    2017-07-01

    In everyday life, mental fatigue can be detrimental across many domains including driving, learning, and working. Given the importance of understanding and accounting for the deleterious effects of mental fatigue on behavior, a growing body of literature has studied the role of motivational and executive control processes in mental fatigue. In typical laboratory paradigms, participants complete a task that places demand on these self-control processes and are later given a subsequent task. Generally speaking, decrements to subsequent task performance are taken as evidence that the initial task created mental fatigue through the continued engagement of motivational and executive functions. Several models have been developed to account for negative transfer resulting from this "ego depletion." In the current study, we provide a brief literature review, specify current theoretical approaches to ego-depletion, and report an empirical test of current models of depletion. Across 4 experiments we found minimal evidence for executive control depletion along with strong evidence for motivation mediated ego depletion. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. OZONE DEPLETING SUBSTANCES ELIMINATION MANAGEMENT: THE SUCCESS STORY OF MACEDONIA

    Directory of Open Access Journals (Sweden)

    Margarita Matlievska

    2013-04-01

    Through his activities, man produces and uses substances that have a negative impact on the environment and human health and can cause economic damage. Consequently, they have a great impact on quality of life. Among the most harmful chemicals are Ozone Depleting Substances, which are subject to regulation by international conventions. This paper supports the view that each country has to undertake national efforts to reduce and eliminate ozone depleting substances. In that respect, the general objective of the paper is to present Macedonia's unique experience regarding its efforts to reduce or eliminate these substances. The following two aspects were subject to the research: the national legislation which regulates the import and export of Ozone Depleting Substances, and the implementation of the projects that resulted in the elimination of Ozone Depleting Substance quantities in the period 1995-2010. The research outcomes confirm the starting research hypothesis, i.e. that with adequately designed and implemented national action, the consumption of Ozone Depleting Substances can fall dramatically.

  20. The Physical Origin of Long Gas Depletion Times in Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Semenov, Vadim A.; Kravtsov, Andrey V.; Gnedin, Nickolay Y.

    2017-08-18

    We present a model that elucidates why gas depletion times in galaxies are long compared to the time scales of the processes driving the evolution of the interstellar medium. We show that global depletion times are not set by any "bottleneck" in the process of gas evolution towards the star-forming state. Instead, depletion times are long because star-forming gas converts only a small fraction of its mass into stars before it is dispersed by dynamical and feedback processes. Thus, complete depletion requires that gas transitions between star-forming and non-star-forming states multiple times. Our model does not rely on the assumption of equilibrium and can be used to interpret trends of depletion times with the properties of observed galaxies and the parameters of star formation and feedback recipes in galaxy simulations. In particular, the model explains the mechanism by which feedback self-regulates star formation rate in simulations and makes it insensitive to the local star formation efficiency. We illustrate our model using the results of an isolated $L_*$-sized disk galaxy simulation that reproduces the observed Kennicutt-Schmidt relation for both molecular and atomic gas. Interestingly, the relation for molecular gas is close to linear on kiloparsec scales, even though a non-linear relation is adopted in simulation cells. This difference is due to stellar feedback, which breaks the self-similar scaling of the gas density PDF with the average gas surface density.
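
    The argument can be put into back-of-the-envelope numbers: if each pass through the star-forming state converts only a small fraction of the gas into stars, full depletion takes roughly that many cycles. The values below are illustrative assumptions, not taken from the paper.

        # Assumed toy values: conversion fraction per cycle and cycle duration.
        eps = 0.01            # fraction of gas turned into stars per star-forming episode
        t_cycle_myr = 20.0    # duration of one dispersal/reassembly cycle [Myr]

        t_dep_myr = t_cycle_myr / eps                 # ~1/eps cycles for full depletion
        print(f"t_dep ~ {t_dep_myr / 1e3:.0f} Gyr")   # -> ~2 Gyr for these numbers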

  1. Source Coding for Wireless Distributed Microphones in Reverberant Environments

    DEFF Research Database (Denmark)

    Zahedi, Adel

    2016-01-01

    's recording. This means that ignoring this correlation will be a waste of the scarce power and bandwidth resources. In this thesis, we study both information-theoretic and audio coding aspects of the coding problem in the above-mentioned framework. We formulate rate-distortion problems which take into account...... on the performance of the audio coding system. We derive explicit formulas for the rate-distortion functions, and design coding schemes that asymptotically achieve the performance bounds. We justify the Gaussianity assumption by showing that the results will still be relevant for non-Gaussian sources including audio...... for the transmitting microphone to reduce the rate by using distributed source coding. Within this framework, we apply our coding schemes to Gaussian signals as well as audio measurements and compare the rate-distortion performances for distributed and non-distributed source coding scenarios. We also compare...

  2. Development of code PRETOR for stellarator simulation

    International Nuclear Information System (INIS)

    Dies, J.; Fontanet, J.; Fontdecaba, J.M.; Castejon, F.; Alejandre, C.

    1998-01-01

    The Department de Fisica i Enginyeria Nuclear (DFEN) of the UPC has some experience in the development of the transport code PRETOR. This code has been validated with shots from DIII-D, JET and TFTR, and it has also been used in the simulation of operational scenarios of ITER fast burn termination. Recently, the association EURATOM-CIEMAT has started the operation of the TJ-II stellarator. Due to the need to validate the results given by other transport codes applied to stellarators, and because all of them make some approximations, such as averaging magnitudes over each magnetic surface, it was considered appropriate to adapt the PRETOR code to devices without axial symmetry, like stellarators, which is well suited to the specific needs of the study of TJ-II. Several modifications are required in PRETOR; the main ones concern the models of magnetic equilibrium, geometry, and transport of energy and particles. In order to handle the complex magnetic equilibrium geometry, the powerful numerical code VMEC has been used. This code gives the magnetic surface shape as a Fourier series in terms of the harmonics (m,n). Most of the geometric magnitudes are also obtained from the VMEC results file. The energy and particle transport models will be replaced by other phenomenological models that are better adapted to stellarator simulation. Using the proposed models, the aim is to reproduce experimental data available from present stellarators, paying special attention to the TJ-II of the association EURATOM-CIEMAT. (Author)
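
    For orientation, a VMEC-style (stellarator-symmetric) flux-surface shape written as a Fourier series in the poloidal/toroidal harmonics (m, n) can be evaluated as below; the coefficients and field-period number are arbitrary toy values, not TJ-II data.

        import numpy as np

        Nfp = 4                               # number of field periods (placeholder)
        harmonics = [                         # (m, n, Rmn, Zmn), toy values
            (0, 0, 1.50, 0.00),
            (1, 0, 0.25, 0.25),
            (1, 1, 0.05, 0.05),
        ]

        def surface_point(theta, zeta):
            """R = sum Rmn*cos(m*theta - n*Nfp*zeta),
               Z = sum Zmn*sin(m*theta - n*Nfp*zeta)."""
            R = sum(Rmn * np.cos(m * theta - n * Nfp * zeta) for m, n, Rmn, _ in harmonics)
            Z = sum(Zmn * np.sin(m * theta - n * Nfp * zeta) for m, n, _, Zmn in harmonics)
            return R, Z

        print(surface_point(theta=0.0, zeta=0.0))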

  3. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    lowing function is maximized. This kind of decoding strategy is called the maximum a posteriori probability (MAP) decoding strategy as it attempts to estimate each symbol of the codeword that ..... gate the effects of packet loss over digital networks. Undoubtedly other applications will use these codes in the years to come.

  4. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  5. CERN Code of Conduct

    CERN Document Server

    Department, HR

    2010-01-01

    The Code is intended as a guide in helping us, as CERN contributors, to understand how to conduct ourselves, treat others and expect to be treated. It is based around the five core values of the Organization. We should all become familiar with it and try to incorporate it into our daily life at CERN.

  6. Error Correcting Codes

    Indian Academy of Sciences (India)

    focused pictures of Triton, Neptune's largest moon. This great feat was in no small measure due to the fact that the sophisticated communication system on Voyager had an elaborate error correcting scheme built into it. At Jupiter and Saturn, a convolutional code was used to enhance the reliability of transmission, and at ...

  7. Nuclear safety code study

    Energy Technology Data Exchange (ETDEWEB)

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for switching to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)

  8. Student Dress Codes.

    Science.gov (United States)

    Uerling, Donald F.

    School officials see a need for regulations that prohibit disruptive and inappropriate forms of expression and attire; students see these regulations as unwanted restrictions on their freedom. This paper reviews court litigation involving constitutional limitations on school authority, dress and hair codes, state law constraints, and school…

  9. Differential pulse code modulation

    Science.gov (United States)

    Herman, C. F. (Inventor)

    1976-01-01

    A differential pulse code modulation (DPCM) encoding and decoding method is described along with an apparatus which is capable of transmission with minimum bandwidth. The apparatus is not affected by data transition density, requires no direct current (DC) response of the transmission link, and suffers from minimal ambiguity in resolution of the digital data.
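
    A minimal first-order DPCM encoder/decoder (previous-sample predictor, uniform quantizer) is sketched below as background; it illustrates the general scheme only, not the specific apparatus of the patent.

        def dpcm_encode(samples, step=2):
            """Quantize the difference to the previous reconstructed sample."""
            codes, pred = [], 0
            for x in samples:
                q = round((x - pred) / step)   # quantized prediction error
                codes.append(q)
                pred += q * step               # track the decoder's reconstruction
            return codes

        def dpcm_decode(codes, step=2):
            out, pred = [], 0
            for q in codes:
                pred += q * step
                out.append(pred)
            return out

        sig = [0, 3, 7, 8, 6, 2]
        enc = dpcm_encode(sig)
        print(enc, dpcm_decode(enc))           # reconstruction within one step size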

  10. Error Correcting Codes

    Indian Academy of Sciences (India)

    syndrome is an indicator of underlying disease. Here too, a non-zero syndrome is an indication that something has gone wrong during transmission. The first matrix on the left-hand side is called the parity check matrix H. Thus every codeword c satisfies the equation $Hc^T = 0$. Therefore the code can ...
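
    The syndrome check described above can be tried on the standard (7, 4) Hamming code; this is a textbook example chosen here for illustration, not taken from the article itself.

        import numpy as np

        # Parity-check matrix H of the (7, 4) Hamming code.
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        c = np.array([1, 1, 0, 0, 1, 1, 0])     # a valid codeword: H c^T = 0
        print(H @ c % 2)                        # [0 0 0]

        r = c.copy()
        r[4] ^= 1                               # one bit flipped in transmission
        print(H @ r % 2)                        # non-zero syndrome flags the error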

  11. Focusing Automatic Code Inspections

    NARCIS (Netherlands)

    Boogerd, C.J.

    2010-01-01

    Automatic Code Inspection tools help developers in early detection of defects in software. A well-known drawback of many automatic inspection approaches is that they yield too many warnings and require a clearer focus. In this thesis, we provide such focus by proposing two methods to prioritize

  12. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  13. Broadcast Coded Slotted ALOHA

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives...

  14. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  15. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  16. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...
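
    As background for readers unfamiliar with PLNC, the toy script below illustrates its core idea on a noiseless two-way relay: both end nodes transmit simultaneously, the relay maps the superimposed BPSK signal directly to the XOR of the two bits, broadcasts that XOR, and each end node removes its own bit. The noiseless channel and BPSK mapping are simplifying assumptions for illustration only, not the system model of the paper.

```python
"""Toy illustration of physical-layer network coding on a two-way
relay channel (noiseless BPSK; parameters are illustrative)."""
import random

def bpsk(bit):            # 0 -> +1, 1 -> -1
    return 1 - 2 * bit

def relay_denoise(superposed):
    # Map the superposed BPSK signal directly to the XOR of the two bits:
    # |sum| == 2  -> bits equal  -> XOR = 0
    # |sum| == 0  -> bits differ -> XOR = 1
    return 0 if abs(superposed) > 1 else 1

random.seed(1)
errors = 0
for _ in range(1000):
    a, b = random.randint(0, 1), random.randint(0, 1)
    # Multiple-access phase: both end nodes transmit simultaneously.
    y_relay = bpsk(a) + bpsk(b)
    xor_ab = relay_denoise(y_relay)
    # Broadcast phase: relay sends the XOR; each node removes its own bit.
    b_hat_at_A = xor_ab ^ a
    a_hat_at_B = xor_ab ^ b
    errors += (b_hat_at_A != b) + (a_hat_at_B != a)
print("decoding errors:", errors)   # expected: 0 in the noiseless case
```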

  17. Corporate governance through codes

    NARCIS (Netherlands)

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  18. Ptolemy Coding Style

    Science.gov (United States)

    2014-09-05

    because this would combine Ptolemy II with the GPL’d code and thus encumber Ptolemy II with the GPL. Another GNU license is the GNU Library General...permission on the source.eecs.berkeley.edu repositories, then use your local repository. bash-3.2$ svn co svn+ssh://source.eecs.berkeley.edu/chess

  19. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  20. Testing efficiency transfer codes for equivalence

    International Nuclear Information System (INIS)

    Vidmar, T.; Celik, N.; Cornejo Diaz, N.; Dlabac, A.; Ewa, I.O.B.; Carrazana Gonzalez, J.A.; Hult, M.; Jovanovic, S.; Lepy, M.-C.; Mihaljevic, N.; Sima, O.; Tzika, F.; Jurado Vargas, M.; Vasilopoulou, T.; Vidmar, G.

    2010-01-01

    Four general Monte Carlo codes (GEANT3, PENELOPE, MCNP and EGS4) and five dedicated packages for efficiency determination in gamma-ray spectrometry (ANGLE, DETEFF, GESPECOR, ETNA and EFFTRAN) were checked for equivalence by applying them to the calculation of efficiency transfer (ET) factors for a set of well-defined sample parameters, detector parameters and energies typically encountered in environmental radioactivity measurements. The differences between the results of the different codes never exceeded a few percent and were lower than 2% in the majority of cases.
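
    For context, the efficiency transfer method that these codes were asked to reproduce is usually written as the measured reference efficiency scaled by a computed ratio; the relation below states that standard form (the notation is ours, not the paper's):

```latex
% Standard efficiency transfer relation (background; our notation):
\varepsilon_{\mathrm{sample}}(E)
  \;=\; \varepsilon_{\mathrm{ref}}(E)\,
  \frac{\varepsilon^{\mathrm{calc}}_{\mathrm{sample}}(E)}
       {\varepsilon^{\mathrm{calc}}_{\mathrm{ref}}(E)}
  \;\equiv\; \varepsilon_{\mathrm{ref}}(E)\, f_{\mathrm{ET}}(E)
```

    Checking the codes for equivalence then amounts essentially to comparing the computed ratio f_ET(E) for the same source-detector geometries and energies.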

  1. New Channel Coding Methods for Satellite Communication

    Directory of Open Access Journals (Sweden)

    J. Sebesta

    2010-04-01

    This paper deals with new progressive channel coding methods for short message transmission via a satellite transponder using a predetermined frame length. The key contributions are the modification and implementation of a new turbo code, together with methods for bit error rate estimation and an algorithm for output message reconstruction. These methods allow error-free communication at very low Eb/N0 ratios; they have been adopted for satellite communication, but they can also be applied to other systems operating at very low Eb/N0 ratios.

  2. Accumulate Repeat Accumulate Coded Modulation

    Science.gov (United States)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low-Density Parity-Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.
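
    To illustrate the "repeat" and "accumulate" building blocks that ARA-type codes are assembled from, here is a minimal repeat-accumulate encoder sketch. The repetition factor, the interleaver, and the omission of the ARA precoder and puncturing are simplifications for illustration, not the construction used in the paper.

```python
"""Minimal repeat-accumulate style encoder: the repeat, interleave and
accumulate stages that ARA-type codes build on (illustrative sketch;
the actual ARA construction adds a precoder/puncturing not shown here)."""
import random

def ra_encode(info_bits, repeat=3, seed=0):
    # 1) Repetition: copy each information bit `repeat` times.
    repeated = [b for b in info_bits for _ in range(repeat)]
    # 2) Interleaving: apply a fixed pseudo-random permutation.
    rng = random.Random(seed)
    perm = list(range(len(repeated)))
    rng.shuffle(perm)
    interleaved = [repeated[i] for i in perm]
    # 3) Accumulation: running mod-2 sum (a rate-1 1/(1+D) convolution).
    out, acc = [], 0
    for b in interleaved:
        acc ^= b
        out.append(acc)
    return out

if __name__ == "__main__":
    msg = [1, 0, 1, 1, 0, 0, 1, 0]
    print(ra_encode(msg))
```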

  3. 10 CFR 50.55a - Codes and standards.

    Science.gov (United States)

    2010-01-01

    ... not affected by these limitations. (ii) Pressure-retaining welds in ASME Code Class 1 piping (applies... 10 Energy 1 2010-01-01 2010-01-01 false Codes and standards. 50.55a Section 50.55a Energy NUCLEAR... specified in § 50.55, except that each combined license for a boiling or pressurized water-cooled nuclear...

  4. 22 CFR 518.42 - Codes of conduct.

    Science.gov (United States)

    2010-04-01

    ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true Codes of conduct. 518.42 Section 518.42 Foreign... Procurement Standards § 518.42 Codes of conduct. The recipient shall maintain written standards of conduct... conduct shall provide for disciplinary actions to be applied for violations of such standards by officers...

  5. Control rod computer code IAMCOS: general theory and numerical methods

    International Nuclear Information System (INIS)

    West, G.

    1982-11-01

    IAMCOS is a computer code for describing the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. The report describes the basic model (02 version), the theoretical definitions and the computation methods. (Original report in French)

  6. Greedy vs. L1 convex optimization in sparse coding

    DEFF Research Database (Denmark)

    Ren, Huamin; Pan, Hong; Olsen, Søren Ingvor

    2015-01-01

    Sparse representation has been applied successfully in many image analysis applications, including abnormal event detection, in which a baseline is to learn a dictionary from the training data and detect anomalies from its sparse codes. During this procedure, sparse codes which can be achieved...... their performance from various aspects to better understand their applicability, including computation time, reconstruction error, sparsity, detection...
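
    The record is truncated, but the greedy-versus-L1 comparison it describes can be illustrated on a synthetic dictionary; the sketch below contrasts orthogonal matching pursuit with a lasso solve using scikit-learn. The dictionary, noise level and regularization parameter are assumptions for illustration and are not the paper's abnormal-event-detection setup.

```python
"""Toy comparison of greedy (OMP) vs. L1 (lasso) sparse coding on a
synthetic dictionary; all data and parameters are illustrative."""
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit, Lasso

rng = np.random.default_rng(0)
n_features, n_atoms, sparsity = 64, 128, 5

# Random over-complete dictionary with unit-norm atoms.
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)

# Synthetic signal: sparse combination of a few atoms plus noise.
true_code = np.zeros(n_atoms)
support = rng.choice(n_atoms, sparsity, replace=False)
true_code[support] = rng.standard_normal(sparsity)
y = D @ true_code + 0.01 * rng.standard_normal(n_features)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity,
                                fit_intercept=False).fit(D, y)
lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000).fit(D, y)

for name, code in [("OMP", omp.coef_), ("Lasso", lasso.coef_)]:
    err = np.linalg.norm(y - D @ code)
    nnz = np.count_nonzero(np.abs(code) > 1e-6)
    print(f"{name}: reconstruction error {err:.4f}, nonzeros {nnz}")
```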

  7. Compressible ferrimagnetism in the depleted periodic Anderson model

    Science.gov (United States)

    Costa, N. C.; Araújo, M. V.; Lima, J. P.; Paiva, T.; dos Santos, R. R.; Scalettar, R. T.

    2018-02-01

    Tight-binding Hamiltonians with single and multiple orbitals exhibit an intriguing array of magnetic phase transitions. In most cases the spin ordered phases are insulating, while the disordered phases may be either metallic or insulating. In this paper we report a determinant quantum Monte Carlo study of interacting electrons in a geometry which can be regarded as a two-dimensional periodic Anderson model with depleted interacting (f ) orbitals. For a single depletion, we observe an enhancement of antiferromagnetic correlations and formation of localized states. For half of the f orbitals regularly depleted, the system exhibits a ferrimagnetic ground state. We obtain a quantitative determination of the nature of magnetic order, which we discuss in the context of Tsunetsugu's theorem, and show that, although the dc conductivity indicates insulating behavior at half filling, the compressibility remains finite.

  8. Tuning of depletion interaction in nanoparticle-surfactant systems

    International Nuclear Information System (INIS)

    Ray, D.; Aswal, V. K.

    2014-01-01

    The interaction of anionic silica nanoparticles (Ludox LS30) with the non-ionic surfactant decaethylene glycol monododecyl ether (C12E10), without and with the anionic surfactant sodium dodecyl sulfate (SDS), in aqueous electrolyte solution has been studied by small-angle neutron scattering (SANS). The measurements have been carried out for fixed concentrations of nanoparticles (1 wt%), surfactants (1 wt%) and electrolyte (0.1 M NaCl). Each of these nanoparticle-surfactant systems has been examined under different contrast conditions in which the individual components (nanoparticle or surfactant) are made visible. It is observed that the nanoparticle-C12E10 system leads to depletion-induced aggregation of the nanoparticles. The system behaves very differently on addition of SDS, where the depletion interaction is suppressed and aggregation of the nanoparticles can be prevented. We show that C12E10 and SDS form mixed micelles and that the charge on these micelles plays an important role in tuning the depletion interaction.
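
    The depletion attraction discussed here is commonly modelled with the Asakura-Oosawa overlap-volume expression; as a minimal sketch, the script below evaluates that textbook form for illustrative particle and micelle sizes. The radii and depletant concentration are assumptions for illustration, not values from the SANS analysis.

```python
"""Asakura-Oosawa depletion potential between two hard spheres in a
bath of smaller depletants (textbook expression; the sizes and
concentration below are assumed, not taken from the paper)."""
import numpy as np

def ao_potential(r, R, r_d, n_dep, kT=1.0):
    """Depletion potential U(r) for centre-centre distance r (same length
    units for r, R, r_d; n_dep is the depletant number density)."""
    Rd = R + r_d                      # radius of the depletion zone
    V_ov = (4 * np.pi / 3) * Rd**3 * (1 - 3 * r / (4 * Rd) + r**3 / (16 * Rd**3))
    U = np.where((r >= 2 * R) & (r <= 2 * Rd), -n_dep * kT * V_ov, 0.0)
    return np.where(r < 2 * R, np.inf, U)   # hard-core overlap excluded

if __name__ == "__main__":
    R, r_d = 8.0, 2.0                 # nm, illustrative particle/micelle radii
    n_dep = 1e-3                      # depletant number density, nm^-3 (assumed)
    r = np.linspace(2 * R, 2 * (R + r_d), 6)
    for ri, ui in zip(r, ao_potential(r, R, r_d, n_dep)):
        print(f"r = {ri:5.2f} nm   U/kT = {ui:7.3f}")
```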

  9. Tuning of depletion interaction in nanoparticle-surfactant systems

    Energy Technology Data Exchange (ETDEWEB)

    Ray, D., E-mail: debes@barc.gov.in; Aswal, V. K., E-mail: debes@barc.gov.in [Solid State Physics Division, Bhabha Atomic Research Centre, Mumbai-400085 (India)

    2014-04-24

    The interaction of anionic silica nanoparticles (Ludox LS30) with the non-ionic surfactant decaethylene glycol monododecyl ether (C12E10), without and with the anionic surfactant sodium dodecyl sulfate (SDS), in aqueous electrolyte solution has been studied by small-angle neutron scattering (SANS). The measurements have been carried out for fixed concentrations of nanoparticles (1 wt%), surfactants (1 wt%) and electrolyte (0.1 M NaCl). Each of these nanoparticle-surfactant systems has been examined under different contrast conditions in which the individual components (nanoparticle or surfactant) are made visible. It is observed that the nanoparticle-C12E10 system leads to depletion-induced aggregation of the nanoparticles. The system behaves very differently on addition of SDS, where the depletion interaction is suppressed and aggregation of the nanoparticles can be prevented. We show that C12E10 and SDS form mixed micelles and that the charge on these micelles plays an important role in tuning the depletion interaction.

  10. Tuning of depletion interaction in nanoparticle-surfactant systems

    Science.gov (United States)

    Ray, D.; Aswal, V. K.

    2014-04-01

    The interaction of anionic silica nanoparticles (Ludox LS30) with the non-ionic surfactant decaethylene glycol monododecyl ether (C12E10), without and with the anionic surfactant sodium dodecyl sulfate (SDS), in aqueous electrolyte solution has been studied by small-angle neutron scattering (SANS). The measurements have been carried out for fixed concentrations of nanoparticles (1 wt%), surfactants (1 wt%) and electrolyte (0.1 M NaCl). Each of these nanoparticle-surfactant systems has been examined under different contrast conditions in which the individual components (nanoparticle or surfactant) are made visible. It is observed that the nanoparticle-C12E10 system leads to depletion-induced aggregation of the nanoparticles. The system behaves very differently on addition of SDS, where the depletion interaction is suppressed and aggregation of the nanoparticles can be prevented. We show that C12E10 and SDS form mixed micelles and that the charge on these micelles plays an important role in tuning the depletion interaction.

  11. International aspects of restrictions of ozone-depleting substances

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, S.C.

    1989-10-01

    This report summarizes international efforts to protect stratospheric ozone. Also included in this report is a discussion of activities in other countries to meet restrictions in the production and use of ozone-depleting substances. Finally, there is a brief presentation of trade and international competitiveness issues relating to the transition to alternatives for the regulated chlorofluorocarbons (CFCs) and halons. The stratosphere knows no international borders. Just as the impact of reduced stratospheric ozone will be felt internationally, so protection of the ozone layer is properly an international effort. Unilateral action, even by a country that produces and uses large quantities of ozone-depleting substances, will not remedy the problem of ozone depletion if other countries do not follow suit. 32 refs., 7 tabs.

  12. Causation, constructors and codes.

    Science.gov (United States)

    Hofmeyr, Jan-Hendrik S

    2018-02-01

    Relational biology relies heavily on the enriched understanding of causal entailment that Robert Rosen's formalisation of Aristotle's four causes has made possible, although to date efficient causes and the rehabilitation of final cause have been its main focus. Formal cause has been paid rather scant attention, but, as this paper demonstrates, is crucial to our understanding of many types of processes, not necessarily biological. The graph-theoretic relational diagram of a mapping has played a key role in relational biology, and the first part of the paper is devoted to developing an explicit representation of formal cause in the diagram and how it acts in combination with efficient cause to form a mapping. I then use these representations to show how Von Neumann's universal constructor can be cast into a relational diagram in a way that avoids the logical paradox that Rosen detected in his own representation of the constructor in terms of sets and mappings. One aspect that was absent from both Von Neumann's and Rosen's treatments was the necessity of a code to translate the description (the formal cause) of the automaton to be constructed into the construction process itself. A formal definition of codes in general, and organic codes in particular, allows the relational diagram to be extended so as to capture this translation of formal cause into process. The extended relational diagram is used to exemplify causal entailment in a diverse range of processes, such as enzyme action, construction of automata, communication through the Morse code, and ribosomal polypeptide synthesis through the genetic code. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. The APR1400 Core Design by Using APA Code System

    International Nuclear Information System (INIS)

    Choi, Yu Sun; Koh, Byung Marn

    2008-01-01

    The nuclear design for the APR1400 has been performed to prepare the core model for automatic load-follow operation simulation. The APA (ALPHA/PHOENIX-P/ANC) code system is the tool used for the multi-cycle depletion calculations for the APR1400; the versions of ALPHA, PHOENIX-P and ANC are 8.9.3, 8.6.1 and 8.10.5, respectively. First-core and equilibrium-core depletion calculations for the APR1400 have been performed to confirm the target cycle length and the safety parameters. The parameters satisfy the nuclear design criteria limits. These APR1400 core models will form the basis of the design parameters for the APR1400 simulator.
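
    Multi-cycle depletion calculations of this kind ultimately rest on repeatedly solving the burnup (Bateman) equations over short time steps. As a minimal sketch, the script below advances a toy U-235/U-238/Pu-239 chain at constant one-group flux using a matrix exponential; the reduced chain, cross sections and flux are illustrative assumptions and are not APA/ANC data or models.

```python
"""Minimal constant-flux burnup (Bateman) solver for a toy
U-235/U-238/Pu-239 chain, illustrating the kind of depletion step a
lattice or nodal code performs.  One-group cross sections, the flux
and the reduced chain are illustrative assumptions only."""
import numpy as np
from scipy.linalg import expm

# Nuclides: [U-235, U-238, Pu-239]
BARN = 1e-24                                      # cm^2
sigma_a = np.array([680.0, 2.7, 1010.0]) * BARN   # absorption (illustrative)
sigma_c_U238 = 2.7 * BARN                         # U-238 capture feeding Pu-239
phi = 3e13                                        # neutron flux, n/cm^2/s (assumed)

# dN/dt = A N, with losses on the diagonal and U-238 capture feeding Pu-239
# (the intermediate Np-239 decay is collapsed out for brevity).
A = np.diag(-sigma_a * phi)
A[2, 1] = sigma_c_U238 * phi

N = np.array([1.0e21, 2.0e22, 0.0])               # initial number densities, 1/cm^3
step = 30 * 24 * 3600.0                           # 30-day depletion step, seconds

for k in range(6):                                # ~180 days at constant flux
    N = expm(A * step) @ N
    print(f"step {k + 1}: U235 {N[0]:.3e}  U238 {N[1]:.3e}  Pu239 {N[2]:.3e}")
```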

  14. Depletion of energy or depletion of knowledge alternative use of energy resources

    International Nuclear Information System (INIS)

    Arslan, M.

    2011-01-01

    This paper addresses the depletion of energy resources, a major problem facing the world today. Available energy sources are running short, and measures are being taken to conserve irreplaceable energy resources so that they can be used sustainably and fairly by future generations. Alternative energy sources are being sought; however, no single alternative source can yet provide more than a fraction of the energy supplied by fossil fuels. Options for Pakistan include the wind corridors of Sindh and Baluchistan, better use of hydro energy (past monsoon flooding could have produced enough energy to last into the next century), and uranium resources sufficient for centuries of energy production, hosted in the Dhok Pathan Formation of the Siwalik series (Pliocene to Pleistocene). The focus here is on energy from mineral fuels, in particular uranium from sandstone-hosted deposits in the Pakistani Siwaliks. A number of uranium-bearing mineralized horizons occur in the upper part of the Dhok Pathan Formation; these horizons contain the secondary uranium mineral carnotite and other ores, and uranium mineralization is widely distributed throughout the Siwaliks. The purpose of this paper is to promote the use of alternative energy sources that Pakistan possesses in abundance. Pakistan is blessed with a wealth of natural resources, yet it depends entirely on non-renewable energy. There are three main types of fossil fuels: coal, oil and natural gas; after food, fossil fuel is humanity's most important source of energy, and Pakistan is among the most gas-dependent economies in the world. Using fossil fuels for energy will not only increase the demand for them but also has severe effects on the climate and direct and indirect effects on humans. These problems can only be remedied by turning to alternative energy resources.

  15. Code of practice for radiological protection in dentistry

    International Nuclear Information System (INIS)

    1988-01-01

    This Code of Practice applies to all those involved in the practice of dentistry and is designed to minimise radiation doses to patients, dental staff and the public from the use of dental radiographic equipment

  16. The manufacturing of depleted uranium biological shield components

    International Nuclear Information System (INIS)

    Metelkin, J.A.

    1998-01-01

    The unique combination of the physical and mechanical properties of uranium has made it possible to manufacture biological shield components for transport package containers (TPC) used to transport irradiated nuclear power plant fuel, and for radiation diagnostic instruments containing radionuclides. The protective properties depend substantially on the natural radionuclide composition of the uranium, which is why depleted uranium obtained after radiochemical processing is recommended. A depleted uranium biological shield (DUBS) offers better specific mass-size characteristics than a shield made of lead, steel or tungsten. Technological achievements in uranium casting and machining have made it possible to manufacture DUBS components of TPCs with masses up to 3 tons and maximum dimensions up to 2 metres. (authors)

  17. Fabrication and characterization of fully depleted surface barrier detectors

    International Nuclear Information System (INIS)

    Ray, A.

    2010-01-01

    Fabrication of fully depleted surface-barrier-type thin detectors requires thin silicon wafers of 20 - 30 μm thickness and a flatness of ± 1 μm. A process has been developed for thinning silicon wafers to 20 - 30 μm starting from thicker (0.5 - 0.8 mm) silicon samples. These samples were used to fabricate fully depleted surface barrier detectors with Au contacts on n-type silicon. The detectors were characterized by measuring forward and reverse I-V characteristics and the alpha energy spectra of an Am-Pu source. (author)

  18. Risks associated with depleted uranium in armour-piercing shells

    International Nuclear Information System (INIS)

    2001-01-01

    Following complaints lodged by military personnel about the consequences of the use of depleted uranium in weapons during the Balkans war (1995-1999), the governments of six concerned countries requested information from NATO. In this paper the IPSN gives its own opinion on the problem: the characteristics of uranium and depleted uranium, and the impact of shell firings on humans and the environment. To establish the risks in terms of leukaemia and the associated liabilities, the IPSN recommends further biological tests and more information on how the shells were used. (A.L.B.)

  19. A new memory effect (MSD) in fully depleted SOI MOSFETs

    Science.gov (United States)

    Bawedin, M.; Cristoloveanu, S.; Yun, J. G.; Flandre, D.

    2005-09-01

    We demonstrate that the transconductance and drain current of fully depleted MOSFETs can display an interesting time-dependent hysteresis. This new memory effect, called meta-stable dip (MSD), is mainly due to the long carrier generation lifetime in the silicon film. Our parametric analysis shows that the memory window can be adjusted in view of practical applications. Various measurement conditions and devices with different doping, front oxide and silicon film thicknesses are systematically explored. The MSD effect can be generalized to several fully depleted CMOS technologies. The MSD mechanism is discussed and validated by two-dimensional simulation results.

  20. Towards a complete propagation of uncertainties in depletion calculations

    Energy Technology Data Exchange (ETDEWEB)

    Martinez, J.S. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering; Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Zwermann, W.; Gallner, L.; Puente-Espel, Federico; Velkov, K.; Hannstein, V. [Gesellschaft fuer Anlagen- und Reaktorsicherheit mbH (GRS), Garching (Germany); Cabellos, O. [Universidad Politecnica de Madrid (Spain). Dept. of Nuclear Engineering

    2013-07-01

    Propagation of nuclear data uncertainties to calculated values is of interest for design purposes and for library evaluation. XSUSA, developed at GRS, propagates cross-section uncertainties through nuclear calculations. In depletion simulations, fission yields and decay data are also involved and are a possible source of uncertainty that should be taken into account. We have developed tools to generate varied fission yield and decay libraries and to propagate their uncertainties through depletion, in order to complete the XSUSA uncertainty assessment capabilities. A generic test to probe the methodology is defined and discussed. (orig.)
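
    The sampling idea behind this kind of uncertainty propagation can be shown on a drastically reduced example: draw random perturbations of the nuclear data, repeat the depletion calculation for each sample, and take statistics of the results. The single-nuclide model, the 5% cross-section uncertainty and the flux below are assumptions for illustration, not XSUSA data or methodology details.

```python
"""Random-sampling uncertainty propagation through a (drastically
simplified) depletion step: perturb the nuclear data, repeat the
calculation, take statistics.  All numbers are illustrative."""
import numpy as np

rng = np.random.default_rng(42)

N0 = 1.0e21            # initial number density, 1/cm^3
sigma_nom = 680e-24    # nominal one-group absorption cross section, cm^2
rel_unc = 0.05         # assumed 5% (1-sigma) relative uncertainty
phi = 3e13             # flux, n/cm^2/s (assumed)
t = 365 * 24 * 3600.0  # one year of irradiation

n_samples = 1000
sigmas = sigma_nom * (1 + rel_unc * rng.standard_normal(n_samples))
N_end = N0 * np.exp(-sigmas * phi * t)          # one depletion "run" per sample

print(f"nominal N(t):  {N0 * np.exp(-sigma_nom * phi * t):.4e}")
print(f"sample mean:   {N_end.mean():.4e}")
print(f"rel. std dev:  {N_end.std() / N_end.mean():.3%}")
```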