WorldWideScience

Sample records for mhd-capable llnl codes

  1. Description and application of the AERIN Code at LLNL

    International Nuclear Information System (INIS)

    King, W.C.

    1986-01-01

    The AERIN code was written at the Lawrence Livermore National Laboratory in 1976 to compute the organ burdens and absorbed dose resulting from a chronic or acute inhalation of transuranic isotopes. The code was revised in 1982 to reflect the concepts of ICRP-30. This paper describes the AERIN code and how it has been used at LLNL to study more than 80 cases of internal deposition and obtain estimates of internal dose. The computed values of committed organ dose are compared with ICRP-30 values, and the benefits of using the code are described. 3 refs., 3 figs., 6 tabs
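
    As an illustration of the kind of retention-and-dose arithmetic such a code performs, the following sketch integrates a one-compartment model. It is a minimal sketch only: AERIN's actual ICRP-30 treatment uses multi-compartment lung and systemic models, and the parameters below (Pu-239 radioactive half-life, an assumed 50-year biological half-time) are illustrative, not AERIN's.

      import math

      # Illustrative assumptions: Pu-239 radioactive half-life (24,110 y) and
      # an assumed 50-year biological half-time for a single compartment.
      LAMBDA_RAD = math.log(2) / (24110 * 365.25)   # decay constant, 1/day
      LAMBDA_BIO = math.log(2) / (50 * 365.25)      # clearance constant, 1/day
      LAMBDA_EFF = LAMBDA_RAD + LAMBDA_BIO          # effective removal rate

      def organ_burden(intake_bq, t_days):
          """Activity (Bq) retained t days after an acute intake."""
          return intake_bq * math.exp(-LAMBDA_EFF * t_days)

      def committed_activity(intake_bq, horizon_days=50 * 365.25):
          """Time-integrated activity (Bq*d) over the 50-y commitment period;
          multiplying by the energy deposited per decay yields committed dose."""
          return intake_bq * (1.0 - math.exp(-LAMBDA_EFF * horizon_days)) / LAMBDA_EFF

      print(organ_burden(100.0, 365.25))   # burden one year after a 100 Bq intake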

  2. 16-APR-03 Final Release of ENDF/B-V for use with LLNL Codes

    International Nuclear Information System (INIS)

    Hill, T S; McNabb, D P; Hedstrom, G W; Beck, B; Hagmann, C A

    2003-01-01

    The new data files were prepared in two steps. First, the ENDF/B-V database was translated to an ENDL-format ASCII database. The ENDL ASCII format is a point-wise tabular storage scheme in which intermediate values are extracted via interpolation. Sufficient point-wise information was generated in the translation to ensure an extraction tolerance of 0.1% for most of the data. The only exception is along the incident neutron energy axis of the outgoing particle energy probability density function, where a 0.5% tolerance was maintained. Second, processed files were generated from the translated database. Since the translated ENDF/B-V data are in ENDL format, the standard processing codes were used to generate the new processed data files. To the best of our knowledge, these processed data files are accurate representations of the ENDF/B-V database to within the stated tolerances. However, there are several issues of which users must be aware; they are listed in this report
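
    The 0.1% extraction tolerance above implies a point-density criterion: enough points must be tabulated that linear interpolation between neighbors reproduces the underlying curve to within the tolerance. A minimal sketch of that idea, assuming simple midpoint bisection (the record does not specify the actual ENDL processing algorithm):

      import math

      def tabulate(f, x_lo, x_hi, rel_tol=1e-3):
          """Insert points until linear interpolation between neighbors
          reproduces f to within rel_tol (0.1%, matching the text above)."""
          def refine(x0, y0, x1, y1):
              xm = 0.5 * (x0 + x1)
              ym_true = f(xm)
              ym_lin = 0.5 * (y0 + y1)     # linear estimate at the midpoint
              if abs(ym_lin - ym_true) <= rel_tol * max(abs(ym_true), 1e-300):
                  return [(x0, y0), (x1, y1)]
              return (refine(x0, y0, xm, ym_true)
                      + refine(xm, ym_true, x1, y1)[1:])   # drop duplicate point

          return refine(x_lo, f(x_lo), x_hi, f(x_hi))

      # Example: a smooth 1/sqrt(E) cross-section shape tabulated to 0.1%
      table = tabulate(lambda e: 1.0 / math.sqrt(e), 1.0e-5, 20.0)
      print(len(table), "points")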

  3. LLNL NESHAPs 2014 Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bertoldo, N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallegos, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); MacQueen, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wegrecki, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-01

    Lawrence Livermore National Security, LLC operates facilities at Lawrence Livermore National Laboratory (LLNL) where radionuclides are handled and stored. These facilities are subject to the U.S. Environmental Protection Agency (EPA) National Emission Standards for Hazardous Air Pollutants (NESHAPs) in Code of Federal Regulations (CFR) Title 40, Part 61, Subpart H, which regulates radionuclide emissions to air from Department of Energy (DOE) facilities. Specifically, NESHAPs limits the emission of radionuclides to the ambient air to levels resulting in an annual effective dose equivalent of 10 mrem (100 μSv) to any member of the public. Using measured and calculated emissions, and building-specific and common parameters, LLNL personnel applied the EPA-approved computer code, CAP88-PC, Version 4.0.1.17, to calculate the dose to the maximally exposed individual member of the public for the Livermore Site and Site 300.
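
    The compliance arithmetic itself is simple: sum the per-source doses to the maximally exposed individual (MEI) and compare against the limit. The sketch below illustrates this with invented source names and dose values; the actual per-source doses are computed with EPA's CAP88-PC code, not estimated this way.

      MREM_LIMIT = 10.0                  # 40 CFR 61, Subpart H annual limit

      source_doses_mrem = {              # hypothetical per-source MEI doses
          "Building A stack": 0.012,
          "Building B stack": 0.003,
          "diffuse sources": 0.020,
      }

      total = sum(source_doses_mrem.values())
      print(f"MEI dose: {total:.3f} mrem/y ({total * 10:.2f} uSv/y) -> "
            f"{'compliant' if total <= MREM_LIMIT else 'NOT compliant'}")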

  4. LLNL 1981: technical horizons

    International Nuclear Information System (INIS)

    1981-07-01

    Research programs at LLNL for 1981 are described in broad terms. In his annual State of the Laboratory address, Director Roger Batzel projected a $481 million operating budget for fiscal year 1982, up nearly 13% from last year. In projects for the Department of Energy and the Department of Defense, the Laboratory applies its technical facilities and capabilities to nuclear weapons design and development and to other areas of defense research, including inertial confinement fusion, nonnuclear ordnance, and particle-beam technology. LLNL is also applying its unique experience and capabilities to a variety of projects that will help the nation meet its energy needs in an environmentally acceptable manner. A sampling of recent achievements by LLNL support organizations indicates their diversity

  5. The LLNL AMS facility

    International Nuclear Information System (INIS)

    Roberts, M.L.; Bench, G.S.; Brown, T.A.

    1996-05-01

    The AMS facility at Lawrence Livermore National Laboratory (LLNL) routinely measures the isotopes ³H, ⁷Be, ¹⁰Be, ¹⁴C, ²⁶Al, ³⁶Cl, ⁴¹Ca, ⁵⁹,⁶³Ni, and ¹²⁹I. During the past two years, over 30,000 research samples have been measured. Of these samples, approximately 30% were for ¹⁴C bioscience tracer studies, 45% were ¹⁴C samples for archaeology and the geosciences, and the other isotopes constitute the remaining 25%. During the past two years at LLNL, a significant amount of work has gone into the development of the Projectile X-ray AMS (PXAMS) technique. PXAMS uses induced characteristic x-rays to discriminate against competing atomic isobars. PXAMS has been most fully developed for ⁶³Ni but shows promise for the measurement of several other long lived isotopes. During the past year LLNL has also conducted an ¹²⁹I interlaboratory comparison exercise. Recent hardware changes at the LLNL AMS facility include the installation and testing of a new thermal emission ion source, a new multianode gas ionization detector for general AMS use, re-alignment of the vacuum tank of the first of the two magnets that make up the high energy spectrometer, and a new cryo-vacuum system for the AMS ion source. In addition, they have begun design studies and carried out tests for a new high-resolution injector and a new beamline for heavy element AMS

  6. LLNL NESHAPs 2015 Annual Report - June 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, K. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gallegos, G. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); MacQueen, D. H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wegrecki, A. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-06-01

    Lawrence Livermore National Security, LLC operates facilities at Lawrence Livermore National Laboratory (LLNL) in which radionuclides are handled and stored. These facilities are subject to the U.S. Environmental Protection Agency (EPA) National Emission Standards for Hazardous Air Pollutants (NESHAPs) in Code of Federal Regulations (CFR) Title 40, Part 61, Subpart H, which regulates radionuclide emissions to air from Department of Energy (DOE) facilities. Specifically, NESHAPs limits the emission of radionuclides to the ambient air to levels resulting in an annual effective dose equivalent of 10 mrem (100 μSv) to any member of the public. Using measured and calculated emissions, and building-specific and common parameters, LLNL personnel applied the EPA-approved computer code, CAP88-PC, Version 4.0.1.17, to calculate the dose to the maximally exposed individual member of the public for the Livermore Site and Site 300.

  7. LLE-LLNL progress report on studies in nonlocal heat transport in spherical plasmas using the Fokker-Planck code SPARK

    International Nuclear Information System (INIS)

    Epperlein, E.M.

    1992-01-01

    Preliminary 1-D studies of nonlocal heat transport in spherical plasmas based on the Fokker-Planck code SPARK indicate significant levels of electron preheat and radial heat flux across a spherical heat sink surface kept at fixed temperature. However, the diffusive approximation to the Fokker-Planck equation is shown to be particularly sensitive to the nature of the inner surface boundary condition chosen. A suggested remedy is the inclusion of a target capsule in future simulation studies with SPARK
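
    The sensitivity to the inner boundary condition can be seen even in a generic diffusion treatment. The sketch below advances 1-D spherical heat diffusion one explicit step with a selectable inner boundary; it is not SPARK's Fokker-Planck scheme, and the grid, diffusivity, and boundary values are illustrative assumptions.

      import numpy as np

      def diffuse_step(T, r, dt, chi, inner_bc="fixed", T_sink=0.1):
          """One explicit step of dT/dt = (1/r^2) d/dr (r^2 chi dT/dr)."""
          dr = r[1] - r[0]
          flux = -chi * np.diff(T) / dr            # heat flux on cell faces
          r_face = 0.5 * (r[:-1] + r[1:])
          div = np.zeros_like(T)
          div[1:-1] = (r_face[1:]**2 * flux[1:]
                       - r_face[:-1]**2 * flux[:-1]) / (r[1:-1]**2 * dr)
          T_new = T - dt * div
          if inner_bc == "fixed":      # heat-sink surface held at fixed temperature
              T_new[0] = T_sink
          else:                        # reflecting inner wall (zero net flux)
              T_new[0] = T_new[1]
          T_new[-1] = T[-1]            # hold the outer boundary value
          return T_new

      r = np.linspace(0.1, 1.0, 100)   # zones between the sink surface and r=1
      T = np.ones_like(r)              # initially uniform temperature
      for _ in range(200):
          T = diffuse_step(T, r, dt=1e-5, chi=1.0)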

  8. LLNL NESHAPs, 1993 annual report

    International Nuclear Information System (INIS)

    Harrach, R.J.; Surano, K.A.; Biermann, A.H.; Gouveia, F.J.; Fields, B.C.; Tate, P.J.

    1994-06-01

    The standard defined in NESHAPs, 40 CFR 61.92, limits the emission of radionuclides to the ambient air from DOE facilities to those that would cause any member of the public to receive in any year an effective dose equivalent of 10 mrem. In August 1993 DOE and EPA signed a Federal Facility Compliance Agreement (FFCA) which established a schedule of work for LLNL to perform to demonstrate compliance with NESHAPs, 40 CFR Part 61, Subpart H. The progress in LLNL's NESHAPs program - evaluations of all emission points for the Livermore site and Site 300, of collective EDEs for populations within 80 km of each site, status in regard to continuous monitoring requirements and periodic confirmatory measurements, improvements in the sampling and monitoring systems, and progress on a NESHAPs quality assurance program - is described in this annual report. In April 1994 the EPA notified DOE and LLNL that all requirements of the FFCA had been met, and that LLNL was in compliance with the NESHAPs regulations

  9. LLNL Chemical Kinetics Modeling Group

    Energy Technology Data Exchange (ETDEWEB)

    Pitz, W J; Westbrook, C K; Mehl, M; Herbinet, O; Curran, H J; Silke, E J

    2008-09-24

    The LLNL chemical kinetics modeling group has been responsible for much progress in the development of chemical kinetic models for practical fuels. The group began its work in the early 1970s, developing chemical kinetic models for methane, ethane, ethanol and halogenated inhibitors. Most recently, it has been developing chemical kinetic models for large n-alkanes, cycloalkanes, hexenes, and large methyl esters. These component models are needed to represent gasoline, diesel, jet, and oil-sand-derived fuels.

  10. LLNL pure positron plasma program

    International Nuclear Information System (INIS)

    Hartley, J.H.; Beck, B.R.; Cowan, T.E.; Howell, R.H.; McDonald, J.L.; Rohatgi, R.R.; Fajans, J.; Gopalan, R.

    1995-01-01

    Assembly and initial testing of the Positron Time-of-Flight Trap at the Lawrence Livermore National Laboratory (LLNL) Intense Pulsed Positron Facility has been completed. The goal of the project is to accumulate a high-density positron plasma in only a few seconds, in order to facilitate studies that may require destructive diagnostics. To date, densities of at least 6 × 10⁶ positrons per cm³ have been achieved

  11. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-01-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). The waste minimization policy field has undergone continuous change since its formal inception in the 1984 HSWA legislation. The first LLNL WMPP, Revision A, is dated March 1985. A series of informal revisions was made on approximately a semi-annual basis. This Revision 2 is the third formal issuance of the WMPP document. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this new policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. In response to these policies, DOE has revised and issued implementation guidance for DOE Order 5400.1, Waste Minimization Plan and Waste Reduction Reporting of DOE Hazardous, Radioactive, and Radioactive Mixed Wastes, final draft January 1990. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes, and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements. 3 figs., 4 tabs

  12. Status of LLNL granite projects

    International Nuclear Information System (INIS)

    Ramspott, L.D.

    1980-01-01

    The status of LLNL Projects dealing with nuclear waste disposal in granitic rocks is reviewed. This review covers work done subsequent to the June 1979 Workshop on Thermomechanical Modeling for a Hardrock Waste Repository and is prepared for the July 1980 Workshop on Thermomechanical-Hydrochemical Modeling for a Hardrock Waste Repository. Topics reviewed include laboratory determination of thermal, mechanical, and transport properties of rocks at conditions simulating a deep geologic repository, and field testing at the Climax granitic stock at the USDOE Nevada Test Site

  13. LLNL Mercury Project Trinity Open Science Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Brantley, Patrick [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, Shawn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McKinley, Scott [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); O' Brien, Matt [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Peters, Doug [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pozulp, Mike [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Becker, Greg [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohror, Kathryn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, Adam [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-04-20

    The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and for more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine, as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
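
    The convergence question is the standard Monte Carlo one: the statistical error of a tally falls roughly as 1/sqrt(N) in particle count N. The toy problem below (streaming through a purely absorbing slab, with a known analytic answer) is a stand-in assumption used only to exhibit that scaling; it is not Mercury's physics or interface.

      import math
      import random

      def toy_tally(n_particles, mfp=1.0, slab=3.0, seed=1):
          """Fraction of particles that stream through an absorbing slab."""
          rng = random.Random(seed)
          hits = sum(1 for _ in range(n_particles)
                     if rng.expovariate(1.0 / mfp) > slab)
          return hits / n_particles

      exact = math.exp(-3.0)           # analytic answer for the toy slab
      for n in (10**3, 10**4, 10**5, 10**6):
          est = toy_tally(n, seed=n)
          print(f"N={n:>8}: estimate={est:.5f}  |error|={abs(est - exact):.2e}")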

  14. LLNL Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    1990-05-01

    This document is the February 14, 1990 version of the LLNL Waste Minimization Program Plan (WMPP). New legislation at the federal level is being introduced; passage will result in new EPA regulations and also DOE orders. At the state level, the Hazardous Waste Reduction and Management Review Act of 1989 was signed by the Governor. DHS is currently promulgating regulations to implement the new law. EPA has issued a proposed new policy statement on source reduction and recycling. This policy reflects a preventative strategy to reduce or eliminate the generation of environmentally harmful pollutants which may be released to the air, land surface, water, or ground water. In accordance with this policy, new guidance to hazardous waste generators on the elements of a Waste Minimization Program was issued. This WMPP is formatted to meet the current DOE guidance outlines. The current WMPP will be revised to reflect all of these proposed changes when guidelines are established. Updates, changes, and revisions to the overall LLNL WMPP will be made as appropriate to reflect ever-changing regulatory requirements

  15. 2016 LLNL Nuclear Forensics Summer Program

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-11-15

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Program is designed to give graduate students an opportunity to come to LLNL for 8–10 weeks for a hands-on research experience. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students also have the opportunity to meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  16. 2017 LLNL Nuclear Forensics Summer Internship Program

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, Mavrik [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-12-13

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Internship Program (NFSIP) is designed to give graduate students an opportunity to come to LLNL for 8-10 weeks of hands-on research. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students can also meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  17. 2016 LLNL Nuclear Forensics Summer Program

    International Nuclear Information System (INIS)

    Zavarin, Mavrik

    2016-01-01

    The Lawrence Livermore National Laboratory (LLNL) Nuclear Forensics Summer Program is designed to give graduate students an opportunity to come to LLNL for 8-10 weeks for a hands-on research experience. Students conduct research under the supervision of a staff scientist, attend a weekly lecture series, interact with other students, and present their work in poster format at the end of the program. Students also have the opportunity to meet staff scientists one-on-one, participate in LLNL facility tours (e.g., the National Ignition Facility and Center for Accelerator Mass Spectrometry), and gain a better understanding of the various science programs at LLNL.

  18. Laser wakefields at UCLA and LLNL

    International Nuclear Information System (INIS)

    Mori, W.B.; Clayton, C.E.; Joshi, C.; Dawson, J.M.; Decker, C.B.; Marsh, K.; Katsouleas, T.; Darrow, C.B.; Wilks, S.C.

    1991-01-01

    The authors report on recent progress at UCLA and LLNL on the nonlinear laser wakefield scheme. They find advantages to operating in the limit where the laser pulse is narrow enough to expel all the plasma electrons from the focal region. A description of the experimental program for the new short pulse 10 TW laser facility at LLNL is also presented

  19. Fire science at LLNL: A review

    Energy Technology Data Exchange (ETDEWEB)

    Hasegawa, H.K. (ed.)

    1990-03-01

    This fire sciences report from LLNL includes topics on: fire spread in trailer complexes, properties of welding blankets, validation of sprinkler systems, fire and smoke detectors, fire modeling, and other fire engineering and safety issues. (JEF)

  20. LLNL's Regional Seismic Discrimination Research

    International Nuclear Information System (INIS)

    Hanley, W; Mayeda, K; Myers, S; Pasyanos, M; Rodgers, A; Sicherman, A; Walter, W

    1999-01-01

    As part of the Department of Energy's research and development effort to improve the monitoring capability of the planned Comprehensive Nuclear-Test-Ban Treaty international monitoring system, Lawrence Livermore National Laboratory (LLNL) is testing and calibrating regional seismic discrimination algorithms in the Middle East, North Africa and Western Former Soviet Union. The calibration process consists of a number of steps: (1) populating the database with independently identified regional events; (2) developing regional boundaries and pre-identifying severe regional phase blockage zones; (3) measuring and calibrating coda-based magnitude scales; (4a) measuring regional amplitudes and making magnitude and distance amplitude corrections (MDAC); (4b) applying the DOE modified kriging methodology to MDAC results using the regionalized background model; (5) determining the thresholds of detectability of regional phases as a function of phase type and frequency; (6) evaluating regional phase discriminant performance both singly and in combination; (7) combining steps 1-6 to create a calibrated discrimination surface for each station; (8) assessing progress and iterating. We have now developed this calibration procedure to the point where it is fairly straightforward to apply earthquake-explosion discrimination in regions with ample empirical data. Several of the steps outlined above are discussed in greater detail in other DOE papers in this volume or in recent publications. Here we emphasize the results of the above process: station correction surfaces and their improvement to discrimination results compared with simpler calibration methods. Some of the outstanding discrimination research issues involve cases in which there is little or no empirical data. For example in many cases there is no regional nuclear explosion data at IMS stations or nearby surrogates. We have taken two approaches to this problem, first finding and using mining explosion data when available, and
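
    As a rough illustration of the amplitude-ratio idea behind steps (4a) and (6), the sketch below removes assumed magnitude and distance trends from two regional phase amplitudes and thresholds their corrected ratio. The correction form, coefficients, and threshold are placeholders, not LLNL's calibrated MDAC values.

      import math

      def corrected_log_amp(log_amp, magnitude, dist_km,
                            mag_coef, spread_coef, ref_dist_km=1.0):
          """Remove predicted magnitude and geometrical-spreading trends."""
          return (log_amp
                  - mag_coef * magnitude
                  + spread_coef * math.log10(dist_km / ref_dist_km))

      def p_to_s_discriminant(log_p, log_s, magnitude, dist_km, threshold=0.4):
          # Placeholder coefficients; calibrated values are phase- and
          # region-dependent in a real MDAC analysis.
          p = corrected_log_amp(log_p, magnitude, dist_km, 1.0, 1.1)
          s = corrected_log_amp(log_s, magnitude, dist_km, 1.0, 1.4)
          return "explosion-like" if (p - s) > threshold else "earthquake-like"

      print(p_to_s_discriminant(log_p=-1.2, log_s=-2.1, magnitude=4.0, dist_km=500.0))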

  21. LLNL Site 200 Risk Management Plan

    International Nuclear Information System (INIS)

    Pinkston, D.; Johnson, M.

    2008-01-01

    It is the Lawrence Livermore National Laboratory's (LLNL) policy to perform work in a manner that protects the health and safety of employees and the public, preserves the quality of the environment, and prevents property damage, using the Integrated Safety Management System. The environment, safety, and health are to take priority in the planning and execution of work activities at the Laboratory. Furthermore, it is the policy of LLNL to comply with applicable ES&H laws, regulations, and requirements (LLNL Environment, Safety and Health Manual, Document 1.2, ES&H Policies of LLNL). The programs and policies that improve LLNL's ability to prevent or mitigate accidental releases are described in the LLNL Environment, Safety and Health Manual, which is available to the public. The Laboratory uses an emergency management system known as the Incident Command System, in accordance with the California Standardized Emergency Management System (SEMS), to respond to Operational Emergencies and to mitigate consequences resulting from them. Operational Emergencies are defined as unplanned, significant events or conditions that require time-urgent response from outside the immediate area of the incident and that could seriously impact the safety or security of the public, LLNL's employees, its facilities, or the environment. The Emergency Plan contains LLNL's Operational Emergency response policies, commitments, and institutional responsibilities for managing and recovering from emergencies. It is not possible to list in the Emergency Plan all events that could occur during any given emergency situation. However, a combination of hazard assessments, an effective Emergency Plan, and Emergency Plan Implementing Procedures (EPIPs) can provide the framework for responses to postulated emergency situations. Revision 7, 2004 of the above-mentioned LLNL Emergency Plan is available to the public. The most recent revision of the LLNL Emergency Plan LLNL-AM-402556, Revision 11, March

  22. FY16 LLNL Omega Experimental Programs

    International Nuclear Information System (INIS)

    Heeter, R. F.; Ali, S. J.; Benstead, J.; Celliers, P. M.; Coppari, F.; Eggert, J.; Erskine, D.; Panella, A. F.; Fratanduono, D. E.; Hua, R.; Huntington, C. M.; Jarrott, L. C.; Jiang, S.; Kraus, R. G.; Lazicki, A. E.; LePape, S.; Martinez, D. A.; McNaney, J. M.; Millot, M. A.; Moody, J.; Pak, A. E.; Park, H. S.; Ping, Y.; Pollock, B. B.; Rinderknecht, H.; Ross, J. S.; Rubery, M.; Sio, H.; Smith, R. F.; Swadling, G. F.; Wehrenberg, C. E.; Collins, G. W.; Landen, O. L.; Wan, A.; Hsing, W.

    2016-01-01

    In FY16, LLNL's High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall, these LLNL programs led 430 target shots in FY16, with 304 shots using just the OMEGA laser system, and 126 shots using just the EP laser system. Approximately 21% of the total number of shots (77 OMEGA shots and 14 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 79% (227 OMEGA shots and 112 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports. In addition to these experiments, LLNL Principal Investigators led a variety of Laboratory Basic Science campaigns using OMEGA and EP, including 81 target shots using just OMEGA and 42 shots using just EP. The highlights of these are also summarized, following the ICF and HED campaigns. Overall, LLNL PIs led a total of 553 shots at LLE in FY 2016. In addition, LLNL PIs also supported 57 NLUF shots on Omega and 31 NLUF shots on EP, in collaboration with the academic community.

  23. FY16 LLNL Omega Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ali, S. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Benstead, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Eggert, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Erskine, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Panella, A. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hua, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jarrott, L. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jiang, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lazicki, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); LePape, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNaney, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moody, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pak, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rinderknecht, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ross, J. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rubery, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sio, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Swadling, G. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wehrenberg, C. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    In FY16, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall, these LLNL programs led 430 target shots in FY16, with 304 shots using just the OMEGA laser system, and 126 shots using just the EP laser system. Approximately 21% of the total number of shots (77 OMEGA shots and 14 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 79% (227 OMEGA shots and 112 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports. In addition to these experiments, LLNL Principal Investigators led a variety of Laboratory Basic Science campaigns using OMEGA and EP, including 81 target shots using just OMEGA and 42 shots using just EP. The highlights of these are also summarized, following the ICF and HED campaigns. Overall, LLNL PIs led a total of 553 shots at LLE in FY 2016. In addition, LLNL PIs also supported 57 NLUF shots on Omega and 31 NLUF shots on EP, in collaboration with the academic community.

  24. Operating characteristics and modeling of the LLNL 100-kV electric gun

    International Nuclear Information System (INIS)

    Osher, J.E.; Barnes, G.; Chau, H.H.; Lee, R.S.; Lee, C.; Speer, R.; Weingart, R.C.

    1989-01-01

    In the electric gun, the explosion of an electrically heated metal foil and the accompanying magnetic forces drive a thin flyer plate up a short barrel. Flyer velocities of up to 18 km/s make the gun useful for hypervelocity impact studies. The authors briefly review the technological evolution of the exploding-metal circuit elements that power the gun, describe the 100-kV electric gun designed at Lawrence Livermore National Laboratory (LLNL) in some detail, and present the general principles of electric gun operation. They compare the experimental performance of the LLNL gun with a simple model and with predictions of a magnetohydrodynamics code
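
    For scale, the "simple model" style of estimate can be reproduced with an energy balance: assume some fraction of the stored capacitor energy ends up as flyer kinetic energy. The capacitance, coupling efficiency, and flyer mass below are illustrative assumptions, not parameters of the LLNL gun; the point is only that kilojoule-scale coupling into a tens-of-milligram flyer gives velocities of the same order as the 18 km/s quoted above.

      import math

      def flyer_velocity(c_farads, v_volts, mass_kg, efficiency=0.2):
          """v = sqrt(2 * eta * E_stored / m) from a kinetic-energy balance."""
          e_stored = 0.5 * c_farads * v_volts**2    # capacitor bank energy, J
          return math.sqrt(2.0 * efficiency * e_stored / mass_kg)

      # e.g. an assumed 10 uF bank charged to 100 kV driving a 50 mg flyer
      v = flyer_velocity(10e-6, 100e3, 50e-6)
      print(f"estimated flyer velocity: {v / 1000:.1f} km/s")   # ~20 km/s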

  25. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.; Stoeffl, W.; Carter, D.

    1999-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectroscopy

  26. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.H.; Stoeffl, W.

    1998-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectra

  27. LLNL high-field coil program

    International Nuclear Information System (INIS)

    Miller, J.R.

    1986-01-01

    An overview is presented of the LLNL High-Field Superconducting Magnet Development Program wherein the technology is being developed for producing fields in the range of 15 T and higher for both mirror and tokamak applications. Applications requiring less field will also benefit from this program. In addition, recent results on the thermomechanical performance of cable-in-conduit conductor systems are presented and their importance to high-field coil design discussed

  28. LIFTIRS: hyperspectral imaging at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Fields, D. [Lawrence Livermore National Lab., CA (United States); Bennett, C.; Carter, M.

    1994-11-15

    LIFTIRS, the Livermore Imaging Fourier Transform InfraRed Spectrometer, recently developed at LLNL, is an instrument which enables extremely efficient collection and analysis of hyperspectral imaging data. LIFTIRS produces a spatial format of 128x128 pixels, with spectral resolution arbitrarily variable up to a maximum of 0.25 inverse centimeters. Time resolution and spectral resolution can be traded off for each other with great flexibility. We will discuss recent measurements made with this instrument, and present typical images and spectra.

  29. Probabilistic Seismic Hazards Update for LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Menchawi, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fernandez, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-03-30

    Fugro Consultants, Inc. (FCL) completed the Probabilistic Seismic Hazard Analysis (PSHA) performed for Building 332 at the Lawrence Livermore National Laboratory (LLNL), near Livermore, CA. The study performed for the LLNL site includes a comprehensive review of recent information relevant to the LLNL regional tectonic setting and regional seismic sources in the vicinity of the site and development of seismic wave transmission characteristics. The Seismic Source Characterization (SSC), documented in Project Report No. 2259-PR-02 (FCL, 2015b), and Ground Motion Characterization (GMC), documented in Project Report No. 2259-PR-06 (FCL, 2015a), were developed in accordance with ANS/ANSI 2.29-2008 Level 2 PSHA guidelines. The ANS/ANSI 2.29-2008 Level 2 PSHA framework is documented in Project Report No. 2259-PR-05 (FCL, 2016a). The Hazard Input Document (HID) for input into the PSHA developed from the SSC and GMC is presented in Project Report No. 2259-PR-04 (FCL, 2016b). The site characterization used as input for development of the idealized site profiles including epistemic uncertainty and aleatory variability is presented in Project Report No. 2259-PR-03 (FCL, 2015c). The PSHA results are documented in Project Report No. 2259-PR-07 (FCL, 2016c).

  30. Release isentrope measurements with the LLNL electric gun

    Energy Technology Data Exchange (ETDEWEB)

    Gathers, G.R.; Osher, J.E.; Chau, H.H.; Weingart, R.C.; Lee, C.G.; Diaz, E.

    1987-06-01

    The liquid-vapor coexistence boundary is not well known for most metals because the extreme conditions near the critical point create severe experimental difficulties. The isentropes passing through the liquid-vapor region typically begin from rather large pressures on the Hugoniot. We are attempting to use the high velocities achievable with the Lawrence Livermore National Laboratory (LLNL) electric gun to obtain these extreme states in aluminum and measure the release isentropes by releasing into a series of calibrated standards with known Hugoniots. To achieve the large pressure drops needed to explore the liquid-vapor region, we use argon gas, for which Hugoniots have been calculated using the ACTEX code, as one of the release materials.

  31. The LLNL portable tritium processing system

    International Nuclear Information System (INIS)

    Anon.

    1995-01-01

    The end of the Cold War significantly reduced the need for facilities to handle radioactive materials for the US nuclear weapons program. The LLNL Tritium Facility was among those slated for decommissioning. The plans for the facility have since been reversed, and it remains open. Nevertheless, in the early 1990s, the cleanup (the Tritium Inventory Removal Project) was undertaken. However, removing the inventory of tritium within the facility and cleaning up any pockets of high-level residual contamination required that we design a system adequate to the task and meeting today's stringent standards of worker and environmental protection. In collaboration with Sandia National Laboratory and EG&G Mound Applied Technologies, we fabricated a three-module Portable Tritium Processing System (PTPS) that meets current glovebox standards, is operated from a portable console, and is movable from laboratory to laboratory for performing the basic tritium processing operations: pumping and gas transfer, gas analysis, and gas-phase tritium scrubbing. The Tritium Inventory Removal Project is now in its final year, and the portable system continues to be the workhorse. To meet a strong demand for tritium services, the LLNL Tritium Facility will be reconfigured to provide state-of-the-art tritium and radioactive decontamination research and development. The PTPS will play a key role in this new facility

  32. LLNL Livermore site Groundwater Surveillance Plan

    International Nuclear Information System (INIS)

    1992-04-01

    Department of Energy (DOE) Order 5400.1 establishes environmental protection program requirements, authorities, and responsibilities for DOE operations to assure compliance with federal, state, and local environmental protection laws and regulations; Federal Executive Orders; and internal DOE policies. The DOE Order contains requirements and guidance for environmental monitoring programs, the objectives of which are to demonstrate compliance with legal and regulatory requirements imposed by federal, state, and local agencies; confirm adherence to DOE environmental protection policies; and support environmental management decisions. The environmental monitoring programs consist of two major activities: (1) measurement and monitoring of effluents from DOE operations, and (2) surveillance through measurement, monitoring, and calculation of the effects of those operations on the environment and public health. The latter concern, that of assessing the effects, if any, of Lawrence Livermore National Laboratory (LLNL) operations and activities on on-site and off-site surface waters and groundwaters, is addressed by an Environmental Surveillance Program being developed by LLNL. The Groundwater Surveillance Plan presented here has been developed on a site-specific basis, taking into consideration facility characteristics, applicable regulations, hazard potential, quantities and concentrations of materials released, the extent and use of local water resources, and specific local public interest and concerns

  33. High intensity positron program at LLNL

    International Nuclear Information System (INIS)

    Asoka-Kumar, P.; Howell, R.; Stoeffl, W.; Carter, D.

    1999-01-01

    Lawrence Livermore National Laboratory (LLNL) is the home of the world's highest current beam of keV positrons. The potential for establishing a national center for materials analysis using positron annihilation techniques around this capability is being actively pursued. The high LLNL beam current will enable investigations in several new areas. We are developing a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with submicron resolution. Below we summarize the important design features of this microprobe. Several experimental end stations will be available that can utilize the high current beam with a time distribution determined by the electron linac pulse structure, quasi-continuous, or bunched at 20 MHz, and can operate in an electrostatic or (and) magnetostatic environment. Some of the planned early experiments are: two-dimensional angular correlation of annihilation radiation of thin films and buried interfaces, positron diffraction holography, positron induced desorption, and positron induced Auger spectroscopy. copyright 1999 American Institute of Physics

  34. Proposed LLNL electron beam ion trap

    International Nuclear Information System (INIS)

    Marrs, R.E.; Egan, P.O.; Proctor, I.; Levine, M.A.; Hansen, L.; Kajiyama, Y.; Wolgast, R.

    1985-01-01

    The interaction of energetic electrons with highly charged ions is of great importance to several research fields such as astrophysics, laser fusion and magnetic fusion. In spite of this importance there are almost no measurements of electron interaction cross sections for ions more than a few times ionized. To address this problem an electron beam ion trap (EBIT) is being developed at LLNL. The device is essentially an EBIS except that it is not intended as a source of extracted ions. Instead, the (variable energy) electron beam interacting with the confined ions will be used to obtain measurements of ionization cross sections, dielectronic recombination cross sections, radiative recombination cross sections, energy levels and oscillator strengths. Charge-exchange recombination cross sections with neutral gases could also be measured. The goal is to produce and study elements in many different charge states up to He-like xenon and Ne-like uranium. 5 refs., 2 figs

  35. FY14 LLNL OMEGA Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fournier, K. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Baker, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barrios, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bernstein, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johnson, M. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Jenei, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ma, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNabb, D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Moore, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nagel, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patel, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Perez, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ross, J. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Rygg, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zylstra, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2014-10-13

    In FY14, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall these LLNL programs led 324 target shots in FY14, with 246 shots using just the OMEGA laser system, 62 shots using just the EP laser system, and 16 Joint shots using Omega and EP together. Approximately 31% of the total number of shots (62 OMEGA shots and 42 EP shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 69% (200 OMEGA shots and 36 EP shots, including the 16 Joint shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports.

  36. FY15 LLNL OMEGA Experimental Programs

    Energy Technology Data Exchange (ETDEWEB)

    Heeter, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Baker, K. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barrios, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Beckwith, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Casey, D. T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Celliers, P. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Coppari, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fournier, K. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fratanduono, D. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Frenje, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Huntington, C. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kraus, R. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lazicki, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Martinez, D. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McNaney, J. M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Millot, M. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pak, A. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Park, H. S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ping, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pollock, B. B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Smith, R. F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wehrenberg, C. E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Widmann, K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Collins, G. W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Landen, O. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wan, A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Hsing, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-04

    In FY15, LLNL’s High-Energy-Density Physics (HED) and Indirect Drive Inertial Confinement Fusion (ICF-ID) programs conducted several campaigns on the OMEGA laser system and on the EP laser system, as well as campaigns that used the OMEGA and EP beams jointly. Overall these LLNL programs led 468 target shots in FY15, with 315 shots using just the OMEGA laser system, 145 shots using just the EP laser system, and 8 Joint shots using Omega and EP together. Approximately 25% of the total number of shots (56 OMEGA shots and 67 EP shots, including the 8 Joint shots) supported the Indirect Drive Inertial Confinement Fusion Campaign (ICF-ID). The remaining 75% (267 OMEGA shots and 86 EP shots) were dedicated to experiments for High-Energy-Density Physics (HED). Highlights of the various HED and ICF campaigns are summarized in the following reports.

  37. LLNL/JNC repository collaboration interim progress report

    International Nuclear Information System (INIS)

    Bourcier, W.L.; Couch, R.G.; Gansemer, J.; Halsey, W.G.; Palmer, C.E.; Sinz, K.H.; Stout, R.B.; Wijesinghe, A.; Wolery, T.J.

    1999-01-01

    Under this Annex, a research program on the near-field performance assessment related to the geological disposal of radioactive waste will be carried out at the Lawrence Livermore National Laboratory (LLNL) in close collaboration with the Power Reactor and Nuclear Fuel Development Corporation of Japan (PNC). This program will focus on activities that provide direct support for PNC's near-term and long-term needs that will, in turn, utilize and further strengthen US capabilities for radioactive waste management. The work scope for two years will be designed based on PNC's priorities for its second progress report (the H12 report) of research and development for high-level radioactive waste disposal and on the interests and capabilities of LLNL. The work will focus on chemical modeling of the near-field environment and long-term mechanical modeling of the engineered barrier system as it evolves. Certain activities in this program will provide for a final iteration of analyses to provide additional technical basis prior to the year 2000 as determined in discussions with PNC's technical coordinator. The work for two years will include the following activities: Activity 1: Chemical Modeling of EBS Materials Interactions--Task 1.1 Chemical Modeling of Iron Effects on Borosilicate Glass Durability; and Task 1.2 Changes in Overpack and Bentonite Properties Due to Metal, Bentonite and Water Interactions. Activity 2: Thermodynamic Database Validation and Comparison--Task 2.1 Set up EQ3/6 to Run with the Pitzer-based PNC Thermodynamic Data Base; Task 2.2 Provide Expert Consultation on the Thermodynamic Data Base; and Task 2.3 Provide Analysis of Likely Solubility Controls on Selenium. Activity 3: Engineered Barrier Performance Assessment of the Unsaturated, Oxidizing Transient--Task 3.1 Apply YMIM to PNC Transient EBS Performance; Task 3.2 Demonstrate Methods for Modeling the Return to Reducing Conditions; and Task 3.3 Evaluate the Potential for Stress Corrosion

  38. Compilation of LLNL CUP-2 Data

    Energy Technology Data Exchange (ETDEWEB)

    Eppich, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kips, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lindvall, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-07-31

    The CUP-2 uranium ore concentrate (UOC) standard reference material, a powder, was produced at the Blind River uranium refinery of Eldorado Resources Ltd. in Canada in 1986. This material was produced as part of a joint effort by the Canadian Certified Reference Materials Project and the Canadian Uranium Producers Metallurgical Committee to develop a certified reference material for uranium concentration and the concentration of several impurity constituents. This standard was developed to satisfy the requirements of the UOC mining and milling industry, and was characterized with this purpose in mind. To produce CUP-2, approximately 25 kg of UOC derived from the Blind River uranium refinery was blended, homogenized, and assessed for homogeneity by X-ray fluorescence (XRF) analysis. The homogenized material was then packaged into bottles, containing 50 g of material each, and distributed for analysis to laboratories in 1986. The CUP-2 UOC standard was characterized by an interlaboratory analysis program involving eight member laboratories, six commercial laboratories, and three additional volunteer laboratories. Each laboratory provided five replicate results on up to 17 analytes, including total uranium concentration, and moisture content. The selection of analytical technique was left to each participating laboratory. Uranium was reported on an “as-received” basis; all other analytes (besides moisture content) were reported on a “dry-weight” basis. A bottle of 25 g of CUP-2 UOC standard as described above was purchased by LLNL and characterized by the LLNL Nuclear Forensics Group. Non-destructive and destructive analytical techniques were applied to the UOC sample. Information obtained from short-term techniques such as photography, gamma spectrometry, and scanning electron microscopy was used to guide the performance of longer-term techniques such as ICP-MS. Some techniques, such as XRF and ICP-MS, provided complementary types of data. The results

  39. The new LLNL AMS sample changer

    International Nuclear Information System (INIS)

    Roberts, M.L.; Norman, P.J.; Garibaldi, J.L.; Hornady, R.S.

    1993-01-01

    The Center for Accelerator Mass Spectrometry at LLNL has installed a new 64-position AMS sample changer on our spectrometer. This new sample changer has the capability of being controlled manually by an operator or automatically by the AMS data acquisition computer. Automatic control of the sample changer by the data acquisition system is a necessary step towards unattended AMS operation in our laboratory. The sample changer uses a fiber optic shaft encoder for rough rotational indexing of the sample wheel and a series of sequenced pneumatic cylinders for final mechanical indexing of the wheel and insertion and retraction of samples. Transit time from sample to sample varies from 4 s to 19 s, depending on distance moved. Final sample location can be set to within 50 microns on the x and y axes and within 100 microns on the z axis. Changing sample wheels on the new sample changer is also easier and faster than was possible on our previous sample changer and does not require the use of any tools

  40. Nuclear physics and heavy element research at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Stoyer, M A; Ahle, L E; Becker, J A; Bernstein, L A; Bleuel, D L; Burke, J T; Dashdorj, D; Henderson, R A; Hurst, A M; Kenneally, J M; Lesher, S R; Moody, K J; Nelson, S L; Norman, E B; Pedretti, M; Scielzo, N D; Shaughnessy, D A; Sheets, S A; Stoeffl, W; Stoyer, N J; Wiedeking, M; Wilk, P A; Wu, C Y

    2009-05-11

    This paper highlights some of the current basic nuclear physics research at Lawrence Livermore National Laboratory (LLNL). The work at LLNL concentrates on investigating nuclei at the extremes. The Experimental Nuclear Physics Group performs research to improve our understanding of nuclei, nuclear reactions, nuclear decay processes, and nuclear astrophysics, expertise that is utilized for important laboratory national security programs and for world-class peer-reviewed basic research.

  1. Development of positron diffraction and holography at LLNL

    International Nuclear Information System (INIS)

    Hamza, A.; Asoka-Kumar, P.; Stoeffl, W.; Howell, R.; Miller, D.; Denison, A.

    2003-01-01

    A low-energy positron diffraction and holography spectrometer is currently being constructed at the Lawrence Livermore National Laboratory (LLNL) to study surfaces and adsorbed structures. This instrument will operate in conjunction with the LLNL intense positron beam produced by the 100 MeV LINAC allowing data to be acquired in minutes rather than days. Positron diffraction possesses certain advantages over electron diffraction which are discussed. Details of the instrument based on that of low-energy electron diffraction are described

  2. LLNL/YMP Waste Container Fabrication and Closure Project

    International Nuclear Information System (INIS)

    1990-10-01

    The Department of Energy's Office of Civilian Radioactive Waste Management (OCRWM) Program is studying Yucca Mountain, Nevada as a suitable site for the first US high-level nuclear waste repository. Lawrence Livermore National Laboratory (LLNL) has the responsibility for designing and developing the waste package for the permanent storage of high-level nuclear waste. This report is a summary of the technical activities for the LLNL/YMP Nuclear Waste Disposal Container Fabrication and Closure Development Project. Candidate welding closure processes were identified in the Phase 1 report. This report discusses Phase 2. Phase 2 of this effort involved laboratory studies to determine the optimum fabrication and closure processes. Because of budget limitations, LLNL narrowed the materials for evaluation in Phase 2 from the original six to four: Alloy 825, CDA 715, CDA 102 (or CDA 122) and CDA 952. Phase 2 studies focused on evaluation of candidate material in conjunction with fabrication and closure processes

  3. Diversification and strategic management of LLNL's R&D portfolio

    International Nuclear Information System (INIS)

    Glinsky, M.E.

    1994-12-01

    Strategic management of LLNL's research effort is addressed. A general framework is established by presenting the McKinsey/BCG Matrix Analysis as it applies to the research portfolio. The framework is used to establish the need for the diversification into new attractive areas of research and for the improvement of the market position of existing research in those attractive areas. With the need for such diversification established, attention is turned to optimizing it. There are limited resources available. It is concluded that LLNL should diversify into only a few areas and try to obtain full market share as soon as possible

  4. Thermochemical hydrogen production studies at LLNL: a status report

    International Nuclear Information System (INIS)

    Krikorian, O.H.

    1982-01-01

    Currently, studies are underway at the Lawrence Livermore National Laboratory (LLNL) on thermochemical hydrogen production based on magnetic fusion energy (MFE) and solar central receivers as heat sources. These areas of study were described earlier at the previous IEA Annex I Hydrogen Workshop (Juelich, West Germany, September 23-25, 1981), and a brief update will be given here. Some basic research has also been underway at LLNL on the electrolysis of water from fused phosphate salts, but there are no current results in that area, and the work is being terminated

  5. Spill exercise 1980: an LLNL emergency training exercise

    International Nuclear Information System (INIS)

    Morse, J.L.; Gibson, T.A.; Vance, W.F.

    1981-01-01

    An emergency training exercise at Lawrence Livermore National Laboratory (LLNL) demonstrated that off-hours emergency personnel can respond promptly and effectively to an emergency situation involving radiation, hazardous chemicals, and injured persons. The exercise simulated an explosion in a chemistry laboratory and a subsequent toxic-gas release

  6. Capabilities required to conduct the LLNL plutonium mission

    International Nuclear Information System (INIS)

    Kass, J.; Bish, W.; Copeland, A.; West, J.; Sack, S.; Myers, B.

    1991-01-01

    This report outlines the LLNL plutonium-related mission anticipated over the next decade and defines the capabilities required to meet that mission wherever the Plutonium Facility is located. If plutonium work is relocated to a place where the facility is shared, then some capabilities can be commonly used by the sharing parties. However, it is essential that LLNL independently control about 20,000 sq ft of net lab space, filled with LLNL-controlled equipment, and staffed by LLNL employees. It is estimated that the cost to construct this facility should range from $140M to $200M. Purchase and installation of equipment to replace that already in Bldg 332, along with additional equipment identified as being needed to meet the mission for the next ten to fifteen years, is estimated to cost $118M. About $29M of the equipment could be shared. The Hardened Engineering Test Building (HETB), with its additional 8,000 sq ft of unique test capability, must also be replaced. The fully equipped replacement cost is estimated to be about $10M. About 40,000 sq ft of setup and support space are needed along with office and related facilities for a 130-person resident staff. The setup space is estimated to cost $8M. The annual cost of a 130-person resident staff (100 programmatic and 30 facility operation) is estimated to be $20M

  7. Proceedings of the LLNL Technical Women's Symposium

    Energy Technology Data Exchange (ETDEWEB)

    von Holtz, E. [ed.]

    1993-12-31

    This report documents events of the LLNL Technical Women's Symposium. Topics include: future of computer systems, environmental technology, defense and space, Nova Inertial Confinement Fusion Target Physics, technical communication, tools and techniques for biology in the 1990s, automation and robotics, software applications, materials science, atomic vapor laser isotope separation, technology transfer, and professional development workshops.

  8. Proceedings of the LLNL technical women's symposium

    Energy Technology Data Exchange (ETDEWEB)

    von Holtz, E. [ed.]

    1994-12-31

    Women from institutions such as LLNL, LBL, Sandia, and SLAC presented papers at this conference. The papers deal with many aspects of global security, global ecology, and bioscience; they also reflect the challenges faced in improving business practices, communicating effectively, and expanding collaborations in the industrial world. Approximately 87 "abstracts" are included in six sessions; more are included in the addendum.

  9. The design and implementation of the LLNL gigabit testbed

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, D. [Lawrence Livermore National Labs., CA (United States)]

    1994-12-01

    This paper will look at the design and implementation of the LLNL Gigabit testbed (LGTB), where various high-speed networking products can be tested in one environment. The paper will discuss the philosophy behind the design of, and the need for, the testbed; the tests that are performed in the testbed; and the tools used to implement those tests.

  10. LLNL X-ray Calibration and Standards Laboratory

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

    The LLNL X-ray Calibration and Standards Laboratory is a unique facility for developing and calibrating x-ray sources, detectors, and materials, and for conducting x-ray physics research in support of our weapon and fusion-energy programs

  11. Hazardous-waste analysis plan for LLNL operations

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, R.S.

    1982-02-12

    The Lawrence Livermore National Laboratory is involved in many facets of research ranging from nuclear weapons research to advanced biomedical studies. Approximately 80% of all programs at LLNL generate hazardous waste in one form or another. Aside from producing waste from industrial-type operations (oils, solvents, bottom sludges, etc.), many unique and toxic wastes are generated, such as phosgene, dioxin (TCDD), radioactive wastes and high explosives. Any successful waste management program must address the following: proper identification of the waste, safe handling procedures, and proper storage containers and areas. This section of the Waste Management Plan will address methodologies used for the Analysis of Hazardous Waste. In addition to the wastes defined in 40 CFR 261, LLNL and Site 300 also generate radioactive waste not specifically covered by RCRA. However, for completeness, the Waste Analysis Plan will address all hazardous waste.

  12. Hazardous-waste analysis plan for LLNL operations

    International Nuclear Information System (INIS)

    Roberts, R.S.

    1982-01-01

    The Lawrence Livermore National Laboratory is involved in many facets of research ranging from nuclear weapons research to advanced biomedical studies. Approximately 80% of all programs at LLNL generate hazardous waste in one form or another. Aside from producing waste from industrial-type operations (oils, solvents, bottom sludges, etc.), many unique and toxic wastes are generated, such as phosgene, dioxin (TCDD), radioactive wastes and high explosives. Any successful waste management program must address the following: proper identification of the waste, safe handling procedures, and proper storage containers and areas. This section of the Waste Management Plan will address methodologies used for the Analysis of Hazardous Waste. In addition to the wastes defined in 40 CFR 261, LLNL and Site 300 also generate radioactive waste not specifically covered by RCRA. However, for completeness, the Waste Analysis Plan will address all hazardous waste

  13. Lawrence Livermore National Laboratory (LLNL) Waste Minimization Program Plan

    International Nuclear Information System (INIS)

    Heckman, R.A.; Tang, W.R.

    1989-01-01

    This Program Plan document describes the background of the Waste Minimization field at Lawrence Livermore National Laboratory (LLNL) and refers to the significant studies that have impacted legislative efforts at both the federal and state levels. A short history of formal LLNL waste minimization efforts is provided. Also included are general findings from analysis of work to date, with emphasis on source reduction findings. A short summary is provided on current regulations and probable future legislation which may impact waste minimization methodology. The LLNL Waste Minimization Program Plan is designed to be dynamic and flexible so as to meet current regulations, and yet is able to respond to an ever-changing regulatory environment. 19 refs., 12 figs., 8 tabs

  14. Seismic evaluation of the LLNL plutonium facility (Building 332)

    International Nuclear Information System (INIS)

    Hall, W.J.; Sozen, M.A.

    1982-03-01

    The expected performance of the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (Building 332) subjected to earthquake ground motion has been evaluated. Anticipated behavior of the building, glove boxes, ventilation system and other systems critical for containment of plutonium is described for three severe postulated earthquake excitations. Based upon this evaluation, some damage to the building, glove boxes and ventilation system would be expected but no collapse of any structure is anticipated as a result of the postulated earthquake ground motions

  15. Probabilistic Seismic Hazards Update for LLNL: PSHA Results Report

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, Alfredo [Fugro Consultants, Inc., Houston, TX (United States); Altekruse, Jason [Fugro Consultants, Inc., Houston, TX (United States); Menchawi, Osman El [Fugro Consultants, Inc., Houston, TX (United States)

    2016-03-11

    This report presents the Probabilistic Seismic Hazard Analysis (PSHA) performed for Building 332 at the Lawrence Livermore National Laboratory (LLNL), near Livermore, CA by Fugro Consultants, Inc. (FCL). This report is specific to Building 332 only and not to other portions of the Laboratory. The study performed for the LLNL site includes a comprehensive review of recent information relevant to the LLNL regional tectonic setting and regional seismic sources in the vicinity of the site and development of seismic wave transmission characteristics. The Seismic Source Characterization (SSC), documented in Project Report No. 2259-PR-02 (FCL, 2015a), and Ground Motion Characterization (GMC), documented in Project Report No. 2259-PR-06 (FCL, 2015c) were developed in accordance with ANS/ANSI 2.29-2008 Level 2 PSHA guidelines. The ANS/ANSI 2.29-2008 Level 2 PSHA framework is documented in Project Report No. 2259-PR-05 (FCL, 2016a). The Hazard Input Document (HID) for input into the PSHA developed from the SSC is presented in Project Report No. 2259-PR-04 (FCL, 2016b). The site characterization used as input for development of the idealized site profiles including epistemic uncertainty and aleatory variability is presented in Project Report No. 2259-PR-03 (FCL, 2015b).

  16. GAMA-LLNL Alpine Basin Special Study: Scope of Work

    Energy Technology Data Exchange (ETDEWEB)

    Singleton, M J; Visser, A; Esser, B K; Moran, J E

    2011-12-12

    For this task LLNL will examine the vulnerability of drinking water supplies in foothills and higher elevation areas to climate change impacts on recharge. Recharge locations and vulnerability will be determined through examination of groundwater ages and noble gas recharge temperatures in high elevation basins. LLNL will determine whether short residence times are common in one or more subalpine basins. LLNL will measure groundwater ages, recharge temperatures, hydrogen and oxygen isotopes, major anions and carbon isotope compositions on up to 60 samples from monitoring wells and production wells in these basins. In addition, a small number of carbon isotope analyses will be performed on surface water samples. The deliverable for this task will be a technical report that provides the measured data and an interpretation of the data from one or more subalpine basins. Data interpretation will: (1) Consider climate change impacts to recharge and its impact on water quality; (2) Determine primary recharge locations and their vulnerability to climate change; and (3) Delineate the most vulnerable areas and describe the likely impacts to recharge.

  17. Translation of ARAC computer codes

    International Nuclear Information System (INIS)

    Takahashi, Kunio; Chino, Masamichi; Honma, Toshimitsu; Ishikawa, Hirohiko; Kai, Michiaki; Imai, Kazuhiko; Asai, Kiyoshi

    1982-05-01

    In 1981 we translated the well-known MATHEW and ADPIC computer codes and their auxiliary codes from the CDC 7600 version to the FACOM M-200. The codes form part of the Atmospheric Release Advisory Capability (ARAC) system of Lawrence Livermore National Laboratory (LLNL). MATHEW is a code for three-dimensional wind field analysis. Using observed data, it calculates the mass-consistent wind field of grid cells by a variational method. ADPIC is a code for three-dimensional concentration prediction of gases and particulates released to the atmosphere. It calculates concentrations in grid cells by the particle-in-cell method. They are written in LLLTRAN, i.e., the LLNL Fortran language, and are implemented on the CDC 7600 computers of LLNL. This report describes (i) the computational methods of MATHEW/ADPIC and their auxiliary codes, (ii) comparisons of the calculated results with our JAERI particle-in-cell and Gaussian plume models, and (iii) the translation procedures from the CDC version to the FACOM M-200. Under the permission of LLNL G-Division, this report is published to keep track of the translation procedures and to serve our JAERI researchers for comparisons and references in their work. (author)
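
    As a rough illustration of the particle-in-cell method that ADPIC uses, the sketch below advects Lagrangian particles in a one-dimensional wind field and deposits their masses onto grid cells to form concentrations. The grid, particle data, and uniform wind are illustrative assumptions only, not the ARAC implementation.

        import numpy as np

        # Particle-in-cell concentration estimate (ADPIC-style), reduced to 1D.
        # Particle positions/masses and the wind field are illustrative only.
        rng = np.random.default_rng(0)
        nx, dx = 100, 50.0                           # 100 cells, each 50 m wide
        x = rng.normal(1000.0, 200.0, 5000)          # particle positions (m)
        mass = np.full(x.size, 1.0e-3)               # mass carried by each particle (kg)

        u, dt = 2.0, 10.0                            # uniform wind (m/s), time step (s)
        x += u * dt                                  # advect the particles

        cell = np.clip((x // dx).astype(int), 0, nx - 1)
        conc = np.bincount(cell, weights=mass, minlength=nx) / dx  # mass per unit length
        print(conc.max())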

  18. Final report on the LLNL compact torus acceleration project

    International Nuclear Information System (INIS)

    Eddleman, J.; Hammer, J.; Hartman, C.; McLean, H.; Molvik, A.

    1995-01-01

    In this report, we summarize recent work at LLNL on the compact torus (CT) acceleration project. The CT accelerator is a novel technique for projecting plasmas to high velocities and reaching high energy density states. The accelerator exploits magnetic confinement in the CT to stably transport plasma over large distances and to directed kinetic energies large in comparison with the CT internal and magnetic energy. Applications range from heating and fueling magnetic fusion devices, generation of intense pulses of x-rays or neutrons for weapons effects and high energy-density fusion concepts

  19. A Novel Approach to Semantic and Coreference Annotation at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Firpo, M

    2005-02-04

    A case is made for the importance of high quality semantic and coreference annotation. The challenges of providing such annotation are described. Asperger's Syndrome is introduced, and the connections are drawn between the needs of text annotation and the abilities of persons with Asperger's Syndrome to meet those needs. Finally, a pilot program is recommended wherein semantic annotation is performed by people with Asperger's Syndrome. The primary points embodied in this paper are as follows: (1) Document annotation is essential to the Natural Language Processing (NLP) projects at Lawrence Livermore National Laboratory (LLNL); (2) LLNL does not currently have a system in place to meet its need for text annotation; (3) Text annotation is challenging for a variety of reasons, many related to its very rote nature; (4) Persons with Asperger's Syndrome are particularly skilled at rote verbal tasks, and behavioral experts agree that they would excel at text annotation; and (5) A pilot study is recommended in which two to three people with Asperger's Syndrome annotate documents and then the quality and throughput of their work is evaluated relative to that of their neuro-typical peers.

  20. LLNL (Lawrence Livermore National Laboratory) research on cold fusion

    Energy Technology Data Exchange (ETDEWEB)

    Thomassen, K I; Holzrichter, J F [eds.

    1989-09-14

    With the appearance of reports on "Cold Fusion," scientists at the Lawrence Livermore National Laboratory (LLNL) began a series of increasingly sophisticated experiments and calculations to explain these phenomena. These experiments can be categorized as follows: (a) simple experiments to replicate the Utah results, (b) more sophisticated experiments to place lower bounds on the generation of heat and production of nuclear products, (c) a collaboration with Texas A&M University to analyze electrodes and electrolytes for fusion by-products in a cell producing 10% excess heat (we found no by-products), and (d) attempts to replicate the Frascati experiment that first found neutron bursts when high-pressure deuterium gas in a cylinder with Ti chips was temperature-cycled. We failed in categories (a) and (b) to replicate either the Pons/Fleischmann or the Jones phenomena. We have seen phenomena similar to the Frascati results (d), but these low-level burst signals may not be coming from neutrons generated in the Ti chips. Summaries of our experiments are described in Section II, as is a theoretical effort based on cosmic ray muons to describe low-level neutron production. Details of the experimental groups' work are contained in the six appendices. At LLNL, independent teams were spontaneously formed in response to the early announcements on cold fusion. This report's format follows this organization.

  1. Joint FAM/Line Management Assessment Report on LLNL Machine Guarding Safety Program

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, J. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-07-19

    The LLNL Safety Program for Machine Guarding is implemented to comply with requirements in the ES&H Manual Document 11.2, "Hazards-General and Miscellaneous," Section 13 Machine Guarding (Rev 18, issued Dec. 15, 2015). The primary goal of this LLNL Safety Program is to ensure that LLNL operations involving machine guarding are managed so that workers, equipment and government property are adequately protected. This means that all such operations are planned and approved using the Integrated Safety Management System to provide the most cost effective and safest means available to support the LLNL mission.

  2. Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98

    Energy Technology Data Exchange (ETDEWEB)

    Cerjan, Charles J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shi, Xizeng [Read-Rite Corporation, Fremont, CA (United States)

    2017-11-09

    The specific goals of this project were to: further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); and validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for the DADIMAG Version 2 executable code.

  3. Overview and applications of the Monte Carlo radiation transport kit at LLNL

    International Nuclear Information System (INIS)

    Sale, K. E.

    1999-01-01

    Modern Monte Carlo radiation transport codes can be applied to model most applications of radiation, from optical to TeV photons, from thermal neutrons to heavy ions. Simulations can include any desired level of detail in three-dimensional geometries using the right level of detail in the reaction physics. The technology areas to which we have applied these codes include medical applications, defense, safety and security programs, nuclear safeguards, and industrial and research system design and control. The main reason such applications are interesting is that by using these tools substantial savings of time and effort (i.e., money) can be realized. In addition it is possible to separate out and investigate computationally effects which cannot be isolated and studied in experiments. In model calculations, just as in real life, one must take care in order to get the correct answer to the right question. Advancing computing technology allows extensions of Monte Carlo applications in two directions. First, as computers become more powerful more problems can be accurately modeled. Second, as computing power becomes cheaper Monte Carlo methods become accessible more widely. An overview of the set of Monte Carlo radiation transport tools in use at LLNL will be presented along with a few examples of applications and future directions
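
    As a toy example of the Monte Carlo approach described above, the sketch below estimates uncollided photon transmission through a slab by sampling exponential free paths. The attenuation coefficient and thickness are arbitrary assumptions; production codes of the kind surveyed here additionally track 3D geometry, scattering, and full reaction physics.

        import numpy as np

        # Toy Monte Carlo estimate of uncollided transmission through a slab.
        # mu (1/cm) and thickness (cm) are arbitrary illustrative values.
        rng = np.random.default_rng(1)
        mu, thickness, n = 0.2, 10.0, 1_000_000
        path = rng.exponential(1.0 / mu, n)          # distance to first interaction
        transmitted = np.count_nonzero(path > thickness) / n
        print(transmitted, np.exp(-mu * thickness))  # MC estimate vs analytic e^(-mu*t)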

  4. OMICRON, LLNL ENDL Charged Particle Data Library Processing

    International Nuclear Information System (INIS)

    Mengoni, A.; Panini, G.C.

    2002-01-01

    1 - Description of program or function: The program has been designed to read the Evaluated Charged Particle Library (ECPL) of the LLNL Evaluated Nuclear Data Library (ENDL) and generate output in various forms: interpreted listing, ENDF format and graphs. 2 - Method of solution: A file containing ECPL in card image transmittal format is scanned to retrieve the requested reactions from the requested materials; in addition selections can be made by data type or incident particle. 3 - Restrictions on the complexity of the problem: The Reaction Property Designator I determines the type of data in the ENDL library (e.g. cross sections, angular distributions, Maxwellian averages, etc.); the program does not take into account the data for I=3,4 (energy-angle-distributions) since there are no data in the current ECPL version
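
    The record selection OMICRON performs can be pictured with the hedged sketch below, which scans a card-image file and keeps only the sections whose header cards match requested materials and reactions. The fixed-column field positions and identifier values here are invented for illustration; the real ENDL transmittal format differs.

        # Sketch of card-image record filtering, under assumed field layout.
        requested_za = {"094239"}          # hypothetical material identifiers
        requested_reaction = {"  10"}      # hypothetical reaction codes

        def select_sections(path):
            keep, current = [], False
            with open(path) as f:
                for line in f:
                    if line[:1] != " ":                # assumed: header cards start in column 1
                        za, rx = line[0:6], line[6:10] # assumed field positions
                        current = za in requested_za and rx in requested_reaction
                    if current:
                        keep.append(line)
            return keep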

  5. Results of LLNL investigation of NYCT data sets

    International Nuclear Information System (INIS)

    Sale, K; Harrison, M; Guo, M; Groza, M

    2007-01-01

    Upon examination we have concluded that none of the alarms indicate the presence of a real threat. A brief history and results from our examination of the NYCT ASP occupancy data sets dated from 2007-05-14 19:11:07 to 2007-06-20 15:46:15 are presented in this letter report. When the ASP data collection campaign at NYCT was completed, the system was not shut down; instead, the Canberra ASP annunciator box was unplugged, leaving the data acquisition system running. By the time it was discovered that the ASP was still acquiring data, about 15,000 occupancies had been recorded. Among these were about 500 alarms (classified by the ASP analysis system as either Threat Alarms or Suspect Alarms). At your request, these alarms have been investigated. Our conclusion is that none of the alarm data sets indicate the presence of a real threat (within statistics). The data sets (ICD1 and ICD2 files with concurrent JPEG pictures) were delivered to LLNL on a removable hard drive labeled FOUO. The contents of the data disk amounted to 53.39 GB of data, requiring over two days for the standard LLNL virus-checking software to scan before work could really get started. Our first step was to walk through the directory structure of the disk and create a database of occupancies. For each occupancy, the database was populated with the occupancy date and time, occupancy number, file path to the ICD1 data, and the alarm type ('No Alarm', 'Suspect Alarm' or 'Threat Alarm') from the ICD2 file, along with some other incidental data. In an attempt to get a global understanding of what was going on, we investigated the occupancy information. The occupancy date/time and alarm type were binned into one-hour counts. These data are shown in Figures 1 and 2
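
    The binning described above (occupancy date/time and alarm type into one-hour counts) can be sketched as follows; the in-memory tuples stand in for the occupancy database built from the ICD1/ICD2 files, and the field layout is assumed.

        import collections
        import datetime as dt

        # Bin (timestamp, alarm type) records into one-hour counts, as in
        # Figures 1 and 2. The sample records below are placeholders.
        occupancies = [
            (dt.datetime(2007, 5, 14, 19, 11, 7), "No Alarm"),
            (dt.datetime(2007, 5, 14, 19, 42, 0), "Suspect Alarm"),
            # ... ~15,000 records in the real data set
        ]

        counts = collections.Counter(
            (ts.replace(minute=0, second=0, microsecond=0), alarm)
            for ts, alarm in occupancies
        )
        for (hour, alarm), n in sorted(counts.items()):
            print(hour.isoformat(), alarm, n)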

  6. Evaluation of LLNL BSL-3 Maximum Credible Event Potential Consequence to the General Population and Surrounding Environment

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2010-08-16

    The purpose of this evaluation is to establish reproducibility of the analysis and consequence results to the general population and surrounding environment in the LLNL Biosafety Level 3 Facility Environmental Assessment (LLNL 2008).

  7. Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Kallman, J S; Morales, K E; Whipple, R E; Huber, R D; Martz, A; Brown, W D; Smith, J A; Schneberk, D J; Martz, Jr., H E; White, III, W T

    2010-03-11

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade 'Play Dough' in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference
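
    As a hedged sketch of the first-order statistics quoted above, the code below estimates the mean, standard deviation, and entropy of a voxel population of LAC values via a Gaussian kernel density estimate. The grid resolution and the entropy base (bits) are assumptions; the actual IMGREC/LLNL analysis chain is not reproduced.

        import numpy as np
        from scipy.stats import gaussian_kde

        def lac_summary(lac_values, grid_points=512):
            # First-order statistics of a voxel population of linear
            # attenuation coefficients, via a Gaussian KDE (as in Table 1).
            lac = np.asarray(lac_values, dtype=float)
            kde = gaussian_kde(lac)                      # Gaussian kernel density estimate
            grid = np.linspace(lac.min(), lac.max(), grid_points)
            dx = grid[1] - grid[0]
            pdf = kde(grid)
            pdf /= pdf.sum() * dx                        # renormalize on the finite grid
            mean = np.sum(grid * pdf) * dx
            std = np.sqrt(np.sum((grid - mean) ** 2 * pdf) * dx)
            entropy = -np.sum(pdf * np.log2(pdf + 1e-30)) * dx  # differential entropy, bits
            return mean, std, entropy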

  8. LLNL medical and industrial laser isotope separation: large volume, low cost production through advanced laser technologies

    International Nuclear Information System (INIS)

    Comaskey, B.; Scheibner, K. F.; Shaw, M.; Wilder, J.

    1998-01-01

    The goal of this LDRD project was to demonstrate the technical and economic feasibility of applying laser isotope separation technology to the commercial enrichment (>1 kg/y) of stable isotopes. A successful demonstration would position the laboratory well to make a credible case for the creation of an ongoing medical and industrial isotope production and development program at LLNL. Such a program would establish LLNL as a center for advanced medical isotope production, successfully leveraging previous LLNL research and development hardware, facilities, and knowledge

  9. Test results from the LLNL 250 GHz CARM experiment

    International Nuclear Information System (INIS)

    Kulke, B.; Caplan, M.; Bubp, D.; Houck, T.; Rogers, D.; Trimble, D.; VanMaren, R.; Westenskow, G.; McDermott, D.B.; Luhmann, N.C. Jr.; Danly, B.

    1991-01-01

    The authors have completed the initial phase of a 250 GHz CARM experiment, driven by the 2 MeV, 1 kA, 30 ns induction linac at the LLNL ARC facility. A non-Brillouin, solid, electron beam is generated from a flux-threaded, thermionic cathode. As the beam traverses a 10 kG plateau produced by a superconducting magnet, ten percent of the beam energy is converted into rotational energy in a bifilar helix wiggler that produces a spiraling, 50 G, transverse magnetic field. The beam is then compressed to a 5 mm diameter as it drifts into a 30 kG plateau. For the present experiment, the CARM interaction region consisted of a single Bragg section resonator, followed by a smooth-bore amplifier section. Using high-pass filters, they have observed broadband output signals estimated to be at the several megawatt level in the range 140 to over 230 GHz. This is consistent with operation as a superradiant amplifier. Simultaneously, they also observed Ka-band power levels near 3 MW

  10. Test results from the LLNL 250 GHz CARM experiment

    International Nuclear Information System (INIS)

    Kulke, B.; Caplan, M.; Bubp, D.; Houck, T.; Rogers, D.; Trimble, D.; VanMaren, R.; Westenskow, G.; McDermott, D.B.; Luhmann, N.C. Jr.; Danly, B.

    1991-05-01

    We have completed the initial phase of a 250 GHz CARM experiment, driven by the 2 MeV, 1 kA, 30 ns induction linac at the LLNL ARC facility. A non-Brillouin, solid, electron beam is generated from a flux-threaded, thermionic cathode. As the beam traverses a 10 kG plateau produced by a superconducting magnet, ten percent of the beam energy is converted into rotational energy in a bifilar helix wiggler that produces a spiraling, 50 G, transverse magnetic field. The beam is then compressed to a 5 mm diameter as it drifts into a 30 kG plateau. For the present experiment, the CARM interaction region consisted of a single Bragg section resonator, followed by a smooth-bore amplifier section. Using high-pass filters, we have observed broadband output signals estimated to be at the several megawatt level in the range 140 to over 230 GHz. This is consistent with operation as a superradiant amplifier. Simultaneously, we also observed Ka-band power levels near 3 MW

  11. Net Weight Issue LLNL DOE-STD-3013 Containers

    International Nuclear Information System (INIS)

    Wilk, P

    2008-01-01

    The following position paper will describe DOE-STD-3013 container sets No. L000072 and No. L000076, and how they are compliant with DOE-STD-3013-2004. All masses of accountable nuclear materials are measured on LLNL-certified balances maintained under an MC&A Program approved by DOE/NNSA LSO. All accountability balances are recalibrated annually and checked to be within calibration on each day that the balance is used for accountability purposes. A statistical analysis of the historical calibration checks from the last seven years indicates that the full-range Limit of Error (LoE, 95% confidence level) for the balance used to measure the mass of the contents of the above indicated 3013 containers is 0.185 g. If this error envelope, at the 95% confidence level, were used to generate an upper limit on the measured weight of the contents of containers No. L000072 and No. L000076, the error envelope would extend beyond the 5.0 kg 3013-standard limit on the package contents by less than 0.3 g. However, this is still well within the intended safety bounds of DOE-STD-3013-2004
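
    One common way to form a 95% limit of error from historical calibration-check deviations is sketched below; the 1.96 factor assumes normally distributed check results, and the deviation data and measured mass are made up for illustration, since the report's exact statistical method is not given here.

        import numpy as np

        # Hedged sketch: 95% Limit of Error from balance calibration checks (grams).
        deviations = np.array([0.05, -0.11, 0.08, -0.02, 0.09, -0.07])  # made-up data
        loe = 1.96 * deviations.std(ddof=1)          # assumes normal statistics
        measured = 4999.8                            # hypothetical net content mass (g)
        print(f"LoE = {loe:.3f} g; upper limit = {measured + loe:.3f} g vs 5000 g limit")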

  12. Training the Masses - Web-based Laser Safety Training at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Sprague, D D

    2004-12-17

    The LLNL work smart standard requires us to provide ongoing laser safety training for a large number of persons on a three-year cycle. In order to meet the standard, it was necessary to find a cost- and performance-effective method to perform this training. This paper discusses the scope of the training problem, specific LLNL training needs, various training methods used at LLNL, the advantages and disadvantages of these methods, and the rationale for selecting web-based laser safety training. The tools and costs involved in developing web-based training courses are also discussed, in addition to conclusions drawn from our training operating experience. The ILSC lecture presentation contains a short demonstration of the LLNL web-based laser safety-training course.

  13. LLNL Compliance Plan for TRUPACT-2 Authorized Methods for Payload Control

    International Nuclear Information System (INIS)

    1995-03-01

    This document describes payload control at LLNL to ensure that all shipments of CH-TRU waste in the TRUPACT-II (Transuranic Package Transporter-II) meet the requirements of the TRUPACT-II SARP (Safety Analysis Report for Packaging). This document also provides specific instructions for the selection of authorized payloads once individual payload containers are qualified for transport. The physical assembly of the qualified payload and operating procedures for the use of the TRUPACT-II, including loading and unloading operations, are described in HWM Procedure No. 204, based on the information in the TRUPACT-II SARP. The LLNL TRAMPAC, along with the TRUPACT-II operating procedures contained in HWM Procedure No. 204, meets the documentation needs for the use of the TRUPACT-II at LLNL. Table 14-1 provides a summary of the LLNL waste generation and certification procedures as they relate to TRUPACT-II payload compliance

  14. Proposals for ORNL [Oak Ridge National Laboratory] support to Tiber LLNL [Lawrence Livermore National Laboratory]

    International Nuclear Information System (INIS)

    Berry, L.A.; Rosenthal, M.W.; Saltmarsh, M.J.; Shannon, T.E.; Sheffield, J.

    1987-01-01

    This document describes the interests and capabilities of Oak Ridge National Laboratory in their proposals to support the Lawrence Livermore National Laboratory (LLNL) Engineering Test Reactor (ETR) project. Five individual proposals are cataloged separately. (FI)

  15. LLNL Center of Excellence Work Items for Q9-Q10 period

    Energy Technology Data Exchange (ETDEWEB)

    Neely, J. R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-09-02

    This work plan encompasses a slice of effort going on within the ASC program, and for projects utilizing COE vendor resources, describes work that will be performed by both LLNL staff and COE vendor staff collaboratively.

  16. Review of LLNL Mixed Waste Streams for the Application of Potential Waste Reduction Controls

    International Nuclear Information System (INIS)

    Belue, A; Fischer, R P

    2007-01-01

    In July 2004, LLNL adopted the International Standard ISO 14001 as a Work Smart Standard in lieu of DOE Order 450.1. In support of this new requirement the Director issued a new environmental policy that was documented in Section 3.0 of Document 1.2, "ES&H Policies of LLNL", in the ES&H Manual. In recent years the Environmental Management System (EMS) process has become formalized as LLNL adopted ISO 14001 as part of the contract under which the laboratory is operated for the Department of Energy (DOE). On May 9, 2005, LLNL revised its Integrated Safety Management System Description to enhance existing environmental requirements to meet ISO 14001. Effective October 1, 2005, each new project or activity is required to be evaluated from an environmental aspect, particularly if a potential exists for significant environmental impacts. Authorizing organizations are required to consider the management of all environmental aspects, the applicable regulatory requirements, and reasonable actions that can be taken to reduce negative environmental impacts. During 2006, LLNL has worked to implement the corrective actions addressing the deficiencies identified in the DOE/LSO audit. LLNL has begun to update the present EMS to meet the requirements of ISO 14001:2004. The EMS commits LLNL--and each employee--to responsible stewardship of all the environmental resources in our care. The generation of mixed radioactive waste was identified as a significant environmental aspect. Mixed waste for the purposes of this report is defined as waste materials containing both hazardous chemical and radioactive constituents. Significant environmental aspects require that an Environmental Management Plan (EMP) be developed. The objective of the EMP developed for mixed waste (EMP-005) is to evaluate options for reducing the amount of mixed waste generated. This document presents the findings of the evaluation of mixed waste generated at LLNL and a proposed plan for reduction

  17. The National Ignition Facility (NIF) and High Energy Density Science Research at LLNL (Briefing Charts)

    Science.gov (United States)

    2013-06-21

    The National Ignition Facility (NIF) and High Energy Density Science Research at LLNL. Briefing charts presented to the IEEE Pulsed Power and Plasma Science Conference by C. J. Keane, Director, NIF User Office, June 21, 2013.

  18. Linear collider research and development at SLAC, LBL and LLNL

    International Nuclear Information System (INIS)

    Mattison, T.S.

    1988-10-01

    The study of electron-positron (e+e-) annihilation in storage ring colliders has been very fruitful. It is by now well understood that the optimized cost and size of e+e- storage rings scale as E_cm^2, due to the need to replace energy lost to synchrotron radiation in the ring bending magnets. Linear colliders, using the beams from linear accelerators, evade this scaling law. The study of e+e- collisions at TeV energies will require linear colliders. The luminosity requirements for a TeV linear collider are set by the physics. Advanced accelerator research and development at SLAC is focused toward a TeV Linear Collider (TLC) of 0.5-1 TeV in the center of mass, with a luminosity of 10^33-10^34. The goal is a design for two linacs of less than 3 km each, requiring less than 100 MW of power each. With a 1 km final focus, the TLC could fit on Stanford University land (although not entirely within the present SLAC site). The emphasis is on technologies feasible for a proposal to be framed in 1992. Linear collider development work is progressing on three fronts: delivering electrical energy to a beam, delivering a focused high-quality beam, and system optimization. Sources of high peak microwave radio frequency (RF) power to drive the high-gradient linacs are being developed in collaboration with Lawrence Berkeley Laboratory (LBL) and Lawrence Livermore National Laboratory (LLNL). Beam generation, beam dynamics and final focus work has been done at SLAC and in collaboration with KEK. Both the accelerator physics and the utilization of TeV linear colliders were topics at the 1988 Snowmass Summer Study. 14 refs., 4 figs., 1 tab

  19. Progress in AMS measurements at the LLNL spectrometer

    International Nuclear Information System (INIS)

    Southon, J.R.; Vogel, J.S.; Trumbore, S.E.; Davis, J.C.; Roberts, M.L.; Caffee, M.; Finkel, R.; Proctor, I.D.; Heikkinen, D.W.; Berno, A.J.; Hornady, R.S.

    1991-06-01

    The AMS measurement program at LLNL began in earnest in late 1989, and has initially concentrated on 14C measurements for biomedical and geoscience applications. We have now begun measurements on 10Be and 36Cl, are presently testing the spectrometer performance for 26Al and 3H, and will begin tests on 7Be, 41Ca and 129I within the next few months. Our laboratory has a strong biomedical AMS program of 14C tracer measurements involving large numbers of samples (sometimes hundreds in a single experiment) at 14C concentrations which are typically 0.5-5 times Modern, but are occasionally highly enriched. The sample preparation techniques required for high throughput and low cross-contamination for this work are discussed elsewhere. Similar demands are placed on the AMS measurement system, and in particular on the ion source. Modifications to our GIC 846 ion source, described below, allow us to run biomedical and geoscience or archaeological samples in the same source wheel with no adverse effects. The source has a capacity for 60 samples (about 45 unknown) in a single wheel and provides currents of 30-60 μA of C- from hydrogen-reduced graphite. These currents and sample capacity provide high throughput for both biomedical and other measurements: the AMS system can be started up, tuned, and a wheel of carbon samples measured to 1-1.5% in under a day; and two biomedical wheels can be measured per day without difficulty. We report on the present status of the Lawrence Livermore AMS spectrometer, including sample throughput and progress towards routine 1% measurement capability for 14C, first results on other isotopes, and experience with a multi-sample high-intensity ion source. 5 refs

  20. Challenges in biotechnology at LLNL: from genes to proteins

    International Nuclear Information System (INIS)

    Albala, J S

    1999-01-01

    This effort has undertaken the task of developing a link between the genomics, DNA repair, and structural biology efforts within the Biology and Biotechnology Research Program at LLNL. Through the advent of the I.M.A.G.E. (Integrated Molecular Analysis of Genomes and their Expression) Consortium, a world-wide effort to catalog the largest public collection of genes, accepted and maintained within BBRP, it is now possible to systematically express the protein complement of these genes to further elucidate novel gene function and structure. The work has proceeded in four phases, outlined as follows: (1) Gene and system selection; (2) Protein expression and purification; (3) Structural analysis; and (4) Biological integration. Proteins to be expressed have been those of high programmatic interest. This includes, in particular, proteins involved in the maintenance of genome integrity, particularly those involved in the repair of DNA damage, including ERCC1, ERCC4, XRCC2, XRCC3, XRCC9, HEX1, APN1, p53, RAD51B, RAD51C, and RAD51. Full-length cDNA cognates of selected genes were isolated and cloned into baculovirus-based expression vectors. The baculoviral expression system for protein over-expression is now well-established in the Albala laboratory. Procedures have been successfully optimized for full-length cDNA cloning into expression vectors for protein expression from recombinant constructs. This includes the reagents, cell lines, and techniques necessary for expression of recombinant baculoviral constructs in Spodoptera frugiperda (Sf9) cells. The laboratory has also generated a high-throughput baculoviral expression paradigm for large-scale expression and purification of human recombinant proteins amenable to automation

  1. LLNL Mercury Project Trinity Open Science Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Shawn A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-17

    The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.
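
    The convergence behavior at issue can be pictured with a toy tally: the statistical error of a Monte Carlo estimate falls roughly as 1/sqrt(N) with particle count N. The exponential "scores" below are a stand-in for illustration, not a Mercury transport calculation.

        import numpy as np

        # Toy demonstration that a Monte Carlo tally's relative error
        # shrinks roughly as 1/sqrt(N) with the particle count N.
        rng = np.random.default_rng(7)
        for n in (10**3, 10**4, 10**5, 10**6):
            scores = rng.exponential(1.0, n)         # per-particle scores (toy model)
            mean = scores.mean()
            rel_err = scores.std(ddof=1) / np.sqrt(n) / mean
            print(f"N={n:>8d}  tally={mean:.4f}  relative error={rel_err:.4f}")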

  2. Implementation of the Nonlinear Composite Analysis Code "LAMPAT" into LLNL-DYNA3D

    National Research Council Canada - National Science Library

    Tabiei, Ala

    2002-01-01

    .... In addition, LAMPAT is modified for use in an explicit time-integration solver. The model is improved to account for loss of symmetry of the material stiffness matrix resulting from degradation of the elastic moduli during damage evolution...

  3. Evaluation of LLNL's Nuclear Accident Dosimeters at the CALIBAN Reactor September 2010

    International Nuclear Information System (INIS)

    Hickman, D.P.; Wysong, A.R.; Heinrichs, D.P.; Wong, C.T.; Merritt, M.J.; Topper, J.D.; Gressmann, F.A.; Madden, D.J.

    2011-01-01

    The Lawrence Livermore National Laboratory uses neutron activation elements in a Panasonic TLD holder as a personnel nuclear accident dosimeter (PNAD). The LLNL PNAD has periodically been tested using a Cf-252 neutron source; however, until 2009 it had been more than 25 years since the PNAD was tested against a source of neutrons arising from a reactor-generated neutron spectrum that simulates a criticality. In October 2009, LLNL participated in an intercomparison of nuclear accident dosimeters at the CEA Valduc Silene reactor (Hickman et al., 2010). In September 2010, LLNL participated in a second intercomparison of nuclear accident dosimeters at CEA Valduc. The reactor-generated neutron irradiations for the 2010 exercise were performed at the Caliban reactor. The Caliban results are described in this report. The procedure for measuring the nuclear accident dosimeters in the event of an accident has a solid foundation based on many experimental results and comparisons. The entire process, from receiving the activated NADs to collecting and storing them after counting, was executed successfully in a field-based operation. Under normal conditions at LLNL, detectors are ready and available 24/7 to perform the necessary measurement of nuclear accident components. Likewise, LLNL maintains processing laboratories that are separated from the areas where measurements occur, but contained within the same facility for easy movement from processing area to measurement area. In the event of a loss of LLNL permanent facilities, the Caliban and previous Silene exercises have demonstrated that LLNL can establish field operations that will produce very good nuclear accident dosimetry results. There are still several aspects of LLNL's nuclear accident dosimetry program that have not been tested or confirmed. For instance, LLNL's method for using biological samples (blood and hair) has not been verified since the method was first developed in the 1980's. Because LLNL and the other DOE

  4. LLNL/YMP Waste Container Fabrication and Closure Project; GFY technical activity summary

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1990-10-01

    The Department of Energy's Office of Civilian Radioactive Waste Management (OCRWM) Program is studying Yucca Mountain, Nevada as a suitable site for the first US high-level nuclear waste repository. Lawrence Livermore National Laboratory (LLNL) has the responsibility for designing and developing the waste package for the permanent storage of high-level nuclear waste. This report is a summary of the technical activities for the LLNL/YMP Nuclear Waste Disposal Container Fabrication and Closure Development Project. Candidate welding closure processes were identified in the Phase 1 report. This report discusses Phase 2. Phase 2 of this effort involved laboratory studies to determine the optimum fabrication and closure processes. Because of budget limitations, LLNL narrowed the materials for evaluation in Phase 2 from the original six to four: Alloy 825, CDA 715, CDA 102 (or CDA 122) and CDA 952. Phase 2 studies focused on evaluation of candidate material in conjunction with fabrication and closure processes.

  5. Comprehensive Angular Response Study of LLNL Panasonic Dosimeter Configurations and Artificial Intelligence Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Stone, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-06-30

    In April of 2016, the Lawrence Livermore National Laboratory External Dosimetry Program underwent a Department of Energy Laboratory Accreditation Program (DOELAP) on-site assessment. The assessment reported a concern that the 2013 study, 'Angular Dependence Study Panasonic UD-802 and UD-810 Dosimeters LLNL Artificial Intelligence Algorithm', was incomplete: only the responses at ±60° and 0° were evaluated, and independent data from dosimeters was not used to evaluate the algorithm. Additionally, other configurations of LLNL dosimeters were not considered in this study. This includes nuclear accident dosimeters (NADs), which are placed in the wells surrounding the TLD in the dosimeter holder.

  6. Assessment of the proposed decontamination and waste treatment facility at LLNL

    International Nuclear Information System (INIS)

    Cohen, J.J.

    1987-01-01

    To provide a centralized decontamination and waste treatment facility (DWTF) at LLNL, the construction of a new installation has been planned. Objectives for this new facility were to replace obsolete, structurally and environmentally sub-marginal liquid and solid waste process facilities and the decontamination facility, and to bring these facilities into compliance with existing federal, state and local regulations as well as DOE orders. In a previous study, SAIC conducted a preliminary review and evaluation of existing facilities at LLNL and of the cost-effectiveness of the proposed DWTF. This document reports on a detailed review of specific aspects of the proposed DWTF

  7. A coupled systems code-CFD MHD solver for fusion blanket design

    Energy Technology Data Exchange (ETDEWEB)

    Wolfendale, Michael J., E-mail: m.wolfendale11@imperial.ac.uk; Bluck, Michael J.

    2015-10-15

    Highlights:
    • A coupled systems code-CFD MHD solver for fusion blanket applications is proposed.
    • Development of a thermal hydraulic systems code with MHD capabilities is detailed.
    • A code coupling methodology based on the use of TCP socket communications is detailed.
    • Validation cases are briefly discussed for the systems code and coupled solver.
    Abstract: The network of flow channels in a fusion blanket can be modelled using a 1D thermal hydraulic systems code. For more complex components such as junctions and manifolds, the simplifications employed in such codes can become invalid, requiring more detailed analyses. For magnetic confinement reactor blanket designs using a conducting fluid as coolant/breeder, the difficulties in flow modelling are particularly severe due to MHD effects. Blanket analysis is an ideal candidate for the application of a code coupling methodology, with a thermal hydraulic systems code modelling portions of the blanket amenable to 1D analysis, and CFD providing detail where necessary. A systems code, MHD-SYS, has been developed and validated against existing analyses. The code shows good agreement in the prediction of MHD pressure loss and the temperature profile in the fluid and wall regions of the blanket breeding zone. MHD-SYS has been coupled to an MHD solver developed in OpenFOAM and the coupled solver validated for test geometries in preparation for modelling blanket systems.
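
    A minimal sketch of the TCP-socket coupling idea follows, assuming a simple two-float message each way (boundary state out, pressure loss and outlet temperature back). The message layout, host, and port are invented for illustration; the actual MHD-SYS/OpenFOAM protocol is not specified in the abstract.

        import socket
        import struct

        # One boundary-data exchange between a 1D systems code and a CFD MHD
        # solver over TCP, in the spirit of the coupling described above.
        HOST, PORT = "127.0.0.1", 5005
        MSG = struct.Struct("!2d")               # two big-endian float64 values

        def recv_exact(sock, n):
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed")
                buf += chunk
            return buf

        def systems_side(conn):
            conn.sendall(MSG.pack(0.1, 573.0))               # inlet velocity (m/s), T (K)
            dp, t_out = MSG.unpack(recv_exact(conn, MSG.size))
            return dp, t_out                                 # CFD pressure loss, outlet T

        def cfd_side(sock):
            u_in, t_in = MSG.unpack(recv_exact(sock, MSG.size))
            dp = 2.0e4 * u_in                                # placeholder for the MHD solve
            sock.sendall(MSG.pack(dp, t_in + 20.0))

    In a production coupling each side would iterate this exchange once per time step or per convergence sweep; the fixed-size binary message keeps the handshake simple and language-agnostic.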

  8. Summary of the LLNL one-dimensional transport-kinetics model of the troposphere and stratosphere: 1981

    International Nuclear Information System (INIS)

    Wuebbles, D.J.

    1981-09-01

    Since the LLNL one-dimensional coupled transport and chemical kinetics model of the troposphere and stratosphere was originally developed in 1972 (Chang et al., 1974), there have been many changes to the model's representation of atmospheric physical and chemical processes. A brief description is given of the current LLNL one-dimensional coupled transport and chemical kinetics model of the troposphere and stratosphere
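
    As a minimal sketch of the kind of update a one-dimensional transport-kinetics model performs, the code below takes one explicit time step of vertical eddy diffusion plus a first-order chemical loss on a column grid; the grid spacing, diffusivity profile, and loss rate are assumed values for illustration, not the LLNL model's.

        import numpy as np

        # One explicit step of 1D (vertical) diffusion + first-order chemistry.
        nz, dz, dt = 50, 1.0e3, 60.0             # 50 levels, 1 km spacing, 60 s step
        K = np.full(nz, 10.0)                    # eddy diffusivity (m^2/s), assumed constant
        k_loss = 1.0e-7                          # first-order chemical loss (1/s), assumed
        c = np.ones(nz)                          # mixing ratio profile

        flux = -0.5 * (K[1:] + K[:-1]) * np.diff(c) / dz   # diffusive flux at interfaces
        dcdt = np.zeros(nz)
        dcdt[1:-1] = -np.diff(flux) / dz         # flux divergence in interior cells
        c += dt * (dcdt - k_loss * c)            # transport plus chemistry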

  9. LLNL radioactive waste management plan as per DOE Order 5820.2

    International Nuclear Information System (INIS)

    1984-01-01

    The following aspects of LLNL's radioactive waste management plan are discussed: program administration; description of waste generating processes; radioactive waste collection, treatment, and disposal; sanitary waste management; site 300 operations; schedules and major milestones for waste management activities; and environmental monitoring programs (sampling and analysis)

  10. National Uranium Resource Evaluation Program: the Hydrogeochemical Stream Sediment Reconnaissance Program at LLNL

    International Nuclear Information System (INIS)

    Higgins, G.H.

    1980-08-01

    From early 1975 to mid 1979, Lawrence Livermore National Laboratory (LLNL) participated in the Hydrogeochemical Stream Sediment Reconnaissance (HSSR), part of the National Uranium Resource Evaluation (NURE) program sponsored by the Department of Energy (DOE). The Laboratory was initially responsible for collecting, analyzing, and evaluating sediment and water samples from approximately 200,000 sites in seven western states. Eventually, however, the NURE program redefined its sampling priorities, objectives, schedules, and budgets, with the increasingly obvious result that LLNL objectives and methodologies were not compatible with those of the NURE program office, and the LLNL geochemical studies were not relevant to the program goal. The LLNL portion of the HSSR program was consequently terminated, and all work was suspended by June 1979. Of the 38,000 sites sampled, 30,000 were analyzed by instrumental neutron activation analyses (INAA), delayed neutron counting (DNC), optical emission spectroscopy (OES), and automated chloride-sulfate analyses (SC). Data from about 13,000 sites have been formally reported. From each site, analyses were published of about 30 of the 60 elements observed. Uranium mineralization has been identified at several places which were previously not recognized as potential uranium source areas, and a number of other geochemical anomalies were discovered

  11. LLNL Site plan for a MOX fuel lead assembly mission in support of surplus plutonium disposition

    Energy Technology Data Exchange (ETDEWEB)

    Bronson, M.C.

    1997-10-01

    The principal facilities that LLNL would use to support a MOX Fuel Lead Assembly Mission are Building 332 and Building 334. Both of these buildings are within the security boundary known as the LLNL Superblock. Building 332 is the LLNL Plutonium Facility. As an operational plutonium facility, it has all the infrastructure and support services required for plutonium operations. The LLNL Plutonium Facility routinely handles kilogram quantities of plutonium and uranium. Currently, the building is limited to a plutonium inventory of 700 kilograms and a uranium inventory of 300 kilograms. Process rooms (excluding the vaults) are limited to an inventory of 20 kilograms per room. Ongoing operations include: receiving SSTs, material receipt, storage, metal machining and casting, welding, metal-to-oxide conversion, purification, molten salt operations, chlorination, oxide calcination, cold pressing and sintering, vitrification, encapsulation, chemical analysis, metallography and microprobe analysis, waste material processing, material accountability measurements, packaging, and material shipping. Building 334 is the Hardened Engineering Test Building. This building supports environmental and radiation measurements on encapsulated plutonium and uranium components. Other existing facilities that would be used to support a MOX Fuel Lead Assembly Mission include Building 335 for hardware receiving and storage and TRU and LLW waste storage and shipping facilities, and Building 331 or Building 241 for storage of depleted uranium.

  12. Beam-beam studies for the proposed SLAC/LBL/LLNL B Factory

    International Nuclear Information System (INIS)

    Furman, M.A.

    1991-05-01

    We present a summary of beam-beam dynamics studies that have been carried out to date for the proposed SLAC/LBL/LLNL B Factory. Most of the material presented here is contained in the proposal's Conceptual Design Report, although post-CDR studies are also presented. 15 refs., 6 figs., 2 tabs

  13. LLNL Site plan for a MOX fuel lead assembly mission in support of surplus plutonium disposition

    International Nuclear Information System (INIS)

    Bronson, M.C.

    1997-01-01

    The principal facilities that LLNL would use to support a MOX Fuel Lead Assembly Mission are Building 332 and Building 334. Both of these buildings are within the security boundary known as the LLNL Superblock. Building 332 is the LLNL Plutonium Facility. As an operational plutonium facility, it has all the infrastructure and support services required for plutonium operations. The LLNL Plutonium Facility routinely handles kilogram quantities of plutonium and uranium. Currently, the building is limited to a plutonium inventory of 700 kilograms and a uranium inventory of 300 kilograms. Process rooms (excluding the vaults) are limited to an inventory of 20 kilograms per room. Ongoing operations include: receiving SSTs, material receipt, storage, metal machining and casting, welding, metal-to-oxide conversion, purification, molten salt operations, chlorination, oxide calcination, cold pressing and sintering, vitrification, encapsulation, chemical analysis, metallography and microprobe analysis, waste material processing, material accountability measurements, packaging, and material shipping. Building 334 is the Hardened Engineering Test Building. This building supports environmental and radiation measurements on encapsulated plutonium and uranium components. Other existing facilities that would be used to support a MOX Fuel Lead Assembly Mission include Building 335 for hardware receiving and storage and TRU and LLW waste storage and shipping facilities, and Building 331 or Building 241 for storage of depleted uranium

  14. Dispersion of Radionuclides and Exposure Assessment in Urban Environments: A Joint CEA and LLNL Report

    Energy Technology Data Exchange (ETDEWEB)

    Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gowardhan, Akshay [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, Kristin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Simpson, Matthew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Yu, Kristen [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Armand, Patrick [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Duchenne, Christophe [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Mariotte, Frederic [Alternative Energies and Atomic Energy Commission (CEA), Paris (France); Pectorin, Xavier [Alternative Energies and Atomic Energy Commission (CEA), Paris (France)

    2014-12-19

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel- Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely-populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower fidelity mass-consistent diagnostic, intermediate fidelity Navier-Stokes RANS models, and higher fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single
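
    A deliberately minimal sketch of the Lagrangian particle idea behind models such as LODI and PSPRAY: marker particles are advected by a mean wind and perturbed by random turbulent velocities. This is an illustration only, not either code's algorithm, and every parameter value below is an invented placeholder.

        # Minimal Lagrangian particle dispersion sketch (illustrative only).
        import numpy as np

        rng = np.random.default_rng(0)

        n_particles, n_steps, dt = 10_000, 600, 1.0  # release size, steps, time step -- assumed
        u_mean = np.array([3.0, 0.0])                # mean wind (m/s) -- assumed
        sigma_turb = 0.8                             # turbulent velocity scale (m/s) -- assumed

        pos = np.zeros((n_particles, 2))             # all particles released at the origin
        for _ in range(n_steps):
            turb = rng.normal(0.0, sigma_turb, size=pos.shape)
            pos += (u_mean + turb) * dt              # advection plus turbulent kick

        print(f"mean downwind distance: {pos[:, 0].mean():.0f} m, "
              f"crosswind spread: {pos[:, 1].std():.1f} m")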

  15. Dispersion of Radionuclides and Exposure Assessment in Urban Environments: A Joint CEA and LLNL Report

    International Nuclear Information System (INIS)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin; Simpson, Matthew; Yu, Kristen; Armand, Patrick; Duchenne, Christophe; Mariotte, Frederic; Pectorin, Xavier

    2014-01-01

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other's flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined, a regional scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel- Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian Particle Dispersion Models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely-populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) lower fidelity mass-consistent diagnostic, intermediate fidelity Navier-Stokes RANS models, and higher fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single

  16. LLNL large-area inductively coupled plasma (ICP) source: Experiments

    International Nuclear Information System (INIS)

    Richardson, R.A.; Egan, P.O.; Benjamin, R.D.

    1995-05-01

    We describe initial experiments with a large (76-cm diameter) plasma source chamber to explore the problems associated with large-area inductively coupled plasma (ICP) sources to produce high density plasmas useful for processing 400-mm semiconductor wafers. Our experiments typically use a 640-mm diameter planar ICP coil driven at 13.56 MHz. Plasma and system data are taken in Ar and N2 over the pressure range 3-50 mtorr. RF inductive power was run up to 2000 W, but typically data were taken over the range 100-1000 W. Diagnostics include optical emission spectroscopy, Langmuir probes, and B probes as well as electrical circuit measurements. The B and E-M measurements are compared with models based on commercial E-M codes. Initial indications are that uniform plasmas suitable for 400-mm processing are attainable
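
    As a flavor of how Langmuir probe data of the kind mentioned above are typically reduced, the sketch below applies the standard Bohm-sheath estimate I_sat ≈ 0.61 n e A c_s to back out a plasma density. The probe current, collection area, and electron temperature are hypothetical values, not measurements from this experiment.

        import math

        e = 1.602e-19                # elementary charge (C)
        m_i = 39.95 * 1.66e-27       # argon ion mass (kg)

        def density_from_isat(i_sat, area, t_e_ev):
            """Bohm-sheath estimate: I_sat ~ 0.61 * n * e * A * sqrt(kTe/mi)."""
            c_s = math.sqrt(t_e_ev * e / m_i)   # Bohm (ion sound) speed, m/s
            return i_sat / (0.61 * e * area * c_s)

        # Hypothetical probe readings, not data from this work:
        n_e = density_from_isat(i_sat=2.0e-3, area=1.0e-6, t_e_ev=3.0)
        print(f"n_e ~ {n_e:.2e} m^-3")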

  17. LLNL nuclear data libraries used for fusion calculations

    International Nuclear Information System (INIS)

    Howerton, R.J.

    1984-01-01

    The Physical Data Group of the Computational Physics Division of the Lawrence Livermore National Laboratory has as its principal responsibility the development and maintenance of those data that are related to nuclear reaction processes and are needed for Laboratory programs. Among these are the Magnetic Fusion Energy and the Inertial Confinement Fusion programs. To this end, we have developed and maintain a collection of data files or libraries. These include: files of experimental data of neutron induced reactions; an annotated bibliography of literature related to charged particle induced reactions with light nuclei; and four main libraries of evaluated data. We also maintain files of calculational constants developed from the evaluated libraries for use by Laboratory computer codes. The data used for fusion calculations are usually these calculational constants, but since they are derived by prescribed manipulation of evaluated data this discussion will describe the evaluated libraries

  18. Development of Compton gamma-ray sources at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Albert, F.; Anderson, S. G.; Ebbers, C. A.; Gibson, D. J.; Hartemann, F. V.; Marsh, R. A.; Messerly, M. J.; Prantil, M. A.; Wu, S.; Barty, C. P. J. [Lawrence Livermore National Laboratory, NIF and Photon Science, 7000 East avenue, Livermore, CA 94550 (United States)

    2012-12-21

    Compact Compton scattering gamma-ray sources offer the potential of studying nuclear photonics with new tools. The optimization of such sources depends on the final application, but generally requires maximizing the spectral density (photons/eV) of the gamma-ray beam while simultaneously reducing the overall bandwidth on target to minimize noise. We have developed an advanced design for one such system, comprising the RF drive, photoinjector, accelerator, and electron-generating and electron-scattering laser systems. This system uses a 120 Hz, 250 pC, 2 ps, 0.35 mm mrad electron beam with 250 MeV maximum energy in an X-band accelerator scattering off a 150 mJ, 10 ps, 532 nm laser to generate 5 × 10^10 photons/eV/s/Sr at 0.5 MeV with an overall bandwidth of less than 1%. The source will be able to produce photons up to energies of 2.5 MeV. We also discuss Compton scattering gamma-ray source predictions given by numerical codes.
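
    The quoted photon energies can be sanity-checked with the usual head-on Compton backscatter formula E_γ ≈ 4γ²E_L (valid for γ ≫ 1 with electron recoil neglected). A short worked evaluation using only numbers from the abstract:

        m_e_c2 = 0.511e6             # electron rest energy (eV)
        e_beam = 250e6               # maximum electron energy from the abstract (eV)
        lam = 532e-9                 # scattering laser wavelength (m)
        e_laser = 1239.84e-9 / lam   # photon energy (eV), E = hc/lambda

        gamma = e_beam / m_e_c2
        e_gamma = 4 * gamma**2 * e_laser   # head-on backscatter, recoil neglected
        print(f"gamma = {gamma:.0f}, E_gamma ~ {e_gamma / 1e6:.2f} MeV")
        # -> about 2.2 MeV, consistent in magnitude with the stated 2.5 MeV reach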

  19. Joint research and development and exchange of technology on toxic material emergency response between LLNL and ENEA. 1985 progress report

    International Nuclear Information System (INIS)

    Dickerson, M.H.; Caracciolo, R.

    1986-01-01

    For the past six years, the US Department of Energy, LLNL, and the ENEA, Rome, Italy, have participated in cooperative studies for improving a systems approach to an emergency response following nuclear accidents. Technology exchange between LLNL and the ENEA was initially confined to the development, application, and evaluation of atmospheric transport and diffusion models. With the emergence of compatible hardware configurations between LLNL and ENEA, exchanges of technology and ideas for improving the development and implementation of systems are beginning to emerge. This report describes cooperative work that has occurred during the past three years, the present state of each system, and recommendations for future exchanges of technology

  20. DOE/LLNL verification symposium on technologies for monitoring nuclear tests related to weapons proliferation

    International Nuclear Information System (INIS)

    Nakanishi, K.K.

    1993-01-01

    The rapidly changing world situation has raised concerns regarding the proliferation of nuclear weapons and the ability to monitor a possible clandestine nuclear testing program. To address these issues, Lawrence Livermore National Laboratory's (LLNL) Treaty Verification Program sponsored a symposium funded by the US Department of Energy's (DOE) Office of Arms Control, Division of Systems and Technology. The DOE/LLNL Symposium on Technologies for Monitoring Nuclear Tests Related to Weapons Proliferation was held at the DOE's Nevada Operations Office in Las Vegas, May 6--7,1992. This volume is a collection of several papers presented at the symposium. Several experts in monitoring technology presented invited talks assessing the status of monitoring technology with emphasis on the deficient areas requiring more attention in the future. In addition, several speakers discussed proliferation monitoring technologies being developed by the DOE's weapons laboratories

  1. The LLNL Multiuser Tandem Laboratory computer-controlled radiation monitoring system

    International Nuclear Information System (INIS)

    Homann, S.G.

    1992-01-01

    The Physics Department of the Lawrence Livermore National Laboratory (LLNL) recently constructed a Multiuser Tandem Laboratory (MTL) to perform a variety of basic and applied measurement programs. The laboratory and its research equipment were constructed with support from a consortium of LLNL Divisions, Sandia National Laboratories Livermore, and the University of California. Primary design goals for the facility were inexpensive construction and operation, high beam quality at a large number of experimental stations, and versatility in adapting to new experimental needs. To accomplish these goals, our main design decisions were to place the accelerator in an unshielded structure, to make use of reconfigured cyclotrons as effective switching magnets, and to rely on computer control systems for both radiological protection and highly reproducible and well-characterized accelerator operation. This paper addresses the radiological control computer system

  2. A probabilistic risk assessment of the LLNL Plutonium facility's evaluation basis fire operational accident

    International Nuclear Information System (INIS)

    Brumburgh, G.

    1994-01-01

    The Lawrence Livermore National Laboratory (LLNL) Plutonium Facility conducts numerous activities involving plutonium, including device fabrication, development of fabrication techniques, metallurgy research, and laser isotope separation. A Safety Analysis Report (SAR) for the Building 332 Plutonium Facility was completed to address operational safety and acceptable risk to employees, the public, government property, and the environment. This paper outlines the PRA analysis of the Evaluation Basis Fire (EBF) operational accident. The EBF postulates the worst-case programmatic impact event for the Plutonium Facility

  3. Effects of stratospheric aerosol surface processes on the LLNL two-dimensional zonally averaged model

    International Nuclear Information System (INIS)

    Connell, P.S.; Kinnison, D.E.; Wuebbles, D.J.; Burley, J.D.; Johnston, H.S.

    1992-01-01

    We have investigated the effects of incorporating representations of heterogeneous chemical processes associated with stratospheric sulfuric acid aerosol into the LLNL two-dimensional, zonally averaged, model of the troposphere and stratosphere. Using distributions of aerosol surface area and volume density derived from SAGE II satellite observations, we were primarily interested in changes in partitioning within the Cl- and N- families in the lower stratosphere, compared to a model including only gas phase photochemical reactions
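
    Heterogeneous reactions on aerosol surfaces are commonly folded into such models as a first-order loss rate k = γ v̄ A/4, with γ the uptake coefficient, v̄ the mean molecular speed, and A the aerosol surface area density. The sketch below evaluates that expression; the uptake coefficient and surface area are generic illustrative values, not the SAGE II-derived fields used in the paper.

        import math

        def het_loss_rate(gamma_uptake, temp_k, molar_mass_kg, area_cm2_per_cm3):
            """First-order loss rate k = gamma * v_bar * A / 4 on aerosol surfaces."""
            v_bar = math.sqrt(8 * 8.314 * temp_k / (math.pi * molar_mass_kg))  # m/s
            return gamma_uptake * (v_bar * 100.0) * area_cm2_per_cm3 / 4.0     # 1/s

        # Generic illustrative values (N2O5 on sulfate), not the paper's fields:
        k = het_loss_rate(gamma_uptake=0.1, temp_k=220.0,
                          molar_mass_kg=0.108, area_cm2_per_cm3=1.0e-8)
        print(f"k ~ {k:.1e} s^-1, lifetime ~ {1.0 / k / 86400.0:.1f} days")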

  4. Status of the SLAC/LBL/LLNL B-factory and the BABAR detector

    International Nuclear Information System (INIS)

    Oddone, P.

    1994-10-01

    After a brief introduction on the physics reach of the SLAC/LBL/LLNL Asymmetric B-Factory, the author describes the status of the accelerator and the detector as of the end of 1994. At this time, essentially all major decisions have been made, including the choice of particle identification for the detector. The author concludes this report with the description of the schedule for the construction of both accelerator and detector

  5. Evaluation of the neutron dose received by personnel at the LLNL

    International Nuclear Information System (INIS)

    Hankins, D.E.

    1982-01-01

    This report was prepared to document the techniques being used to evaluate the neutron exposures received by personnel at the LLNL. Two types of evaluations are discussed covering the use of the routine personnel dosimeter and of the albedo neutron dosimeter. Included in the report are field survey results which were used to determine the calibration factors being applied to the dosimeter readings. Calibration procedures are discussed and recommendations are made on calibration and evaluation procedures

  6. LLNL Contribution to LLE FY09 Annual Report: NIC and HED Results

    International Nuclear Information System (INIS)

    Heeter, R.F.; Landen, O.L.; Hsing, W.W.; Fournier, K.B.

    2009-01-01

    In FY09, LLNL led 238 target shots on the OMEGA Laser System. Approximately half of these LLNL-led shots supported the National Ignition Campaign (NIC). The remainder was dedicated to experiments for the high-energy-density stewardship experiments (HEDSE). Objectives of the LLNL led NIC campaigns at OMEGA included: (1) Laser-plasma interaction studies in physical conditions relevant for the NIF ignition targets; (2) Demonstration of Tr = 100 eV foot symmetry tuning using a reemission sphere; (3) X-ray scattering in support of conductivity measurements of solid density Be plasmas; (4) Experiments to study the physical properties (thermal conductivity) of shocked fusion fuels; (5) High-resolution measurements of velocity nonuniformities created by microscopic perturbations in NIF ablator materials; (6) Development of a novel Compton Radiography diagnostic platform for ICF experiments; and (7) Precision validation of the equation of state for quartz. The LLNL HEDSE campaigns included the following experiments: (1) Quasi-isentropic (ICE) drive used to study material properties such as strength, equation of state, phase, and phase-transition kinetics under high pressure; (2) Development of a high-energy backlighter for radiography in support of material strength experiments using Omega EP and the joint OMEGA-OMEGA-EP configuration; (3) Debris characterization from long-duration, point-apertured, point-projection x-ray backlighters for NIF radiation transport experiments; (4) Demonstration of ultrafast temperature and density measurements with x-ray Thomson scattering from short-pulse laser-heated matter; (5) The development of an experimental platform to study nonlocal thermodynamic equilibrium (NLTE) physics using direct-drive implosions; (6) Opacity studies of high-temperature plasmas under LTE conditions; and (7) Characterization of copper (Cu) foams for HEDSE experiments.

  7. Superconducting magnet development capability of the LLNL [Lawrence Livermore National Laboratory] High Field Test Facility

    International Nuclear Information System (INIS)

    Miller, J.R.; Shen, S.; Summers, L.T.

    1990-02-01

    This paper discusses the following topics: High-Field Test Facility Equipment at LLNL; FENIX Magnet Facility; High-Field Test Facility (HFTF) 2-m Solenoid; Cryogenic Mechanical Test Facility; Electro-Mechanical Conductor Test Apparatus; Electro-Mechanical Wire Test Apparatus; FENIX/HFTF Data System and Network Topology; Helium Gas Management System (HGMS); Airco Helium Liquefier/Refrigerator; CTI 2800 Helium Liquefier; and MFTF-B/ITER Magnet Test Facility

  8. LLNL Containment Program nuclear test effects and geologic data base: glossary and parameter definitions

    International Nuclear Information System (INIS)

    Howard, N.W.

    1983-01-01

    This report lists, defines, and updates Parameters in DBASE, an LLNL test effects data bank in which data are stored from experiments performed at NTS and other test sites. Parameters are listed by subject and by number. Part 2 of this report presents the same information for parameters for which some of the data may be classified; it was issued in 1979 and is not being reissued at this time as it is essentially unchanged

  9. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    International Nuclear Information System (INIS)

    Bergman, W.; Elliott, J.; Wilson, K.

    1995-01-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% ± 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system
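
    Reading the stated 3.3% ± 1.7% long-term rate as mean ± one standard deviation, the significance claim can be reproduced in a few lines (this is one interpretation of the abstract's numbers, not the authors' statistical method):

        mean, sigma = 3.3, 1.7       # 1980-1993 leak rate (%), from the abstract
        for year, rate in [(1980, 8.0), (1989, 4.1)]:
            z = (rate - mean) / sigma
            print(f"{year}: {rate}% is {z:+.1f} sigma from the long-term mean")
        # 1980 comes out ~+2.8 sigma (significant); 1989 ~+0.5 sigma (not)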

  10. Physics of laser fusion. Volume II. Diagnostics of experiments on laser fusion targets at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Ahlstrom, H.G.

    1982-01-01

    These notes present the experimental basis and status for laser fusion as developed at LLNL. There are two other volumes in this series: Vol. I, by C.E. Max, presents the theoretical laser-plasma interaction physics; Vol. III, by J.F. Holzrichter et al., presents the theory and design of high-power pulsed lasers. A fourth volume will present the theoretical implosion physics. The notes consist of six sections. The first, an introductory section, provides some of the history of inertial fusion and a simple explanation of the concepts involved. The second section presents an extensive discussion of diagnostic instrumentation used in the LLNL Laser Fusion Program. The third section is a presentation of laser facilities and capabilities at LLNL. The purpose here is to define capability, not to derive how it was obtained. The fourth and fifth sections present the experimental data on laser-plasma interaction and implosion physics. The last chapter is a short projection of the future.

  11. Performance of HEPA filters at LLNL following the 1980 and 1989 earthquakes

    Energy Technology Data Exchange (ETDEWEB)

    Bergman, W.; Elliott, J.; Wilson, K. [Lawrence Livermore National Laboratory, CA (United States)

    1995-02-01

    The Lawrence Livermore National Laboratory has experienced two significant earthquakes for which data are available to assess the ability of HEPA filters to withstand seismic conditions. A 5.9 magnitude earthquake with an epicenter 10 miles from LLNL struck on January 24, 1980. Estimates of the peak ground accelerations ranged from 0.2 to 0.3 g. A 7.0 magnitude earthquake with an epicenter about 50 miles from LLNL struck on October 17, 1989. Measurements of the ground accelerations at LLNL averaged 0.1 g. The results from the in-place filter tests obtained after each of the earthquakes were compiled and studied to determine if the earthquakes had caused filter leakage. Our study showed that only the 1980 earthquake resulted in a small increase in the number of HEPA filters developing leaks. In the 12 months following the 1980 and 1989 earthquakes, the in-place filter tests showed that 8.0% and 4.1% of all filters, respectively, developed leaks. The average percentage of filters developing leaks from 1980 to 1993 was 3.3% ± 1.7%. The increase in filter leaks is significant for the 1980 earthquake, but not for the 1989 earthquake. No contamination was detected following the earthquakes that would suggest transient releases from the filtration system.

  12. Physics of laser fusion. Volume II. Diagnostics of experiments on laser fusion targets at LLNL

    International Nuclear Information System (INIS)

    Ahlstrom, H.G.

    1982-01-01

    These notes present the experimental basis and status for laser fusion as developed at LLNL. There are two other volumes in this series: Vol. I, by C.E. Max, presents the theoretical laser-plasma interaction physics; Vol. III, by J.F. Holzrichter et al., presents the theory and design of high-power pulsed lasers. A fourth volume will present the theoretical implosion physics. The notes consist of six sections. The first, an introductory section, provides some of the history of inertial fusion and a simple explanation of the concepts involved. The second section presents an extensive discussion of diagnostic instrumentation used in the LLNL Laser Fusion Program. The third section is a presentation of laser facilities and capabilities at LLNL. The purpose here is to define capability, not to derive how it was obtained. The fourth and fifth sections present the experimental data on laser-plasma interaction and implosion physics. The last chapter is a short projection of the future

  13. Institute of Geophysics and Planetary Physics (IGPP), Lawrence Livermore National Laboratory (LLNL): Quinquennial report, November 14-15, 1996

    Energy Technology Data Exchange (ETDEWEB)

    Tweed, J.

    1996-10-01

    This Quinquennial Review Report of the Lawrence Livermore National Laboratory (LLNL) branch of the Institute for Geophysics and Planetary Physics (IGPP) provides an overview of IGPP-LLNL, its mission, and research highlights of current scientific activities. This report also presents an overview of the University Collaborative Research Program (UCRP), a summary of the UCRP Fiscal Year 1997 proposal process and the project selection list, a funding summary for 1993-1996, seminars presented, and scientific publications. 2 figs., 3 tabs.

  14. A probabilistic risk assessment of the LLNL Plutonium Facility's evaluation basis fire operational accident. Revision 1

    International Nuclear Information System (INIS)

    Brumburgh, G.P.

    1995-01-01

    The Lawrence Livermore National Laboratory (LLNL) Plutonium Facility conducts numerous programmatic activities involving plutonium, including device fabrication, development of improved and/or unique fabrication techniques, metallurgy research, and laser isotope separation. A Safety Analysis Report (SAR) for the Building 332 Plutonium Facility was completed in July 1994 to address operational safety and acceptable risk to employees, the public, government property, and the environment. This paper outlines the PRA analysis of the Evaluation Basis Fire (EBF) operational accident. The EBF postulates the worst-case programmatic impact event for the Plutonium Facility

  15. Status of the SLAC/LBL/LLNL B-Factory and the BaBar detector

    International Nuclear Information System (INIS)

    Oddone, P.

    1994-08-01

    The primary motivation of the Asymmetric B-Factory is the study of CP violation. The decay of B mesons and, in particular, the decay of neutral B mesons, offers the possibility of determining conclusively whether CP violation is part and parcel of the Standard Model with three generations of quarks and leptons. Alternatively, the authors may discover that CP violation lies outside the present framework. In this paper the authors briefly describe the physics reach of the SLAC/LBL/LLNL Asymmetric B-Factory, the progress on the machine design and construction, the progress on the detector design, and the schedule to complete both projects

  16. M4FT-15LL0806062-LLNL Thermodynamic and Sorption Data FY15 Progress Report

    Energy Technology Data Exchange (ETDEWEB)

    Zavarin, M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wolery, T. J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-08-31

    This progress report (Milestone Number M4FT-15LL0806062) summarizes research conducted at Lawrence Livermore National Laboratory (LLNL) within Work Package Number FT-15LL080606. The focus of this research is the thermodynamic modeling of Engineered Barrier System (EBS) materials and properties and development of thermodynamic databases and models to evaluate the stability of EBS materials and their interactions with fluids at various physicochemical conditions relevant to subsurface repository environments. The development and implementation of equilibrium thermodynamic models are intended to describe chemical and physical processes such as solubility, sorption, and diffusion.
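
    A typical use of such thermodynamic data is a mineral saturation check, SI = log10(IAP/Ksp). The sketch below shows the arithmetic with placeholder activities and a commonly tabulated gypsum-like log Ksp; none of these numbers come from the FY15 database itself.

        import math

        def saturation_index(ion_activity_product, log10_ksp):
            """SI = log10(IAP/Ksp): >0 supersaturated, <0 undersaturated."""
            return math.log10(ion_activity_product) - log10_ksp

        # Placeholder activities; log10 Ksp ~ -4.58 is a commonly tabulated
        # gypsum value, used here only for illustration:
        a_ca, a_so4 = 1.0e-3, 5.0e-4
        print(f"SI = {saturation_index(a_ca * a_so4, -4.58):+.2f}")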

  17. Analyses in Support of Z-IFE LLNL Progress Report for FY-05

    International Nuclear Information System (INIS)

    Moir, R W; Abbott, R P; Callahan, D A; Latkowski, J F; Meier, W R; Reyes, S

    2005-01-01

    The FY04 LLNL study of Z-IFE [1] proposed and evaluated a design that deviated from SNL's previous baseline design. The FY04 study included analyses of shock mitigation, stress in the first wall, neutronics and systems studies. In FY05, the subject of this report, we build on our work and the theme of last year. Our emphasis continues to be on alternatives that hold promise of considerable improvements in design and economics compared to the base-line design. Our key results are summarized here

  18. Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1989-01-01

    COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community; each code differs somewhat in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed for the CRAY (in 1986) to be as precise as the current state of physics knowledge allows. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to accomplish criticality calculations. It will make an excellent tool for future shipping cask studies
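
    One standard random-walk modification of the kind alluded to above is implicit capture (survival biasing) with Russian roulette. The toy 1D slab below is a generic textbook illustration of that technique, not COG's algorithm, geometry treatment, or data; the cross sections and slab thickness are made up.

        # Toy 1D slab transmission with implicit capture and Russian roulette.
        import math, random

        random.seed(1)
        sigma_t, sigma_s = 1.0, 0.4    # total/scattering cross sections (1/cm) -- assumed
        thickness = 10.0               # slab thickness (cm) -- assumed
        n_hist, w_cut = 20_000, 1e-3

        transmitted = 0.0
        for _ in range(n_hist):
            x, mu, w = 0.0, 1.0, 1.0           # position, direction cosine, weight
            while True:
                x += -math.log(random.random()) * mu / sigma_t
                if x >= thickness:
                    transmitted += w           # score the surviving weight
                    break
                if x < 0.0:
                    break                      # leaked out the front face
                w *= sigma_s / sigma_t         # implicit capture: no absorption kill
                if w < w_cut:                  # Russian roulette on low weights
                    if random.random() < 0.5:
                        break
                    w *= 2.0
                mu = 2.0 * random.random() - 1.0   # isotropic scatter

        print(f"transmission ~ {transmitted / n_hist:.2e}")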

  19. 3D equilibrium codes for mirror machines

    International Nuclear Information System (INIS)

    Kaiser, T.B.

    1983-01-01

    The codes developed for computing three-dimensional guiding center equilibria for quadrupole tandem mirrors are discussed. TEBASCO (Tandem equilibrium and ballooning stability code) is a code developed at LLNL that uses a further expansion of the paraxial equilibrium equation in powers of β (plasma pressure/magnetic pressure). It has been used to guide the design of the TMX-U and MFTF-B experiments at Livermore. Its principal weakness is its perturbative nature, which renders its validity for high-β calculations open to question. In order to compute high-β equilibria, the reduced-MHD technique that has proven useful for determining toroidal equilibria was adapted to the tandem mirror geometry. In this approach, the paraxial expansion of the MHD equations yields a set of coupled nonlinear equations of motion, valid for arbitrary β, that are solved as an initial-value problem. Two particular formulations have been implemented in computer codes developed at NYU/Kyoto U and LLNL. They differ primarily in the type of grid, the location of the lateral boundary, the damping techniques employed, and the method of calculating pressure-balance equilibrium. Discussions of these codes are presented in this paper. (Kato, T.)
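
    Since the expansion parameter throughout is β = plasma pressure/magnetic pressure, a quick evaluation of β = 2μ0 n kT / B² is a useful reference point. The parameter values below are generic placeholders, not TMX-U or MFTF-B design numbers.

        import math

        mu0 = 4.0 * math.pi * 1.0e-7   # vacuum permeability (H/m)

        def beta(n_m3, t_kev, b_tesla):
            """beta = 2 * mu0 * n * kT / B^2 (single-species pressure)."""
            p = n_m3 * t_kev * 1.0e3 * 1.602e-19   # pressure (Pa)
            return 2.0 * mu0 * p / b_tesla**2

        # Generic placeholder parameters:
        print(f"beta ~ {beta(n_m3=1.0e19, t_kev=10.0, b_tesla=1.0):.2f}")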

  20. Over Batch Analysis for the LLNL Plutonium Packaging System (PuPS)

    International Nuclear Information System (INIS)

    Riley, D.; Dodson, K.

    2007-01-01

    This document addresses the concern raised in the Savannah River Site (SRS) Acceptance Criteria (Reference 1, Section 6.a.3) about receiving an item that is over batched by 1.0 kg of fissile materials. This document shows that such an occurrence is incredible. Some of the Department of Energy Standard 3013 (DOE-STD-3013) requirements are described in Section 2.1. The SRS requirement is discussed in Section 2.2. Section 2.3 describes the way fissile materials are handled in the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (B332). Based on the material handling discussed in Section 2.3, there are only three errors that could result in a shipping container being over batched: incorrect measurement of the item, selecting the wrong item to package, and packaging two items into a single shipping container. The analysis in Section 3 shows that the first two events are incredible because of the controls that exist at LLNL. The third event is physically impossible. Therefore, it is incredible for an item that is over batched by more than 1.0 kg of fissile materials to be shipped to SRS

  1. Over Batch Analysis for the LLNL DOE-STD-3013 Packaging System

    International Nuclear Information System (INIS)

    Riley, D.C.; Dodson, K.

    2009-01-01

    This document addresses the concern raised in the Savannah River Site (SRS) Acceptance Criteria about receiving an item that is over batched by 1.0 kg of fissile materials. This document shows that such an occurrence is incredible. Some of the Department of Energy Standard 3013 (DOE-STD-3013) requirements are described in Section 2.1. The SRS requirement is discussed in Section 2.2. Section 2.3 describes the way fissile materials are handled in the Lawrence Livermore National Laboratory (LLNL) Plutonium Facility (B332). Based on the material handling discussed in Section 2.3, there are only three errors that could result in a shipping container being over batched: incorrect measurement of the item, selecting the wrong item to package, and packaging two items into a single shipping container. The analysis in Section 3 shows that the first two events are incredible because of the controls that exist at LLNL. The third event is physically impossible. Therefore, it is incredible for an item that is over batched by more than 1.0 kg of fissile materials to be shipped to SRS.

  2. Implementing necessary and sufficient standards for radioactive waste management at LLNL

    International Nuclear Information System (INIS)

    Sims, J.M.; Ladran, A.; Hoyt, D.

    1995-01-01

    Lawrence Livermore National Laboratory (LLNL) and the U.S. Department of Energy, Oakland Field Office (DOE/OAK), are participating in a pilot program to evaluate the process to develop necessary and sufficient sets of standards for contractor activities. This concept of contractor and DOE jointly and locally deciding on what constitutes the set of standards that are necessary and sufficient to perform work safely and in compliance with federal, state, and local regulations grew out of DOE's Department Standards Committee (Criteria for the Department's Standards Program, August 1994, DOE/EH-0416). We have chosen radioactive waste management activities as the pilot program at LLNL. This pilot includes low-level radioactive waste, transuranic (TRU) waste, and the radioactive component of low-level and TRU mixed wastes. Guidance for the development and implementation of the necessary and sufficient set of standards is provided in "The Department of Energy Closure Process for Necessary and Sufficient Sets of Standards," March 27, 1995 (draft)

  3. LLNL Experimental Test Site (Site 300) Potable Water System Operations Plan

    Energy Technology Data Exchange (ETDEWEB)

    Ocampo, R. P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bellah, W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-09-14

    The existing Lawrence Livermore National Laboratory (LLNL) Site 300 drinking water system operation schematic is shown in Figures 1 and 2 below. The sources of water are from two Site 300 wells (Well #18 and Well #20) and San Francisco Public Utilities Commission (SFPUC) Hetch-Hetchy water through the Thomas shaft pumping station. Currently, Well #20 with 300 gallons per minute (gpm) pump capacity is the primary source of well water used during the months of September through July, while Well #18 with 225 gpm pump capacity is the source of well water for the month of August. The well water is chlorinated using sodium hypochlorite to provide required residual chlorine throughout Site 300. Well water chlorination is covered in the Lawrence Livermore National Laboratory Experimental Test Site (Site 300) Chlorination Plan (“the Chlorination Plan”; LLNL-TR-642903; current version dated August 2013). The third source of water is the SFPUC Hetch-Hetchy Water System through the Thomas shaft facility with a 150 gpm pump capacity. At the Thomas shaft station the pumped water is treated through SFPUC-owned and operated ultraviolet (UV) reactor disinfection units on its way to Site 300. The Thomas Shaft Hetch- Hetchy water line is connected to the Site 300 water system through the line common to Well pumps #18 and #20 at valve box #1.
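
    The chlorine feed arithmetic behind such a plan usually reduces to the standard waterworks formula, feed (lb/day) = flow (MGD) × dose (mg/L) × 8.34. The sketch below applies it at Well #20's stated 300 gpm capacity; the 1.0 mg/L target dose is a hypothetical illustration, not a value from the Chlorination Plan.

        def chlorine_feed_lb_per_day(flow_gpm, dose_mg_per_l):
            """Standard waterworks formula: lb/day = flow (MGD) * dose (mg/L) * 8.34."""
            mgd = flow_gpm * 60 * 24 / 1.0e6   # gal/min -> million gallons/day
            return mgd * dose_mg_per_l * 8.34

        # Well #20 at its stated 300 gpm capacity; the 1.0 mg/L dose is hypothetical:
        print(f"{chlorine_feed_lb_per_day(300.0, 1.0):.2f} lb/day available chlorine")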

  4. Foreign Travel Trip Report for LLNL travel with DOE FES funding, May 19th-30th, 2012

    International Nuclear Information System (INIS)

    Joseph, I.

    2012-01-01

    I attended the 20th biennial International Conference on Plasma Surface Interaction (PSI) in Fusion Devices in Aachen, Germany, hosted this year by the Forschungszentrum Jülich (FZJ) research center. The PSI conference is one of the main international forums for the presentation and discussion of results on plasma surface interactions and edge plasma physics relevant to magnetic confinement fusion devices. I disseminated the recent results of FESP/LLNL tokamak research by presenting three posters on: (i) understanding reconnection and controlling edge localized modes (ELMs) using the BOUT++ code, (ii) simulation of resistive ballooning mode turbulence, and (iii) innovative design of Snowflake divertors. I learned of many new and recent results from international tokamak facilities and had the opportunity for discussion of these topics with other scientists at the poster sessions, conference lunches/receptions, etc. Some of the major highlights of the PSI conference topics were: (1) Review of the progress in using metallic tungsten and beryllium (ITER-like) walls at international tokamak facilities: JET (Culham, UK), TEXTOR (FZJ, Germany) and Alcator CMOD (MIT, USA). Results included: effect of small and large-area melting on plasma impurity content and recovery, expected reduction in retention of hydrogenic species, increased heat load during disruptions and need for mitigation with massive gas injection. (2) A review of ELM control in general (T. Evans, GA) and recent results of ELM control using n=2 external magnetic perturbations on ASDEX-Upgrade (MPI-Garching, Germany). (3) General agreement among the international tokamak database that, along the outer midplane of a low collisionality tokamak, the SOL power width in current experiments varies inversely with respect to plasma current (Ip), roughly as 1/Ip, with little dependence on other plasma parameters. This would imply roughly a factor of 1/4 of the width that was assumed for the design of the ITER tokamak
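
    The reported scaling is easy to make concrete: if λ_q ∝ 1/Ip, a width measured at one plasma current extrapolates directly to another. The normalization constant below is invented purely for illustration; only the 1/Ip trend and its roughly factor-of-4 implication for ITER come from the text.

        def sol_width_mm(i_p_ma, c_mm_ma=5.0):
            """Illustrative lambda_q ~ C / Ip; C is an invented constant."""
            return c_mm_ma / i_p_ma

        for ip in (1.0, 2.0, 15.0):    # MA; 15 MA is roughly the ITER baseline
            print(f"Ip = {ip:4.1f} MA -> lambda_q ~ {sol_width_mm(ip):.2f} mm")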

  5. Foreign Travel Trip Report for LLNL travel with DOE FES funding,May 19th-30th, 2012

    Energy Technology Data Exchange (ETDEWEB)

    Joseph, I

    2012-07-05

    I attended the 20th biannual International Conference on Plasma Surface Interaction (PSI) in Fusion Devices in Aachen, Germany, hosted this year by the Forschungszentrum Julich (FZJ) research center. The PSI conference is one of the main international forums for the presentation and discussion of results on plasma surface interactions and edge plasma physics relevant to magnetic confinement fusion devices. I disseminated the recent results of FESP/LLNL tokamak research by presenting three posters on: (i) understanding reconnection and controlling edge localized modes (ELMs) using the BOUT++ code, (ii) simulation of resistive ballooning mode turbulence, and (iii) innovative design of Snowflake divertors. I learned of many new and recent results from international tokamak facilities and had the opportunity for discussion of these topics with other scientists at the poster sessions, conference lunches/receptions, etc. Some of the major highlights of the PSI conference topics were: (1) Review of the progress in using metallic tungsten and beryllium (ITER-like) walls at international tokamak facilities: JET (Culham, UK), TEXTOR (FZJ, Germany) and Alcator CMOD (MIT, USA). Results included: effect of small and large-area melting on plasma impurity content and recovery, expected reduction in retention of hydrogenic species, increased heat load during disruptions and need for mitigation with massive gas injection. (2) A review of ELM control in general (T. Evans, GA) and recent results of ELM control using n=2 external magnetic perturbations on ASDEX-Upgrade (MPI-Garching, Germany). (3) General agreement among the international tokamak database that, along the outer midplane of a low collisionality tokamak, the SOL power width in current experiments varies inversely with respect to plasma current (Ip), roughly as 1/Ip, with little dependence on other plasma parameters. This would imply roughly a factor of 1/4 of the width that was assumed for the design of the ITER tokamak

  6. RCS modeling with the TSAR FDTD code

    Energy Technology Data Exchange (ETDEWEB)

    Pennock, S.T.; Ray, S.L.

    1992-03-01

    The TSAR electromagnetic modeling system consists of a family of related codes that have been designed to work together to provide users with a practical way to set up, run, and interpret the results from complex 3-D finite-difference time-domain (FDTD) electromagnetic simulations. The software has been in development at the Lawrence Livermore National Laboratory (LLNL) and at other sites since 1987. Active internal use of the codes began in 1988 with limited external distribution and use beginning in 1991. TSAR was originally developed to analyze high-power microwave and EMP coupling problems. However, the general-purpose nature of the tools has enabled us to use the codes to solve a broader class of electromagnetic applications and has motivated the addition of new features. In particular a family of near-to-far field transformation routines have been added to the codes, enabling TSAR to be used for radar-cross section and antenna analysis problems.
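
    For readers unfamiliar with the method, the sketch below is a minimal one-dimensional FDTD (Yee) update loop in its generic textbook form; TSAR implements the full 3-D version with far more machinery (materials, boundaries, near-to-far-field transforms), none of which is represented here.

        # Minimal 1D FDTD (Yee) update loop -- generic textbook form, not TSAR.
        import math

        n_cells, n_steps = 200, 400
        ez = [0.0] * n_cells           # electric field on integer grid points
        hy = [0.0] * n_cells           # magnetic field on half grid points
        imp0 = 377.0                   # free-space impedance (ohms)

        for t in range(n_steps):
            for i in range(n_cells - 1):             # H update from curl of E
                hy[i] += (ez[i + 1] - ez[i]) / imp0
            for i in range(1, n_cells):              # E update from curl of H
                ez[i] += (hy[i] - hy[i - 1]) * imp0
            ez[50] += math.exp(-((t - 30.0) / 10.0) ** 2)   # soft Gaussian source

        print(f"peak |Ez| after {n_steps} steps: {max(abs(v) for v in ez):.3f}")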

  7. Code Cactus

    Energy Technology Data Exchange (ETDEWEB)

    Fajeau, M; Nguyen, L T; Saunier, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-09-01

    This code handles the following problems: (1) analysis of thermal experiments on a water loop at high or low pressure, in steady-state or transient behavior; (2) analysis of the thermal and hydrodynamic behavior of water-cooled and moderated reactors, at either high or low pressure, with boiling permitted; fuel elements are assumed to be flat plates. Flow in parallel channels, coupled or not by conduction across the plates, is computed for imposed conditions of pressure drop or flow rate, variable or not with respect to time; the power can be coupled to a reactor kinetics calculation or supplied by the code user. The code, which contains a schematic representation of safety rod behavior, is a one-dimensional, multi-channel code and has as its complement FLID, a one-channel, two-dimensional code. (authors)
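
    The parallel-channel problem the abstract describes has a simple closed form in the quadratic-loss limit: if each channel obeys Δp = K_i Q_i² and all channels see the same Δp, then Q_i ∝ 1/√K_i. A sketch with made-up loss coefficients (not Cactus's hydraulics model):

        import math

        def split_flow(q_total, k_losses):
            """Equal dp = K_i * Q_i^2 across parallel channels => Q_i ~ 1/sqrt(K_i)."""
            weights = [1.0 / math.sqrt(k) for k in k_losses]
            total = sum(weights)
            return [q_total * w / total for w in weights]

        # Three hypothetical channels, the middle one more restrictive:
        print(split_flow(q_total=10.0, k_losses=[1.0, 4.0, 1.0]))  # [4.0, 2.0, 4.0]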

  8. Programming a real code in a functional language (part 1)

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, C.P.

    1991-09-10

    For some, functional languages hold the promise of allowing ease of programming massively parallel computers that imperative languages such as Fortran and C do not offer. At LLNL, we have initiated a project to write the physics of a major production code in Sisal, a functional language developed at LLNL in collaboration with researchers throughout the world. We are investigating the expressibility of Sisal, as well as its performance on a shared-memory multiprocessor, the Y-MP. An interesting aspect of the project is that Sisal modules can call Fortran modules, and are callable by them. This eliminates the rewriting of 80% of the production code that would not benefit from parallel execution. Preliminary results indicate that the restrictive nature of the language does not cause problems in expressing the algorithms we have chosen. Some interesting aspects of programming in a mixed functional-imperative environment have surfaced, but can be managed. 8 refs.
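
    Sisal compilers are scarce today, so the sketch below uses Python to illustrate the side-effect-free style the paragraph describes: pure functions combined by maps and reductions, which is what gives a compiler freedom to parallelize. It is a stylistic illustration only, not code from the LLNL project.

        # Side-effect-free style: pure functions combined by map and reduce.
        from functools import reduce

        def pressure(density, temperature, gamma=1.4):
            return (gamma - 1.0) * density * temperature   # pure: mutates nothing

        densities = [1.0, 0.9, 1.2, 1.1]
        temperatures = [2.0, 2.1, 1.8, 2.0]

        pressures = list(map(pressure, densities, temperatures))   # parallel map
        total = reduce(lambda a, b: a + b, pressures, 0.0)         # reduction
        print(pressures, total)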

  9. Overview of the LBL/LLNL negative-ion-based neutral beam program

    International Nuclear Information System (INIS)

    Pyle, R.V.

    1980-01-01

    The LBL/LLNL negative-ion-based neutral beam development program and status are described. The emphasis has shifted in some details since the first symposium in 1977, but our overall objectives remain the same, namely, the development of megawatt d.c. injection systems. Previous emphasis was on a system in which the negative ions were produced by double charge exchange in sodium vapor. At present, the emphasis is on a self-extraction source in which the negative ions are produced on a biased surface imbedded in a plasma. A one-ampere beam will be accelerated to at least 40 keV next year. Studies of negative-ion formation and interactions help provide a data base for the technology program

  10. Research at Clark in the early '60s and at LLNL in the late '80s

    International Nuclear Information System (INIS)

    Gatrousis, C.

    1993-01-01

    Tom Sugihara's scientific leadership over a period of almost four decades covered many areas. His early research at Clark dealt with fission yield measurements and radiochemical separations of fallout species in the marine environment. Tom pioneered many of the methods for detecting soft beta emitters and low levels of radioactivity. Studies of the behavior of radioactivity in the marine ecosystem were important adjuncts to Tom's nuclear science research at Clark University, which emphasized investigations of nuclear reaction mechanisms. Among Tom's most important contributions while at Clark was his work with Matsuo and Dudey on the interpretation of isomeric yield ratios and fission studies with Noshkin and Baba. Tom's scientific career oscillated between research and administration. During the latter part of his career his great breadth of interests and his scientific "taste" had a profound influence at LLNL in areas that were new to him, materials science and solid state physics

  11. A historical perspective on fifteen years of laser damage thresholds at LLNL

    International Nuclear Information System (INIS)

    Rainer, F.; De Marco, F.P.; Staggs, M.C.; Kozlowski, M.R.; Atherton, L.J.; Sheehan, L.M.

    1993-01-01

    We have completed a fifteen-year, referenced and documented compilation of more than 15,000 measurements of laser-induced damage thresholds (LIDT) conducted at the Lawrence Livermore National Laboratory (LLNL). These measurements cover the spectrum from 248 to 1064 nm with pulse durations ranging from < 1 ns to 65 ns and at pulse-repetition frequencies (PRF) from single shots to 6.3 kHz. We emphasize the changes in LIDTs during the past two years since we last summarized our database. We relate these results to earlier data, concentrating on improvements in processing methods, materials, and conditioning techniques. In particular, we highlight the current status of anti-reflective (AR) coatings, high reflectors (HR), polarizers, and frequency-conversion crystals used primarily at 355 nm and 1064 nm

  12. Production of High Harmonic X-Ray Radiation from Non-linear Thomson at LLNL PLEIADES

    CERN Document Server

    Lim, Jae; Betts, Shawn; Crane, John; Doyuran, Adnan; Frigola, Pedro; Gibson, David J; Hartemann, Fred V; Rosenzweig, James E; Travish, Gil; Tremaine, Aaron M

    2005-01-01

    We describe an experiment for production of high harmonic x-ray radiation from Thomson backscattering of an ultra-short high power density laser by a relativistic electron beam at the PLEIADES facility at LLNL. In this scenario, electrons execute a “figure-8” motion under the influence of the high-intensity laser field, where the constant characterizing the field strength is expected to exceed unity: $a_L = eE_L/(m c \omega_L) \geq 1$. With large $a_L$ this motion produces high harmonic x-ray radiation and significant broadening of the spectral peaks. This paper is intended to give a layout of the PLEIADES experiment, along with progress towards experimental goals.
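
    The field-strength parameter quoted above is commonly evaluated from laser intensity as a0 ≈ 0.85 λ[μm] √(I / 10^18 W cm^-2) for linear polarization. A quick evaluation with hypothetical intensities (not PLEIADES shot parameters):

        import math

        def a0(intensity_w_cm2, wavelength_um):
            """a0 ~ 0.85 * lambda[um] * sqrt(I / 1e18 W/cm^2), linear polarization."""
            return 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1.0e18)

        # Hypothetical 800 nm intensities, not PLEIADES shot values:
        for i in (1.0e18, 5.0e18, 2.0e19):
            print(f"I = {i:.0e} W/cm^2 -> a0 ~ {a0(i, 0.8):.2f}")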

  13. Author Contribution to the Pu Handbook II: Chapter 37 LLNL Integrated Sample Preparation Glovebox (TEM) Section

    International Nuclear Information System (INIS)

    Wall, Mark A.

    2016-01-01

    The development of our Integrated Actinide Sample Preparation Laboratory (IASPL) commenced in 1998 driven by the need to perform transmission electron microscopy studies on naturally aged plutonium and its alloys looking for the microstructural effects of the radiological decay process (1). Remodeling and construction of a laboratory within the Chemistry and Materials Science Directorate facilities at LLNL was required to turn a standard radiological laboratory into a Radiological Materials Area (RMA) and Radiological Buffer Area (RBA) containing type I, II and III workplaces. Two inert atmosphere dry-train glove boxes with antechambers and entry/exit fumehoods (Figure 1), having a baseline atmosphere of 1 ppm oxygen and 1 ppm water vapor, a utility fumehood and a portable, and a third double-walled enclosure have been installed and commissioned. These capabilities, along with highly trained technical staff, facilitate the safe operation of sample preparation processes and instrumentation, and sample handling while minimizing oxidation or corrosion of the plutonium. In addition, we are currently developing the capability to safely transfer small metallographically prepared samples to a mini-SEM for microstructural imaging and chemical analysis. The gloveboxes continue to be the most crucial element of the laboratory allowing nearly oxide-free sample preparation for a wide variety of LLNL-based characterization experiments, which includes transmission electron microscopy, electron energy loss spectroscopy, optical microscopy, electrical resistivity, ion implantation, X-ray diffraction and absorption, magnetometry, metrological surface measurements, high-pressure diamond anvil cell equation-of-state, phonon dispersion measurements, X-ray absorption and emission spectroscopy, and differential scanning calorimetry. The sample preparation and materials processing capabilities in the IASPL have also facilitated experimentation at world-class facilities such as the

  14. Author Contribution to the Pu Handbook II: Chapter 37 LLNL Integrated Sample Preparation Glovebox (TEM) Section

    Energy Technology Data Exchange (ETDEWEB)

    Wall, Mark A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-10-25

    The development of our Integrated Actinide Sample Preparation Laboratory (IASPL) commenced in 1998 driven by the need to perform transmission electron microscopy studies on naturally aged plutonium and its alloys looking for the microstructural effects of the radiological decay process (1). Remodeling and construction of a laboratory within the Chemistry and Materials Science Directorate facilities at LLNL was required to turn a standard radiological laboratory into a Radiological Materials Area (RMA) and Radiological Buffer Area (RBA) containing type I, II and III workplaces. Two inert atmosphere dry-train glove boxes with antechambers and entry/exit fumehoods (Figure 1), having a baseline atmosphere of 1 ppm oxygen and 1 ppm water vapor, a utility fumehood and a portable, and a third double-walled enclosure have been installed and commissioned. These capabilities, along with highly trained technical staff, facilitate the safe operation of sample preparation processes and instrumentation, and sample handling while minimizing oxidation or corrosion of the plutonium. In addition, we are currently developing the capability to safely transfer small metallographically prepared samples to a mini-SEM for microstructural imaging and chemical analysis. The gloveboxes continue to be the most crucial element of the laboratory allowing nearly oxide-free sample preparation for a wide variety of LLNL-based characterization experiments, which includes transmission electron microscopy, electron energy loss spectroscopy, optical microscopy, electrical resistivity, ion implantation, X-ray diffraction and absorption, magnetometry, metrological surface measurements, high-pressure diamond anvil cell equation-of-state, phonon dispersion measurements, X-ray absorption and emission spectroscopy, and differential scanning calorimetry. The sample preparation and materials processing capabilities in the IASPL have also facilitated experimentation at world-class facilities such as the

  15. Three-dimensional modeling with finite element codes

    Energy Technology Data Exchange (ETDEWEB)

    Druce, R.L.

    1986-01-17

    This paper describes work done to model magnetostatic field problems in three dimensions. Finite element codes, available at LLNL, and pre- and post-processors were used in the solution of the mathematical model, the output from which agreed well with the experimentally obtained data. The geometry used in this work was a cylinder with ports in the periphery and no current sources in the space modeled. 6 refs., 8 figs.
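
    In the current-free region described above, the magnetic scalar potential satisfies Laplace's equation, so a simple relaxation solve conveys the flavor of the computation. The Jacobi sketch below on a square grid is a toy stand-in for the finite-element solution, with invented geometry and boundary values.

        # Jacobi relaxation for Laplace's equation -- a toy stand-in for the
        # finite-element solve; geometry and boundary values are invented.
        import numpy as np

        n = 51
        phi = np.zeros((n, n))
        phi[0, :] = 1.0                # fixed potential on one boundary -- assumed

        for _ in range(2000):
            phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                      phi[1:-1, :-2] + phi[1:-1, 2:])

        by, bx = np.gradient(-phi)     # field ~ -grad(phi)
        print(f"max |B| (arbitrary units): {np.hypot(bx, by).max():.3f}")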

  16. Enhanced verification test suite for physics simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, James R.; Brock, Jerry S.; Brandon, Scott T.; Cotrell, David L.; Johnson, Bryan; Knupp, Patrick; Rider, William J.; Trucano, Timothy G.; Weirs, V. Gregory

    2008-09-01

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations.
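
    The workhorse calculation in such verification studies is the observed order of accuracy from two grid levels, p = ln(e_coarse/e_fine)/ln r. A minimal sketch with placeholder errors (not results from the suite):

        import math

        def observed_order(err_coarse, err_fine, refinement_ratio):
            """p = ln(e_coarse / e_fine) / ln(r) from two grid levels."""
            return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

        # Placeholder L1 errors from a hypothetical grid-doubling study:
        p = observed_order(err_coarse=4.0e-3, err_fine=1.1e-3, refinement_ratio=2.0)
        print(f"observed order ~ {p:.2f}")   # ~1.9, near a nominal 2nd order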

  17. Final report for the 1996 DOE grant supporting research at the SLAC/LBNL/LLNL B factory

    International Nuclear Information System (INIS)

    Judd, D.; Wright, D.

    1997-01-01

    This final report discusses Department of Energy-supported research funded through Lawrence Livermore National Laboratory (LLNL) which was performed as part of a collaboration between LLNL and Prairie View A and M University to develop part of the BaBar detector at the SLAC B Factory. This work focuses on the Instrumented Flux Return (IFR) subsystem of BaBar and involves a full range of detector development activities: computer simulations of detector performance, creation of reconstruction algorithms, and detector hardware R and D. Lawrence Livermore National Laboratory has a leading role in the IFR subsystem and has established on-site computing and detector facilities to conduct this research. By establishing ties with the existing LLNL Research Collaboration Program and leveraging LLNL resources, the experienced Prairie View group was able to quickly achieve a more prominent role within the BaBar collaboration and make significant contributions to the detector design. In addition, this work provided the first entry point for Historically Black Colleges and Universities into the B Factory collaboration, and created an opportunity to train a new generation of minority students at the premier electron-positron high energy physics facility in the US

  18. Criticality Safety Support to a Project Addressing SNM Legacy Items at LLNL

    International Nuclear Information System (INIS)

    Pearson, J S; Burch, J G; Dodson, K E; Huang, S T

    2005-01-01

    The programmatic, facility and criticality safety support staffs at the LLNL Plutonium Facility worked together to successfully develop and implement a project to process legacy (DNFSB Recommendation 94-1 and non-Environmental, Safety, and Health (ES and H) labeled) materials in storage. Over many years, material had accumulated in storage that lacked information to adequately characterize the material for current criticality safety controls used in the facility. Generally, the fissionable material mass information was well known, but other information such as form, impurities, internal packaging, and presence of internal moderating or reflecting materials were not well documented. In many cases, the material was excess to programmatic need, but such a determination was difficult with the little information given on MC and A labels and in the MC and A database. The material was not packaged as efficiently as possible, so it also occupied much more valuable storage space than was necessary. Although safe as stored, the inadequately characterized material posed a risk for criticality safety noncompliances if moved within the facility under current criticality safety controls. A Legacy Item Implementation Plan was developed and implemented to deal with this problem. Reasonable bounding conditions were determined for the material involved, and criticality safety evaluations were completed. Two appropriately designated glove boxes were identified and criticality safety controls were developed to safely inspect the material. Inspecting the material involved identifying containers of legacy material, followed by opening, evaluating, processing if necessary, characterizing and repackaging the material. Material from multiple containers was consolidated more efficiently thus decreasing the total number of stored items to about one half of the highest count. Current packaging requirements were implemented. Detailed characterization of the material was captured in databases

  19. Impact of the Revised 10 CFR 835 on the Neutron Dose Rates at LLNL

    International Nuclear Information System (INIS)

    Radev, R.

    2009-01-01

    In June 2007, 10 CFR 835 (1) was revised to include new radiation weighting factors for neutrons, updated dosimetric models, and dose terms consistent with the newer ICRP recommendations. A significant aspect of the revised 10 CFR 835 is the adoption of the recommendations outlined in ICRP-60 (2). The recommended new quantities demand a review of much of the basic data used in protection against exposure to sources of ionizing radiation. The International Commission on Radiation Units and Measurements has defined a number of quantities for use in personnel and area monitoring (3,4,5) including the ambient dose equivalent H*(d) to be used for area monitoring and instrument calibrations. These quantities are used in ICRP-60 and ICRP-74. This report deals only with the changes in the ambient dose equivalent and ambient dose rate equivalent for neutrons as a result of the implementation of the revised 10 CFR 835. In the report, the terms neutron dose and neutron dose rate will be used for convenience for ambient neutron dose and ambient neutron dose rate unless otherwise stated. This report provides a qualitative and quantitative estimate of how much the neutron dose rates at LLNL will change with the implementation of the revised 10 CFR 835. Neutron spectra and dose rates from selected locations at the LLNL were measured with a high resolution spectroscopic neutron dose rate system (ROSPEC) as well as with a standard neutron rem meter (a.k.a., a remball). The spectra obtained at these locations compare well with the spectra from the Radiation Calibration Laboratory's (RCL) bare californium source that is currently used to calibrate neutron dose rate instruments. The measurements obtained from the high resolution neutron spectrometer and dose meter ROSPEC and the NRD dose meter compare within the range of ±25%. When the new radiation weighting factors are adopted with the implementation of the revised 10 CFR 835, the measured dose rates will increase by up to 22%. The

  20. LLNL-G3Dv3: Global P wave tomography model for improved regional and teleseismic travel time prediction

    Energy Technology Data Exchange (ETDEWEB)

    Simmons, N. A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Myers, S. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Johannesson, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Matzel, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-10-06

    We develop a global-scale P wave velocity model (LLNL-G3Dv3) designed to accurately predict seismic travel times at regional and teleseismic distances simultaneously. The model provides a new image of Earth's interior, but the underlying practical purpose of the model is to provide enhanced seismic event location capabilities. The LLNL-G3Dv3 model is based on ∼2.8 million P and Pn arrivals that are re-processed using our global multiple-event locator called Bayesloc. We construct LLNL-G3Dv3 within a spherical tessellation based framework, allowing for explicit representation of undulating and discontinuous layers including the crust and transition zone layers. Using a multiscale inversion technique, regional trends as well as fine details are captured where the data allow. LLNL-G3Dv3 exhibits large-scale structures including cratons and superplumes as well as numerous complex details in the upper mantle including within the transition zone. Particularly, the model reveals new details of a vast network of subducted slabs trapped within the transition zone beneath much of Eurasia, including beneath the Tibetan Plateau. We demonstrate the impact of Bayesloc multiple-event location on the resulting tomographic images through comparison with images produced without the benefit of multiple-event constraints (single-event locations). We find that the multiple-event locations allow for better reconciliation of the large set of direct P phases recorded at 0–97° distance and yield a smoother and more continuous image relative to the single-event locations. Travel times predicted from a 3-D model are also found to be strongly influenced by the initial locations of the input data, even when an iterative inversion/relocation technique is employed.

  1. Summary of Environmental Data Analysis and Work Performed by Lawrence Livermore National Laboratory (LLNL) in Support of the Navajo Nation Abandoned Mine Lands Project at Tse Tah, Arizona

    Energy Technology Data Exchange (ETDEWEB)

    Taffet, Michael J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Esser, Bradley K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Madrid, Victor M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-17

    This report summarizes work performed by Lawrence Livermore National Laboratory (LLNL) under Navajo Nation Services Contract CO9729 in support of the Navajo Abandoned Mine Lands Reclamation Program (NAMLRP). Due to restrictions on access to uranium mine waste sites at Tse Tah, Arizona that developed during the term of the contract, not all of the work scope could be performed. LLNL was able to interpret environmental monitoring data provided by NAMLRP. Summaries of these data evaluation activities are provided in this report. Additionally, during the contract period, LLNL provided technical guidance, instructional meetings, and review of relevant work performed by NAMLRP and its contractors that was not contained in the contract work scope.

  2. Estimating The Reliability of the Lawrence Livermore National Laboratory (LLNL) Flash X-ray (FXR) Machine

    International Nuclear Information System (INIS)

    Ong, M M; Kihara, R; Zentler, J M; Kreitzer, B R; DeHope, W J

    2007-01-01

    At Lawrence Livermore National Laboratory (LLNL), our flash X-ray accelerator (FXR) is used on multi-million dollar hydrodynamic experiments. Because of the importance of the radiographs, FXR must be ultra-reliable. Flash linear accelerators that can generate a 3 kA beam at 18 MeV are very complex. They have thousands, if not millions, of critical components that could prevent the machine from performing correctly. For the last five years, we have quantified and are tracking component failures. From this data, we have determined that the reliability of the high-voltage gas-switches that initiate the pulses, which drive the accelerator cells, dominates the statistics. The failure mode is a single-switch pre-fire that reduces the energy of the beam and degrades the X-ray spot-size. The unfortunate result is a lower resolution radiograph. FXR is a production machine that allows only a modest number of pulses for testing. Therefore, reliability switch testing that requires thousands of shots is performed on our test stand. Study of representative switches has produced pre-fire statistical information and probability distribution curves. This information is applied to FXR to develop test procedures and determine individual switch reliability using a minimal number of accelerator pulses

  3. LLNL Underground-Coal-Gasification Project. Quarterly progress report, July-September 1981

    Energy Technology Data Exchange (ETDEWEB)

    Stephens, D.R.; Clements, W. (eds.)

    1981-11-09

    We have continued our laboratory studies of forward gasification in small blocks of coal mounted in 55-gal drums. A steam/oxygen mixture is fed into a small hole drilled longitudinally through the center of the block, the coal is ignited near the inlet and burns toward the outlet, and the product gases come off at the outlet. Various diagnostic measurements are made during the course of the burn, and afterward the coal block is split open so that the cavity can be examined. Development work continues on our mathematical model for the small coal block experiments. Preparations for the large block experiments at a coal outcrop in the Tono Basin of Washington State have required steadily increasing effort with the approach of the scheduled starting time for the experiments (Fall 1981). Also in preparation is the deep gasification experiment, Tono 1, planned for another site in the Tono Basin after the large block experiments have been completed. Wrap-up work continues on our previous gasification experiments in Wyoming. Results of the postburn core-drilling program Hoe Creek 3 are presented here. Since 1976 the Soviets have been granted four US patents on various aspects of the underground coal gasification process. These patents are described here, and techniques of special interest are noted. Finally, we include ten abstracts of pertinent LLNL reports and papers completed during the quarter.

  4. Status of experiments at LLNL on high-power X-band microwave generators

    International Nuclear Information System (INIS)

    Houck, T.L.; Westenskow, G.A.

    1994-01-01

    The Microwave Source Facility at the Lawrence Livermore National Laboratory (LLNL) is studying the application of induction accelerator technology to high-power microwave generators suitable for linear collider power sources. The authors report on the results of two experiments, both using the Choppertron's 11.4 GHz modulator and a 5-MeV, 1-kA induction beam. The first experimental configuration has a single traveling wave output structure designed to produce in excess of 300 MW in a single fundamental waveguide. This output structure consists of 12 individual cells, the first two incorporating de-Q-ing circuits to dampen higher order resonant modes. The second experiment studies the feasibility of enhancing beam-to-microwave power conversion by accelerating a modulated beam with induction cells. Referred to as the ''Reacceleration Experiment,'' this experiment consists of three traveling-wave output structures designed to produce about 125 MW per output and two induction cells located between the outputs. The status of current and planned experiments is presented.

  5. Pleiades: A Sub-picosecond Tunable X-ray Source at the LLNL Electron Linac

    International Nuclear Information System (INIS)

    Slaughter, Dennis; Springer, Paul; Le Sage, Greg; Crane, John; Ditmire, Todd; Cowan, Tom; Anderson, Scott G.; Rosenzweig, James B.

    2002-01-01

    The use of ultra fast laser pulses to generate very high brightness, ultra short (fs to ps) pulses of x-rays is a topic of great interest to the x-ray user community. In principle, femto-second-scale pump-probe experiments can be used to temporally resolve structural dynamics of materials on the time scale of atomic motion. The development of sub-ps x-ray pulses will make possible a wide range of materials and plasma physics studies with unprecedented time resolution. A current project at LLNL will provide such a novel x-ray source based on Thomson scattering of high power, short laser pulses with a high peak brightness, relativistic electron bunch. The system is based on a 5 mm-mrad normalized emittance photo-injector, a 100 MeV electron RF linac, and a 300 mJ, 35 fs solid-state laser system. The Thomson x-ray source produces ultra fast pulses with x-ray energies capable of probing into high-Z metals, and a high flux per pulse enabling single shot experiments. The system will also operate at a high repetition rate (∼ 10 Hz). (authors)
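
    The x-ray energies quoted for such sources follow from the standard head-on Thomson backscatter relation, E_x ≈ 4γ²E_photon. A minimal numerical check, assuming an 800 nm drive laser (the wavelength is not stated in the record):

      # Head-on Thomson backscatter: E_x ~ 4 * gamma^2 * E_laser (assumed 800 nm laser)
      gamma = 100.0 / 0.511                # 100 MeV beam over 0.511 MeV rest energy
      e_laser_ev = 1239.84 / 800.0         # photon energy (eV) from wavelength (nm)
      e_x_kev = 4 * gamma**2 * e_laser_ev / 1e3
      print(f"peak x-ray energy ~ {e_x_kev:.0f} keV")   # ~237 keV at full beam energy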

  6. Assessment and cleanup of the Taxi Strip waste storage area at LLNL [Lawrence Livermore National Laboratory

    International Nuclear Information System (INIS)

    Buerer, A.

    1983-01-01

    In September 1982 the Hazards Control Department of the Lawrence Livermore National Laboratory (LLNL) began a final radiological survey of a former low-level radioactive waste storage area called the Taxi Strip so that the area could be released for construction of an office building. Collection of soil samples at the location of a proposed sewer line led to the discovery of an old disposal pit containing soil contaminated with low-level radioactive waste and organic solvents. The Taxi Strip area was excavated leading to the discovery of three additional small pits. The clean-up of Pit No. 1 is considered to be complete for radioactive contamination. The results from the chlorinated solvent analysis of the borehole samples and the limited number of samples analyzed by gas chromatography/mass spectrometry indicate that solvent clean-up at this pit is complete. This is being verified by gas chromatography/mass spectrometry analysis of a few additional soil samples from the bottom, sides, and ends of the pit. As a precaution, samples are also being analyzed for metals to determine if further excavation is necessary. Clean-up of Pits No. 2 and No. 3 is considered to be complete for radioactive and solvent contamination. Results of analysis for metals will determine if excavation is complete. Excavation of Pit No. 4, which resulted from surface leakage of radioactive contamination from an evaporation tray, is complete.

  7. Summary of LLNL's accomplishments for the FY93 Waste Processing Operations Program

    International Nuclear Information System (INIS)

    Grasz, E.; Domning, E.; Heggins, D.; Huber, L.; Hurd, R.; Martz, H.; Roberson, P.; Wilhelmsen, K.

    1994-04-01

    Under the US Department of Energy's (DOE's) Office of Technology Development (OTD)-Robotic Technology Development Program (RTDP), the Waste Processing Operations (WPO) Program was initiated in FY92 to address the development of automated material handling and automated chemical and physical processing systems for mixed wastes. The Program's mission was to develop a strategy for the treatment of all DOE mixed, low-level, and transuranic wastes. As part of this mission, DOE's Mixed Waste Integrated Program (MWIP) was charged with the development of innovative waste treatment technologies to surmount shortcomings of existing baseline systems. Current technology advancements and applications result from the cooperation of private industry, educational institutions, and several national laboratories operated for DOE. This summary document presents the LLNL Environmental Restoration and Waste Management (ER and WM) Automation and Robotics Section's contributions in support of DOE's FY93 WPO Program. This document further describes the technological developments that were integrated in the 1993 Mixed Waste Operations (MWO) Demonstration held at SRTC in November 1993.

  8. The EBIT Calorimeter Spectrometer: a new, permanent user facility at the LLNL EBIT

    International Nuclear Information System (INIS)

    Porter, F.S.; Beiersdorfer, P.; Brown, G.V.; Doriese, W.; Gygax, J.; Kelley, R.L.; Kilbourne, C.A.; King, J.; Irwin, K.; Reintsema, C.; Ullom, J.

    2007-01-01

    The EBIT Calorimeter Spectrometer (ECS) is currently being completed and will be installed at the EBIT facility at the Lawrence Livermore National Laboratory in October 2007. The ECS will replace the smaller XRS/EBIT microcalorimeter spectrometer that has been in almost continuous operation since 2000. The XRS/EBIT was based on a spare laboratory cryostat and an engineering model detector system from the Suzaku/XRS observatory program. The new ECS spectrometer was built to be a low maintenance, high performance implanted silicon microcalorimeter spectrometer with 4 eV resolution at 6 keV, 32 detector channels, 10 μs event timing, and the capability of uninterrupted acquisition sessions of over 60 hours at 50 mK. The XRS/EBIT program has been very successful, producing many results on topics such as laboratory astrophysics, atomic physics, nuclear physics, and calibration of the spectrometers for the National Ignition Facility. The ECS spectrometer will continue this work into the future with improved spectral resolution, integration times, and ease-of-use. We designed the ECS instrument with TES detectors in mind by using the same highly successful magnetic shielding as our laboratory TES cryostats. This design will lead to a future TES instrument at the LLNL EBIT. Here we discuss the legacy of the XRS/EBIT program, the performance of the new ECS spectrometer, and plans for a future TES instrument.

  9. Report on the B-Fields at NIF Workshop Held at LLNL October 12-13, 2015

    International Nuclear Information System (INIS)

    Fournier, K. B.; Moody, J. D.

    2015-01-01

    A national ICF laboratory workshop on requirements for a magnetized target capability on NIF was held by NIF at LLNL on October 12 and 13, attended by experts from LLNL, SNL, LLE, LANL, GA, and NRL. Advocates for indirect drive (LLNL), magnetic (Z) drive (SNL), polar direct drive (LLE), and basic science needing applied B (many institutions) presented and discussed requirements for the magnetized target capabilities they would like to see. A 30 T capability was most frequently requested. A phased operation increasing the field in steps experimentally can be envisioned. The NIF management will take the inputs from the scientific community represented at the workshop and recommend pulse-powered magnet parameters for NIF that best meet the collective user requests. In parallel, LLNL will continue investigating magnets for future generations that might be powered by compact laser-B-field generators (Moody, Fujioka, Santos, Woolsey, Pollock). The NIF facility engineers will start to analyze compatibility of the recommended pulsed magnet parameters (size, field, rise time, materials) with NIF chamber constraints, diagnostic access, and final optics protection against debris in FY16. The objective of this assessment will be to develop a schedule for achieving an initial B-field capability. Based on an initial assessment, room temperature magnetized gas capsules will be fielded on NIF first. Magnetized cryo-ice-layered targets will take longer (more compatibility issues). Magnetized wetted foam DT targets (Olson) may have somewhat fewer compatibility issues, making them a more likely choice for the first cryo-ice-layered target fielded with applied Bz.

  10. Joint research and development on toxic-material emergency response between ENEA and LLNL. 1982 progress report

    International Nuclear Information System (INIS)

    Gudiksen, P.; Lange, R.; Dickerson, M.; Sullivan, T.; Rosen, L.; Walker, H.; Boeri, G.B.; Caracciolo, R.; Fiorenza, R.

    1982-11-01

    A summary is presented of current and future cooperative studies between ENEA and LLNL researchers designed to develop improved real-time emergency response capabilities for assessing the environmental consequences resulting from an accidental release of toxic materials into the atmosphere. These studies include development and evaluation of atmospheric transport and dispersion models, interfacing of data processing and communications systems, supporting meteorological field experiments, and integration of radiological measurements and model results into real-time assessments

  11. PCS a code system for generating production cross section libraries

    International Nuclear Information System (INIS)

    Cox, L.J.

    1997-01-01

    This document outlines the use of the PCS Code System. It summarizes the execution process for generating FORMAT2000 production cross section files from FORMAT2000 reaction cross section files. It also describes the process of assembling the ASCII versions of the high energy production files made from ENDL and Mark Chadwick's calculations. Descriptions of the function of each code along with its input and output and use are given. This document is under construction. Please submit entries, suggestions, questions, and corrections to ljc@llnl.gov. 3 tabs

  12. The LLNL [Lawrence Livermore National Laboratory] ICF [Inertial Confinement Fusion] Program: Progress toward ignition in the Laboratory

    International Nuclear Information System (INIS)

    Storm, E.; Batha, S.H.; Bernat, T.P.; Bibeau, C.; Cable, M.D.; Caird, J.A.; Campbell, E.M.; Campbell, J.H.; Coleman, L.W.; Cook, R.C.; Correll, D.L.; Darrow, C.B.; Davis, J.I.; Drake, R.P.; Ehrlich, R.B.; Ellis, R.J.; Glendinning, S.G.; Haan, S.W.; Haendler, B.L.; Hatcher, C.W.; Hatchett, S.P.; Hermes, G.L.; Hunt, J.P.; Kania, D.R.; Kauffman, R.L.; Kilkenny, J.D.; Kornblum, H.N.; Kruer, W.L.; Kyrazis, D.T.; Lane, S.M.; Laumann, C.W.; Lerche, R.A.; Letts, S.A.; Lindl, J.D.; Lowdermilk, W.H.; Mauger, G.J.; Montgomery, D.S.; Munro, D.H.; Murray, J.R.; Phillion, D.W.; Powell, H.T.; Remington, B.R.; Ress, D.B.; Speck, D.R.; Suter, L.J.; Tietbohl, G.L.; Thiessen, A.R.; Trebes, J.E.; Trenholme, J.B.; Turner, R.E.; Upadhye, R.S.; Wallace, R.J.; Wiedwald, J.D.; Woodworth, J.G.; Young, P.M.; Ze, F.

    1990-01-01

    The Inertial Confinement Fusion (ICF) Program at the Lawrence Livermore National Laboratory (LLNL) has made substantial progress in target physics, target diagnostics, and laser science and technology. In each area, progress required the development of experimental techniques and computational modeling. The objectives of the target physics experiments in the Nova laser facility are to address and understand critical physics issues that determine the conditions required to achieve ignition and gain in an ICF capsule. The LLNL experimental program primarily addresses indirect-drive implosions, in which the capsule is driven by x rays produced by the interaction of the laser light with a high-Z plasma. Experiments address both the physics of generating the radiation environment in a laser-driven hohlraum and the physics associated with imploding ICF capsules to ignition and high-gain conditions in the absence of alpha deposition. Recent experiments and modeling have established much of the physics necessary to validate the basic concept of ignition and ICF target gain in the laboratory. The rapid progress made in the past several years, and in particular, recent results showing higher radiation drive temperatures and implosion velocities than previously obtained and assumed for high-gain target designs, has led LLNL to propose an upgrade of the Nova laser to 1.5 to 2 MJ (at 0.35 μm) to demonstrate ignition and energy gains of 10 to 20 -- the Nova Upgrade

  13. Monte Carlo Codes Invited Session

    International Nuclear Information System (INIS)

    Trama, J.C.; Malvagi, F.; Brown, F.

    2013-01-01

    This document lists 22 Monte Carlo codes used in radiation transport applications throughout the world. For each code the names of the organization and country and/or place are given. We have the following computer codes. 1) ARCHER, USA, RPI; 2) COG11, USA, LLNL; 3) DIANE, France, CEA/DAM Bruyeres; 4) FLUKA, Italy and CERN, INFN and CERN; 5) GEANT4, International GEANT4 collaboration; 6) KENO and MONACO (SCALE), USA, ORNL; 7) MC21, USA, KAPL and Bettis; 8) MCATK, USA, LANL; 9) MCCARD, South Korea, Seoul National University; 10) MCNP6, USA, LANL; 11) MCU, Russia, Kurchatov Institute; 12) MONK and MCBEND, United Kingdom, AMEC; 13) MORET5, France, IRSN Fontenay-aux-Roses; 14) MVP2, Japan, JAEA; 15) OPENMC, USA, MIT; 16) PENELOPE, Spain, Barcelona University; 17) PHITS, Japan, JAEA; 18) PRIZMA, Russia, VNIITF; 19) RMC, China, Tsinghua University; 20) SERPENT, Finland, VTT; 21) SUPERMONTECARLO, China, CAS INEST FDS Team Hefei; and 22) TRIPOLI-4, France, CEA Saclay

  14. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

    COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code is a little different in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as other codes. The objective of this paper is to present calculational results of a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments which generally involve both simple and complex physical problems. The COG results, which are reported in this paper, have been excellent.

  15. Strengthening LLNL Missions through Laboratory Directed Research and Development in High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Willis, D. K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-12-01

    High performance computing (HPC) has been a defining strength of Lawrence Livermore National Laboratory (LLNL) since its founding. Livermore scientists have designed and used some of the world’s most powerful computers to drive breakthroughs in nearly every mission area. Today, the Laboratory is recognized as a world leader in the application of HPC to complex science, technology, and engineering challenges. Most importantly, HPC has been integral to the National Nuclear Security Administration’s (NNSA’s) Stockpile Stewardship Program—designed to ensure the safety, security, and reliability of our nuclear deterrent without nuclear testing. A critical factor behind Lawrence Livermore’s preeminence in HPC is the ongoing investments made by the Laboratory Directed Research and Development (LDRD) Program in cutting-edge concepts to enable efficient utilization of these powerful machines. Congress established the LDRD Program in 1991 to maintain the technical vitality of the Department of Energy (DOE) national laboratories. Since then, LDRD has been, and continues to be, an essential tool for exploring anticipated needs that lie beyond the planning horizon of our programs and for attracting the next generation of talented visionaries. Through LDRD, Livermore researchers can examine future challenges, propose and explore innovative solutions, and deliver creative approaches to support our missions. The present scientific and technical strengths of the Laboratory are, in large part, a product of past LDRD investments in HPC. Here, we provide seven examples of LDRD projects from the past decade that have played a critical role in building LLNL’s HPC, computer science, mathematics, and data science research capabilities, and describe how they have impacted LLNL’s mission.

  16. LLNL MOX fuel lead assemblies data report for the surplus plutonium disposition environmental impact statement

    International Nuclear Information System (INIS)

    O'Connor, D.G.; Fisher, S.E.; Holdaway, R.

    1998-08-01

    The purpose of this document is to support the US Department of Energy (DOE) Fissile Materials Disposition Program's preparation of the draft surplus plutonium disposition environmental impact statement. This is one of several responses to data call requests for background information on activities associated with the operation of the lead assembly (LA) mixed-oxide (MOX) fuel fabrication facility. The DOE Office of Fissile Materials Disposition (DOE-MD) has developed a dual-path strategy for disposition of surplus weapons-grade plutonium. One of the paths is to disposition surplus plutonium through irradiation of MOX fuel in commercial nuclear reactors. MOX fuel consists of plutonium and uranium oxides (PuO2 and UO2), typically containing 95% or more UO2. DOE-MD requested that the DOE Site Operations Offices nominate DOE sites that meet established minimum requirements that could produce MOX LAs. LLNL has proposed an LA MOX fuel fabrication approach that would be done entirely inside an S and S Category 1 area. This includes receipt and storage of PuO2 powder, fabrication of MOX fuel pellets, assembly of fuel rods and bundles, and shipping of the packaged fuel to a commercial reactor site. Support activities will take place within a Category 1 area. Building 332 will be used to receive and store the bulk PuO2 powder, fabricate MOX fuel pellets, and assemble fuel rods. Building 334 will be used to assemble, store, and ship fuel bundles. Only minor modifications would be required of Building 332. Uncontaminated glove boxes would need to be removed, partition walls would need to be removed, and minor modifications to the ventilation system would be required.

  17. Attenuation Drift in the Micro-Computed Tomography System at LLNL

    Energy Technology Data Exchange (ETDEWEB)

    Dooraghi, Alex A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brown, William [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Seetho, Isaac [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kallman, Jeff [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lennox, Kristin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glascoe, Lee [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-12

    The maximum allowable level of drift in the linear attenuation coefficients (μ) for a Lawrence Livermore National Laboratory (LLNL) micro-computed tomography (MCT) system was determined to be 0.1%. After ~100 scans were acquired during the period of November 2014 to March 2015, the drift in μ for a set of six reference materials reached or exceeded 0.1%. Two strategies have been identified to account for or correct the drift. First, normalizing the 160 kV and 100 kV μ data by the μ of water at the corresponding energy, in contrast to conducting normalization at the 160 kV energy only, significantly compensates for measurement drift. Even after the modified normalization, the μ of polytetrafluoroethylene (PTFE) increases linearly with scan number at an average rate of 0.00147% per scan. This is consistent with PTFE radiation damage documented in the literature. The second strategy suggested is the replacement of the PTFE reference with fluorinated ethylene propylene (FEP), which has the same effective atomic number (Ze) and electron density (ρe) as PTFE, but is 10 times more radiation resistant. This is important as effective atomic number and electron density are key parameters in analysis. The presence of a material with properties such as PTFE, when taken together with the remaining references, allows for a broad range of the (Ze, ρe) feature space to be used in analysis. While FEP is documented as 10 times more radiation resistant, testing will be necessary to assess how often, if necessary, FEP will need to be replaced. As radiation damage to references has been observed, it will be necessary to monitor all reference materials for radiation damage to ensure consistent x-ray characteristics of the references.
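
    The two strategies above amount to simple arithmetic on the measured attenuation coefficients. A minimal sketch, with hypothetical μ values (not data from the report), of the water normalization and the per-scan drift estimate:

      import numpy as np

      # hypothetical measured attenuation coefficients (cm^-1) over four scans
      mu_ptfe_160kv  = np.array([0.3400, 0.3401, 0.3403, 0.3404])
      mu_water_160kv = np.array([0.1820, 0.1820, 0.1821, 0.1821])

      # normalizing by water at the same energy compensates shared system drift
      normalized = mu_ptfe_160kv / mu_water_160kv

      # residual linear drift of the PTFE reference, in percent per scan
      scans = np.arange(normalized.size)
      slope, intercept = np.polyfit(scans, normalized, 1)
      print(f"{100 * slope / intercept:.4f}% per scan")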

  18. LLNL MOX fuel lead assemblies data report for the surplus plutonium disposition environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    O`Connor, D.G.; Fisher, S.E.; Holdaway, R. [and others

    1998-08-01

    The purpose of this document is to support the US Department of Energy (DOE) Fissile Materials Disposition Program`s preparation of the draft surplus plutonium disposition environmental impact statement. This is one of several responses to data call requests for background information on activities associated with the operation of the lead assembly (LA) mixed-oxide (MOX) fuel fabrication facility. The DOE Office of Fissile Materials Disposition (DOE-MD) has developed a dual-path strategy for disposition of surplus weapons-grade plutonium. One of the paths is to disposition surplus plutonium through irradiation of MOX fuel in commercial nuclear reactors. MOX fuel consists of plutonium and uranium oxides (PuO{sub 2} and UO{sub 2}), typically containing 95% or more UO{sub 2}. DOE-MD requested that the DOE Site Operations Offices nominate DOE sites that meet established minimum requirements that could produce MOX LAs. LLNL has proposed an LA MOX fuel fabrication approach that would be done entirely inside an S and S Category 1 area. This includes receipt and storage of PuO{sub 2} powder, fabrication of MOX fuel pellets, assembly of fuel rods and bundles, and shipping of the packaged fuel to a commercial reactor site. Support activities will take place within a Category 1 area. Building 332 will be used to receive and store the bulk PuO{sub 2} powder, fabricate MOX fuel pellets, and assemble fuel rods. Building 334 will be used to assemble, store, and ship fuel bundles. Only minor modifications would be required of Building 332. Uncontaminated glove boxes would need to be removed, petition walls would need to be removed, and minor modifications to the ventilation system would be required.

  19. Coding Partitions

    Directory of Open Access Journals (Sweden)

    Fabio Burderi

    2007-05-01

    Motivated by the study of decipherability conditions for codes weaker than Unique Decipherability (UD), we introduce the notion of coding partition. Such a notion generalizes that of UD code and, for codes that are not UD, allows one to recover "unique decipherability" at the level of the classes of the partition. By taking into account the natural order between the partitions, we define the characteristic partition of a code X as the finest coding partition of X. This leads us to introduce the canonical decomposition of a code into at most one unambiguous component and other (if any) totally ambiguous components. In the case the code is finite, we give an algorithm for computing its canonical partition. This, in particular, allows one to decide whether a given partition of a finite code X is a coding partition. This last problem is then approached in the case the code is a rational set. We prove its decidability under the hypothesis that the partition contains a finite number of classes and each class is a rational set. Moreover, we conjecture that the canonical partition satisfies such a hypothesis. Finally, we also consider some relationships between coding partitions and varieties of codes.
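
    For context, the unique decipherability (UD) property that coding partitions generalize can be tested with the classical Sardinas-Patterson algorithm. The sketch below is a generic illustration of that test, not the partition algorithm of the paper:

      def is_uniquely_decipherable(code):
          # Sardinas-Patterson test: build dangling-suffix sets; the code is UD
          # iff no dangling suffix is itself a codeword.
          code = set(code)
          def dangling(a, b):
              return {y[len(x):] for x in a for y in b
                      if y.startswith(x) and y != x}
          pending, seen = dangling(code, code), set()
          while pending:
              if pending & code:
                  return False
              seen |= pending
              pending = (dangling(pending, code) | dangling(code, pending)) - seen
          return True

      print(is_uniquely_decipherable({"0", "01", "11"}))  # True
      print(is_uniquely_decipherable({"0", "01", "10"}))  # False: "010" parses two ways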

  20. LLNL contributions to ANL Report ANL/NE-16/6 'Sharp User Manual'

    International Nuclear Information System (INIS)

    Solberg, J. M.

    2016-01-01

    Diablo is a Multiphysics implicit finite element code with an emphasis on coupled structural/thermal analysis. In the SHARP framework, it is used as the structural solver, and may also be used as the mesh smoother.

  1. Lawrence Livermore National Laboratory Probabilistic Seismic Hazard Codes Validation

    International Nuclear Information System (INIS)

    Savy, J B

    2003-01-01

    Probabilistic Seismic Hazard Analysis (PSHA) is a methodology that estimates the likelihood that various levels of earthquake-caused ground motion will be exceeded at a given location in a given future time-period. LLNL has been developing the methodology and codes in support of the Nuclear Regulatory Commission (NRC) needs for reviews of site licensing of nuclear power plants since 1978. A number of existing computer codes have been validated, yet they can still lead to differing ranges of hazard estimates in some cases. Until now, the seismic hazard community had not agreed on any specific method for evaluation of these codes. The Earthquake Engineering Research Institute (EERI) and the Pacific Engineering Earthquake Research (PEER) Center organized an exercise in testing of existing codes with the aim of developing a series of standard tests that future developers could use to evaluate and calibrate their own codes. Seven code developers participated in the exercise, on a voluntary basis. Lawrence Livermore National Laboratory participated with some support from the NRC. The final product of the study will include a series of criteria for judging the validity of the results provided by a computer code. This EERI/PEER project was first planned to be completed by June of 2003. As the group neared completion of the tests, the managing team decided that new tests were necessary. As a result, the present report documents only the work performed to this point. It demonstrates that the computer codes developed by LLNL perform all calculations correctly and as intended. Differences exist between the results of the codes tested, which are attributed to a series of assumptions on the parameters and models that the developers had to make. The managing team is planning a new series of tests to help in reaching a consensus on these assumptions.
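
    At its core, a PSHA code combines scenario occurrence rates with a ground-motion model to produce an annual rate of exceedance for each shaking level. A toy sketch of that calculation, with all rates, medians, and the lognormal sigma invented for illustration:

      import math

      def p_exceed(a, median, sigma_ln=0.6):
          # probability that lognormal ground motion exceeds level a
          z = (math.log(a) - math.log(median)) / sigma_ln
          return 0.5 * math.erfc(z / math.sqrt(2))

      scenarios = [(0.05, 0.05), (0.01, 0.15), (0.002, 0.40)]  # (rate/yr, median PGA in g)
      for a in (0.1, 0.2, 0.4):
          lam = sum(rate * p_exceed(a, med) for rate, med in scenarios)
          print(f"PGA > {a:.1f} g exceeded at {lam:.2e} per year")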

  2. Design and construction of a 208-L drum containing representative LLNL transuranic and low-level wastes

    International Nuclear Information System (INIS)

    Camp, D.C.; Pickering, J.; Martz, H.E.

    1994-01-01

    At the Lawrence Livermore National Laboratory (LLNL), we are developing the nondestructive analysis (NDA) techniques of active (A) computed tomography (CT) to measure waste matrix attenuation as a function of gamma-ray energy (ACT), and passive (P) CT to locate and identify all gamma-ray emitting isotopes within a waste container. Coupling the ACT and PCT results will quantify each isotope identified, thereby categorizing the amount of radioactivity within waste drums having volumes up to 416 liters (L), i.e., 110-gallon drums.
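
    The role of the ACT measurement is to supply the attenuation correction for the passive counts; conceptually it is a Beer-Lambert factor. A toy example with invented numbers, not values from the report:

      import math

      mu = 0.12          # cm^-1, matrix attenuation at the gamma line (from ACT)
      path_cm = 20.0     # emission path length through the waste matrix
      counts = 1500.0    # counts observed in the passive (PCT) scan
      corrected = counts * math.exp(mu * path_cm)   # undo matrix attenuation
      print(f"attenuation-corrected counts: {corrected:.0f}")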

  3. Evaluation of dynamic range for LLNL streak cameras using high contrast pulsed and pulse podiatry on the Nova laser system

    International Nuclear Information System (INIS)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-01-01

    This paper reports on a standard LLNL streak camera that has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1

  4. [Utilizing the ultraintense JanUSP laser at LLNL]. 99-ERD-049 Final LDRD Report

    International Nuclear Information System (INIS)

    Patel, P K; Price, D F; Mackinnon, A J; Springer, P T

    2002-01-01

    Recent advances in laser and optical technologies have now enabled the current generation of high intensity, ultrashort-pulse lasers to achieve focal intensities of 10²⁰-10²¹ W/cm² in pulse durations of 100-500 fs. These ultraintense laser pulses are capable of producing highly relativistic plasma states with densities, temperatures, and pressures rivaling those found in the interiors of stars and nuclear weapons. Utilizing the ultraintense 100 TW JanUSP laser at LLNL we have explored the possibility of ion shock heating small micron-sized plasmas to extremely high energy densities approaching 1 GJ/g on timescales of a few hundred femtoseconds. The JanUSP laser delivers 10 Joules of energy in a 100 fs pulse in a near diffraction-limited beam, producing intensities on target of up to 10²¹ W/cm². The electric field of the laser at this intensity ionizes and accelerates electrons to relativistic MeV energies. The sudden ejection of electrons from the focal region produces tremendous electrostatic forces which in turn accelerate heavier ions to MeV energies. The predicted ion flux of 1 MJ/cm² is sufficient to achieve thermal equilibrium conditions at high temperature in solid density targets. Our initial experiments were carried out at the available laser contrast of 10⁻⁷ (i.e., the contrast of the amplified spontaneous emission (ASE) and of the pre-pulses produced in the regenerative amplifier). We used the nuclear photoactivation of Au-197 samples to measure the gamma production above 12 MeV, corresponding to the threshold for the Au-197(γ,n) reaction. Since the predominant mechanism for gamma production is through the bremsstrahlung emission of energetic electrons as they pass through the solid target, we were able to infer a conversion yield of several percent of the incident laser energy into electrons with energies >12 MeV. This result is consistent with the interaction of the main pulse with a large pre-formed plasma. The contrast of the laser was improved to

  5. Dielectronic Satellite Spectra of Na-like Mo Ions Benchmarked by LLNL EBIT with Application to HED Plasmas

    Science.gov (United States)

    Stafford, A.; Safronova, A. S.; Kantsyrev, V. L.; Safronova, U. I.; Petkov, E. E.; Shlyaptseva, V. V.; Childers, R.; Shrestha, I.; Beiersdorfer, P.; Hell, H.; Brown, G. V.

    2017-10-01

    Dielectronic recombination (DR) is an important process for astrophysical and laboratory high energy density (HED) plasmas, and the associated satellite lines are frequently used for plasma diagnostics. In particular, K-shell DR satellite lines were studied in detail in low-Z plasmas. L-shell Na-like spectral features from Mo X-pinches considered here represent the blend of DR and inner-shell satellites and motivated the detailed study of DR at the EBIT-1 electron beam ion trap at LLNL. In these experiments the beam energy was swept between 0.6 and 2.4 keV to produce resonances at certain electron beam energies. The advantages of using an electron beam ion trap to better understand atomic processes with highly ionized ions in HED Mo plasma are highlighted. This work was supported by NNSA under DOE Grant DE-NA0002954. Work at LLNL was performed under the auspices of the U.S. DOE under Contract No. DE-AC52-07NA27344.

  6. Estimate of aircraft crash hit frequencies on to facilities at the Lawrence Livermore National Laboratory (LLNL) Site 200

    International Nuclear Information System (INIS)

    Kimura, C.Y.

    1997-01-01

    Department of Energy (DOE) nuclear facilities are required by DOE Order 5480.23, Section 8.b.(3)(k) to consider external events as initiating events to accidents within the scope of their Safety Analysis Reports (SAR). One of the external initiating events which should be considered within the scope of a SAR is an aircraft accident, i.e., an aircraft crashing into the nuclear facility with the related impact and fire leading to penetration of the facility and to the release of radioactive and/or hazardous materials. This report presents the results of an Aircraft Crash Frequency analysis performed for the Materials Management Area (MMA), and the National Ignition Facility (NIF) at the Lawrence Livermore National Laboratory (LLNL) Site 200. The analysis estimates only the aircraft crash hit frequency onto the analyzed facilities. No initial aircraft crash hit frequency screening, structural response calculations of the facilities to the aircraft impact, or consequence analysis of radioactive/hazardous materials released following the aircraft impact are performed. The method used to estimate the aircraft crash hit frequencies onto facilities at the Lawrence Livermore National Laboratory (LLNL) generally follows the procedure given by the DOE Standard 3014-96 on Aircraft Crash Analysis. However, certain adjustments were made to the DOE Standard procedure because of the site-specific flight environment or because of facility-specific characteristics.
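
    The DOE Standard 3014-96 estimate is built around the four-factor formula, F = Σ N·P·f(x, y)·A, summed over aircraft categories. A sketch with hypothetical inputs; the site-specific adjustments the report describes are not reproduced here:

      # (N flights/yr, P crashes per flight, f crash density per mi^2, A effective area mi^2)
      categories = [
          (50_000, 2.0e-7, 1.0e-3, 0.005),   # e.g., general aviation (hypothetical)
          (8_000,  5.0e-8, 5.0e-4, 0.008),   # e.g., commercial carrier (hypothetical)
      ]
      F = sum(N * P * f * A for N, P, f, A in categories)
      print(f"estimated crash hit frequency: {F:.1e} per year")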

  7. TMRBAR power balance code for tandem mirror reactors

    International Nuclear Information System (INIS)

    Blackkfield, D.T.; Campbell, R.; Fenstermacher, M.; Bulmer, R.; Perkins, L.; Peng, Y.K.M.; Reid, R.L.; Wu, K.F.

    1984-01-01

    A revised version of the tandem mirror multi-point code TMRBAR developed at LLNL has been used to examine various reactor designs using MARS-like ''c'' coils. We solve 14 to 16 non-linear equations to obtain the densities, temperatures, plasma potential and magnetic field on axis at the cardinal points. Since ICRH, ECRH, and neutral beams may be used to stabilize the central cell, various combinations of rf and neutral beam powers may satisfy the physics. To select a desired set of physics parameters, we use nonlinear optimization techniques. With these routines, we minimize or maximize a physics variable subject to the physics constraints being satisfied. For example, for a given fusion power we may find the minimum length needed to have an ignited central cell or the maximum fusion Q. Finally, we have coupled this physics model to the LLNL magnetics-MHD code. This code runs the EFFI magnetic field generator and uses TEBASCO to calculate 1-D MHD equilibria and stability.
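
    The approach described, solving a small nonlinear equation set while optimizing one physics variable, maps naturally onto a constrained optimizer. A generic sketch in that spirit, with toy relations that stand in for (and do not represent) the TMRBAR physics:

      from scipy.optimize import minimize

      def power_balance(x):
          n, T, L = x                     # density, temperature, length (normalized)
          return [n * T - 1.0,            # toy pressure-balance constraint
                  n**2 * L - 2.0]         # toy ignition/power-balance constraint

      res = minimize(lambda x: x[2],      # minimize central-cell length
                     x0=[1.0, 1.0, 2.0],
                     bounds=[(0.1, 10.0)] * 3,
                     constraints={"type": "eq", "fun": power_balance})
      print(res.x)                        # length pinned at its feasible minimum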

  8. Speaking Code

    DEFF Research Database (Denmark)

    Cox, Geoff

    Speaking Code begins by invoking the “Hello World” convention used by programmers when learning a new language, helping to establish the interplay of text and code that runs through the book. Interweaving the voice of critical writing from the humanities with the tradition of computing and software...

  9. BBU code development for high-power microwave generators

    International Nuclear Information System (INIS)

    Houck, T.L.; Westenskow, G.A.; Yu, S.S.

    1992-01-01

    We are developing a two-dimensional, time-dependent computer code for the simulation of transverse instabilities in support of relativistic klystron two-beam accelerator research at LLNL. The code addresses transient effects as well as both cumulative and regenerative beam breakup modes. Although designed specifically for the transport of high current (kA) beams through traveling-wave structures, it is applicable to devices consisting of multiple combinations of standing-wave, traveling-wave, and induction accelerator structures. In this paper we compare code simulations to analytical solutions for the case where there is no rf coupling between cavities, to theoretical scaling parameters for coupled cavity structures, and to experimental data involving beam breakup in the two traveling-wave output structures of our microwave generator. (Author) 4 figs., tab., 5 refs

  10. First experimental results from IBM/TENN/TULANE/LLNL/LBL undulator beamline at the advanced light source

    International Nuclear Information System (INIS)

    Jia, J.J.; Callcott, T.A.; Yurkas, J.; Ellis, A.W.; Himpsel, F.J.; Samant, M.G.; Stoehr, J.; Ederer, D.L.; Carlisle, J.A.; Hudson, E.A.; Terminello, L.J.; Shuh, D.K.; Perera, R.C.C.

    1995-01-01

    The IBM/TENN/TULANE/LLNL/LBL Beamline 8.0 at the Advanced Light Source, combining a 5.0 cm, 89-period undulator with a high-throughput, high-resolution spherical grating monochromator, provides a powerful excitation source over a spectral range of 70–1200 eV for surface physics and material science research. The beamline progress and the first experimental results obtained with a fluorescence end station on graphite and titanium oxides are presented here. Dispersive features are observed in the K emission spectra of graphite excited near threshold, and a clear relationship is found between them and the graphite band structure. The monochromator is operated at a resolving power of roughly 2000, while the spectrometer has a resolving power of 400 for these fluorescence experiments.

  11. Production of High Harmonic X-ray Radiation from Non-linear Thomson Scattering at LLNL PLEIADES

    International Nuclear Information System (INIS)

    Lim, J; Doyuran, A; Frigola, P; Travish, G; Rosenzweig, J; Anderson, S; Betts, S; Crane, J; Gibson, D; Hartemann, F; Tremaine, A

    2005-01-01

    We describe an experiment for production of high harmonic x-ray radiation from Thomson backscattering of an ultra-short high power density laser by a relativistic electron beam at the PLEIADES facility at LLNL. In this scenario, electrons execute a ''figure-8'' motion under the influence of the high-intensity laser field, where the constant characterizing the field strength is expected to exceed unity: a_L = eE_L/(m_e c ω_L) ≥ 1. With large a_L, this motion produces high harmonic x-ray radiation and significant broadening of the spectral peaks. This paper is intended to give a layout of the PLEIADES experiment, along with progress towards experimental goals.
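
    A quick numerical check of the field-strength parameter can be made with the standard engineering relation a_L ≈ 0.85·λ[μm]·√(I/10¹⁸ W/cm²); the 800 nm wavelength and 10²⁰ W/cm² intensity below are assumptions for illustration, not values from the record:

      import math

      intensity_w_cm2 = 1e20               # assumed focal intensity
      wavelength_um = 0.8                  # assumed drive-laser wavelength
      a_L = 0.85 * wavelength_um * math.sqrt(intensity_w_cm2 / 1e18)
      print(f"a_L ~ {a_L:.1f}")            # ~6.8, well above unity as the text expects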

  12. Progress Toward Measuring CO2 Isotopologue Fluxes in situ with the LLNL Miniature, Laser-based CO2 Sensor

    Science.gov (United States)

    Osuna, J. L.; Bora, M.; Bond, T.

    2015-12-01

    One method to constrain photosynthesis and respiration independently at the ecosystem scale is to measure the fluxes of CO2 isotopologues. Instrumentation is currently available to make these measurements, but such instruments are generally costly, large, bench-top units. Here, we present progress toward developing a laser-based sensor that can be deployed directly to a canopy to passively measure CO2 isotopologue fluxes. In this study, we perform initial proof-of-concept and sensor characterization tests in the laboratory and in the field to demonstrate performance of the Lawrence Livermore National Laboratory (LLNL) tunable diode laser flux sensor. The results shown herein demonstrate measurement of bulk CO2 as a first step toward achieving flux measurements of CO2 isotopologues. The sensor uses a Vertical Cavity Surface Emitting Laser (VCSEL) in the 2012 nm range. The laser is mounted in a multi-pass White cell. In order to amplify the absorption signal of CO2 in this range we employ wavelength modulation spectroscopy, introducing an alternating current (AC) bias component, where f is the frequency of modulation, on the laser drive current in addition to the direct current (DC) emission scanning component. We observed a strong linear relationship (r² = 0.998 and r² = 0.978 at all and low CO2 concentrations, respectively) between the 2f signal and the CO2 concentration in the cell across the range of CO2 concentrations relevant for flux measurements. We use this calibration to interpret the CO2 concentration of a gas flowing through the White cell in the laboratory and deployed over a grassy field. We will discuss sensor performance in the lab and in situ as well as address steps toward achieving canopy-deployed, passive measurements of CO2 isotopologue fluxes. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-675788
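
    The 2f signal referred to above is the second-harmonic component recovered by demodulating the detector signal at twice the modulation frequency. A self-contained toy demodulation, with all signal parameters invented for illustration:

      import numpy as np

      fs, f_mod = 200_000, 10_000                  # sample rate and modulation freq (Hz)
      t = np.arange(0, 0.01, 1 / fs)
      # toy detector signal: weak absorption with curvature generates a 2f component
      absorption = 1e-3 * (1 + np.cos(2 * np.pi * f_mod * t)) ** 2
      signal = 1.0 - absorption + 1e-5 * np.random.randn(t.size)

      ref = np.exp(-2j * np.pi * (2 * f_mod) * t)  # lock-in reference at 2f
      amp_2f = 2 * np.abs(np.mean(signal * ref))
      print(f"2f amplitude ~ {amp_2f:.1e}")        # ~5e-4 for these toy numbers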

  13. Coding Labour

    Directory of Open Access Journals (Sweden)

    Anthony McCosker

    2014-03-01

    As well as introducing the Coding Labour section, the authors explore the diffusion of code across the material contexts of everyday life, through the objects and tools of mediation, the systems and practices of cultural production and organisational management, and in the material conditions of labour. Taking code beyond computation and software, their specific focus is on the increasingly familiar connections between code and labour with a focus on the codification and modulation of affect through technologies and practices of management within the contemporary work organisation. In the grey literature of spreadsheets, minutes, workload models, email and the like they identify a violence of forms through which workplace affect, in its constant flux of crisis and ‘prodromal’ modes, is regulated and governed.

  14. LLNL's Big Science Capabilities Help Spur Over $796 Billion in U.S. Economic Activity: Sequencing the Human Genome

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, Jeffrey S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-28

    LLNL’s successful history of taking on big science projects spans beyond national security and has helped create billions of dollars per year in new economic activity. One example is LLNL’s role in helping sequence the human genome. Over $796 billion in new economic activity in over half a dozen fields has been documented since LLNL successfully completed this Grand Challenge.

  15. Speech coding

    Energy Technology Data Exchange (ETDEWEB)

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence the end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. Hence, from a transmission point of view, digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end where speech is reconstructed or synthesized using the received set of codes. A more generic term that is applicable to these techniques and that is often used interchangeably with speech coding is the term voice coding. This term is more generic in the sense that the
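
    A minimal concrete example of waveform coding is G.711-style μ-law companding, which spends quantizer levels logarithmically to match speech statistics. This generic sketch is illustrative and is not drawn from the record above:

      import numpy as np

      def mulaw_encode(x, mu=255):
          # compress amplitude logarithmically onto [-1, 1]
          return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

      def mulaw_decode(y, mu=255):
          # inverse expansion back to linear amplitude
          return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

      x = np.linspace(-1.0, 1.0, 9)                       # toy speech samples
      codes = np.round((mulaw_encode(x) + 1) / 2 * 255)   # 8-bit code words
      x_hat = mulaw_decode(codes / 255 * 2 - 1)
      print(np.max(np.abs(x - x_hat)))                    # small reconstruction error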

  16. Optimal codes as Tanner codes with cyclic component codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pinero, Fernando; Zeng, Peng

    2014-01-01

    In this article we study a class of graph codes with cyclic code component codes as affine variety codes. Within this class of Tanner codes we find some optimal binary codes. We use a particular subgraph of the point-line incidence plane of A(2,q) as the Tanner graph, and we are able to describe ...
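
    A Tanner code places bits on the variable nodes of a bipartite graph and requires every check node to see a valid component codeword. As a hedged toy (the classic (7,4) Hamming code with single-parity checks, not the affine variety component codes of the article), membership can be tested directly from the parity-check matrix:

```python
import itertools
import numpy as np

# Parity-check matrix of the (7,4) Hamming code. Each row acts like a
# check node of a Tanner graph; each column is a variable node (bit).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def is_codeword(c):
    """A word belongs to the code iff every check node is satisfied mod 2."""
    return not np.any(H.dot(c) % 2)

codewords = [c for c in itertools.product([0, 1], repeat=7)
             if is_codeword(np.array(c))]
print(len(codewords), "codewords")  # 2**4 = 16
```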

  17. System Modeling of kJ-class Petawatt Lasers at LLNL

    International Nuclear Information System (INIS)

    Shverdin, M.Y.; Rushford, M.; Henesian, M.A.; Boley, C.; Haefner, C.; Heebner, J.E.; Crane, J.K.; Siders, C.W.; Barty, C.P.

    2010-01-01

    in the system. We employ 3D Fourier-based propagation codes: MIRO, Virtual Beamline (VBL), and PROP for time-domain pulse analysis. These codes simulate nonlinear effects, calculate near- and far-field beam profiles, and account for amplifier gain. Verification of correct system set-up is a major difficulty in using these codes. VBL and PROP predictions have been extensively benchmarked to NIF experiments, and the verified descriptions of specific NIF beamlines are used for ARC. MIRO has the added capability of treating bandwidth-specific effects of CPA. A sample MIRO model of the NIF beamline is shown in Fig. 3. MIRO models are benchmarked to VBL and PROP in the narrow-bandwidth mode. Developing a variety of simulation tools allows us to cross-check predictions of different models and gain confidence in their fidelity. Preliminary experiments, currently in progress, are allowing us to validate and refine our models, and help guide future experimental campaigns.
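
    The core operation inside Fourier-based propagation codes of this kind is applying a free-space transfer function in the spatial-frequency domain. Below is a minimal sketch of one paraxial Fresnel propagation step; the grid size, sampling, and Gaussian beam are arbitrary illustrations, not parameters of MIRO, VBL, or PROP.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, dx, z):
    """Propagate a sampled complex field a distance z using the paraxial
    Fresnel transfer function (the constant carrier phase exp(ikz) is
    omitted, as it does not affect the intensity)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Gaussian beam on a 512 x 512 grid: 1.053 um (Nd:glass), 0.1 mm sampling.
n, dx, wl = 512, 1e-4, 1.053e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X**2 + Y**2) / (5e-3)**2)
u1 = fresnel_propagate(u0, wl, dx, z=10.0)
print("peak intensity after 10 m:", np.abs(u1).max()**2)
```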

  18. Aztheca Code

    International Nuclear Information System (INIS)

    Quezada G, S.; Espinosa P, G.; Centeno P, J.; Sanchez M, H.

    2017-09-01

    This paper presents the Aztheca code, which is formed by mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models, and the control system. The Aztheca code is validated against plant data, as well as against predictions from the manufacturer, when the reactor operates in a stationary state. To demonstrate that the model is also applicable during a transient, an event that occurred in a nuclear power plant with a BWR reactor is selected. The plant data are compared with the results obtained with RELAP-5 and with the Aztheca model. The results show that both RELAP-5 and the Aztheca code are able to adequately predict the behavior of the reactor. (Author)
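
    The neutron-kinetics block in a plant simulator of this kind is often built on the point-kinetics equations. A minimal one-delayed-group sketch follows; the kinetics parameters and the step reactivity are illustrative values only, not the Aztheca model itself.

```python
# One-delayed-group point kinetics, explicit Euler. All parameter values
# below are illustrative, not taken from the Aztheca models.
beta, lam, Lambda = 0.0065, 0.08, 1e-4  # delayed fraction, 1/s, gen. time (s)
rho = 0.001                             # step reactivity insertion

def point_kinetics(n0=1.0, t_end=10.0, dt=1e-4):
    """Return relative power after a step reactivity insertion."""
    n = n0
    c = beta * n0 / (lam * Lambda)      # equilibrium precursor level
    for _ in range(int(t_end / dt)):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n, c = n + dt * dn, c + dt * dc
    return n

print("relative power after 10 s:", point_kinetics())
```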

  19. Vocable Code

    DEFF Research Database (Denmark)

    Soon, Winnie; Cox, Geoff

    2018-01-01

    a computational and poetic composition for two screens: on one of these, texts and voices are repeated and disrupted by mathematical chaos, together exploring the performativity of code and language; on the other, is a mix of a computer programming syntax and human language. In this sense queer code can...... be understood as both an object and subject of study that intervenes in the world’s ‘becoming' and how material bodies are produced via human and nonhuman practices. Through mixing the natural and computer language, this article presents a script in six parts from a performative lecture for two persons...

  20. NSURE code

    International Nuclear Information System (INIS)

    Rattan, D.S.

    1993-11-01

    NSURE stands for Near-Surface Repository code. NSURE is a performance assessment code developed for the safety assessment of near-surface disposal facilities for low-level radioactive waste (LLRW). Part one of this report documents the NSURE model, the governing equations and formulation of the mathematical models, and their implementation under the SYVAC3 executive. The NSURE model simulates the release of nuclides from an engineered vault and their subsequent transport via the groundwater and surface-water pathways to the biosphere, and predicts the resulting dose rate to a critical individual. Part two of this report consists of a user's manual, describing simulation procedures, input data preparation, output, and example test cases
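
    The vault-to-groundwater-to-biosphere chain that NSURE models can be caricatured as a linear compartment model with first-order transfer and radioactive decay. The sketch below is only a schematic analogue under invented rate constants, not the SYVAC3/NSURE formulation.

```python
import numpy as np

# First-order rate constants (1/yr): leaching from the vault, groundwater
# transit, and radioactive decay. All values are invented for illustration.
k_leach, k_gw = 1e-3, 1e-2
decay = np.log(2) / 30.0                    # e.g. a 30-year half-life

def transport(a0=1.0, t_end=300.0, dt=0.01):
    """Activity remaining in the vault and groundwater, and activity that
    has reached the biosphere, after t_end years (explicit Euler)."""
    vault, gw, bio = a0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        dv = -(k_leach + decay) * vault
        dg = k_leach * vault - (k_gw + decay) * gw
        db = k_gw * gw - decay * bio
        vault, gw, bio = vault + dt * dv, gw + dt * dg, bio + dt * db
    return vault, gw, bio

print("vault, groundwater, biosphere activity after 300 yr:", transport())
```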

  1. The Aster code; Code Aster

    Energy Technology Data Exchange (ETDEWEB)

    Delbecq, J.M

    1999-07-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, large deformations, specific loads, unloading and loss-of-load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures); specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results, etc.); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  2. Coding Class

    DEFF Research Database (Denmark)

    Ejsing-Duun, Stine; Hansbøl, Mikala

    This report contains the evaluation and documentation of the Coding Class project1. The Coding Class project was launched in the 2016/2017 school year by IT-Branchen in collaboration with a number of member companies, the City of Copenhagen, Vejle Municipality, the Danish Agency for IT and Learning (STIL), and the volunteer association Coding Pirates2. The report was written by Mikala Hansbøl, docent in digital learning resources and research coordinator for the research and development environment Digitalisering i Skolen (DiS), Institut for Skole og Læring, Professionshøjskolen Metropol; and Stine Ejsing-Duun, associate professor in learning technology, interaction design, design thinking and design pedagogy, Forskningslab: It og Læringsdesign (ILD-LAB), Institut for kommunikation og psykologi, Aalborg University in Copenhagen. We followed, evaluated, and documented the Coding Class project in the period from November 2016 to May 2017...

  3. Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  4. ANIMAL code

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1979-01-01

    This report describes ANIMAL, a two-dimensional Eulerian magnetohydrodynamic computer code, and presents its physical model. Temporal and spatial finite-difference equations are formulated in a manner that facilitates implementation of the algorithm, and the functions of the algorithm's FORTRAN subroutines and variables are outlined

  5. Network Coding

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 15; Issue 7. Network Coding. K V Rashmi Nihar B Shah P Vijay Kumar. General Article Volume 15 Issue 7 July 2010 pp 604-621. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/015/07/0604-0621 ...

  6. MCNP code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States that is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross-section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids
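
    The sampling idea at the heart of Monte Carlo transport codes can be shown with a deliberately tiny example: free-flight distances drawn from an exponential distribution through a purely absorbing slab, compared against the analytic answer. This is a generic illustration, far simpler than MCNP's coupled neutron-photon physics.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma_t, thickness, n_hist = 0.5, 4.0, 100_000   # 1/cm, cm, histories

# Sample free-flight distances s = -ln(xi) / sigma_t and count particles
# that traverse the slab without colliding (pure absorber, normal incidence).
flights = -np.log(rng.random(n_hist)) / sigma_t
transmitted = np.count_nonzero(flights > thickness)

print("Monte Carlo transmission:", transmitted / n_hist)
print("analytic exp(-sigma*t)  :", np.exp(-sigma_t * thickness))
```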

  7. Expander Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 10; Issue 1. Expander Codes - The Sipser–Spielman Construction. Priti Shankar. General Article Volume 10 ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science Bangalore 560 012, India.

  8. The global sustainability project and the LLNL China energy systems model

    International Nuclear Information System (INIS)

    Harris, N; Lamont, A; Stewart, J; Woodrow, C.

    1999-01-01

    The sustainability of our modern way of life is becoming a major concern of both our domestic and international policy. The Rio conference on the environment and the recent Kyoto conference on global climate change are two indications of the importance of solving global environmental problems. Energy is a key component in global sustainability, since obtaining and using it has major environmental effects. If our energy systems are to be sustainable in the long run, they must be structured using technologies that have a minimal impact on our environment and resources. At the same time, they must meet practical economic requirements: they must be reasonably economical, they must meet the needs of society, and they must be tailored to the resources that are available in a particular region or country. Because economic considerations and government policies both determine the development of the energy system, economic and systems modeling can help us better understand ways that new technologies and policies can be used to obtain a more sustainable system. The Global Sustainability Project has developed both economic modeling software and models to help us better understand these issues and has applied them to the analysis of energy and environmental problems in China. In the past year, the models and data developed by the project have been used to support other projects investigating the interaction of technologies and the environment. The project this year has focused on software development to improve our modeling tools and on the refinement and application of the China Energy System model. The major thrust of the software development has been improvements in the METANet economic software system. We have modified its solution algorithm to improve the speed and accuracy of the solutions and to make it compatible with the SuperCode modeling system. It is planned to eventually merge the two systems to take advantage of the faster, more flexible solution algorithms of SuperCode

  9. Panda code

    International Nuclear Information System (INIS)

    Altomare, S.; Minton, G.

    1975-02-01

    PANDA is a new two-group one-dimensional (slab/cylinder) neutron diffusion code designed to replace and extend the FAB series. PANDA allows for the nonlinear effects of xenon, enthalpy and Doppler. Fuel depletion is allowed. PANDA has a completely general search facility which will seek criticality, maximize reactivity, or minimize peaking. Any single parameter may be varied in a search. PANDA is written in FORTRAN IV, and as such is nearly machine independent. However, PANDA has been written with the present limitations of the Westinghouse CDC-6600 system in mind. Most computation loops are very short, and the code is less than half the useful 6600 memory size so that two jobs can reside in the core at once. (auth)

  10. Lawrence Livermore National Laboratory (LLNL) Oxide Material Representation in the Material Identification and Surveillance (MIS) Program, Revision 2

    Energy Technology Data Exchange (ETDEWEB)

    Riley, D C; Dodson, K

    2004-06-30

    The Materials Identification and Surveillance (MIS) program was established within the 94-1 R&D Program to confirm the suitability of plutonium-bearing materials for stabilization, packaging, and long-term storage under DOE-STD-3013-2000. Oxide materials from different sites were chemically and physically characterized. The adequacy of the stabilization process parameters of temperature and duration at temperature (950 C and 2 hours) for eliminating chemical reactivity and reducing the moisture content to less than 0.5 weight percent was validated. Studies also include surveillance monitoring to determine the behavior of the oxides and packaging materials under storage conditions. Materials selected for this program were assumed to be representative of the overall inventory for DOE sites. The Quality Assurance section of DOE-STD-3013-2000 required that each site be responsible for assuring that oxides packaged according to this standard are represented by items in the MIS characterization program. The purpose of this document is to define the path for determining whether an individual item is "represented" in the MIS Program and to show that oxides being packaged at Lawrence Livermore National Laboratory (LLNL) are considered represented in the MIS program. The methodology outlined in the MIS Representation Document (LA-14016-MS) for demonstrating representation requires concurrence of the MIS Working Group (MIS-WG). The signature page of this document provides for the MIS-WG concurrence.

  11. CANAL code

    International Nuclear Information System (INIS)

    Gara, P.; Martin, E.

    1983-01-01

    The CANAL code presented here optimizes a realistic iron free extraction channel which has to provide a given transversal magnetic field law in the median plane: the current bars may be curved, have finite lengths and cooling ducts and move in a restricted transversal area; terminal connectors may be added, images of the bars in pole pieces may be included. A special option optimizes a real set of circular coils [fr

  12. Seismic Analysis Code (SAC): Development, porting, and maintenance within a legacy code base

    Science.gov (United States)

    Savage, B.; Snoke, J. A.

    2017-12-01

    The Seismic Analysis Code (SAC) is the result of the toil of many developers over an almost 40-year history. Initially a Fortran-based code, it has undergone major transitions in underlying bit size from 16 to 32 in the 1980s and from 32 to 64 in 2009, as well as a change in language from Fortran to C in the late 1990s. Maintenance of SAC, the program and its associated libraries, has tracked changes in hardware and operating systems, including the advent of Linux in the early 1990s, the emergence and demise of Sun/Solaris, variants of OS X (PowerPC and x86), and Windows (Cygwin). Traces of these systems are still visible in the source code and associated comments. A major concern while improving and maintaining a routinely used, legacy code is a fear of introducing bugs or inadvertently removing favorite features of long-time users. Prior to 2004, SAC was maintained and distributed by LLNL (Lawrence Livermore National Lab). In that year, the license was transferred from LLNL to IRIS (Incorporated Research Institutions for Seismology), but the license is not open source. However, there have been thousands of downloads a year of the package, either as source code or as binaries for specific systems. Starting in 2004, the co-authors have maintained the SAC package for IRIS. In our updates, we fixed bugs, incorporated newly introduced seismic analysis procedures (such as EVALRESP), added new, accessible features (plotting and parsing), and improved the documentation (now in HTML and PDF formats). Moreover, we have added modern software engineering practices to the development of SAC, including use of recent source control systems, high-level tests, and scripted, virtualized environments for rapid testing and building. Finally, a "sac-help" listserv (administered by IRIS) was set up for SAC-related issues and is the primary avenue for users seeking advice and reporting bugs. Attempts are always made to respond to issues and bugs in a timely fashion. For the past thirty-plus years

  13. From concatenated codes to graph codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom

    2004-01-01

    We consider codes based on simple bipartite expander graphs. These codes may be seen as the first step leading from product type concatenated codes to more complex graph codes. We emphasize constructions of specific codes of realistic lengths, and study the details of decoding by message passing...

  14. Normalized Tritium Quantification Approach (NoTQA) a Method for Quantifying Tritium Contaminated Trash and Debris at LLNL

    International Nuclear Information System (INIS)

    Dominick, J.L.; Rasmussen, C.L.

    2008-01-01

    Several facilities and many projects at LLNL work exclusively with tritium. These operations have the potential to generate large quantities of Low-Level Radioactive Waste (LLW) with the same or similar radiological characteristics. A standardized documented approach to characterizing these waste materials for disposal as radioactive waste will enhance the ability of the Laboratory to manage them in an efficient and timely manner while ensuring compliance with all applicable regulatory requirements. This standardized characterization approach couples documented process knowledge with analytical verification and is very conservative, overestimating the radioactivity concentration of the waste. The characterization approach documented here is the Normalized Tritium Quantification Approach (NoTQA). This document will serve as a Technical Basis Document which can be referenced in radioactive waste characterization documentation packages such as the Information Gathering Document. In general, radiological characterization of waste consists of both developing an isotopic breakdown (distribution) of radionuclides contaminating the waste and using an appropriate method to quantify the radionuclides in the waste. Characterization approaches require varying degrees of rigor depending upon the radionuclides contaminating the waste and the concentration of the radionuclide contaminants as related to regulatory thresholds. Generally, as activity levels in the waste approach a regulatory or disposal facility threshold the degree of required precision and accuracy, and therefore the level of rigor, increases. In the case of tritium, thresholds of concern for control, contamination, transportation, and waste acceptance are relatively high. Due to the benign nature of tritium and the resulting higher regulatory thresholds, this less rigorous yet conservative characterization approach is appropriate. The scope of this document is to define an appropriate and acceptable

  15. The role of the LLNL Atmospheric Release Advisory Capability in a FRMAC response to a nuclear power plant incident

    International Nuclear Information System (INIS)

    Baskett, R.L.; Sullivan, T.J.; Ellis, J.S.; Foster, C.S.

    1994-01-01

    The Federal Radiological Emergency Response Plan (FRERP) can provide several emergency response resources in response to a nuclear power plant (NPP) accident if requested by a state or local agency. The primary FRERP technical resources come from the US Department of Energy's (DOE) Federal Radiological Monitoring and Assessment Center (FRMAC). Most of the FRMAC assets are located at the DOE Remote Sensing Laboratory (RSL) at Nellis Air Force Base, Las Vegas, Nevada. In addition, the primary atmospheric dispersion modeling and dose assessment asset, the Atmospheric Release Advisory Capability (ARAC), is located at Lawrence Livermore National Laboratory (LLNL) in Livermore, California. In the early stages of a response, ARAC relies on its automatic worldwide meteorological data acquisition via the Air Force Global Weather Center (AFGWC). The regional airport data are supplemented with data from on-site towers and sodars and the National Oceanographic and Atmospheric Administration's (NOAA) field-deployable real-time rawinsonde system. ARAC is equipped with a three-dimensional regional-scale diagnostic dispersion model to simulate the complex mixed fission-product release from a reactor accident. The program has been operational for 18 years and is presently developing its third-generation system. The current modernization includes faster central computers, a new site workstation system, improvements in its diagnostic dispersion models, addition of a new hybrid-particle source term, and implementation of a mesoscale prognostic model. As these new capabilities evolve, they will be integrated into the FRMAC's field-deployable assets

  16. Experiment designs offered for discussion preliminary to an LLNL field scale validation experiment in the Yucca Mountain Exploratory Shaft Facility

    International Nuclear Information System (INIS)

    Lowry, B.; Keller, C.

    1988-01-01

    It has been proposed ("Progress Report on Experiment Rationale for Validation of LLNL Models of Ground Water Behavior Near Nuclear Waste Canisters," Keller and Lowry, Dec. 7, 1988) that a heat-generating spent fuel canister emplaced in unsaturated tuff, in a ventilated hole, will cause a net flux of water into the borehole during the heating cycle of the spent fuel. Accompanying this mass flux will be the formation of mineral deposits near the borehole wall as the water evaporates and leaves behind its dissolved solids. The net effect of this process upon the containment of radioactive wastes is a function of (1) where and how much solid material is deposited in the tuff matrix and cracks, and (2) the resultant effect on the medium flow characteristics. Experimental concepts described in this report are designed to quantify the magnitude and relative location of solid mineral deposit formation due to a heated and vented borehole environment. The simplest tests address matrix effects only; after the process is understood in the homogeneous matrix, fracture effects would be investigated. Three experiment concepts have been proposed. Each has unique advantages and allows investigation of specific aspects of the precipitate formation process. All could be done in reasonable time (less than a year) and none of them is extremely expensive (the most expensive is probably the structurally loaded block test). The calculational ability exists to analyze the "real" situation and each of the experiment designs, and to produce a credible series of tests. None of the designs requires the acquisition of material property data beyond current capabilities. The tests could be extended, if our understanding is consistent with the data produced, to analyze fracture effects. 7 figs

  17. Los Alamos and Lawrence Livermore National Laboratories Code-to-Code Comparison of Inter Lab Test Problem 1 for Asteroid Impact Hazard Mitigation

    Energy Technology Data Exchange (ETDEWEB)

    Weaver, Robert P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Paul [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Howley, Kirsten [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferguson, Jim Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gisler, Galen Ross [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Plesko, Catherine Suzanne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Managan, Rob [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Owen, Mike [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Wasem, Joseph [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bruck-Syal, Megan [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-15

    The NNSA Laboratories have entered into an interagency collaboration with the National Aeronautics and Space Administration (NASA) to explore strategies for prevention of Earth impacts by asteroids. Assessment of such strategies relies upon use of sophisticated multi-physics simulation codes. This document describes the task of verifying and cross-validating, between Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL), modeling capabilities and methods to be employed as part of the NNSA-NASA collaboration. The approach has been to develop a set of test problems and then to compare and contrast results obtained by use of a suite of codes, including MCNP, RAGE, Mercury, Ares, and Spheral. This document provides a short description of the codes, an overview of the idealized test problems, and discussion of the results for deflection by kinetic impactors and stand-off nuclear explosions.

  18. Automatic coding method of the ACR Code

    International Nuclear Information System (INIS)

    Park, Kwi Ae; Ihm, Jong Sool; Ahn, Woo Hyun; Baik, Seung Kook; Choi, Han Yong; Kim, Bong Gi

    1993-01-01

    The authors developed a computer program for automatic coding of the ACR (American College of Radiology) code. Automatic coding of the ACR code is essential for computerization of the data in the department of radiology. This program was written in the Foxbase language and has been used for automatic coding of diagnoses in the Department of Radiology, Wallace Memorial Baptist, since May 1992. The ACR dictionary files consisted of 11 files, one for the organ code and the others for the pathology code. The organ code was obtained by typing the organ name or the code number itself among the upper- and lower-level codes of the selected one, which were simultaneously displayed on the screen. According to the first number of the selected organ code, the corresponding pathology code file was chosen automatically. In a similar fashion to the organ code selection, the proper pathology code was obtained. An example of an obtained ACR code is '131.3661'. This procedure was reproducible regardless of the number of fields of data. Because this program was written in 'User's Defined Function' form, decoding of the stored ACR code was achieved by this same program, and incorporation of this program into another data-processing program was possible. This program had the merits of simple operation, accurate and detailed coding, and easy adjustment for other programs. Therefore, this program can be used for automation of routine work in the department of radiology
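
    The two-stage lookup the authors describe (organ dictionary first, then a pathology dictionary selected by the first digit of the organ code) can be sketched as nested table lookups. The dictionary entries and name-to-code pairings below are invented placeholders, not actual ACR dictionary contents.

```python
# Hypothetical fragments of the organ and pathology dictionaries; the real
# ACR dictionaries are far larger, and these entries are invented.
ORGANS = {"chest": "131", "skull": "112"}
# One pathology file per leading digit of the organ code, as in the paper.
PATHOLOGY = {"1": {"pneumonia": "3661", "fracture": "4410"}}

def acr_code(organ_name, pathology_name):
    """Two-stage lookup producing an 'organ.pathology' ACR-style code."""
    organ = ORGANS[organ_name.lower()]
    path_file = PATHOLOGY[organ[0]]        # file chosen by first digit
    return f"{organ}.{path_file[pathology_name.lower()]}"

print(acr_code("Chest", "Pneumonia"))      # -> 131.3661
```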

  19. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  20. Magnesium, Iron and Aluminum in LLNL Air Particulate and Rain Samples with Reference to Magnesium in Industrial Storm Water

    Energy Technology Data Exchange (ETDEWEB)

    Esser, Bradley K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bibby, Richard K. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fish, Craig [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-08-25

    Storm water runoff from the Lawrence Livermore National Laboratory’s (LLNL’s) main site and Site 300 periodically exceeds the Discharge Permit Numeric Action Level (NAL) for magnesium (Mg) under the Industrial General Permit (IGP) Order No. 2014-0057-DWQ. Of particular interest is the source of magnesium in storm water runoff from the site. This special study compares new metals data from air particulate and precipitation samples from the LLNL main site and Site 300 to previous metals data for storm water from the main site and Site 300 and alluvial sediment from the main site, to investigate the potential source of elevated Mg in storm water runoff. Data for three metals (Mg, iron (Fe), and aluminum (Al)) were available from all media; data for additional metals, such as europium (Eu), were available from rain, air particulates, and alluvial sediment. To attribute source, this study compared metals concentration data (for Mg, Al, and Fe) in storm water and rain; metal-metal correlations (Mg with Fe, Mg with Al, Al with Fe, Mg with Eu, Eu with Fe, and Eu with Al) in storm water, rain, air particulates, and sediments; and metal-metal ratios (Mg/Fe, Mg/Al, Al/Fe, Mg/Eu, Eu/Fe, and Eu/Al) in storm water, rain, air particulates and sediments. The results presented in this study are consistent with a simple conceptual model where the source of Mg in storm water runoff is air particulate matter that has dry-deposited on impervious surfaces and is subsequently entrained in runoff during precipitation events. Such a conceptual model is consistent with 1) higher concentrations of metals in storm water runoff than in precipitation, 2) the strong correlation of Mg with aluminum (Al) and iron (Fe) in both storm water and air particulates, and 3) the similarity in metal mass ratios between storm water and air particulates, in contrast to the dissimilarity of metal mass ratios between storm water and precipitation or alluvial sediment. The strong correlation of Mg with Fe and Al

  1. Dynamic Shannon Coding

    OpenAIRE

    Gagie, Travis

    2005-01-01

    We present a new algorithm for dynamic prefix-free coding, based on Shannon coding. We give a simple analysis and prove a better upper bound on the length of the encoding produced than the corresponding bound for dynamic Huffman coding. We show how our algorithm can be modified for efficient length-restricted coding, alphabetic coding and coding with unequal letter costs.
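
    For readers unfamiliar with the static scheme that dynamic Shannon coding builds on: a Shannon code assigns each symbol a codeword of length ⌈-log2 p⌉ taken from the binary expansion of the cumulative probability, with symbols sorted by decreasing probability. A minimal static sketch follows (not the dynamic algorithm of the paper):

```python
import math

def shannon_code(probs):
    """Static Shannon code: codeword i = first ceil(-log2 p_i) bits of the
    binary expansion of the cumulative probability (probs sorted descending).
    The construction is automatically prefix-free."""
    symbols = sorted(probs, key=probs.get, reverse=True)
    code, cum = {}, 0.0
    for s in symbols:
        length = math.ceil(-math.log2(probs[s]))
        bits, frac = "", cum
        for _ in range(length):
            frac *= 2
            bits += "1" if frac >= 1 else "0"
            frac -= int(frac)
        code[s] = bits
        cum += probs[s]
    return code

print(shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
# -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```

    The expected codeword length of this construction is within one bit of the entropy, which is the baseline the paper's dynamic variant improves on in the adaptive setting.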

  2. Fundamentals of convolutional coding

    CERN Document Server

    Johannesson, Rolf

    2015-01-01

    Fundamentals of Convolutional Coding, Second Edition, regarded as a bible of convolutional coding, brings you a clear and comprehensive discussion of the basic principles of this field * Two new chapters on low-density parity-check (LDPC) convolutional codes and iterative coding * Viterbi, BCJR, BEAST, list, and sequential decoding of convolutional codes * Distance properties of convolutional codes * Includes a downloadable solutions manual
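
    A convolutional encoder is just a shift register tapped by generator polynomials. The sketch below implements the textbook rate-1/2, constraint-length-3 encoder with generators (7, 5) in octal; it is a generic example, not code from the book.

```python
def conv_encode(bits, g=(0b111, 0b101)):
    """Rate-1/2, constraint-length-3 convolutional encoder with the classic
    generators (7, 5) in octal. Returns the interleaved output stream."""
    state, out = 0, []
    for b in bits + [0, 0]:                 # two tail bits flush the encoder
        state = ((state << 1) | b) & 0b111  # 3-bit shift register
        for gen in g:
            out.append(bin(state & gen).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))  # two output bits per input bit
```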

  3. Codes Over Hyperfields

    Directory of Open Access Journals (Sweden)

    Atamewoue Surdive

    2017-12-01

    Full Text Available In this paper, we define linear codes and cyclic codes over a finite Krasner hyperfield and we characterize these codes by their generator matrices and parity check matrices. We also demonstrate that codes over finite Krasner hyperfields are more interesting for code theory than codes over classical finite fields.

  4. Evaluation of dynamic range for LLNL streak cameras using high contrast pulses and "pulse podiatry" on the Nova laser system

    Energy Technology Data Exchange (ETDEWEB)

    Richards, J.B.; Weiland, T.L.; Prior, J.A.

    1990-07-01

    A standard LLNL streak camera has been used to analyze high contrast pulses on the Nova laser facility. These pulses have a plateau at their leading edge (foot) with an amplitude which is approximately 1% of the maximum pulse height. Relying on other features of the pulses and on signal multiplexing, we were able to determine how accurately the foot amplitude was being represented by the camera. Results indicate that the useful single channel dynamic range of the instrument approaches 100:1. 1 ref., 4 figs., 1 tab.

  5. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  6. A microwave FEL [free electron laser] code using waveguide modes

    International Nuclear Information System (INIS)

    Byers, J.A.; Cohen, R.H.

    1987-08-01

    A free electron laser code, GFEL, is being developed for application to the LLNL tokamak current drive experiment, MTX. This single-frequency code solves for the slowly varying complex field amplitude using the usual wiggler-averaged equations of existing codes, in particular FRED, except that it describes the fields by a 2D expansion in the rectangular waveguide modes, using coupling coefficients similar to those developed by Wurtele, which include effects of spatial variations in the fields seen by the wiggler motion of the particles. Our coefficients differ from those of Wurtele in two respects. First, we have found a missing √2γ/a_w factor in his C_z; when corrected, this increases the effect of the E_z field component, and this in turn reduces the amplitude of the TM mode. Second, we have consistently retained all terms of second order in the wiggle amplitude. Both corrections are necessary for accurate computation. GFEL has the capability of following the TE_0n and TE(M)_m1 modes simultaneously. GFEL produces results nearly identical to those from FRED if the coupling coefficients are adjusted to equal those implied by the algorithm in FRED. Normally, the two codes produce results that are similar but different in detail due to the different treatment of modes higher than TE_01. 5 refs., 2 figs., 1 tab

  7. Homological stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Jonas T., E-mail: jonastyleranderson@gmail.com

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. - Highlights: • We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. • We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. • We find and classify all 2D homological stabilizer codes. • We find optimal codes among the homological stabilizer codes.

  8. Diagnostic Coding for Epilepsy.

    Science.gov (United States)

    Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R

    2016-02-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  9. Coding of Neuroinfectious Diseases.

    Science.gov (United States)

    Barkley, Gregory L

    2015-12-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  10. Vector Network Coding

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L X L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector co...

  11. Entropy Coding in HEVC

    OpenAIRE

    Sze, Vivienne; Marpe, Detlev

    2014-01-01

    Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes th...

  12. Generalized concatenated quantum codes

    International Nuclear Information System (INIS)

    Grassl, Markus; Shor, Peter; Smith, Graeme; Smolin, John; Zeng Bei

    2009-01-01

    We discuss the concept of generalized concatenated quantum codes. This generalized concatenation method provides a systematical way for constructing good quantum codes, both stabilizer codes and nonadditive codes. Using this method, we construct families of single-error-correcting nonadditive quantum codes, in both binary and nonbinary cases, which not only outperform any stabilizer codes for finite block length but also asymptotically meet the quantum Hamming bound for large block length.
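
    For reference, the quantum Hamming bound mentioned above, in its standard nondegenerate form for a binary ((n, 2^k)) code correcting t errors (a textbook statement, not a result quoted from the article):

```latex
\sum_{j=0}^{t} \binom{n}{j}\, 3^{j}\, 2^{k} \;\le\; 2^{n}
```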

  13. Rateless feedback codes

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Koike-Akino, Toshiaki; Orlik, Philip

    2012-01-01

    This paper proposes a concept called rateless feedback coding. We redesign the existing LT and Raptor codes by introducing new degree distributions for the case when a few feedback opportunities are available. We show that incorporating feedback into LT codes can significantly decrease both the coding overhead and the encoding/decoding complexity. Moreover, we show that, at the price of a slight increase in the coding overhead, linear complexity is achieved with Raptor feedback coding.
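
    The LT codes being redesigned here generate each output symbol by XORing a randomly chosen subset of source blocks, with the subset size drawn from a soliton degree distribution. A toy encoder sketch follows (ideal soliton distribution, byte-sized blocks; the feedback redesign of the paper is not shown):

```python
import random

def lt_encode(blocks, n_out, seed=1):
    """Toy LT-code encoder: each output symbol XORs a random subset of the
    k source blocks, with degree drawn from the ideal soliton distribution."""
    rng = random.Random(seed)
    k = len(blocks)
    # Ideal soliton: P(1) = 1/k, P(d) = 1/(d(d-1)) for d = 2..k.
    weights = [1 / k] + [1 / (d * (d - 1)) for d in range(2, k + 1)]
    out = []
    for _ in range(n_out):
        d = rng.choices(range(1, k + 1), weights=weights)[0]
        idx = rng.sample(range(k), d)
        sym = 0
        for i in idx:
            sym ^= blocks[i]
        out.append((idx, sym))     # the decoder needs the neighbor list too
    return out

print(lt_encode([0x12, 0x34, 0x56, 0x78], n_out=6))
```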

  14. Advanced video coding systems

    CERN Document Server

    Gao, Wen

    2015-01-01

    This comprehensive and accessible text/reference presents an overview of the state of the art in video coding technology. Specifically, the book introduces the tools of the AVS2 standard, describing how AVS2 can help to achieve a significant improvement in coding efficiency for future video networks and applications by incorporating smarter coding tools such as scene video coding. Topics and features: introduces the basic concepts in video coding, and presents a short history of video coding technology and standards; reviews the coding framework, main coding tools, and syntax structure of AV

  15. Coding for dummies

    CERN Document Server

    Abraham, Nikhil

    2015-01-01

    Hands-on exercises help you learn to code like a pro. No coding experience is required for Coding For Dummies, your one-stop guide to building a foundation of knowledge in writing computer code for web, application, and software development. It doesn't matter if you've dabbled in coding or never written a line of code, this book guides you through the basics. Using foundational web development languages like HTML, CSS, and JavaScript, it explains in plain English how coding works and why it's needed. Online exercises developed by Codecademy, a leading online code training site, help hone coding skill

  16. 32 X 2.5 Gb/s Optical Code Division Multiplexing (O-CDM) For Agile Optical Networking (Phase II) Final Report CRADA No. TC02051.0

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mendez, A. J. [Mendez R & D Associates, El Segundo, CA (United States)

    2017-09-08

    This was a collaborative effort between Lawrence Livermore National Security, LLC (formerly The Regents of the University of California)/Lawrence Livermore National Laboratory (LLNL) and Mendez R & D Associates (MRDA) to develop and demonstrate a reconfigurable and cost effective design for optical code division multiplexing (O-CDM) with high spectral efficiency and throughput, as applied to the field of distributed computing, including multiple accessing (sharing of communication resources) and bidirectional data distribution in fiber-to-the-premise (FTTx) networks.

  17. Comparison of results between different precision MAFIA codes

    International Nuclear Information System (INIS)

    Farkas, D.; Tice, B.

    1990-01-01

    In order to satisfy the inquiries of the MAFIA code users at SLAC, an evaluation of these codes was done. This consisted of running a cavity with known solutions. This study considered only the time-independent solutions; no wake-field calculations were tried. The two machines involved were the NMFECC Cray (e-machine) at LLNL and the IBM/3081 at SLAC. The primary difference between the implementations of the codes on these machines is that the Cray has 64-bit accuracy while the IBM version has 32-bit accuracy. Unfortunately this study is incomplete, as the post-processor (P3) could not be made to work properly on the SLAC machine. This meant that no Q's were calculated and no field patterns were generated. A certain amount of guessing had to be done when constructing the comparison tables. This problem aside, the probable conclusions that may be drawn are: (1) 32-bit precision is adequate for frequency determination; (2) 64-bit precision is desirable for field determination. This conclusion is deduced from the accuracy statistics. The cavity selected for study was a rectangular one with dimensions (4,3,5) in centimeters. Only half of this cavity was used (2,3,5), with the x dimension being the one that was halved. The boundary conditions (B.C.) on the plane of symmetry were varied between Neumann and Dirichlet so as to cover all possible modes. Ten (10) modes were run for each boundary condition

  18. Discussion on LDPC Codes and Uplink Coding

    Science.gov (United States)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress that the workgroup on Low-Density Parity-Check (LDPC) for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  19. Locally orderless registration code

    DEFF Research Database (Denmark)

    2012-01-01

    This is code for the TPAMI paper "Locally Orderless Registration". The code requires Intel Threading Building Blocks to be installed and is provided for 64 bit on Mac, Linux, and Windows.

  20. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    Shannon limit of the channel. Among the earliest discovered codes that approach the Shannon limit were the low-density parity-check (LDPC) codes. The term low density arises from the property of the parity-check matrix defining the code. We will now define this matrix and the role that it plays in decoding.
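
    The parity-check matrix just described also drives the simplest hard-decision LDPC decoder, bit flipping: repeatedly flip whichever bit participates in the most unsatisfied checks. A toy sketch follows (the small matrix is only for illustration and is not actually low-density at this size):

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit flipping: flip the bit involved in the most
    unsatisfied parity checks until the syndrome is all zero."""
    c = y.copy()
    for _ in range(max_iter):
        syndrome = H.dot(c) % 2
        if not syndrome.any():
            return c                         # all checks satisfied
        # For each bit, count the failing checks it participates in.
        votes = H[syndrome == 1].sum(axis=0)
        c[np.argmax(votes)] ^= 1
    return c

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])
received = np.array([0, 0, 1, 0, 0, 0])      # all-zero codeword, one error
print(bit_flip_decode(H, received))          # recovers the all-zero word
```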

  1. Manually operated coded switch

    International Nuclear Information System (INIS)

    Barnette, J.H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried, and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made

  2. Coding in Muscle Disease.

    Science.gov (United States)

    Jones, Lyell K; Ney, John P

    2016-12-01

    Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.

  3. Summary of photochemical and radiative data used in the LLNL one-dimensional transport-kinetics model of the troposphere and stratosphere: 1982

    International Nuclear Information System (INIS)

    Connell, P.S.; Wuebbles, D.J.

    1983-01-01

    This report summarizes the contents and sources of the photochemical and radiative segment of the LLNL one-dimensional transport-kinetics model of the troposphere and stratosphere. Data include the solar flux incident at the top of the atmosphere, absorption spectra for O2, O3 and NO2, and effective absorption coefficients for about 40 photolytic processes as functions of wavelength and, in a few cases, temperature and pressure. The current data set represents understanding of atmospheric photochemical processes as of late 1982 and relies largely on NASA Evaluation Number 5 of Chemical Kinetics and Photochemical Data for Use in Stratospheric Modeling, JPL Publication 82-57 (DeMore et al., 1982). Implementation in the model, including the treatment of multiple scattering and cloud cover, is discussed in Wuebbles (1981)

  4. LLNL Radiation Protection Program (RPP) Rev 9.2, Implementation of 10 CFR 835, 'Occupational Radiation Protection'

    Energy Technology Data Exchange (ETDEWEB)

    Shingleton, K. L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-06-15

    The Department of Energy (DOE) originally issued 10 CFR Part 835, Occupational Radiation Protection, on January 1, 1994. This regulation, hereafter referred to as “the Rule”, required DOE contractors to develop and maintain a DOE-approved Radiation Protection Program (RPP); DOE approved the initial Lawrence Livermore National Laboratory (LLNL) RPP (Rev 2) on 6/29/95. DOE issued a revision to the Rule on December 4, 1998 and approved LLNL’s revised RPP (Rev 7.1) on 11/18/99. DOE issued a second Rule revision on June 8, 2007 (effective July 9, 2007) and on June 13, 2008 approved LLNL’s RPP (Rev 9.0), which contained plans and measures for coming into compliance with the 2007 Rule changes. DOE issued a correction to the Rule on April 21, 2009.

  5. QR Codes 101

    Science.gov (United States)

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…
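
    Generating a QR code programmatically is a one-liner in Python, assuming the third-party qrcode package (with Pillow) is installed; the URL below is a placeholder, not one from the article.

```python
import qrcode  # third-party: pip install "qrcode[pil]"

# A QR code comfortably holds a full URL, far beyond a bar code's ~20 digits.
img = qrcode.make("https://www.example.com/qr-codes-101")
img.save("qr_demo.png")
print("wrote qr_demo.png")
```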

  6. Historical Doses from Tritiated Water and Tritiated Hydrogen Gas Released to the Atmosphere from Lawrence Livermore National Laboratory (LLNL). Part 5. Accidental Releases

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, S

    2007-08-15

    Over the course of fifty-three years, LLNL had six acute releases of tritiated hydrogen gas (HT) and one acute release of tritiated water vapor (HTO) that were too large relative to the annual releases to be included as part of the annual releases from normal operations detailed in Parts 3 and 4 of the Tritium Dose Reconstruction (TDR). Sandia National Laboratories/California (SNL/CA) had one such release of HT and one of HTO. Doses to the maximally exposed individual (MEI) for these accidents have been modeled using an equation derived from the time-dependent tritium model, UFOTRI, and parameter values based on expert judgment. All of these acute releases are described in this report. Doses that could not have been exceeded from the large HT releases of 1965 and 1970 were calculated to be 43 µSv (4.3 mrem) and 120 µSv (12 mrem) to an adult, respectively. Two published sets of dose predictions for the accidental HT release in 1970 are compared with the dose predictions of this TDR. The highest predicted dose was for an acute release of HTO in 1954. For this release, the dose that could not have been exceeded was estimated to have been 2 mSv (200 mrem), although, because of the high uncertainty about the predictions, the likely dose may have been as low as 360 µSv (36 mrem) or less. The estimated maximum exposures from the accidental releases were such that no adverse health effects would be expected. Appendix A lists all accidents and large routine puff releases that have occurred at LLNL and SNL/CA between 1953 and 2005. Appendix B describes the processes unique to tritium that must be modeled after an acute release, some of the time-dependent tritium models being used today, and the results of tests of these models.

  7. Summary of International Waste Management Programs (LLNL Input to SNL L3 MS: System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW)

    Energy Technology Data Exchange (ETDEWEB)

    Greenberg, Harris R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Blink, James A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Halsey, William G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Sutton, Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-08-11

    The Used Fuel Disposition Campaign (UFDC) within the Department of Energy’s Office of Nuclear Energy (DOE-NE) Fuel Cycle Technology (FCT) program has been tasked with investigating the disposal of the nation’s spent nuclear fuel (SNF) and high-level nuclear waste (HLW) for a range of potential waste forms and geologic environments. This Lessons Learned task is part of a multi-laboratory effort, with this LLNL report providing input to a Level 3 SNL milestone (System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW). The work package number is: FTLL11UF0328; the work package title is: Technical Bases / Lessons Learned; the milestone number is: M41UF032802; and the milestone title is: “LLNL Input to SNL L3 MS: System-Wide Integration and Site Selection Concepts for Future Disposition Options for HLW”. The system-wide integration effort will integrate all aspects of waste management and disposal, integrating the waste generators, interim storage, transportation, and ultimate disposal at a repository site. The review of international experience in these areas is required to support future studies that address all of these components in an integrated manner. Note that this report is a snapshot of nuclear power infrastructure and international waste management programs that is current as of August 2011, with one notable exception. No attempt has been made to discuss the currently evolving world-wide response to the tragic consequences of the earthquake and tsunami that devastated Japan on March 11, 2011, leaving more than 15,000 people dead and more than 8,000 people missing, and severely damaging the Fukushima Daiichi nuclear power complex. Continuing efforts in FY 2012 will update the data, and summarize it in an Excel spreadsheet for easy comparison and assist in the knowledge management of the study cases.

  8. Vectorization of nuclear codes for atmospheric transport and exposure calculation of radioactive materials

    International Nuclear Information System (INIS)

    Asai, Kiyoshi; Shinozawa, Naohisa; Ishikawa, Hirohiko; Chino, Masamichi; Hayashi, Takashi

    1983-02-01

    Three computer codes, MATHEW and ADPIC of LLNL and GAMPUL of JAERI, for prediction of the wind field, concentration, and external exposure rate of airborne radioactive materials are vectorized, and the results are presented. Using the continuity equation of incompressible flow as a constraint, MATHEW calculates the three-dimensional wind field by a variational method. Using the particle-in-cell method, ADPIC calculates the advection and diffusion of radioactive materials in a three-dimensional wind field over terrain, and gives the concentration of the materials in each cell of the domain. GAMPUL calculates the external exposure rate assuming a Gaussian plume type distribution of concentration. The vectorized code MATHEW attained a 7.8 times speedup on a FACOM230-75 APU vector processor. ADPIC and GAMPUL are estimated to attain speedups of 1.5 and 4 times, respectively, on a CRAY-1 type vector processor. (author)
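
    The Gaussian plume distribution assumed by GAMPUL has a standard closed form for ground-reflecting releases. A hedged sketch follows; the release rate, wind speed, and dispersion coefficients are invented, and in practice σy and σz would be functions of downwind distance and atmospheric stability rather than passed in directly.

```python
import numpy as np

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume concentration at (y, z), e.g. in
    Bq/m^3 for release rate q in Bq/s and wind speed u in m/s."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2 * sigma_z**2)) +
                np.exp(-(z + h)**2 / (2 * sigma_z**2)))  # image-source term
    return q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline, ground-level receptor; sigmas roughly appropriate to ~1 km
# downwind were simply picked by hand for the illustration.
print(gaussian_plume(q=1e9, u=5.0, y=0.0, z=0.0,
                     h=30.0, sigma_y=80.0, sigma_z=40.0), "Bq/m^3")
```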

  9. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  10. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
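
    The collapse ORLIB performs is, at its core, a flux-weighted average over energy groups; a toy sketch of that operation (the group structure and numbers are invented for illustration):

        import numpy as np

        def collapse(sigma_g, phi_g):
            """Collapse multigroup cross sections sigma_g to one group by
            weighting each group with the scalar flux phi_g, in the spirit
            of ORLIB's collapse of the ACTL library."""
            sigma_g = np.asarray(sigma_g, dtype=float)
            phi_g = np.asarray(phi_g, dtype=float)
            return (sigma_g * phi_g).sum() / phi_g.sum()

        # three-group toy example: thermal, epithermal, fast
        print(collapse([45.0, 3.0, 1.2], [2.0e13, 5.0e13, 8.0e13]))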

  11. The materiality of Code

    DEFF Research Database (Denmark)

    Soon, Winnie

    2014-01-01

    This essay studies the source code of an artwork from a software studies perspective. By examining code that comes close to the approach of critical code studies (Marino, 2006), I trace the network artwork Pupufu (Lin, 2009) to understand various real-time approaches to social media platforms (MSN, Twitter and Facebook). The focus is not to investigate the functionalities and efficiencies of the code, but to study and interpret the program level of code in order to trace the use of various technological methods such as third-party libraries and platforms’ interfaces. These are important to understand the socio-technical side of a changing network environment. Through the study of code, including but not limited to source code, technical specifications and other materials in relation to the artwork production, I would like to explore the materiality of code that goes beyond technical...

  12. Coding for optical channels

    CERN Document Server

    Djordjevic, Ivan; Vasic, Bane

    2010-01-01

    This unique book provides a coherent and comprehensive introduction to the fundamentals of optical communications, signal processing and coding for optical channels. It is the first to integrate the fundamentals of coding theory and optical communication.

  13. SEVERO code - user's manual

    International Nuclear Information System (INIS)

    Sacramento, A.M. do.

    1989-01-01

    This user's manual contains all the necessary information concerning the use of the SEVERO code. This computer code deals with the statistics of extremes: extreme winds, extreme precipitation, and flooding hazard risk analysis. (A.C.A.S.)

  14. Synthesizing Certified Code

    OpenAIRE

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach for formally demonstrating software quality. Its basic idea is to require code producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates that can be checked independently. Since code certification uses the same underlying technology as program verification, it requires detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding annotations to th...

  15. FERRET data analysis code

    International Nuclear Information System (INIS)

    Schmittroth, F.

    1979-09-01

    A documentation of the FERRET data analysis code is given. The code provides a way to combine related measurements and calculations in a consistent evaluation. Basically a very general least-squares code, it is oriented towards problems frequently encountered in nuclear data and reactor physics. A strong emphasis is on the proper treatment of uncertainties and correlations and in providing quantitative uncertainty estimates. Documentation includes a review of the method, structure of the code, input formats, and examples
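
    A compact sketch of the covariance-weighted least-squares combination at the heart of such an evaluation (illustrative only; FERRET's actual formulation is more general):

        import numpy as np

        def gls_update(x0, C0, G, y, Cy):
            """Combine a prior estimate x0 (covariance C0) with measurements
            y = G @ x + noise (covariance Cy); returns the updated estimate
            and its covariance, so uncertainties propagate quantitatively."""
            C0i = np.linalg.inv(C0)
            Cyi = np.linalg.inv(Cy)
            C = np.linalg.inv(C0i + G.T @ Cyi @ G)
            x = C @ (C0i @ x0 + G.T @ Cyi @ y)
            return x, C

        # one-parameter example: prior 1.00 +/- 0.20, measurement 1.10 +/- 0.10
        x, C = gls_update(np.array([1.0]), np.array([[0.04]]),
                          np.array([[1.0]]), np.array([1.1]), np.array([[0.01]]))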

  16. Stylize Aesthetic QR Code

    OpenAIRE

    Xu, Mingliang; Su, Hao; Li, Yafei; Li, Xi; Liao, Jing; Niu, Jianwei; Lv, Pei; Zhou, Bing

    2018-01-01

    With the continued proliferation of smart mobile devices, Quick Response (QR) code has become one of the most-used types of two-dimensional code in the world. Aiming at beautifying the appearance of QR codes, existing works have developed a series of techniques to make the QR code more visual-pleasant. However, these works still leave much to be desired, such as visual diversity, aesthetic quality, flexibility, universal property, and robustness. To address these issues, in this paper, we pro...

  17. Enhancing QR Code Security

    OpenAIRE

    Zhang, Linfan; Zheng, Shuang

    2015-01-01

    Quick Response codes open the possibility to convey data in a unique way, yet insufficient prevention and protection might lead to QR codes being exploited on behalf of attackers. This thesis starts by presenting a general introduction to the background and stating two problems regarding QR code security, which is followed by comprehensive research on both the QR code itself and related issues. From the research, a solution taking advantage of cloud and cryptography together with an implementation come af...

  18. Opening up codings?

    DEFF Research Database (Denmark)

    Steensig, Jakob; Heinemann, Trine

    2015-01-01

    doing formal coding and when doing more “traditional” conversation analysis research based on collections. We are more wary, however, of the implication that coding-based research is the end result of a process that starts with qualitative investigations and ends with categories that can be coded...

  19. Gauge color codes

    DEFF Research Database (Denmark)

    Bombin Palomo, Hector

    2015-01-01

    Color codes are topological stabilizer codes with unusual transversality properties. Here I show that their group of transversal gates is optimal and only depends on the spatial dimension, not the local geometry. I also introduce a generalized, subsystem version of color codes. In 3D they allow...

  20. Refactoring test code

    NARCIS (Netherlands)

    A. van Deursen (Arie); L.M.F. Moonen (Leon); A. van den Bergh; G. Kok

    2001-01-01

    Two key aspects of extreme programming (XP) are unit testing and merciless refactoring. Given the fact that the ideal test code / production code ratio approaches 1:1, it is not surprising that unit tests are being refactored. We found that refactoring test code is different from

  1. Software Certification - Coding, Code, and Coders

    Science.gov (United States)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  2. The network code

    International Nuclear Information System (INIS)

    1997-01-01

    The Network Code defines the rights and responsibilities of all users of the natural gas transportation system in the liberalised gas industry in the United Kingdom. This report describes the operation of the Code, what it means, how it works and its implications for the various participants in the industry. The topics covered are: development of the competitive gas market in the UK; key points in the Code; gas transportation charging; impact of the Code on producers upstream; impact on shippers; gas storage; supply point administration; impact of the Code on end users; the future. (20 tables; 33 figures) (UK)

  3. Coding for Electronic Mail

    Science.gov (United States)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  4. NAGRADATA. Code key. Geology

    International Nuclear Information System (INIS)

    Mueller, W.H.; Schneider, B.; Staeuble, J.

    1984-01-01

    This reference manual provides users of the NAGRADATA system with comprehensive keys to the coding/decoding of geological and technical information to be stored in or retrieved from the databank. Emphasis has been placed on input data coding. When data is retrieved, the translation of stored coded information into plain language is done automatically by computer. Three keys list the complete set of currently defined codes for the NAGRADATA system, namely codes with appropriate definitions, arranged: 1. according to subject matter (thematically); 2. with the codes listed alphabetically; and 3. with the definitions listed alphabetically. Additional explanation is provided for the proper application of the codes and the logic behind the creation of new codes to be used within the NAGRADATA system. NAGRADATA makes use of codes instead of plain language for data storage; this offers the following advantages: speed of data processing, mainly data retrieval; economy of storage requirements; and the standardisation of terminology. The nature of this thesaurus-like 'key to codes' makes it impossible either to establish a final form or to cover the entire spectrum of requirements. Therefore, this first issue of codes for NAGRADATA must be considered to represent the current state of progress of a living system, and future editions will be issued in a loose-leaf ring-book system which can be updated by an organised (updating) service. (author)

  5. XSOR codes users manual

    International Nuclear Information System (INIS)

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named ''XSOR''. The purpose of the XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast-running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms

  6. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    1999-01-01

    The main goal of the present lecture is to show how transport lattice calculations are realised in a standard computer code. This is illustrated on the example of the WIMSD code, one of the most popular tools for reactor calculations. Most of the approaches discussed here can be easily adapted to any other lattice code. The description of the code assumes a basic knowledge of the reactor lattice, on the level given in the lecture on 'Reactor lattice transport calculations'. For a more advanced explanation of the WIMSD code, the reader is directed to the detailed descriptions of the code cited in the References. The discussion of the methods and models included in the code is followed by the generally used homogenisation procedure and several numerical examples of discrepancies in calculated multiplication factors based on different sources of library data. (author)

  7. DLLExternalCode

    Energy Technology Data Exchange (ETDEWEB)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as the top-level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
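
    The workflow the DLL implements (write an input file, launch the external application, read its outputs back) can be sketched as follows; the executable name and file formats here are hypothetical stand-ins, not the actual DLLExternalCode conventions:

        import subprocess

        def run_external(inputs, exe="external_code", infile="run.inp",
                         outfile="run.out"):
            """Write inputs to a file, run the (hypothetical) external code,
            and return the outputs it writes, one value per line."""
            with open(infile, "w") as f:
                f.writelines(f"{v}\n" for v in inputs)
            subprocess.run([exe, infile], check=True)
            with open(outfile) as f:
                return [float(line) for line in f]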

  8. Toric Varieties and Codes, Error-correcting Codes, Quantum Codes, Secret Sharing and Decoding

    DEFF Research Database (Denmark)

    Hansen, Johan Peder

    We present toric varieties and associated toric codes and their decoding. Toric codes are applied to construct Linear Secret Sharing Schemes (LSSS) with strong multiplication by the Massey construction. Asymmetric Quantum Codes are obtained from toric codes by the A.R. Calderbank, P.W. Shor and A.M. Steane construction of stabilizer codes (CSS) from linear codes containing their dual codes.

  9. Thermal safety characterization on PETN, PBX-9407, LX-10-2, LX-17-1 and detonator in the LLNL's P-ODTX system

    Energy Technology Data Exchange (ETDEWEB)

    Hsu, P. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Strout, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, J. G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kahl, E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ellsworth, F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Healy, T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-09-21

    Incidents caused by fire and other thermal events can heat energetic materials, which may lead to thermal explosion and result in structural damage and casualties. Thus, it is important to understand the response of energetic materials to thermal insults. The One-Dimensional-Time to Explosion (ODTX) system at the Lawrence Livermore National Laboratory (LLNL) has been used for decades to characterize the thermal safety of energetic materials. In this study, a pressure monitoring element has been integrated into the ODTX system (P-ODTX) to perform thermal explosion (cook-off) experiments (thermal runaway) on PETN powder, PBX-9407, LX-10-2, LX-17-1, and detonator samples (cup tests). The P-ODTX testing generates useful data (thermal explosion temperature, thermal explosion time, and gas pressures) to assist with the thermal safety assessment of relevant energetic materials and components. This report summarizes the results of P-ODTX experiments that were performed from May 2015 to July 2017. Recent upgrades to the data acquisition system allow for rapid pressure monitoring at microsecond intervals during thermal explosion. These pressure data are also included in the report.

  10. An Optimal Linear Coding for Index Coding Problem

    OpenAIRE

    Pezeshkpour, Pouya

    2015-01-01

    An optimal linear coding solution for the index coding problem is established. Instead of the network coding approach, which focuses on graph-theoretic and algebraic methods, a linear coding program for solving both the unicast and groupcast index coding problems is presented. The coding is proved to be the optimal solution from the linear perspective and can be easily utilized for any number of messages. The importance of this work lies mostly in the usage of the presented coding in the groupcast index coding ...

  11. The Aesthetics of Coding

    DEFF Research Database (Denmark)

    Andersen, Christian Ulrik

    2007-01-01

    Computer art is often associated with computer-generated expressions (digitally manipulated audio/images in music, video, stage design, media facades, etc.). In recent computer art, however, the code-text itself – not the generated output – has become the artwork (Perl Poetry, ASCII Art, obfuscated code, etc.). The presentation relates this artistic fascination of code to a media critique expressed by Florian Cramer, claiming that the graphical interface represents a media separation (of text/code and image) causing alienation to the computer’s materiality. Cramer is thus the voice of a new ‘code avant-garde’. In line with Cramer, the artists Alex McLean and Adrian Ward (aka Slub) declare: “art-oriented programming needs to acknowledge the conditions of its own making – its poesis.” By analysing the Live Coding performances of Slub (where they program computer music live), the presentation

  12. Majorana fermion codes

    International Nuclear Information System (INIS)

    Bravyi, Sergey; Terhal, Barbara M; Leemhuis, Bernhard

    2010-01-01

    We initiate the study of Majorana fermion codes (MFCs). These codes can be viewed as extensions of Kitaev's one-dimensional (1D) model of unpaired Majorana fermions in quantum wires to higher spatial dimensions and interacting fermions. The purpose of MFCs is to protect quantum information against low-weight fermionic errors, that is, operators acting on sufficiently small subsets of fermionic modes. We examine to what extent MFCs can surpass qubit stabilizer codes in terms of their stability properties. A general construction of 2D MFCs is proposed that combines topological protection based on a macroscopic code distance with protection based on fermionic parity conservation. Finally, we use MFCs to show how to transform any qubit stabilizer code to a weakly self-dual CSS code.

  13. Theory of epigenetic coding.

    Science.gov (United States)

    Elder, D

    1984-06-07

    The logic of genetic control of development may be based on a binary epigenetic code. This paper revises the author's previous scheme dealing with the numerology of annelid metamerism in these terms. Certain features of the code had been deduced to be combinatorial, others not. This paradoxical contrast is resolved here by the interpretation that these features relate to different operations of the code: the combinatorial to coding the identity of units, the non-combinatorial to coding the production of units. Consideration of a second paradox in the theory of epigenetic coding leads to a new solution which further provides a basis for epimorphic regeneration, and may in particular throw light on the "regeneration-duplication" phenomenon. A possible test of the model is also put forward.

  14. DISP1 code

    International Nuclear Information System (INIS)

    Vokac, P.

    1999-12-01

    DISP1 code is a simple tool for assessment of the dispersion of the fission product cloud escaping from a nuclear power plant after an accident. The code makes it possible to tentatively check the feasibility of calculations by more complex PSA3 codes and/or codes for real-time dispersion calculations. The number of input parameters is reasonably low and the user interface is simple enough to allow a rapid processing of sensitivity analyses. All input data entered through the user interface are stored in the text format. Implementation of dispersion model corrections taken from the ARCON96 code enables the DISP1 code to be employed for assessment of the radiation hazard within the NPP area, in the control room for instance. (P.A.)

  15. Phonological coding during reading.

    Science.gov (United States)

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  16. The aeroelastic code FLEXLAST

    Energy Technology Data Exchange (ETDEWEB)

    Visser, B. [Stork Product Eng., Amsterdam (Netherlands)

    1996-09-01

    To support the discussion on aeroelastic codes, a description of the code FLEXLAST was given and experiences within benchmarks and measurement programmes were summarized. The code FLEXLAST has been developed since 1982 at Stork Product Engineering (SPE). Since 1992 FLEXLAST has been used by Dutch industries for wind turbine and rotor design. Based on the comparison with measurements, it can be concluded that the main shortcomings of wind turbine modelling lie in the field of aerodynamics, wind field and wake modelling. (au)

  17. MORSE Monte Carlo code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described

  18. QR codes for dummies

    CERN Document Server

    Waters, Joe

    2012-01-01

    Find out how to effectively create, use, and track QR codes. QR (Quick Response) codes are popping up everywhere, and businesses are reaping the rewards. Get in on the action with the no-nonsense advice in this streamlined, portable guide. You'll find out how to get started, plan your strategy, and actually create the codes. Then you'll learn to link codes to mobile-friendly content, track your results, and develop ways to give your customers value that will keep them coming back. It's all presented in the straightforward style you've come to know and love, with a dash of humor thrown in.

  19. Tokamak Systems Code

    International Nuclear Information System (INIS)

    Reid, R.L.; Barrett, R.J.; Brown, T.G.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged

  20. Efficient Coding of Information: Huffman Coding -RE ...

    Indian Academy of Sciences (India)

    ... to a stream of equally-likely symbols so as to recover the original stream in the event of errors. ... The source-coding problem is one of finding a mapping from U to a ... probability that the random variable X takes the value x, written as ...
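
    A minimal sketch of the Huffman construction itself, building a prefix code by repeatedly merging the two least likely subtrees (the symbol frequencies are illustrative):

        import heapq

        def huffman(freqs):
            """Build a prefix code from symbol frequencies; returns a dict
            mapping each symbol to its bitstring."""
            heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freqs.items())]
            heapq.heapify(heap)
            tiebreak = len(heap)
            while len(heap) > 1:
                w1, _, c1 = heapq.heappop(heap)
                w2, _, c2 = heapq.heappop(heap)
                merged = {s: "0" + b for s, b in c1.items()}
                merged.update({s: "1" + b for s, b in c2.items()})
                heapq.heappush(heap, [w1 + w2, tiebreak, merged])
                tiebreak += 1
            return heap[0][2]

        print(huffman({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))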

  1. NR-code: Nonlinear reconstruction code

    Science.gov (United States)

    Yu, Yu; Pen, Ue-Li; Zhu, Hong-Ming

    2018-04-01

    NR-code applies nonlinear reconstruction to the dark matter density field in redshift space and solves for the nonlinear mapping from the initial Lagrangian positions to the final redshift space positions; this reverses the large-scale bulk flows and improves the precision measurement of the baryon acoustic oscillations (BAO) scale.

  2. Enhanced Verification Test Suite for Physics Simulation Codes

    Energy Technology Data Exchange (ETDEWEB)

    Kamm, J R; Brock, J S; Brandon, S T; Cotrell, D L; Johnson, B; Knupp, P; Rider, W; Trucano, T; Weirs, V G

    2008-10-10

    This document discusses problems with which to augment, in quantity and in quality, the existing tri-laboratory suite of verification problems used by Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL). The purpose of verification analysis is to demonstrate whether the numerical results of the discretization algorithms in physics and engineering simulation codes provide correct solutions of the corresponding continuum equations. The key points of this document are: (1) Verification deals with mathematical correctness of the numerical algorithms in a code, while validation deals with physical correctness of a simulation in a regime of interest. This document is about verification. (2) The current seven-problem Tri-Laboratory Verification Test Suite, which has been used for approximately five years at the DOE WP laboratories, is limited. (3) Both the methodology for and technology used in verification analysis have evolved and been improved since the original test suite was proposed. (4) The proposed test problems are in three basic areas: (a) Hydrodynamics; (b) Transport processes; and (c) Dynamic strength-of-materials. (5) For several of the proposed problems we provide a 'strong sense verification benchmark', consisting of (i) a clear mathematical statement of the problem with sufficient information to run a computer simulation, (ii) an explanation of how the code result and benchmark solution are to be evaluated, and (iii) a description of the acceptance criterion for simulation code results. (6) It is proposed that the set of verification test problems with which any particular code is evaluated include some of the problems described in this document. Analysis of the proposed verification test problems constitutes part of a necessary--but not sufficient--step that builds confidence in physics and engineering simulation codes. More complicated test cases, including physics models of
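
    A typical computation in such verification analyses is the observed order of convergence obtained from benchmark errors on two grids; a sketch (the error values are illustrative):

        import math

        def observed_order(err_coarse, err_fine, refinement=2.0):
            """Observed convergence order p from errors on two grids related
            by the given refinement ratio: err ~ C*h**p implies
            p = log(err_coarse/err_fine) / log(refinement)."""
            return math.log(err_coarse / err_fine) / math.log(refinement)

        # errors 4.0e-3 and 1.1e-3 on grids h and h/2 give p ~ 1.86,
        # consistent with a formally second-order scheme
        print(observed_order(4.0e-3, 1.1e-3))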

  3. Monte Carlo code criticality benchmark comparisons for waste packaging

    International Nuclear Information System (INIS)

    Alesso, H.P.; Annese, C.E.; Buck, R.M.; Pearson, J.S.; Lloyd, W.R.

    1992-07-01

    COG is a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL). It solves the Boltzmann equation for the transport of neutrons and photons. The objective of this paper is to report on COG results for criticality benchmark experiments, both on a Cray mainframe and on an HP 9000 workstation. COG has recently been ported to workstations to improve its accessibility to a wider community of users. COG has some similarities to a number of other computer codes used in the shielding and criticality community. The recently introduced high-performance reduced instruction set (RISC) UNIX workstations provide computational power that approaches mainframes at a fraction of the cost. A version of COG is currently being developed for the Hewlett Packard 9000/730 computer with a UNIX operating system. Subsequent porting operations will move COG to SUN, DEC, and IBM workstations. In addition, a CAD system for preparation of the geometry input for COG is being developed. In July 1977, Babcock & Wilcox Co. (B&W) was awarded a contract to conduct a series of critical experiments that simulated close-packed storage of LWR-type fuel. These experiments provided data for benchmarking and validating calculational methods used in predicting K-effective of nuclear fuel storage in close-packed, neutron-poisoned arrays. Low-enriched UO2 fuel pins in water-moderated lattices in fuel storage represent a challenging criticality calculation for Monte Carlo codes, particularly when the fuel pins extend out of the water. COG and KENO calculational results of these criticality benchmark experiments are presented
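
    In the spirit of point-wise Monte Carlo transport, a toy estimator for neutron transmission through a purely absorbing slab, where the analytic answer exp(-sigma_t * thickness) provides the check (all values are illustrative):

        import random

        def transmission(sigma_t, thickness, n=100_000, seed=1):
            """Estimate the fraction of neutrons crossing a purely absorbing
            slab by sampling exponential path lengths with mean free path
            1/sigma_t."""
            rng = random.Random(seed)
            hits = sum(rng.expovariate(sigma_t) > thickness for _ in range(n))
            return hits / n

        print(transmission(sigma_t=0.5, thickness=4.0))  # analytic: exp(-2) ~ 0.135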

  4. Synthesizing Certified Code

    Science.gov (United States)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  5. Code of Ethics

    Science.gov (United States)

    Division for Early Childhood, Council for Exceptional Children, 2009

    2009-01-01

    The Code of Ethics of the Division for Early Childhood (DEC) of the Council for Exceptional Children is a public statement of principles and practice guidelines supported by the mission of DEC. The foundation of this Code is based on sound ethical reasoning related to professional practice with young children with disabilities and their families…

  6. Interleaved Product LDPC Codes

    OpenAIRE

    Baldi, Marco; Cancellieri, Giovanni; Chiaraluce, Franco

    2011-01-01

    Product LDPC codes take advantage of LDPC decoding algorithms and the high minimum distance of product codes. We propose to add suitable interleavers to improve the waterfall performance of LDPC decoding. Interleaving also reduces the number of low-weight codewords, which gives a further advantage in the error floor region.

  7. Insurance billing and coding.

    Science.gov (United States)

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms.

  8. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  9. Scrum Code Camps

    DEFF Research Database (Denmark)

    Pries-Heje, Lene; Pries-Heje, Jan; Dalgaard, Bente

    2013-01-01

    is required. In this paper we present the design of such a new approach, the Scrum Code Camp, which can be used to assess agile team capability in a transparent and consistent way. A design science research approach is used to analyze properties of two instances of the Scrum Code Camp where seven agile teams...

  10. RFQ simulation code

    International Nuclear Information System (INIS)

    Lysenko, W.P.

    1984-04-01

    We have developed the RFQLIB simulation system to provide a means to systematically generate the new versions of radio-frequency quadrupole (RFQ) linac simulation codes that are required by the constantly changing needs of a research environment. This integrated system simplifies keeping track of the various versions of the simulation code and makes it practical to maintain complete and up-to-date documentation. In this scheme, there is a certain standard version of the simulation code that forms a library upon which new versions are built. To generate a new version of the simulation code, the routines to be modified or added are appended to a standard command file, which contains the commands to compile the new routines and link them to the routines in the library. The library itself is rarely changed. Whenever the library is modified, however, this modification is seen by all versions of the simulation code, which actually exist as different versions of the command file. All code is written according to the rules of structured programming. Modularity is enforced by not using COMMON statements, simplifying the relation of the data flow to a hierarchy diagram. Simulation results are similar to those of the PARMTEQ code, as expected, because of the similar physical model. Different capabilities, such as those for generating beams matched in detail to the structure, are available in the new code for help in testing new ideas in designing RFQ linacs

  11. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article, Volume 2, Issue 3, March ... Author Affiliations: Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  12. An analysis of options available for developing a common laser ray tracing package for Ares and Kull code frameworks

    Energy Technology Data Exchange (ETDEWEB)

    Weeratunga, S K

    2008-11-06

    Ares and Kull are mature code frameworks that support ALE hydrodynamics for a variety of HEDP applications at LLNL, using two widely different meshing approaches. While Ares is based on a 2-D/3-D block-structured mesh data base, Kull is designed to support unstructured, arbitrary polygonal/polyhedral meshes. In addition, both frameworks are capable of running applications on large, distributed-memory parallel machines. Currently, both these frameworks separately support assorted collections of physics packages related to HEDP, including one for the energy deposition by laser/ion-beam ray tracing. This study analyzes the options available for developing a common laser/ion-beam ray tracing package that can be easily shared between these two code frameworks and concludes with a set of recommendations for its development.

  13. 78 FR 18321 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2013-03-26

    ... Energy Conservation Code. International Existing Building Code. International Fire Code. International... Code. International Property Maintenance Code. International Residential Code. International Swimming Pool and Spa Code. International Wildland-Urban Interface Code. International Zoning Code. ICC Standards...

  14. Validation of thermalhydraulic codes

    International Nuclear Information System (INIS)

    Wilkie, D.

    1992-01-01

    Thermalhydraulic codes need to be validated against experimental data collected over a wide range of situations if they are to be relied upon. A good example is provided by the nuclear industry, where codes are used for safety studies and for determining operating conditions. Errors in the codes could lead to financial penalties, to the incorrect estimation of the consequences of accidents, and even to the accidents themselves. Comparison between prediction and experiment is often described qualitatively or in approximate terms, e.g. ''agreement is within 10%''. A quantitative method is preferable, especially when several competing codes are available. The codes can then be ranked in order of merit. Such a method is described. (Author)

  15. Fracture flow code

    International Nuclear Information System (INIS)

    Dershowitz, W; Herbert, A.; Long, J.

    1989-03-01

    The hydrology of the SCV site will be modelled utilizing discrete fracture flow models. These models are complex and cannot be fully verified by comparison to analytical solutions. The best approach for verification of these codes is therefore cross-verification between different codes. This is complicated by the variation in assumptions and solution techniques utilized in different codes. Cross-verification procedures are defined which allow comparison of the codes developed by Harwell Laboratory, Lawrence Berkeley Laboratory, and Golder Associates Inc. Six cross-verification datasets are defined for deterministic and stochastic verification of geometric and flow features of the codes. Additional datasets for verification of transport features will be documented in a future report. (13 figs., 7 tabs., 10 refs.) (authors)

  16. Particle-in-cell/accelerator code for space-charge dominated beam simulation

    Energy Technology Data Exchange (ETDEWEB)

    2012-05-08

    Warp is a multidimensional discrete-particle beam simulation program designed to be applicable where the beam space charge is non-negligible or dominant. It is being developed in a collaboration among LLNL, LBNL and the University of Maryland. It was originally designed and optimized for heavy ion fusion accelerator physics studies, but has received use in a broader range of applications, including for example laser wakefield accelerators, e-cloud studies in high energy accelerators, particle traps and other areas. At present it incorporates 3-D, axisymmetric (r,z), planar (x-z) and transverse slice (x,y) descriptions, with both electrostatic and electromagnetic fields, and a beam envelope model. The code is built atop the Python interpreter language.
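
    A minimal sketch of the time-centered (leapfrog) particle advance that discrete-particle codes of this kind build on; the focusing strength and particle loading below are invented, and the self-consistent space-charge field solve that Warp adds is omitted for brevity:

        import numpy as np

        def leapfrog(x, v, accel, dt, steps):
            """Advance particles with a time-centered kick-drift-kick scheme:
            half-step velocity update, full-step position update, half-step
            velocity update."""
            for _ in range(steps):
                v = v + 0.5 * dt * accel(x)
                x = x + dt * v
                v = v + 0.5 * dt * accel(x)
            return x, v

        # transverse oscillation in a linear focusing channel, k = 4.0 s^-2
        x = np.linspace(-1e-3, 1e-3, 11)
        v = np.zeros_like(x)
        x, v = leapfrog(x, v, accel=lambda x: -4.0 * x, dt=1e-2, steps=1000)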

  17. Huffman coding in advanced audio coding standard

    Science.gov (United States)

    Brzuchalski, Grzegorz

    2012-05-01

    This article presents several hardware architectures of Advanced Audio Coding (AAC) Huffman noiseless encoder, its optimisations and working implementation. Much attention has been paid to optimise the demand of hardware resources especially memory size. The aim of design was to get as short binary stream as possible in this standard. The Huffman encoder with whole audio-video system has been implemented in FPGA devices.

  18. Report number codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, R.N. (ed.)

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.

  19. Report number codes

    International Nuclear Information System (INIS)

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name

  20. On the use of the HOTSPOT code for evaluating accidents involving radioactive materials

    International Nuclear Information System (INIS)

    Sattinger, D.; Sarussi, R.; Tzarfati, Y.; Levinson, S.; Tshuva, A.

    2004-01-01

    The HOTSPOT Health Physics code was created by LLNL in order to provide Health Physics personnel with a fast, field-portable calculation tool for evaluating accidents involving radioactive materials. The HOTSPOT code is a first-order approximation of the radiation effects associated with the atmospheric release of radioactive materials. The HOTSPOT programs are reasonably accurate for a timely initial assessment. More importantly, the HOTSPOT codes produce consistent output for the same input assumptions, and minimize the probability of errors associated with reading a graph incorrectly. Four general programs, Plume, Explosion, Fire, and Resuspension, calculate a downwind assessment following the release of radioactive material resulting from a continuous or puff release, explosive release, fuel fire, or an area contamination event. Additional programs estimate the dose commitment from inhalation of any one of the radionuclides listed in the database of radionuclides, calibrate a radiation survey instrument for ground survey measurements, and screen for alpha emitters in the lung. We believe that the HOTSPOT code is extremely valuable in providing reasonable and reliable guidance for a diversity of applications. For example, we demonstrate the release of 241Am (20 Ci) to the atmosphere
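
    The downwind assessments rest on a Gaussian plume model, which can be sketched compactly (an illustrative textbook form, not the HOTSPOT implementation; the dispersion parameters sigma_y and sigma_z are supplied directly rather than derived from stability class):

        import math

        def plume_conc(Q, u, sigma_y, sigma_z, y=0.0, z=0.0, H=0.0):
            """Ground-reflected Gaussian plume concentration for source
            strength Q (activity/s), wind speed u (m/s), lateral offset y,
            height z, and effective release height H (all metres)."""
            lateral = math.exp(-y**2 / (2 * sigma_y**2))
            vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2)) +
                        math.exp(-(z + H)**2 / (2 * sigma_z**2)))
            return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

        print(plume_conc(Q=1.0, u=3.0, sigma_y=30.0, sigma_z=15.0))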

  1. EM modeling for GPIR using 3D FDTD modeling codes

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.D.

    1994-10-01

    An analysis of the one-, two-, and three-dimensional electrical characteristics of structural cement and concrete is presented. This work connects experimental efforts in characterizing cement and concrete in the frequency and time domains with the Finite Difference Time Domain (FDTD) modeling efforts for these substances. These efforts include electromagnetic (EM) modeling of simple lossless homogeneous materials with aggregate and targets, and the modeling of dispersive and lossy materials with aggregate and complex target geometries for Ground Penetrating Imaging Radar (GPIR). Two- and three-dimensional FDTD codes (developed at LLNL) were used for the modeling efforts. The purpose of the experimental and modeling efforts is to gain knowledge about the electrical properties of concrete typically used in the construction industry for bridges and other load-bearing structures. The goal is to optimize the performance of a high-sample-rate impulse radar and data acquisition system and to design an antenna system to match the characteristics of this material. Results show agreement to within 2 dB between the amplitudes of the experimental and modeled data, while the frequency peaks correlate to within 10%, the differences being due to the unknown exact nature of the aggregate placement.
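
    For orientation, a bare-bones 1-D Yee/FDTD update loop in normalized units (vastly simpler than the LLNL 2-D/3-D codes; free space, Courant number 1, with an invented soft Gaussian source):

        import numpy as np

        def fdtd_1d(n=400, steps=800):
            """Leapfrog E/H updates on a staggered 1-D grid: H is advanced
            from the curl of E, then E from the curl of H."""
            ez = np.zeros(n)
            hy = np.zeros(n - 1)
            for t in range(steps):
                hy += np.diff(ez)
                ez[1:-1] += np.diff(hy)
                ez[n // 4] += np.exp(-((t - 30) / 10.0) ** 2)  # soft source
            return ez

        field = fdtd_1d()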

  2. Cryptography cracking codes

    CERN Document Server

    2014-01-01

    While cracking a code might seem like something few of us would encounter in our daily lives, it is actually far more prevalent than we may realize. Anyone who has had personal information taken because of a hacked email account can understand the need for cryptography and the importance of encryption: essentially the need to code information to keep it safe. This detailed volume examines the logic and science behind various ciphers, their real-world uses, how codes can be broken, and the use of technology in this oft-overlooked field.

  3. Coded Splitting Tree Protocols

    DEFF Research Database (Denmark)

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called the coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  4. Transport theory and codes

    International Nuclear Information System (INIS)

    Clancy, B.E.

    1986-01-01

    This chapter begins with the neutron transport equation and covers one-dimensional plane geometry problems, one-dimensional spherical geometry problems, and numerical solutions. The section on the ANISN code and its look-alikes covers the problems which can be solved: eigenvalue problems, the outer iteration loop, the inner iteration loop, and finite difference solution procedures. The input and output data for ANISN are also discussed. Two-dimensional problems, such as those handled by the DOT code, are also treated. Finally, an overview of Monte Carlo methods and codes is given

  5. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
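
    Stabilized linear inversion of this kind is typically a regularized least-squares solve; a generic sketch (not the INVERT algorithm itself; the damping parameter alpha is the stabilization knob):

        import numpy as np

        def stabilized_inverse(G, d, alpha):
            """Solve min ||G m - d||^2 + alpha * ||m||^2 via the damped
            normal equations; the alpha term suppresses the small singular
            values that make gravity inversion ill-posed."""
            n = G.shape[1]
            return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)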

  6. Fulcrum Network Codes

    DEFF Research Database (Denmark)

    2015-01-01

    Fulcrum network codes, which are a network coding framework, achieve three objectives: (i) to reduce the overhead per coded packet to almost 1 bit per source packet; (ii) to operate the network using only low field size operations at intermediate nodes, dramatically reducing complexity...... in the network; and (iii) to deliver an end-to-end performance that is close to that of a high field size network coding system for high-end receivers while simultaneously catering to low-end ones that can only decode in a lower field size. Sources may encode using a high field size expansion to increase...... the number of dimensions seen by the network using a linear mapping. Receivers can tradeoff computational effort with network delay, decoding in the high field size, the low field size, or a combination thereof....

  7. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed; Ghanem, Bernard; Wonka, Peter

    2018-01-01

    coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements

  8. SASSYS LMFBR systems code

    International Nuclear Information System (INIS)

    Dunn, F.E.; Prohammer, F.G.; Weber, D.P.

    1983-01-01

    The SASSYS LMFBR systems analysis code is being developed mainly to analyze the behavior of the shut-down heat-removal system and the consequences of failures in the system, although it is also capable of analyzing a wide range of transients, from mild operational transients through more severe transients leading to sodium boiling in the core and possible melting of clad and fuel. The code includes a detailed SAS4A multi-channel core treatment plus a general thermal-hydraulic treatment of the primary and intermediate heat-transport loops and the steam generators. The code can handle any LMFBR design, loop or pool, with an arbitrary arrangement of components. The code is fast running: usually faster than real time

  9. OCA Code Enforcement

    Data.gov (United States)

    Montgomery County of Maryland — The Office of the County Attorney (OCA) processes Code Violation Citations issued by County agencies. The citations can be viewed by issued department, issued date...

  10. The fast code

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, L.N.; Wilson, R.E. [Oregon State Univ., Dept. of Mechanical Engineering, Corvallis, OR (United States)

    1996-09-01

    The FAST Code, which is capable of determining structural loads on a flexible, teetering, horizontal-axis wind turbine, is described, and comparisons of calculated loads with test data are given at two wind speeds for the ESI-80. The FAST Code models a two-bladed HAWT with degrees of freedom for blade bending, teeter, drive train flexibility, yaw, and windwise and crosswind tower motion. The code allows blade dimensions, stiffnesses, and weights to differ and models tower shadow, wind shear, and turbulence. Additionally, dynamic stall is included, as are delta-3 and an underslung rotor. Load comparisons are made with ESI-80 test data in the form of power spectral density, rainflow counting, occurrence histograms, and azimuth-averaged bin plots. It is concluded that agreement between the FAST Code and test results is good. (au)

  11. Code Disentanglement: Initial Plan

    Energy Technology Data Exchange (ETDEWEB)

    Wohlbier, John Greaton [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kelley, Timothy M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockefeller, Gabriel M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Calef, Matthew Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-01-27

    The first step to making more ambitious changes in the EAP code base is to disentangle the code into a set of independent, levelized packages. We define a package as a collection of code, most often across a set of files, that provides a defined set of functionality; a package a) can be built and tested as an entity and b) fits within an overall levelization design. Each package contributes one or more libraries, or an application that uses the other libraries. A package set is levelized if the relationships between packages form a directed, acyclic graph and each package uses only packages at lower levels of the diagram (in Fortran this relationship is often describable by the use relationship between modules). Independent packages permit independent, and therefore parallel, development. The packages form separable units for the purposes of development and testing. This is a proven path for enabling finer-grained changes to a complex code.
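
    Levelization as defined here is mechanically checkable: assign each package a level one above the maximum level of the packages it uses, and fail on any cycle. An illustrative sketch (the package names are invented):

        def levelize(uses):
            """Return {package: level} for a 'uses' graph, raising on cycles
            (a cyclic package set cannot be levelized)."""
            levels = {}
            def level(pkg, seen=()):
                if pkg in seen:
                    raise ValueError(f"cycle through {pkg}")
                if pkg not in levels:
                    deps = uses.get(pkg, [])
                    levels[pkg] = 1 + max((level(d, seen + (pkg,)) for d in deps),
                                          default=0)
                return levels[pkg]
            for pkg in uses:
                level(pkg)
            return levels

        print(levelize({"app": ["physics", "mesh"], "physics": ["mesh"], "mesh": []}))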

  12. Induction technology optimization code

    International Nuclear Information System (INIS)

    Caporaso, G.J.; Brooks, A.L.; Kirbie, H.C.

    1992-01-01

    A code has been developed to evaluate relative costs of induction accelerator driver systems for relativistic klystrons. The code incorporates beam generation, transport and pulsed power system constraints to provide an integrated design tool. The code generates an injector/accelerator combination which satisfies the top level requirements and all system constraints once a small number of design choices have been specified (rise time of the injector voltage and aspect ratio of the ferrite induction cores, for example). The code calculates dimensions of accelerator mechanical assemblies and values of all electrical components. Cost factors for machined parts, raw materials and components are applied to yield a total system cost. These costs are then plotted as a function of the two design choices to enable selection of an optimum design based on various criteria. (Author) 11 refs., 3 figs

  13. VT ZIP Code Areas

    Data.gov (United States)

    Vermont Center for Geographic Information — (Link to Metadata) A ZIP Code Tabulation Area (ZCTA) is a statistical geographic entity that approximates the delivery area for a U.S. Postal Service five-digit...

  14. Bandwidth efficient coding

    CERN Document Server

    Anderson, John B

    2017-01-01

    Bandwidth Efficient Coding addresses the major challenge in communication engineering today: how to communicate more bits of information in the same radio spectrum. Energy and bandwidth are needed to transmit bits, and bandwidth affects capacity the most. Methods have been developed that are ten times as energy efficient at a given bandwidth consumption as simple methods. These employ signals with very complex patterns and are called "coding" solutions. The book begins with classical theory before introducing new techniques that combine older methods of error correction coding and radio transmission in order to create narrowband methods that are as efficient in both spectrum and energy as nature allows. Other topics covered include modulation techniques such as CPM, coded QAM and pulse design.

  15. Reactor lattice codes

    International Nuclear Information System (INIS)

    Kulikowska, T.

    2001-01-01

    The description of reactor lattice codes is carried out on the example of the WIMSD-5B code. The WIMS code, in its various versions, is the most widely recognised lattice code. It is used in all parts of the world for calculations of research and power reactors. The version WIMSD-5B is distributed free of charge by the NEA Data Bank. The description of its main features given in the present lecture follows the aspects defined previously for lattice calculations in the lecture on Reactor Lattice Transport Calculations. The spatial models are described, and the approach to the energy treatment is given. Finally, the specific algorithm applied in fuel depletion calculations is outlined. (author)

  16. Critical Care Coding for Neurologists.

    Science.gov (United States)

    Nuwer, Marc R; Vespa, Paul M

    2015-10-01

    Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.

  17. Lattice Index Coding

    OpenAIRE

    Natarajan, Lakshmi; Hong, Yi; Viterbo, Emanuele

    2014-01-01

    The index coding problem involves a sender with K messages to be transmitted across a broadcast channel, and a set of receivers each of which demands a subset of the K messages while having prior knowledge of a different subset as side information. We consider the specific case of noisy index coding where the broadcast channel is Gaussian and every receiver demands all the messages from the source. Instances of this communication problem arise in wireless relay networks, sensor networks, and ...

  18. Towards advanced code simulators

    International Nuclear Information System (INIS)

    Scriven, A.H.

    1990-01-01

    The Central Electricity Generating Board (CEGB) uses advanced thermohydraulic codes extensively to support PWR safety analyses. A system has been developed to allow fully interactive execution of any code with graphical simulation of the operator desk and mimic display. The system operates in a virtual machine environment, with the thermohydraulic code executing in one virtual machine, communicating via interrupts with any number of other virtual machines each running other programs and graphics drivers. The driver code itself does not have to be modified from its normal batch form. Shortly following the release of RELAP5 MOD1 in IBM compatible form in 1983, this code was used as the driver for this system. When RELAP5 MOD2 became available, it was adopted with no changes needed in the basic system. Overall the system has been used for some 5 years for the analysis of LOBI tests, full scale plant studies and for simple what-if studies. For gaining rapid understanding of system dependencies it has proved invaluable. The graphical mimic system, being independent of the driver code, has also been used with other codes to study core rewetting, to replay results obtained from batch jobs on a CRAY2 computer system and to display suitably processed experimental results from the LOBI facility to aid interpretation. For the above work real-time execution was not necessary. Current work now centers on implementing the RELAP 5 code on a true parallel architecture machine. Marconi Simulation have been contracted to investigate the feasibility of using upwards of 100 processors, each capable of a peak of 30 MIPS to run a highly detailed RELAP5 model in real time, complete with specially written 3D core neutronics and balance of plant models. This paper describes the experience of using RELAP5 as an analyzer/simulator, and outlines the proposed methods and problems associated with parallel execution of RELAP5

  19. Cracking the Gender Codes

    DEFF Research Database (Denmark)

    Rennison, Betina Wolfgang

    2016-01-01

    extensive work to raise the proportion of women. This has helped slightly, but women remain underrepresented at the corporate top. Why is this so? What can be done to solve it? This article presents five different types of answers relating to five discursive codes: nature, talent, business, exclusion...... in leadership management, we must become more aware and take advantage of this complexity. We must crack the codes in order to crack the curve....

  20. PEAR code review

    International Nuclear Information System (INIS)

    De Wit, R.; Jamieson, T.; Lord, M.; Lafortune, J.F.

    1997-07-01

    As a necessary component in the continuous improvement and refinement of methodologies employed in the nuclear industry, regulatory agencies need to periodically evaluate these processes to improve confidence in results and ensure appropriate levels of safety are being achieved. The independent and objective review of industry-standard computer codes forms an essential part of this program. To this end, this work undertakes an in-depth review of the computer code PEAR (Public Exposures from Accidental Releases), developed by Atomic Energy of Canada Limited (AECL) to assess accidental releases from CANDU reactors. PEAR is based largely on the models contained in the Canadian Standards Association (CSA) N288.2-M91. This report presents the results of a detailed technical review of the PEAR code to identify any variations from the CSA standard and other supporting documentation, verify the source code, assess the quality of numerical models and results, and identify general strengths and weaknesses of the code. The version of the code employed in this review is the one which AECL intends to use for CANDU 9 safety analyses. (author)

  1. KENO-V code

    International Nuclear Information System (INIS)

    Cramer, S.N.

    1984-01-01

    The KENO-V code is the current release of the Oak Ridge multigroup Monte Carlo criticality code development. The original KENO, with 16-group Hansen-Roach cross sections and P_1 scattering, was one of the first multigroup Monte Carlo codes, and it and its successors have always been a much-used research tool for criticality studies. KENO-V is able to accept large neutron cross section libraries (a 218-group set is distributed with the code) and has a general P_N scattering capability. A supergroup feature allows execution of large problems on small computers, but at the expense of increased calculation time and system input/output operations. This supergroup feature is activated automatically by the code in a manner which utilizes as much computer memory as is available. The primary purpose of KENO-V is to calculate the system k_eff, from small bare critical assemblies to large reflected arrays of differing fissile and moderator elements. In this respect KENO-V neither has nor requires the many options and sophisticated biasing techniques of general Monte Carlo codes.
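
    As a toy illustration of the Monte Carlo criticality idea (not KENO itself), the sketch below estimates k for a one-group, infinite-medium problem by following analog neutron histories; all cross sections and the seed are invented.

    ```python
    # Toy analog Monte Carlo estimate of k for a one-group infinite medium.
    import random

    SIG_F, SIG_C, SIG_S = 0.05, 0.04, 0.25   # fission/capture/scatter (1/cm)
    NU = 2.5                                  # neutrons born per fission
    SIG_T = SIG_F + SIG_C + SIG_S

    def one_generation(n_source, rng):
        """Follow n_source histories to absorption; return neutrons born."""
        born = 0.0
        for _ in range(n_source):
            while True:
                x = rng.random() * SIG_T
                if x < SIG_F:               # fission: absorbed, NU born
                    born += NU
                    break
                elif x < SIG_F + SIG_C:     # capture: history ends
                    break
                # otherwise scatter: keep following the same neutron
        return born

    rng = random.Random(12345)
    n = 20000
    for gen in range(5):
        print("generation", gen, "k =", round(one_generation(n, rng) / n, 4))
    # Analytic check: k_inf = NU*SIG_F/(SIG_F+SIG_C) = 2.5*0.05/0.09 ~ 1.389
    ```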

  2. Code, standard and specifications

    International Nuclear Information System (INIS)

    Abdul Nassir Ibrahim; Azali Muhammad; Ab. Razak Hamzah; Abd. Aziz Mohamed; Mohamad Pauzi Ismail

    2008-01-01

    Radiography, like other techniques, requires standards. These standards are widely used, and the methods for applying them are well established; radiographic testing can therefore only be carried out in accordance with documented regulations. These regulations and guidelines are documented in codes, standards and specifications. In Malaysia, a level-one or basic radiographer may carry out radiographic work based on instructions given by a level-two or level-three radiographer. Such instructions are produced based on the guidelines mentioned in the documents, and the level-two radiographer must follow the specifications given in the standard when writing them. This scenario makes it clear that radiographic work is the type of work in which everything must follow the rules. As for codes, radiography follows the codes of the American Society of Mechanical Engineers (ASME), and at present the only such code in Malaysia is the rule published by the Atomic Energy Licensing Board (AELB), known as the Practical Code for Radiation Protection in Industrial Radiography. With this code in place, all radiographic work must follow the applicable rules and standards.

  3. LLNL superconducting magnets test facility

    Energy Technology Data Exchange (ETDEWEB)

    Manahan, R; Martovetsky, N; Moller, J; Zbasnik, J

    1999-09-16

    The FENIX facility at Lawrence Livermore National Laboratory was upgraded and refurbished in 1996-1998 for testing CICC superconducting magnets. The FENIX facility was used for superconducting high current, short sample tests for fusion programs in the late 1980s--early 1990s. The new facility includes a 4-m diameter vacuum vessel, two refrigerators, a 40 kA, 42 V computer controlled power supply, a new switchyard with a dump resistor, a new helium distribution valve box, several sets of power leads, data acquisition system and other auxiliary systems, which provide a lot of flexibility in testing of a wide variety of superconducting magnets in a wide range of parameters. The detailed parameters and capabilities of this test facility and its systems are described in the paper.

  4. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    OpenAIRE

    Gao, Yuan; Liu, Pengyu; Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance, but it also brings extremely high computational complexity. This paper presents new techniques for improving the coding tree in order to further reduce encoding time. A novel low-complexity coding tree mechanism is proposed for fast HEVC coding unit (CU) encoding. Firstly, the paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content ...
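
    As a toy of the quadtree CU decision process (not the paper's algorithm), the sketch below recursively splits a block only when coding it whole costs more than coding its quadrants, using pixel variance as a crude stand-in for a real rate-distortion cost.

    ```python
    # Toy quadtree partitioning with a variance-based stand-in cost.
    import numpy as np

    def variance_cost(block):
        return block.size * block.var()

    def partition(block, x=0, y=0, min_size=8):
        n = block.shape[0]
        whole = variance_cost(block)
        if n <= min_size:
            return [(x, y, n)], whole
        h = n // 2
        parts, split_cost = [], 0.0
        for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
            p, c = partition(block[dy:dy+h, dx:dx+h], x+dx, y+dy, min_size)
            parts += p
            split_cost += c
        # Keep the whole CU when splitting does not pay; real encoders try
        # to skip even evaluating the split (early termination).
        return ([(x, y, n)], whole) if whole <= split_cost else (parts, split_cost)

    frame = np.zeros((64, 64))
    frame[16:, 16:] = np.random.default_rng(2).random((48, 48))
    cus, cost = partition(frame)
    print(len(cus), "coding units, total cost", round(cost, 1))
    ```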

  5. SPECTRAL AMPLITUDE CODING OCDMA SYSTEMS USING ENHANCED DOUBLE WEIGHT CODE

    Directory of Open Access Journals (Sweden)

    F.N. HASOON

    2006-12-01

    Full Text Available A new code structure for spectral amplitude coding optical code division multiple access systems based on double weight (DW) code families is proposed. The DW code has a fixed weight of two. The enhanced double-weight (EDW) code is a variation of the DW code family that can have a variable weight greater than one. The EDW code possesses ideal cross-correlation properties and exists for every natural number n. The EDW code provides much better performance than existing codes such as the Hadamard and Modified Frequency-Hopping (MFH) codes. Both theoretical analysis and simulation show that the EDW code performs much better than the Hadamard and MFH codes.

  6. Nuclear code abstracts (1975 edition)

    International Nuclear Information System (INIS)

    Akanuma, Makoto; Hirakawa, Takashi

    1976-02-01

    Nuclear Code Abstracts is compiled by the Nuclear Code Committee to exchange information on nuclear code developments among members of the committee. Enlarging the collection, the present edition includes nuclear code abstracts obtained in 1975 through liaison officers of the organizations in Japan participating in the Nuclear Energy Agency's Computer Program Library at Ispra, Italy. The classification of nuclear codes and the format of code abstracts are the same as those used by the library. (auth.)

  7. Some new ternary linear codes

    Directory of Open Access Journals (Sweden)

    Rumen Daskalov

    2017-07-01

    Full Text Available Let an $[n,k,d]_q$ code be a linear code of length $n$, dimension $k$ and minimum Hamming distance $d$ over $GF(q)$. One of the most important problems in coding theory is to construct codes with optimal minimum distances. In this paper 22 new ternary linear codes are presented. Two of them are optimal. All new codes improve the respective lower bounds in [11].
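
    As a small worked example of the notation above (not one of the paper's codes), the sketch below computes the minimum distance of a toy ternary [4,2] code by enumerating all q^k codewords from an invented generator matrix.

    ```python
    # Enumerate all codewords of a small ternary linear code and find d.
    import itertools

    q = 3
    G = [[1, 0, 1, 1],   # generator matrix of an illustrative [4,2] code
         [0, 1, 1, 2]]

    def codewords(G, q):
        k, n = len(G), len(G[0])
        for msg in itertools.product(range(q), repeat=k):
            yield tuple(sum(msg[i] * G[i][j] for i in range(k)) % q
                        for j in range(n))

    # For a linear code, d equals the minimum weight of a nonzero codeword.
    d = min(sum(c != 0 for c in w) for w in codewords(G, q) if any(w))
    print("minimum distance d =", d)   # -> 3, so this is a [4,2,3]_3 code
    ```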

  8. ACE - Manufacturer Identification Code (MID)

    Data.gov (United States)

    Department of Homeland Security — The ACE Manufacturer Identification Code (MID) application is used to track and control identifications codes for manufacturers. A manufacturer is identified on an...

  9. Algebraic and stochastic coding theory

    CERN Document Server

    Kythe, Dave K

    2012-01-01

    Using a simple yet rigorous approach, Algebraic and Stochastic Coding Theory makes the subject of coding theory easy to understand for readers with a thorough knowledge of digital arithmetic, Boolean and modern algebra, and probability theory. It explains the underlying principles of coding theory and offers a clear, detailed description of each code. More advanced readers will appreciate its coverage of recent developments in coding theory and stochastic processes. After a brief review of coding history and Boolean algebra, the book introduces linear codes, including Hamming and Golay codes.
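
    As a concrete illustration of the Hamming codes mentioned above, the sketch below encodes four bits with the classic binary (7,4) Hamming code and corrects a single injected error via the syndrome; the matrices are standard systematic-form choices.

    ```python
    # Hamming (7,4): encode, inject one error, locate it via the syndrome.
    import numpy as np

    G = np.array([[1,0,0,0,1,1,0],   # generator matrix (systematic form)
                  [0,1,0,0,1,0,1],
                  [0,0,1,0,0,1,1],
                  [0,0,0,1,1,1,1]])
    H = np.array([[1,1,0,1,1,0,0],   # parity checks: G @ H.T % 2 == 0
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])

    msg = np.array([1, 0, 1, 1])
    codeword = msg @ G % 2
    received = codeword.copy()
    received[2] ^= 1                      # inject a single-bit error

    syndrome = received @ H.T % 2         # nonzero syndrome flags an error
    # The syndrome equals the column of H at the error position.
    err_pos = next(j for j in range(7)
                   if np.array_equal(H[:, j], syndrome))
    received[err_pos] ^= 1
    assert np.array_equal(received, codeword)
    print("corrected position", err_pos)  # -> 2
    ```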

  10. Optical coding theory with Prime

    CERN Document Server

    Kwong, Wing C

    2013-01-01

    Although several books cover the coding theory of wireless communications and the hardware technologies and coding techniques of optical CDMA, no book has been specifically dedicated to optical coding theory-until now. Written by renowned authorities in the field, Optical Coding Theory with Prime gathers together in one volume the fundamentals and developments of optical coding theory, with a focus on families of prime codes, supplemented with several families of non-prime codes. The book also explores potential applications to coding-based optical systems and networks. Learn How to Construct

  11. The Aster code

    International Nuclear Information System (INIS)

    Delbecq, J.M.

    1999-01-01

    The Aster code is a 2D or 3D finite-element calculation code for structures developed by the R and D direction of Electricite de France (EdF). This dossier presents a complete overview of the characteristics and uses of the Aster code: introduction of version 4; the context of Aster (organisation of the code development, versions, systems and interfaces, development tools, quality assurance, independent validation); static mechanics (linear thermo-elasticity, Euler buckling, cables, Zarka-Casier method); non-linear mechanics (materials behaviour, big deformations, specific loads, unloading and loss of load proportionality indicators, global algorithm, contact and friction); rupture mechanics (G energy restitution level, restitution level in thermo-elasto-plasticity, 3D local energy restitution level, KI and KII stress intensity factors, calculation of limit loads for structures), specific treatments (fatigue, rupture, wear, error estimation); meshes and models (mesh generation, modeling, loads and boundary conditions, links between different modeling processes, resolution of linear systems, display of results etc..); vibration mechanics (modal and harmonic analysis, dynamics with shocks, direct transient dynamics, seismic analysis and aleatory dynamics, non-linear dynamics, dynamical sub-structuring); fluid-structure interactions (internal acoustics, mass, rigidity and damping); linear and non-linear thermal analysis; steels and metal industry (structure transformations); coupled problems (internal chaining, internal thermo-hydro-mechanical coupling, chaining with other codes); products and services. (J.S.)

  12. Adaptive distributed source coding.

    Science.gov (United States)

    Varodayan, David; Lin, Yao-Chung; Girod, Bernd

    2012-05-01

    We consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive the Slepian-Wolf bound and devise coding algorithms for a block-candidate model of this problem. The encoder sends, in addition to syndrome bits, a portion of the source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source symbols and the hidden statistical dependence variables. We also develop novel techniques based on density evolution (DE) to analyze the coding algorithms. We experimentally confirm that our DE analysis closely approximates practical performance. This result allows us to efficiently optimize parameters of the algorithms. In particular, we show that the system performs close to the Slepian-Wolf bound when an appropriate doping rate is selected. We then apply our coding and analysis techniques to a reduced-reference video quality monitoring system and show a bit rate saving of about 75% compared with fixed-length coding.
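
    The following toy sketch illustrates the syndrome idea behind this style of distributed source coding, with brute-force minimum-weight decoding standing in for the paper's sum-product algorithm; the parity-check matrix and sequences are invented.

    ```python
    # Toy Slepian-Wolf-style syndrome coding: the encoder sends only the
    # syndrome of x; the decoder recovers x from correlated side info y.
    import itertools
    import numpy as np

    H = np.array([[1,1,0,1,1,0,0],   # any binary parity-check matrix
                  [1,0,1,1,0,1,0],
                  [0,1,1,1,0,0,1]])
    n = H.shape[1]

    x = np.array([1, 0, 1, 1, 0, 1, 0])    # source sequence
    y = x.copy()
    y[4] ^= 1                              # side info: x with one bit flipped

    s = H @ x % 2                          # encoder sends 3 bits, not 7

    # Decoder: find the lowest-weight e with H(y + e) = s; then x_hat = y + e.
    target = (s + H @ y) % 2
    for w in range(n + 1):
        found = next((e for e in itertools.combinations(range(n), w)
                      if np.array_equal(H[:, list(e)].sum(axis=1) % 2, target)),
                     None)
        if found is not None:
            break
    e = np.zeros(n, dtype=int)
    e[list(found)] = 1
    x_hat = (y + e) % 2
    assert np.array_equal(x_hat, x)
    print("recovered x from 3 syndrome bits plus side information")
    ```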

  13. Speech coding code- excited linear prediction

    CERN Document Server

    Bäckström, Tom

    2017-01-01

    This book provides a scientific understanding of the most central techniques used in speech coding, both for advanced students and for professionals with a background in speech, audio and/or digital signal processing. It provides a clear connection between the whys, hows and whats, thus enabling a clear view of the necessity, purpose and solutions provided by various tools, as well as their strengths and weaknesses in each respect. Equivalently, this book sheds light on the following perspectives for each technology presented. Objective: what do we want to achieve, and especially why is this goal important? Resource (information): what information is available, and how can it be useful? Resource (platform): what kinds of platforms are we working with, and what are their capabilities and restrictions? This includes computational, memory and acoustic properties, and the transmission capacity of the devices used. The book goes on to address solutions: which solutions have been proposed, and how can they be used to reach the stated goals, and ...

  14. Spatially coded backscatter radiography

    International Nuclear Information System (INIS)

    Thangavelu, S.; Hussein, E.M.A.

    2007-01-01

    Conventional radiography requires access to two opposite sides of an object, which makes it unsuitable for the inspection of extended and/or thick structures (airframes, bridges, floors etc.). Backscatter imaging can overcome this problem, but the indications obtained are difficult to interpret. This paper applies the coded aperture technique to gamma-ray backscatter-radiography in order to enhance the detectability of flaws. This spatial coding method involves the positioning of a mask with closed and open holes to selectively permit or block the passage of radiation. The obtained coded-aperture indications are then mathematically decoded to detect the presence of anomalies. Indications obtained from Monte Carlo calculations were utilized in this work to simulate radiation scattering measurements. These simulated measurements were used to investigate the applicability of this technique to the detection of flaws by backscatter radiography
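
    A minimal one-dimensional sketch of the encode/decode cycle, assuming a random binary mask and circular geometry (much simpler than the paper's gamma-ray backscatter setting): the detector records the object convolved with the mask pattern, and regularized Fourier deconvolution recovers the point-like flaws. All data here are invented.

    ```python
    # 1-D toy of coded-aperture imaging with Fourier-domain decoding.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 64
    mask = (rng.random(n) < 0.5).astype(float)   # 1 = open hole, 0 = closed

    obj = np.zeros(n)
    obj[[10, 30, 31]] = [1.0, 2.0, 0.5]          # a few point-like flaws

    # Forward model: each open hole shifts a copy of the object onto the
    # detector, i.e. the shadowgram is a circular convolution.
    shadow = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(mask)))

    # Decode: divide in the Fourier domain, with a small ridge term for
    # stability at frequencies where the mask spectrum is weak.
    M = np.fft.fft(mask)
    eps = 1e-3
    recon = np.real(np.fft.ifft(np.fft.fft(shadow) * np.conj(M)
                                / (np.abs(M) ** 2 + eps)))

    print("strongest pixels:", np.argsort(recon)[-3:])  # likely 10, 30, 31
    ```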

  15. Aztheca Code; Codigo Aztheca

    Energy Technology Data Exchange (ETDEWEB)

    Quezada G, S.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, San Rafael Atlixco No. 186, Col. Vicentina, 09340 Ciudad de Mexico (Mexico); Centeno P, J.; Sanchez M, H., E-mail: sequga@gmail.com [UNAM, Facultad de Ingenieria, Ciudad Universitaria, Circuito Exterior s/n, 04510 Ciudad de Mexico (Mexico)

    2017-09-15

    This paper presents the Aztheca code, which is formed by the mathematical models of neutron kinetics, power generation, heat transfer, core thermo-hydraulics, recirculation systems, dynamic pressure and level models and the control system. The Aztheca code is validated with plant data, as well as with predictions from the manufacturer when the reactor operates in a stationary state. On the other hand, to demonstrate that the model is applicable during a transient, an event that occurred in a nuclear power plant with a BWR is selected. The plant data are compared with the results obtained with RELAP-5 and the Aztheca model. The results show that both RELAP-5 and the Aztheca code have the ability to adequately predict the behavior of the reactor. (Author)

  16. The Coding Question.

    Science.gov (United States)

    Gallistel, C R

    2017-07-01

    Recent electrophysiological results imply that the duration of the stimulus onset asynchrony in eyeblink conditioning is encoded by a mechanism intrinsic to the cerebellar Purkinje cell. This raises the general question - how is quantitative information (durations, distances, rates, probabilities, amounts, etc.) transmitted by spike trains and encoded into engrams? The usual assumption is that information is transmitted by firing rates. However, rate codes are energetically inefficient and computationally awkward. A combinatorial code is more plausible. If the engram consists of altered synaptic conductances (the usual assumption), then we must ask how numbers may be written to synapses. It is much easier to formulate a coding hypothesis if the engram is realized by a cell-intrinsic molecular mechanism. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Revised SRAC code system

    International Nuclear Information System (INIS)

    Tsuchihashi, Keichiro; Ishiguro, Yukio; Kaneko, Kunio; Ido, Masaru.

    1986-09-01

    Since the publication of JAERI-1285 in 1983 for the preliminary version of the SRAC code system, a number of additions and modifications to the functions have been made to establish an overall neutronics code system. Major points are (1) addition of JENDL-2 version of data library, (2) a direct treatment of doubly heterogeneous effect on resonance absorption, (3) a generalized Dancoff factor, (4) a cell calculation based on the fixed boundary source problem, (5) the corresponding edit required for experimental analysis and reactor design, (6) a perturbation theory calculation for reactivity change, (7) an auxiliary code for core burnup and fuel management, etc. This report is a revision of the users manual which consists of the general description, input data requirements and their explanation, detailed information on usage, mathematics, contents of libraries and sample I/O. (author)

  18. Code query by example

    Science.gov (United States)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrade of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis, to detect automatically potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: it is often the case that programmers working in this field are not computer science specialists but more of domain experts. Hence, they require a simple language to express custom rules.

  19. The correspondence between projective codes and 2-weight codes

    NARCIS (Netherlands)

    Brouwer, A.E.; Eupen, van M.J.M.; Tilborg, van H.C.A.; Willems, F.M.J.

    1994-01-01

    The hyperplanes intersecting a 2-weight code in the same number of points obviously form the point set of a projective code. On the other hand, if we have a projective code C, then we can make a 2-weight code by taking the multiset of points in PC with multiplicity γ(w), where w is the weight of ...

  20. Visualizing code and coverage changes for code review

    NARCIS (Netherlands)

    Oosterwaal, Sebastiaan; van Deursen, A.; De Souza Coelho, R.; Sawant, A.A.; Bacchelli, A.

    2016-01-01

    One of the tasks of reviewers is to verify that code modifications are well tested. However, current tools offer little support in understanding precisely how changes to the code relate to changes to the tests. In particular, it is hard to see whether (modified) test code covers the changed code.

  1. Turbo-Gallager Codes: The Emergence of an Intelligent Coding ...

    African Journals Online (AJOL)

    Today, both turbo codes and low-density parity-check codes are largely superior to other code families and are being used in an increasing number of modern communication systems including 3G standards, satellite and deep space communications. However, the two codes have certain distinctive characteristics that ...

  2. Code of Medical Ethics

    Directory of Open Access Journals (Sweden)

    . SZD-SZZ

    2017-03-01

    Full Text Available The Code was approved on December 12, 1992, at the 3rd regular meeting of the General Assembly of the Medical Chamber of Slovenia and revised on April 24, 1997, at the 27th regular meeting of the General Assembly of the Medical Chamber of Slovenia. The Code was updated and harmonized with the Medical Association of Slovenia and approved on October 6, 2016, at the regular meeting of the General Assembly of the Medical Chamber of Slovenia.

  3. Supervised Convolutional Sparse Coding

    KAUST Repository

    Affara, Lama Ahmed

    2018-04-08

    Convolutional Sparse Coding (CSC) is a well-established image representation model especially suited for image restoration tasks. In this work, we extend the applicability of this model by proposing a supervised approach to convolutional sparse coding, which aims at learning discriminative dictionaries instead of purely reconstructive ones. We incorporate a supervised regularization term into the traditional unsupervised CSC objective to encourage the final dictionary elements to be discriminative. Experimental results show that using supervised convolutional learning results in two key advantages. First, we learn more semantically relevant filters in the dictionary and second, we achieve improved image reconstruction on unseen data.

  4. CONCEPT computer code

    International Nuclear Information System (INIS)

    Delene, J.

    1984-01-01

    CONCEPT is a computer code that will provide conceptual capital investment cost estimates for nuclear and coal-fired power plants. The code can develop an estimate for construction at any point in time. Any unit size within the range of about 400 to 1300 MW electric may be selected. Any of 23 reference site locations across the United States and Canada may be selected. PWR, BWR, and coal-fired plants burning high-sulfur and low-sulfur coal can be estimated. Multiple-unit plants can be estimated. Costs due to escalation/inflation and interest during construction are calculated

  5. Principles of speech coding

    CERN Document Server

    Ogunfunmi, Tokunbo

    2010-01-01

    It is becoming increasingly apparent that all forms of communication, including voice, will be transmitted through packet-switched networks based on the Internet Protocol (IP). Therefore, the design of modern devices that rely on speech interfaces, such as cell phones and PDAs, requires a complete and up-to-date understanding of the basics of speech coding. Outlining key signal processing algorithms used to mitigate impairments to speech quality in VoIP networks, and offering a detailed yet easily accessible introduction to the field, Principles of Speech Coding provides an in-depth examination of the

  6. Increasing the efficiency of the TOUGH code for running large-scale problems in nuclear waste isolation

    International Nuclear Information System (INIS)

    Nitao, J.J.

    1990-08-01

    The TOUGH code developed at Lawrence Berkeley Laboratory (LBL) is being extensively used to numerically simulate the thermal and hydrologic environment around nuclear waste packages in the unsaturated zone for the Yucca Mountain Project. At the Lawrence Livermore National Laboratory (LLNL) we have rewritten approximately 80 percent of the TOUGH code to increase its speed and incorporate new options. The geometry of many problems requires large numbers of computational elements in order to realistically model detailed physical phenomena, and, as a result, large amounts of computer time are needed. In order to increase the speed of the code we have incorporated fast linear equation solvers, vectorization of substantial portions of the code, improved automatic time stepping, and implementation of table look-up for the steam table properties. These enhancements have increased the speed of the code for typical problems by a factor of 20 on the Cray 2 computer. In addition to the increase in computational efficiency we have added several options: vapor pressure lowering; equivalent continuum treatment of fractures; energy and material volumetric, mass and flux accounting; and Stefan-Boltzmann radiative heat transfer. 5 refs
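
    The table look-up idea is simple to sketch: replace repeated evaluation of an expensive property function with linear interpolation in a precomputed table. The "steam table" below is a made-up stand-in property curve, not the real correlations.

    ```python
    # Table look-up with linear interpolation for a property evaluation.
    import bisect

    T_pts = list(range(0, 401, 10))                         # temperature grid, C
    prop = [1.0 + 0.002 * t + 1e-5 * t * t for t in T_pts]  # fake property

    def lookup(t):
        """Linearly interpolate the (T_pts, prop) table at temperature t."""
        i = bisect.bisect_right(T_pts, t) - 1
        i = max(0, min(i, len(T_pts) - 2))                  # clamp to range
        frac = (t - T_pts[i]) / (T_pts[i + 1] - T_pts[i])
        return prop[i] + frac * (prop[i + 1] - prop[i])

    print(lookup(123.4))   # fast approximation of the property at 123.4 C
    ```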

  7. Evaluation Codes from an Affine Veriety Code Perspective

    DEFF Research Database (Denmark)

    Geil, Hans Olav

    2008-01-01

    Evaluation codes (also called order domain codes) are traditionally introduced as generalized one-point geometric Goppa codes. In the present paper we will give a new point of view on evaluation codes by introducing them instead as particularly nice examples of affine variety codes. Our study includes a reformulation of the usual methods to estimate the minimum distances of evaluation codes into the setting of affine variety codes. Finally we describe the connection to the theory of one-point geometric Goppa codes.

  8. Dual Coding in Children.

    Science.gov (United States)

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  9. Physical layer network coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Popovski, Petar; Yomo, Hiroyuki

    2014-01-01

    Physical layer network coding (PLNC) has been proposed to improve throughput of the two-way relay channel, where two nodes communicate with each other, being assisted by a relay node. Most of the works related to PLNC are focused on a simple three-node model and they do not take into account...
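
    A minimal sketch of the core PLNC exchange on the two-way relay channel, idealized to noiseless decoding: the relay broadcasts only the XOR of the two messages, and each node strips out its own message as side information.

    ```python
    # Two-way relay exchange via physical layer network coding (idealized).
    import secrets

    a = secrets.randbits(8)          # message at node A
    b = secrets.randbits(8)          # message at node B

    # Phase 1 (multiple access): the relay decodes the XOR a ^ b directly
    # from the superimposed signal (no noise, perfect decoding assumed).
    relay_broadcast = a ^ b

    # Phase 2 (broadcast): each node XORs with its own message.
    b_at_A = relay_broadcast ^ a
    a_at_B = relay_broadcast ^ b
    assert (b_at_A, a_at_B) == (b, a)
    print("two messages exchanged in 2 slots instead of 4")
    ```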

  10. Radioactive action code

    International Nuclear Information System (INIS)

    Anon.

    1988-01-01

    A new coding system, 'Hazrad', for buildings and transportation containers for alerting emergency services personnel to the presence of radioactive materials has been developed in the United Kingdom. The hazards of materials in the buildings or transport container, together with the recommended emergency action, are represented by a number of codes which are marked on the building or container and interpreted from a chart carried as a pocket-size guide. Buildings would be marked with the familiar yellow 'radioactive' trefoil, the written information 'Radioactive materials' and a list of isotopes. Under this the 'Hazrad' code would be written - three symbols to denote the relative radioactive risk (low, medium or high), the biological risk (also low, medium or high) and the third showing the type of radiation emitted, alpha, beta or gamma. The response cards indicate appropriate measures to take, eg for a high biological risk, Bio3, the wearing of a gas-tight protection suit is advised. The code and its uses are explained. (U.K.)

  11. Building Codes and Regulations.

    Science.gov (United States)

    Fisher, John L.

    The hazard of fire is of great concern to libraries due to combustible books and new plastics used in construction and interiors. Building codes and standards can offer architects and planners guidelines to follow but these standards should be closely monitored, updated, and researched for fire prevention. (DS)

  12. Physics of codes

    International Nuclear Information System (INIS)

    Cooper, R.K.; Jones, M.E.

    1989-01-01

    The title given this paper is a bit presumptuous, since one can hardly expect to cover the physics incorporated into all the codes already written and currently being written. The authors focus on those codes which have been found to be particularly useful in the analysis and design of linacs. At that the authors will be a bit parochial and discuss primarily those codes used for the design of radio-frequency (rf) linacs, although the discussions of TRANSPORT and MARYLIE have little to do with the time structures of the beams being analyzed. The plan of this paper is first to describe rather simply the concepts of emittance and brightness, then to describe rather briefly each of the codes TRANSPORT, PARMTEQ, TBCI, MARYLIE, and ISIS, indicating what physics is and is not included in each of them. It is expected that the vast majority of what is covered will apply equally well to protons and electrons (and other particles). This material is intended to be tutorial in nature and can in no way be expected to be exhaustive. 31 references, 4 figures

  13. Reliability and code level

    NARCIS (Netherlands)

    Kasperski, M.; Geurts, C.P.W.

    2005-01-01

    The paper describes the work of the IAWE Working Group WBG - Reliability and Code Level, one of the International Codification Working Groups set up at ICWE10 in Copenhagen. The following topics are covered: sources of uncertainties in the design wind load, appropriate design target values for the

  14. Ready, steady… Code!

    CERN Multimedia

    Anaïs Schaeffer

    2013-01-01

    This summer, CERN took part in the Google Summer of Code programme for the third year in succession. Open to students from all over the world, this programme leads to very successful collaborations for open source software projects.   Image: GSoC 2013. Google Summer of Code (GSoC) is a global programme that offers student developers grants to write code for open-source software projects. Since its creation in 2005, the programme has brought together some 6,000 students from over 100 countries worldwide. The students selected by Google are paired with a mentor from one of the participating projects, which can be led by institutes, organisations, companies, etc. This year, CERN PH Department’s SFT (Software Development for Experiments) Group took part in the GSoC programme for the third time, submitting 15 open-source projects. “Once published on the Google Summer for Code website (in April), the projects are open to applications,” says Jakob Blomer, one of the o...

  15. CERN Code of Conduct

    CERN Document Server

    Department, HR

    2010-01-01

    The Code is intended as a guide in helping us, as CERN contributors, to understand how to conduct ourselves, treat others and expect to be treated. It is based around the five core values of the Organization. We should all become familiar with it and try to incorporate it into our daily life at CERN.

  16. Nuclear safety code study

    Energy Technology Data Exchange (ETDEWEB)

    Hu, H.H.; Ford, D.; Le, H.; Park, S.; Cooke, K.L.; Bleakney, T.; Spanier, J.; Wilburn, N.P.; O' Reilly, B.; Carmichael, B.

    1981-01-01

    The objective is to analyze an overpower accident in an LMFBR. A simplified model of the primary coolant loop was developed in order to understand the instabilities encountered with the MELT III and SAS codes. The computer programs were translated for switching to the IBM 4331. Numerical methods were investigated for solving the neutron kinetics equations; the Adams and Gear methods were compared. (DLC)

  17. Revised C++ coding conventions

    CERN Document Server

    Callot, O

    2001-01-01

    This document replaces the note LHCb 98-049 by Pavel Binko. After a few years of practice, some simplification and clarification of the rules was needed. As many more people have now some experience in writing C++ code, their opinion was also taken into account to get a commonly agreed set of conventions

  18. Corporate governance through codes

    NARCIS (Netherlands)

    Haxhi, I.; Aguilera, R.V.; Vodosek, M.; den Hartog, D.; McNett, J.M.

    2014-01-01

    The UK's 1992 Cadbury Report defines corporate governance (CG) as the system by which businesses are directed and controlled. CG codes are a set of best practices designed to address deficiencies in the formal contracts and institutions by suggesting prescriptions on the preferred role and

  19. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

    information and coding theory. A large-scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by training, ...

  20. Broadcast Coded Slotted ALOHA

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Brännström, Frederik; Graell i Amat, Alexandre

    2016-01-01

    We propose an uncoordinated medium access control (MAC) protocol, called all-to-all broadcast coded slotted ALOHA (B-CSA) for reliable all-to-all broadcast with strict latency constraints. In B-CSA, each user acts as both transmitter and receiver in a half-duplex mode. The half-duplex mode gives ...

  1. Software Defined Coded Networking

    DEFF Research Database (Denmark)

    Di Paola, Carla; Roetter, Daniel Enrique Lucani; Palazzo, Sergio

    2017-01-01

    the quality of each link and even across neighbouring links and using simulations to show that an additional reduction of packet transmission in the order of 40% is possible. Second, to advocate for the use of network coding (NC) jointly with software defined networking (SDN) providing an implementation...

  2. New code of conduct

    CERN Multimedia

    Laëtitia Pedroso

    2010-01-01

    During his talk to the staff at the beginning of the year, the Director-General mentioned that a new code of conduct was being drawn up. What exactly is it and what is its purpose? Anne-Sylvie Catherin, Head of the Human Resources (HR) Department, talked to us about the whys and wherefores of the project.   Drawing by Georges Boixader from the cartoon strip “The World of Particles” by Brian Southworth. A code of conduct is a general framework laying down the behaviour expected of all members of an organisation's personnel. “CERN is one of the very few international organisations that don’t yet have one", explains Anne-Sylvie Catherin. “We have been thinking about introducing a code of conduct for a long time but lacked the necessary resources until now”. The call for a code of conduct has come from different sources within the Laboratory. “The Equal Opportunities Advisory Panel (read also the "Equal opportuni...

  3. (Almost) practical tree codes

    KAUST Repository

    Khina, Anatoly

    2016-08-15

    We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.

  4. Decoding Codes on Graphs

    Indian Academy of Sciences (India)

    having a probability P_i of being equal to a 1. Let us assume ... equal to a 0/1 has no bearing on the probability of the ... It is often ... bits (call this set S) whose individual bits add up to zero ... In the context of binary error-correcting codes, specifi ...

  5. The Redox Code.

    Science.gov (United States)

    Jones, Dean P; Sies, Helmut

    2015-09-20

    The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.

  6. Z₂-double cyclic codes

    OpenAIRE

    Borges, J.

    2014-01-01

    A binary linear code C is a Z2-double cyclic code if the set of coordinates can be partitioned into two subsets such that any cyclic shift of the coordinates of both subsets leaves invariant the code. These codes can be identified as submodules of the Z2[x]-module Z2[x]/(x^r − 1) × Z2[x]/(x^s − 1). We determine the structure of Z2-double cyclic codes giving the generator polynomials of these codes. The related polynomial representation of Z2-double cyclic codes and its duals, and the relation...

  7. Coding for urologic office procedures.

    Science.gov (United States)

    Dowling, Robert A; Painter, Mark

    2013-11-01

    This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.

  8. Essential idempotents and simplex codes

    Directory of Open Access Journals (Sweden)

    Gladys Chalom

    2017-01-01

    Full Text Available We define essential idempotents in group algebras and use them to prove that every minimal abelian non-cyclic code is a repetition code. Also we use them to prove that every minimal abelian code is equivalent to a minimal cyclic code of the same length. Finally, we show that a binary cyclic code is simplex if and only if it is of length of the form $n=2^k-1$ and is generated by an essential idempotent.

  9. Rate-adaptive BCH codes for distributed source coding

    DEFF Research Database (Denmark)

    Salmistraro, Matteo; Larsen, Knud J.; Forchhammer, Søren

    2013-01-01

    This paper considers Bose-Chaudhuri-Hocquenghem (BCH) codes for distributed source coding. A feedback channel is employed to adapt the rate of the code during the decoding process. The focus is on codes with short block lengths for independently coding a binary source X and decoding it given its...... strategies for improving the reliability of the decoded result are analyzed, and methods for estimating the performance are proposed. In the analysis, noiseless feedback and noiseless communication are assumed. Simulation results show that rate-adaptive BCH codes achieve better performance than low...... correlated side information Y. The proposed codes have been analyzed in a high-correlation scenario, where the marginal probability of each symbol, Xi in X, given Y is highly skewed (unbalanced). Rate-adaptive BCH codes are presented and applied to distributed source coding. Adaptive and fixed checking...
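
    As a loose illustration of rate adaptation over a feedback channel (using a trivial repetition code as a stand-in for the paper's BCH codes), the decoder below keeps requesting one more unit of redundancy until its estimate stabilizes; the noise level and stopping rule are invented.

    ```python
    # Toy feedback-driven rate adaptation with a repetition code.
    import random

    rng = random.Random(7)
    x = [rng.randrange(2) for _ in range(32)]     # source block
    p_flip = 0.2                                  # correlation/channel noise

    def noisy_copy(bits):
        return [b ^ (rng.random() < p_flip) for b in bits]

    copies, estimate = [], None
    while True:
        copies.append(noisy_copy(x))              # request more redundancy
        votes = [sum(c[i] for c in copies) for i in range(len(x))]
        new_est = [int(v * 2 > len(copies)) for v in votes]
        # Heuristic stop: the majority-vote estimate has stabilized.
        if new_est == estimate and len(copies) >= 3:
            break                                  # decoder signals "enough"
        estimate = new_est

    errors = sum(a != b for a, b in zip(estimate, x))
    print("copies used:", len(copies), "residual errors:", errors)
    ```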

  10. Lawrence Livermore National Laboratory (LLNL) Experimental Test Site (Site 300) Salinity Evaluation and Minimization Plan for Cooling Towers and Mechanical Equipment Discharges

    Energy Technology Data Exchange (ETDEWEB)

    Daily III, W D

    2010-02-24

    This document was created to comply with the Central Valley Regional Water Quality Control Board (CVRWQCB) Waste Discharge Requirement (Order No. 98-148). This order established new requirements to assess the effect of and effort required to reduce salts in process water discharged to the subsurface. This includes the review of technical, operational, and management options available to reduce total dissolved solids (TDS) concentrations in cooling tower and mechanical equipment water discharges at Lawrence Livermore National Laboratory's (LLNL's) Experimental Test Site (Site 300) facility. It was observed that for the six cooling towers currently in operation, the total volume of groundwater used as make up water is about 27 gallons per minute and the discharge to the subsurface via percolation pits is 13 gallons per minute. The extracted groundwater has a TDS concentration of 700 mg/L. The cooling tower discharge concentrations range from 700 to 1,400 mg/L. There is also a small volume of mechanical equipment effluent being discharged to percolation pits, with a TDS range from 400 to 3,300 mg/L. The cooling towers and mechanical equipment are maintained and operated in a satisfactory manner. No major leaks were identified. Currently, there are no re-use options being employed. Several approaches known to reduce the blow down flow rate and/or TDS concentration being discharged to the percolation pits and septic systems were reviewed for technical feasibility and cost efficiency. These options range from efforts as simple as eliminating leaks to implementing advanced and innovative treatment methods. The various options considered, and their anticipated effect on water consumption, discharge volumes, and reduced concentrations are listed and compared in this report. Based on the assessment, it was recommended that there is enough variability in equipment usage, chemistry, flow rate, and discharge configurations that each discharge location at Site 300

  11. Lawrence Livermore National Laboratory (LLNL) Experimental Test Site (Site 300) Salinity Evaluation and Minimization Plan for Cooling Towers and Mechanical Equipment Discharges

    International Nuclear Information System (INIS)

    Daily, W.D. III

    2010-01-01

    This document was created to comply with the Central Valley Regional Water Quality Control Board (CVRWQCB) Waste Discharge Requirement (Order No. 98-148). This order established new requirements to assess the effect of and effort required to reduce salts in process water discharged to the subsurface. This includes the review of technical, operational, and management options available to reduce total dissolved solids (TDS) concentrations in cooling tower and mechanical equipment water discharges at Lawrence Livermore National Laboratory's (LLNL's) Experimental Test Site (Site 300) facility. It was observed that for the six cooling towers currently in operation, the total volume of groundwater used as make up water is about 27 gallons per minute and the discharge to the subsurface via percolation pits is 13 gallons per minute. The extracted groundwater has a TDS concentration of 700 mg/L. The cooling tower discharge concentrations range from 700 to 1,400 mg/L. There is also a small volume of mechanical equipment effluent being discharged to percolation pits, with a TDS range from 400 to 3,300 mg/L. The cooling towers and mechanical equipment are maintained and operated in a satisfactory manner. No major leaks were identified. Currently, there are no re-use options being employed. Several approaches known to reduce the blow down flow rate and/or TDS concentration being discharged to the percolation pits and septic systems were reviewed for technical feasibility and cost efficiency. These options range from efforts as simple as eliminating leaks to implementing advanced and innovative treatment methods. The various options considered, and their anticipated effect on water consumption, discharge volumes, and reduced concentrations are listed and compared in this report. Based on the assessment, it was recommended that there is enough variability in equipment usage, chemistry, flow rate, and discharge configurations that each discharge location at Site 300 should be

  12. Modifications to LLNL Plutonium Packaging Systems (PuPS) to achieve ASME VIII UW-13.2(d) Requirements for the DOE Standard 3013-00 Outer Can Weld

    International Nuclear Information System (INIS)

    Riley, D; Dodson, K

    2001-01-01

    The Lawrence Livermore National Laboratory (LLNL) Plutonium Packaging System (PuPS) prepares packages to meet DOE Standard 3013 (Reference 1). The PuPS equipment was supplied by British Nuclear Fuels Limited (BNFL). DOE Standard 3013 requires that the welding of the Outer Can meet ASME Section VIII Division 1 (Reference 2). ASME Section VIII refers to ASME Section IX (Reference 3) for most of the welding requirements, but UW-13.2(d) of Section VIII requires a certain depth and width of the weld. In this document the UW-13.2(d) requirement is described as the (a+b)/2t_s ratio. This ratio has to be greater than or equal to one to meet the requirements of UW-13.2(d). The Outer Can welds had not been meeting this requirement. Three methods are being followed to resolve this issue: (1) modify the welding parameters to achieve the requirement, (2) submit a weld case to ASME that changes the UW-13.2(d) requirement for their review and approval, and (3) change the requirements in DOE-STD-3013. Each of these methods is being pursued. This report addresses how the first method was applied to the LLNL PuPS. The experimental work involved adjusting the Outer Can rotational speed and the power applied to the can. These adjustments resulted in being able to achieve the ASME VIII UW-13.2(d) requirement.

  13. Entanglement-assisted quantum MDS codes constructed from negacyclic codes

    Science.gov (United States)

    Chen, Jianzhang; Huang, Yuanyuan; Feng, Chunhui; Chen, Riqing

    2017-12-01

    Recently, entanglement-assisted quantum codes have been constructed from cyclic codes by some scholars. However, determining the number of shared pairs required to construct entanglement-assisted quantum codes is not an easy task. In this paper, we propose a decomposition of the defining set of negacyclic codes. Based on this method, four families of entanglement-assisted quantum codes constructed in this paper satisfy the entanglement-assisted quantum Singleton bound, where the minimum distance satisfies q+1 ≤ d ≤ (n+2)/2. Furthermore, we construct two families of entanglement-assisted quantum codes with maximal entanglement.

  14. TMRBAR: a code to calculate plasma parameters for tandem-mirror reactors operating in the MARS mode

    International Nuclear Information System (INIS)

    Campbell, R.B.

    1983-01-01

    The purpose of this report is to document the plasma power balance model currently used by LLNL to calculate steady state operating points for tandem mirror reactors. The code developed from this model, TMRBAR, has been used to predict the performance and define supplementary heating requirements for drivers used in the Mirror Advanced Reactor Study (MARS) and for the Fusion Power Demonstration (FPD) study. The equations solved included particle and energy balance for central cell and end cell species, quasineutrality at several cardinal points in the end cell region, as well as calculations of volumes, densities and average energies based on given constraints of beta profiles and fusion power output. Alpha particle ash is treated self-consistently, but no other impurity species is treated

  15. Efficient convolutional sparse coding

    Science.gov (United States)

    Wohlberg, Brendt

    2017-06-20

    Computationally efficient algorithms may be applied for fast dictionary learning solving the convolutional sparse coding problem in the Fourier domain. More specifically, efficient convolutional sparse coding may be derived within an alternating direction method of multipliers (ADMM) framework that utilizes fast Fourier transforms (FFT) to solve the main linear system in the frequency domain. Such algorithms may enable a significant reduction in computational cost over conventional approaches by implementing a linear solver for the most critical and computationally expensive component of the conventional iterative algorithm. The theoretical computational cost of the algorithm may be reduced from O(M^3 N) to O(M N log N), where N is the dimensionality of the data and M is the number of elements in the dictionary. This significant improvement in efficiency may greatly increase the range of problems that can practically be addressed via convolutional sparse representations.
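
    The heart of the approach is that, with circular boundary conditions, a convolutional least-squares subproblem diagonalizes in the Fourier domain. The one-filter sketch below (invented data, a simple Tikhonov term in place of the full multi-filter ADMM iteration) shows the pointwise frequency-domain solve.

    ```python
    # One-filter convolutional least squares solved pointwise after an FFT:
    # argmin_x ||d * x - s||^2 + rho ||x||^2, with * circular convolution.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 256
    d = np.zeros(N)
    d[:8] = rng.standard_normal(8)                 # a short filter
    x_true = (rng.random(N) < 0.05) * rng.standard_normal(N)   # sparse signal
    s = np.real(np.fft.ifft(np.fft.fft(d) * np.fft.fft(x_true)))

    rho = 1e-2
    Df, Sf = np.fft.fft(d), np.fft.fft(s)
    # Each frequency decouples into a scalar least-squares problem,
    # O(N log N) overall instead of a dense linear solve.
    Xf = np.conj(Df) * Sf / (np.abs(Df) ** 2 + rho)
    x = np.real(np.fft.ifft(Xf))
    print("relative residual:",
          np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
    ```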

  16. Coded Network Function Virtualization

    DEFF Research Database (Denmark)

    Al-Shuwaili, A.; Simone, O.; Kliewer, J.

    2016-01-01

    Network function virtualization (NFV) prescribes the instantiation of network functions on general-purpose network devices, such as servers and switches. While yielding a more flexible and cost-effective network architecture, NFV is potentially limited by the fact that commercial off-the-shelf hardware is less reliable than the dedicated network elements used in conventional cellular deployments. The typical solution for this problem is to duplicate network functions across geographically distributed hardware in order to ensure diversity. In contrast, this letter proposes to leverage channel coding in order to enhance the robustness of NFV to hardware failure. The proposed approach targets the network function of uplink channel decoding, and builds on the algebraic structure of the encoded data frames in order to perform in-network coding on the signals to be processed at different servers...

  17. The NIMROD Code

    Science.gov (United States)

    Schnack, D. D.; Glasser, A. H.

    1996-11-01

    NIMROD is a new code system that is being developed for the analysis of modern fusion experiments. It is being designed from the beginning to make the maximum use of massively parallel computer architectures and computer graphics. The NIMROD physics kernel solves the three-dimensional, time-dependent two-fluid equations with neo-classical effects in toroidal geometry of arbitrary poloidal cross section. The NIMROD system also includes a pre-processor, a grid generator, and a post processor. User interaction with NIMROD is facilitated by a modern graphical user interface (GUI). The NIMROD project is using Quality Function Deployment (QFD) team management techniques to minimize re-engineering and reduce code development time. This paper gives an overview of the NIMROD project. Operation of the GUI is demonstrated, and the first results from the physics kernel are given.

  18. Computer code FIT

    International Nuclear Information System (INIS)

    Rohmann, D.; Koehler, T.

    1987-02-01

    This is a description of the computer code FIT, written in FORTRAN-77 for a PDP 11/34. FIT is an interactive program to deduce the position, width, and intensity of lines in X-ray spectra (max. length of 4K channels). The lines (max. 30 lines per fit) may have Gaussian or Voigt profiles, as well as exponential tails. Spectrum and fit can be displayed on a Tektronix terminal. (orig.)
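
    FIT itself is interactive FORTRAN-77, but its core operation, least-squares fitting of line profiles to spectrum channels, can be sketched in a few lines of modern Python (Gaussian profile only here; function names and test values are invented for the example):

        import numpy as np
        from scipy.optimize import curve_fit

        def gauss(x, pos, width, intensity):
            # One Gaussian line; FIT additionally supports Voigt profiles,
            # exponential tails, and up to 30 lines per fit.
            return intensity * np.exp(-0.5 * ((x - pos) / width) ** 2)

        channels = np.arange(256, dtype=float)
        rng = np.random.default_rng(0)
        spectrum = gauss(channels, 120.0, 4.0, 500.0) + rng.poisson(10.0, channels.size)

        popt, pcov = curve_fit(gauss, channels, spectrum, p0=[110.0, 5.0, 400.0])
        print("position, width, intensity:", popt)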

  19. Discrete Sparse Coding.

    Science.gov (United States)

    Exarchakis, Georgios; Lücke, Jörg

    2017-11-01

    Sparse coding algorithms with continuous latent variables have been the subject of a large number of studies. However, discrete latent spaces for sparse coding have been largely ignored. In this work, we study sparse coding with latents described by discrete instead of continuous prior distributions. We consider the general case in which the latents (while being sparse) can take on any value of a finite set of possible values and in which we learn the prior probability of any value from data. This approach can be applied to any data generated by discrete causes, and it can be applied as an approximation of continuous causes. As the prior probabilities are learned, the approach then allows for estimating the prior shape without assuming specific functional forms. To efficiently train the parameters of our probabilistic generative model, we apply a truncated expectation-maximization approach (expectation truncation) that we modify to work with a general discrete prior. We evaluate the performance of the algorithm by applying it to a variety of tasks: (1) we use artificial data to verify that the algorithm can recover the generating parameters from a random initialization, (2) use image patches of natural images and discuss the role of the prior for the extraction of image components, (3) use extracellular recordings of neurons to present a novel method of analysis for spiking neurons that includes an intuitive discretization strategy, and (4) apply the algorithm on the task of encoding audio waveforms of human speech. The diverse set of numerical experiments presented in this letter suggests that discrete sparse coding algorithms can scale efficiently to work with realistic data sets and provide novel statistical quantities to describe the structure of the data.

  20. Code of Practice

    International Nuclear Information System (INIS)

    Doyle, Colin; Hone, Christopher; Nowlan, N.V.

    1984-05-01

    This Code of Practice introduces accepted safety procedures associated with the use of alpha, beta, gamma and X-radiation in secondary schools (pupils aged 12 to 18) in Ireland, and summarises good practice and procedures as they apply to radiation protection. Typical dose rates at various distances from sealed sources are quoted, and simplified equations are used to demonstrate dose and shielding calculations. The regulatory aspects of radiation protection are outlined, and references to statutory documents are given
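
    As a worked illustration of the kind of dose and shielding calculation the Code demonstrates, combining the inverse-square law with exponential attenuation (the numbers below are approximate textbook values chosen for the example, not figures taken from the Code):

        import math

        GAMMA = 0.0927   # approx. dose-rate constant of Cs-137 [uSv*m^2/(MBq*h)]
        A = 0.37         # activity of a small sealed source [MBq]
        d = 0.5          # distance from the source [m]
        mu = 1.07        # approx. linear attenuation coefficient of lead at 662 keV [1/cm]
        t = 1.0          # lead shield thickness [cm]

        unshielded = GAMMA * A / d**2              # inverse-square law
        shielded = unshielded * math.exp(-mu * t)  # exponential attenuation
        print(f"{unshielded:.3f} uSv/h at {d} m; {shielded:.3f} uSv/h behind {t} cm of lead")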

  1. Tokamak simulation code manual

    International Nuclear Information System (INIS)

    Chung, Moon Kyoo; Oh, Byung Hoon; Hong, Bong Keun; Lee, Kwang Won

    1995-01-01

    The use of TSC (Tokamak Simulation Code), developed by the Princeton Plasma Physics Laboratory, is illustrated. For the KT-2 tokamak, time-dependent simulation of the axisymmetric toroidal plasma and of vertical stability has to be carried out with TSC in the design phase. In this report the physical modelling of TSC is described, and examples of its application at JAERI and SERI are illustrated, which will be useful when TSC is installed on the KAERI computer system. (Author) 15 refs., 6 figs., 3 tabs

  2. Status of MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and a graphical user interface.

  3. Codes of Good Governance

    DEFF Research Database (Denmark)

    Beck Jørgensen, Torben; Sørensen, Ditte-Lene

    2013-01-01

    Good governance is a broad concept used by many international organizations to spell out how states or countries should be governed. Definitions vary, but there is a clear core of common public values, such as transparency, accountability, effectiveness, and the rule of law. It is quite likely......, transparency, neutrality, impartiality, effectiveness, accountability, and legality. The normative context of public administration, as expressed in codes, seems to ignore the New Public Management and Reinventing Government reform movements....

  4. Orthopedics coding and funding.

    Science.gov (United States)

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of an inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to the management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken into account in the payment made to the institution, as there are no indicators for it; work needs to be done on this topic. Copyright © 2014. Published by Elsevier Masson SAS.

  5. Code Modernization of VPIC

    Science.gov (United States)

    Bird, Robert; Nystrom, David; Albright, Brian

    2017-10-01

    The ability of scientific simulations to effectively deliver performant computation is increasingly being challenged by successive generations of high-performance computing architectures. Code development to support efficient computation on these modern architectures is both expensive and highly complex; if it is approached without due care, it may also not be directly transferable between subsequent hardware generations. Previous works have discussed techniques to support the process of adapting a legacy code for modern hardware generations, but despite breakthroughs in the areas of mini-app development, performance portability, and cache-oblivious algorithms the problem still remains largely unsolved. In this work we demonstrate how a focus on platform-agnostic modern code development can be applied to Particle-in-Cell (PIC) simulations to facilitate effective scientific delivery. This work builds directly on our previous work optimizing VPIC, in which we replaced intrinsics-based vectorization with compiler-generated auto-vectorization to improve the performance and portability of VPIC. In this work we present the use of a specialized SIMD queue for processing some particle operations, and also preview a GPU-capable OpenMP variant of VPIC. Finally, we include lessons learned. Work performed under the auspices of the U.S. Dept. of Energy by the Los Alamos National Security, LLC Los Alamos National Laboratory under contract DE-AC52-06NA25396 and supported by the LANL LDRD program.

  6. MELCOR computer code manuals

    Energy Technology Data Exchange (ETDEWEB)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L. [Sandia National Labs., Albuquerque, NM (United States); Hodge, S.A.; Hyman, C.R.; Sanders, R.L. [Oak Ridge National Lab., TN (United States)

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  7. MELCOR computer code manuals

    International Nuclear Information System (INIS)

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August, 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package

  8. Quality Improvement of MARS Code and Establishment of Code Coupling

    International Nuclear Information System (INIS)

    Chung, Bub Dong; Jeong, Jae Jun; Kim, Kyung Doo

    2010-04-01

    The improvement of MARS code quality and its coupling with a regulatory auditing code have been accomplished for the establishment of a self-reliant, technology-based regulatory auditing system. A unified auditing system code was also realized by implementing the CANDU-specific models and correlations. As part of the quality assurance activities, various QA reports were published through the code assessments. The code manuals were updated, and a new manual describing the new models and correlations was published. The code coupling methods were verified through the exercise of plant applications. Education and training seminars and technology transfer were carried out for the code users. The developed MARS-KS is utilized as a reliable auditing tool for resolving safety issues and for other regulatory calculations. The code can also serve as a base technology for GEN IV reactor applications.

  9. Design of convolutional tornado code

    Science.gov (United States)

    Zhou, Hui; Yang, Yao; Gao, Hongmin; Tan, Lu

    2017-09-01

    As a linear block code, the traditional tornado (tTN) code is inefficient in burst-erasure environments, and its multi-level structure may lead to high encoding/decoding complexity. This paper presents a convolutional tornado (cTN) code which improves the burst-erasure protection capability by applying the convolution property to the tTN code, and reduces computational complexity by abrogating the multi-level structure. The simulation results show that the cTN code can provide better packet-loss protection performance with lower computational complexity than the tTN code.

  10. Random linear codes in steganography

    Directory of Open Access Journals (Sweden)

    Kamil Kaczyński

    2016-12-01

    Full Text Available Syndrome coding using linear codes is a technique that allows improvement in the parameters of steganographic algorithms. The use of random linear codes gives great flexibility in choosing the parameters of the linear code; in parallel, it offers easy generation of the parity-check matrix. In this paper, a modification of the LSB algorithm is presented. A random linear [8, 2] code was used as the basis for the algorithm modification. The implementation of the proposed algorithm, along with a practical evaluation of the algorithm's parameters on test images, is presented. Keywords: steganography, random linear codes, RLC, LSB
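
    A rough sketch of the matrix-embedding idea behind such syndrome coding follows. For clarity it uses a systematic parity-check matrix, so that every syndrome is reachable, whereas the paper draws the code at random; the [8, 2] dimensions match the paper's example, everything else is invented for illustration.

        import itertools
        import numpy as np

        n, k = 8, 2
        rng = np.random.default_rng(1)
        # Systematic parity-check matrix H = [I | P] of the [8, 2] code
        H = np.hstack([np.eye(n - k, dtype=int), rng.integers(0, 2, (n - k, k))])

        def embed(cover, message):
            # Flip a minimum-weight pattern e so that H @ (cover ^ e) = message (mod 2);
            # found here by brute force over patterns of increasing weight.
            target = (message + H @ cover) % 2
            for w in range(n + 1):
                for idx in itertools.combinations(range(n), w):
                    e = np.zeros(n, dtype=int)
                    e[list(idx)] = 1
                    if np.array_equal(H @ e % 2, target):
                        return cover ^ e

        def extract(stego):
            return H @ stego % 2   # the receiver just recomputes the syndrome

        cover = rng.integers(0, 2, n)         # LSBs of an 8-pixel cover block
        message = rng.integers(0, 2, n - k)   # 6 message bits per block
        stego = embed(cover, message)
        assert np.array_equal(extract(stego), message)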

  11. Containment Code Validation Matrix

    International Nuclear Information System (INIS)

    Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah

    2014-01-01

    The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description

  12. Decoding of concatenated codes with interleaved outer codes

    DEFF Research Database (Denmark)

    Justesen, Jørn; Høholdt, Tom; Thommesen, Christian

    2004-01-01

    Recently Bleichenbacher et al. proposed a decoding algorithm for interleaved (N, K) Reed-Solomon codes, which allows close to N-K errors to be corrected in many cases. We discuss the application of this decoding algorithm to concatenated codes.

  13. TASS code topical report. V.1 TASS code technical manual

    International Nuclear Information System (INIS)

    Sim, Suk K.; Chang, W. P.; Kim, K. D.; Kim, H. C.; Yoon, H. Y.

    1997-02-01

    TASS 1.0 code has been developed at KAERI for the initial and reload non-LOCA safety analysis for the operating PWRs as well as the PWRs under construction in Korea. TASS code will replace the various vendors' non-LOCA safety analysis codes currently used for the Westinghouse and ABB-CE type PWRs in Korea. This can be achieved through TASS code input modifications specific to each reactor type. The TASS code can be run interactively through keyboard operation. A semimodular configuration used in developing the TASS code enables the user to easily implement new models. TASS code has been programmed in FORTRAN77, which makes it easy to install and port to different computer environments. The TASS code can be utilized for steady-state simulation as well as for non-LOCA transient simulations such as power excursions, reactor coolant pump trips, load rejections, loss of feedwater, steam line breaks, steam generator tube ruptures, rod withdrawal and drop, and anticipated transients without scram (ATWS). The malfunctions of the control systems, components, and operator actions, and the transients caused by these malfunctions, can be easily simulated using the TASS code. This technical report describes the TASS 1.0 code models, including the reactor thermal hydraulic, reactor core, and control models. This TASS code technical manual has been prepared as a part of the TASS code manual, which includes the TASS code user's manual and the TASS code validation report, and will be submitted to the regulatory body as a TASS code topical report for licensing non-LOCA safety analysis for the Westinghouse and ABB-CE type PWRs operating and under construction in Korea. (author). 42 refs., 29 tabs., 32 figs

  14. Construction of new quantum MDS codes derived from constacyclic codes

    Science.gov (United States)

    Taneja, Divya; Gupta, Manish; Narula, Rajesh; Bhullar, Jaskaran

    Obtaining quantum maximum distance separable (MDS) codes from dual-containing classical constacyclic codes using the Hermitian construction has paved a path to addressing the challenges related to such constructions. Using the same technique, some new parameters of quantum MDS codes have been constructed here. One set of parameters obtained in this paper achieves a much larger distance than earlier work. The remaining constructed parameters of quantum MDS codes have large minimum distance and had not been explored previously.

  15. Combinatorial neural codes from a mathematical coding theory perspective.

    Science.gov (United States)

    Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L

    2013-07-01

    Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
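
    The basic quantity examined is the code's minimum Hamming distance d_min, which bounds the number of errors t = (d_min - 1)/2 that nearest-codeword decoding is guaranteed to correct. A toy check (the codewords are an invented overlapping, receptive-field-style example, not the paper's data):

        import itertools
        import numpy as np

        codewords = np.array([
            [1, 1, 0, 0, 0],
            [0, 1, 1, 0, 0],
            [0, 0, 1, 1, 0],
            [0, 0, 0, 1, 1],
        ])

        d_min = min(int(np.sum(a != b)) for a, b in itertools.combinations(codewords, 2))
        t = (d_min - 1) // 2   # errors guaranteed correctable
        print(f"minimum distance {d_min}, corrects up to {t} errors")

    Here the heavy overlap between neighboring codewords gives d_min = 2 and t = 0: redundancy without error correction, which is exactly the effect the abstract describes.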

  16. Convolutional coding techniques for data protection

    Science.gov (United States)

    Massey, J. L.

    1975-01-01

    Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.

  17. High Energy Transport Code HETC

    International Nuclear Information System (INIS)

    Gabriel, T.A.

    1985-09-01

    The physics contained in the High Energy Transport Code (HETC), in particular the collision models, are discussed. An application using HETC as part of the CALOR code system is also given. 19 refs., 5 figs., 3 tabs

  18. Code stroke in Asturias.

    Science.gov (United States)

    Benavente, L; Villanueva, M J; Vega, P; Casado, I; Vidal, J A; Castaño, B; Amorín, M; de la Vega, V; Santos, H; Trigo, A; Gómez, M B; Larrosa, D; Temprano, T; González, M; Murias, E; Calleja, S

    2016-04-01

    Intravenous thrombolysis with alteplase is an effective treatment for ischaemic stroke when applied during the first 4.5 hours, but less than 15% of patients have access to this technique. Mechanical thrombectomy is more frequently able to recanalise proximal occlusions in large vessels, but the infrastructure it requires makes it even less available. We describe the implementation of code stroke in Asturias, as well as the process of adapting various existing resources for urgent stroke care in the region. By considering these resources, and the demographic and geographic circumstances of our region, we examine ways of reorganising the code stroke protocol that would optimise treatment times and provide the most appropriate treatment for each patient. We distributed the 8 health districts in Asturias so as to permit referral of candidates for reperfusion therapies to either of the 2 hospitals with 24-hour stroke units and on-call neurologists and providing IV fibrinolysis. Hospitals were assigned according to proximity and stroke severity; the most severe cases were immediately referred to the hospital with on-call interventional neurology care. Patient triage was provided by pre-hospital emergency services according to the NIHSS score. Modifications to code stroke in Asturias have allowed us to apply reperfusion therapies with good results, while emphasising equitable care and managing the severity-time ratio to offer the best and safest treatment for each patient as soon as possible. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  19. Decoding Xing-Ling codes

    DEFF Research Database (Denmark)

    Nielsen, Rasmus Refslund

    2002-01-01

    This paper describes an efficient decoding method for a recent construction of good linear codes as well as an extension to the construction. Furthermore, asymptotic properties and list decoding of the codes are discussed.

  20. WWER reactor physics code applications

    International Nuclear Information System (INIS)

    Gado, J.; Kereszturi, A.; Gacs, A.; Telbisz, M.

    1994-01-01

    The coupled steady-state reactor physics and thermohydraulic code system KARATE has been developed and applied for WWER-1000 and WWER-440 operational calculations. The 3D coupled kinetic code KIKO3D has been developed and validated for WWER-440 accident analysis applications. The coupled kinetic code SMARTA developed by VTT Helsinki has been applied for WWER-440 accident analysis. The paper gives a summary of the experience in code development and application. (authors). 10 refs., 2 tabs., 5 figs

  1. The path of code linting

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    Join the path of code linting and discover how it can help you reach higher levels of programming enlightenment. Today we will cover how to embrace code linters to offload the cognitive strain of preserving style standards in your code base, as well as avoiding error-prone constructs. Additionally, I will show you the journey ahead for integrating several code linters into the programming tools you already use, with very little effort.
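
    The core of a linter, flagging error-prone constructs by inspecting the syntax tree rather than executing anything, fits in a few lines. A minimal sketch with one invented rule (mutable default arguments in Python):

        import ast

        SOURCE = "def f(x, acc=[]):\n    acc.append(x)\n    return acc\n"

        for node in ast.walk(ast.parse(SOURCE)):
            if isinstance(node, ast.FunctionDef):
                for default in node.args.defaults:
                    if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                        print(f"line {default.lineno}: mutable default in {node.name}()")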

  2. The CORSYS neutronics code system

    International Nuclear Information System (INIS)

    Caner, M.; Krumbein, A.D.; Saphier, D.; Shapira, M.

    1994-01-01

    The purpose of this work is to assemble a code package for LWR core physics including coupled neutronics, burnup and thermal hydraulics. The CORSYS system is built around the cell code WIMS (for group microscopic cross-section calculations) and the 3-dimensional diffusion code CITATION (for burnup and fuel management). We are implementing such a system on an IBM RS-6000 workstation. The code was tested with a simplified model of the Zion Unit 2 PWR. (authors). 6 refs., 8 figs., 1 tabs

  3. Bar codes for nuclear safeguards

    International Nuclear Information System (INIS)

    Keswani, A.N.; Bieber, A.M. Jr.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  4. Bar codes for nuclear safeguards

    International Nuclear Information System (INIS)

    Keswani, A.N.; Bieber, A.M.

    1983-01-01

    Bar codes similar to those used in supermarkets can be used to reduce the effort and cost of collecting nuclear materials accountability data. A wide range of equipment is now commercially available for printing and reading bar-coded information. Several examples of each of the major types of commercially-available equipment are given, and considerations are discussed both for planning systems using bar codes and for choosing suitable bar code equipment

  5. Quick response codes in Orthodontics

    Directory of Open Access Journals (Sweden)

    Moidin Shakil

    2015-01-01

    Full Text Available Quick response (QR) codes are two-dimensional barcodes that encode a large amount of information. QR codes in Orthodontics are an innovative approach in which patient details, radiographic interpretation, and treatment plan can be encoded. Implementing QR codes in Orthodontics will save time, reduce paperwork, and minimize the manual effort in storage and retrieval of patient information during subsequent stages of treatment.

  6. Multiple LDPC decoding for distributed source coding and video coding

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Luong, Huynh Van; Huang, Xin

    2011-01-01

    Distributed source coding (DSC) is a coding paradigm for systems which fully or partly exploit the source statistics at the decoder to reduce the computational burden at the encoder. Distributed video coding (DVC) is one example. This paper considers the use of Low-Density Parity-Check Accumulate (LDPCA) codes in a DSC scheme with feedback. To improve the LDPC coding performance in the context of DSC and DVC, while retaining short encoder blocks, this paper proposes multiple parallel LDPC decoding. The proposed scheme passes soft information between decoders to enhance performance. Experimental...

  7. Cinder begin creative coding

    CERN Document Server

    Rijnieks, Krisjanis

    2013-01-01

    Presented in an easy-to-follow, tutorial-style format, this book will lead you step-by-step through the multi-faceted uses of Cinder. "Cinder: Begin Creative Coding" is for people who already have experience in programming. It can serve as a transition from a previous background in Processing, Java in general, JavaScript, openFrameworks, C++ in general or ActionScript to the framework covered in this book, namely Cinder. If you like quick and easy-to-follow tutorials that will let you see progress in less than an hour - this book is for you. If you are searching for a book that will explain al

  8. UNSPEC: revisited (semaphore code)

    International Nuclear Information System (INIS)

    Neifert, R.D.

    1981-01-01

    The UNSPEC code is used to solve the problem of unfolding an observed x-ray spectrum given the response matrix of the measuring system and the measured signal values. UNSPEC uses an iterative technique to solve the unfold problem. Due to experimental errors in the measured signal values and/or computer round-off errors, discontinuities and oscillatory behavior may occur in the iterated spectrum. These can be suppressed by smoothing the results after each iteration. Input/output options and control cards are explained; sample input and output are provided

  9. The FLIC conversion codes

    International Nuclear Information System (INIS)

    Basher, J.C.

    1965-05-01

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  10. SPRAY code user's report

    International Nuclear Information System (INIS)

    Shire, P.R.

    1977-03-01

    The SPRAY computer code has been developed to model the effects of postulated sodium spray release from LMFBR piping within containment chambers. The calculation method utilizes gas convection, heat transfer and droplet combustion theory to calculate the pressure and temperature effects within the enclosure. The applicable range is 0-21 mol percent oxygen and 0.02-0.30 inch droplets, with or without humidity. Droplet motion and large sodium surface area combine to produce rapid heat release and pressure rise within the enclosed volume

  11. The FLIC conversion codes

    Energy Technology Data Exchange (ETDEWEB)

    Basher, J C [General Reactor Physics Division, Atomic Energy Establishment, Winfrith, Dorchester, Dorset (United Kingdom)

    1965-05-15

    This report describes the FORTRAN programmes, FLIC 1 and FLIC 2. These programmes convert programmes coded in one dialect of FORTRAN to another dialect of the same language. FLIC 1 is a general pattern recognition and replacement programme whereas FLIC 2 contains extensions directed towards the conversion of FORTRAN II and S2 programmes to EGTRAN 1 - the dialect now in use on the Winfrith KDF9. FII or S2 statements are replaced where possible by their E1 equivalents; other statements which may need changing are flagged. (author)

  12. Code Generation with Templates

    CERN Document Server

    Arnoldus, Jeroen; Serebrenik, A

    2012-01-01

    Templates are used to generate all kinds of text, including computer code. Over the last decade, the use of templates has gained a lot of popularity due to the increase of dynamic web applications. Templates are a tool for programmers, and implementations of template engines are most often based on practical experience rather than on a theoretical background. This book reveals the mathematical background of templates and shows interesting findings for improving the practical use of templates. First, a framework to determine the necessary computational power for the template metalanguage is presen

  13. Order functions and evaluation codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Pellikaan, Ruud; van Lint, Jack

    1997-01-01

    Based on the notion of an order function we construct and determine the parameters of a class of error-correcting evaluation codes. This class includes the one-point algebraic geometry codes as well as the generalized Reed-Muller codes, and the parameters are determined without using the heavy machinery of algebraic geometry.

  14. Direct-semidirect (DSD) codes

    International Nuclear Information System (INIS)

    Cvelbar, F.

    1999-01-01

    Recent codes for direct-semidirect (DSD) model calculations in the form of answers to a detailed questionnaire are reviewed. These codes include those embodying the classical DSD approach covering only the transitions to the bound states (RAF, HIKARI, and those of the Bologna group), as well as the code CUPIDO++ that also treats transitions to unbound states. (author)

  15. Dual Coding, Reasoning and Fallacies.

    Science.gov (United States)

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  16. Strongly-MDS convolutional codes

    NARCIS (Netherlands)

    Gluesing-Luerssen, H; Rosenthal, J; Smarandache, R

    Maximum-distance separable (MDS) convolutional codes have the property that their free distance is maximal among all codes of the same rate and the same degree. In this paper, a class of MDS convolutional codes is introduced whose column distances reach the generalized Singleton bound at the

  17. Lattice polytopes in coding theory

    Directory of Open Access Journals (Sweden)

    Ivan Soprunov

    2015-05-01

    Full Text Available In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.

  18. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper discusses five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code are presented, a sample application is given, followed by a discussion of the present status and future development plans

  19. Geochemical computer codes. A review

    International Nuclear Information System (INIS)

    Andersson, K.

    1987-01-01

    In this report a review of available codes is performed and some code intercomparisons are also discussed. The number of codes treating natural waters (groundwater, lake water, sea water) is large. Most geochemical computer codes treat equilibrium conditions, although some codes with kinetic capability are available. A geochemical equilibrium model consists of a computer code, solving a set of equations by some numerical method, and a data base consisting of the thermodynamic data required for the calculations. There are some codes which treat coupled geochemical and transport modeling. Some of these codes solve the equilibrium and transport equations simultaneously, while others solve the equations separately. The coupled codes require a large computer capacity and have thus as yet seen limited use. Three code intercomparisons have been found in the literature. It may be concluded that there are many codes available for geochemical calculations, but most of them require a user who is quite familiar with the code. The user also has to know the geochemical system in order to judge the reliability of the results. A high-quality data base is necessary to obtain a reliable result. The best results may be expected for the major species of natural waters. For more complicated problems, including trace elements, precipitation/dissolution, adsorption, etc., the results seem to be less reliable. (With 44 refs.) (author)
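
    In miniature, a geochemical equilibrium model is exactly the combination described above: a thermodynamic data base (here just two constants) plus a numerical solver for the mass-action, mass-balance, and charge-balance equations. A sketch for a single weak acid in water, with illustrative values:

        import numpy as np
        from scipy.optimize import fsolve

        Ka, Kw, C = 1.8e-5, 1.0e-14, 0.01   # acid constant, ion product of water, total conc. [mol/L]

        def residuals(u):
            h, a = u                      # [H+], [A-]
            ha = C - a                    # mass balance gives [HA]
            return [h * a - Ka * ha,      # mass action: Ka = [H+][A-]/[HA]
                    h - a - Kw / h]       # charge balance: [H+] = [A-] + [OH-]

        h, a = fsolve(residuals, x0=[1e-4, 1e-4])
        print(f"pH = {-np.log10(h):.2f}")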

  20. Final Report for ''Client Server Software for the National Transport Code Collaboration''

    International Nuclear Information System (INIS)

    John R Cary, Johan A Carlsson

    2006-01-01

    The Tech-X contribution to the NTCC project was completed on 03/31/06. Below are some of the highlights of the final year. A TEQ users meeting was held at the Sherwood 2005 conference, and a tech-support mailing list was created (teq-users(at)fusion.txcorp.com). The stand-alone separatrix module was added to the NTCC repository and is available on the web. For the main TEQ module a portable build system was developed (based on GNU Autotools and similar to the separatrix build system). IBM xlf in particular had problems with mixed code (F77 with F90 snippets) in the same file, and approximately 6000 lines of code were rewritten as pure F90. Circular dependencies between F90 modules were resolved to robustly allow a correct compilation order. Exception handling was implemented in both the separatrix and TEQ modules, and a user manual was written for TEQ. Johan Carlsson visited LLNL 05/16/05-05/20/05.

  1. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Lei Ye

    2009-01-01

    Full Text Available This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  2. New quantum codes constructed from quaternary BCH codes

    Science.gov (United States)

    Xu, Gen; Li, Ruihu; Guo, Luobin; Ma, Yuena

    2016-10-01

    In this paper, we first study the construction of new quantum error-correcting codes (QECCs) from three classes of quaternary imprimitive BCH codes. As a result, the improved maximal designed distances of these narrow-sense imprimitive Hermitian dual-containing quaternary BCH codes are determined to be much larger, for each code length, than the results given by Aly et al. (IEEE Trans Inf Theory 53:1183-1188, 2007). Thus, families of new QECCs are obtained, and the constructed QECCs have larger distance than those in the previous literature. Secondly, we apply a combinatorial construction to the imprimitive BCH codes with their corresponding primitive counterparts and construct many new linear quantum codes with good parameters, some of which have parameters exceeding the finite Gilbert-Varshamov bound for linear quantum codes.

  3. Quantum Codes From Cyclic Codes Over The Ring R_2

    International Nuclear Information System (INIS)

    Altinel, Alev; Güzeltepe, Murat

    2016-01-01

    Let R_2 denote the ring F_2 + μF_2 + υF_2 + μυF_2 + wF_2 + μwF_2 + υwF_2 + μυwF_2. In this study, we construct quantum codes from cyclic codes over the ring R_2, for arbitrary length n, with the restrictions μ^2 = 0, υ^2 = 0, w^2 = 0, μυ = υμ, μw = wμ, υw = wυ and μ(υw) = (μυ)w. Also, we give a necessary and sufficient condition for a cyclic code over R_2 to contain its dual. As a final point, we obtain the parameters of quantum error-correcting codes from cyclic codes over R_2, and we give an example of quantum error-correcting codes from cyclic codes over R_2. (paper)

  4. Separate Turbo Code and Single Turbo Code Adaptive OFDM Transmissions

    Directory of Open Access Journals (Sweden)

    Burr Alister

    2009-01-01

    Full Text Available Abstract This paper discusses the application of adaptive modulation and adaptive-rate turbo coding to orthogonal frequency-division multiplexing (OFDM), to increase throughput on the time- and frequency-selective channel. The adaptive turbo code scheme is based on a subband adaptive method, and compares two adaptive systems: a conventional approach where a separate turbo code is used for each subband, and a single turbo code adaptive system which uses a single turbo code over all subbands. Five modulation schemes (BPSK, QPSK, 8AMPM, 16QAM, and 64QAM) are employed, and the turbo code rates considered are 1/2 and 1/3. The performances of both systems with high (10^-2) and low (10^-4) BER targets are compared. Simulation results for throughput and BER show that the single turbo code adaptive system provides a significant improvement.

  5. Converter of a continuous code into the Grey code

    International Nuclear Information System (INIS)

    Gonchar, A.I.; TrUbnikov, V.R.

    1979-01-01

    Described is a converter of a continuous code into the Grey code, used in a 12-bit precision amplitude-to-digital converter to decrease the digital component of the spectrometer differential nonlinearity to ±0.7% over 98% of the measured range. The construction of the converter exploits the regularity with which ones and zeroes recycle in each bit of the Grey code as the number of pulses of the continuous code changes continuously. The converter is built from elements of the 155 series; the pulse repetition frequency of the continuous code at the converter input is 25 MHz
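
    The regularity the converter exploits is easiest to see in software, where the mapping from a continuous (natural binary) code to the Grey code is a single shift and XOR per word; the hardware converter realizes the same relation bit by bit:

        def binary_to_grey(n: int) -> int:
            # Each Grey bit is the XOR of two adjacent binary bits.
            return n ^ (n >> 1)

        def grey_to_binary(g: int) -> int:
            # Inverse by XOR-folding: bit i of the result is the XOR
            # of all Grey bits at positions >= i.
            n = 0
            while g:
                n ^= g
                g >>= 1
            return n

        assert all(grey_to_binary(binary_to_grey(n)) == n for n in range(4096))  # full 12-bit range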

  6. Supervised Transfer Sparse Coding

    KAUST Repository

    Al-Shedivat, Maruan

    2014-07-27

    A combination of the sparse coding and transfer learning techniques was shown to be accurate and robust in classification tasks where training and testing objects have a shared feature space but are sampled from different underlying distributions, i.e., belong to different domains. The key assumption in such case is that in spite of the domain disparity, samples from different domains share some common hidden factors. Previous methods often assumed that all the objects in the target domain are unlabeled, and thus the training set solely comprised objects from the source domain. However, in real world applications, the target domain often has some labeled objects, or one can always manually label a small number of them. In this paper, we explore such possibility and show how a small number of labeled data in the target domain can significantly leverage classification accuracy of the state-of-the-art transfer sparse coding methods. We further propose a unified framework named supervised transfer sparse coding (STSC) which simultaneously optimizes sparse representation, domain transfer and classification. Experimental results on three applications demonstrate that a little manual labeling and then learning the model in a supervised fashion can significantly improve classification accuracy.

  7. Two-terminal video coding.

    Science.gov (United States)

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  8. The Art of Readable Code

    CERN Document Server

    Boswell, Dustin

    2011-01-01

    As programmers, we've all seen source code that's so ugly and buggy it makes our brain ache. Over the past five years, authors Dustin Boswell and Trevor Foucher have analyzed hundreds of examples of "bad code" (much of it their own) to determine why they're bad and how they could be improved. Their conclusion? You need to write code that minimizes the time it would take someone else to understand it-even if that someone else is you. This book focuses on basic principles and practical techniques you can apply every time you write code. Using easy-to-digest code examples from different languag

  9. Sub-Transport Layer Coding

    DEFF Research Database (Denmark)

    Hansen, Jonas; Krigslund, Jeppe; Roetter, Daniel Enrique Lucani

    2014-01-01

    Packet losses in wireless networks dramatically curb the performance of TCP. This paper introduces a simple coding shim that aids IP-layer traffic in lossy environments while being transparent to transport-layer protocols. The proposed coding approach enables erasure correction while being oblivious to the congestion control algorithms of the utilised transport-layer protocol. Although our coding shim is indifferent towards the transport-layer protocol, we focus on the performance of TCP when run on top of our proposed coding mechanism, due to its widespread use. The coding shim provides gains...

  10. The First Six Months of the LLNL-CfPA-MSSSO Search for Baryonic Dark Matter in the Galaxy's Halo via its Gravitational Microlensing Signature

    Science.gov (United States)

    Cook, K.; Alcock, C.; Allsman, R.; Axelrod, T.; Bennett, D.; Marshall, S.; Stubbs, C.; Griest, K.; Perlmutter, S.; Sutherland, W.; Freeman, K.; Peterson, B.; Quinn, P.; Rodgers, A.

    1992-12-01

    This collaboration, dubbed the MACHO Project (an acronym for MAssive Compact Halo Objects), has refurbished the 1.27-m Great Melbourne Telescope at Mt. Stromlo and equipped it with a corrected 1° FOV. The prime focus corrector yields a red and a blue beam for simultaneous imaging in two passbands, 4500-6100 Å and 6100-7900 Å. Each beam is imaged by a 2x2 array of 2048x2048-pixel CCDs which are simultaneously read out from two amplifiers on each CCD. A 32-megapixel dual-color image of 0.5 square degree is clocked directly into computer memory in less than 70 seconds. We are using this system to monitor more than 10^7 stars in the Magellanic Clouds for gravitational microlensing events and will soon monitor an additional 10^7 stars in the bulge of our galaxy. Image data go directly into a reduction pipeline where photometry for the stars in an image is determined and stored in a database. An early version of this pipeline used a simple aperture photometry code, and results from this will be presented. A more sophisticated PSF-fitting photometry code is currently being installed in the pipeline, and results should also be available at the meeting. The PSF-fitting code has also been used to produce ~10^7 photometric measurements outside of the pipeline. This poster will present details of the instrumentation, data pipeline, observing conditions (weather and seeing), reductions and analyses for the first six months of dual-color observing. Eventually, we expect to be able to determine whether MACHOs are a significant component of the galactic halo in the mass range 10^-6 M_☉ ≲ M ≲ 100 M_☉.

  11. 75 FR 19944 - International Code Council: The Update Process for the International Codes and Standards

    Science.gov (United States)

    2010-04-16

    ... documents from ICC's Chicago District Office: International Code Council, 4051 W Flossmoor Road, Country... Energy Conservation Code. International Existing Building Code. International Fire Code. International...

  12. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.
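
    The weight-retaining property is easy to spot-check numerically. Below is a brute-force verification over GF(2) with c = 1 and small degrees only (the theorem itself covers general GF(q)):

        import numpy as np

        def x_plus_1_pow(i, size=64):
            # Coefficients of (x + 1)^i over GF(2); c[j] is the coefficient of x^j.
            c = np.zeros(size, dtype=int)
            c[0] = 1
            for _ in range(i):
                c = (c + np.roll(c, 1)) % 2   # multiply by (x + 1) mod 2
            return c

        rng = np.random.default_rng(0)
        for _ in range(1000):
            exps = sorted(rng.choice(8, size=int(rng.integers(1, 5)), replace=False))
            combo = sum(x_plus_1_pow(i) for i in exps) % 2
            # weight of the combination >= weight of the minimum-degree polynomial included
            assert combo.sum() >= x_plus_1_pow(exps[0]).sum()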

  13. Introduction of SCIENCE code package

    International Nuclear Information System (INIS)

    Lu Haoliang; Li Jinggang; Zhu Ya'nan; Bai Ning

    2012-01-01

    The SCIENCE code package is a set of neutronics tools based on 2D assembly calculations and 3D core calculations. It is made up of APOLLO2-F, SMART and SQUALE, and is used to perform the nuclear design and loading pattern analysis for the reactors in operation or under construction by the China Guangdong Nuclear Power Group. The purpose of this paper is to briefly present the physical and numerical models used in each computation code of the SCIENCE code package, including a description of the general structure of the package, the coupling between the APOLLO2-F transport lattice code and the SMART core nodal code, and the SQUALE code used for processing the core maps. (authors)

  14. Elements of algebraic coding systems

    CERN Document Server

    Cardoso da Rocha, Jr, Valdemar

    2014-01-01

    Elements of Algebraic Coding Systems is an introductory text to algebraic coding theory. In the first chapter, you'll gain inside knowledge of coding fundamentals, which is essential for a deeper understanding of state-of-the-art coding systems. This book is a quick reference for those who are unfamiliar with this topic, as well as for use with specific applications such as cryptography and communication. Linear error-correcting block codes through elementary principles span eleven chapters of the text. Cyclic codes, some finite field algebra, Goppa codes, algebraic decoding algorithms, and applications in public-key cryptography and secret-key cryptography are discussed, including problems and solutions at the end of each chapter. Three appendices cover the Gilbert bound and some related derivations, a derivation of the MacWilliams' identities based on the probability of undetected error, and two important tools for algebraic decoding, namely the finite field Fourier transform and the Euclidean algorithm f...

  15. Physical Layer Network Coding

    DEFF Research Database (Denmark)

    Fukui, Hironori; Yomo, Hironori; Popovski, Petar

    2013-01-01

    Physical layer network coding (PLNC) has the potential to improve throughput of multi-hop networks. However, most of the works are focused on the simple, three-node model with two-way relaying, not taking into account the fact that there can be other neighboring nodes that can cause/receive interference. The way to deal with this problem in distributed wireless networks is usage of MAC-layer mechanisms that make a spatial reservation of the shared wireless medium, similar to the well-known RTS/CTS in IEEE 802.11 wireless networks. In this paper, we investigate two-way relaying in presence of interfering nodes and usage of spatial reservation mechanisms. Specifically, we introduce a reserved area in order to protect the nodes involved in two-way relaying from the interference caused by neighboring nodes. We analytically derive the end-to-end rate achieved by PLNC considering the impact...

  16. Concatenated quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.

    1996-07-01

    One main problem for the future of practical quantum computing is to stabilize the computation against unwanted interactions with the environment and imperfections in the applied operations. Existing proposals for quantum memories and quantum channels require gates with asymptotically zero error to store or transmit an input quantum state for arbitrarily long times or distances with fixed error. This report gives a method which has the property that to store or transmit a qubit with maximum error ε requires gates with errors at most cε and storage or channel elements with error at most ε, independent of how long we wish to store the state or how far we wish to transmit it. The method relies on using concatenated quantum codes and hierarchically implemented recovery operations. The overhead of the method is polynomial in the time of storage or the distance of the transmission. Rigorous and heuristic lower bounds for the constant c are given.

  17. Code des baux 2018

    CERN Document Server

    Vial-Pedroletti, Béatrice; Kendérian, Fabien; Chavance, Emmanuelle; Coutan-Lapalus, Christelle

    2017-01-01

    The 2018 edition of the Code des baux offers extremely practical, reliable content, up to date as of 1 August 2017. This 16th edition notably incorporates: the decree of 27 July 2017 on the evolution of certain rents in the context of a new letting or a lease renewal, issued in application of article 18 of law no. 89-462 of 6 July 1989; the law of 27 January 2017 on equality and citizenship; the law of 9 December 2016 on transparency, the fight against corruption and the modernization of economic life; and the law of 18 November 2016 on the modernization of justice in the 21st century

  18. GOC: General Orbit Code

    International Nuclear Information System (INIS)

    Maddox, L.B.; McNeilly, G.S.

    1979-08-01

    GOC (General Orbit Code) is a versatile program which will perform a variety of calculations relevant to isochronous cyclotron design studies. In addition to the usual calculations of interest (e.g., equilibrium and accelerated orbits, focusing frequencies, field isochronization, etc.), GOC has a number of options to calculate injections with a charge change. GOC provides both printed and plotted output, and will follow groups of particles to allow determination of finite-beam properties. An interactive PDP-10 program called GIP, which prepares input data for GOC, is available. GIP is a very easy and convenient way to prepare complicated input data for GOC. Enclosed with this report are several microfiche containing source listings of GOC and other related routines and the printed output from a multiple-option GOC run

  19. LDGM Codes for Channel Coding and Joint Source-Channel Coding of Correlated Sources

    Directory of Open Access Journals (Sweden)

    Javier Garcia-Frias

    2005-05-01

    We propose a coding scheme based on the use of systematic linear codes with low-density generator matrices (LDGM codes) for channel coding and joint source-channel coding of multiterminal correlated binary sources. In both cases, the structures of the LDGM encoder and decoder are shown, and a concatenated scheme aimed at reducing the error floor is proposed. Several decoding possibilities are investigated, compared, and evaluated. For different types of noisy channels and correlation models, the resulting performance is very close to the theoretical limits.
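
    As a hedged sketch of what "systematic" and "low-density generator matrix" mean in practice, the toy Python below encodes with a random sparse parity part, so encoding cost scales with the number of ones rather than with the code dimensions. The sizes and density are illustrative assumptions, not the constructions proposed in the paper.

      import numpy as np

      rng = np.random.default_rng(0)
      k, m = 8, 4                          # message bits and parity bits (toy sizes)

      # Sparse parity part P: few ones per column, so G = [I | P] is low density.
      P = (rng.random((k, m)) < 0.25).astype(int)

      def ldgm_encode(msg):
          parity = msg @ P % 2             # cheap: cost proportional to the ones in P
          return np.concatenate([msg, parity])   # systematic codeword [msg | parity]

      msg = rng.integers(0, 2, k)
      print(ldgm_encode(msg))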

  20. Surface acoustic wave coding for orthogonal frequency coded devices

    Science.gov (United States)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.
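
    A hedged toy sketch of the matrix-based assignment idea: each row of a matrix holds one device's chip (frequency-slot) ordering, and rows are handed out to the tags. The construction, sizes, and tag names below are hypothetical stand-ins for illustration, not the patented algorithm.

      from itertools import islice, permutations

      n_devices, chips_per_code = 4, 5

      # Matrix of codes: one row per device, each cell holding an OFC chip index,
      # with distinct rows so that no two tags share the same chip ordering.
      matrix = list(islice(permutations(range(chips_per_code)), n_devices))
      codes = {f"tag{r}": list(matrix[r]) for r in range(n_devices)}
      print(codes)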

  1. Entanglement-assisted quantum MDS codes from negacyclic codes

    Science.gov (United States)

    Lu, Liangdong; Li, Ruihu; Guo, Luobin; Ma, Yuena; Liu, Yang

    2018-03-01

    The entanglement-assisted formalism generalizes the standard stabilizer formalism, which can transform arbitrary classical linear codes into entanglement-assisted quantum error-correcting codes (EAQECCs) by using pre-shared entanglement between the sender and the receiver. In this work, we construct six classes of q-ary entanglement-assisted quantum MDS (EAQMDS) codes based on classical negacyclic MDS codes by exploiting two or more pre-shared maximally entangled states. We show that two of these six classes of q-ary EAQMDS codes have minimum distance larger than q + 1. Most of these q-ary EAQMDS codes are new in the sense that their parameters are not covered by the codes available in the literature.

  2. An algebraic approach to graph codes

    DEFF Research Database (Denmark)

    Pinero, Fernando

    This thesis consists of six chapters. The first chapter contains a short introduction to coding theory in which we explain the coding theory concepts we use. In the second chapter, we present the required theory for evaluation codes and also give an example of some fundamental codes in coding theory as evaluation codes. Chapter three consists of the introduction to graph based codes, such as Tanner codes and graph codes. In Chapter four, we compute the dimension of some graph based codes with a result combining graph based codes and subfield subcodes. Moreover, some codes in chapter four...

  3. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    Science.gov (United States)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Codes with ideal in-phase cross correlation (CC) and a practical code length to support a high number of users are required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we propose new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC, based on the Jordan block matrix and constructed by simple algebraic means. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make the code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans with high data rates.

  4. Self-complementary circular codes in coding theory.

    Science.gov (United States)

    Fimmel, Elena; Michel, Christian J; Starman, Martin; Strüngmann, Lutz

    2018-04-01

    Self-complementary circular codes are involved in pairing genetic processes. A maximal [Formula: see text] self-complementary circular code X of trinucleotides was identified in genes of bacteria, archaea, eukaryotes, plasmids and viruses (Michel in Life 7(20):1-16 2017, J Theor Biol 380:156-177, 2015; Arquès and Michel in J Theor Biol 182:45-58 1996). In this paper, self-complementary circular codes are investigated using the graph theory approach recently formulated in Fimmel et al. (Philos Trans R Soc A 374:20150058, 2016). A directed graph [Formula: see text] associated with any code X mirrors the properties of the code. In the present paper, we demonstrate a necessary condition for the self-complementarity of an arbitrary code X in terms of the graph theory. The same condition has been proven to be sufficient for codes which are circular and of large size [Formula: see text] trinucleotides, in particular for maximal circular codes ([Formula: see text] trinucleotides). For codes of small-size [Formula: see text] trinucleotides, some very rare counterexamples have been constructed. Furthermore, the length and the structure of the longest paths in the graphs associated with the self-complementary circular codes are investigated. It has been proven that the longest paths in such graphs determine the reading frame for the self-complementary circular codes. By applying this result, the reading frame in any arbitrary sequence of trinucleotides is retrieved after at most 15 nucleotides, i.e., 5 consecutive trinucleotides, from the circular code X identified in genes. Thus, an X motif of a length of at least 15 nucleotides in an arbitrary sequence of trinucleotides (not necessarily all of them belonging to X) uniquely defines the reading (correct) frame, an important criterion for analyzing the X motifs in genes in the future.
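
    The reading-frame result invites a small illustration: with a window of 15 nucleotides (5 consecutive trinucleotides), at most one frame yields trinucleotides drawn entirely from a circular code. The Python sketch below uses a tiny hypothetical trinucleotide set, not the actual code X identified in genes, and assumes the window is fully covered by code words.

      # Toy code (hypothetical; not the maximal circular code X from genes).
      X = {"AAC", "GTT", "ACG", "CGT"}

      def retrieve_frame(seq, code, window=15):
          """Return the frame (0, 1 or 2) whose window trinucleotides all lie in code."""
          for frame in range(3):
              chunk = seq[frame:frame + window]
              triplets = [chunk[i:i + 3] for i in range(0, window, 3)]
              if all(t in code for t in triplets):
                  return frame
          return None

      seq = "G" + "AACGTT" * 5        # frame 1 is the reading frame by construction
      print(retrieve_frame(seq, X))   # -> 1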

  5. COG10, Multiparticle Monte Carlo Code System for Shielding and Criticality Use

    International Nuclear Information System (INIS)

    2007-01-01

    1 - Description of program or function: COG is a modern, full-featured Monte Carlo radiation transport code which provides accurate answers to complex shielding, criticality, and activation problems. COG was written to be state-of-the-art and free of physics approximations and compromises found in earlier codes. COG is fully 3-D, uses point-wise cross sections and exact angular scattering, and allows a full range of biasing options to speed up solutions for deep penetration problems. Additionally, a criticality option is available for computing Keff for assemblies of fissile materials. ENDL or ENDFB cross section libraries may be used. COG home page: http://www-phys.llnl.gov/N_Div/COG/. Cross section libraries are included in the package. COG can use either the LLNL ENDL-90 cross section set or the ENDFB/VI set. Analytic surfaces are used to describe geometric boundaries. Parts (volumes) are described by a method of Constructive Solid Geometry. Surface types include surfaces of up to fourth order, and pseudo-surfaces such as boxes, finite cylinders, and figures of revolution. Repeated assemblies need be defined only once. Parts are visualized in cross-section and perspective picture views. Source and random-walk biasing techniques may be selected to improve solution statistics. These include source angular biasing, importance weighting, particle splitting and Russian roulette, path-length stretching, point detectors, scattered direction biasing, and forced collisions. Criticality - For a fissioning system, COG will compute Keff by transporting batches of neutrons through the system. Activation - COG can compute gamma-ray doses due to neutron-activated materials, starting with just a neutron source. Coupled Problems - COG can solve coupled problems involving neutrons, photons, and electrons. 2 - Methods: COG uses Monte Carlo methods to solve the Boltzmann transport equation for particles traveling through arbitrary 3-dimensional geometries. Neutrons, photons
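
    COG itself is a large production code; purely as a hedged illustration of the analog Monte Carlo idea underlying such transport codes, the toy Python below estimates transmission through a purely absorbing 1-D slab and can be checked against the analytic exp(-thickness) answer. Geometry, physics, and names are illustrative assumptions, not COG's.

      import random

      def slab_transmission(thickness_mfp, n=100_000, seed=1):
          """Fraction of particles crossing a purely absorbing slab,
          with thickness measured in mean free paths."""
          random.seed(seed)
          crossed = 0
          for _ in range(n):
              # Distance to first collision is exponentially distributed.
              if random.expovariate(1.0) > thickness_mfp:
                  crossed += 1
          return crossed / n

      # Should approach exp(-2) ≈ 0.135 for a 2-mfp absorber.
      print(slab_transmission(2.0))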

  6. LLNL Yucca Mountain project - near-field environment characterization technical area: Letter report: EQ3/6 version 8: differences from version 7

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T.J.

    1994-09-29

    EQ3/6 is a software package for geochemical modeling of aqueous systems, such as water/rock or waste/water/rock. It is being developed for a variety of applications in geochemical studies for the Yucca Mountain Site Characterization Project. The software has been extensively rewritten for Version 8, and the source code has been extensively modernized. The software is now written in Fortran 77 with the most common extensions that are part of the new Fortran 90 standard. The architecture of the software has been improved for better performance and to allow the incorporation of new functional capabilities in Version 8 and planned subsequent versions. In particular, the structure of the major data arrays has been significantly altered and extended. Three new major functional capabilities have been incorporated in Version 8. The first allows the treatment of redox disequilibrium in reaction-path modeling. This is a natural extension of the long-standing capability of providing for such disequilibrium in static speciation-solubility calculations. Such a capability is important, for example, when dealing with systems containing organic species and certain dissolved gas species. The user defines (and sets the controls for) the components in disequilibrium. The second new capability corrects equilibrium constants and other thermodynamic quantities for pressures which lie off a standard curve, which is defined on the supporting data file and ordinarily corresponds to 1.013 bar up to 100°C and the steam/liquid water equilibrium pressure up to 300°C. Such corrections can now be made if the requisite data are present on a supporting data file; at present, this capability is supported only by the SHV data file, which is based on SUPCRT92. The third new major capability is a generic ion exchange option previously developed in prototype in a branch Version 7 level version of EQ3/6 by Brian Viani, Bill Bourcier, and Carol Bruton. This option has been modified to fit into the Version 8 data

  7. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number of such frames. In particular we argue that a three-error-correcting BCH code is the best choice for the component code in such systems.

  8. Numerical Tokamak Project code comparison

    International Nuclear Information System (INIS)

    Waltz, R.E.; Cohen, B.I.; Beer, M.A.

    1994-01-01

    The Numerical Tokamak Project undertook a code comparison using a set of TFTR tokamak parameters. Local radial annulus codes of both gyrokinetic and gyrofluid types were compared for both slab and toroidal case limits assuming ion temperature gradient mode turbulence in a pure plasma with adiabatic electrons. The heat diffusivities were found to be in good internal agreement within ± 50% of the group average over five codes

  9. Ethical codes in business practice

    OpenAIRE

    Kobrlová, Marie

    2013-01-01

    The diploma thesis discusses the issues of ethics and codes of ethics in business. The theoretical part defines basic concepts of ethics, presents its historical development and the methods and tools of business ethics. It also focuses on ethical codes and the area of law and ethics. The practical part consists of a quantitative survey, which presents the views of selected business entities on business ethics and the use of codes of ethics in practice.

  10. QR code for medical information uses.

    Science.gov (United States)

    Fontelo, Paul; Liu, Fang; Ducut, Erick G

    2008-11-06

    We developed QR code online tools, simulated and tested QR code applications for medical information uses including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
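
    The record above only names the applications; as a hedged, minimal sketch of generating a scannable medical label in Python, the snippet below uses the third-party qrcode package (pip install qrcode[pil]). The package choice, URL, and filename are illustrative assumptions, not the authors' online tools.

      import qrcode

      # Encode a (hypothetical) record URL into a printable QR label.
      img = qrcode.make("https://example.org/records/12345")
      img.save("label.png")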

  11. High Order Modulation Protograph Codes

    Science.gov (United States)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.

  12. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.
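
    As a hedged sketch of the basic sparse coding step this method builds on, the Python below solves the lasso-type problem min_s 0.5*||x - D s||^2 + lam*||s||_1 for a fixed dictionary D by iterative soft thresholding (ISTA). The semi-supervised ingredients (manifold structure, label propagation, joint solve) are not reproduced; sizes and data are illustrative assumptions.

      import numpy as np

      def ista_sparse_code(D, x, lam=0.1, iters=200):
          """Iterative soft-thresholding for the sparse code of sample x."""
          L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
          s = np.zeros(D.shape[1])
          for _ in range(iters):
              grad = D.T @ (D @ s - x)           # gradient of the quadratic term
              s = s - grad / L
              s = np.sign(s) * np.maximum(np.abs(s) - lam / L, 0.0)   # shrinkage
          return s

      rng = np.random.default_rng(0)
      D = rng.standard_normal((20, 50))          # overcomplete codebook, atoms in columns
      D /= np.linalg.norm(D, axis=0)
      x = 2.0 * D[:, 3] - 1.5 * D[:, 17]         # sample built from two atoms
      support = np.nonzero(np.abs(ista_sparse_code(D, x)) > 1e-3)[0]
      print(support)                             # should typically include atoms 3 and 17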

  13. Programming Entity Framework Code First

    CERN Document Server

    Lerman, Julia

    2011-01-01

    Take advantage of the Code First data modeling approach in ADO.NET Entity Framework, and learn how to build and configure a model based on existing classes in your business domain. With this concise book, you'll work hands-on with examples to learn how Code First can create an in-memory model and database by default, and how you can exert more control over the model through further configuration. Code First provides an alternative to the database first and model first approaches to the Entity Data Model. Learn the benefits of defining your model with code, whether you're working with an exis

  14. User manual of UNF code

    International Nuclear Information System (INIS)

    Zhang Jingshang

    2001-01-01

    The UNF code (2001 version), written in FORTRAN-90, is developed for calculating fast neutron reaction data of structural materials with incident energies from about 1 keV up to 20 MeV. The code consists of the spherical optical model, the unified Hauser-Feshbach model and the exciton model. The manual of the UNF code is available for users. The format of the input parameter files and the output files, as well as the functions of the flags used in the UNF code, are introduced in detail, and examples of the format of the input parameter files are given

  15. Semi-supervised sparse coding

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-07-06

    Sparse coding approximates the data sample as a sparse linear combination of some basic codewords and uses the sparse codes as new presentations. In this paper, we investigate learning discriminative sparse codes by sparse coding in a semi-supervised manner, where only a few training samples are labeled. By using the manifold structure spanned by the data set of both labeled and unlabeled samples and the constraints provided by the labels of the labeled samples, we learn the variable class labels for all the samples. Furthermore, to improve the discriminative ability of the learned sparse codes, we assume that the class labels could be predicted from the sparse codes directly using a linear classifier. By solving the codebook, sparse codes, class labels and classifier parameters simultaneously in a unified objective function, we develop a semi-supervised sparse coding algorithm. Experiments on two real-world pattern recognition problems demonstrate the advantage of the proposed methods over supervised sparse coding methods on partially labeled data sets.

  16. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-12-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  17. Coded aperture tomography revisited

    International Nuclear Information System (INIS)

    Bizais, Y.; Rowe, R.W.; Zubal, I.G.; Bennett, G.W.; Brill, A.B.

    1983-01-01

    Coded aperture (CA) tomography never achieved widespread use in Nuclear Medicine, except for the degenerate case of Seven Pinhole tomography (7PHT). However, it enjoys several attractive features (high sensitivity and tomographic ability with a static detector). On the other hand, resolution is usually poor, especially along the depth axis, and the reconstructed volume is rather limited. Arguments are presented justifying the position that CA tomography can be useful for imaging time-varying 3D structures if its major drawbacks (poor longitudinal resolution and difficulty in quantification) are overcome. Poor results obtained with 7PHT can be explained by both a very limited angular range sampled and a crude modelling of the image formation process. Therefore improvements can be expected from the use of a dual-detector system, along with a better understanding of its sampling properties and the use of more powerful reconstruction algorithms. Non-overlapping multipinhole plates, because they do not involve a decoding procedure, should be considered first for practical applications. Use of real CAs should be considered for cases in which non-overlapping multipinhole plates do not lead to satisfactory solutions. We have been and currently are carrying out theoretical and experimental work in order to define the factors which limit CA imaging and to propose satisfactory solutions for Dynamic Emission Tomography

  18. Mobile code security

    Science.gov (United States)

    Ramalingam, Srikumar

    2001-11-01

    A highly secure mobile agent system is very important for a mobile computing environment. The security issues in a mobile agent system comprise protecting mobile hosts from malicious agents, protecting agents from other malicious agents, protecting hosts from other malicious hosts and protecting agents from malicious hosts. Using traditional security mechanisms, the first three security problems can be solved. Apart from using trusted hardware, very few approaches exist to protect mobile code from malicious hosts. Some of the approaches to solve this problem are the use of trusted computing, computing with encrypted functions, steganography, cryptographic traces, the Seal Calculus, etc. This paper focuses on the simulation of some of these existing techniques in the designed mobile language. Some new approaches to solve the malicious network problem and the agent tampering problem are developed using a public key encryption system and steganographic concepts. The approaches are based on encrypting and hiding the partial solutions of the mobile agents. The partial results are stored and the address of the storage is destroyed as the agent moves from one host to another host. This allows only the originator to make use of the partial results. Through these approaches some of the existing problems are solved.

  19. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup

    2017-04-11

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicking and 4D light field view synthesis.

  20. Coding, cryptography and combinatorics

    CERN Document Server

    Niederreiter, Harald; Xing, Chaoping

    2004-01-01

    It has long been recognized that there are fascinating connections between coding theory, cryptology, and combinatorics. Therefore it seemed desirable to us to organize a conference that brings together experts from these three areas for a fruitful exchange of ideas. We decided on a venue in the Huang Shan (Yellow Mountain) region, one of the most scenic areas of China, so as to provide the additional inducement of an attractive location. The conference was planned for June 2003 with the official title Workshop on Coding, Cryptography and Combinatorics (CCC 2003). Those who are familiar with events in East Asia in the first half of 2003 can guess what happened in the end, namely the conference had to be cancelled in the interest of the health of the participants. The SARS epidemic posed too serious a threat. At the time of the cancellation, the organization of the conference was at an advanced stage: all invited speakers had been selected and all abstracts of contributed talks had been screened by the p...

  1. Consensus Convolutional Sparse Coding

    KAUST Repository

    Choudhury, Biswarup; Swanson, Robin; Heide, Felix; Wetzstein, Gordon; Heidrich, Wolfgang

    2017-01-01

    Convolutional sparse coding (CSC) is a promising direction for unsupervised learning in computer vision. In contrast to recent supervised methods, CSC allows for convolutional image representations to be learned that are equally useful for high-level vision tasks and low-level image reconstruction and can be applied to a wide range of tasks without problem-specific retraining. Due to their extreme memory requirements, however, existing CSC solvers have so far been limited to low-dimensional problems and datasets using a handful of low-resolution example images at a time. In this paper, we propose a new approach to solving CSC as a consensus optimization problem, which lifts these limitations. By learning CSC features from large-scale image datasets for the first time, we achieve significant quality improvements in a number of imaging tasks. Moreover, the proposed method enables new applications in high-dimensional feature learning that has been intractable using existing CSC methods. This is demonstrated for a variety of reconstruction problems across diverse problem domains, including 3D multispectral demosaicing and 4D light field view synthesis.

  2. Computer code abstract: NESTLE

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1995-01-01

    NESTLE is a few-group neutron diffusion equation solver utilizing the nodal expansion method (NEM) for eigenvalue, adjoint, and fixed-source steady-state and transient problems. The NESTLE code solves the eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, and external fixed-source or eigenvalue-initiated transient problems. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., with upscatter) if desired. Core geometries modeled include Cartesian and hexagonal. Three-, two-, and one-dimensional models can be utilized with various symmetries. The thermal conditions predicted by the thermal-hydraulic model of the core are used to correct cross sections for temperature and density effects. Cross sections are parameterized by color, control rod state (i.e., in or out), and burnup, allowing fuel depletion to be modeled. Either a macroscopic or microscopic model may be employed
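
    NESTLE's nodal-expansion solver is far more elaborate, but the eigenvalue (criticality) problem it targets can be shown in a hedged toy form: the Python below solves a one-group, 1-D finite-difference diffusion equation for k-effective by power iteration. All constants, the mesh, and the boundary treatment are illustrative assumptions.

      import numpy as np

      # One-group, 1-D slab, zero-flux boundaries (finite differences, not NEM).
      N, h = 50, 1.0                        # mesh points and spacing (cm)
      Dc, siga, nusigf = 1.0, 0.02, 0.025   # diffusion coeff., absorption, nu*fission

      A = np.zeros((N, N))                  # loss operator: -Dc d2/dx2 + siga
      for i in range(N):
          A[i, i] = 2 * Dc / h**2 + siga
          if i > 0:
              A[i, i - 1] = -Dc / h**2
          if i < N - 1:
              A[i, i + 1] = -Dc / h**2

      phi, k = np.ones(N), 1.0
      for _ in range(200):                  # power iteration on A phi = (1/k) F phi
          phi_new = np.linalg.solve(A, nusigf * phi / k)
          k *= phi_new.sum() / phi.sum()    # update k from the fission source ratio
          phi = phi_new
      print(f"k-effective ≈ {k:.4f}")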

  3. Joint source-channel coding using variable length codes

    NARCIS (Netherlands)

    Balakirsky, V.B.

    2001-01-01

    We address the problem of joint source-channel coding when variable-length codes are used for information transmission over a discrete memoryless channel. Data transmitted over the channel are interpreted as pairs (m_k, t_k), where m_k is a message generated by the source and t_k is a time instant
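
    The record stops short of the scheme itself; as a hedged illustration of the variable-length codes it concerns, here is a minimal Huffman code builder in Python. The symbol frequencies are made up, and the paper's joint source-channel and timing aspects are not modeled.

      import heapq

      def huffman(freqs):
          """Build a prefix-free variable-length code from symbol frequencies."""
          heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
          heapq.heapify(heap)
          tiebreak = len(heap)
          while len(heap) > 1:
              f1, _, c1 = heapq.heappop(heap)     # merge the two least likely
              f2, _, c2 = heapq.heappop(heap)     # subtrees, prefixing 0 and 1
              merged = {s: "0" + w for s, w in c1.items()}
              merged.update({s: "1" + w for s, w in c2.items()})
              heapq.heappush(heap, (f1 + f2, tiebreak, merged))
              tiebreak += 1
          return heap[0][2]

      print(huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
      # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'}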

  4. Interrelations of codes in human semiotic systems.

    OpenAIRE

    Somov, Georgij

    2016-01-01

    Codes can be viewed as mechanisms that enable relations of signs and their components, i.e., through which semiosis is actualized. The combinations of these relations produce new relations as new codes are built over other codes. Structures appear in the mechanisms of codes. Hence, codes can be described as transformations of structures from some material systems into others. Structures belong to different carriers, but exist in codes in their "pure" form. Building of codes over other codes fosters t...

  5. Further Generalisations of Twisted Gabidulin Codes

    DEFF Research Database (Denmark)

    Puchinger, Sven; Rosenkilde, Johan Sebastian Heesemann; Sheekey, John

    2017-01-01

    We present a new family of maximum rank distance (MRD) codes. The new class contains codes that are neither equivalent to a generalised Gabidulin nor to a twisted Gabidulin code, the only two known general constructions of linear MRD codes.

  6. MARS Code in Linux Environment

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    2005-07-01

    The two-phase system analysis code MARS has been incorporated into the Linux system. The MARS code was originally developed based on RELAP5/MOD3.2 and COBRA-TF. The 1-D module, which evolved from RELAP5, can alone be applied for whole-NSSS system analysis. The 3-D module, developed based on COBRA-TF, can be applied for the analysis of the reactor core region, where 3-D phenomena are better treated. The MARS code also has several other code units that can be incorporated for more detailed analysis. The separate code units include containment analysis modules and a 3-D kinetics module. These code modules can be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, can be utilized for the analysis of plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during a hypothetical coolant leakage accident can thereby be analyzed in a more realistic manner. In a similar way, 3-D kinetics can be incorporated to simulate three-dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, is not adequate for PC cluster systems where multiple CPUs are available. When parallelism is eventually incorporated into the MARS code, the MS Windows environment is not considered an optimum platform. The Linux environment, on the other hand, is generally being adopted as a preferred platform for multiple code executions as well as for parallel applications. In this study, the MARS code has been modified for adaptation to the Linux platform. For the initial code modification, the Windows-specific features have been removed from the code. Since the coupling code module CONTAIN is originally in the form of a dynamic load library (DLL) in the Windows system, a similar adaptation method

  7. Allele coding in genomic evaluation

    Directory of Open Access Journals (Sweden)

    Christensen Ole F

    2011-06-01

    Background: Genomic data are used in animal breeding to assist genetic evaluation. Several models to estimate genomic breeding values have been studied. In general, two approaches have been used. One approach estimates the marker effects first and then, genomic breeding values are obtained by summing marker effects. In the second approach, genomic breeding values are estimated directly using an equivalent model with a genomic relationship matrix. Allele coding is the method chosen to assign values to the regression coefficients in the statistical model. A common allele coding is zero for the homozygous genotype of the first allele, one for the heterozygote, and two for the homozygous genotype for the other allele. Another common allele coding changes these regression coefficients by subtracting a value from each marker such that the mean of regression coefficients is zero within each marker. We call this centered allele coding. This study considered effects of different allele coding methods on inference. Both marker-based and equivalent models were considered, and restricted maximum likelihood and Bayesian methods were used in inference. Results: Theoretical derivations showed that parameter estimates and estimated marker effects in marker-based models are the same irrespective of the allele coding, provided that the model has a fixed general mean. For the equivalent models, the same results hold, even though different allele coding methods lead to different genomic relationship matrices. Calculated genomic breeding values are independent of allele coding when the estimate of the general mean is included into the values. Reliabilities of estimated genomic breeding values calculated using elements of the inverse of the coefficient matrix depend on the allele coding because different allele coding methods imply different models. Finally, allele coding affects the mixing of Markov chain Monte Carlo algorithms, with the centered coding being
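
    A hedged numeric sketch of the two codings described above: plain 0/1/2 genotype coding versus centered coding (subtracting twice the allele frequency per marker), with the centered matrix assembled into a VanRaden-style genomic relationship matrix. The genotypes are simulated and the scaling is a common convention assumed here, not the paper's derivation.

      import numpy as np

      rng = np.random.default_rng(0)
      geno = rng.integers(0, 3, size=(6, 10))   # 6 animals x 10 markers, coded 0/1/2

      p = geno.mean(axis=0) / 2                 # observed allele frequencies
      Z = geno - 2 * p                          # centered allele coding
      G = Z @ Z.T / (2 * p * (1 - p)).sum()     # VanRaden-style relationship matrix
      print(np.round(G, 2))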

  8. MARS Code in Linux Environment

    International Nuclear Information System (INIS)

    Hwang, Moon Kyu; Bae, Sung Won; Jung, Jae Joon; Chung, Bub Dong

    2005-01-01

    The two-phase system analysis code MARS has been incorporated into Linux system. The MARS code was originally developed based on the RELAP5/MOD3.2 and COBRA-TF. The 1-D module which evolved from RELAP5 alone could be applied for the whole NSSS system analysis. The 3-D module developed based on the COBRA-TF, however, could be applied for the analysis of the reactor core region where 3-D phenomena would be better treated. The MARS code also has several other code units that could be incorporated for more detailed analysis. The separate code units include containment analysis modules and 3-D kinetics module. These code modules could be optionally invoked to be coupled with the main MARS code. The containment code modules (CONTAIN and CONTEMPT), for example, could be utilized for the analysis of the plant containment phenomena in a coupled manner with the nuclear reactor system. The mass and energy interaction during the hypothetical coolant leakage accident could, thereby, be analyzed in a more realistic manner. In a similar way, 3-D kinetics could be incorporated for simulating the three dimensional reactor kinetic behavior, instead of using the built-in point kinetics model. The MARS code system, developed initially for the MS Windows environment, however, would not be adequate enough for the PC cluster system where multiple CPUs are available. When parallelism is to be eventually incorporated into the MARS code, MS Windows environment is not considered as an optimum platform. Linux environment, on the other hand, is generally being adopted as a preferred platform for the multiple codes executions as well as for the parallel application. In this study, MARS code has been modified for the adaptation of Linux platform. For the initial code modification, the Windows system specific features have been removed from the code. Since the coupling code module CONTAIN is originally in a form of dynamic load library (DLL) in the Windows system, a similar adaptation method

  9. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    Science.gov (United States)

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default hypotheses. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control.

  10. Error-correction coding for digital communications

    Science.gov (United States)

    Clark, G. C., Jr.; Cain, J. B.

    This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.

  11. Distributed space-time coding

    CERN Document Server

    Jing, Yindi

    2014-01-01

    Distributed Space-Time Coding (DSTC) is a cooperative relaying scheme that enables high reliability in wireless networks. This brief presents the basic concept of DSTC, its achievable performance, generalizations, code design, and differential use. Recent results on training design and channel estimation for DSTC and the performance of training-based DSTC are also discussed.

  12. NETWORK CODING BY BEAM FORMING

    DEFF Research Database (Denmark)

    2013-01-01

    Network coding by beam forming in networks, for example in single frequency networks, can help increase spectral efficiency. When network coding by beam forming and user cooperation are combined, spectral efficiency gains may be achieved. According to certain embodiments, a method ... cooperating with the plurality of user equipment to decode the received data.

  13. Building codes : obstacle or opportunity?

    Science.gov (United States)

    Alberto Goetzl; David B. McKeever

    1999-01-01

    Building codes are critically important in the use of wood products for construction. The codes contain regulations that are prescriptive or performance related for various kinds of buildings and construction types. A prescriptive standard might dictate that a particular type of material be used in a given application. A performance standard requires that a particular...

  14. Accelerator Physics Code Web Repository

    CERN Document Server

    Zimmermann, Frank; Bellodi, G; Benedetto, E; Dorda, U; Giovannozzi, Massimo; Papaphilippou, Y; Pieloni, T; Ruggiero, F; Rumolo, G; Schmidt, F; Todesco, E; Zotter, Bruno W; Payet, J; Bartolini, R; Farvacque, L; Sen, T; Chin, Y H; Ohmi, K; Oide, K; Furman, M; Qiang, J; Sabbi, G L; Seidl, P A; Vay, J L; Friedman, A; Grote, D P; Cousineau, S M; Danilov, V; Holmes, J A; Shishlo, A; Kim, E S; Cai, Y; Pivi, M; Kaltchev, D I; Abell, D T; Katsouleas, Thomas C; Boine-Frankenheim, O; Franchetti, G; Hofmann, I; Machida, S; Wei, J

    2006-01-01

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  15. LFSC - Linac Feedback Simulation Code

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC (Linac Feedback Simulation Code) is a numerical tool for simulating beam-based feedback in high performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on timescales corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  16. Interleaver Design for Turbo Coding

    DEFF Research Database (Denmark)

    Andersen, Jakob Dahl; Zyablov, Viktor

    1997-01-01

    By a combination of construction and random search based on a careful analysis of the low weight words and the distance properties of the component codes, it is possible to find interleavers for turbo coding with a high minimum distance. We have designed a block interleaver with permutations...

  17. Code breaking in the pacific

    CERN Document Server

    Donovan, Peter

    2014-01-01

    Covers the historical context and the evolution of the technically complex Allied Signals Intelligence (Sigint) activity against Japan from 1920 to 1945 Describes, explains and analyzes the code breaking techniques developed during the war in the Pacific Exposes the blunders (in code construction and use) made by the Japanese Navy that led to significant US Naval victories

  18. Development status of TUF code

    International Nuclear Information System (INIS)

    Liu, W.S.; Tahir, A.; Zaltsgendler

    1996-01-01

    An overview of the important development of the TUF code in 1995 is presented. Development in the following areas is described: control of round-off error propagation, gas resolution and release models, and condensation-induced water hammer. This development was mainly driven by station requests for operational support and code improvement. (author)

  19. Accident consequence assessment code development

    International Nuclear Information System (INIS)

    Homma, T.; Togawa, O.

    1991-01-01

    This paper describes the new computer code system, OSCAAR, developed for off-site consequence assessment of a potential nuclear accident. OSCAAR consists of several modules which have modeling capabilities in atmospheric transport, foodchain transport, dosimetry, emergency response and radiological health effects. The major modules of the consequence assessment code are described, highlighting the validation and verification of the models. (author)

  20. The nuclear codes and guidelines

    International Nuclear Information System (INIS)

    Sonter, M.

    1984-01-01

    This paper considers problems faced by the mining industry when implementing the nuclear codes of practice. Errors of interpretation are likely. A major criticism is that the guidelines to the codes must be seen as recommendations only. They are not regulations. Specific clauses in the guidelines are criticised

  1. Survey of coded aperture imaging

    International Nuclear Information System (INIS)

    Barrett, H.H.

    1975-01-01

    The basic principle and limitations of coded aperture imaging for x-ray and gamma cameras are discussed. Current trends include (1) use of time-varying apertures, (2) use of "dilute" apertures with transmission much less than 50%, and (3) attempts to derive transverse tomographic sections, unblurred by other planes, from coded images

  2. ACCELERATION PHYSICS CODE WEB REPOSITORY.

    Energy Technology Data Exchange (ETDEWEB)

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  3. Grassmann codes and Schubert unions

    DEFF Research Database (Denmark)

    Hansen, Johan Peder; Johnsen, Trygve; Ranestad, Kristian

    2009-01-01

    We study subsets of Grassmann varieties over a field , such that these subsets are unions of Schubert cycles, with respect to a fixed flag. We study such sets in detail, and give applications to coding theory, in particular for Grassmann codes. For much is known about such Schubert unions with a ...

  4. On Network Coded Filesystem Shim

    DEFF Research Database (Denmark)

    Sørensen, Chres Wiant; Roetter, Daniel Enrique Lucani; Médard, Muriel

    2017-01-01

    Although network coding has shown the potential to revolutionize networking and storage, its deployment has faced a number of challenges. Usual proposals involve two approaches. First, deploying a new protocol (e.g., Multipath Coded TCP), or retrofitting another one (e.g., TCP/NC) to deliver bene...

  5. Running codes through the web

    International Nuclear Information System (INIS)

    Clark, R.E.H.

    2001-01-01

    Dr. Clark presented a report and demonstration on running atomic physics codes through the WWW. The atomic physics data are generated from Los Alamos National Laboratory (LANL) codes that calculate electron impact excitation, ionization, photoionization, and autoionization, and the inverse processes through detailed balance. Samples of Web interfaces, input and output are given in the report

  6. Dynamic benchmarking of simulation codes

    International Nuclear Information System (INIS)

    Henry, R.E.; Paik, C.Y.; Hauser, G.M.

    1996-01-01

    Computer simulation of nuclear power plant response can take the form of a full-scope control room simulator, an engineering simulator to represent the general behavior of the plant under normal and abnormal conditions, or the modeling of the plant response to conditions that would eventually lead to core damage. In any of these, the underlying foundation for their use in analysing situations, training of vendor/utility personnel, etc. is how well they represent what has been known from industrial experience, large integral experiments and separate effects tests. Typically, simulation codes are benchmarked with some of these, the level of agreement necessary being dependent upon the ultimate use of the simulation tool. However, these analytical models are computer codes, and as a result, the capabilities are continually enhanced, errors are corrected, new situations are imposed on the code that are outside of the original design basis, etc. Consequently, there is a continual need to assure that the benchmarks with important transients are preserved as the computer code evolves. Retention of this benchmarking capability is essential to develop trust in the computer code. Given the evolving world of computer codes, how is this retention of benchmarking capabilities accomplished? For the MAAP4 codes this capability is accomplished through a 'dynamic benchmarking' feature embedded in the source code. In particular, a set of dynamic benchmarks is included in the source code and these are exercised every time the archive codes are upgraded and distributed to the MAAP users. Three different types of dynamic benchmarks are used: plant transients; large integral experiments; and separate effects tests. Each of these is performed in a different manner. The first is accomplished by developing a parameter file for the plant modeled and an input deck to describe the sequence; i.e. the entire MAAP4 code is exercised. The pertinent plant data is included in the source code and the computer

  7. Distributed source coding of video

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Van Luong, Huynh

    2015-01-01

    A foundation for distributed source coding was established in the classic papers of Slepian-Wolf (SW) [1] and Wyner-Ziv (WZ) [2]. This has provided a starting point for work on Distributed Video Coding (DVC), which exploits the source statistics at the decoder side, offering shifting of processing steps, conventionally performed at the video encoder side, to the decoder side. Emerging applications such as wireless visual sensor networks and wireless video surveillance all require lightweight video encoding with high coding efficiency and error-resilience. The video data of DVC schemes differ from the assumptions of SW and WZ distributed coding, e.g. by being correlated in time and nonstationary. Improving the efficiency of DVC coding is challenging. This paper presents some selected techniques to address the DVC challenges. Focus is put on pin-pointing how the decoder steps are modified to provide...

  8. Non-Protein Coding RNAs

    CERN Document Server

    Walter, Nils G; Batey, Robert T

    2009-01-01

    This book assembles chapters from experts in the Biophysics of RNA to provide a broadly accessible snapshot of the current status of this rapidly expanding field. The 2006 Nobel Prize in Physiology or Medicine was awarded to the discoverers of RNA interference, highlighting just one example of a large number of non-protein coding RNAs. Because non-protein coding RNAs outnumber protein coding genes in mammals and other higher eukaryotes, it is now thought that the complexity of organisms is correlated with the fraction of their genome that encodes non-protein coding RNAs. Essential biological processes as diverse as cell differentiation, suppression of infecting viruses and parasitic transposons, higher-level organization of eukaryotic chromosomes, and gene expression itself are found to largely be directed by non-protein coding RNAs. The biophysical study of these RNAs employs X-ray crystallography, NMR, ensemble and single molecule fluorescence spectroscopy, optical tweezers, cryo-electron microscopy, and ot...

  9. Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Faber, M.H.; Sørensen, John Dalsgaard

    2003-01-01

    The present paper addresses fundamental concepts of reliability-based code calibration. First, basic principles of structural reliability theory are introduced, and it is shown how the results of FORM-based reliability analysis may be related to partial safety factors and characteristic values. Thereafter the code calibration problem is presented in its principal decision-theoretical form, and it is discussed how acceptable levels of failure probability (or target reliabilities) may be established. Furthermore, suggested values for acceptable annual failure probabilities are given for ultimate and serviceability limit states. Finally the paper describes the Joint Committee on Structural Safety (JCSS) recommended procedure - CodeCal - for the practical implementation of reliability-based code calibration of LRFD-based design codes.
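
    As a hedged numeric sketch of the quantities discussed, the Python below relates the reliability index beta to the failure probability Pf = Phi(-beta) for a toy linear limit state g = R - S with independent normal resistance and load, and cross-checks it by Monte Carlo. All distribution parameters are illustrative assumptions.

      import numpy as np
      from scipy.stats import norm

      muR, sdR, muS, sdS = 5.0, 0.5, 3.0, 0.8   # resistance and load parameters

      beta = (muR - muS) / np.hypot(sdR, sdS)   # exact for the linear-normal case
      pf_exact = norm.cdf(-beta)

      rng = np.random.default_rng(0)
      g = rng.normal(muR, sdR, 1_000_000) - rng.normal(muS, sdS, 1_000_000)
      pf_mc = (g < 0).mean()                    # Monte Carlo failure probability
      print(f"beta = {beta:.2f}, Pf exact = {pf_exact:.2e}, Pf MC = {pf_mc:.2e}")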

  10. What Froze the Genetic Code?

    Directory of Open Access Journals (Sweden)

    Lluís Ribas de Pouplana

    2017-04-01

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation for why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  11. Verification of reactor safety codes

    International Nuclear Information System (INIS)

    Murley, T.E.

    1978-01-01

    The safety evaluation of nuclear power plants requires the investigation of a wide range of potential accidents that could be postulated to occur. Many of these accidents deal with phenomena that are outside the range of normal engineering experience. Because of the expense and difficulty of full-scale tests covering the complete range of accident conditions, it is necessary to rely on complex computer codes to assess these accidents. The central role that computer codes play in safety analyses requires that the codes be verified, or tested, by comparing the code predictions with a wide range of experimental data chosen to span the physical phenomena expected under potential accident conditions. This paper discusses the plans of the Nuclear Regulatory Commission for verifying the reactor safety codes being developed by NRC to assess the safety of light water reactors and fast breeder reactors. (author)

  12. What Froze the Genetic Code?

    Science.gov (United States)

    Ribas de Pouplana, Lluís; Torres, Adrian Gabriel; Rafels-Ybern, Àlbert

    2017-04-05

    The frozen accident theory of the Genetic Code was a proposal by Francis Crick that attempted to explain the universal nature of the Genetic Code and the fact that it only contains information for twenty amino acids. Fifty years later, it is clear that variations to the universal Genetic Code exist in nature and that translation is not limited to twenty amino acids. However, given the astonishing diversity of life on earth, and the extended evolutionary time that has taken place since the emergence of the extant Genetic Code, the idea that the translation apparatus is for the most part immobile remains true. Here, we will offer a potential explanation for why the code has remained mostly stable for over three billion years, and discuss some of the mechanisms that allow species to overcome the intrinsic functional limitations of the protein synthesis machinery.

  13. Tristan code and its application

    Science.gov (United States)

    Nishikawa, K.-I.

    Since TRISTAN: The 3-D Electromagnetic Particle Code was introduced in 1990, it has been used for many applications including simulations of global solar wind-magnetosphere interaction. The most essential ingredients of this code have been published in the ISSS-4 book. In this abstract we describe some of the issues and an application of this code for the study of global solar wind-magnetosphere interaction including a substorm study. The basic code (tristan.f) for the global simulation and a local simulation of reconnection with a Harris model (issrec2.f) are available at http:/www.physics.rutger.edu/˜kenichi. For beginners, the code (isssrc2.f) with simpler boundary conditions is a suitable starting point for running simulations. The future of global particle simulations for a global geospace general circulation (GGCM) model with predictive capability (for the Space Weather Program) is discussed.
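
    As a hedged illustration of the particle-push step at the heart of electromagnetic particle codes of this kind (not TRISTAN's actual source), the Python below advances a charged particle with the standard non-relativistic Boris scheme and checks that speed is conserved in a pure magnetic field. Units and field values are arbitrary assumptions.

      import numpy as np

      def boris_push(v, E, B, qm, dt):
          """One Boris step for velocity v in fields E, B (qm = charge/mass)."""
          v_minus = v + qm * E * dt / 2            # half electric kick
          t = qm * B * dt / 2
          s = 2 * t / (1 + t @ t)
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
          return v_plus + qm * E * dt / 2          # second half electric kick

      v = np.array([1.0, 0.0, 0.0])
      E, B, qm, dt = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0, 0.1
      for _ in range(100):                         # gyration about Bz
          v = boris_push(v, E, B, qm, dt)
      print(np.linalg.norm(v))                     # ≈ 1.0: Boris conserves |v| when E = 0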

  14. High efficiency video coding: coding tools and specification

    CERN Document Server

    Wien, Mathias

    2015-01-01

    The video coding standard High Efficiency Video Coding (HEVC) targets improved compression performance for video resolutions of HD and beyond, providing Ultra HD video at compressed bit rates similar to those of HD video encoded with the well-established video coding standard H.264 | AVC. Based on known concepts, new coding structures and improved coding tools have been developed and specified in HEVC. The standard is expected to be taken up easily by established industry as well as new endeavors, answering the needs of today's connected and ever-evolving online world. This book presents the High Efficiency Video Coding standard and explains it in a clear and coherent language. It provides a comprehensive and consistently written description, all of a piece. The book targets both newcomers to video coding and experts in the field. While providing sections with introductory text for the beginner, it serves as a well-arranged reference book for the expert. The book provides a comprehensive reference for th...

  15. Detecting non-coding selective pressure in coding regions

    Directory of Open Access Journals (Sweden)

    Blanchette Mathieu

    2007-02-01

    Full Text Available Abstract Background Comparative genomics approaches, where orthologous DNA regions are compared and inter-species conserved regions are identified, have proven extremely powerful for identifying non-coding regulatory regions located in intergenic or intronic regions. However, non-coding functional elements can also be located within coding regions, as is common for exonic splicing enhancers, some transcription factor binding sites, and RNA secondary structure elements affecting mRNA stability, localization, or translation. Since these functional elements are located in regions that are themselves highly conserved because they code for a protein, they generally escape detection by comparative genomics approaches. Results We introduce a comparative genomics approach for detecting non-coding functional elements located within coding regions. Codon evolution is modeled as a mixture of codon substitution models, where each component of the mixture describes the evolution of codons under a specific type of coding selective pressure. We show how to compute the posterior distribution of the entropy and parsimony scores under this null model of codon evolution. The method is applied to a set of growth hormone 1 orthologous mRNA sequences and a known exonic splicing element is detected. The analysis of a set of CORTBP2 orthologous genes reveals a region of several hundred base pairs under strong non-coding selective pressure whose function remains unknown. Conclusion Non-coding functional elements, in particular those involved in post-transcriptional regulation, are likely to be much more prevalent than is currently known. With the numerous genome sequencing projects underway, comparative genomics approaches like the one proposed here are likely to become increasingly powerful at detecting such elements.
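
    As a much-simplified illustration of the entropy score mentioned above (the paper's actual method uses a mixture of codon substitution models and a posterior distribution; this sketch only computes raw per-column entropy), the following Python fragment scores each codon column of a toy alignment. The sequences are hypothetical; note that column 1 varies only among synonymous leucine codons and column 2 among lysine codons.

        # Simplified sketch: observed entropy of each codon column across an
        # alignment of orthologous coding sequences (toy data, not the paper's
        # mixture-model scoring).
        import math
        from collections import Counter

        alignment = [            # hypothetical orthologous CDS fragments
            "ATGCTGAAA",
            "ATGCTGAAG",
            "ATGTTAAAA",
            "ATGCTCAAA",
        ]

        def column_entropy(codons):
            """Shannon entropy (bits) of the codon distribution in one column."""
            n = len(codons)
            return -sum((c / n) * math.log2(c / n)
                        for c in Counter(codons).values())

        ncod = len(alignment[0]) // 3
        for j in range(ncod):
            col = [seq[3 * j:3 * j + 3] for seq in alignment]
            print(f"codon column {j}: {col} entropy = {column_entropy(col):.2f} bits")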

  16. Coding for effective denial management.

    Science.gov (United States)

    Miller, Jackie; Lineberry, Joe

    2004-01-01

    Nearly everyone will agree that accurate and consistent coding of diagnoses and procedures is the cornerstone for operating a compliant practice. The CPT or HCPCS procedure code tells the payor what service was performed and also (in most cases) determines the amount of payment. The ICD-9-CM diagnosis code, on the other hand, tells the payor why the service was performed. If the diagnosis code does not meet the payor's criteria for medical necessity, all payment for the service will be denied. Implementation of an effective denial management program can help "stop the bleeding." Denial management is a comprehensive process that works in two ways. First, it evaluates the cause of denials and takes steps to prevent them. Second, denial management creates specific procedures for refiling or appealing claims that are initially denied. Accurate, consistent and compliant coding is key to both of these functions. The process of proactively managing claim denials also reveals a practice's administrative strengths and weaknesses, enabling radiology business managers to streamline processes, eliminate duplicated efforts and shift a larger proportion of the staff's focus from paperwork to servicing patients--all of which are sure to enhance operations and improve practice management and office morale. Accurate coding requires a program of ongoing training and education in both CPT and ICD-9-CM coding. Radiology business managers must make education a top priority for their coding staff. Front office staff, technologists and radiologists should also be familiar with the types of information needed for accurate coding. A good staff training program will also cover the proper use of Advance Beneficiary Notices (ABNs). Registration and coding staff should understand how to determine whether the patient's clinical history meets criteria for Medicare coverage, and how to administer an ABN if the exam is likely to be denied. Staff should also understand the restrictions on use of

  17. ESCADRE and ICARE code systems

    International Nuclear Information System (INIS)

    Reocreux, M.; Gauvain, J.

    1992-01-01

    The French severe accident code development program follows two parallel approaches: the first deals with "integral codes", which are designed to give immediate engineering answers; the second follows a more mechanistic path, in order to allow detailed analysis of experiments, gain a better understanding of the scaling problem, and reach better confidence in plant calculations. In the first approach a complete system has been developed and is being used for practical cases: this is the ESCADRE system. In the second approach, a set of codes dealing first with the primary circuit is being developed: a mechanistic core degradation code, ICARE, has been issued and is being coupled with the advanced thermal-hydraulic code CATHARE. Fission product codes have also been coupled to CATHARE. The "integral" ESCADRE system and the mechanistic ICARE and associated codes are described. Their main characteristics are reviewed and the status of their development and assessment given. Future studies are finally discussed. 36 refs, 4 figs, 1 tab

  18. The ZPIC educational code suite

    Science.gov (United States)

    Calado, R.; Pardal, M.; Ninhos, P.; Helm, A.; Mori, W. B.; Decyk, V. K.; Vieira, J.; Silva, L. O.; Fonseca, R. A.

    2017-10-01

    Particle-in-Cell (PIC) codes are used in almost all areas of plasma physics, such as fusion energy research, plasma accelerators, space physics, ion propulsion, and plasma processing, among many other areas. In this work, we present the ZPIC educational code suite, a new initiative to foster training in plasma physics using computer simulations. Leveraging our expertise and experience from the development and use of the OSIRIS PIC code, we have developed a suite of 1D/2D fully relativistic electromagnetic PIC codes, as well as a 1D electrostatic code. These codes are self-contained and require only a standard laptop/desktop computer with a C compiler to run. The output files are written in a new file format called ZDF that can be easily read using the supplied routines in a number of languages, such as Python and IDL. The code suite also includes a number of example problems that can be used to illustrate several textbook and advanced plasma mechanisms, including instructions for parameter space exploration. We also invite contributions to this repository of test problems, which will be made freely available to the community provided the input files comply with the format defined by the ZPIC team. The code suite is freely available and hosted on GitHub at https://github.com/zambzamb/zpic. Work partially supported by PICKSC.
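
    For readers new to the method, the sketch below is a minimal, self-contained 1D electrostatic PIC loop in Python, written in the spirit of the educational codes described above. It is an independent illustration, not ZPIC itself or its API: cloud-in-cell charge deposition, an FFT Poisson solve on a periodic grid, and a leapfrog particle push, in normalized units.

        # Minimal 1D electrostatic PIC sketch (illustrative only, not ZPIC).
        # Periodic slab, cloud-in-cell weighting, FFT field solve, leapfrog push.
        import numpy as np

        ng, L, npart, dt, steps = 64, 2 * np.pi, 10000, 0.1, 200
        dx = L / ng
        rng = np.random.default_rng(0)
        x = rng.uniform(0, L, npart)          # electron positions
        v = 0.1 * np.sin(x)                   # small velocity perturbation
        qm = -1.0                             # electron charge/mass (normalized)
        weight = ng / npart                   # so mean electron density = 1

        k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
        k[0] = 1.0                            # dummy value; mode 0 is zeroed below

        for _ in range(steps):
            # Deposit charge density with cloud-in-cell weighting
            g = x / dx
            i0 = np.floor(g).astype(int) % ng
            f = g - np.floor(g)
            rho = np.zeros(ng)
            np.add.at(rho, i0, (1 - f))
            np.add.at(rho, (i0 + 1) % ng, f)
            rho = -(rho * weight)             # electrons carry charge -1
            rho -= rho.mean()                 # uniform neutralizing ion background

            # Solve Poisson's equation in Fourier space: E_k = -i rho_k / k
            rho_k = np.fft.fft(rho)
            E_k = -1j * rho_k / k
            E_k[0] = 0.0
            E = np.real(np.fft.ifft(E_k))

            # Gather field at particles and push (leapfrog)
            Ep = (1 - f) * E[i0] + f * E[(i0 + 1) % ng]
            v += qm * Ep * dt
            x = (x + v * dt) % L

        print("finished; mean kinetic energy per particle =", 0.5 * np.mean(v**2))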

  19. Stability analysis by ERATO code

    International Nuclear Information System (INIS)

    Tsunematsu, Toshihide; Takeda, Tatsuoki; Matsuura, Toshihiko; Azumi, Masafumi; Kurita, Gen-ichi

    1979-12-01

    Problems in MHD stability calculations with the ERATO code are described, concerning the convergence of results, equilibrium codes, and machine optimization of the ERATO code. It is concluded that irregularity on a convergence curve is due not to a fault of the ERATO code itself but to an inappropriate choice of the equilibrium calculation meshes. Also described are a code to calculate an equilibrium as a quasi-inverse problem and a code to calculate an equilibrium as the result of a transport process. Optimization of the code with respect to I/O operations reduced both CPU time and I/O time considerably. With the FACOM230-75 APU/CPU multiprocessor system, the performance is about 6 times as high as with the FACOM230-75 CPU, showing the effectiveness of a vector processing computer for this kind of MHD computation. This report is a summary of the material presented at the ERATO workshop 1979 (ORNL), supplemented with some details. (author)

  20. ETR/ITER systems code

    Energy Technology Data Exchange (ETDEWEB)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L. (ed.)

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  1. ETR/ITER systems code

    International Nuclear Information System (INIS)

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs
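
    The optimization feature described in this record, a driver that iterates prescribed variables until a set of prescribed constraint equations is satisfied, can be illustrated with a toy two-variable example. The scaling laws, variables, and targets below are invented stand-ins for illustration only, not TETRA's physics modules.

        # Sketch of the systems-code pattern: a driver iterates design variables
        # until module equations (constraints) are satisfied. The "modules" here
        # are toy stand-ins, not actual TETRA physics.
        from scipy.optimize import fsolve

        def residuals(design):
            R, B = design                     # hypothetical variables: radius, field
            fusion_power = 0.5 * B**2 * R**3  # toy scaling law (assumption)
            coil_stress = B**2 / R            # toy engineering relation (assumption)
            return [fusion_power - 100.0,     # constraint: target power
                    coil_stress - 4.0]        # constraint: stress limit

        R, B = fsolve(residuals, x0=[3.0, 5.0])
        print(f"converged design: R = {R:.3f}, B = {B:.3f}")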

  2. LFSC - Linac Feedback Simulation Code

    International Nuclear Information System (INIS)

    Ivanov, Valentin; Fermilab

    2008-01-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on an earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulations of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback on timescales corresponding to 5-100 Hz and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab on the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate luminosity performance. The set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output
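
    A minimal sketch of the pulse-to-pulse feedback concept that such a code simulates is given below. This is a generic discrete-time loop written for this record, not LFSC's Matlab implementation: each machine pulse, a measured beam offset is fed back through a gain to a corrector, suppressing a slow ground-motion drift plus pulse-to-pulse noise.

        # Generic pulse-to-pulse feedback sketch (toy model, not LFSC itself).
        import numpy as np

        def run(gain, pulses=300, seed=0):
            """Return the rms BPM reading with a one-pulse-delay corrector loop."""
            rng = np.random.default_rng(seed)
            corrector, readings = 0.0, []
            for n in range(pulses):
                drift = 0.02 * n                       # slow ground-motion drift
                noise = 0.05 * rng.standard_normal()   # pulse-to-pulse jitter
                measured = drift + corrector + noise   # beam offset this pulse
                corrector -= gain * measured           # integral-like update
                readings.append(measured)
            return np.sqrt(np.mean(np.square(readings)))

        print(f"feedback off: rms offset = {run(0.0):.2f}")
        print(f"feedback on:  rms offset = {run(0.3):.2f}")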

  3. Coded communications with nonideal interleaving

    Science.gov (United States)

    Laufer, Shaul

    1991-02-01

    Burst error channels - a type of block interference channel - feature increasing capacity but decreasing cutoff rate as the memory rate increases. Despite the large capacity, there is degradation in the performance of practical coding schemes when the memory length is excessive. A short-coding error parameter (SCEP) was introduced, which bounds the average decoding-error probability for codes shorter than the block interference length. The performance of a coded slow frequency-hopping communication channel is analyzed for worst-case partial-band jamming and nonideal interleaving, by deriving expressions for the capacity and cutoff rate. The capacity and cutoff rate, respectively, are shown to approach and depart from those of a memoryless channel corresponding to the transmission of a single code letter per hop. For multiaccess communications over a slot-synchronized collision channel without feedback, the channel was considered as a block interference channel with memory length equal to the number of letters transmitted in each slot. The effects of asymmetrical background noise and a reduced collision error rate were studied as aspects of real communications. The performance of specific convolutional and Reed-Solomon codes was examined for slow frequency-hopping systems with nonideal interleaving. An upper bound is presented for the performance of a Viterbi decoder for a convolutional code with nonideal interleaving, and a soft-decision diversity combining technique is introduced.
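
    The memoryless reference point mentioned above (one code letter transmitted per hop) can be made concrete with the standard binary symmetric channel formulas. The snippet below is an illustration added for this record, not taken from the paper: it evaluates the textbook BSC capacity C = 1 - H2(p) and cutoff rate R0 = 1 - log2(1 + 2*sqrt(p(1-p))), showing the gap between the two quantities.

        # Capacity vs cutoff rate for a memoryless BSC (standard formulas).
        import math

        def h2(p):
            """Binary entropy in bits."""
            return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

        def bsc_capacity(p):
            return 1.0 - h2(p)

        def bsc_cutoff_rate(p):
            return 1.0 - math.log2(1.0 + 2.0 * math.sqrt(p * (1.0 - p)))

        for p in (0.01, 0.05, 0.1):
            print(f"p={p}: C={bsc_capacity(p):.3f} bits, "
                  f"R0={bsc_cutoff_rate(p):.3f} bits")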

  4. Coding and decoding for code division multiple user communication systems

    Science.gov (United States)

    Healy, T. J.

    1985-01-01

    A new algorithm is introduced which decodes code division multiple user communication signals. The algorithm makes use of the distinctive form or pattern of each signal to separate it from the composite signal created by the multiple users. Although the algorithm is presented in terms of frequency-hopped signals, the actual transmitter modulator can use any of the existing digital modulation techniques. The algorithm is applicable to error-free codes or to codes where controlled interference is permitted. It can be used when block synchronization is assumed, and in some cases when it is not. The paper also discusses briefly some of the codes which can be used in connection with the algorithm, and relates the algorithm to past studies which use other approaches to the same problem.
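
    As a hedged sketch of separating users from a composite signal by their distinctive patterns (a standard correlation/despreading illustration, not the paper's specific algorithm), the fragment below spreads one data bit per user with an orthogonal code and recovers each bit by correlating the superposed signal with that user's code.

        # Correlation-based user separation sketch (generic, not the paper's method).
        import numpy as np

        codes = np.array([[1,  1,  1,  1],     # orthogonal (Walsh) spreading codes
                          [1, -1,  1, -1],
                          [1,  1, -1, -1]])
        bits = np.array([1, -1, 1])            # one data bit per user

        composite = bits @ codes               # superposition on the channel

        # Despread: correlate with each user's code and normalize
        recovered = np.sign(composite @ codes.T / codes.shape[1])
        assert (recovered == bits).all()
        print("recovered bits:", recovered)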

  5. GAMERA - The New Magnetospheric Code

    Science.gov (United States)

    Lyon, J.; Sorathia, K.; Zhang, B.; Merkin, V. G.; Wiltberger, M. J.; Daldorff, L. K. S.

    2017-12-01

    The Lyon-Fedder-Mobarry (LFM) code has been a main-line magnetospheric simulation code for 30 years. The code base, designed in the age of memory-to-memory vector machines, is still in wide use for science production but needs upgrading to ensure long-term sustainability. In this presentation, we will discuss our recent efforts to update and improve that code base and also highlight some recent results. The new project GAMERA, Grid Agnostic MHD for Extended Research Applications, has kept the original design characteristics of the LFM and made significant improvements. The original design included high-order numerical differencing with very aggressive limiting, the ability to use arbitrary, but logically rectangular, grids, and maintenance of div B = 0 through the use of the Yee grid. Significant improvements include high-order upwinding and a non-clipping limiter. One other improvement with wider applicability is an improved averaging technique for the singularities in polar and spherical grids. The new code adopts a hybrid structure: multi-threaded OpenMP with an overarching MPI layer for large-scale and coupled applications. The MPI layer uses a combination of standard MPI and the Global Array Toolkit from PNL to provide a lightweight mechanism for coupling codes together concurrently. The single-processor code is highly efficient and can run magnetospheric simulations at the default CCMC resolution faster than real time on a MacBook Pro. We have run the new code through the Athena suite of tests, and the results compare favorably with the codes available to the astrophysics community. LFM/GAMERA has been applied to many different situations, ranging from the inner and outer heliosphere to the magnetospheres of Venus, the Earth, Jupiter and Saturn. We present example results for the Earth's magnetosphere including a coupled ring current (RCM), the magnetospheres of Jupiter and Saturn, and the inner heliosphere.
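
    The Yee-grid property mentioned above can be demonstrated in a few lines: if the face-centered B field is initialized as the discrete curl of an edge-centered vector potential, the cell-wise discrete divergence telescopes to zero, and a curl-form induction update then preserves it. The grid shapes and random values below are illustrative, not GAMERA's internal data structures.

        # div B = 0 on a staggered (Yee) mesh: B = discrete curl of an
        # edge-centered vector potential gives zero cell-wise divergence to
        # machine precision. (Illustrative sketch, not GAMERA internals.)
        import numpy as np

        nx = ny = nz = 16
        d = 1.0                                  # uniform grid spacing
        rng = np.random.default_rng(1)
        # Edge-centered vector potential components
        Ax = rng.standard_normal((nx, ny + 1, nz + 1))
        Ay = rng.standard_normal((nx + 1, ny, nz + 1))
        Az = rng.standard_normal((nx + 1, ny + 1, nz))

        # Face-centered B = discrete curl of A
        Bx = (Az[:, 1:, :] - Az[:, :-1, :]) / d - (Ay[:, :, 1:] - Ay[:, :, :-1]) / d
        By = (Ax[:, :, 1:] - Ax[:, :, :-1]) / d - (Az[1:, :, :] - Az[:-1, :, :]) / d
        Bz = (Ay[1:, :, :] - Ay[:-1, :, :]) / d - (Ax[:, 1:, :] - Ax[:, :-1, :]) / d

        # Cell-centered discrete divergence telescopes to zero identically
        divB = ((Bx[1:, :, :] - Bx[:-1, :, :])
                + (By[:, 1:, :] - By[:, :-1, :])
                + (Bz[:, :, 1:] - Bz[:, :, :-1])) / d
        print(np.abs(divB).max())                # ~1e-15, machine precision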

  6. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 provides many new capabilities and significant improvements of existing features. New capabilities include:
    • ENDF/B-VII.1 nuclear data libraries CE and MG with enhanced group structures,
    • Neutron covariance data based on ENDF/B-VII.1 and supplemented with ORNL data,
    • Covariance data for fission product yields and decay constants,
    • Stochastic uncertainty and correlation quantification for any SCALE sequence with Sampler,
    • Parallel calculations with KENO,
    • Problem-dependent temperature corrections for CE calculations,
    • CE shielding and criticality accident alarm system analysis with MAVRIC,
    • CE

  7. Code-Mixing and Code Switching in the Process of Learning

    Directory of Open Access Journals (Sweden)

    Diyah Atiek Mustikawati

    2016-09-01

    Full Text Available This study aimed to describe the forms of code switching and code mixing found in teaching and learning activities in the classroom, as well as the factors influencing those forms of code switching and code mixing. The research takes the form of a descriptive qualitative case study conducted at the Al Mawaddah Boarding School, Ponorogo. Based on the analysis and discussion stated in the previous chapters, the forms of code mixing and code switching in learning activities at the Al Mawaddah Boarding School lie in the use of Javanese, Arabic, English and Indonesian: the insertion of words, phrases, idioms, nouns, adjectives, clauses, and sentences. The deciding factors for code mixing in the learning process include: identification of the role; the desire to explain and interpret; sources from the original language and its variations; and sources from a foreign language. The deciding factors for code switching in the learning process include: the speaker (O1), the interlocutor (O2), the presence of a third person (O3), the topic of conversation, evoking a sense of humour, and prestige. The significance of this study is to allow readers to see the use of language in a multilingual society, especially in the Al Mawaddah boarding school, regarding the rules and characteristics of language variation in teaching and learning activities in the classroom. Furthermore, the results of this research will provide input to the ustadz/ustadzah and students in developing oral communication skills and effective teaching and learning strategies in boarding schools.

  8. Neural Decoder for Topological Codes

    Science.gov (United States)

    Torlai, Giacomo; Melko, Roger G.

    2017-07-01

    We present an algorithm for error correction in topological codes that exploits modern machine learning techniques. Our decoder is constructed from a stochastic neural network called a Boltzmann machine, of the type extensively used in deep learning. We provide a general prescription for the training of the network and a decoding strategy that is applicable to a wide variety of stabilizer codes with very little specialization. We demonstrate the neural decoder numerically on the well-known two-dimensional toric code with phase-flip errors.
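
    To make the decoding task concrete, the sketch below generates the kind of data such a decoder is trained on: i.i.d. phase-flip (Z) errors on an L x L toric code and the resulting star-operator syndrome. This is an illustration added for this record, not the paper's Boltzmann-machine decoder. Qubits live on lattice edges, and a vertex stabilizer fires when an odd number of its four incident edges carry a Z error.

        # Toric-code syndrome generation sketch (not the paper's neural decoder).
        import numpy as np

        L, p = 8, 0.08                      # lattice size, phase-flip probability
        rng = np.random.default_rng(42)
        hz = rng.random((L, L)) < p         # Z errors on horizontal edges
        vz = rng.random((L, L)) < p         # Z errors on vertical edges

        # Syndrome at vertex (i, j): parity of errors on the four incident
        # edges, with periodic (toric) boundaries handled by np.roll.
        syndrome = (hz ^ np.roll(hz, 1, axis=1)
                    ^ vz ^ np.roll(vz, 1, axis=0))

        print("defects at vertices:", np.argwhere(syndrome))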

  9. Coding chaotic billiards. Pt. 3

    International Nuclear Information System (INIS)

    Ullmo, D.; Giannoni, M.J.

    1993-01-01

    A non-tiling compact billiard defined on the pseudosphere is studied 'a la Morse coding'. As for most bounded systems, the coding is not exact. However, two sets of approximate grammar rules can be obtained, one specifying forbidden codes and the other allowed ones. In between, some sequences remain in the 'unknown' zone, but their relative amount can be reduced to zero if one lets the length of the approximate grammar rules go to infinity. The relationship between these approximate grammar rules and the 'pruning front' introduced by Cvitanovic et al. is discussed. (authors). 13 refs., 10 figs., 1 tab.
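
    The two-sided grammar idea can be illustrated with a toy classifier (the rule words below are hypothetical, not the billiard's actual symbolic dynamics): a symbol sequence is rejected if it contains a forbidden word, accepted if every window matches an allowed word, and otherwise left in the unknown zone, which shrinks as the rule words are lengthened.

        # Toy approximate-grammar classifier (hypothetical rules for illustration).
        FORBIDDEN = {"aa", "bcb"}                       # words assumed never to occur
        ALLOWED = {"ab", "ba", "bc", "cb", "ca", "ac"}  # length-2 words assumed to occur

        def classify(seq, m=2):
            """Label a symbol sequence as forbidden, allowed, or unknown."""
            if any(w in seq for w in FORBIDDEN):
                return "forbidden"
            windows = {seq[i:i + m] for i in range(len(seq) - m + 1)}
            return "allowed" if windows <= ALLOWED else "unknown"

        for s in ("abab", "abaab", "abba"):
            print(s, "->", classify(s))   # allowed, forbidden, unknown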

  10. Iterative nonlinear unfolding code: TWOGO

    International Nuclear Information System (INIS)

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements which were performed inside the containments of pressurized water reactors
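
    A generic iterative-unfolding sketch in the spirit described above is given below. This is a standard MLEM-style multiplicative update, not TWOGO's actual pair of alternating schemes: R[i, j] is the response of sphere i to energy bin j, m[i] are the measured sphere readings, and phi is the unfolded spectrum, iterated until the calculated responses match the measurement.

        # MLEM-style multiplicative unfolding (generic sketch, not TWOGO).
        import numpy as np

        def unfold(R, m, iters=500):
            phi = np.ones(R.shape[1])            # flat starting spectrum
            for _ in range(iters):
                c = R @ phi                      # calculated sphere responses
                phi *= (R.T @ (m / c)) / R.sum(axis=0)
                if np.allclose(c, m, rtol=1e-4): # stop once responses agree
                    break
            return phi

        # Toy demonstration with a made-up 5-sphere, 8-bin response matrix
        rng = np.random.default_rng(3)
        R = rng.random((5, 8))
        true_phi = rng.random(8)
        m = R @ true_phi
        print(unfold(R, m))   # a spectrum consistent with the measurements m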

  11. Atlas C++ Coding Standard Specification

    CERN Document Server

    Albrand, S; Barberis, D; Bosman, M; Jones, B; Stavrianakou, M; Arnault, C; Candlin, D; Candlin, R; Franck, E; Hansl-Kozanecka, Traudl; Malon, D; Qian, S; Quarrie, D; Schaffer, R D

    2001-01-01

    This document defines the ATLAS C++ coding standard, which should be adhered to when writing C++ code. It has been adapted from the original "PST Coding Standard" document (http://pst.cern.ch/HandBookWorkBook/Handbook/Programming/programming.html) CERN-UCO/1999/207. The "ATLAS standard" comprises modifications, further justification and examples for some of the rules in the original PST document. All changes were discussed in the ATLAS Offline Software Quality Control Group, and feedback from the collaboration was taken into account in the "current" version.

  12. Writing the Live Coding Book

    DEFF Research Database (Denmark)

    Blackwell, Alan; Cox, Geoff; Lee, Sang Wong

    2016-01-01

    This paper is a speculation on the relationship between coding and writing, and the ways in which technical innovations and capabilities enable us to rethink each in terms of the other. As a case study, we draw on recent experiences of preparing a book on live coding, which integrates a wide range of personal, historical, technical and critical perspectives. This book project has been both experimental and reflective, in a manner that allows us to draw on critical understanding of both code and writing, and point to the potential for new practices in the future.

  13. LiveCode mobile development

    CERN Document Server

    Lavieri, Edward D

    2013-01-01

    A practical guide written in a tutorial style, "LiveCode Mobile Development Hotshot" walks you step by step through 10 individual projects. Every project is divided into sub-tasks to make learning more organized and easy to follow, with explanations, diagrams, screenshots, and downloadable material. This book is great for anyone who wants to develop mobile applications using LiveCode. You should be familiar with LiveCode and have access to a smartphone. You are not expected to know how to create graphics or audio clips.

  14. Network Coding Fundamentals and Applications

    CERN Document Server

    Medard, Muriel

    2011-01-01

    Network coding is a field of information and coding theory and is a method of attaining maximum information flow in a network. This book is an ideal introduction for the communications and network engineer, working in research and development, who needs an intuitive introduction to network coding and to the increased performance and reliability it offers in many applications. This book is an ideal introduction for the research and development communications and network engineer who needs an intuitive introduction to the theory and wishes to understand the increased performance and reliability it offers...
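
    The canonical example of the maximum-flow gain is the two-source, two-sink butterfly network, sketched below in Python (an illustration added for this record, not taken from the book): XOR coding at the bottleneck node lets both sinks recover both packets in a single network use, which routing alone cannot achieve.

        # Butterfly-network XOR coding example (classic textbook illustration).
        a, b = 0b1011, 0b0110          # two source packets (4-bit toy payloads)

        coded = a ^ b                  # bottleneck node forwards a XOR b

        # Sink 1 receives a directly plus the coded packet; sink 2 receives b.
        b_at_sink1 = a ^ coded
        a_at_sink2 = b ^ coded
        assert (b_at_sink1, a_at_sink2) == (b, a)
        print(f"sink 1 recovers b = {b_at_sink1:04b}, "
              f"sink 2 recovers a = {a_at_sink2:04b}")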

  15. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences?similar to algebraic coding,?and also briefly discuss the main results following the?other approach,?that uses the theory of rank metric codes for network error correction of representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  16. Tree Coding of Bilevel Images

    DEFF Research Database (Denmark)

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general-purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional probabilities to an arithmetic coder. The free tree coder, which is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding...
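
    The core mechanism can be sketched in a few lines: each pixel's probability of being set is estimated from a causal context (template) of previously coded neighbours, and those estimates would normally drive an arithmetic coder, omitted here. The 3-pixel template and the adaptive counters below are illustrative simplifications, not the JBIG specification.

        # Context-model sketch for bilevel image coding (simplified, not JBIG).
        import numpy as np

        def context_stats(img):
            """Accumulate adaptive counts P(pixel | 3-pixel causal context)."""
            h, w = img.shape
            counts = np.ones((8, 2), dtype=int)     # Laplace-smoothed counters
            bits_estimate = 0.0
            for y in range(1, h):
                for x in range(1, w - 1):
                    ctx = ((img[y - 1, x - 1] << 2)
                           | (img[y - 1, x] << 1)
                           | img[y, x - 1])
                    p = counts[ctx, img[y, x]] / counts[ctx].sum()
                    bits_estimate += -np.log2(p)    # ideal arithmetic-code cost
                    counts[ctx, img[y, x]] += 1
            return counts, bits_estimate

        img = (np.random.default_rng(7).random((64, 64)) < 0.1).astype(int)
        _, bits = context_stats(img)
        print(f"estimated code length: {bits:.0f} bits for {img.size} pixels")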

  17. Studies on DANESS Code Modeling

    International Nuclear Information System (INIS)

    Jeong, Chang Joon

    2009-09-01

    A DANESS code modeling study has been performed. The DANESS code is widely used for dynamic fuel cycle analysis. The Korea Atomic Energy Research Institute (KAERI) has used the DANESS code for Korean national nuclear fuel cycle scenario analysis. In this report, the important models, such as the Energy-Demand Scenario Model, the New Reactor Capacity Decision Model, the Reactor and Fuel Cycle Facility History Model, and the Fuel Cycle Model, are investigated. Some models in the interface module have been refined and inserted for the Korean nuclear fuel cycle model. Application studies have also been performed for GNEP cases and for US fast reactor scenarios with various conversion ratios

  18. Beam-dynamics codes used at DARHT

    Energy Technology Data Exchange (ETDEWEB)

    Ekdahl, Jr., Carl August [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-01

    Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories:
    For beam production – Tricomp Trak orbit tracking code; LSP particle-in-cell (PIC) code.
    For beam transport and acceleration – XTR static envelope and centroid code; LAMDA time-resolved envelope and centroid code; LSP-Slice PIC code.
    For coasting-beam transport to target – LAMDA time-resolved envelope code; LSP-Slice PIC code.
    These codes are also being used to inform the design of Scorpius.

  19. Introduction to coding and information theory

    CERN Document Server

    Roman, Steven

    1997-01-01

    This book is intended to introduce coding theory and information theory to undergraduate students of mathematics and computer science. It begins with a review of probability theory as applied to finite sample spaces and a general introduction to the nature and types of codes. The two subsequent chapters discuss information theory: efficiency of codes, the entropy of information sources, and Shannon's Noiseless Coding Theorem. The remaining three chapters deal with coding theory: communication channels, decoding in the presence of errors, the general theory of linear codes, and such specific codes as Hamming codes, the simplex codes, and many others.
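
    As a worked example of one of the codes named above, the sketch below encodes four data bits with the Hamming (7,4) code and corrects a single injected bit error. The generator matrix shown is one common choice; the parity-check matrix columns are the binary numbers 1 through 7, so the syndrome directly names the flipped position.

        # Hamming (7,4) single-error correction (standard textbook construction).
        import numpy as np

        G = np.array([[1, 1, 0, 1],            # generator matrix (one common choice)
                      [1, 0, 1, 1],
                      [1, 0, 0, 0],
                      [0, 1, 1, 1],
                      [0, 1, 0, 0],
                      [0, 0, 1, 0],
                      [0, 0, 0, 1]])
        H = np.array([[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix: columns are 1..7
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        msg = np.array([1, 0, 1, 1])
        codeword = G @ msg % 2

        received = codeword.copy()
        received[4] ^= 1                       # inject a single bit error

        syndrome = H @ received % 2
        pos = int("".join(map(str, syndrome[::-1])), 2)  # error position, 1-based
        if pos:
            received[pos - 1] ^= 1             # correct the flipped bit
        assert (received == codeword).all()
        print("corrected codeword:", received)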

  20. Allegheny County Zip Code Boundaries

    Data.gov (United States)

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — This dataset demarcates the zip code boundaries that lie within Allegheny County. If viewing this description on the Western Pennsylvania Regional Data Center’s open...