WorldWideScience

Sample records for lanl roadrunner models

  1. Trailblazing with Roadrunner

    Energy Technology Data Exchange (ETDEWEB)

    Henning, Paul J [Los Alamos National Laboratory; White, Andrew B [Los Alamos National Laboratory

    2009-01-01

    In June 2008, a new supercomputer broke the petaflop/s performance barrier, more than doubling the computational performance of the next fastest machine on the TOP500 Supercomputing Sites list (http://top500.org). This computer, named Roadrunner, is the result of an intensive collaboration between IBM and Los Alamos National Laboratory, where it is now located. Aside from its performance, Roadrunner has two distinguishing characteristics: a very good power/performance ratio and a 'hybrid' computer architecture that mixes several types of processors. By November 2008, the traditionally architected Jaguar computer at Oak Ridge National Laboratory was neck-and-neck with Roadrunner in the performance race, but it required almost 2.8 times the electric power of Roadrunner. This difference translates into millions of dollars per year in operating costs.
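
    As a rough illustration of how a 2.8x power ratio "translates into millions of dollars per year", the arithmetic below uses an assumed absolute power draw and electricity price; neither figure comes from the record, only the 2.8 ratio does.

```python
# Back-of-the-envelope check of the "millions of dollars per year" claim.
# The 2.8x power ratio comes from the abstract; the absolute power draw and
# electricity price are assumptions for illustration only.
roadrunner_mw = 2.35                 # assumed Roadrunner power draw (MW)
jaguar_mw = 2.8 * roadrunner_mw      # per the abstract's ratio
price_per_kwh = 0.08                 # assumed electricity price ($/kWh)

hours_per_year = 24 * 365
extra_kwh = (jaguar_mw - roadrunner_mw) * 1000 * hours_per_year
extra_cost = extra_kwh * price_per_kwh
print(f"extra energy: {extra_kwh / 1e6:.1f} GWh/yr, "
      f"extra cost: ${extra_cost / 1e6:.1f}M/yr")
```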

  2. Conceptual Model of Climate Change Impacts at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Dewart, Jean Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-17

    Goal 9 of the LANL FY15 Site Sustainability Plan (LANL 2014a) addresses Climate Change Adaptation. As part of Goal 9, the plan reviews many of the individual programs the Laboratory has initiated over the past 20 years to address climate change impacts to LANL (e.g. Wildland Fire Management Plan, Forest Management Plan, etc.). However, at that time, LANL did not yet have a comprehensive approach to climate change adaptation. To fill this gap, the FY15 Work Plan for the LANL Long Term Strategy for Environmental Stewardship and Sustainability (LANL 2015) included a goal of (1) establishing a comprehensive conceptual model of climate change impacts at LANL and (2) establishing specific climate change indices to measure climate change and impacts at Los Alamos. Establishing a conceptual model of climate change impacts will demonstrate that the Laboratory is addressing climate change impacts in a comprehensive manner. This paper fulfills the requirement of goal 1. The establishment of specific indices of climate change at Los Alamos (goal 2), will improve our ability to determine climate change vulnerabilities and assess risk. Future work will include prioritizing risks, evaluating options/technologies/costs, and where appropriate, taking actions. To develop a comprehensive conceptual model of climate change impacts, we selected the framework provided in the National Oceanic and Atmospheric Administration (NOAA) Climate Resilience Toolkit (http://toolkit.climate.gov/).

  3. VEPCO network model reconciliation of LANL and MZA model data

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1992-12-15

    The LANL DC load flow model of the VEPCO transmission network shows 210 more substations than the AC load flow model produced by MZA utility Consultants. MZA was requested to determine the source of the difference. The AC load flow model used for this study utilizes 2 standard network algorithms (Decoupled or Newton). The solution time of each is affected by the number of substations. The more substations included, the longer the model will take to solve. In addition, the ability of the algorithms to converge to a solution is affected by line loadings and characteristics. Convergence is inhibited by numerous lightly loaded and electrically short lines. The MZA model reduces the total substations to 343 by creating equivalent loads and generation. Most of the omitted substations are lightly loaded and rated at 115 kV. The MZA model includes 16 substations not included in the LANL model. These represent new generation including Non-Utility Generator (NUG) sites, additional substations and an intertie (Wake, to CP and L). This report also contains data from the Italian State AC power flow model and the Duke Power Company AC flow model.

  4. LANL*V2.0: global modeling and validation

    Directory of Open Access Journals (Sweden)

    S. Zaharia

    2011-08-01

    We describe in this paper the new version of LANL*, an artificial neural network (ANN) for calculating the magnetic drift invariant L*. This quantity is used for modeling radiation belt dynamics and for space weather applications. We have implemented the following enhancements in the new version: (1) we have removed the limitation to geosynchronous orbit, and the model can now be used for a much larger region; (2) the new version is based on the improved magnetic field model by Tsyganenko and Sitnov (2005) (TS05) instead of the older model by Tsyganenko et al. (2003). We have validated the model and compared our results to L* calculations with the TS05 model based on ephemerides for CRRES, Polar, GPS, a LANL geosynchronous satellite, and a virtual RBSP-type orbit. We find that the neural network performs very well for all these orbits, with typically small errors in ΔL*. The LANL* V2.0 artificial neural network is orders of magnitude faster than traditional numerical field line integration techniques with the TS05 model. It has applications to real-time radiation belt forecasting, analysis of data sets involving decades of satellite observations, and other problems in space weather.
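
    The record describes a neural-network surrogate that replaces numerical drift-shell integration. Below is a minimal, hypothetical sketch of that surrogate idea using scikit-learn's MLPRegressor; the feature list, synthetic training data, and network size are assumptions for illustration and are not the actual LANL* V2.0 inputs or architecture, which would be trained on L* values computed with TS05.

```python
# Minimal sketch of the surrogate idea behind LANL*: train a small feed-forward
# neural network to map field-model inputs to L*, replacing expensive drift-shell
# integration. Features and targets here are synthetic placeholders only.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: spacecraft position plus a few magnetospheric
# driving parameters (e.g., Dst, solar-wind pressure).
X = rng.uniform(-1.0, 1.0, size=(n, 5))
# Placeholder target standing in for L* values computed by numerical
# field-line integration with a model such as TS05.
y = 4.0 + X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
surrogate = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
surrogate.fit(X_tr, y_tr)
print("held-out RMSE:", np.sqrt(np.mean((surrogate.predict(X_te) - y_te) ** 2)))
```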

  5. Advances in petascale kinetic plasma simulation with VPIC and Roadrunner

    Energy Technology Data Exchange (ETDEWEB)

    Bowers, Kevin J [Los Alamos National Laboratory; Albright, Brian J [Los Alamos National Laboratory; Yin, Lin [Los Alamos National Laboratory; Daughton, William S [Los Alamos National Laboratory; Roytershteyn, Vadim [Los Alamos National Laboratory; Kwan, Thomas J T [Los Alamos National Laboratory

    2009-01-01

    VPIC, a first-principles 3d electromagnetic charge-conserving relativistic kinetic particle-in-cell (PIC) code, was recently adapted to run on Los Alamos's Roadrunner, the first supercomputer to break a petaflop (10{sup 15} floating point operations per second) in the TOP500 supercomputer performance rankings. The authors give a brief overview of the modeling capabilities and optimization techniques used in VPIC and the computational characteristics of petascale supercomputers like Roadrunner. They then discuss three applications enabled by VPIC's unprecedented performance on Roadrunner: modeling laser-plasma interaction in upcoming inertial confinement fusion experiments at the National Ignition Facility (NIF), modeling short-pulse laser GeV ion acceleration, and modeling reconnection in magnetic confinement fusion experiments.

  6. Optimizing the Forward Algorithm for Hidden Markov Model on IBM Roadrunner clusters

    Directory of Open Access Journals (Sweden)

    SOIMAN, S.-I.

    2015-05-01

    In this paper we present a parallel solution of the Forward Algorithm for Hidden Markov Models. The Forward algorithm computes the probability of a hidden state of a Markov model at a certain time, and this process is recursive. The whole process requires large computational resources for models with a large number of states and long observation sequences. Our solution for reducing the computational time is a multilevel parallelization of the Forward algorithm. Two types of cores, located on the same chip of the PowerXCell 8i processor, were used in our implementation, one for each level of parallelization. This hybrid processor architecture allowed us to obtain a speedup factor of over 40 relative to the sequential algorithm for a model with 24 states and 25 million observable symbols. Experimental results showed that the parallel Forward algorithm can evaluate the probability of an observation sequence on a hidden Markov model 40 times faster than the classic one does. Based on the performance obtained, we demonstrate the applicability of this parallel implementation of the Forward algorithm to complex problems such as large-vocabulary speech recognition.
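
    For reference, a sequential (non-parallel) version of the Forward algorithm described above can be written compactly; this sketch uses per-step scaling for numerical stability and does not reproduce the paper's multilevel PowerXCell 8i parallelization.

```python
# Sequential reference implementation of the HMM Forward algorithm, with
# per-step scaling to avoid underflow on long observation sequences.
import numpy as np

def forward(pi, A, B, obs):
    """pi: (N,) initial probs, A: (N,N) transition, B: (N,M) emission,
    obs: sequence of observation indices. Returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    log_prob = 0.0
    for t in range(1, len(obs)):
        c = alpha.sum()                 # scaling factor for numerical stability
        log_prob += np.log(c)
        alpha = (alpha / c) @ A * B[:, obs[t]]
    log_prob += np.log(alpha.sum())
    return log_prob

# Toy 2-state model with 3 observable symbols.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(forward(pi, A, B, [0, 1, 2, 1]))
```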

  7. Experiences from the Roadrunner petascale hybrid systems

    Energy Technology Data Exchange (ETDEWEB)

    Kerbyson, Darren J [Los Alamos National Laboratory; Pakin, Scott [Los Alamos National Laboratory; Lang, Mike [Los Alamos National Laboratory; Sancho Pitarch, Jose C [Los Alamos National Laboratory; Davis, Kei [Los Alamos National Laboratory; Barker, Kevin J [Los Alamos National Laboratory; Peraza, Josh [Los Alamos National Laboratory

    2010-01-01

    The combination of flexible microprocessors (AMD Opterons) with high-performing accelerators (IBM PowerXCell 8i) resulted in the extremely powerful Roadrunner system. Many challenges in both hardware and software were overcome to achieve its goals. In this talk we detail some of the experiences in achieving performance on the Roadrunner system. In particular we examine several implementations of the kernel application Sweep3D: a work-queue approach, a more portable thread-building-blocks approach, and an MPI-on-the-accelerator approach.

  8. Using the Internet in Middle Schools: A Model for Success. A Collaborative Effort between Los Alamos National Laboratory (LANL) and Los Alamos Middle School (LAMS).

    Science.gov (United States)

    Addessio, Barbara K.; And Others

    Los Alamos National Laboratory (LANL) developed a model for school networking using Los Alamos Middle School as a testbed. The project was a collaborative effort between the school and the laboratory. The school secured administrative funding for hardware and software; and LANL provided the network architecture, installation, consulting, and…

  9. Home range dynamics, habitat selection, and survival of Greater Roadrunners

    Science.gov (United States)

    Kelley, S.W.; Ransom, D.; Butcher, J.A.; Schulz, G.G.; Surber, B.W.; Pinchak, W.E.; Santamaria, C.A.; Hurtado, L.A.

    2011-01-01

    Greater Roadrunners (Geococcyx californianus) are common, poorly studied birds of arid and semi-arid ecosystems in the southwestern United States. Conservation of this avian predator requires a detailed understanding of their movements and spatial requirements that is currently lacking. From 2006 to 2009, we quantified home-range and core area sizes and overlap, habitat selection, and survival of roadrunners (N = 14 males and 20 females) in north-central Texas using radio-telemetry and fixed kernel estimators. Median home-range and core-area sizes were 90.4 ha and 19.2 ha for males and 80.1 ha and 16.7 ha for females, respectively. The size of home range and core areas did not differ significantly by either sex or season. Our home range estimates were twice as large (x̄ = 108.9 ha) as earlier published estimates based on visual observations (x̄ = 28-50 ha). Mean percent overlap was 38.4% for home ranges and 13.7% for core areas. Male roadrunners preferred mesquite woodland and mesquite savanna cover types, and avoided the grass-forb cover type. Female roadrunners preferred mesquite savanna and riparian woodland cover types, and avoided grass-forb habitat. Kaplan-Meier annual survival probabilities for females (0.452 ± 0.118 [SE]) were twice that estimated for males (0.210 ± 0.108), but this difference was not significant. Mortality rates of male roadrunners were higher than those of females during the spring when males call from elevated perches, court females, and chase competing males. Current land use practices that target woody-shrub removal to enhance livestock forage production could be detrimental to roadrunner populations by reducing availability of mesquite woodland and mesquite savanna habitat required for nesting and roosting and increasing the amount of grass-forb habitat that roadrunners avoid. ©2011 The Authors. Journal of Field Ornithology ©2011 Association of Field Ornithologists.

  10. LANL12-RS-108J Report on Device Modeler Testing of the Device Modeler Tool Kit (DMTK) in FY14

    Energy Technology Data Exchange (ETDEWEB)

    Temple, Brian Allen [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Pimentel, David A. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)

    2014-09-28

    This document covers the various testing and modifications of the Device Modeler Tool Kit (DMTK) for project LANL12-RS-108J in FY14. The testing has comprised different device modelers and trainees for device modeling using DMTK on the secure network for a few test problems. Most of these problems have been synthetic data problems. There has been a local secure network training drill where one of the trainees used DMTK on real data. DMTK has also been used on a laptop for a deployed real data training drill. Once DMTK is transferred to the home team, it will be used for additional training drills (TDs) involving real data.

  11. LANL Meteorology Program

    Energy Technology Data Exchange (ETDEWEB)

    Dewart, Jean Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-09

    The goal of the Meteorology Program is to provide all routine meteorology measurements for LANL operational requirements. This report discusses the program, its routine operations, and other services.

  12. The role of a detailed aqueous phase source release model in the LANL area G performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Shuman, R.; Hollis, D.K. [Los Alamos National Lab., NM (United States)] [and others

    1995-12-31

    A preliminary draft of the Performance Assessment for the Los Alamos National Laboratory (LANL) low-level radioactive waste disposal facility at Area G is currently being completed as required by Department of Energy orders. A detailed review of the inventory data base records and the existing models for source release led to the development of a new modeling capability to describe the liquid phase transport from the waste package volumes. Nuclide quantities are sorted into four waste package release categories for modeling: rapid release, soil, concrete/sludge, and corrosion. Geochemistry for the waste packages was evaluated in terms of the equilibrium distribution coefficients (Kd) and elemental solubility limits (Csl), interpolated from the literature. Percolation calculations for the base case closure cover show a highly skewed distribution with an average of 4 mm/yr percolation from the disposal unit bottom. The waste release model is based on a compartment representation of the package efflux, and depends on package size, percolation rate or Darcy flux, retardation coefficient, and moisture content.
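
    As an illustration of how the quantities named above (percolation or Darcy flux, Kd, moisture content, retardation) combine in a simple compartment release calculation, here is a hedged sketch; all parameter values except the 4 mm/yr base-case percolation are hypothetical, and the actual Area G source-release model is more detailed.

```python
# Illustrative compartment release calculation: advective (aqueous) release
# from a waste package, slowed by a retardation factor built from Kd.
# All parameter values are hypothetical except the 4 mm/yr percolation rate.
rho_b = 1.4e3      # bulk density (kg/m^3), assumed
theta = 0.10       # volumetric moisture content (-), assumed
kd = 0.05          # sorption distribution coefficient (m^3/kg), assumed
q = 4.0e-3         # percolation / Darcy flux (m/yr), per the base-case average
length = 1.0       # waste package thickness along flow (m), assumed
inventory = 1.0    # nuclide inventory in the package (arbitrary units)

R = 1.0 + rho_b * kd / theta          # retardation factor
k_release = q / (theta * R * length)  # first-order release rate constant (1/yr)

dt, t_end, t = 1.0, 200.0, 0.0
released = 0.0
while t < t_end:
    flux = k_release * inventory      # annual release (units/yr)
    inventory -= flux * dt
    released += flux * dt
    t += dt
print(f"R = {R:.0f}, fraction released after {t_end:.0f} yr: {released:.3f}")
```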

  13. Simulating Rayleigh-Taylor (RT) instability using PPM hydrodynamics @scale on Roadrunner (u)

    Energy Technology Data Exchange (ETDEWEB)

    Woodward, Paul R [Los Alamos National Laboratory; Dimonte, Guy [Los Alamos National Laboratory; Rockefeller, Gabriel M [Los Alamos National Laboratory; Fryer, Christopher L [Los Alamos National Laboratory; Dimonte, Guy [Los Alamos National Laboratory; Dai, W [Los Alamos National Laboratory; Kares, R. J. [Los Alamos National Laboratory

    2011-01-05

    The effect of initial conditions on the self-similar growth of the RT instability is investigated using a hydrodynamics code based on the piecewise-parabolic-method (PPM). The PPM code was converted to the hybrid architecture of Roadrunner in order to perform the simulations at extremely high speed and spatial resolution. This paper describes the code conversion to the Cell processor, the scaling studies to 12 CUs on Roadrunner, and results on the dependence of the RT growth rate on initial conditions. The relevance of the Roadrunner implementation of this PPM code to other existing and anticipated computer architectures is also discussed.

  14. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    Energy Technology Data Exchange (ETDEWEB)

    Koller, Josep [Los Alamos National Laboratory; Reeves, Geoffrey D [Los Alamos National Laboratory; Friedel, Reiner H W [Los Alamos National Laboratory

    2008-01-01

    Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and the L* (pron. L-star) values which are used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10{sup 5} calls to a magnetic field model, which makes the L* calculations very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, thereby risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density by applying adiabatic invariants including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster compared to the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly due to the long computing time of accurate L* values. Without them, real-time applications are limited in accuracy. Reanalysis applications of past conditions in the inner magnetosphere are used to understand

  15. Weather at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Bruggeman, David Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-19

    This report gives general information about how to become a meteorologist and what kinds of jobs exist in that field. Then it goes into detail about why weather is monitored at LANL, how it is done, and where the data can be accessed online.

  16. LANL Institutional Decision Support By Process Modeling and Analysis Group (AET-2)

    Energy Technology Data Exchange (ETDEWEB)

    Booth, Steven Richard [Los Alamos National Laboratory

    2016-04-04

    AET-2 has expertise in process modeling, economics, business case analysis, risk assessment, Lean/Six Sigma tools, and decision analysis to provide timely decision support to LANS leading to continuous improvement. This capability is critical during the current tight budgetary environment as LANS pushes to identify potential areas of cost savings and efficiencies. An important arena is business systems and operations, where processes can impact most or all laboratory employees. Lab-wide efforts are needed to identify and eliminate inefficiencies to accomplish Director McMillan’s charge of “doing more with less.” LANS faces many critical and potentially expensive choices that require sound decision support to ensure success. AET-2 is available to provide this analysis support to expedite the decisions at hand.

  17. LANL Summer 2016 Report

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, Paul Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-26

    The Monte Carlo N-Particle (MCNP) transport code developed at Los Alamos National Laboratory (LANL) utilizes nuclear cross-section data in a compact ENDF (ACE) format. The accuracy of MCNP calculations depends on the accuracy of nuclear ACE data tables, which depends on the accuracy of the original ENDF files. There are some noticeable differences in ENDF files from one generation to the next, even among the more common fissile materials. As the next generation of ENDF files is being prepared, several software tools were developed to simulate a large number of benchmarks in MCNP (over 1000), collect data from these simulations, and visually represent the results.

  18. LANL Summer 2016 Report

    Energy Technology Data Exchange (ETDEWEB)

    Mendoza, Paul Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-26

    The Monte Carlo N-Particle (MCNP) transport code developed at Los Alamos National Laboratory (LANL) utilizes nuclear cross-section data in a compact ENDF (ACE) format. The accuracy of MCNP calculations depends on the accuracy of nuclear ACE data tables, which depends on the accuracy of the original ENDF files. There are some noticeable differences in ENDF files from one generation to the next, even among the more common fissile materials. As the next generation of ENDF files is being prepared, several software tools were developed to simulate a large number of benchmarks in MCNP (over 1000), collect data from these simulations, and visually represent the results.

  19. 2006 LANL Radionuclide Air Emissions Report

    Energy Technology Data Exchange (ETDEWEB)

    David P. Fuehne

    2007-06-30

    This report describes the impacts from emissions of radionuclides at Los Alamos National Laboratory (LANL) for calendar year 2006. This report fulfills the requirements established by the Radionuclide National Emissions Standards for Hazardous Air Pollutants (Rad-NESHAP). This report is prepared by LANL's Rad-NESHAP compliance team, part of the Environmental Protection Division. The information in this report is required under the Clean Air Act and is being reported to the U.S. Environmental Protection Agency (EPA). The highest effective dose equivalent (EDE) to an off-site member of the public was calculated using procedures specified by the EPA and described in this report. LANL's EDE was 0.47 mrem for 2006. The annual limit established by the EPA is 10 mrem per year. During calendar year 2006, LANL continuously monitored radionuclide emissions at 28 release points, or stacks. The Laboratory estimates emissions from an additional 58 release points using radionuclide usage source terms. Also, LANL uses a network of air samplers around the Laboratory perimeter to monitor ambient airborne levels of radionuclides. To provide data for dispersion modeling and dose assessment, LANL maintains and operates meteorological monitoring systems. From these measurement systems, a comprehensive evaluation is conducted to calculate the EDE for the Laboratory. The EDE is evaluated for any member of the public at any off-site location where there is a residence, school, business, or office. In 2006, this location was the Los Alamos Airport Terminal. The majority of this dose is due to plutonium, measured by ambient air sampling, emitted from 2006 clean-up activities at an environmental restoration site (73-002-99; ash pile). Doses reported to the EPA for the past 10 years are shown in Table E1.

  20. Covariance evaluation work at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Kawano, Toshihiko [Los Alamos National Laboratory; Talou, Patrick [Los Alamos National Laboratory; Young, Phillip [Los Alamos National Laboratory; Hale, Gerald [Los Alamos National Laboratory; Chadwick, M B [Los Alamos National Laboratory; Little, R C [Los Alamos National Laboratory

    2008-01-01

    Los Alamos evaluates covariances for the nuclear data library, mainly for actinides above the resonance regions and light elements in the entire energy range. We also develop techniques to evaluate the covariance data, such as Bayesian and least-squares fitting methods, which are important to explore the uncertainty information on different types of physical quantities such as elastic scattering angular distribution, or prompt neutron fission spectra. This paper summarizes our current activities of the covariance evaluation work at LANL, including the actinide and light element data mainly for the criticality safety study and transmutation technology. The Bayesian method based on the Kalman filter technique, which combines uncertainties in the theoretical model and experimental data, is discussed.
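
    A generic form of the Kalman-filter (generalized least-squares) update mentioned above, combining a prior model calculation with experimental data, can be sketched as follows; the matrices and numbers are illustrative and do not come from the LANL evaluation codes.

```python
# Generic Kalman-filter update of the kind used in covariance evaluation:
# a prior parameter vector x with covariance P is combined with data y
# (covariance V) through the sensitivity matrix C of observables to parameters.
import numpy as np

def kalman_update(x, P, y, V, C):
    """Return posterior parameters and covariance given data y = C x + noise."""
    S = C @ P @ C.T + V                    # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)         # gain
    x_post = x + K @ (y - C @ x)
    P_post = P - K @ C @ P
    return x_post, P_post

x = np.array([1.0, 0.5])                   # prior model parameters
P = np.diag([0.04, 0.09])                  # prior parameter covariance
C = np.array([[1.0, 0.0], [1.0, 2.0]])     # sensitivities of two measurements
y = np.array([1.1, 2.3])                   # measured values
V = np.diag([0.01, 0.02])                  # experimental covariance
x_post, P_post = kalman_update(x, P, y, V, C)
print(x_post)
print(P_post)
```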

  1. LANL continuity of operations plan

    Energy Technology Data Exchange (ETDEWEB)

    Senutovitch, Diane M [Los Alamos National Laboratory

    2010-12-22

    The Los Alamos National Laboratory (LANL) is a premier national security research institution, delivering scientific and engineering solutions for the nation's most crucial and complex problems. Our primary responsibility is to ensure the safety, security, and reliability of the nation's nuclear stockpile. LANL emphasizes worker safety, effective operational safeguards and security, and environmental stewardship, while outstanding science remains the foundation of work at the Laboratory. In addition to supporting the Laboratory's core national security mission, our work advances bioscience, chemistry, computer science, earth and environmental sciences, materials science, and physics disciplines. To accomplish LANL's mission, we must ensure that the Laboratory's essential functions (EFs) continue to be performed during a continuity event, including localized acts of nature, accidents, technological or attack-related emergencies, and pandemic or epidemic events. The LANL Continuity of Operations (COOP) Plan documents the overall LANL COOP Program and provides the operational framework to implement continuity policies, requirements, and responsibilities at LANL, as required by DOE O 150.1, Continuity Programs, May 2008. LANL must maintain its ability to perform the nation's primary mission essential functions (PMEFs), which are: (1) maintain the safety and security of nuclear materials in the DOE Complex at fixed sites and in transit; (2) respond to a nuclear incident, both domestically and internationally, caused by terrorist activity, natural disaster, or accident, including mobilizing the resources to support these efforts; and (3) support the nation's energy infrastructure. This plan supports Continuity of Operations for Los Alamos National Laboratory (LANL). This plan issues LANL policy as directed by DOE O 150.1, Continuity Programs, and provides direction for the orderly continuation of LANL EFs for 30 days of closure or 60 days for a pandemic/epidemic event. Initiation of COOP operations may

  2. LANL PDMLink Product Structure Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Scully, Christopher J. [Los Alamos National Laboratory

    2012-08-29

    Over the past two and a half years, LANL has done both functionality exploration and production implementations of PDMLink Product Structure to control the configuration of many of the LANL Design Agency Products. Based on this experience, LANL has been recommending for over a year that future product structure implementations in PDMLink not use the two-digit suffix in the number field of enterprise parts (or WTParts). The suffix will instead be part of one of the attributes for Part Number. Per the TBPs, the two-digit suffix represents a change in form, fit, or function of a part, a change in the production agency, or a number of other conditions. It also denotes backward compatibility with earlier suffixed parts (see TBP 402 section 3.1).

  3. A Performance Evaluation of QR-eigensolver on IBM Roadrunner cluster for Large Sparse Matrices

    Directory of Open Access Journals (Sweden)

    Ionela RUSU

    2013-01-01

    The paper presents a performance analysis of the QR eigensolver from the ScaLAPACK library on the IBM Roadrunner machine. A ScaLAPACK-based testing platform was developed in order to evaluate the performance of a parallel solver that computes the eigenvalues and eigenvectors of large-scale sparse matrices. Our experiments showed encouraging results on the IBM Roadrunner cluster; the acceleration factor gained was up to 40 for large matrices. This result is promising for problems that involve scientific and large-scale computing.
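
    To make the underlying eigenproblem concrete, here is a small single-node sketch using SciPy's ARPACK-based sparse eigensolver; this is not ScaLAPACK's parallel QR eigensolver and is shown only to illustrate extracting a few eigenpairs from a large sparse matrix.

```python
# Single-node illustration of the eigenproblem solved at scale in the paper.
# Uses SciPy's ARPACK-based sparse eigensolver rather than ScaLAPACK's
# parallel QR routine, purely to make the problem setup concrete.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 20000
A = sp.random(n, n, density=1e-4, format="csr", random_state=42)
A = 0.5 * (A + A.T)                     # symmetrize so the spectrum is real

vals, vecs = eigsh(A, k=6, which="LM")  # six largest-magnitude eigenpairs
print("largest-magnitude eigenvalues:", np.sort(np.abs(vals))[::-1])
```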

  4. 369 TFlop/s molecular dynamics simulations on the Roadrunner general-purpose heterogeneous supercomputer

    Energy Technology Data Exchange (ETDEWEB)

    Swaminarayan, Sriram [Los Alamos National Laboratory; Germann, Timothy C [Los Alamos National Laboratory; Kadau, Kai [Los Alamos National Laboratory; Fossum, Gordon C [IBM CORPORATION

    2008-01-01

    The authors present timing and performance numbers for a short-range parallel molecular dynamics (MD) code, SPaSM, that has been rewritten for the heterogeneous Roadrunner supercomputer. Each Roadrunner compute node consists of two AMD Opteron dual-core microprocessors and four PowerXCell 8i enhanced Cell microprocessors, so that there are four MPI ranks per node, each with one Opteron and one Cell. The interatomic forces are computed on the Cells (each with one PPU and eight SPU cores), while the Opterons are used to direct inter-rank communication and perform I/O-heavy periodic analysis, visualization, and checkpointing tasks. The performance measured for our initial implementation of a standard Lennard-Jones pair potential benchmark reached a peak of 369 Tflop/s double-precision floating-point performance on the full Roadrunner system (27.7% of peak), corresponding to 124 MFlop/s/Watt at a price of approximately 3.69 MFlops/dollar. They demonstrate an initial target application, the jetting and ejection of material from a shocked surface.
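
    The benchmark named in the record is a standard Lennard-Jones pair potential; a minimal serial force/energy kernel for that potential is sketched below. It omits the cell lists, Cell-processor offload, and MPI decomposition that SPaSM actually uses, and all parameters are in reduced units.

```python
# Minimal serial evaluation of a Lennard-Jones pair potential with
# minimum-image periodic boundaries (all-pairs, O(N^2)); reduced units.
import numpy as np

def lj_forces(pos, box, rcut=2.5, eps=1.0, sigma=1.0):
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        mask = r2 < rcut ** 2
        inv_r6 = (sigma ** 2 / r2[mask]) ** 3
        energy += np.sum(4.0 * eps * (inv_r6 ** 2 - inv_r6))
        # pair force magnitude divided by r, applied along each separation
        f_scalar = 24.0 * eps * (2.0 * inv_r6 ** 2 - inv_r6) / r2[mask]
        fij = f_scalar[:, None] * d[mask]
        forces[i] -= np.sum(fij, axis=0)
        forces[i + 1:][mask] += fij
    return energy, forces

rng = np.random.default_rng(2)
box = np.array([10.0, 10.0, 10.0])
pos = rng.uniform(0.0, 10.0, size=(200, 3))
e, f = lj_forces(pos, box)
print("potential energy:", e)
```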

  5. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    Directory of Open Access Journals (Sweden)

    G. D. Reeves

    2009-02-01

    We describe here a new method for calculating the magnetic drift invariant, L*, that is used for modeling radiation belt dynamics and for other space weather applications. L* (pronounced L-star) is inversely proportional to the integral of the magnetic flux contained within the surface defined by a charged particle moving in the Earth's geomagnetic field. Under adiabatic changes to the geomagnetic field L* is a conserved quantity, while under quasi-adiabatic fluctuations diffusion (with respect to a particle's L*) is the primary term in equations of particle dynamics. In particular the equations of motion for the very energetic particles that populate the Earth's radiation belts are most commonly expressed by diffusion in three dimensions: L*, energy (or momentum), and pitch angle (the dot product of velocity and the magnetic field vector). Expressing dynamics in these coordinates reduces the dimensionality of the problem by referencing the particle distribution functions to values at the magnetic equatorial point of a magnetic "drift shell" (or L-shell) irrespective of local time (or longitude). While the use of L* aids in simplifying the equations of motion, practical applications such as space weather forecasting using realistic geomagnetic fields require sophisticated magnetic field models that, in turn, require computationally intensive numerical integration. Typically a single L* calculation can require on the order of 10^5 calls to a magnetic field model, and each point in the simulation domain and each calculated pitch angle has a different value of L*. We describe here the development and validation of a neural network surrogate model for calculating L* in sophisticated geomagnetic field models with a high degree of fidelity at computational speeds that are millions of times faster than direct numerical field line mapping and integration. This new surrogate model has applications to real-time radiation belt forecasting, analysis of data sets
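
    For background (not stated in the record itself), the drift-shell parameter L* is commonly defined following Roederer in terms of the magnetic flux Φ enclosed by the particle's drift shell:

```latex
% Roederer's definition of L* (background, not taken from the record):
% \Phi is the magnetic flux enclosed by the drift shell, B_0 the equatorial
% dipole field strength at the Earth's surface, and R_E the Earth radius.
\[
  L^{*} = \frac{2\pi B_{0} R_{E}^{2}}{\Phi},
  \qquad
  \Phi = \int_{S} \mathbf{B}\cdot d\mathbf{S}.
\]
```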

  6. LANL Robotic Vessel Scanning

    Energy Technology Data Exchange (ETDEWEB)

    Webber, Nels W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-25

    Los Alamos National Laboratory's J-1 DARHT Operations Group uses 6-ft spherical vessels to contain hazardous materials produced in a hydrodynamic experiment. These contaminated vessels must be analyzed by means of a worker entering the vessel to locate, measure, and document every penetration mark on the vessel. If the worker can be replaced by a highly automated robotic system with a high precision scanner, it will eliminate the risks to the worker and provide management with an accurate 3D model of the vessel presenting the existing damage with the flexibility to manipulate the model for better and more in-depth assessment. The project was successful in meeting the primary goal of installing an automated system which scanned a 6-ft vessel with an elapsed time of 45 minutes. This robotic system reduces the total time for the original scope of work by 75 minutes and results in excellent data accumulation and transmission to the 3D model imaging program.

  7. LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications

    Directory of Open Access Journals (Sweden)

    J. Koller

    2009-07-01

    We describe here a new method for calculating the magnetic drift invariant, L*, that is used for modeling radiation belt dynamics and for other space weather applications. L* (pronounced L-star) is inversely proportional to the integral of the magnetic flux contained within the surface defined by a charged particle moving in the Earth's geomagnetic field. Under adiabatic changes to the geomagnetic field L* is a conserved quantity, while under quasi-adiabatic fluctuations diffusion (with respect to a particle's L*) is the primary term in equations of particle dynamics. In particular the equations of motion for the very energetic particles that populate the Earth's radiation belts are most commonly expressed by diffusion in three dimensions: L*, energy (or momentum), and pitch angle (the dot product of velocity and the magnetic field vector). Expressing dynamics in these coordinates reduces the dimensionality of the problem by referencing the particle distribution functions to values at the magnetic equatorial point of a magnetic "drift shell" (or L-shell) irrespective of local time (or longitude). While the use of L* aids in simplifying the equations of motion, practical applications such as space weather forecasting using realistic geomagnetic fields require sophisticated magnetic field models that, in turn, require computationally intensive numerical integration. Typically a single L* calculation can require on the order of 10^5 calls to a magnetic field model, and each point in the simulation domain and each calculated pitch angle has a different value of L*. We describe here the development and validation of a neural network surrogate model for calculating L* in sophisticated geomagnetic field models with a high degree of fidelity at computational speeds that are millions of times faster than direct numerical field line mapping and integration. This new surrogate model has

  8. Special Analysis: 2016-003 Upgrade of Area G PA/CA Model to Updated Versions of GoldSim Software and to LANL Analysts

    Energy Technology Data Exchange (ETDEWEB)

    Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay Hanson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shuman, Rob [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-01

    The Los Alamos National Laboratory (LANL) generates radioactive waste as a result of various activities. Operational waste is generated from a wide variety of research and development activities including nuclear weapons development, energy production, and medical research. Environmental restoration (ER) and decontamination and decommissioning (D&D) waste is generated as contaminated sites and facilities at LANL undergo cleanup or remediation. The majority of this waste is low-level radioactive waste (LLW) and is disposed of at the Technical Area 54 (TA-54), Area G disposal facility. The Area G performance assessment and composite analysis (PA/CA) estimate rates of radionuclide release from the waste disposed of at the facility, simulate the movement of radionuclides through the environment, and project potential radiation doses to humans for several onsite and offsite exposure scenarios. The assessments are based on existing site and disposal facility data, and assumptions about future rates and methods of waste disposal.

  9. US Department of Energy report 1996 LANL radionuclide air emissions

    Energy Technology Data Exchange (ETDEWEB)

    Jacobson, K.W.

    1997-08-01

    Presented is the Laboratory-wide certified report regarding radioactive effluents released into the air by the Los Alamos National Laboratory (LANL) in 1996. This information is required under the Clean Air Act and is being reported to the U.S. Environmental Protection Agency (EPA). The effective dose equivalent (EDE) to a hypothetical maximum exposed individual (MEI) of the public was calculated, using procedures specified by the EPA and described in this report. That dose was 1.93 mrem for 1996. Emissions of {sup 11}C, {sup 13}N, and {sup 15}O from a 1-mA, 800 MeV proton accelerator contributed over 92% of the EDE to LANL's MEI. Using CAP88, the EPA's dose assessment model, more than 86% of the total dose received by the MEI was via the air immersion pathway.

  10. Dynamic load balancing of matrix-vector multiplications on roadrunner compute nodes

    Energy Technology Data Exchange (ETDEWEB)

    Sancho Pitarch, Jose Carlos [Los Alamos National Laboratory

    2009-01-01

    Hybrid architectures that combine general purpose processors with accelerators are being adopted in several large-scale systems such as the petaflop Roadrunner supercomputer at Los Alamos. In this system, dual-core Opteron host processors are tightly coupled with PowerXCell 8i processors within each compute node. In this kind of hybrid architecture, an accelerated mode of operation is typically used to offload performance hotspots in the computation to the accelerators. In this paper we explore the suitability of a variant of this acceleration mode in which the performance hotspots are actually shared between the host and the accelerators. To achieve this we have designed a new load balancing algorithm, which is optimized for the Roadrunner compute nodes, to dynamically distribute computation and associated data between the host and the accelerators at runtime. Results are presented using this approach for sparse and dense matrix-vector multiplications, showing that load balancing can improve performance by up to 24% over solely using the accelerators.
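
    The load-balancing idea described above, measure each side's throughput and repartition the rows so host and accelerator finish together, can be sketched as follows; the two "devices" here are plain functions with artificial speeds, not the actual Opteron/Cell implementation.

```python
# Sketch of a dynamic host/accelerator split for a matrix-vector product:
# rows are divided between two workers, throughput is measured each iteration,
# and the partition is adjusted so both sides finish at about the same time.
import time
import numpy as np

def matvec_rows(A, x, rows, delay_per_row):
    time.sleep(delay_per_row * len(rows))   # stand-in for differing device speed
    return A[rows] @ x

n = 4000
A = np.random.rand(n, n)
x = np.random.rand(n)
split = n // 2                              # initial 50/50 partition

for it in range(5):
    t0 = time.perf_counter()
    y_host = matvec_rows(A, x, np.arange(0, split), delay_per_row=2e-6)
    t_host = time.perf_counter() - t0
    t0 = time.perf_counter()
    y_acc = matvec_rows(A, x, np.arange(split, n), delay_per_row=5e-7)
    t_acc = time.perf_counter() - t0
    assert np.allclose(np.concatenate([y_host, y_acc]), A @ x)
    print(f"iter {it}: host rows={split}, t_host={t_host:.3f}s, t_acc={t_acc:.3f}s")
    # Rebalance: give each side rows in proportion to its measured row rate.
    rate_host = split / t_host
    rate_acc = (n - split) / t_acc
    split = min(max(int(n * rate_host / (rate_host + rate_acc)), 1), n - 1)
```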

  11. Overview of the SHIELDS Project at LANL

    Science.gov (United States)

    Jordanova, V.; Delzanno, G. L.; Henderson, M. G.; Godinez, H. C.; Jeffery, C. A.; Lawrence, E. C.; Meierbachtol, C.; Moulton, D.; Vernon, L.; Woodroffe, J. R.; Toth, G.; Welling, D. T.; Yu, Y.; Birn, J.; Thomsen, M. F.; Borovsky, J.; Denton, M.; Albert, J.; Horne, R. B.; Lemon, C. L.; Markidis, S.; Young, S. L.

    2015-12-01

    The near-Earth space environment is a highly dynamic and coupled system through a complex set of physical processes over a large range of scales, which responds nonlinearly to driving by the time-varying solar wind. Predicting variations in this environment that can affect technologies in space and on Earth, i.e. "space weather", remains a big space physics challenge. We present a recently funded project through the Los Alamos National Laboratory (LANL) Directed Research and Development (LDRD) program that is developing a new capability to understand, model, and predict Space Hazards Induced near Earth by Large Dynamic Storms, the SHIELDS framework. The project goals are to specify the dynamics of the hot (keV) particles (the seed population for the radiation belts) on both macro- and micro-scale, including important physics of rapid particle injection and acceleration associated with magnetospheric storms/substorms and plasma waves. This challenging problem is addressed using a team of world-class experts in the fields of space science and computational plasma physics and state-of-the-art models and computational facilities. New data assimilation techniques employing data from LANL instruments on the Van Allen Probes and geosynchronous satellites are developed in addition to physics-based models. This research will provide a framework for understanding of key radiation belt drivers that may accelerate particles to relativistic energies and lead to spacecraft damage and failure. The ability to reliably distinguish between various modes of failure is critically important in anomaly resolution and forensics. SHIELDS will enhance our capability to accurately specify and predict the near-Earth space environment where operational satellites reside.

  12. Green roofs: potential at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Pacheco, Elena M [Los Alamos National Laboratory

    2009-01-01

    strokes, heat exhaustion, and pollution that can agitate the respiratory system. The most significant savings associated with green roofs is in the reduction of cooling demands due to the green roof's thermal mass and their insulating properties. Unlike a conventional roof system, a green roof does not absorb solar radiation and transfer that heat into the interior of a building. Instead the vegetation acts as a shade barrier and stabilizes the roof temperature so that interior temperatures remain comfortable for the occupants. Consequently there is less of a demand for air conditioning, and thus less money spent on energy. At LANL the potential of green roof systems has already been realized with the construction of the accessible green roof on the Otowi building. To further explore the possibilities and prospective benefits of green roofs though, the initial capital costs must be invested. Three buildings, TA-03-1698, TA-03-0502, and TA-53-0031 have all been identified as sound candidates for a green roof retrofit project. It is recommended that LANL proceed with further analysis of these projects and implementation of the green roofs. Furthermore, it is recommended that an urban forestry program be initiated to provide supplemental support to the environmental goals of green roofs. The obstacles barring green roof construction are most often budgetary and structural concerns. Given proper resources, however, the engineers and design professionals at LANL would surely succeed in the proper implementation of green roof systems so as to optimize their ecological and monetary benefits for the entire organization.

  13. LANL capabilities towards bioenergy and biofuels programs

    Energy Technology Data Exchange (ETDEWEB)

    Olivares, Jose A [Los Alamos National Laboratory; Park, Min S [Los Alamos National Laboratory; Unkefer, Clifford J [Los Alamos National Laboratory; Bradbury, Andrew M [Los Alamos National Laboratory; Waldo, Geoffrey S [Los Alamos National Laboratory

    2009-01-01

    LANL invented technology for increasing growth and productivity of photosynthetic organisms, including algae and higher plants. The technology has been extensively tested at the greenhouse and field scale for crop plants. Initial bioreactor testing of its efficacy on algal growth has shown promising results. It increases algal growth rates even under optimum nutrient supply and careful pH control with CO{sub 2} continuously available. The technology uses a small organic molecule, applied to the plant surfaces or added to the algal growth medium. CO{sub 2} concentration is necessary to optimize algal production in either ponds or reactors. LANL has successfully designed, built and demonstrated an effective, efficient technology using DOE funding. Such a system would be very valuable for capitalizing on local inexpensive sources of CO{sub 2} for algal production operations. Furthermore, our protein engineering team has a concept to produce highly stable carbonic anhydrase (CA) enzyme, which could be very useful to assure maximum utilization of the CO{sub 2} supply. Stable CA could be used either immobilized on solid supports or engineered into the algal strain. The current technologies for harvesting the algae and obtaining the lipids do not meet the needs for rapid, low cost separations for high volumes of material. LANL has obtained proof of concept for the high volume flowing stream concentration of algae, algal lysis and separation of the lipid, protein and water fractions, using acoustic platforms. This capability is targeted toward developing biosynthetics, chiral syntheses, high throughput protein expression and purification, organic chemistry, recognition ligands, and stable isotopes geared toward Bioenergy applications. Areas of expertise include stable isotope chemistry, biomaterials, polymers, biopolymers, organocatalysis, advanced characterization methods, and chemistry of model compounds. The ultimate realization of the ability to design and

  14. Science, technology and engineering at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Mercer-smith, Janet A [Los Alamos National Laboratory; Wallace, Terry C [Los Alamos National Laboratory

    2011-01-06

    The Laboratory provides science solutions to the mission areas of nuclear deterrence, global security, and energy security. The capabilities support the Laboratory's vision as the premier national security science laboratory. The strength of LANL's science is at the core of the Laboratory. The Laboratory addresses important science questions for stockpile stewardship, emerging threats, and energy. The science vitality underpinning the mission areas is sustained through the Post Doc program, the fundamental science program in LDRD, collaborations fostered through the Institutes, and the LANL user facilities. LANL fosters the strategy of Science that Matters through investments, people, and facilities.

  15. LANL environmental restoration site ranking system: System description. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Merkhofer, L.; Kann, A.; Voth, M. [Applied Decision Analysis, Inc., Menlo Park, CA (United States)

    1992-10-13

    The basic structure of the LANL Environmental Restoration (ER) Site Ranking System and its use are described in this document. A related document, Instructions for Generating Inputs for the LANL ER Site Ranking System, contains detailed descriptions of the methods by which necessary inputs for the system will be generated. LANL has long recognized the need to provide a consistent basis for comparing the risks and other adverse consequences associated with the various waste problems at the Lab. The LANL ER Site Ranking System is being developed to help address this need. The specific purpose of the system is to help improve, defend, and explain prioritization decisions at the Potential Release Site (PRS) and Operable Unit (OU) level. The precise relationship of the Site Ranking System to the planning and overall budget processes is yet to be determined, as the system is still evolving. Generally speaking, the Site Ranking System will be used as a decision aid. That is, the system will be used to aid in the planning and budgetary decision-making process. It will never be used alone to make decisions. Like all models, the system can provide only a partial and approximate accounting of the factors important to budget and planning decisions. Decision makers at LANL will have to consider factors outside of the formal system when making final choices. Some of these other factors are regulatory requirements, DOE policy, and public concern. The main value of the site ranking system, therefore, is not the precise numbers it generates, but rather the general insights it provides.

  16. Large-scale functional models of visual cortex for remote sensing

    Energy Technology Data Exchange (ETDEWEB)

    Brumby, Steven P [Los Alamos National Laboratory; Kenyon, Garrett [Los Alamos National Laboratory; Rasmussen, Craig E [Los Alamos National Laboratory; Swaminarayan, Sriram [Los Alamos National Laboratory; Bettencourt, Luis [Los Alamos National Laboratory; Landecker, Will [PORTLAND STATE UNIV.

    2009-01-01

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex requiring {approx}1 petaflop of computation. In a year, the retina delivers {approx}1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete' along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically-inspired learning models to problems of remote sensing imagery.

  17. LANL seismic screening method for existing buildings

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, S.L.; Feller, K.C.; Fritz de la Orta, G.O. [and others

    1997-01-01

    The purpose of the Los Alamos National Laboratory (LANL) Seismic Screening Method is to provide a comprehensive, rational, and inexpensive method for evaluating the relative seismic integrity of a large building inventory using substantial life-safety as the minimum goal. The substantial life-safety goal is deemed to be satisfied if the extent of structural damage or nonstructural component damage does not pose a significant risk to human life. The screening is limited to Performance Category (PC) -0, -1, and -2 buildings and structures. Because of their higher performance objectives, PC-3 and PC-4 buildings automatically fail the LANL Seismic Screening Method and will be subject to a more detailed seismic analysis. The Laboratory has also designated that PC-0, PC-1, and PC-2 unreinforced masonry bearing wall and masonry infill shear wall buildings fail the LANL Seismic Screening Method because of their historically poor seismic performance or complex behavior. These building types are also recommended for a more detailed seismic analysis. The results of the LANL Seismic Screening Method are expressed in terms of separate scores for potential configuration or physical hazards (Phase One) and calculated capacity/demand ratios (Phase Two). This two-phase method allows the user to quickly identify buildings that have adequate seismic characteristics and structural capacity and screen them out from further evaluation. The resulting scores also provide a ranking of those buildings found to be inadequate. Thus, buildings not passing the screening can be rationally prioritized for further evaluation. For the purpose of complying with Executive Order 12941, the buildings failing the LANL Seismic Screening Method are deemed to have seismic deficiencies, and cost estimates for mitigation must be prepared. Mitigation techniques and cost-estimate guidelines are not included in the LANL Seismic Screening Method.

  18. 2008 LANL radionuclide air emissions report

    Energy Technology Data Exchange (ETDEWEB)

    Fuehne, David P.

    2009-06-01

    The emissions of radionuclides from Department of Energy Facilities such as Los Alamos National Laboratory (LANL) are regulated by the Amendments to the Clean Air Act of 1990, National Emissions Standards for Hazardous Air Pollutants (40 CFR 61 Subpart H). These regulations established an annual dose limit of 10 mrem to the maximally exposed member of the public attributable to emissions of radionuclides. This document describes the emissions of radionuclides from LANL and the dose calculations resulting from these emissions for calendar year 2008. This report meets the reporting requirements established in the regulations.

  19. 2009 LANL radionuclide air emissions report

    Energy Technology Data Exchange (ETDEWEB)

    Fuehne, David P.

    2010-06-01

    The emissions of radionuclides from Department of Energy Facilities such as Los Alamos National Laboratory (LANL) are regulated by the Amendments to the Clean Air Act of 1990, National Emissions Standards for Hazardous Air Pollutants (40 CFR 61 Subpart H). These regulations established an annual dose limit of 10 mrem to the maximally exposed member of the public attributable to emissions of radionuclides. This document describes the emissions of radionuclides from LANL and the dose calculations resulting from these emissions for calendar year 2009. This report meets the reporting requirements established in the regulations.

  20. 2010 LANL radionuclide air emissions report

    Energy Technology Data Exchange (ETDEWEB)

    Fuehne, David P.

    2011-06-01

    The emissions of radionuclides from Department of Energy Facilities such as Los Alamos National Laboratory (LANL) are regulated by the Amendments to the Clean Air Act of 1990, National Emissions Standards for Hazardous Air Pollutants (40 CFR 61 Subpart H). These regulations established an annual dose limit of 10 mrem to the maximally exposed member of the public attributable to emissions of radionuclides. This document describes the emissions of radionuclides from LANL and the dose calculations resulting from these emissions for calendar year 2010. This report meets the reporting requirements established in the regulations.

  1. Thermal Analysis of LANL Ion Exchange Column

    Energy Technology Data Exchange (ETDEWEB)

    Laurinat, J.E.

    1999-06-16

    This document reports results from an ion exchange column heat transfer analysis requested by Los Alamos National Laboratory (LANL). The object of the analysis is to demonstrate that the decay heat from the Pu-238 will not cause resin bed temperatures to increase to a level where the resin significantly degrades.

  2. A Markov Chain Monte Carlo Algorithm for Infrasound Atmospheric Sounding: Application to the Humming Roadrunner experiment in New Mexico

    Science.gov (United States)

    Lalande, Jean-Marie; Waxler, Roger; Velea, Doru

    2016-04-01

    As infrasonic waves propagate at long ranges through atmospheric ducts, it has been suggested that observations of such waves can be used as a remote sensing technique to update atmospheric properties such as temperature and wind speed. In this study we investigate a new inverse approach based on Markov Chain Monte Carlo methods. This approach has the advantage of searching for the full Probability Density Function in the parameter space at a lower computational cost than the extensive parameter search performed by the standard Monte Carlo approach. We apply this inverse method to observations from the Humming Roadrunner experiment (New Mexico) and discuss implications for atmospheric updates, explosion characterization, localization, and yield estimation.
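
    A minimal random-walk Metropolis sampler of the kind underlying such an MCMC inversion is sketched below; the forward model and data are toy stand-ins, whereas the actual application would use an infrasound propagation code and atmospheric profile parameters.

```python
# Minimal random-walk Metropolis sampler illustrating MCMC inversion.
# The linear "forward model" and data are toy placeholders only.
import numpy as np

rng = np.random.default_rng(3)
theta_true = np.array([1.5, -0.7])
t_axis = np.linspace(0, 1, 50)
data = theta_true[0] * t_axis + theta_true[1] + 0.1 * rng.standard_normal(50)

def log_likelihood(theta):
    pred = theta[0] * t_axis + theta[1]        # toy forward model
    return -0.5 * np.sum((data - pred) ** 2) / 0.1 ** 2

theta = np.zeros(2)
logp = log_likelihood(theta)
step = 0.05
samples = []
for _ in range(20000):
    proposal = theta + step * rng.standard_normal(2)
    logp_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < logp_prop - logp:   # Metropolis acceptance test
        theta, logp = proposal, logp_prop
    samples.append(theta.copy())

samples = np.array(samples)[5000:]                 # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))
```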

  3. Revised Thermal Analysis of LANL Ion Exchange Column

    Energy Technology Data Exchange (ETDEWEB)

    Laurinat, J

    2006-04-11

    following an interruption of flow to the column were calculated. The transient calculations were terminated after the maximum resin bed temperature reached the Technical Standard of 60 C, which was set to prevent significant resin degradation. The LANL column differs from the FWR column in that it has a significantly smaller radius, 3.73 cm nominal versus approximately 28 cm. It follows that natural convection removes heat much more effectively from the LANL column, so that the column may reach thermal equilibrium. Consequently, the calculations for a flow interruption were extended until an approach to thermal equilibrium was observed. The LANL ion exchange process also uses a different resin than was used in the FWR column. The LANL column uses Reillex HPQ{trademark} resin, which is more resistant to attack by nitric acid than the Ionac 641{trademark} resin used in the FWR column. Heat generation from the resin oxidation reaction with nitric acid is neglected in this analysis since LANL will be treating the resin to remove the LTE prior to loading the resin in the columns. Calculations were performed using a finite difference computer code, which incorporates models for absorption and elution of plutonium and for forced and natural convection within the resin bed. Calculations for normal column operation during loading were performed using an initial temperature and a feed temperature equal to the ambient air temperature. The model for the normal flow calculations did not include natural convection within the resin bed. The no flow calculations were started with the temperature and concentration profiles at the end of the loading stage, when there would be a maximum amount of plutonium either adsorbed on the resin or in the feed solution in the column.
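
    As a rough illustration of the kind of steady-state result described above, the classic formulas for a uniformly heated cylinder cooled by a surface film give an order-of-magnitude temperature estimate; every input below except the 3.73 cm column radius is an assumed placeholder, not a value from the LANL analysis.

```python
# Order-of-magnitude estimate: steady-state temperature rise in a small
# cylindrical resin bed with uniform decay heating, cooled by natural
# convection at its surface. All inputs except the radius are assumptions.
q_v = 500.0      # volumetric decay heat (W/m^3), assumed
R = 0.0373       # column radius (m), from the record
k = 0.6          # effective resin-bed conductivity (W/m-K), assumed
h = 5.0          # natural-convection film coefficient (W/m^2-K), assumed
T_amb = 25.0     # ambient temperature (C), assumed

dT_film = q_v * R / (2.0 * h)        # surface-to-ambient temperature rise
dT_bed = q_v * R ** 2 / (4.0 * k)    # centerline-to-surface temperature rise
T_center = T_amb + dT_film + dT_bed
print(f"estimated centerline temperature: {T_center:.1f} C "
      f"(limit cited in the record: 60 C)")
```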

  4. Trails at LANL - Public Meeting and Forum - July 26, 2016

    Energy Technology Data Exchange (ETDEWEB)

    Pava, Daniel Seth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-26

    These are the slides of a meeting about trails at Los Alamos National Laboratory. The meeting goals are the following: to inform and educate citizens about LANL trails management issues that include resource protection, safety, security, and trails etiquette; to explain how and why LANL trails can be closed and reopened; and to understand your concerns and ideas about LANL trails use.

  5. Findings: LANL outsourcing focus groups

    Energy Technology Data Exchange (ETDEWEB)

    Jannotta, M.J.; McCabe, V.B.

    1996-12-31

    In March 1996, a series of 24 3-hour dialog focus groups were held with randomly selected Laboratory employees and contractors to gain their perceptions regarding potentials and problems for privatization and consolidation. A secondary goal was to educate and inform the workforce about potentials and issues in privatization and consolidation. Two hundred and thirty-six participants engaged in a learning session and structured input exercises resulting in 2,768 usable comments. Comments were categorized using standard qualitative methods; resulting categories included positive and negative comments on four models (consolidation, spin offs, outsourcing, and corporate partnering) and implications for the workforce, the Laboratory, and the local economy. Categories were in the areas of increasing/decreasing jobs, expertise, opportunity/salary/benefits, quality/efficiency, and effect on the local area and economy. An additional concern was losing Laboratory culture and history. Data were gathered and categorized on employee opinion regarding elements of successful transition to the four models, and issues emerged in the areas of terms and conditions of employment; communication; involvement; sound business planning; ethics and fairness; community infrastructure. From the aggregated opinion of the participants, it is recommended that decision-makers: Plan using sound business principles and continually communicate plans to the workforce; Respect workforce investments in the Laboratory; Tell the workforce exactly what is going on at all times; Understand that economic growth in Northern New Mexico is not universally viewed as positive; and Establish dialog with stakeholders on growth issues.

  6. Pure Phase Solubility Limits: LANL

    Energy Technology Data Exchange (ETDEWEB)

    C. Stockman

    2001-01-26

    The natural and engineered system at Yucca Mountain (YM) defines the site-specific conditions under which one must determine to what extent the engineered and the natural geochemical barriers will prevent the release of radioactive material from the repository. Most important mechanisms for retention or enhancement of radionuclide transport include precipitation or co-precipitation of radionuclide-bearing solid phases (solubility limits), complexation in solution, sorption onto surfaces, colloid formation, and diffusion. There may be many scenarios that could affect the near-field environment, creating chemical conditions more aggressive than the conditions presented by the unperturbed system (such as pH changes beyond the range of 6 to 9 or significant changes in the ionic strength of infiltrated waters). For an extended period of time, the near-field water composition may be quite different and more extreme in pH, ionic strength, and CO{sub 2} partial pressure (or carbonate concentration) than waters at some distance from the repository. Reducing conditions, high pH (up to 11), and low carbonate concentration may be present in the near-field after reaction of infiltrating groundwater with engineered barrier systems, such as cementitious materials. In the far-field, conditions are controlled by the rock-mass buffer providing a near-neutral, oxidizing, low-ionic-strength environment that controls radionuclide solubility limits and sorption capacities. There is the need for characterization of variable chemical conditions that affect solubility, speciation, and sorption reactions. Modeling of the groundwater chemistry is required and leads to an understanding of solubility and speciation of the important radionuclides. Because experimental studies cannot be performed under the numerous potential chemical conditions, solubility limitations must rely on geochemical modeling of the radionuclide's chemistry. Fundamental thermodynamic properties, such as solubility
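
    As a minimal numerical illustration of why pH excursions matter for solubility limits, the sketch below applies the mass-action law to a hypothetical actinide hydroxide solid. The solubility product is invented for illustration, and hydrolysis species, carbonate complexation, and activity corrections, all of which a real geochemical model must include, are ignored.

        import numpy as np

        # Hypothetical dissolution reaction: M(OH)4(s) <=> M(4+) + 4 OH-
        # Ksp = [M4+][OH-]^4  (assumed value; for illustration only)
        log_Ksp = -56.0
        Kw = 1.0e-14                     # water ion product at 25 C

        pH = np.arange(6.0, 11.5, 0.5)   # range discussed in the abstract
        OH = Kw / 10.0**(-pH)            # [OH-] from pH
        M_free = 10.0**log_Ksp / OH**4   # free-ion concentration, mol/L

        for p, c in zip(pH, M_free):
            print(f"pH {p:4.1f}  ->  [M4+] ~ {c:.1e} mol/L")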

  8. Support to LANL: Cost estimation. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1993-10-04

    This report summarizes the activities and progress by ICF Kaiser Engineers conducted on behalf of Los Alamos National Laboratories (LANL) for the US Department of Energy, Office of Waste Management (EM-33) in the area of improving methods for Cost Estimation. This work was conducted between October 1, 1992 and September 30, 1993. ICF Kaiser Engineers supported LANL in providing the Office of Waste Management with planning and document preparation services for a Cost and Schedule Estimating Guide (Guide). The intent of the Guide was to use Activity-Based Cost (ABC) estimation as a basic method in preparing cost estimates for DOE planning and budgeting documents, including Activity Data Sheets (ADSs), which form the basis for the Five Year Plan document. Prior to the initiation of the present contract with LANL, ICF Kaiser Engineers was tasked to initiate planning efforts directed toward a Guide. This work, accomplished from June to September, 1992, included visits to eight DOE field offices and consultation with DOE Headquarters staff to determine the need for a Guide, the desired contents of a Guide, and the types of ABC estimation methods and documentation requirements that would be compatible with current or potential practices and expertise in existence at DOE field offices and their contractors.

  9. Overview of LANL and ESH&Q

    Energy Technology Data Exchange (ETDEWEB)

    Dutro, Cynthia L [Los Alamos National Laboratory

    2011-01-13

    ESH&Q FY11 objectives are to: (1) Clearly define the ESH&O standards and requirements for institutional programs to ensure compliance with contractual and regulatory requirements, and communicate the relevant requirements, including specific work activities and associated priorities that must be completed, to LANL Organizations; (2) Provide qualified ESH&O subject matter expertise, training support, centralized and deployed services, tools, and procedures to meet both internal customer needs and institutional operational requirements, subject to institutional funding; and (3) Provide support to the Laboratory to meet operational commitments and performance goals.

  10. Advanced accelerator and mm-wave structure research at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Simakov, Evgenya Ivanovna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-22

    This document outlines acceleration projects and mm-wave structure research performed at LANL. The motivation for PBG research is described first, with reference to couplers for superconducting accelerators and structures for room-temperature accelerators and W-band TWTs. These topics are then taken up in greater detail: PBG structures and the MIT PBG accelerator; SRF PBG cavities at LANL; X-band PBG cavities at LANL; and W-band PBG TWT at LANL. The presentation concludes by describing other advanced accelerator projects: beam shaping with an Emittance Exchanger, diamond field emitter array cathodes, and additive manufacturing of novel accelerator structures.

  11. MULTICOMPONENT SEISMIC ANALYSIS AND CALIBRATION TO IMPROVE RECOVERY FROM ALGAL MOUNDS: APPLICATION TO THE ROADRUNNER/TOWAOC AREA OF THE PARADOX BASIN, UTE MOUNTAIN UTE RESERVATION, COLORADO

    Energy Technology Data Exchange (ETDEWEB)

    Paul La Pointe; Claudia Rebne; Steve Dobbs

    2003-07-10

    This report describes the results made in fulfillment of contract DE-FG26-02NT15451, ''Multicomponent Seismic Analysis and Calibration to Improve Recovery from Algal Mounds: Application to the Roadrunner/Towaoc Area of the Paradox Basin, Ute Mountain Ute Reservation, Colorado''. Optimizing development of highly heterogeneous reservoirs where porosity and permeability vary in unpredictable ways due to facies variations can be challenging. An important example of this is in the algal mounds of the Lower and Upper Ismay reservoirs of the Paradox Basin in Utah and Colorado. It is nearly impossible to develop a forward predictive model to delineate regions of better reservoir development, and so enhanced recovery processes must be selected and designed based upon data that can quantitatively or qualitatively distinguish regions of good or bad reservoir permeability and porosity between existing well control. Recent advances in seismic acquisition and processing offer new ways to see smaller features with more confidence, and to characterize the internal structure of reservoirs such as algal mounds. However, these methods have not been tested. This project will acquire cutting edge, three-dimensional, nine-component (3D9C) seismic data and utilize recently-developed processing algorithms, including the mapping of azimuthal velocity changes in amplitude variation with offset, to extract attributes that relate to variations in reservoir permeability and porosity. In order to apply advanced seismic methods a detailed reservoir study is needed to calibrate the seismic data to reservoir permeability, porosity and lithofacies. This will be done by developing a petrological and geological characterization of the mounds from well data; acquiring and processing the 3D9C data; and comparing the two using advanced pattern recognition tools such as neural nets. In addition, should the correlation prove successful, the resulting data will be evaluated from the

  12. With Whom Does LANL Publish? – A look at LANL collaborations from 1990 -2015 using the Web of Science

    Energy Technology Data Exchange (ETDEWEB)

    Springer, Everett P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-17

    Collaborations are critical to the science, technology, and engineering achievements at Los Alamos National Laboratory (LANL). This report analyzes the collaborations as measured through peer-reviewed publications from the Web of Science (WoS) database for LANL for the 1990–2015 period. Both a cumulative analysis over the entire time period and annual analyses were performed. The results found that the Department of Energy national laboratories, University of California campuses, and other academic institutions collaborate with LANL on a regular basis. The results provide insights into trends in LANL's peer-reviewed publication collaborations.
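
    The kind of tabulation described can be sketched with a few lines of pandas once the WoS records are exported as rows of (year, collaborating institution) pairs. The column names and example rows below are assumptions for illustration, not the field tags or data used in the report.

        import pandas as pd

        # Hypothetical export: one row per (paper, co-author affiliation) pair.
        records = pd.DataFrame({
            "year":        [1995, 1995, 2005, 2005, 2015],
            "institution": ["UC Berkeley", "Sandia National Laboratories",
                            "UC San Diego", "ORNL", "UC Berkeley"],
        })

        # Cumulative ranking of collaborating institutions over the full period
        cumulative = records["institution"].value_counts()

        # Annual counts, one column per institution
        annual = (records.groupby(["year", "institution"])
                         .size()
                         .unstack(fill_value=0))

        print(cumulative.head())
        print(annual)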

  13. Treadmill Desks at LANL - Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Fellows, Samara Kia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-28

    It is well established that sedentariness is the largest preventable contributor to premature death, eclipsing smoking in recent years. One approach to reducing sedentariness is to use a treadmill desk to perform office work while walking at a low speed. We found an increased interest level when the treadmill desks were first introduced to LANL, but after a few months interest appeared to drop. It is possible that treadmill desk use was occurring, but subjects did not record their use. Based on the study outcome, the treadmill desks will not be made readily available for purchase by employees. Additionally, conclusive changes in body measurements could not be determined due to lack of follow-up by 58% of the participants.

  14. 2014 LANL Radionuclide Air Emissions Report

    Energy Technology Data Exchange (ETDEWEB)

    Fuehne, David Patrick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-21

    This report describes the emissions of airborne radionuclides from operations at Los Alamos National Laboratory (LANL) for calendar year 2014, and the resulting off-site dose from these emissions. This document fulfills the requirements established by the National Emissions Standards for Hazardous Air Pollutants in 40 CFR 61, Subpart H – Emissions of Radionuclides other than Radon from Department of Energy Facilities, commonly referred to as the Radionuclide NESHAP or Rad-NESHAP. Compliance with this regulation and preparation of this document is the responsibility of LANL’s RadNESHAP compliance program, which is part of the Environmental Protection Division. The information in this report is required under the Clean Air Act and is being submitted to the U.S. Environmental Protection Agency (EPA) Region 6.

  15. LANL Environmental ALARA Program Status Report for CY 2016

    Energy Technology Data Exchange (ETDEWEB)

    Whicker, Jeffrey Jay [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mcnaughton, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-24

    Los Alamos National Laboratory (LANL) ensures that radiation exposures to members of the public and the environment from LANL operations, past and present, are below regulatory thresholds and are as low as reasonably achievable (ALARA) through compliance with DOE Order 458.1, Radiation Protection for the Public and the Environment, and LANL Policy 412, Environmental Radiation Protection (LANL 2016a). In 2007, a finding (RL.2-F-1) and observation (RL.2-0-1) in the NNSA/LASO report, September 2007, Release of Property (Land) Containing Residual Radioactive Material Self-Assessment Report, indicated that LANL had no policy or documented process in place for the release of property containing residual radioactive material. In response, LANL developed PD410, Los Alamos National Laboratory Environmental ALARA Program. The most recent version of this document became effective in 2014 (LANL 2014a). The document provides program authorities, responsibilities, descriptions, processes, and thresholds for conducting qualitative and quantitative ALARA analyses for prospective and actual radiation exposures to the public and to the environment resulting from DOE activities conducted on the LANL site.

  16. ALTERNATIVES OF MACCS2 IN LANL DISPERSION ANALYSIS FOR ONSITE AND OFFSITE DOSES

    Energy Technology Data Exchange (ETDEWEB)

    Wang, John HC [Los Alamos National Laboratory

    2012-05-01

    In modeling atmospheric dispersion to determine the consequences of an accidental release of radiological material, one of the common analysis tools used at Los Alamos National Laboratory (LANL) is the MELCOR Accident Consequence Code System, Version 2 (MACCS2). MACCS2, however, has some limitations and shortfalls for both onsite and offsite applications. Alternative computer codes, which could provide more realistic calculations, are being investigated for use at LANL. In the Yucca Mountain Project (YMP), the suitability of MACCS2 for the calculation of onsite worker doses was a concern; therefore, ARCON96 was chosen to replace MACCS2. YMP's use of ARCON96 provided results which clearly demonstrated the program's merit for onsite worker safety analyses in a wide range of complex configurations and scenarios. For offsite public exposures, the conservatism of MACCS2's treatment of turbulence phenomena at LANL is examined in this paper. The results show a conservatism of at least a factor of two in calculated public doses. The new EPA air quality model, AERMOD, which implements advanced meteorological turbulence calculations, is a good candidate for LANL applications to provide more confidence in the accuracy of offsite public dose projections.
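
    For orientation, the sketch below evaluates the classic ground-reflecting Gaussian plume expression that underlies straight-line dispersion estimates of this kind. It is illustrative only; MACCS2, ARCON96, and AERMOD each implement far more elaborate treatments of turbulence, meander, and plume depletion, and the release and meteorological parameters shown are hypothetical.

        import numpy as np

        def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
            """Ground-reflecting Gaussian plume concentration (g/m^3).

            Q: emission rate (g/s); u: wind speed (m/s); H: effective release
            height (m); sigma_y, sigma_z: dispersion parameters (m) evaluated
            at the downwind distance of interest.
            """
            lateral = np.exp(-0.5 * (y / sigma_y) ** 2)
            vertical = (np.exp(-0.5 * ((z - H) / sigma_z) ** 2)
                        + np.exp(-0.5 * ((z + H) / sigma_z) ** 2))
            return Q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Hypothetical receptor ~1 km downwind on the plume centerline
        chi = gaussian_plume(Q=1.0, u=3.0, y=0.0, z=1.5, H=20.0,
                             sigma_y=80.0, sigma_z=40.0)
        print(f"chi/Q ~ {chi:.2e} s/m^3")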

  17. LANL/Green Star spectrometer tests

    Energy Technology Data Exchange (ETDEWEB)

    Sampson, T.E.; Cremers, T.L.; Vo, D.T. [Los Alamos National Lab., NM (United States); Seldiakov, Y.P.; Dorin, A.B.; Kondrashov, M.V. [Green Star, Moscow (Russian Federation); Timoshin, V.I. [VNIINM, Moscow (Russian Federation)

    1997-12-01

    The US and Russia have agreed to the joint development of a nondestructive assay system for use to support the dismantlement of nuclear weapons in Russia. This nondestructive assay system will be used to measure plutonium produced by the conversion of Russian nuclear weapons. The NDA system for Russia will be patterned after the ARIES NDA system being constructed at Los Alamos. One goal of the program is to produce an NDA system for use in Russia that maximizes the use of Russian resources to facilitate maintenance and future upgrades. The Green Star SBS50 Single Board Spectrometer system (Green Star Ltd., Moscow, Russia) has been suggested for use as the data acquisition component for gamma ray instruments in the system. Possible uses are for plutonium isotopic analysis and also segmented gamma scanning. Green Star has also developed analysis software for the SBS50. This software, both plutonium isotopic analysis and uranium enrichment analysis, was developed specifically for customs/border inspection applications (low counting rate applications and identification as opposed to quantification) and was not intended for MC and A applications. Because of the relative immaturity of the Green Star plutonium isotopic analysis software (it has been under development for only one year and is patterned after US development circa 1980), it was tentatively agreed, before the tests, that the Russian NDA system would use the Los Alamos PC/FRAM software for plutonium isotopic analysis. However, it was also decided to include the Green Star plutonium isotopic software in the testing, both to quantify its performance for MC and A applications and also to provide additional data to Green Star for further development of their software. The main purpose of the testing was to evaluate the SBS-50 spectrometer as a data acquisition device for use with LANL software.

  18. Management Academy LANL Business Systems: Property Management, Course #31036

    Energy Technology Data Exchange (ETDEWEB)

    Shepherd, Michael J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rinke, Helen Mae [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hanson, Todd [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wolfe, Randy P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-08

    Los Alamos National Laboratory (LANL) is responsible for the efficient economical management of all government property in its stewardship. This training explains the role LANL managers have in managing, controlling, and disposing of government property. The Laboratory's goal is good asset management. By properly managing property across the facility, Laboratory managers can help ASM improve government property utilization and extend asset life, while reducing asset-related operating costs and expenditures.

  19. LANL Environmental ALARA Program Status Report for CY 2015

    Energy Technology Data Exchange (ETDEWEB)

    Whicker, Jeffrey Jay [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mcnaughton, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gillis, Jessica Mcdonnel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-29

    Los Alamos National Laboratory (LANL) ensures that radiation exposures to members of the public and the environment from LANL operations, past and present, are below regulatory thresholds and are as low as reasonably achievable (ALARA) through compliance with DOE Order 458.1, Radiation Protection for the Public and the Environment, and LANL Policy 412, Environmental Radiation Protection. In 2007, a finding (RL.2-F-1) and observation (RL.2-0-1) in the NNSA/LASO report, September 2007, Release of Property (Land) Containing Residual Radioactive Material Self-Assessment Report, indicated that LANL had no policy or documented process in place for the release of property containing residual radioactive material. In response, LANL developed PD410, Los Alamos National Laboratory Environmental ALARA Program. The most recent version of this document became effective on September 28, 2011. The document provides program authorities, responsibilities, descriptions, processes, and thresholds for conducting qualitative and quantitative ALARA analyses for prospective and actual radiation exposures to the public and to the environment resulting from DOE activities conducted on the LANL site.

  20. Multicomponent Seismic Analysis and Calibration to Improve Recovery from Algal Mounds: Application to the Roadrunner/Towaoc area of the Paradox Basin, UTE Mountain UTE Reservation, Colorado

    Energy Technology Data Exchange (ETDEWEB)

    Joe Hachey

    2007-09-30

    The goals of this project were: (1) To enhance recovery of oil contained within algal mounds on the Ute Mountain Ute tribal lands. (2) To promote the use of advanced technology and expand the technical capability of the Native American oil production corporations by direct assistance in the current project and dissemination of technology to other Tribes. (3) To develop an understanding of multicomponent seismic data as it relates to the variations in permeability and porosity of algal mounds, as well as lateral facies variations, for use in both reservoir development and exploration. (4) To identify any undiscovered algal mounds for field-extension within the area of seismic coverage. (5) To evaluate the potential for applying CO{sub 2} floods, steam floods, water floods or other secondary or tertiary recovery processes to increase production. The technical work scope was carried out by: (1) Acquiring multicomponent seismic data over the project area; (2) Processing and reprocessing the multicomponent data to extract as much geological and engineering data as possible within the budget and time-frame of the project; (3) Preparing maps and data volumes of geological and engineering data based on the multicomponent seismic and well data; (4) Selecting drilling targets if warranted by the seismic interpretation; (5) Constructing a static reservoir model of the project area; and (6) Constructing a dynamic history-matched simulation model from the static model. The original project scope covered a 6 mi{sup 2} (15.6 km{sup 2}) area encompassing two algal mound fields (Towaoc and Roadrunner). 3D3C seismic data was to be acquired over this area to delineate mound complexes and image internal reservoir properties such as porosity and fluid saturations. After the project began, the Red Willow Production Company, a project partner and fully-owned company of the Southern Ute Tribe, contributed additional money to upgrade the survey to a nine-component (3D9C) survey. The purpose

  1. Status of LANL Efforts to Effectively Use Sequoia

    Energy Technology Data Exchange (ETDEWEB)

    Nystrom, William David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-14

    Los Alamos National Laboratory (LANL) is currently working on three new production applications: VPIC, xRage, and Pagosa. VPIC was designed to be a 3D relativistic, electromagnetic particle-in-cell code for plasma simulation. xRage is a 3D AMR-mesh, multi-physics hydrodynamics code, and Pagosa is a 3D structured-mesh, multi-physics hydrodynamics code.
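
    For a flavor of the particle-advance kernel at the heart of a particle-in-cell code such as VPIC, the sketch below implements the standard (non-relativistic) Boris push for a single charged particle. It is a textbook illustration, not VPIC source; a production code uses the relativistic form of this update and adds field gather, current deposition, and field solve steps.

        import numpy as np

        def boris_push(x, v, E, B, q_m, dt):
            """One Boris step: half electric kick, magnetic rotation, half kick."""
            v_minus = v + 0.5 * q_m * dt * E            # first half acceleration
            t = 0.5 * q_m * dt * B                      # rotation vector
            s = 2.0 * t / (1.0 + np.dot(t, t))
            v_prime = v_minus + np.cross(v_minus, t)
            v_plus = v_minus + np.cross(v_prime, s)     # magnetic rotation
            v_new = v_plus + 0.5 * q_m * dt * E         # second half acceleration
            return x + v_new * dt, v_new

        # Example: electron gyrating in a uniform magnetic field (illustrative values)
        x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
        E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
        for _ in range(1000):
            x, v = boris_push(x, v, E, B, q_m=-1.76e11, dt=1.0e-12)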

  2. Lessons learned in TRU waste process improvement at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Del Signore, J. C.; Huchton, J. (Judith); Martin, B. (Beverly); Lindahl, P. (Peter); Miller, S. (Scott); Hartwell, W. B. (Ware B.)

    2004-01-01

    Typical papers that discuss lessons learned or quality improvement focus on the challenge for a production facility of reaching six sigma (3.4 defects per million opportunities) from five sigma. This paper discusses lessons learned when Los Alamos National Laboratory's (LANL) transuranic (TRU) waste management project was challenged to establish a production system to meet the customer's expectations. The target for FY 2003 was set as two shipments of TRU waste per week leaving the site. The average for the four previous years (FY99-02) was about one shipment every two months. LANL recognized that, despite its success in 1999 as the first site to ship TRU waste to open the Waste Isolation Pilot Plant (WIPP), significant changes to the way business was being done were required to move to a production mode. Process improvements began in earnest in April 2002. This paper discusses several of the initiatives LANL took to achieve forty-five shipments in FY03. The paper is organized by topic into the five major areas in which LANL worked to get the job done.

  3. LANL Contributions to the B61 LIfe Extension Program

    Energy Technology Data Exchange (ETDEWEB)

    Corpion, Juan Carlos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-10

    The Los Alamos National Laboratory (LANL) has a long, proud heritage in science and innovation that extends over 70 years. Although the Laboratory’s primary responsibility is assuring the safety and reliability of the nation’s nuclear deterrent, Laboratory staff work on a broad range of advanced technologies to provide the best, most effective scientific and engineering solutions to the nation’s critical security challenges. The world is rapidly changing, but this essential responsibility remains LANL’s core mission. LANL is the design laboratory for the nuclear explosive package of the B61 Air Force bomb. The B61-12 Life Extension Program (LEP) activities at LANL will increase the lifetime of the bomb and provide safety and security options to meet security environments both today and in the future. The B61’s multiple-platform functionality, unique safety features, and large number of components make the B61-12 LEP one of the most complex LEPs ever attempted. Over 230 LANL scientists, engineers, technicians, and support personnel from across the Laboratory are bringing decades of interdisciplinary knowledge, technical expertise, and leading-edge capabilities to LANL’s work on the LEP.

  4. Electrical Safety Program: Nonelectrical Crafts at LANL, Live #12175

    Energy Technology Data Exchange (ETDEWEB)

    Glass, George [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-22

    Los Alamos National Laboratory (LANL) and the federal government require those working with or near electrical equipment to be trained on electrical hazards and how to avoid them. Although you might not be trained to work on electrical systems, your understanding of electricity, how it can hurt you, and what precautions to take when working near electricity could save you or others from injury or death. This course, Electrical Safety Program: Nonelectrical Crafts at LANL (12175), provides knowledge of basic electrical concepts, such as current, voltage, and resistance, and their relationship to each other. You will learn how to apply these concepts to safe work practices while learning about the dangers of electricity—and associated hazards—that you may encounter on the job. The course also discusses what you can do to prevent electrical accidents and what you should do in the event of an electrical emergency. The LANL Electrical Safety Program is defined by LANL Procedure (P) 101-13. An electrical safety officer (ESO) is well versed in this document and should be consulted regarding electrical questions. Appointed by the responsible line manager (RLM), ESOs can tell you if a piece of equipment or an operation is safe or how to make it safe.

  5. MULTICOMPONENT SEISMIC ANALYSIS AND CALIBRATION TO IMPROVE RECOVERY FROM ALGAL MOUNDS: APPLICATION TO THE ROADRUNNER/TOWAOC AREA OF THE PARADOX BASIN, UTE MOUNTAIN UTE RESERVATION, COLORADO

    Energy Technology Data Exchange (ETDEWEB)

    Paul La Pointe; Claudia Rebne; Steve Dobbs

    2004-03-01

    This report describes the results obtained in fulfillment of contract DE-FG26-02NT15451, ''Multicomponent Seismic Analysis and Calibration to Improve Recovery from Algal Mounds: Application to the Roadrunner/Towaoc Area of the Paradox Basin, Ute Mountain Ute Reservation, Colorado'', for the Second Biennial Report covering the time period May 1, 2003 through October 31, 2003. During this period, the project achieved two significant objectives: completion of the acquisition and processing design and specifications for the 3D9C seismic acquisition and the 3D VSP log, and completion of the permitting process involving State, Tribal, and Federal authorities. Successful completion of these two major milestones paves the way for field acquisition as soon as weather permits in the spring of 2004. This report primarily describes the design and specifications for the VSP and 3D9C surveys.

  6. Issues for reuse of gloveboxes at LANL TA-55

    Energy Technology Data Exchange (ETDEWEB)

    Cadwallader, L.C.; Pinson, P.A.; Miller, C.F.

    1998-08-01

    This report is a summary of issues that face plutonium glovebox designers and users at the Los Alamos National Laboratory (LANL) Technical Area 55 (TA-55). Characterizing the issues is a step in the task of enhancing the next generation glovebox design to minimize waste streams while providing the other design functions. This report gives an initial assessment of eight important design and operation issues that can benefit from waste minimization.

  8. Glass Development for Treatment of LANL Evaporator Bottoms Waste

    Energy Technology Data Exchange (ETDEWEB)

    DE Smith; GF Piepel; GW Veazey; JD Vienna; ML Elliott; RK Nakaoka; RP Thimpke

    1998-11-20

    Vitrification is an attractive treatment option for meeting the stabilization and final disposal requirements of many plutonium (Pu) bearing materials and wastes at the Los Alamos National Laboratory (LANL) TA-55 facility, Rocky Flats Environmental Technology Site (RFETS), Hanford, and other Department of Energy (DOE) sites. The Environmental Protection Agency (EPA) has declared that vitrification is the "best demonstrated available technology" for high-level radioactive wastes (HLW) (Federal Register 1990) and has produced a handbook of vitrification technologies for treatment of hazardous and radioactive waste (US EPA, 1992). This technology has been demonstrated to convert Pu-containing materials (Kormanos, 1997) into durable (Lutze, 1988) and accountable (Forsberg, 1995) waste forms with reduced need for safeguarding (McCulhun, 1996). The composition of the Evaporator Bottoms Waste (EVB) at LANL, like that of many other Pu-bearing materials, varies widely and is generally unpredictable. The goal of this study is to optimize the composition of glass for EVB waste at LANL, and present the basic techniques and tools for developing optimized glass compositions for other Pu-bearing materials in the complex. This report outlines an approach for glass formulation with fixed property restrictions, using glass property-composition databases. This approach is applicable to waste glass formulation for many variable waste streams and vitrification technologies. Also reported are the preliminary property data for simulated evaporator bottom glasses, including glass viscosity and glass leach resistance using the Toxicity Characteristic Leaching Procedure (TCLP).

  9. Overview of LANL short-pulse ion acceleration activities

    Energy Technology Data Exchange (ETDEWEB)

    Flippo, Kirk A. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Schmitt, Mark J. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Offermann, Dustin [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Cobble, James A. [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Gautier, Donald [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Kline, John [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Workman, Jonathan [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Archuleta, Fred [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Gonzales, Raymond [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Hurry, Thomas [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Johnson, Randall [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Letzring, Samuel [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Montgomery, David [Los Alamos National Laboratory; Reid, Sha-Marie [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Shimada, Tsutomu [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Gaillard, Sandrine A. [Univ. of Nevada, Reno, NV (United States); Sentoku, Yasuhiko [Univ. of Nevada, Reno, NV (United States); Bussman, Michael [Forschungszentrum Dresden (Germany); Kluge, Thomas [Forschungszentrum Dresden (Germany); Cowan, Thomas E. [Forschungszentrum Dresden (Germany); Rassuchine, Jenny M. [Forschungszentrum Dresden - Rossendorf (Germany); Lowenstern, Mario E. [Univ. of Michigan, Ann Arbor, MI (United States); Mucino, J. Eduardo [Univ. of Michigan, Ann Arbor, MI (United States); Gall, Brady [Univ. of Missouri, Columbia, MO (United States); Korgan, Grant [Nanolabz, Reno, NV (United States); Malekos, Steven [Nanolabz, Reno, NV (United States); Adams, Jesse [Nanolabz, Reno, NV (United States); Bartal, Teresa [Univ. of California, San Diego, CA (United States); Chawla, Surgreev [Univ. of California, San Diego, CA (United States); Higginson, Drew [Univ. of California, San Diego, CA (United States); Beg, Farhat [Univ. of California, San Diego, CA (United States); Nilson, Phil [Lab. for Laser Energetics, Rochester, NY (United States); Mac Phee, Andrew [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Le Pape, Sebastien [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Hey, Daniel [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Mac Kinnon, Andy [Lawrence Livermore National Laboratory (LLNL), Livermore, CA (United States); Geissel, Mattias [Sandia National Lab. (SNL), Albuquerque, NM (United States); Schollmeier, Marius [Sandia National Lab. (SNL), Albuquerque, NM (United States); Stephens, Rich [General Atomics, San Diego, CA (United States)

    2009-12-02

    An overview of Los Alamos National Laboratory's activities related to short-pulse ion acceleration is presented. LANL is involved in several projects related to Inertial Confinement Fusion (Fast Ignition) and Laser-Ion Acceleration. LANL has an active high energy X-ray backlighter program for radiographing ICF implosions and other High Energy Density Laboratory Physics experiments. Using the Trident 200TW laser we are currently developing high energy photon (>10 keV) phase contrast imaging techniques to be applied on Omega and the NIF. In addition we are engaged in multiple programs in laser ion acceleration to boost the ion energies and efficiencies for various potential applications including Fast Ignition, active material interrogation, and medical applications. Two basic avenues to increase ion performance are currently under study: one involves ultra-thin targets and the other involves changing the target geometry. We have recently had success in boosting proton energies above 65 MeV into the medical application range. Highlights covered in the presentation include: The Trident Laser System; X-ray Phase Contrast Imaging for ICF and HEDLP; Improving TNSA Ion Acceleration; Scaling Laws; Flat Targets; Thin Targets; Cone Targets; Ion Focusing; Trident; Omega EP; Scaling Comparisons; and Conclusions.

  10. Trails Management at LANL - A Presentation to the Los Alamos County Parks and Recreation Board

    Energy Technology Data Exchange (ETDEWEB)

    Pava, Daniel Seth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-12

    Los Alamos National Laboratory’s (LANL) trail management program goals include, among others, reducing the risk of damage and injury to property, human life and health, and sensitive natural and cultural resources from social trail use at LANL; facilitating the establishment of a safe, viable network of linked trails; maintaining the security of LANL operations; respecting the wishes of local Pueblos; adapting trail use to changing conditions in a responsive manner; and maintaining the recreational functionality of DOE lands. There are approximately 30 miles of LANL trails. Some are open to the public and allow bicycles, horses, hikers, and runners. Know the rules of the trails to stay safe.

  11. LANL Safety Conscious Work Environment (SCWE) Self-Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Hargis, Barbara C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-01-29

    On December 21, 2012, Secretary of Energy Chu transmitted to the Defense Nuclear Facilities Safety Board (DNFSB) revised commitments on the implementation plan for Safety Culture at the Waste Treatment and Immobilization Plant. Action 2-5 was revised to require contractors and federal organizations to complete Safety Conscious Work Environment (SCWE) self-assessments and provide reports to the appropriate U.S. Department of Energy (DOE) Headquarters Program Office by September 2013. Los Alamos National Laboratory (LANL) planned and conducted a SCWE Self-Assessment over the time period July through August 2013 in accordance with the SCWE Self-Assessment Guidance provided by DOE. Significant field work was conducted over the 2-week period August 5-16, 2013. The purpose of the self-assessment was to evaluate whether programs and processes associated with a SCWE are in place and whether they are effective in supporting and promoting a SCWE.

  12. Design-Load Basis for LANL Structures, Systems, and Components

    Energy Technology Data Exchange (ETDEWEB)

    I. Cuesta

    2004-09-01

    This document supports the recommendations in the Los Alamos National Laboratory (LANL) Engineering Standard Manual (ESM), Chapter 5 (Structural), providing the basis for the loads, analysis procedures, and codes to be used in the ESM. It also provides the justification for eliminating certain loads from design consideration, and evidence that the design basis loads are appropriate and consistent with the graded approach required by the Department of Energy (DOE) nuclear safety management rule, 10 CFR Part 830. This document focuses on (1) the primary and secondary natural phenomena hazards listed in DOE-G-420.1-2, Appendix C, (2) additional loads not related to natural phenomena hazards, and (3) the design loads on structures during construction.

  13. LANL OPERATING EXPERIENCE WITH THE WAND AND HERCULES PROTOTYPE SYSTEMS

    Energy Technology Data Exchange (ETDEWEB)

    K. M. GRUETZMACHER; C. L. FOXX; S. C. MYERS

    2000-09-01

    The Waste Assay for Nonradioactive Disposal (WAND) and the High Efficiency Radiation Counters for Ultimate Low Emission Sensitivity (HERCULES) prototype systems have been operating at Los Alamos National Laboratory's (LANL's) Solid Waste Operations' (SWO's) non-destructive assay (NDA) building since 1997 and 1998, respectively. These systems are the cornerstone of the verification program for low-density Green is Clean (GIC) waste at the Laboratory. GIC waste includes all non-regulated waste generated in radiological controlled areas (RCAs) that has been actively segregated as clean (i.e., nonradioactive) through the use of waste generator acceptable knowledge (AK). The use of this methodology alters LANL's past practice of disposing of all room trash generated in nuclear facilities in radioactive waste landfills. Waste that is verified clean can be disposed of at the Los Alamos County Landfill. It is estimated that 50-90% of the low-density room trash from radioactive material handling areas at Los Alamos might be free of contamination. This approach avoids the high cost of disposal of clean waste at a radioactive waste landfill. It also reduces consumption of precious space in the radioactive waste landfill where disposal of this waste provides no benefit to the public or the environment. Preserving low level waste (LLW) disposal capacity for truly radioactive waste is critical in this era when expanding existing radioactive waste landfills or permitting new ones is resisted by regulators and stakeholders. This paper describes the operating experience with the WAND and HERCULES since they began operation at SWO. Waste for verification by the WAND system has been limited so far to waste from the Plutonium Facility and the Solid Waste Operations Facility. A total of 461 ft3 (13.1 m3) of low-density shredded waste and paper have been verified clean by the WAND system. The HERCULES system has been used to verify waste from four Laboratory

  14. Gas loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, Lee; Bartram, Brian; Dattelbaum, Dana; Lang, John; Morris, John

    2015-06-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez and Teflon. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown. LA-UR-15-20521
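
    The sensitivity to ambient temperature can be illustrated with the ideal gas law: at constant volume, P/T is constant, so a small temperature drift produces a proportional pressure change. The fill pressure and temperatures in the sketch below are assumed values for illustration, not parameters of the LANL system.

        # Ideal-gas estimate of how sensitive a sealed ~1.38 cc gas cell is to
        # ambient temperature drift (all values below are assumptions).
        P0 = 2.0e6        # initial fill pressure, Pa (~20 bar, assumed)
        T0 = 295.0        # fill temperature, K
        dT = 1.0          # ambient drift, K

        # Constant volume: P/T is constant, so dP = P0 * dT / T0
        dP = P0 * dT / T0
        print(f"A {dT:.1f} K drift changes the cell pressure by ~{dP/1e3:.1f} kPa "
              f"({100*dP/P0:.2f} % of fill pressure)")

    Even a 1 K drift therefore shifts the cell pressure by a few tenths of a percent, which is why continuous pressure monitoring up to the moment of firing is the preferred fielding approach.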

  15. U.S. Department of Energy Report, 2005 LANL Radionuclide Air Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Keith W. Jacobson, David P. Fuehne

    2006-09-01

    Amendments to the Clean Air Act, which added radionuclides to the National Emissions Standards for Hazardous Air Pollutants (NESHAP), went into effect in 1990. Specifically, a subpart (H) of 40 CFR 61 established an annual limit on the impact to the public attributable to emissions of radionuclides from U.S. Department of Energy facilities, such as the Los Alamos National Laboratory (LANL). As part of the new NESHAP regulations, LANL must submit an annual report to the U.S. Environmental Protection Agency headquarters and the regional office in Dallas by June 30. This report includes results of monitoring at LANL and the dose calculations for calendar year 2005.

  16. The LANL C-NR counting room and fission product yields

    Energy Technology Data Exchange (ETDEWEB)

    Jackman, Kevin Richard [Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)

    2015-09-21

    This PowerPoint presentation focused on the following areas: LANL C-NR counting room; Fission product yields; Los Alamos Neutron wheel experiments; Recent experiments at NCERC; and Post-detonation nuclear forensics.

  17. Study of possibility using LANL PSA-methodology for accident probability RBMK researches

    Energy Technology Data Exchange (ETDEWEB)

    Petrin, S.V.; Yuferev, V.Y.; Zlobin, A.M.

    1995-12-31

    The reactor facility probabilistic safety analysis (PSA) methodologies used at LANL (U.S.) and NIKIET (Russian Federation) are considered. The methodologies are compared in order to reveal their similarities and differences and to determine the possibility of using the LANL technique for RBMK-type reactor safety analysis. It is found that at the PSA-1 level the methodologies practically do not differ. At LANL, the PHA and HAZOP hazard analysis methods are used for more complete specification of the list of initiating events considered, which can also be useful when performing PSA for RBMK. Exchange of information regarding the methodology for detecting dependent faults and for assessing human factor impacts on reactor safety is considered reasonable. A comparative analysis of results for test problems or PSA fragments, using the various computer programs employed at NIKIET and LANL, is also considered useful.

  18. Preliminary lifetime predictions for 304 stainless steel as the LANL ABC blanket material

    Energy Technology Data Exchange (ETDEWEB)

    Park, J.J.; Buksa, J.J.; Houts, M.G.; Arthur, E.D.

    1997-11-01

    The prediction of materials lifetime in the preconceptual Los Alamos National Laboratory (LANL) Accelerator-Based Conversion of Plutonium (ABC) is of utmost interest. Because Hastelloy N showed good corrosion resistance to the Oak Ridge National Laboratory Molten Salt Reactor Experiment fuel salt that is similar to the LANL ABC fuel salt, Hastelloy N was originally proposed for the LANL ABC blanket material. In this paper, the possibility of using 304 stainless steel as a replacement for the Hastelloy N is investigated in terms of corrosion issues and fluence-limit considerations. An attempt is made, based on the previous Fast Flux Test Facility design data, to predict the preliminary lifetime estimate of the 304 stainless steel used in the blanket region of the LANL ABC.

  19. Seismic Fragility of the LANL Fire Water Distribution System

    Energy Technology Data Exchange (ETDEWEB)

    Greg Mertz

    2007-03-30

    The purpose of this report is to present the results of a site-wide system fragility assessment. This assessment focuses solely on the performance of the water distribution systems that supply Chemical and Metallurgy Research (CMR), Weapons Engineering and Tritium Facility (WETF), Radioactive Liquid Waste Treatment Facility (RLWTF), Waste Characterization, Reduction, Repackaging Facility (WCRRF), and Transuranic Waste Inspectable Storage Project (TWISP). The analysis methodology is based on the American Lifelines Alliance seismic fragility formulations for water systems. System fragilities are convolved with the 1995 LANL seismic hazards to develop failure frequencies. Acceptance is determined by comparing the failure frequencies to the DOE-1020 Performance Goals. This study concludes that if a significant number of existing isolation valves in the water distribution system are closed to dedicate the entire water system to fighting fires in specific nuclear facilities, then the water distribution systems for WETF, RLWTF, WCRRF, and TWISP meet the PC-2 performance goal and the water distribution system for CMR is capable of surviving a 0.06g earthquake. A parametric study of the WETF water distribution system demonstrates that if a significant number of valves in the water distribution system are NOT closed to dedicate the entire water system to fighting fires in WETF, then the water distribution system for WETF has an annual probability of failure on the order of 4 x 10{sup -3}, which does not meet the PC-2 performance goal. Similar conclusions are expected for CMR, RLWTF, WCRRF, and TWISP. It is important to note that some of the assumptions made in deriving the results should be verified by personnel in the safety-basis office and may need to be incorporated into technical surveillance requirements in the existing authorization basis documentation if credit for the availability of fire protection water is taken at the PC-2 earthquake level.
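
    The convolution of a fragility curve with a seismic hazard curve can be sketched numerically as shown below. The power-law hazard and lognormal fragility parameters are placeholders, not the 1995 LANL hazard or the American Lifelines Alliance pipe fragilities, so the resulting frequency is purely illustrative.

        import numpy as np
        from scipy.stats import lognorm

        a = np.linspace(0.01, 2.0, 2000)          # peak ground acceleration grid, g

        # Illustrative power-law hazard: annual frequency of exceeding acceleration a
        k0, k1 = 1.0e-3, 2.5
        haz = k0 * (a / 0.1) ** (-k1)

        # Illustrative lognormal fragility: median capacity 0.5 g, composite beta 0.5
        Am, beta = 0.5, 0.5
        pf = lognorm.cdf(a, s=beta, scale=Am)

        # Convolve: failure frequency = sum over bins of P(fail | a) * frequency of a
        d_lambda = -np.diff(haz)                  # hazard is decreasing, so this is positive
        pf_mid = 0.5 * (pf[1:] + pf[:-1])
        lam_f = np.sum(pf_mid * d_lambda)
        print(f"annual failure frequency ~ {lam_f:.1e} per year")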

  20. Gap Analysis of Storage Conditions between NNSS and LANL for SAVY 4000 Use

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, Kirk Patrick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, Paul Herrick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stone, Timothy Amos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Becker, Chandler Gus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Karns, Tristan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Veirs, Douglas Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-25

    As part of the gap analysis for utilizing the SAVY 4000® at NNSS, the hydrogen gas generation rate and the effect of atmospheric pressure changes on the maximum normal operating pressure (MNOP) of the SAVY container must be evaluated because the nuclear material characteristics and atmospheric conditions will not be the same for NNSS and LANL. This paper documents this analysis and demonstrates that the LANL SAVY Safety Analysis Report (SAR) is bounding with respect to the Nevada facilities.

  1. Screening and Spectral Summing of LANL Empty Waste Drums - 13226

    Energy Technology Data Exchange (ETDEWEB)

    Gruetzmacher, Kathleen M.; Bustos, Roland M.; Ferran, Scott G.; Gallegos, Lucas E. [Los Alamos National Laboratory, Los Alamos, New Mexico 87545 (United States); Lucero, Randy P. [Pajarito Scientific Corporation, Santa Fe, New Mexico 87505 (United States)

    2013-07-01

    Empty 55-gallon drums that formerly held transuranic (TRU) waste (often over-packed in 85-gallon drums) are generated at LANL and require radiological characterization for disposition. These drums are typically measured and analyzed individually using high purity germanium (HPGe) gamma detectors. This approach can be resource and time intensive. For a project requiring several hundred drums to be characterized in a short time frame, an alternative approach was developed. The approach utilizes a combination of field screening and spectral summing that was required to be technically defensible and meet the Nevada National Security Site (NNSS) Waste Acceptance Criteria (WAC). In the screening phase of the operation, the drums were counted for 300 seconds (compared to 600 seconds for the typical approach) and checked against Low Level (LL)/TRU thresholds established for each drum configuration and detector. Multiple TRU nuclides and multiple gamma rays for each nuclide were evaluated using an automated spreadsheet utility that can process data from up to 42 drums at a time. Screening results were reviewed by an expert analyst to confirm the field LL/TRU determination. The spectral summing analysis technique combines spectral data (channel-by-channel) associated with a group of individual waste containers, producing a composite spectrum. The grouped drums must meet specific similarity criteria. Another automated spreadsheet utility was used to spectrally sum data from an unlimited number of similar drums grouped together. The composite spectrum represents a virtual combined drum for the group of drums and was analyzed using the SNAP{sup TM}/Radioassay Data Sheet (RDS)/Batch Data Report (BDR) method. The activity results for a composite virtual drum were divided equally amongst the individual drums to generate characterization results for each individual drum in the group. An initial batch of approximately 500 drums was measured and analyzed in less than 2 months in 2011.
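
    The spectral-summing idea, channel-by-channel addition of spectra from similar drums followed by equal apportionment of the composite activity, can be sketched as follows. Array sizes, the region of interest, and the calibration factor are placeholders, not the SNAP/RDS/BDR parameters used at LANL.

        import numpy as np

        n_drums, n_channels = 6, 4096
        rng = np.random.default_rng(0)
        spectra = rng.poisson(0.2, size=(n_drums, n_channels))   # synthetic counts per channel

        composite = spectra.sum(axis=0)        # virtual combined-drum spectrum

        # Pretend a peak-analysis step returned net counts in a region of interest,
        # converted to activity with an assumed calibration factor (Bq per count).
        roi = slice(1200, 1230)
        net_counts = composite[roi].sum()
        calibration = 0.05                     # hypothetical Bq/count
        composite_activity = net_counts * calibration

        # Composite-drum activity divided equally among the drums in the group
        per_drum_activity = composite_activity / n_drums
        print(f"composite: {composite_activity:.1f} Bq, per drum: {per_drum_activity:.2f} Bq")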

  2. Mechanistic studies of Ser/Thr dehydration catalyzed by a member of the LanL lanthionine synthetase family.

    Science.gov (United States)

    Goto, Yuki; Okesli, Ayşe; van der Donk, Wilfred A

    2011-02-08

    Members of the LanL family of lanthionine synthetases consist of three catalytic domains, an N-terminal pSer/pThr lyase domain, a central Ser/Thr kinase domain, and a C-terminal lanthionine cyclase domain. The N-terminal lyase domain has sequence homology with members of the OspF family of effector proteins. In this study, the residues in the lyase domain of VenL that are conserved in the active site of OspF proteins were mutated to evaluate their importance for catalysis. In addition, residues that are fully conserved in the LanL family but not in the OspF family were mutated. Activity assays with these mutant proteins are consistent with a model in which Lys80 in VenL deprotonates the α-proton of pSer/pThr residues to initiate the elimination reaction. Lys51 is proposed to activate this proton by coordination to the carbonyl of the pSer/pThr, and His53 is believed to protonate the phosphate leaving group. These functions are very similar to the corresponding homologous residues in OspF proteins. On the other hand, recognition of the phosphate group of pSer/pThr appears to be achieved differently in VenL than in the OspF proteins. Arg156 and Lys103 are thought to interact with the phosphate group on the basis of a structural homology model.

  3. Mechanistic Studies of Ser/Thr Dehydration Catalyzed by a Member of the LanL Lanthionine Synthetase Family†

    Science.gov (United States)

    2011-01-01

    Members of the LanL family of lanthionine synthetases consist of three catalytic domains, an N-terminal pSer/pThr lyase domain, a central Ser/Thr kinase domain, and a C-terminal lanthionine cyclase domain. The N-terminal lyase domain has sequence homology with members of the OspF family of effector proteins. In this study, the residues in the lyase domain of VenL that are conserved in the active site of OspF proteins were mutated to evaluate their importance for catalysis. In addition, residues that are fully conserved in the LanL family but not in the OspF family were mutated. Activity assays with these mutant proteins are consistent with a model in which Lys80 in VenL deprotonates the α-proton of pSer/pThr residues to initiate the elimination reaction. Lys51 is proposed to activate this proton by coordination to the carbonyl of the pSer/pThr, and His53 is believed to protonate the phosphate leaving group. These functions are very similar to the corresponding homologous residues in OspF proteins. On the other hand, recognition of the phosphate group of pSer/pThr appears to be achieved differently in VenL than in the OspF proteins. Arg156 and Lys103 are thought to interact with the phosphate group on the basis of a structural homology model. PMID:21229987

  4. Cost reduction study for the LANL KrF laser-driven LMF design

    Energy Technology Data Exchange (ETDEWEB)

    1989-10-27

    This report is in fulfillment of the deliverable requirements for the optical components portions of the LANL-KrF Laser-Driven LMF Design Cost Reduction Study. This report examines the future cost reductions that may accrue through the use of mass production, innovative manufacturing techniques, and new materials. Results are based on data collection and survey of optical component manufacturers, BDM experience, and existing cost models. These data provide a good representation of current methods and technologies from which future estimates can be made. From these data, a series of scaling relationships were developed to project future costs for a selected set of technologies. The scaling relationships are sensitive to cost driving parameters such as size and surface figure requirements as well as quantity requirements, production rate, materials, and manufacturing processes. In addition to the scaling relationships, descriptions of the selected processes were developed along with graphical representations of the processes. This report provides a useful tool in projecting the costs of advanced laser concepts at the component level of detail. A mix of the most diverse yet comparable technologies was chosen for this study. This yielded a useful, yet manageable number of variables to examine. The study has resulted in a first-order cost model which predicts the relative cost behavior of optical components within different variable constraints.
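
    The scaling relationships themselves are not reproduced in the abstract, but a first-order cost model of the kind described is often written as a power law in a cost-driving parameter combined with a learning-curve factor for production quantity. The sketch below is purely illustrative: the exponents, reference size, and the name optic_cost are assumptions, not values from the study.

```python
import math

def optic_cost(base_cost, size_m, ref_size_m, quantity,
               size_exponent=1.8, learning_rate=0.9):
    """Illustrative first-order cost scaling for an optical component.

    Cost grows as a power of aperture size and falls with cumulative quantity
    according to a classic learning curve (each doubling of quantity multiplies
    unit cost by `learning_rate`).
    """
    size_factor = (size_m / ref_size_m) ** size_exponent
    learning_exponent = math.log(learning_rate, 2)   # negative for rates below 1
    quantity_factor = quantity ** learning_exponent
    return base_cost * size_factor * quantity_factor

# Example: a 0.8 m optic relative to a 0.5 m reference, produced in a lot of 200.
print(round(optic_cost(base_cost=50_000.0, size_m=0.8, ref_size_m=0.5, quantity=200), 2))
```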

  5. Integrated solutions to SHM problems: an overview of SHM research at the LANL/UCSD engineering institute

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Farinholt, Kevin [Los Alamos National Laboratory; Todd, Michael [Los Alamos National Laboratory

    2010-12-08

    This seminar will provide an overview of structural health monitoring (SHM) research that is being undertaken at Los Alamos National Laboratory (LANL). The seminar will begin by stating that SHM should be viewed as an important component of the more comprehensive intelligent life-cycle engineering process. Then LANL's statistical pattern recognition paradigm for addressing SHM problems will be introduced and current research that is focused on each part of the paradigm will be discussed. In this paradigm, the process can be broken down into four parts: (1) Operational Evaluation, (2) Data Acquisition and Cleansing, (3) Feature Extraction, and (4) Statistical Model Development for Feature Discrimination. When one attempts to apply this paradigm to data from real-world structures, it quickly becomes apparent that the ability to cleanse, compress, normalize and fuse data to account for operational and environmental variability is a key implementation issue when addressing Parts 2-4 of this paradigm. This discussion will be followed by the introduction of a new project entitled 'Intelligent Wind Turbines', which is the focus of much of our current SHM research. This summary will be followed by a discussion of issues that must be addressed if this technology is to make the transition from research to practice and new research directions that are emerging for SHM.
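
    The four-part paradigm can be illustrated with a toy damage-detection pipeline. The sketch below is a generic example in the spirit of statistical pattern recognition for SHM, not LANL's actual codes: it extracts autoregressive (AR) coefficients as features from vibration records (Part 3) and scores new records against a Mahalanobis-distance model trained on baseline data (Part 4). All names, model orders, and data are illustrative.

```python
import numpy as np

def ar_features(signal, order=4):
    """Feature extraction (Part 3): least-squares AR(order) coefficients of a record."""
    x = np.asarray(signal, dtype=float)
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def mahalanobis_model(baseline_features):
    """Statistical model (Part 4): baseline mean/covariance for outlier scoring."""
    F = np.asarray(baseline_features)
    mean = F.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(F, rowvar=False))
    def score(features):
        d = np.asarray(features) - mean
        return float(np.sqrt(d @ cov_inv @ d))
    return score

rng = np.random.default_rng(1)
baseline = [ar_features(rng.standard_normal(2048)) for _ in range(50)]
score = mahalanobis_model(baseline)
new_record = ar_features(rng.standard_normal(2048))   # stand-in for a newly acquired record
print("outlier score:", round(score(new_record), 2))
```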

  6. LANL2DZ basis sets recontracted in the framework of density functional theory.

    Science.gov (United States)

    Chiodo, S; Russo, N; Sicilia, E

    2006-09-14

    In this paper we report recontracted LANL2DZ basis sets for first-row transition metals. The valence-electron shell basis functions were recontracted using the PWP86 generalized gradient approximation functional and the hybrid B3LYP one. Starting from the original LANL2DZ basis sets, a cyclic method was used to optimize the contraction coefficients variationally, while the contraction scheme was held fixed at that of the original LANL2DZ basis functions. The performance of the recontracted basis sets was analyzed by direct comparison between calculated and experimental excitation and ionization energies. The results reported here, compared with those obtained using the original basis sets, clearly show an improvement in the reproduction of the corresponding experimental gaps.
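
    For readers who want a baseline to compare against, the sketch below shows one way to run a DFT calculation with the standard (original) LANL2DZ basis and effective core potential, assuming PySCF and its bundled LANL2DZ tables; the recontracted coefficients reported in the paper are not distributed with PySCF, so this reproduces only the unmodified starting point.

```python
# A minimal baseline calculation with the standard LANL2DZ basis/ECP in PySCF
# (assumes PySCF is installed; the recontracted sets from the paper are not included).
from pyscf import gto, dft

# Neutral Sc atom, [Ar]3d1 4s2, one unpaired electron.
mol = gto.M(atom="Sc 0 0 0", basis="lanl2dz", ecp="lanl2dz", spin=1)

mf = dft.UKS(mol)
mf.xc = "b3lyp"          # hybrid functional used in the recontraction study
energy = mf.kernel()
print("B3LYP/LANL2DZ total energy (Hartree):", energy)
```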

  7. Tested by Fire - How two recent Wildfires affected Accelerator Operations at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Spickermann, Thomas [Los Alamos National Laboratory

    2012-08-01

    In a little more than a decade, two large wildfires threatened Los Alamos and impacted accelerator operations at LANL. In 2000 the Cerro Grande Fire destroyed hundreds of homes, as well as structures and equipment at the DARHT facility; the DARHT accelerators were safe in a fire-proof building. In 2011 the Las Conchas Fire burned about 630 square kilometers (250 square miles) and came dangerously close to Los Alamos/LANL. Lessons learned for LANSCE accelerator operations during the Las Conchas Fire were: (1) develop a plan to efficiently shut down the accelerator on short notice; (2) establish clear lines of communication in emergency situations; and (3) plan recovery and keep squirrels out.

  8. Gap Analysis of Storage Conditions between NNSS and LANL for SAVY 4000 Use

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, Kirk Patrick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Smith, Paul Herrick [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stone, Timothy Amos [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Becker, Chandler Gus [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Karns, Tristan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Veirs, Douglas Kirk [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-25

    As part of the gap analysis for utilizing the SAVY 4000® at NNSS, the hydrogen gas generation rate and the effect of atmospheric pressure changes on the maximum normal operating pressure (MNOP) of the SAVY container must be evaluated because the nuclear material characteristics and atmospheric conditions will not be the same for NNSS and LANL. This paper documents this analysis and demonstrates that the LANL SAVY Safety Analysis Report (SAR) (1) is bounding with respect to the Nevada facilities.
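
    The heart of the MNOP comparison is a simple ideal-gas estimate: hydrogen generated at some rate accumulates in the container free volume and adds to the site-dependent ambient pressure. The sketch below is a generic illustration of that calculation, not the SAR methodology; the generation rate, free volume, and ambient pressures are made-up numbers.

```python
R = 8.314  # J/(mol*K)

def container_pressure(ambient_pa, h2_rate_mol_per_s, elapsed_s,
                       free_volume_m3, temperature_k):
    """Ideal-gas estimate of sealed-container pressure after hydrogen accumulation."""
    n_h2 = h2_rate_mol_per_s * elapsed_s
    return ambient_pa + n_h2 * R * temperature_k / free_volume_m3

# Illustrative comparison of two ambient pressures (values are placeholders only).
one_year = 365.25 * 24 * 3600
for site, ambient in [("NNSS (illustrative)", 87_000.0), ("LANL (illustrative)", 78_000.0)]:
    p = container_pressure(ambient, h2_rate_mol_per_s=1e-10, elapsed_s=one_year,
                           free_volume_m3=4e-3, temperature_k=300.0)
    print(site, round(p / 1000, 1), "kPa")
```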

  9. GUM-compliant uncertainty propagations for Pu and U concentration measurements using the 1st-prototype XOS/LANL hiRX instrument; an SRNL H-Canyon Test Bed performance evaluation project

    Energy Technology Data Exchange (ETDEWEB)

    Holland, Michael K. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, Patrick E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2016-05-04

    An SRNL H-Canyon Test Bed performance evaluation project was completed jointly by SRNL and LANL on a prototype monochromatic energy dispersive x-ray fluorescence instrument, the hiRX. A series of uncertainty propagations were generated based upon plutonium and uranium measurements performed using the alpha-prototype hiRX instrument. Data reduction and uncertainty modeling provided in this report were performed by the SRNL authors. Observations and lessons learned from this evaluation were also used to predict the expected uncertainties that should be achievable at multiple plutonium and uranium concentration levels provided instrument hardware and software upgrades being recommended by LANL and SRNL are performed.
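
    A GUM-compliant propagation combines the standard uncertainties of the input quantities through their sensitivity coefficients, adds them in quadrature for uncorrelated inputs, and scales by a coverage factor. The sketch below shows that generic calculation; the component names and values are placeholders, not the SRNL uncertainty budget for the hiRX.

```python
import math

def combined_standard_uncertainty(components):
    """GUM-style combination for uncorrelated inputs.

    `components` is a list of (sensitivity_coefficient, standard_uncertainty) pairs;
    the combined standard uncertainty is the root-sum-square of c_i * u_i.
    """
    return math.sqrt(sum((c * u) ** 2 for c, u in components))

# Placeholder uncertainty budget for a Pu concentration measurement (relative units).
budget = [
    (1.00, 0.012),   # calibration standard
    (1.00, 0.008),   # counting statistics
    (0.85, 0.005),   # sample positioning / geometry
]
u_c = combined_standard_uncertainty(budget)
U = 2.0 * u_c        # expanded uncertainty, coverage factor k = 2 (~95 %)
print(f"u_c = {u_c:.4f}, U(k=2) = {U:.4f}")
```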

  10. USE OF DRILLING FLUIDS IN MONITORING WELL NETWORK INSTALLATION: LANL AND OPEN DISCUSSION

    Science.gov (United States)

    Personnel at the EPA Ground Water and Ecosystems Restoration Division (GWERD) were requested by EPA Region 6 to provide a technical analysis of the impacts of well drilling practices implemented at the Los Alamos National Laboratory (LANL) as part of the development of their grou...

  11. New World Record for DTU Roadrunners

    DEFF Research Database (Denmark)

    Lassen, Lisbeth

    2015-01-01

    The eco-car beat its own world record from 2013 by driving the equivalent of 665 km on one liter of gasoline at the Shell Eco-Marathon in Rotterdam on Saturday, 23 May, in the Urban Concept category.

  12. Waste assay and mass balance for the decontamination and volume reduction system at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Gruetzmacher, Kathleen M.; Ferran, Scott G.; Garner, Scott E.; Romero, Mike J.; Christensen, Davis V.; Bustos, Roland M.

    2003-07-01

    The Decontamination and Volume Reduction System (DVRS) operated by the Solid Waste Operations (SWO) Group at Los Alamos National Laboratory (LANL) processes large volume, legacy radioactive waste items. Waste boxes, in sizes varying from 4 ft x 4 ft x 8 ft to 10 ft x 12 ft x 40 ft, are assayed prior to entry into the processing building. Inside the building, the waste items are removed from their container, decontaminated and/or size reduced if necessary, and repackaged for shipment to the Waste Isolation Pilot Plant (WIPP) or on-site low-level waste disposal. The repackaged items and any secondary waste produced (e.g., personal protective equipment) are assayed again at the end of the process and a mass balance is done to determine whether there is any significant hold-up material left in the DVRS building. The DVRS building is currently classed as a radiological facility, with a building limit of 0.52 Ci of Pu239 and Am241, and 0.62 Ci of Pu238, the most common radionuclides processed. This requires tight controls on the flow of nuclear material. The large volume of the initial waste packages, the (relatively) small amounts of radioactive material in them, and the tight ceiling on the building inventory require accurate field measurements of the nuclear material. This paper describes the radioactive waste measurement techniques, the computer modeling used to determine the amount of nuclear material present in a waste package, the building inventory database, and the DVRS process itself. Future plans include raising the limit on the nuclear material inventory allowed in the building to accommodate higher activity waste packages. All DOE sites performing decontamination and decommissioning of radioactive process equipment face challenges related to waste assay and inventory issues. This paper describes an ongoing operation, incorporating lessons learned over the life of the project to date.
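
    The building mass balance described above amounts to comparing the activity assayed into the facility with the activity assayed out in repackaged items and secondary waste, attributing any residual difference to hold-up, and checking new receipts against the building limit. The sketch below is a schematic of that bookkeeping with invented numbers; it is not the DVRS inventory database logic.

```python
def holdup_estimate(incoming_ci, outgoing_ci):
    """Hold-up = activity assayed in, minus activity assayed out (repackaged + secondary)."""
    return sum(incoming_ci) - sum(outgoing_ci)

def within_limit(current_inventory_ci, candidate_item_ci, building_limit_ci):
    """Check whether accepting another item keeps the building under its activity limit."""
    return current_inventory_ci + candidate_item_ci <= building_limit_ci

incoming = [0.110, 0.042]            # Ci, assays of waste boxes before entry (illustrative)
outgoing = [0.087, 0.040, 0.018]     # Ci, repackaged items plus secondary waste (illustrative)
print("apparent hold-up (Ci):", round(holdup_estimate(incoming, outgoing), 3))
print("next 0.05 Ci item OK?", within_limit(0.10, 0.05, building_limit_ci=0.52))
```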

  13. Quarterly Report for LANL Activities: FY12-Q2 National Risk Assessment Partnership (NRAP): Industrial Carbon Capture Program

    Energy Technology Data Exchange (ETDEWEB)

    Pawar, Rajesh J. [Los Alamos National Laboratory

    2012-04-17

    This report summarizes progress of LANL activities related to the tasks performed under the LANL FWP FE102-002-FY10, National Risk Assessment Partnership (NRAP): Industrial Carbon Capture Program. This FWP is funded through the American Recovery and Reinvestment Act (ARRA). Overall, the NRAP activities are focused on understanding and evaluating risks associated with large-scale injection and long-term storage of CO{sub 2} in deep geological formations. One of the primary risks during large-scale injection is due to changes in geomechanical stresses to the storage reservoir, to the caprock/seals and to the wellbores. These changes may have the potential to cause CO{sub 2} and brine leakage and geochemical impacts to the groundwater systems. While the importance of these stresses is well recognized, there have been relatively few quantitative studies (laboratory, field or theoretical) of geomechanical processes in sequestration systems. In addition, there are no integrated studies that allow evaluation of risks to groundwater quality in the context of CO{sub 2} injection-induced stresses. The work performed under this project is focused on better understanding these effects. The LANL approach will develop laboratory and computational tools to understand the impact of CO{sub 2}-induced mechanical stress by creating a geomechanical test bed using inputs from laboratory experiments, field data, and conceptual approaches. The Geomechanical Test Bed will be used for conducting sensitivity and scenario analyses of the impacts of CO{sub 2} injection. The specific types of questions will relate to fault stimulation and fracture-inducing stress on the caprock, changes in wellbore leakage due to evolution of stress in the reservoir and caprock, and the potential for induced seismicity. In addition, the Geomechanical Test Bed will be used to investigate the coupling of stress-induced leakage pathways with impacts on groundwater quality. LANL activities are performed under two tasks

  14. LANL's Role in the U.S. Fissile Material Disposition Program

    Energy Technology Data Exchange (ETDEWEB)

    Whitworth, Julia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kay, Virginia [NA-233

    2015-02-18

    The process of Fissile Material Disposition is in part a result of the Advanced Recovery and Integrated Extraction System (ARIES), which supports an agreement between the U.S. and Russia to dispose of excess plutonium from weapons. LANL is one site that aids in the dismantling, storage, and repurposing of the plutonium gathered from dismantled weapons. One use for the repurposed plutonium is as fuel for commercial nuclear reactors, which will provide energy for citizens.

  15. U.S. Department of Energy Report 1998 LANL Radionuclide Air Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Keith W. Jacobson

    1999-07-01

    Presented is the Laboratory-wide certified report regarding radioactive effluents released into the air by Los Alamos National Laboratory (LANL) in 1998. This information is required under the Clean Air Act and is being reported to the US Environmental Protection Agency (EPA). The highest effective dose equivalent (EDE) to an off-site member of the public was calculated using procedures specified by the EPA and described in this report. For 1998, the dose was 1.72 mrem. Airborne effluents from a 1 mA, 800 MeV proton accelerator contributed about 80% of the EDE; the majority of the total dose contribution was via the air immersion pathway.

  16. Parallel File System I/O Performance Testing On LANL Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Wiens, Isaac Christian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). High Performance Computing Division. Programming and Runtime Environments; Green, Jennifer Kathleen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). High Performance Computing Division. Programming and Runtime Environments

    2016-08-18

    These are slides from a presentation on parallel file system I/O performance testing on LANL clusters. I/O is a known bottleneck for HPC applications. Performance optimization of I/O is often required. This summer project entailed integrating IOR under Pavilion and automating the results analysis. The slides cover the following topics: scope of the work, tools utilized, IOR-Pavilion test workflow, build script, IOR parameters, how parameters are passed to IOR, *run_ior: functionality, Python IOR-Output Parser, Splunk data format, Splunk dashboard and features, and future work.
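
    As a rough illustration of the results-analysis step, the sketch below pulls the peak bandwidth lines out of an IOR text log with a regular expression, assuming the common "Max Write: ... MiB/sec" / "Max Read: ... MiB/sec" summary format, and emits one JSON record per run; it is not the parser used in this project, and the metadata field is invented.

```python
import json
import re

SUMMARY_RE = re.compile(r"^Max (Write|Read):\s+([\d.]+)\s+MiB/sec", re.MULTILINE)

def parse_ior_summary(log_text):
    """Extract peak write/read bandwidths (MiB/s) from an IOR summary log."""
    return {op.lower(): float(val) for op, val in SUMMARY_RE.findall(log_text)}

# Example log fragment in the assumed format.
example = """\
Max Write: 10432.18 MiB/sec (10938.70 MB/sec)
Max Read:  11890.55 MiB/sec (12467.80 MB/sec)
"""
record = parse_ior_summary(example)
record["cluster"] = "example-cluster"          # invented metadata field
print(json.dumps(record))                      # one JSON event per run, e.g. for Splunk
```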

  17. Analysis of historical delta values for IAEA/LANL NDA training courses

    Energy Technology Data Exchange (ETDEWEB)

    Geist, William [Los Alamos National Laboratory; Santi, Peter [Los Alamos National Laboratory; Swinhoe, Martyn [Los Alamos National Laboratory; Bonner, Elisa [FORMER N-4 STUDENT

    2009-01-01

    The Los Alamos National Laboratory (LANL) supports the International Atomic Energy Agency (IAEA) by providing training for IAEA inspectors in neutron and gamma-ray Nondestructive Assay (NDA) of nuclear material. Since 1980, all new IAEA inspectors attend this two week course at LANL gaining hands-on experience in the application of NDA techniques, procedures and analysis to measure plutonium and uranium nuclear material standards with well known pedigrees. As part of the course the inspectors conduct an inventory verification exercise. This exercise provides inspectors the opportunity to test their abilities in performing verification measurements using the various NDA techniques. For an inspector, the verification of an item is nominally based on whether the measured assay value agrees with the declared value to within three times the historical delta value. The historical delta value represents the average difference between measured and declared values from previous measurements taken on similar material with the same measurement technology. If the measurement falls outside a limit of three times the historical delta value, the declaration is not verified. This paper uses measurement data from five years of IAEA courses to calculate a historical delta for five non-destructive assay methods: Gamma-ray Enrichment, Gamma-ray Plutonium Isotopics, Passive Neutron Coincidence Counting, Active Neutron Coincidence Counting and the Neutron Coincidence Collar. These historical deltas provide information as to the precision and accuracy of these measurement techniques under realistic conditions.
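
    The verification rule described above is straightforward to express in code: compute the historical delta for a technique as the average difference between past measured and declared values, then accept a new declaration only if the measurement agrees within three times that delta. The sketch below uses the absolute difference and invented numbers; it is not the course's analysis spreadsheet.

```python
def historical_delta(measured, declared):
    """Average absolute difference between past measured and declared values."""
    return sum(abs(m - d) for m, d in zip(measured, declared)) / len(measured)

def verified(measured_value, declared_value, delta, k=3.0):
    """An item verifies if the measurement agrees with the declaration within k * delta."""
    return abs(measured_value - declared_value) <= k * delta

# Invented Pu-mass data (g) from earlier exercises with one NDA technique.
past_measured = [101.2, 98.7, 100.5, 99.1, 102.0]
past_declared = [100.0, 100.0, 100.0, 100.0, 100.0]
delta = historical_delta(past_measured, past_declared)
print("historical delta (g):", round(delta, 2))
print("new item verified?", verified(measured_value=96.0, declared_value=100.0, delta=delta))
```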

  18. MyLibrary@LANL: proximity and semi-metric networks for a collaborative and recommender web service

    Energy Technology Data Exchange (ETDEWEB)

    Rocha, L. M. [Indiana Univ., Bloomington, IN (United States). School of Informatics and Cognitive Science Program; Simas, T. [Indiana Univ., Bloomington, IN (United States). School of Informatics and Cognitive Science Program; Rechtsteiner, A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); DiGiacomo, M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Research Library; Luce, R. E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States). Research Library

    2005-09-01

    We describe a network approach to building recommendation systems for a WWW service. We employ two different types of weighted graphs in our analysis and development: Proximity graphs, a type of Fuzzy Graphs based on a co-occurrence probability, and semi-metric distance graphs, which do not observe the triangle inequality of Euclidean distances. Both types of graphs are used to develop intelligent recommendation and collaboration systems for the MyLibrary@LANL web service, a user-centered front-end to the Los Alamos National Laboratory's (LANL) digital library collections and WWW resources.
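
    The two graph constructions can be sketched compactly: a proximity edge weight derived from co-occurrence of users on resources, and a semi-metric test that flags pairs whose shortest indirect path is shorter than their direct distance. The toy code below assumes a Jaccard-style proximity and the distance conversion d = 1/p - 1; it illustrates the idea only and is not the MyLibrary@LANL implementation.

```python
from itertools import combinations
import networkx as nx

# Toy data: which users accessed which resources (invented).
users_by_resource = {
    "paper_A": {"u1", "u2", "u3"},
    "paper_B": {"u2", "u3", "u4"},
    "paper_C": {"u3", "u4"},
}

def proximity(a, b):
    """Co-occurrence proximity: shared users over total users (Jaccard-style)."""
    return len(a & b) / len(a | b)

# Build a distance graph from proximities (d = 1/p - 1, so p = 1 gives d = 0).
G = nx.Graph()
for r1, r2 in combinations(users_by_resource, 2):
    p = proximity(users_by_resource[r1], users_by_resource[r2])
    if p > 0:
        G.add_edge(r1, r2, weight=1.0 / p - 1.0)

# An edge is semi-metric if some indirect path is shorter than the direct edge.
for u, v, d in G.edges(data="weight"):
    G_minus = G.copy()
    G_minus.remove_edge(u, v)
    try:
        indirect = nx.shortest_path_length(G_minus, u, v, weight="weight")
    except nx.NetworkXNoPath:
        continue
    if indirect < d:
        print(f"semi-metric pair: {u} -- {v} (direct {d:.2f} > indirect {indirect:.2f})")
```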

  19. LANL Experience Rolling Zr-Clad LEU-10Mo Foils for AFIP-7

    Energy Technology Data Exchange (ETDEWEB)

    Hammon, Duncan L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Clarke, Kester D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Alexander, David J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kennedy, Patrick K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Edwards, Randall L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Duffield, Andrew N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dombrowski, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-29

    The cleaning, canning, rolling and final trimming of Low Enriched Uranium-10 wt. pct. Molybdenum (LEU-10Mo) foils for ATR (Advanced Test Reactor) fuel plates to be used in the AFIP-7 (ATR Full Size Plate In Center Flux Trap Position) experiments are summarized. Six Zr-clad foils were produced from two LEU-10Mo castings supplied to Los Alamos National Laboratory (LANL) by Y-12 National Security Complex. Details of cleaning and canning procedures are provided. Hot- and cold-rolling results are presented, including rolling schedules, images of foils in-process, metallography and local compositions of regions of interest, and details of final foil dimensions and process yield. This report was compiled from the slides for the presentation of the same name given by Duncan Hammon on May 12, 2011 at the AFIP-7 Lessons Learned meeting in Salt Lake City, UT, with Los Alamos National Laboratory document number LA-UR 11-02898.

  20. U.S. Department of Energy Report 1997 LANL Radionuclide Air Emissions

    Energy Technology Data Exchange (ETDEWEB)

    Jacobson, K.W.

    1998-09-01

    Presented is the Laboratory-wide certified report regarding radioactive effluents released into the air by the Los Alamos National Laboratory (LANL) in 1997. This information is required under the Clean Air Act and is being reported to the U.S. Environmental Protection Agency (EPA). The highest effective dose equivalent (EDE) to an offsite member of the public was calculated using procedures specified by the EPA and described in this report. For 1997, the dose was 3.51 mrem. Airborne effluents from a 1 mA, 800 MeV proton accelerator contributed over 90% of the EDE; more than 86% of the total dose contribution was through the air immersion pathway.

  1. In-Situ Magnetic Gauging Technique Used at LANL -- Method and Shock Information Obtained

    Science.gov (United States)

    Sheffield, Stephen A.

    1999-06-01

    Measuring techniques, including magnetic gauges, quartz gauges, manganin gauges, PVDF gauges, velocity interferometry (VISAR, Fabry-Perot, ORVIS, etc.), piezoelectric pins, shorting pins, flash gaps, etc., have been used over the years to measure shock properties and wave evolution in condensed phase materials. In general, each of these techniques has its own strengths and weaknesses. The use of a particular technique depends on the measured parameter and the sample material properties. This paper will concentrate on in-situ magnetic gauging, which is particularly useful in high explosive shock initiation experiments. A short history of this technique will be given but the main discussion will concentrate on the multiple magnetic gauge technique developed at Los Alamos National Lab. (LANL). Vorthman and Wackerle (Vorthman, J.E., "Facilities for the Study of Shock Induced Decomposition in High Explosive," in Shock Waves in Condensed Matter -- 1981, Eds. W. J. Nellis, L. Seaman, and R.A. Graham, AIP Conference Proceedings No. 78 (1982) p. 680.) started developing the technique in 1980, concentrating on particle velocity and "impulse" gauges so that Lagrange analysis could be used to map the entire reactive field. Over the years, changes to the gauge design, fabrication, and experimental focus have led to the present LANL capability. During the past two years measurements have tracked the reactive wave evolution resulting from a shock-to-detonation transition in several high explosive materials. The data from a single experiment provides: 1) particle velocity wave profiles from ten to twelve depths in the sample, 2) shock front tracking, 3) an unreacted Hugoniot point (in which both the shock velocity and particle velocity are measured), 4) a "Pop-plot" or distance-(time-)to-detonation point, and 5) a 3% measurement of the detonation velocity. Details of the experimental setup and information from several experiments will be discussed.
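
    Item 3 in that list, the unreacted Hugoniot point, follows directly from the Rankine-Hugoniot jump conditions once both the shock velocity (Us) and particle velocity (up) are measured. The sketch below evaluates the standard momentum and mass conservation relations for an illustrative, made-up measurement; it is not tied to any particular LANL data set.

```python
def hugoniot_point(rho0_g_cc, us_km_s, up_km_s):
    """Unreacted Hugoniot state from measured shock (Us) and particle (up) velocities.

    Momentum jump:  P = rho0 * Us * up   (initial pressure neglected)
    Mass jump:      rho = rho0 * Us / (Us - up)
    With rho0 in g/cc and velocities in km/s, P comes out in GPa.
    """
    pressure_gpa = rho0_g_cc * us_km_s * up_km_s
    density_g_cc = rho0_g_cc * us_km_s / (us_km_s - up_km_s)
    return pressure_gpa, density_g_cc

# Illustrative values only (typical order of magnitude for an HE shock-initiation shot).
P, rho = hugoniot_point(rho0_g_cc=1.84, us_km_s=4.2, up_km_s=1.0)
print(f"P = {P:.2f} GPa, shocked density = {rho:.3f} g/cc")
```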

  2. Simultaneous Thermal Analysis of WIPP and LANL Waste Drum Samples: A Preliminary Report

    Energy Technology Data Exchange (ETDEWEB)

    Wayne, David M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-10-19

    On Friday, February 14, 2014, an incident in P7R7 of the WIPP underground repository released radioactive material into the environment. The direct cause of the event was a breached transuranic (TRU) waste container, subsequently identified as Drum 68660. Photographic and other evidence indicates that the breach of 68660 was caused by an exothermic event. Subsequent investigations (Britt, 2015; Clark and Funk, 2015; Wilson et al., 2015; Clark, 2015) indicate that the combination of nitrate salts, pH neutralizing chemicals, and organic-based adsorbent represented a potentially energetic mixture. The materials inside the breached steel drum consisted of remediated, 30- to 40-year old, Pu processing wastes from LANL. The contents were processed and repackaged in 2014. Processing activities at LANL included: 1) neutralization of acidic liquid contents, 2) sorption of the neutralized liquid, and 3) mixing of acidic nitrate salts with an absorber to meet waste acceptance criteria. The contents of 68660 and its sibling, 68685, were derived from the same parent drum, S855793. Drum S855793 originally contained ten plastic bags of acidic nitrate salts, and four bags of mixed nitrate and oxalate salts generated in 1985 by Pu recovery operations. These salts were predominantly oxalic acid, hydrated nitrate salts of Mg, Ca, and Fe, anhydrous Na(NO3), and minor amounts of anhydrous and hydrous nitrate salts of Pb, Al, K, Cr, and Ni. Other major components include sorbed water, nitric acid, dissolved nitrates, an absorbent (Swheat Scoop®) and a neutralizer (KolorSafe®). The contents of 68660 are described in greater detail in Appendix E of Wilson et al. (2015)

  3. Measurements with the high flux lead slowing-down spectrometer at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Danon, Y. [Rensselaer Polytechnic Intstitute, 110 8th Street, Troy, NY 12180 (United States)]. E-mail: danony@rpi.edu; Romano, C. [Rensselaer Polytechnic Intstitute, 110 8th Street, Troy, NY 12180 (United States); Thompson, J. [Rensselaer Polytechnic Intstitute, 110 8th Street, Troy, NY 12180 (United States); Watson, T. [Rensselaer Polytechnic Intstitute, 110 8th Street, Troy, NY 12180 (United States); Haight, R.C. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Wender, S.A. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Vieira, D.J. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Bond, E. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Wilhelmy, J.B. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); O' Donnell, J.M. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Michaudon, A. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Bredeweg, T.A. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Schurman, T. [Los Alamos National Laboratory, Los Alamos, NM 87545 (United States); Rochman, D. [Brookhaven National Laboratory National Nuclear Data Center (NNDC), Upton, NY 11973-5000 (United States); Granier, T. [CEA-DAM, BP 12, 91680 Bruyeres-le-Chatel (France); Ethvignot, T. [CEA-DAM, BP 12, 91680 Bruyeres-le-Chatel (France); Taieb, J. [CEA-DAM, BP 12, 91680 Bruyeres-le-Chatel (France); Becker, J.A. [Lawrence Livermore National Laboratory, Livermore, CA 94550 (United States)

    2007-08-15

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS is comprised of a cube of pure lead 1.2 m on the side, with a spallation pulsed neutron source in its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 {mu}A, pulse widths of 0.05-0.25 {mu}s and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead and thus enable measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates to neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 x 10{sup 9} n/cm{sup 2}/s/{mu}A. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235, however it can also be used to measure (n, {alpha}) and (n, p) reactions. Fission cross section measurements were made with samples of {sup 235}U, {sup 236}U, {sup 238}U and {sup 239}Pu. The smallest sample measured was 10 ng of {sup 239}Pu. Measurement of (n, {alpha}) cross section with 760 ng of Li-6 was also
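
    The statement that slowing-down time correlates with neutron energy is commonly written as E ≈ K/(t + t0)^2, with K and t0 fixed by calibrating the spectrometer against known resonances. The sketch below evaluates that relation with illustrative constants of roughly the right magnitude for lead; the actual LANL calibration values are not given in the abstract.

```python
def lsds_energy_kev(t_us, k_kev_us2=165.0, t0_us=0.3):
    """Approximate neutron energy (keV) vs. slowing-down time (microseconds) in an LSDS.

    E ~ K / (t + t0)^2; K and t0 here are illustrative calibration constants only.
    """
    return k_kev_us2 / (t_us + t0_us) ** 2

for t in (1.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f} us  ->  E ~ {lsds_energy_kev(t):10.4f} keV")
```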

  4. Measurements with the high flux lead slowing-down spectrometer at LANL

    Science.gov (United States)

    Danon, Y.; Romano, C.; Thompson, J.; Watson, T.; Haight, R. C.; Wender, S. A.; Vieira, D. J.; Bond, E.; Wilhelmy, J. B.; O'Donnell, J. M.; Michaudon, A.; Bredeweg, T. A.; Schurman, T.; Rochman, D.; Granier, T.; Ethvignot, T.; Taieb, J.; Becker, J. A.

    2007-08-01

    A Lead Slowing-Down Spectrometer (LSDS) was recently installed at LANL [D. Rochman, R.C. Haight, J.M. O'Donnell, A. Michaudon, S.A. Wender, D.J. Vieira, E.M. Bond, T.A. Bredeweg, A. Kronenberg, J.B. Wilhelmy, T. Ethvignot, T. Granier, M. Petit, Y. Danon, Characteristics of a lead slowing-down spectrometer coupled to the LANSCE accelerator, Nucl. Instr. and Meth. A 550 (2005) 397]. The LSDS is comprised of a cube of pure lead 1.2 m on the side, with a spallation pulsed neutron source in its center. The LSDS is driven by 800 MeV protons with a time-averaged current of up to 1 μA, pulse widths of 0.05-0.25 μs and a repetition rate of 20-40 Hz. Spallation neutrons are created by directing the proton beam into an air-cooled tungsten target in the center of the lead cube. The neutrons slow down by scattering interactions with the lead and thus enable measurements of neutron-induced reaction rates as a function of the slowing-down time, which correlates to neutron energy. The advantage of an LSDS as a neutron spectrometer is that the neutron flux is 3-4 orders of magnitude higher than a standard time-of-flight experiment at the equivalent flight path, 5.6 m. The effective energy range is 0.1 eV to 100 keV with a typical energy resolution of 30% from 1 eV to 10 keV. The average neutron flux between 1 and 10 keV is about 1.7 × 109 n/cm2/s/μA. This high flux makes the LSDS an important tool for neutron-induced cross section measurements of ultra-small samples (nanograms) or of samples with very low cross sections. The LSDS at LANL was initially built in order to measure the fission cross section of the short-lived metastable isotope of U-235, however it can also be used to measure (n, α) and (n, p) reactions. Fission cross section measurements were made with samples of 235U, 236U, 238U and 239Pu. The smallest sample measured was 10 ng of 239Pu. Measurement of (n, α) cross section with 760 ng of Li-6 was also demonstrated. Possible future cross section measurements

  5. Advanced laser particle accelerator development at LANL: from fast ignition to radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Flippo, Kirk A [Los Alamos National Laboratory; Gaillard, Sandrine A [Los Alamos National Laboratory; Offermann, D T [Los Alamos National Laboratory; Cobble, J A [Los Alamos National Laboratory; Schmitt, M J [Los Alamos National Laboratory; Gautier, D C [Los Alamos National Laboratory; Kwan, T J T [Los Alamos National Laboratory; Montgomery, D S [Los Alamos National Laboratory; Kluge, Thomas [FZD-GERMANY; Bussmann, Micheal [FZD-GERMANY; Bartal, T [UCSD; Beg, F N [UCSD; Gall, B [UNIV OF MISSOURI; Geissel, M [SNL; Korgan, G [NANOLABZ; Kovaleski, S [UNIV OF MISSOURI; Lockard, T [UNIV OF NEVADA; Malekos, S [NANOLABZ; Schollmeier, M [SNL; Sentoku, Y [UNIV OF NEVADA; Cowan, T E [FZD-GERMANY

    2010-01-01

    Laser-plasma accelerated ion and electron beam sources are an emerging field with vast prospects, and promise many superior applications in a variety of fields such as hadron cancer therapy, compact radioisotope generation, table-top nuclear physics, laboratory astrophysics, nuclear forensics, waste transmutation, SNM detection, and inertial fusion energy. LANL is engaged in several projects seeking to develop compact high-current and high-energy ion and electron sources. We are especially interested in two specific applications: ion fast ignition/capsule perturbation and radiation oncology in conjunction with our partners at the ForschungsZentrum Dresden-Rossendorf (FZD). Laser-to-beam conversion efficiencies of over 10% are needed for practical applications, and we have already shown inherent efficiencies of >5% from flat foils, on Trident using only a fifth of the intensity and energy of the Nova Petawatt. With clever target designs, like structured curved cone targets, we have also been able to achieve major ion energy gains, leading to the highest energy laser-accelerated proton beams in the world. These new target designs promise to help usher in the next generation of particle sources realizing the potential of laser-accelerated beams.

  6. LANL Multiyear Strategy Performance Improvement (MYSPI), Fiscal Years 2017–2021

    Energy Technology Data Exchange (ETDEWEB)

    Leasure, Craig Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-03

    Los Alamos National Laboratory (LANL) protects the nation and the world using innovative science, technology, and engineering through an integrated approach that harnesses the strength of our people, capabilities, and operations. The Laboratory’s Strategic Plan and Purpose statement provide the framework for scientific excellence and operational excellence now and in the future. Our Strategic Plan and Purpose help position Los Alamos for continuing mission success that ensures the safety, security, and effectiveness of the nation’s deterrent; protects the nation from nuclear and emerging threats through our larger global security missions; provides energy security to the nation; and ensures that the nation’s scientific reputation and capabilities remain robust enough to assure our allies and deter our adversaries. Moreover, we use these principles and guidance to ensure that Los Alamos is successful in attracting, recruiting, and retaining the next generation of world-class talent, while creating an efficient, environmentally responsible workplace that provides our employees with access to modern scientific tools and resources. Using this guidance and its underlying principles, we are continuing to restore credibility and operational effectiveness to the Laboratory, deliver mission success and continuing scientific excellence, and protect our employees and the nation’s secrets.

  7. A gas-loading system for LANL two-stage gas guns

    Energy Technology Data Exchange (ETDEWEB)

    Gibson, Lloyd Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bartram, Brian Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dattelbaum, Dana Mcgraw [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lang, John Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Morris, John Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.
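
    The sensitivity of the sealed cell to ambient-temperature drift is just the isochoric ideal-gas relation P2 = P1 (T2/T1). The sketch below quantifies it for an illustrative fill condition; the fill pressure and temperatures are made-up numbers, not data from an actual shot.

```python
def isochoric_pressure(p1_kpa, t1_k, t2_k):
    """Pressure of a fixed amount of gas in a fixed volume after a temperature change."""
    return p1_kpa * (t2_k / t1_k)

fill_pressure_kpa = 300.0     # illustrative fill pressure
fill_temp_k = 295.0
for drift in (-2.0, +2.0):    # a couple of degrees of room-temperature drift
    p2 = isochoric_pressure(fill_pressure_kpa, fill_temp_k, fill_temp_k + drift)
    print(f"dT = {drift:+.1f} K  ->  P = {p2:.2f} kPa (change {p2 - fill_pressure_kpa:+.2f} kPa)")
```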

  8. The LANL/LLNL Prompt Fission Neutron Spectrum Program at LANSCE and Approach to Uncertainties

    Energy Technology Data Exchange (ETDEWEB)

    Haight, R.C., E-mail: haight@lanl.gov [Los Alamos National Laboratory, Los Alamos, NM 87545,USA (United States); Wu, C.Y. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States); Lee, H.Y.; Taddeucci, T.N.; Perdue, B.A.; O' Donnell, J.M.; Fotiades, N.; Devlin, M.; Ullmann, J.L.; Bredeweg, T.A.; Jandel, M.; Nelson, R.O.; Wender, S.A.; Neudecker, D.; Rising, M.E.; Mosby, S.; Sjue, S.; White, M.C. [Los Alamos National Laboratory, Los Alamos, NM 87545,USA (United States); Bucher, B.; Henderson, R. [Lawrence Livermore National Laboratory, Livermore, CA 94551 (United States)

    2015-01-15

    New data on the prompt fission neutron spectra (PFNS) from neutron-induced fission with higher accuracies are needed to resolve discrepancies in the literature and to address gaps in the experimental data. The Chi-Nu project, conducted jointly by LANL and LLNL, aims to measure the shape of the PFNS for fission of {sup 239}Pu induced by neutrons from 0.5 to 20 MeV with accuracies of 3–5% in the outgoing energy from 0.1 to 9 MeV and 15% from 9 to 12 MeV and to provide detailed experimental uncertainties. Neutrons from the WNR/LANSCE neutron source are being used to induce fission in a Parallel-Plate Avalanche Counter (PPAC). Two arrays of neutron detectors are used to cover the energy range of neutrons emitted promptly in the fission process. Challenges for the present experiment include background reduction, use of {sup 239}Pu in a PPAC, and understanding neutron detector response. Achieving the target accuracies requires the understanding of many systematic uncertainties. The status and plans for the future will be presented.

  9. Advanced Laser Particle Accelerator Development at LANL: From Fast Ignition to Radiation Oncology

    Science.gov (United States)

    Flippo, K. A.; Gaillard, S. A.; Kluge, T.; Bussmann, M.; Offermann, D. T.; Cobble, J. A.; Schmitt, M. J.; Bartal, T.; Beg, F. N.; Cowan, T. E.; Gall, B.; Gautier, D. C.; Geissel, M.; Kwan, T. J.; Korgan, G.; Kovaleski, S.; Lockard, T.; Malekos, S.; Montgomery, D. S.; Schollmeier, M.; Sentoku, Y.

    2010-11-01

    Laser-plasma accelerated ion and electron beam sources are an emerging field with vast prospects, and promise many superior applications in a variety of fields such as hadron cancer therapy, compact radioisotope generation, table-top nuclear physics, laboratory astrophysics, nuclear forensics, waste transmutation, Special Nuclear Material (SNM) detection, and inertial fusion energy. LANL is engaged in several projects seeking to develop compact high-current and high-energy ion and electron sources. We are especially interested in two specific applications: ion fast ignition/capsule perturbation and radiation oncology. Laser-to-beam conversion efficiencies of over 10% are needed for practical applications, and we have already shown inherent efficiencies of >5% from flat foils, on Trident using only a 5th of the intensity [1] and energy of the Nova Petawatt laser [2]. With clever target designs, like structured curved cone targets, we have also been able to achieve major ion energy gains, leading to the highest energy laser-accelerated proton beams in the world [3]. These new target designs promise to help usher in the next generation of particle sources realizing the potential of laser-accelerated beams.

  10. A gas-loading system for LANL two-stage gas guns

    Science.gov (United States)

    Gibson, L. L.; Bartram, B. D.; Dattelbaum, D. M.; Lang, J. M.; Morris, J. S.

    2017-01-01

    A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.

  11. Remote liquid target loading system for LANL two-stage gas gun

    Science.gov (United States)

    Gibson, L. L.; Bartram, B.; Dattelbaum, D. M.; Sheffield, S. A.; Stahl, D. B.

    2009-06-01

    A Remote Liquid Loading System (RLLS) was designed to load high hazard liquid materials into targets for gas-gun driven impact experiments. These high hazard liquids tend to react with confining materials in a short period of time, degrading target assemblies and potentially building up pressure through the evolution of gas in the reactions. Therefore, the ability to load a gas gun target in place immediately prior to firing the gun, provides the most stable and reliable target fielding approach. We present the design and evaluation of a RLLS built for the LANL two-stage gas gun. Targets for the gun are made of PMMA and assembled to form a liquid containment cell with a volume of approximately 25 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with highly concentrated hydrogen peroxide. Teflon and 304-stainless steel were the two most compatible materials with the materials to be tested. Teflon valves and tubing, as well as stainless steel tubing, were used to handle the liquid, along with a stainless steel reservoir. Preliminary testing was done to ensure proper flow rate and safety. The system has been used to successfully load 97.5 percent hydrogen peroxide into a target cell just prior to a successful multiple magnetic gauge experiment. TV cameras on the target verified the bubble-free filling operation.

  12. LANL Multiyear Strategy Performance Improvement (MYSPI), Fiscal Years 2018-2022

    Energy Technology Data Exchange (ETDEWEB)

    Leasure, Craig Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-05-08

    Los Alamos National Laboratory (LANL) protects the nation and the world using innovative science, technology, and engineering through an integrated approach that harnesses the strength of our people, capabilities, and operations. The Laboratory’s Strategic Plan and Purpose statement provide the framework for scientific excellence and operational excellence now and in the future. Our Strategic Plan and Purpose help position Los Alamos for continuing mission success that ensures the safety, security, and effectiveness of the nation’s deterrent; protects the nation from nuclear and emerging threats through our larger global security missions; provides energy security to the nation; and ensures that the nation’s scientific reputation and capabilities remain robust enough to assure our allies and deter our adversaries. Moreover, we use these principles and guidance to ensure that Los Alamos is successful in attracting, recruiting, and retaining the next generation of excellent talent, while creating an efficient, environmentally responsible workplace that provides our employees with access to modern scientific tools and resources. Using this guidance and its underlying principles, we are continuing to restore credibility and operational effectiveness to the Laboratory, deliver mission success and continuing scientific excellence, and protect our employees and the nation’s secrets.

  13. Status of LANL investigations of temperature constraints on clay in repository environments

    Energy Technology Data Exchange (ETDEWEB)

    Caporuscio, Florie A [Los Alamos National Laboratory; Cheshire, Michael C [Los Alamos National Laboratory; Newell, Dennis L [Los Alamos National Laboratory; McCarney, Mary Kate [Los Alamos National Laboratory

    2012-08-22

    The Used Fuel Disposition (UFD) Campaign is presently evaluating various generic options for disposal of used fuel. The focus of this experimental work is to characterize and bound Engineered Barrier Systems (EBS) conditions in high heat load repositories. The UFD now has the ability to evaluate multiple EBS materials, waste containers, and rock types at higher heat loads and pressures (including deep boreholes). The geologic conditions now available to the U.S.A. and the international community for repositories include saturated and reduced water conditions, along with higher pressure and temperature (P, T) regimes. Chemical and structural changes to the clays, in either backfill/buffer or clay-rich host rock, may have significant effects on repository evolution. Reduction of smectite expansion capacity and rehydration potential due to heating could affect the isolation provided by EBS. Processes such as cementation by silica precipitation and authigenic illite could change the hydraulic and mechanical properties of clay-rich materials. Experimental studies of these repository conditions at high P,T have not been performed in the U.S. for decades and little has been done by the international community at high P,T. The experiments to be performed by LANL will focus on the importance of repository chemical and mineralogical conditions at elevated P,T conditions. This will provide input to the assessment of scientific basis for elevating the temperature limits in clay barriers.

  14. SCATHA/SC3 Data Processing and Cross-Calibration with LANL-GEO/CPA for AE9 Development

    Science.gov (United States)

    2014-02-18

    AFRL-RV-PS-TR-2014-0015, SCATHA/SC3 Data Processing and Cross-Calibration with LANL-GEO/CPA for AE9 Development. Only the report's front matter (cover and Report Documentation Page boilerplate) was captured for this record; no abstract text is available.

  15. Hydrogeologic analyses in support of the conceptual model for the LANL Area G LLRW performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Birdsell, K.; Rogers, D.; Springer, E.; Krier, D.; Turin, H.J.

    1996-04-01

    The Los Alamos National Laboratory low level radioactive waste disposal facility at Area G is currently completing a draft of the site Performance Assessment. Results from previous field studies have estimated a range in recharge rate up to 1 cm/yr. Recent estimates of unsaturated hydraulic conductivity for each stratigraphic layer under a unit gradient assumption show a wide range in recharge rate of 10{sup {minus}4} to 1 cm/yr depending upon location. Numerical computations show that a single net infiltration rate at the mesa surface does not match the moisture profile in each stratigraphic layer simultaneously, suggesting local source or sink terms possibly due to surface connected porous regions. The best fit to field data at deeper stratigraphic layers occurs for a net infiltration of about 0.1 cm/yr. A recent detailed analysis evaluated liquid phase vertical moisture flux, based on moisture profiles in several boreholes and van Genuchten fits to the hydraulic properties for each of the stratigraphic units. Results show a near surface infiltration region averages 8m deep, below which is a dry, low moisture content, and low flux region, where liquid phase recharge averages to zero. Analysis shows this low flux region is dominated by vapor movement. Field data from tritium diffusion studies, from pressure fluctuation attenuation studies, and from comparisons of in-situ and core sample permeabilities indicate that the vapor diffusion is enhanced above that expected in the matrix and is presumably due to enhanced flow through the fractures. Below this dry region within the mesa is a moisture spike which analyses show corresponds to a moisture source. The likely physical explanation is seasonal transient infiltration through surface-connected fractures. This anomalous region is being investigated in current field studies, because it is critical in understanding the moisture flux which continues to deeper regions through the unsaturated zone.
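
    The vertical-flux analysis mentioned above rests on Darcy's law with a unit hydraulic gradient, so the downward liquid flux is simply q ≈ K(θ), with K(θ) taken from a van Genuchten-Mualem fit to each unit's hydraulic properties. The sketch below evaluates that standard relation for illustrative parameters; the fitted values for the Area G stratigraphic units are not reproduced here.

```python
def vg_mualem_conductivity(theta, theta_r, theta_s, n, k_sat_cm_yr):
    """Unsaturated hydraulic conductivity from the van Genuchten-Mualem model."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)        # effective saturation
    se = min(max(se, 1e-9), 1.0)
    return k_sat_cm_yr * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Under a unit-gradient assumption the downward liquid flux is simply q = K(theta).
# Illustrative parameters for a dry tuff-like unit (not the fitted Area G values).
for theta in (0.05, 0.10, 0.20):
    q = vg_mualem_conductivity(theta, theta_r=0.02, theta_s=0.45, n=1.6, k_sat_cm_yr=300.0)
    print(f"theta = {theta:.2f}  ->  q ~ {q:.2e} cm/yr")
```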

  16. LANL MOX fuel lead assemblies data report for the surplus plutonium disposition environmental impact statement

    Energy Technology Data Exchange (ETDEWEB)

    Fisher, S.E.; Holdaway, R.; Ludwig, S.B. [and others

    1998-08-01

    The purpose of this document is to support the US Department of Energy (DOE) Fissile Materials Disposition Program's preparation of the draft surplus plutonium disposition environmental impact statement. This is one of several responses to data call requests for background information on activities associated with the operation of the lead assembly (LA) mixed-oxide (MOX) fuel fabrication facility. LANL has proposed an LA MOX fuel fabrication approach that would be done entirely inside an S and S Category 1 area. This includes receipt and storage of PuO{sub 2} powder, fabrication of MOX fuel pellets, assembly of fuel rods and bundles, and shipping of the packaged fuel to a commercial reactor site. Support activities will take place within both Category 1 and 2 areas. Technical Area (TA) 55/Plutonium Facility 4 will be used to store the bulk PuO{sub 2} powder, fabricate MOX fuel pellets, assemble rods, and store fuel bundles. Bundles will be assembled at a separate facility, several of which have been identified as suitable for that activity. The Chemistry and Metallurgy Research Building (at TA-3) will be used for analytical chemistry support. Waste operations will be conducted in TA-50 and TA-54. Only very minor modifications will be needed to accommodate the LA program. These modifications consist mostly of minor equipment upgrades. A commercial reactor operator has not been identified for the LA irradiation. Postirradiation examination (PIE) of the irradiated fuel will take place at either Oak Ridge National Laboratory or ANL-W. The only modifications required at either PIE site would be to accommodate full-length irradiated fuel rods. Results from this program are critical to the overall plutonium disposition schedule.

  17. A survey of monitoring and assay systems for release of metals from radiation controlled areas at LANL.

    Energy Technology Data Exchange (ETDEWEB)

    Gruetzmacher, K. M. (Kathleen M.); MacArthur, D. W. (Duncan W.)

    2002-01-01

    At Los Alamos National Laboratory (LANL), a recent effort in waste minimization has focused on scrap metal from radiological controlled areas (RCAs). In particular, scrap metal from RCAs needs to be dispositioned in a reasonable and cost effective manner. Recycling of DOE scrap metals from RCAs is currently under a self-imposed moratorium. Since recycling is not available and reuse is difficult, often metal waste from RCAs, which could otherwise be recycled, is disposed of as low-level waste. Estimates at LANL put the cost of low-level waste disposal at $550 to $4000 per cubic meter, depending on the type of waste and the disposal site. If the waste is mixed, the cost for treatment and disposal can be as high as $50,000 per cubic meter. Disposal of scrap metal as low-level waste uses up valuable space in the low-level waste disposal areas and requires transportation to the disposal site under Department of Transportation (DOT) regulations for low-level waste. In contrast, disposal as non-radioactive waste costs as little as $2 per cubic meter. While recycling is unavailable, disposing of the metal at an industrial waste site could be the best solution for this waste stream. A Green Is Clean (GIC) type verification program needs to be in place to provide the greatest assurance that the waste does not contain DOE added radioactivity. This paper is a review of available and emerging radiation monitoring and assay systems that could be used for scrap metal as part of the LANL GIC program.

  18. R-X Modeling Figures

    Energy Technology Data Exchange (ETDEWEB)

    Goda, Joetta Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Grogan, Brandon [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-10-26

    This document contains figures that will be included in an ORNL final report that details computational efforts to model an irradiation experiment performed on the Godiva IV critical assembly. This experiment was a collaboration between LANL and ORNL.

  19. Sampling and Analysis Plan for Verification Sampling of LANL-Derived Residual Radionuclides in Soils within Tract A-18-2 for Land Conveyance

    Energy Technology Data Exchange (ETDEWEB)

    Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-30

    Public Law 105-119 directs the U.S. Department of Energy (DOE) to convey or transfer parcels of land to the Incorporated County of Los Alamos or their designees and to the Department of Interior, Bureau of Indian Affairs, in trust for the Pueblo de San Ildefonso. Los Alamos National Security is tasked to support DOE in conveyance and/or transfer of identified land parcels no later than September 2022. Under DOE Order 458.1, Radiation Protection of the Public and the Environment (O458.1, 2013) and Los Alamos National Laboratory (LANL or the Laboratory) implementing Policy 412 (P412, 2014), real property with the potential to contain residual radioactive material must meet the criteria for clearance and release to the public. This Sampling and Analysis Plan (SAP) is a second investigation of Tract A-18-2 for the purpose of verifying the previous sampling results (LANL 2017). This sample plan requires 18 project-specific soil samples for use in radiological clearance decisions consistent with LANL Procedure ENV-ES-TP-238 (2015a) and guidance in the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM, 2000). The sampling work will be conducted by LANL, and samples will be evaluated by a LANL-contracted independent lab. However, there will be federal review (verification) of all steps of the sampling process.

  20. SkyDOT (Sky Database for Objects in the Time Domain) A Virtual Observatory for Variability Studies at LANL

    CERN Document Server

    Wozniak, P R; Galassi, M; Priedhorsky, W; Starr, D; Vestrand, W T; White, R; Wren, J

    2002-01-01

    The mining of Virtual Observatories (VOs) is becoming a powerful new method for discovery in astronomy. Here we report on the development of SkyDOT (Sky Database for Objects in the Time domain), a new Virtual Observatory, which is dedicated to the study of sky variability. The site will confederate a number of massive variability surveys and enable exploration of the time domain in astronomy. We discuss the architecture of the database and the functionality of the user interface. An important aspect of SkyDOT is that it is continuously updated in near real time so that users can access new observations in a timely manner. The site will also utilize high level machine learning tools that will allow sophisticated mining of the archive. Another key feature is the real time data stream provided by RAPTOR (RAPid Telescopes for Optical Response), a new sky monitoring experiment under construction at Los Alamos National Laboratory (LANL).

  1. Needs analysis and project schedule for the Los Alamos National Laboratory (LANL) Health Physics Analysis Laboratory (HPAL) upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Rhea, T.A.; Rucker, T.L. [Science Applications International Corp., Oak Ridge, TN (United States); Stafford, M.W. [NUS Corp., Aiken, SC (US)

    1990-09-28

    This report is a needs assessment and project schedule for the Health Physics Analysis Laboratory (HPAL) upgrade project at Los Alamos National Laboratory (LANL). After a review of current and projected HPAL operations, two custom-developed laboratory information management systems (LIMS) for similar facilities were reviewed; four commercially available LIMS products were also evaluated. This project is motivated by new regulations for radiation protection and training and by increased emphasis on quality assurance (QA). HPAL data are used to: protect the health of radiation workers; document contamination levels for transportation of radioactive materials and for release of materials to the public for uncontrolled use; and verify compliance with environmental emission regulations. Phase 1 of the HPAL upgrade project concentrates on four types of counting instruments which support in excess of 90% of the sample workload at the existing central laboratories. Phase 2 is a refinement phase and also integrates summary-level databases on the central Health, Safety, and Environment (HSE) VAX. Phase 3 incorporates additional instrument types and integrates satellite laboratories into the HPAL LIMS. Phase 1 will be a multi-year, multimillion-dollar project. The temptation to approach the upgrade of the HPAL program in a piecemeal fashion should be avoided. This is a major project, with clearly defined goals and priorities, and should be approached as such. Major programmatic and operational impacts will be felt throughout HSE as a result of this upgrade, so effective coordination with key customer contacts will be critical.

  2. Preliminary report of the comparison of multiple non-destructive assay techniques on LANL Plutonium Facility waste drums

    Energy Technology Data Exchange (ETDEWEB)

    Bonner, C.; Schanfein, M.; Estep, R. [and others

    1999-03-01

    Prior to disposal, nuclear waste must be accurately characterized to identify and quantify the radioactive content. The DOE Complex faces the daunting task of measuring nuclear material with both a wide range of masses and matrices. Similarly daunting can be the selection of a non-destructive assay (NDA) technique(s) to efficiently perform the quantitative assay over the entire waste population. In fulfilling its role of a DOE Defense Programs nuclear User Facility/Technology Development Center, the Los Alamos National Laboratory Plutonium Facility recently tested three commercially built and owned, mobile non-destructive assay (NDA) systems with special nuclear materials (SNM). Two independent commercial companies financed the testing of their three mobile NDA systems at the site. Contained within a single trailer are Canberra Industries' segmented gamma scanner/waste assay system (SGS/WAS) and neutron waste drum assay system (WDAS). The third system is a BNFL Instruments Inc. (formerly known as Pajarito Scientific Corporation) differential die-away imaging passive/active neutron (IPAN) counter. In an effort to increase the value of this comparison, additional NDA techniques at LANL were also used to measure these same drums. These comprise three tomographic gamma scanners (one mobile unit and two stationary) and one developmental differential die-away system. Although the drums are not certified standards, the authors hope that such a comparison will provide valuable data both for those considering these different NDA techniques to measure their waste and for the developers of the techniques.

  3. Compilation of presentations: LANL-NRSS-Institute of Physics: radiological source technical cooperation

    Energy Technology Data Exchange (ETDEWEB)

    Streeper, Charles [Los Alamos National Laboratory; Fanning, Michael [Los Alamos National Laboratory; Feldman, Alex [Los Alamos National Laboratory

    2011-01-20

    A workshop was held in Tbilisi, Republic of Georgia, February 7-8, 2011, to discuss and train personnel on various instrumentation provided to the Nuclear Radiation Service and the Institute of Physics by the United States Global Threat Reduction Initiative. The instruments provided have been reviewed and approved by the local customs office. The instruments include: (1) Ludlum 3030E smear counters; (2) Ludlum 2360 ratemeters/scalers; (3) Ludlum model 4310 detectors; (4) Arrow Tech direct-reading dosimeters and chargers; (5) ThermoFisher Scientific Mk2 Electronic Personal Dosimeters (EPD); (6) ThermoFisher Scientific EASYEPD2 configuration software; and (7) associated support equipment, cables, planchets, etc. During the course of the training several PowerPoint briefings will be delivered, covering theory of operation, operation, maintenance, calibration, and configuration of the instruments described above. Several tabletop scenarios will be conducted during the training to reinforce the material presented in the slides.

  4. CMI Remedy Selection for HE- and Barium-Contaminated Vadose Zone and Alluvium at LANL

    Science.gov (United States)

    Hickmott, D.; Reid, K.; Pietz, J.; Ware, D.

    2008-12-01

    A high explosives (HE) machining building outfall at Los Alamos National Laboratory's Technical Area 16 discharged millions of gallons of HE- and barium-contaminated water into the Cañon de Valle watershed. The effluent contaminated surface soils, the alluvial aquifer, vadose zone waters, and deep-perched and regional groundwaters with HE and barium, frequently at levels greater than regulatory standards. Site characterization studies began in 1995 and included extensive monitoring of surface water, groundwater, soils, and subsurface solid media. Hydrogeologic and geophysical studies were conducted to help understand contaminant transport mechanisms and pathways. Results from the characterization studies were used to develop a site conceptual model. In 2000 the principal source area was removed. The ongoing Corrective Measure Study (CMS) and Corrective Measure Implementation (CMI) focus on residual vadose zone contamination and on the contaminated alluvial system. Regulators recently selected a CMI remedy that combined: 1) augmented source removal; 2) grouting of an HE-contaminated surge bed; 3) deployment of Stormwater Management System (SMS) stormfilters in contaminated springs; and 4) permeable reactive barriers (PRBs) in contaminated alluvium. The hydrogeologic conceptual model for the vadose zone and alluvial system, as well as the status of the canyon as habitat for the Mexican Spotted Owl, were key factors in the selection of these minimal-environmental-impact remedies. The heterogeneous vadose zone, characterized by flow and contaminant transport in fractures and in surge beds, requires contaminant treatment at a point of discharge. The canyon PRB is being installed to capture water and contaminants prior to infiltration into the vadose zone. Pilot-scale testing of the SMS and lab-scale batch and column tests of a range of media suggest that granular activated carbon, zeolite, and gypsum may be effective media for removal of HE and/or barium from contaminated water.

  5. Materials at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Antoinette J [Los Alamos National Laboratory

    2010-01-01

    Exploring the physics, chemistry, and metallurgy of materials has been a primary focus of Los Alamos National Laboratory since its inception. In the early 1940s, very little was known or understood about plutonium, uranium, or their alloys. In addition, several new ionic, polymeric, and energetic materials with unique properties were needed in the development of nuclear weapons. As the Laboratory has evolved, and as missions in threat reduction, defense, energy, and meeting other emerging national challenges have been added, the role of materials science has expanded with the need for continued improvement in our understanding of the structure and properties of materials and in our ability to synthesize and process materials with unique characteristics. Materials science and engineering continues to be central to this Laboratory's success, and the materials capability truly spans the entire Laboratory, touching upon numerous divisions and directorates and estimated to include more than one third of the Lab's technical staff. In 2006, Los Alamos and LANS LLC began to redefine our future, building upon the Laboratory's established strengths and promoted by strongly interdependent science, technology, and engineering capabilities. Eight Grand Challenges for Science were set forth as a technical framework for bridging across capabilities. Two of these grand challenges, Fundamental Understanding of Materials and Superconductivity and Actinide Science, were clearly materials-centric and were led out of our organizations. The complexity of these scientific thrusts was fleshed out through workshops involving cross-disciplinary teams. These teams refined the grand challenge concepts into actionable descriptions to be used as guidance for decisions such as our LDRD strategic investment strategies and as the organizing basis for our external review process. In 2008, the Laboratory published 'Building the Future of Los Alamos: The Premier National Security Science Laboratory,' LA-UR-08-1541. This document introduced three strategic thrusts that crosscut the Grand Challenges and define future laboratory directions and facilities: (1) Information Science and Technology enabling integrative and predictive science; (2) Experimental science focused on materials for the future; and (3) Fundamental forensic science for nuclear, biological, and chemical threats. The next step for the Materials Capability was to develop a strategic plan for the second thrust, Materials for the Future, within the context of a capabilities-based Laboratory. This work has involved extending our 2006-2007 Grand Challenge workshops, integrating materials fundamental challenges into the MaRIE definition, and capitalizing on the emerging materials-centric national security missions. Strategic planning workshops with broad leadership and staff participation continued to hone our scientific directions and reinforce our strength through interdependence. By the fall of 2008, these workshops had identified our primary strength as the delivery of Predictive Performance in applications where Extreme Environments dominate and where the discovery of Emergent Phenomena is critical. These planning efforts were put into action through the development of our FY10 LDRD Strategic Investment Plan, in which the Materials Category was defined to incorporate three central thrusts: Prediction and Control of Performance, Extreme Environments, and Emergent Phenomena.
As with all strategic planning, much of the benefit is in the dialogue and cross-fertilization of ideas that occurs during the process. By winter of 2008/09, there was much agreement on the evolving focus for the Materials Strategy, but there was some lingering doubt over Prediction and Control of Performance as one of the three central thrusts, because it overarches all we do and is, truly, the end goal for materials science and engineering. Therefore, we elevated this thrust within the overarching vision/mission and introduced the concept of Defects and Interfaces as a central thrust that had previously been implied but not clearly articulated. The Materials Strategy was then developed over the course of the next six months and is articulated in the next section.

  7. LANL HED Programs Overview

    Energy Technology Data Exchange (ETDEWEB)

    Flippo, Kirk Adler [Los Alamos National Laboratory

    2015-04-23

    The PowerPoint presentation provides an overview of High-Energy Density (HED) Physics, ICF, and Burning Plasma research programs at Los Alamos National Laboratory in New Mexico. Work in nuclear diagnostics is also presented, along with a summary of collaborations and upcoming projects.

  8. IMPACT - Integrated Modeling of Perturbations in Atmospheres for Conjunction Tracking

    Science.gov (United States)

    2013-09-01

    IMPACT, part of the research and development process at Los Alamos National Laboratory (LANL), has the goal of developing an integrated modeling system for addressing current … Ground-based observations: LANL is using a Raven-class telescope (0.35 m aperture C14 on a Paramount ME mount) to track …

  9. Targeted Alpha Therapy: The US DOE Tri-Lab (ORNL, BNL, LANL) Research Effort to Provide Accelerator-Produced 225Ac for Radiotherapy

    Science.gov (United States)

    John, Kevin

    2017-01-01

    Targeted radiotherapy is an emerging discipline of cancer therapy that exploits the biochemical differences between normal cells and cancer cells to selectively deliver a lethal dose of radiation to cancer cells, while leaving healthy cells relatively unperturbed. A broad overview of targeted alpha therapy including isotope production methods, and associated isotope production facility needs, will be provided. A more general overview of the US Department of Energy Isotope Program's Tri-Lab (ORNL, BNL, LANL) Research Effort to Provide Accelerator-Produced 225Ac for Radiotherapy will also be presented focusing on the accelerator-production of 225Ac and final product isolation methodologies for medical applications.

  10. Sampling and Analysis Plan for Assessment of LANL-Derived Residual Radionuclides in Soils within Tract A-16-e for Land Conveyance

    Energy Technology Data Exchange (ETDEWEB)

    Gillis, Jessica [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ruedig, Elizabeth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-08-25

    Public Law 105-119 directs the U.S. Department of Energy (DOE) to convey or transfer parcels of land to the Incorporated County of Los Alamos or their designees and to the Department of Interior, Bureau of Indian Affairs, in trust for the Pueblo de San Ildefonso. Los Alamos National Security is tasked to support DOE in conveyance and/or transfer of identified land parcels no later than September 2022. Under DOE Order 458.1, Radiation Protection of the Public and the Environment (O458.1, 2013), and Los Alamos National Laboratory (LANL) implementing Policy 412 (P412, 2014a), real property with the potential to contain residual radioactive material must meet the criteria for clearance and release to the public. This Sampling and Analysis Plan (SAP) investigates Tract A-16-e and proposes 50 project-specific soil samples for use in radiological clearance decisions consistent with LANL Procedure ENV-ES-TP-238 (2015a) and guidance in the Multi-Agency Radiation Survey and Site Investigation Manual (MARSSIM, 2000).

  11. Sea Ice Sensitivities in the 0.72 degrees and 0.08 degrees Arctic Cap Coupled HYCOM/CICE Models

    Science.gov (United States)

    2013-09-30

    … the Hybrid Coordinate Ocean Model (HYCOM) and the Los Alamos National Laboratory (LANL) CICE model. The objectives of the project are to optimize … and, together with NRL, implement and test new versions of CICE in these coupled model set-ups as they become available from the LANL developers. … fields will be compared with independent ice …

  12. MEMORANDUM OF UNDERSTANDING between Los Alamos National Laboratory (LANL) and Savannah River National Laboratory (SRNL) for Analytical Chemistry Support for Oxide Production Samples

    Energy Technology Data Exchange (ETDEWEB)

    Lloyd, Jane Alexandria [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-12-01

    This MOU establishes the responsibilities and requirements for the packaging and transport of plutonium dioxide (PuO2) samples for shipment from LANL to SRNL. The scope includes the shipping, packaging, quality assurance (QA), inspection, and documentation requirements to successfully obtain the chemical and isotopic characteristics of the PuO2. The requirements in this document are necessary, but not sufficient, to execute this work and do not imply exemption from contractual requirements at either site. This document is not intended to specify all of the processes and procedures necessary to execute this work. This MOU also establishes appropriate requirements, goals, and expectations. Each party will establish a technical point of contact (POC) who will be responsible for addressing issues as they arise.

  13. Status and test report on the LANL-Boeing APLE/HPO flying-wire beam-profile monitor. Status report

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, M.; Barlow, D.; Fortgang, C.; Gilpatrick, J.; Meyer, R.; Rendon, A.; Warren, D. [Los Alamos National Lab., NM (United States); Greegor, R. [Boeing Co., Seattle, WA (United States)

    1994-07-01

    The High-Power Oscillator (HPO) demonstration of the Average Power Laser Experiment (APLE) is a collaboration by Los Alamos National Laboratory and Boeing to demonstrate a 10 kW average power, 10 μm free electron laser (FEL). As part of the collaboration, Los Alamos National Laboratory (LANL) is responsible for many of the electron beam diagnostics in the linac, transport, and laser sections. Because of the high duty factor and power of the electron beam, special diagnostics are required. This report describes the flying-wire diagnostic required to monitor the beam profile during high-power, high-duty operation. The authors describe the diagnostic and prototype tests on the Los Alamos APLE Prototype Experiment (APEX) FEL. They also describe the current status of the flying wires being built for APLE.

  14. Quality New Mexico Performance Excellence Award Roadrunner Application 2016

    Energy Technology Data Exchange (ETDEWEB)

    Petru, Ernest Frank [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-06

    The Human Resources (HR) Division is a critical part of Los Alamos National Laboratory, an internationally recognized science and R&D facility with a specialized workforce of more than 10,000. The Laboratory’s mission is to solve national security challenges through scientific excellence. The HR Division partners with employees and managers to support the Laboratory in hiring, retaining, and motivating an exceptional workforce. The Laboratory is owned by the U.S. Department of Energy (DOE), with oversight by the DOE’s National Nuclear Security Administration (NNSA). In 2006, NNSA awarded the contract for managing and operating the Laboratory to Los Alamos National Security, LLC (LANS), a for-profit consortium. This report expounds on the Division's performance excellence efforts, presenting its strategic plan and operations.

  15. Pre-test estimates of temperature decline for the LANL Fenton Hill Long-Term Flow Test

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, B.A. [Los Alamos National Lab., NM (United States); Kruger, P. [Stanford Univ., CA (United States). Stanford Geothermal Program

    1992-06-01

    Pre-test predictions for the Long-Term Flow Test (LTFT) of the experimental Hot Dry Rock (HDR) reservoir at Fenton Hill were made using two models. Both models depend on estimates of the "effective" reservoir volume accessed by the fluid and the mean fracture spacing (MFS) of major joints for fluid flow. The effective reservoir volume was estimated using a variety of techniques, and the range of values for the MFS was set through experience in modeling the thermal cooldown of other experimental HDR reservoirs. The two pre-test predictions for cooldown to 210°C (a value chosen to compare the models) from an initial temperature of 240°C are 6.1 and 10.7 years. Assuming that a minimum of 10°C is required to provide an unequivocal indication of thermal cooldown, both models predict that the reservoir will not exhibit observable cooldown for at least two years.

  16. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). This report collects selected results from publications in 2016 based on work carried out using the CASL allocation at LANL.

  17. Isomeric ratio measurements for the radiative neutron capture 176Lu(n ,γ ) at the LANL DANCE facility

    Science.gov (United States)

    Denis-Petit, D.; Roig, O.; Méot, V.; Morillon, B.; Romain, P.; Jandel, M.; Kawano, T.; Vieira, D. J.; Bond, E. M.; Bredeweg, T. A.; Couture, A. J.; Haight, R. C.; Keksis, A. L.; Rundberg, R. S.; Ullmann, J. L.

    2016-11-01

    The isomeric ratios for the neutron capture reaction 176Lu(n,γ) to the Jπ = 5/2-, 761.7 keV, T1/2 = 32.8 ns and the Jπ = 15/2+, 1356.9 keV, T1/2 = 11.1 ns levels of 177Lu have been measured for the first time. The experiment was carried out with the Detector for Advanced Neutron Capture Experiments (DANCE) at the Los Alamos National Laboratory. Measured isomeric ratios are compared with TALYS calculations using different models for photon strength functions, level densities, and optical potentials. In order to reproduce the experimental γ-ray spectra, a low-energy resonance must be added to the photon strength function used in our Hauser-Feshbach calculations.

  18. Non-destructive Preirradiation Assessment of UN / U-Si “LANL1” ATF formulation

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, Sven C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Losko, Adrian Simon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pokharel, Reeju [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ickes, Timothy Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hunter, James F. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Donald William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Voit, Stewart Lancaster [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tremsin, Anton S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bourke, Mark Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); McClellan, Kenneth James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-15

    The goal of the Advanced Non-destructive Fuel Examination (ANDE) work package is the development and application of non-destructive neutron imaging and scattering techniques to ceramic and metallic nuclear fuels, ultimately also to irradiated fuels. The results of these characterizations provide complete pre- and post-irradiation information on length scales ranging from mm to nm, guide destructive examination, and inform modelling efforts. Besides technique development and application to samples to be irradiated, the ANDE work package also examines possible technologies to provide these characterization techniques pool-side, e.g. at the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL), using laser-driven intense pulsed neutron and gamma sources. Neutron tomography and neutron diffraction characterizations were performed on nine pellets: four UN/U-Si composite formulations (two enrichment levels), three pure U3Si5 reference formulations (two enrichment levels), and two reject pellets with visible flaws (to qualify the technique). The 235U enrichments ranged from 0.2 to 8.8 wt. %. The nitride/silicide composites are candidate compositions for use as Accident Tolerant Fuel (ATF). The monophase U3Si5 material was included as a reference. Pellets from the same fabrication batches will be inserted in the Advanced Test Reactor at Idaho during 2016. We have also proposed a data format to build a database for characterization results of individual pellets. Neutron data reported here were collected in the LANSCE run cycle that started in September 2015 and ended in March 2016. This report provides the results for the characterized samples and discussion in the context of ANDE and APIE. We quantified the gamma spectra of several samples in their as-received state as well as after neutron irradiation to ensure that the neutron irradiation does not add significant activation that would complicate shipment and handling.

  19. Youngs-Type Material Strength Model in the Besnard-Harlow-Rauenzahn Turbulence Equations

    Energy Technology Data Exchange (ETDEWEB)

    Denissen, Nicholas Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Plohr, Bradley J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-17

    Youngs [AWE Report Number 96/96, 1992] has augmented a two-phase turbulence model to account for material strength. Here we adapt the model of Youngs to the turbulence model for the mixture developed by Besnard, Harlow, and Rauenzahn [LANL Report LA-10911, 1987].

  20. Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Brian James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yin, Lin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stark, David James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-06

    One of the long-standing problems in the community is the question of how we can model “next-generation” laser-ion acceleration in a computationally tractable way. A new particle tracking capability in the LANL VPIC kinetic plasma modeling code has enabled us to solve this long-standing problem.

  1. Sea Ice Sensitivities in the 0.72 deg and 0.08 deg Arctic Cap Coupled HYCOM/CICE Models

    Science.gov (United States)

    2014-09-30

    … the Hybrid Coordinate Ocean Model (HYCOM) and the Los Alamos National Laboratory (LANL) CICE model. The objectives of this project …

  2. Transfer of Real-time Dynamic Radiation Environment Assimilation Model; Research to Operation

    Science.gov (United States)

    Cho, K. S. F.; Hwang, J.; Shin, D. K.; Kim, G. J.; Morley, S.; Henderson, M. G.; Friedel, R. H.; Reeves, G. D.

    2015-12-01

    The Real-time Dynamic Radiation Environment Assimilation Model (rtDREAM) was developed by LANL for nowcasting energetic electron fluxes in the radiation belts in order to quantify potential risks of radiation damage to satellites. Assimilated data come from multiple sources, including LANL assets (GEO, GPS). For the transfer of the rtDREAM code from research to operations, LANL, KSWC, and NOAA have established a Memorandum of Understanding (MOU) on the collaboration between the three parties. Under this MOU, KSWC/RRA provides all the support for transitioning the research version of DREAM to operations. KASI is primarily responsible for providing all the interfaces between the current scientific output formats of the code and useful space weather products that can be used and accessed through the web. In the second phase, KASI will be responsible for performing the work needed to transform the Van Allen Probes beacon data into "DREAM ready" inputs. KASI will also provide the "operational" code framework and additional data preparation, model output, display, and web page codes back to LANL and SWPC. KASI is already a NASA partnering ground station for the Van Allen Probes' space weather beacon data and can demonstrate the use and utility of these data for web-based comparison between rtDREAM and observations. NOAA has offered to take on some of the data processing tasks specific to the GOES data.

  3. Applying the LANL Statistical Pattern Recognition Paradigm for Structural Health Monitoring to Data from a Surface-Effect Fast Patrol Boat

    Energy Technology Data Exchange (ETDEWEB)

    Hoon Sohn; Charles Farrar; Norman Hunter; Keith Worden

    2001-01-01

    This report summarizes the analysis of fiber-optic strain gauge data obtained from a surface-effect fast patrol boat being studied by the staff at the Norwegian Defense Research Establishment (NDRE) in Norway and the Naval Research Laboratory (NRL) in Washington D.C. Data from two different structural conditions were provided to the staff at Los Alamos National Laboratory. The problem was then approached from a statistical pattern recognition paradigm. This paradigm can be described as a four-part process: (1) operational evaluation, (2) data acquisition and cleansing, (3) feature extraction and data reduction, and (4) statistical model development for feature discrimination. Given that the first two portions of this paradigm were mostly completed by the NDRE and NRL staff, this study focused on data normalization, feature extraction, and statistical modeling for feature discrimination. The feature extraction process began by looking at relatively simple statistics of the signals and progressed to using the residual errors from auto-regressive (AR) models fit to the measured data as the damage-sensitive features. Data normalization proved to be the most challenging portion of this investigation. A novel approach to data normalization, where the residual errors in the AR model are considered to be an unmeasured input and an auto-regressive model with exogenous inputs (ARX) is then fit to portions of the data exhibiting similar waveforms, was successfully applied to this problem. With this normalization procedure, a clear distinction between the two different structural conditions was obtained. A false-positive study was also run, and the procedure developed herein did not yield any false-positive indications of damage. Finally, the results must be qualified by the fact that this procedure has only been applied to very limited data samples. A more complete analysis of additional data taken under various operational and environmental conditions, as well as other structural conditions, is needed to further validate the procedure.
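
    A minimal sketch of the AR-residual feature idea described above is given below. The signals, model order, ordinary least-squares fit, and threshold are illustrative assumptions, not the fiber-optic strain data or the exact procedure used in the report.

```python
# Sketch of an AR-residual damage feature: fit an auto-regressive (AR) model
# to a baseline record and use the residual standard deviation on new data as
# a damage-sensitive feature. Signals and AR order here are synthetic.
import numpy as np

def fit_ar(signal, order):
    """Least-squares fit of AR coefficients: x[t] ~ sum_k a[k] * x[t-k-1]."""
    X = np.column_stack([signal[order - k - 1 : len(signal) - k - 1]
                         for k in range(order)])
    y = signal[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def ar_residual_std(signal, coeffs):
    """Residual standard deviation of `signal` under a previously fit AR model."""
    order = len(coeffs)
    X = np.column_stack([signal[order - k - 1 : len(signal) - k - 1]
                         for k in range(order)])
    residuals = signal[order:] - X @ coeffs
    return residuals.std()

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 2000)
baseline = np.sin(2 * np.pi * 1.5 * t) + 0.05 * rng.standard_normal(t.size)
changed = np.sin(2 * np.pi * 1.9 * t) + 0.05 * rng.standard_normal(t.size)

coeffs = fit_ar(baseline, order=10)
print("baseline residual std:", ar_residual_std(baseline, coeffs))
print("changed  residual std:", ar_residual_std(changed, coeffs))
```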

  4. A Linear Viscoelastic Model Calibration of Sylgard 184.

    Energy Technology Data Exchange (ETDEWEB)

    Long, Kevin Nicholas; Brown, Judith Alice

    2017-04-01

    We calibrate a linear thermoviscoelastic model for solid Sylgard 184 (90-10 formulation), a lightly cross-linked, highly flexible isotropic elastomer, for use both in Sierra/Solid Mechanics via the Universal Polymer Model and in Sierra/Structural Dynamics (Salinas) as an isotropic viscoelastic material. Material inputs for the calibration in both codes are provided. The frequency-domain master curve of oscillatory shear was obtained from a report from Los Alamos National Laboratory (LANL). However, because the form of those data differs from the constitutive models in Sierra, we also present the mapping of the LANL data onto Sandia’s constitutive models. Finally, blind predictions of cyclic tension and compression out to moderate strains of 40% and 20%, respectively, are compared with Sandia’s legacy cure schedule material. Although the strain rate of the data is unknown, the linear thermoviscoelastic model accurately predicts the experiments out to moderate strains for the slower strain rates, which is consistent with the expectation that quasistatic test procedures were likely followed. This good agreement comes despite the different cure schedules between the Sandia and LANL data.
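
    One common functional form for such a linear viscoelastic solid is a Prony series (generalized Maxwell) representation, which can be evaluated in either the time or the frequency domain. The sketch below is illustrative only; the moduli and relaxation times are made-up placeholders, not the Sylgard 184 calibration or the Sierra material inputs.

```python
# Illustrative Prony-series (generalized Maxwell) relaxation modulus:
#   G(t) = G_inf + sum_i G_i * exp(-t / tau_i)
# The coefficients below are placeholders, not the Sylgard 184 calibration.
import numpy as np

G_INF = 0.30e6                                # long-time shear modulus, Pa
PRONY_G = np.array([0.50e6, 0.20e6, 0.10e6])  # Prony moduli, Pa
PRONY_TAU = np.array([1e-3, 1e-1, 1e1])       # relaxation times, s

def relaxation_modulus(t):
    """Time-domain shear relaxation modulus G(t) for the Prony series above."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    return G_INF + (PRONY_G * np.exp(-t[:, None] / PRONY_TAU)).sum(axis=1)

def storage_loss_moduli(omega):
    """Frequency-domain storage and loss moduli G'(w), G''(w) of the same series."""
    omega = np.atleast_1d(np.asarray(omega, dtype=float))
    wt = omega[:, None] * PRONY_TAU
    g_storage = G_INF + (PRONY_G * wt**2 / (1.0 + wt**2)).sum(axis=1)
    g_loss = (PRONY_G * wt / (1.0 + wt**2)).sum(axis=1)
    return g_storage, g_loss

print(relaxation_modulus([0.0, 0.01, 1.0, 100.0]))
print(storage_loss_moduli([0.1, 10.0, 1000.0]))
```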

  5. Modeling Abrupt Change in Global Sea Level Arising from Ocean - Ice-Sheet Interaction

    Energy Technology Data Exchange (ETDEWEB)

    Holland, David M

    2011-09-24

    It is proposed to develop, validate, and apply a coupled ocean ice-sheet model to simulate possible, abrupt future change in global sea level. This research is to be carried out collaboratively between an academic institute and a Department of Energy Laboratory (DOE), namely, the PI and a graduate student at New York University (NYU) and climate model researchers at the Los Alamos National Laboratory (LANL). The NYU contribution is mainly in the area of incorporating new physical processes into the model, while the LANL efforts are focused on improved numerics and overall model development. NYU and LANL will work together on applying the model to a variety of modeling scenarios of recent past and possible near-future abrupt change to the configuration of the periphery of the major ice sheets. The project's ultimate goal is to provide a robust, accurate prediction of future global sea level change, a feat that no fully-coupled climate model is currently capable of producing. This proposal seeks to advance that ultimate goal by developing, validating, and applying a regional model that can simulate the detailed processes involved in sea-level change due to ocean ice-sheet interaction. Directly modeling ocean ice-sheet processes in a fully-coupled global climate model is not a feasible activity at present given the near-complete absence of development of any such causal mechanism in these models to date.

  6. Summary of studies conducted under LANL subcontracts: 9-X73-4229J-1 and 9-XC3-9739K-1

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz, C.A.; Chandra, S.; Fan, B.; Muyshondt, A.; Parulian, A.; McFarland, A.R.

    1994-10-01

    Under the scope of the ventilation stack study, 1/3-scale models of the TA-55 facility FE-15 and FE-16 stacks were fabricated and tested. Tests were conducted to achieve stack and plenum configurations that would produce adequate mixing, flow uniformity, and particulate concentration uniformity. This should enable effective single-point sampling in these exhaust stacks at the existing sampling locations during normal and emergency operating conditions. Velocity profiles were obtained at the current sampling location. Flow mixing was evaluated via a tracer gas technique. Gas concentration profiles were measured with the help of the tracer gas technique while releasing the tracer gas in the different inlet ducts. Finally, aerosol particle concentration profiles were measured at the existing sampling locations. The results indicated that the present field configurations of the two stacks produced poor mixing. Adjustments and modifications of the model configurations were carried out to arrive at uniform velocity, gas, and aerosol particle concentration profiles at the present sampling location. The modified models typically produced profile coefficients of variation that were less than 10%, suggesting that the present sampling locations could effectively be used for single-point sampling. The filter air sampler research is concerned with the development of a filter air sampler to improve filter-changing efficiency and to provide a built-in flow indicator. Testing was done on the new sampler to find the effect of variations in air speed, particle size, flow rate, and orientation on the entrance efficiency. It was found that the aerosol sampling efficiencies of the new unit are no different from those of the filter air sampler currently used by Los Alamos National Laboratory.
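
    The uniformity criterion mentioned above (profile coefficients of variation under 10%) amounts to a simple statistic over the traverse measurements. A minimal sketch follows; the profile values are made up, not measurements from the FE-15 or FE-16 stack models.

```python
# Coefficient-of-variation (COV) check of the kind used above to judge whether
# a velocity, tracer-gas, or aerosol concentration profile is uniform enough
# for single-point sampling. The example profile values are made up.
import statistics

def coefficient_of_variation(profile):
    """COV = sample standard deviation / mean, expressed as a fraction."""
    return statistics.stdev(profile) / statistics.mean(profile)

# Hypothetical normalized concentrations measured across a stack cross-section.
profile = [0.97, 1.03, 1.01, 0.95, 1.04, 0.99, 1.02, 0.98]
cov = coefficient_of_variation(profile)
print(f"COV = {cov:.1%} -> {'acceptable' if cov < 0.10 else 'needs more mixing'}")
```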

  7. Using the internet in middle schools: A model for success

    Energy Technology Data Exchange (ETDEWEB)

    Addessio, B.; Boorman, M.; Eker, P.; Fletcher, K.; Judd, B.; Trainor, M. [Los Alamos National Lab., NM (United States); Corn, C.; Olsen, J.; Trottier, A. [Los Alamos Middle School, Los Alamos, New Mexico (United States)

    1994-03-01

    Los Alamos National Laboratory (LANL) developed a model for school networking using Los Alamos Middle School as a testbed. The project was a collaborative effort between the school and the Laboratory. The school secured administrative funding for hardware and software; and LANL provided the network architecture, installation, consulting, and training. The model is characterized by a computer classroom linked with two GatorBoxes and a UNIX-based workstation server. Six additional computers have also been networked from a teacher learning center and the library. The model support infrastructure includes: local school system administrators/lead teachers, introductory and intermediate hands-on teacher learning, teacher incentives for involvement and use, opportunities for student training and use, and ongoing LANL consulting. Formative evaluation data reveals that students and teachers alike are finding the Internet to be a tool that crosses disciplines, allowing them to obtain more, timely information and to communicate with others more effectively and efficiently. A lead teacher's enthusiastic comments indicate some of the value gained: "We have just scratched the surface. Each day someone seems to find something new and interesting on the Internet. The possibilities seem endless."

  8. Los Alamos Waste Management Cost Estimation Model; Final report: Documentation of waste management process, development of Cost Estimation Model, and model reference manual

    Energy Technology Data Exchange (ETDEWEB)

    Matysiak, L.M.; Burns, M.L.

    1994-03-01

    This final report completes the Los Alamos Waste Management Cost Estimation Project, and includes the documentation of the waste management processes at Los Alamos National Laboratory (LANL) for hazardous, mixed, low-level radioactive solid and transuranic waste, development of the cost estimation model and a user reference manual. The ultimate goal of this effort was to develop an estimate of the life cycle costs for the aforementioned waste types. The Cost Estimation Model is a tool that can be used to calculate the costs of waste management at LANL for the aforementioned waste types, under several different scenarios. Each waste category at LANL is managed in a separate fashion, according to Department of Energy requirements and state and federal regulations. The cost of the waste management process for each waste category has not previously been well documented. In particular, the costs associated with the handling, treatment and storage of the waste have not been well understood. It is anticipated that greater knowledge of these costs will encourage waste generators at the Laboratory to apply waste minimization techniques to current operations. Expected benefits of waste minimization are a reduction in waste volume, decrease in liability and lower waste management costs.
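
    As a schematic of the kind of bookkeeping such a cost estimation model performs, the sketch below tallies a life-cycle cost per waste category from per-stage unit costs. The categories follow the four waste types named above, but every unit cost and volume is a hypothetical placeholder rather than a LANL figure, and the real model handles scenarios and requirements well beyond this.

```python
# Minimal sketch of a per-category waste life-cycle cost tally (handling,
# treatment, storage, disposal). All numbers are hypothetical placeholders.

UNIT_COSTS = {  # $/m^3 by waste category and life-cycle stage (hypothetical)
    "hazardous":   {"handling": 200, "treatment": 1500, "storage": 300,  "disposal": 900},
    "mixed":       {"handling": 400, "treatment": 6000, "storage": 800,  "disposal": 4000},
    "low-level":   {"handling": 150, "treatment": 500,  "storage": 200,  "disposal": 1200},
    "transuranic": {"handling": 600, "treatment": 3000, "storage": 1500, "disposal": 8000},
}

def life_cycle_cost(volumes_m3):
    """Total cost per category for a scenario given waste volumes in m^3."""
    return {cat: vol * sum(UNIT_COSTS[cat].values())
            for cat, vol in volumes_m3.items()}

scenario = {"hazardous": 40, "mixed": 5, "low-level": 300, "transuranic": 12}
for category, cost in life_cycle_cost(scenario).items():
    print(f"{category:>12}: ${cost:,.0f}")
```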

  9. Incorporation of a Generalized Data Assimilation Module within a Global Photospheric Flux Transport Model

    Science.gov (United States)

    2016-03-31

    … Los Alamos National Laboratory (LANL). The main outcome of this research effort is the state-of-the-art data assimilative photospheric flux transport model now … input to WSA. Such comparisons were made with the assistance of a University of New Mexico graduate student/Summer AFRL Space Scholar so that WSA … advance the state-of-the-art 3-D MHD CORHEL coronal and solar wind model. In year seven (2014) significant progress was made in this area.

  10. OMEGA FY13 HED requests - LANL

    Energy Technology Data Exchange (ETDEWEB)

    Workman, Jonathan B [Los Alamos National Laboratory; Loomis, Eric N [Los Alamos National Laboratory

    2012-06-25

    This is a summary of scientific work to be performed on the OMEGA laser system located at the Laboratory for Laser Energetics in Rochester, New York. The work is funded through the Science and ICF Campaigns and falls under the category of laser-driven High-Energy Density Physics experiments. This summary is presented to the Rochester scheduling committee on an annual basis for scheduling and planning purposes.

  11. Recent Work on Calorimetry at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Santi, Peter A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hauck, Danielle K. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-01-09

    This report is a briefing to collaborators at the Swedish Nuclear Fuel and Waste Management Company, Oskarshamn, Sweden, January 14, 2014. It describes the way in which calorimetry supports the safeguards mission.

  12. UNM/LANL Volcanology Summer Field Course

    Science.gov (United States)

    van Eaton, A.; Goff, F.; Fischer, T. P.; Baldridge, W.; Semken, S.

    2007-12-01

    The Volcanology Summer Field Course, taught jointly by volcanologists from the University of New Mexico and Los Alamos National Laboratory, has instructed over 140 undergraduate and graduate students from 15 countries since 1992. The course consists of nine graded field exercises conducted in diverse volcanic rocks of the Miocene to Quaternary-age Jemez Volcanic Field and Valles caldera, with excursions to the Miocene Ship Rock dike and plug complex, the Pliocene Mount Taylor composite volcano, and the Quaternary Zuni-Bandera basalt field. Exercises focus on mapping large-scale silicic eruption deposits (e.g., the Bandelier Tuff as well as older and younger eruptions), establishing volcanic stratigraphy, understanding the processes of water-magma interaction through detailed mapping, and investigating hydrothermal alteration in an intra-caldera setting. Techniques such as geothermal gas sampling and identification of volcanic rocks and structures form an integral part of the training. Factors contributing to the success of the course include: 1) its small class size of 16 to 17 students; 2) its duration of 3.5 weeks, long enough for sustained focus on the Jemez field area; 3) a central lodging arrangement (Young's Ranch Field Station), with meals coordinated by a camp cook; 4) an organized course structure supplementing field assignments with evening lectures in a common room; 5) instructors with a variety of geological/volcanological expertise; and 6) the ability to team up with multi-national students bringing a wide array of approaches and experiences. The course acts as a springboard for students pursuing interests in volcanology, offering an intensive and lively field experience that is difficult to find anywhere else.

  13. The cement solidification systems at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Veazey, G.W.

    1990-01-01

    There are two major cement solidification systems at Los Alamos National Laboratory. Both are focused primarily on treating waste from the evaporator at TA-55, the Plutonium Processing Facility. The evaporator receives the liquid waste stream from TA-55's nitric acid-based, aqueous-processing operations and concentrates the majority of the radionuclides in the evaporator bottoms solution. This is sent to the TA-55 cementation system. The evaporator distillate is sent to the TA-50 facility, where the radionuclides are precipitated and then cemented. Both systems treat TRU-level waste, and so are operated according to the criteria for WIPP-destined waste, but they differ in both cement type and mixing method. The TA-55 system uses Envirostone, a gypsum-based cement, and in-drum prop mixing; the TA-50 system uses Portland cement and drum tumbling for mixing.

  14. SUPPORT FOR THE COMPLETION OF THE ARM PROJECT AND DEVELOPMENT OF A FIELD DEMONSTRATION OF THE GWIS MODEL FOR A VIRTUAL ENTERPRISE

    Energy Technology Data Exchange (ETDEWEB)

    F. DAVID MARTIN; MARK B. MURPHY - STRATEGIC TECHNOLOGY RESOURCES, LLC

    1999-12-31

    Strategic Technology Resources, L.L.C. (STR) provided work for Los Alamos National Laboratory (LANL) in response to Request for Proposal 005BZ0019-35. The objectives of the work in this project were to: (1) support the completion of the Advanced Reservoir Management (ARM) cooperative research and development agreement (CRADA) LA9502037, and (2) support the development of a field demonstration of the LANL-developed Global Weapons Information System (GWIS) model for virtual enterprises. The second objective was contingent upon DOE approval of the Advanced Information Management (AIM) CRADA. At the request of the LANL Technical Representative, the project was granted a no-cost extension to November 30, 1999. As part of the project, STR provided managerial support for the ARM CRADA by: (1) assessing the data resources of the participating companies, (2) facilitating the transfer of technical data to LANL, (3) preparing reports, (4) managing communications between the parties to the ARM CRADA, and (5) assisting with the dissemination of information between the parties to technical professional societies and trade associations. The first phase of the current project was to continue to engage subcontractors to perform tasks in the ARM CRADA for which LANL expertise was lacking. All of the ARM field studies required of the project were completed, and final reports for all of the project studies are appended to this final report. The second phase of the current project was to support the field demonstration of the GWIS model for virtual enterprises in an oilfield setting. STR developed a hypertext Webpage that describes the concept and implementation of a virtual enterprise for reservoir management in the petroleum industry. Contents of the hypertext document are included in this report on the project.

  15. Influence of low-altitude meteorological conditions on local infrasound propagation investigated by 3-D full-waveform modeling

    Science.gov (United States)

    Kim, Keehoon; Rodgers, Arthur

    2017-08-01

    Vertical stratification in the low atmosphere impacts near-ground sound propagation. On clear days, for example, negative gradients of low-atmospheric temperature can lead to upward refraction of acoustic waves and create a zone of silence near the ground, where no acoustic rays can arrive. We investigate the impacts of lower-tropospheric temperature and wind-velocity gradients on acoustic wave propagation using numerical simulations. Sound refraction in the atmosphere is a frequency-dependent wave phenomenon, and therefore classical ray methods based on the infinite-frequency approximation may not be suitable for modeling acoustic wave amplitudes. In this study, a full-waveform acoustic solver was used to predict amplitudes of acoustic waves taking into account meteorological conditions (temperature, pressure, and wind). Local radiosonde sounding data were input into the acoustic simulations to characterize the background conditions of the local atmosphere. The results of numerical modeling indicate that acoustic overpressure amplitudes were significantly affected by local atmospheric wind speed and direction near the ground. Local wind changes the effective sound speed profile in the atmosphere and influences overpressure amplitude decay governed by upward refraction. We compared 3-D finite-difference modeling results with acoustic overpressure measurements from the Humming Roadrunner explosion experiments conducted in New Mexico in 2012. The modeling results showed good agreement with the observations in peak amplitudes when the background wind was weak and well characterized by local atmospheric data. However, when a strong wind was present at an explosion and its variability was poorly characterized by local radiosonde sounding, the numerical prediction of local acoustic amplitude agreed poorly with the observations. Additional numerical simulations with the inclusion of surface wind data indicate that local acoustic amplitudes can vary significantly depending on near-surface wind conditions.
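
    As a point of reference for the wind dependence described above, the effective sound speed along a given propagation azimuth is commonly taken as the adiabatic sound speed from temperature plus the wind component projected onto the propagation direction. The sketch below illustrates that bookkeeping; the gas constants are standard for dry air, but the sounding values and azimuth are synthetic placeholders, not radiosonde data from the Humming Roadrunner experiments.

```python
# Effective sound speed profile: adiabatic sound speed from temperature plus
# the wind component along the propagation azimuth. Sounding values synthetic.
import numpy as np

GAMMA = 1.4          # ratio of specific heats for dry air
R_AIR = 287.05       # specific gas constant for dry air, J/(kg K)

def effective_sound_speed(temp_k, wind_u, wind_v, azimuth_deg):
    """c_eff(z) = sqrt(gamma * R * T) + wind projected onto the propagation
    azimuth (measured clockwise from north, toward the receiver)."""
    az = np.deg2rad(azimuth_deg)
    c_thermo = np.sqrt(GAMMA * R_AIR * np.asarray(temp_k, dtype=float))
    along_track_wind = np.asarray(wind_u) * np.sin(az) + np.asarray(wind_v) * np.cos(az)
    return c_thermo + along_track_wind

# Synthetic near-ground sounding levels: temperature (K), u/v wind (m/s).
temp_k = np.array([300.0, 298.0, 296.5, 295.0])
wind_u = np.array([1.0, 3.0, 5.0, 7.0])      # eastward component
wind_v = np.array([0.5, 1.0, 1.5, 2.0])      # northward component

print(effective_sound_speed(temp_k, wind_u, wind_v, azimuth_deg=90.0))
```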

  16. Challenges in Integrated Computational Structure - Material Modeling of High Strain-Rate Deformation and Failure in Heterogeneous Materials

    Science.gov (United States)

    2014-10-09

    … Bronkhorst of LANL. This was followed by a 30-minute panel discussion. (iv) Plenary session #2 on Probabilistic Modeling & Uncertainty …

  17. National environmental/economic infrastructure system model

    Energy Technology Data Exchange (ETDEWEB)

    Drake, R.H.; Hardie, R.W.; Loose, V.W.; Booth, S.R.

    1997-08-01

    This is the final report for a one-year Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The ultimate goal was to develop a new methodology for macroeconomic modeling applied to national environmental and economic problems. A modeling demonstration and briefings were produced, and significant internal technical support and program interest has been generated. External contacts with DOE's Office of Environmental Management (DOE-EM), US State Department, and the US intelligence community were established. As a result of DOE-EM interest and requests for further development, this research has been redirected to national environmental simulations as a new LDRD project.

  18. Using multimedia modeling to expedite site characterization.

    Science.gov (United States)

    Travis, Curtis; Obenshain, Karen R; Gunter, James T; Regens, James L; Whipple, Christopher

    2004-01-01

    This paper uses two case studies of U.S. Department of Energy nuclear weapons complex installations to illustrate the integration of expedited site characterization (ESC) and multimedia modeling in the remedial action decision-making process. CONCEPTUAL SITE MODELS, MULTIMEDIA MODELS, AND EXPEDITED SITE CHARACTERIZATION: Conceptual site models outline assumptions about contaminants and the spatial/temporal distribution of potential receptors. Multimedia models simulate contaminant transport and fate through multiple environmental media, estimate potential human exposure via specific exposure pathways, and estimate the risk of cancer and non-cancer health outcomes. ESC relies on using monitoring data to quantify the key components of an initial conceptual site model that is modified iteratively using the multimedia model. Two case studies are presented that used the ESC approach: Los Alamos National Laboratory (LANL) and Pantex. LANL released radionuclides, metals, and organic compounds into canyons surrounding the facility. Past waste management operations at the Pantex Plant included burning chemical wastes in unlined pits, burying wastes in unlined landfills, and discharging plant wastewaters into on-site surface waters. The case studies indicate that using multimedia models with the ESC approach can inform assessors about what, where, and how much site characterization data need to be collected to reduce the uncertainty associated with risk assessment. Lowering the degree of uncertainty reduces the time and cost associated with assessing potential risk and increases the confidence that decision makers have in the assessments performed.

  19. Modeling the Corona and Solar Wind using ADAPT Maps that Include Far-Side Observations

    Science.gov (United States)

    2013-11-01

    … Los Alamos National Laboratory (LANL) and the National Solar Observatory (NSO), has developed a model that produces more realistic estimates of the …

  20. Liquid-based cytology screening for cervical cancer in Lanling district

    Institute of Scientific and Technical Information of China (English)

    张丽冉; 王新国; 谢凤祥; 赵东曼; 范波涛; 李欣; 祁德波

    2015-01-01

    Objective: To collect and analyze the results of cervical cancer screening in the Lanling district and to provide a scientific basis for cervical cancer prevention and treatment. Methods: Married women in the Lanling district were enrolled in the first half of 2015. Exfoliated cervical cell specimens were collected, prepared using liquid-based cytology, and Papanicolaou stained, and were interpreted under strict diagnostic quality control according to the Bethesda System for reporting cervical cytology. Results: A total of 13,832 specimens were collected, with a specimen adequacy rate of 99.96%. Microorganisms were detected in 1,019 cases (7.37%): fungal infection in 99 (0.72%), trichomonas infection in 120 (0.87%), actinomyces infection in 30 (0.22%), and bacterial vaginosis in 770 (5.57%). Liquid-based cytology testing (LCT) findings were: atypical squamous cells of unknown significance (ASC-US) in 479 cases (3.46%); atypical squamous cells, cannot exclude HSIL (ASC-H) in 25 (0.18%); low-grade squamous intraepithelial lesions (LSIL) in 235 (1.70%); high-grade squamous intraepithelial lesions (HSIL) in 90 (0.65%); squamous cell carcinoma (SCC) in 1 (0.01%); and atypical glandular cells (AGC) in 4 (0.03%). In total, 834 cases (6.03%) had abnormal cytology, concentrated in women aged 25 to 55 years. Diagnostic quality control results: the ratio of atypical squamous cells to squamous intraepithelial lesions (ASC/SIL) was 1.546; of 27 HSIL cases with cervical biopsy, 26 showed cervical intraepithelial neoplasia (CIN) grade II or III, a concordance rate of 96.3%. Conclusion: Liquid-based cervical cytology can detect cervical microbial infections and precancerous lesions and provides a scientific basis for cervical cancer screening.

  1. Model Analysis of Complex Systems Behavior using MADS

    Science.gov (United States)

    Vesselinov, V. V.; O'Malley, D.

    2016-12-01

    Evaluation of robustness (reliability) of model predictions is challenging for models representing complex system behavior. Frequently in science and engineering applications related to complex systems, several alternative physics models may describe the available data equally well and are physically reasonable based on the available conceptual understanding. However, these alternative models could give very different predictions about the future states of the analyzed system. Furthermore, in the case of complex systems, we often must do modeling with an incomplete understanding of the underlying physical processes and model parameters. The analyses of model predictions representing complex system behavior are particularly challenging when we are quantifying uncertainties of rare events in the model prediction space that can have major consequences (also called "black swans"). These types of analyses are also computationally challenging. Here, we demonstrate a general high-performance computational tool for Model Analysis & Decision Support (MADS; http://mads.lanl.gov), which can be applied to perform analyses using any external physics or systems model. The coupling between MADS and the external model can be performed using different methods. MADS is implemented in Julia, a high-level, high-performance dynamic programming language for technical computing (http://mads.lanl.gov/, https://github.com/madsjulia/Mads.jl, http://mads.readthedocs.org). MADS has been applied to perform analyses for environmental-management and water-energy-food nexus problems. To demonstrate MADS capabilities and functionalities, we analyze a series of synthetic problems consistent with actual real-world problems.
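
    MADS itself is implemented in Julia and its own API is not reproduced here; purely as an illustration of the kind of multi-start calibration workflow described above, the following Python sketch fits a stand-in "external" model from several random starting guesses and keeps the best fit (the model, data, and parameter ranges are all hypothetical):

      # Hypothetical multi-start calibration sketch (illustrative; not the MADS API).
      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)

      def forward(params, t):
          # Stand-in for an external physics/systems model: a + b*exp(-c*t)
          a, b, c = params
          return a + b * np.exp(-c * t)

      t_obs = np.linspace(0.0, 10.0, 25)
      obs = forward([1.0, 2.0, 0.5], t_obs) + rng.normal(0.0, 0.05, t_obs.size)

      def residuals(params):
          return forward(params, t_obs) - obs

      best = None
      for _ in range(20):  # multi-start: repeat the local fit from random initial guesses
          x0 = rng.uniform([0.0, 0.0, 0.01], [5.0, 5.0, 2.0])
          fit = least_squares(residuals, x0)
          if best is None or fit.cost < best.cost:
              best = fit
      print("best-fit parameters:", best.x)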

  2. Documentation of the Ecological Risk Assessment Computer Model ECORSK.5

    Energy Technology Data Exchange (ETDEWEB)

    Anthony F. Gallegos; Gilbert J. Gonzales

    1999-06-01

    The FORTRAN77 ecological risk computer model--ECORSK.5--has been used to estimate the potential toxicity of surficial deposits of radioactive and non-radioactive contaminants to several threatened and endangered (T and E) species at the Los Alamos National Laboratory (LANL). These analyses to date include preliminary toxicity estimates for the Mexican spotted owl, the American peregrine falcon, the bald eagle, and the southwestern willow flycatcher. This work has been performed as required for the Record of Decision for the construction of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility at LANL as part of the Environmental Impact Statement. The model is dependent on the use of the geographic information system and associated software--ARC/INFO--and has been used in conjunction with LANL's Facility for Information Management and Display (FIMAD) contaminant database. The integration of FIMAD data and ARC/INFO using ECORSK.5 allows the generation of spatial information from a gridded area of potential exposure called an Ecological Exposure Unit. ECORSK.5 was used to simulate exposures using a modified Environmental Protection Agency Quotient Method. The model can handle a large number of contaminants within the home range of T and E species. This integration results in the production of hazard indices which, when compared to risk evaluation criteria, estimate the potential for impact from consumption of contaminants in food and ingestion of soil. The assessment is considered a Tier-2 type of analysis. This report summarizes and documents the ECORSK.5 code, the mathematical models used in the development of ECORSK.5, and the input and other requirements for its operation. Other auxiliary FORTRAN 77 codes used for processing and graphing output from ECORSK.5 are also discussed. The reader may refer to reports cited in the introduction to obtain greater detail on past applications of ECORSK.5 and assumptions used in deriving model parameters.
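
    The quotient method mentioned above can be illustrated schematically (a generic Python sketch, not the ECORSK.5 FORTRAN source): each hazard quotient is an estimated intake divided by a toxicity reference value, and the hazard index is their sum over contaminants; all intakes and reference values below are hypothetical.

      # Schematic EPA-style quotient method (hypothetical intakes and reference values).
      def hazard_index(intakes_mg_per_kg_day, trvs_mg_per_kg_day):
          # Sum of hazard quotients HQ_i = intake_i / TRV_i over all contaminants.
          hi = 0.0
          for contaminant, intake in intakes_mg_per_kg_day.items():
              trv = trvs_mg_per_kg_day[contaminant]
              hi += intake / trv
          return hi

      # Hypothetical example: dietary plus soil-ingestion intake for one receptor.
      intakes = {"Pb": 0.02, "Hg": 0.001, "PCB": 0.0005}   # mg/kg-day
      trvs    = {"Pb": 0.08, "Hg": 0.013, "PCB": 0.0002}   # mg/kg-day (illustrative)
      hi = hazard_index(intakes, trvs)
      print(f"Hazard index = {hi:.2f} (values above 1 suggest potential for adverse effects)")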

  3. Harnessing Petaflop-Scale Multi-Core Supercomputing for Problems in Space Science

    Science.gov (United States)

    Albright, B. J.; Yin, L.; Bowers, K. J.; Daughton, W.; Bergen, B.; Kwan, T. J.

    2008-12-01

    The particle-in-cell kinetic plasma code VPIC has been migrated successfully to the world's fastest supercomputer, Roadrunner, a hybrid multi-core platform built by IBM for the Los Alamos National Laboratory. How this was achieved will be described and examples of state-of-the-art calculations in space science, in particular, the study of magnetic reconnection, will be presented. With VPIC on Roadrunner, we have performed, for the first time, plasma PIC calculations with over one trillion particles, >100× larger than calculations considered "heroic" by community standards. This allows examination of physics at unprecedented scale and fidelity. Roadrunner is an example of an emerging paradigm in supercomputing: the trend toward multi-core systems with deep hierarchies and where memory bandwidth optimization is vital to achieving high performance. Getting VPIC to perform well on such systems is a formidable challenge: the core algorithm is memory bandwidth limited with low compute-to-data ratio and requires random access to memory in its inner loop. That we were able to get VPIC to perform and scale well, achieving >0.374 Pflop/s and linear weak scaling on real physics problems on up to the full 12240-core Roadrunner machine, bodes well for harnessing these machines for our community's needs in the future. Many of the design considerations encountered carry over to other multi-core and accelerated (e.g., GPU-based) platforms, and we modified VPIC with flexibility in mind. These will be summarized and strategies for how one might adapt a code for such platforms will be shared. Work performed under the auspices of the U.S. DOE by the LANS LLC Los Alamos National Laboratory. Dr. Bowers is a LANL Guest Scientist; he is presently at D. E. Shaw Research LLC, 120 W 45th Street, 39th Floor, New York, NY 10036.
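
    To illustrate why a particle-in-cell inner loop is memory-bandwidth bound (a minimal Python sketch, not an excerpt of VPIC), the deposit and gather steps below index the grid through particle positions, so most of the work is scattered, effectively random, loads and stores rather than arithmetic; the field solve is omitted.

      # Minimal 1-D electrostatic PIC step (illustration only, not VPIC).
      import numpy as np

      ng, npart, L = 64, 10_000, 1.0
      dx = L / ng
      rng = np.random.default_rng(1)
      x = rng.uniform(0.0, L, npart)       # particle positions
      v = rng.normal(0.0, 1.0, npart)      # particle velocities
      q_over_m, dt = -1.0, 1e-3
      E = np.zeros(ng)                     # grid electric field (assumed given here)

      def step(x, v, E):
          # Scatter: deposit charge to the two nearest grid points (linear weighting).
          rho = np.zeros(ng)
          i = np.floor(x / dx).astype(int) % ng
          w = (x / dx) - np.floor(x / dx)
          np.add.at(rho, i, 1.0 - w)           # random-access writes
          np.add.at(rho, (i + 1) % ng, w)
          # (A field solve from rho would go here; omitted for brevity.)
          # Gather: interpolate E back to particle positions (random-access reads).
          Ep = (1.0 - w) * E[i] + w * E[(i + 1) % ng]
          # Push: leapfrog-style velocity and position update.
          v = v + q_over_m * Ep * dt
          x = (x + v * dt) % L
          return x, v, rho

      x, v, rho = step(x, v, E)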

  4. A Global, Multi-Resolution Approach to Regional Ocean Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang

    2013-11-08

    In this collaborative research project between Pennsylvania State University, Colorado State University, and Florida State University, we focused mainly on developing multi-resolution algorithms suitable for regional ocean modeling. We developed a hybrid implicit-explicit adaptive multirate time integration method to solve systems of time-dependent equations that present two significantly different time scales. We studied the effects of spatial simplicial meshes on the stability and the conditioning of fully discrete approximations. We also studied an adaptive finite element method (AFEM) based upon the Centroidal Voronoi Tessellation (CVT) and superconvergent gradient recovery. Some of these techniques are now being used by geoscientists (such as those at LANL).
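
    As a toy illustration of multirate time integration (not the scheme developed in this project), the sketch below advances a slowly varying component with one large explicit step while sub-stepping a stiff, fast component; the coupled test problem and step sizes are invented for the example.

      # Toy multirate forward-Euler integration of a fast/slow coupled system.
      def f_slow(s, f):   # slow component: gentle relaxation toward the fast one
          return -0.1 * (s - f)

      def f_fast(s, f):   # fast component: stiff relaxation toward the slow one
          return -50.0 * (f - s)

      s, f = 1.0, 0.0
      DT, substeps = 0.01, 20            # fast step = DT / substeps
      for _ in range(1000):
          s_new = s + DT * f_slow(s, f)          # one large step for the slow part
          for _ in range(substeps):              # many small steps for the fast part
              f += (DT / substeps) * f_fast(s, f)
          s = s_new

      print(s, f)   # both components relax toward a common value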

  5. Facility Modeling Capability Demonstration Summary Report

    Energy Technology Data Exchange (ETDEWEB)

    Key, Brian P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sadasivan, Pratap [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fallgren, Andrew James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demuth, Scott Francis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aleman, Sebastian E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); de Almeida, Valmor F. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Chiswell, Steven R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Hamm, Larry [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Tingey, Joel M. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2017-02-01

    A joint effort has been initiated by Los Alamos National Laboratory (LANL), Oak Ridge National Laboratory (ORNL), Savannah River National Laboratory (SRNL), and Pacific Northwest National Laboratory (PNNL), sponsored by the National Nuclear Security Administration's (NNSA's) Office of Proliferation Detection, to develop and validate a flexible framework for simulating effluents and emissions from spent fuel reprocessing facilities. These effluents and emissions can be measured by various on-site and/or off-site means, and the inverse problem can then ideally be solved through modeling and simulation to estimate characteristics of facility operation such as the nuclear material production rate. The flexible framework, called the Facility Modeling Toolkit, focused on forward modeling of PUREX reprocessing facility operating conditions from fuel storage and chopping to effluent and emission measurements.

  6. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
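
    The likelihood fitting described above can be illustrated generically (this is not the RGBM risk function or the LANL Data Bank): a binomial log-likelihood for incident probability as a function of a supersaturation-like exposure variable, maximized numerically over a single risk coefficient; the data and functional form are synthetic.

      # Hypothetical maximum-likelihood risk fit (illustrative, not the LANL RGBM).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)
      exposure = rng.uniform(0.0, 2.0, 500)            # e.g., a bubble/gas-loading metric
      true_p = 1.0 - np.exp(-0.05 * exposure**2)       # assumed underlying risk curve
      hits = rng.random(500) < true_p                  # simulated incident outcomes

      def neg_log_like(theta):
          k = abs(theta[0])
          p = 1.0 - np.exp(-k * exposure**2)
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          return -np.sum(hits * np.log(p) + (~hits) * np.log(1.0 - p))

      fit = minimize(neg_log_like, x0=[0.1], method="Nelder-Mead")
      print("fitted risk coefficient:", abs(fit.x[0]))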

  7. Data and Model-Driven Decision Support for Environmental Management of a Chromium Plume at Los Alamos National Laboratory - 13264

    Energy Technology Data Exchange (ETDEWEB)

    Vesselinov, Velimir V.; Broxton, David; Birdsell, Kay; Reneau, Steven; Harp, Dylan; Mishra, Phoolendra [Computational Earth Science - EES-16, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Katzman, Danny; Goering, Tim [Environmental Programs (ADEP), Los Alamos National Laboratory, Los Alamos NM 87545 (United States); Vaniman, David; Longmire, Pat; Fabryka-Martin, June; Heikoop, Jeff; Ding, Mei; Hickmott, Don; Jacobs, Elaine [Earth Systems Observations - EES-14, Earth and Environmental Sciences, Los Alamos National Laboratory, Los Alamos NM 87545 (United States)

    2013-07-01

    A series of site investigations and decision-support analyses have been performed related to a chromium plume in the regional aquifer beneath the Los Alamos National Laboratory (LANL). Based on the collected data and site information, alternative conceptual and numerical models representing governing subsurface processes with different complexity and resolution have been developed. The current conceptual model is supported by multiple lines of evidence based on comprehensive analyses of the available data and modeling results. The model is applied for decision-support analyses related to estimation of contaminant-arrival locations and chromium mass flux reaching the regional aquifer, and to optimization of a site monitoring-well network. Plume characterization is a challenging and non-unique problem because multiple models and contamination scenarios are consistent with the site data and conceptual knowledge. To solve this complex problem, an advanced methodology based on model calibration and uncertainty quantification has been developed within the computational framework MADS (http://mads.lanl.gov). This work implements high-performance computing and novel, efficient and robust model analysis techniques for optimization and uncertainty quantification (ABAGUS, Squads, multi-try (multi-start) techniques), which allow for solving problems with many degrees of freedom. (authors)

  8. Collisional-Radiative Modeling of Tungsten at Temperatures of 1200–2400 eV

    Directory of Open Access Journals (Sweden)

    James Colgan

    2015-04-01

    We discuss new collisional-radiative modeling calculations of tungsten at moderate temperatures of 1200 to 2400 eV. Such plasma conditions are relevant to ongoing experimental work at ASDEX Upgrade and are expected to be relevant for ITER. Our calculations are made using the Los Alamos National Laboratory (LANL) collisional-radiative modeling ATOMIC code. These calculations formed part of a submission to the recent NLTE-8 workshop that was held in November 2013. This series of workshops provides a forum for detailed comparison of plasma and spectral quantities from NLTE collisional-radiative modeling codes. We focus on the LANL ATOMIC calculations for tungsten that were submitted to the NLTE-8 workshop and discuss different models that were constructed to predict the tungsten emission. In particular, we discuss comparisons between semi-relativistic configuration-average and fully relativistic configuration-average calculations. We also present semi-relativistic calculations that include fine-structure detail, and discuss the difficult problem of ensuring completeness with respect to the number of configurations included in a CR calculation.
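
    As a generic illustration of the kind of balance a collisional-radiative model solves (a toy three-level system with invented rates, not the ATOMIC code or tungsten data), steady-state level populations satisfy a linear rate balance:

      # Toy 3-level collisional-radiative steady state (illustrative rates only).
      import numpy as np

      # Rate coefficients (s^-1), all hypothetical: C = collisional, A = radiative decay.
      C01, C10, C12, C21 = 1.0e3, 8.0e2, 5.0e2, 4.0e2
      A10, A21 = 1.0e6, 5.0e5

      # Rate matrix M such that dn/dt = M @ n; columns sum to zero.
      M = np.array([
          [-C01,        C10 + A10,          0.0        ],
          [ C01,      -(C10 + A10 + C12),   C21 + A21  ],
          [ 0.0,        C12,              -(C21 + A21) ],
      ])

      # Steady state: M @ n = 0 with populations normalized to sum to 1.
      A = np.vstack([M, np.ones(3)])
      b = np.array([0.0, 0.0, 0.0, 1.0])
      n, *_ = np.linalg.lstsq(A, b, rcond=None)
      print("fractional level populations:", n)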

  9. Initial CGE Model Results Summary Exogenous and Endogenous Variables Tests

    Energy Technology Data Exchange (ETDEWEB)

    Edwards, Brian Keith [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boero, Riccardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rivera, Michael Kelly [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    The following discussion presents initial results of tests of the most recent version of the National Infrastructure Simulation and Analysis Center Dynamic Computable General Equilibrium (CGE) model developed by Los Alamos National Laboratory (LANL). The intent of this effort is to test and assess the model's behavioral properties. The tests evaluated whether the predicted impacts are reasonable from a qualitative perspective, that is, whether a predicted change, be it an increase or decrease in other model variables, is consistent with prior economic intuition and expectations. One purpose of this effort is to determine whether model changes are needed in order to improve its behavior qualitatively and quantitatively.

  10. Neutron Multiplicity: LANL W Covariance Matrix for Curve Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Wendelberger, James G. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-12-08

    In neutron multiplicity counting one may fit a curve by minimizing an objective function, χ²_n. The objective function includes the inverse of an n-by-n matrix of covariances, W. The inverse of the W matrix has a closed-form solution, and W⁻¹ is a tridiagonal matrix. The closed form and tridiagonal nature allow for a simpler expression of the objective function χ²_n. Minimization of this simpler expression provides the optimal parameters for the fitted curve.
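
    The report's closed-form tridiagonal W⁻¹ is not reproduced here; as a schematic of the fitting step only, the Python sketch below builds a generic tridiagonal inverse covariance and minimizes χ² = rᵀ W⁻¹ r for a simple parametric curve, with the curve, data, and matrix entries all hypothetical.

      # Schematic chi-square fit with a tridiagonal inverse covariance (illustrative).
      import numpy as np
      from scipy.optimize import minimize

      n = 12
      x = np.arange(n, dtype=float)
      y = 3.0 * (1.0 - np.exp(-0.4 * x)) + np.random.default_rng(3).normal(0, 0.05, n)

      # Hypothetical tridiagonal W^{-1}: diagonal d, off-diagonal e (diagonally dominant,
      # hence positive definite).
      d, e = 4.0, -1.0
      W_inv = d * np.eye(n) + e * (np.eye(n, k=1) + np.eye(n, k=-1))

      def curve(params, x):
          a, b = params
          return a * (1.0 - np.exp(-b * x))

      def chi2(params):
          r = y - curve(params, x)
          return r @ W_inv @ r

      fit = minimize(chi2, x0=[1.0, 0.1], method="Nelder-Mead")
      print("fitted curve parameters:", fit.x)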

  11. Design and performance of the LANL 158-channel magnetoencephalography system

    Energy Technology Data Exchange (ETDEWEB)

    Matlachov, A. N. (Andrei N.); Kraus, Robert H., Jr.; Espy, M. A. (Michelle A.); Best, E. D. (Elaine D.); Briles, M. Carolyn; Raby, E. Y. (Eric Y.); Flynn, E. R.

    2002-01-01

    The design and performance of a recently completed whole-head magnetoencephalography (MEG) system using a superconducting imaging surface (SIS) surrounding an array of SQUID magnetometers are reported. The helmet-like SIS is hemispherical in shape with a brim. The SIS images nearby sources while shielding the sensors from ambient magnetic noise. The shielding factor depends on magnetometer position and orientation; typical shielding values of 200 have been observed in the central sulcus area. Nine reference channels form three vector magnetometers, which are placed outside the SIS. The signal channels consist of 149 SQUID magnetometers with 0.84 nT/Φ₀ field sensitivity and less than 3 fT/√Hz noise. Typical SQUID-to-room-temperature separations are about 20 mm in the cooled state. Twelve 16-channel flux-lock loop units are connected to two 96-channel control units, allowing up to 192 total SQUID channels. The control unit includes signal conditioning circuits as well as system test and control circuits. After conditioning, all signals are fed to a 192-channel, 24-bit data acquisition system capable of sampling at up to 48 kSa/sec/channel. The SIS-MEG system enables high-quality human functional brain data to be recorded in a one-layer magnetically shielded room.

  12. CH Packaging Operations for High Wattage Waste at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2003-03-21

    This procedure provides instructions for assembling the following contact-handled (CH) packaging payloads: - Drum payload assembly - Standard Waste Box (SWB) assembly - Ten-Drum Overpack (TDOP) In addition, this procedure also provides operating instructions for the TRUPACT-II CH waste packaging. This document also provides instructions for performing ICV and OCV preshipment leakage rate tests on the following packaging seals, using a nondestructive helium (He) leak test: - ICV upper main O-ring seal - ICV outer vent port plug O-ring seal - OCV upper main O-ring seal - OCV vent port plug O-ring seal.

  13. CH Packaging Operations for High Wattage Waste at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2003-05-06

    This procedure provides instructions for assembling the following contact-handled (CH) packaging payloads: - Drum payload assembly - Standard Waste Box (SWB) assembly - Ten-Drum Overpack (TDOP) In addition, this procedure also provides operating instructions for the TRUPACT-II CH waste packaging. This document also provides instructions for performing ICV and OCV preshipment leakage rate tests on the following packaging seals, using a nondestructive helium (He) leak test: - ICV upper main O-ring seal - ICV outer vent port plug O-ring seal - OCV upper main O-ring seal - OCV vent port plug O-ring seal.

  14. CH Packaging Operations for High Wattage Waste at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2002-12-18

    This procedure provides instructions for assembling the following contact-handled (CH) packaging payloads: - Drum payload assembly - Standard Waste Box (SWB) assembly - Ten-Drum Overpack (TDOP) In addition, this procedure also provides operating instructions for the TRUPACT-II CH waste packaging. This document also provides instructions for performing ICV and OCV preshipment leakage rate tests on the following packaging seals, using a nondestructive helium (He) leak test: - ICV upper main O-ring seal - ICV outer vent port plug O-ring seal - OCV upper main O-ring seal - OCV vent port plug O-ring seal.

  15. CH Packaging Operations for High Wattage Waste at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2002-10-17

    This procedure provides instructions for assembling the following contact-handled (CH) packaging payloads: - Drum payload assembly - Standard Waste Box (SWB) assembly - Ten-Drum Overpack (TDOP) In addition, this procedure provides operating instructions for the TRUPACT-II CH waste packaging. This document also provides instructions for performing ICV and OCV preshipment leakage rate tests on the following packaging seals, using a nondestructive helium (He) leak test: - ICV upper main O-ring seal - ICV outer vent port plug O-ring seal - OCV upper main O-ring seal - OCV vent port plug O-ring seal.

  16. CH Packaging Operations for High Wattage Waste at LANL

    Energy Technology Data Exchange (ETDEWEB)

    Washington TRU Solutions LLC

    2003-08-28

    This procedure provides instructions for assembling the following contact-handled (CH) packaging payloads: - Drum payload assembly - Standard Waste Box (SWB) assembly - Ten-Drum Overpack (TDOP) In addition, this procedure also provides operating instructions for the TRUPACT-II CH waste packaging. This document also provides instructions for performing ICV and OCV preshipment leakage rate tests on the following packaging seals, using a nondestructive helium (He) leak test: - ICV upper main O-ring seal - ICV outer vent port plug O-ring seal - OCV upper main O-ring seal - OCV vent port plug O-ring seal.

  17. Antibody binding to p-Si using LANL SAM chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Aaron S [Los Alamos National Laboratory

    2010-12-06

    This NMSBA-sponsored project involves the attachment of antibodies to polymeric silicon (p-Si) surfaces, with the ultimate goal of attaching antibodies to nanowires for Vista Therapeutics, Inc. (Santa Fe, NM). This presentation describes the functionalization of p-Si surfaces, the activation of terminal carboxylates on these surfaces, the conjugation of antibodies, and the analyses undertaken at each step. The results of this work show that antibody conjugation is possible on p-Si coatings using the well-known EDC/NHS activation chemistry.

  18. Multiple encapsulation of LANL waste using polymers. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Schwartz, R.L.

    1994-08-12

    Polymer encapsulation of lead shielding/blasting grit (surrogate) mixed waste was optimized at bench scale using melamine formaldehyde, polyurethane, and butadiene thermosetting polymers. Three pellet-based intermediate waste forms, and a final waste form, were prepared, each providing an additional level of integrity. Encapsulated waste integrity was measured by chemical and physical techniques. Compliance was established using the Toxicity Characteristic Leaching Procedure. Equipment appropriate to pilot-scale demonstration of program techniques was investigated. A preliminary equipment list and layout, and process block flow diagram were prepared.

  19. Schema for the LANL infrasound analysis tool, infrapy

    Energy Technology Data Exchange (ETDEWEB)

    Dannemann, Fransiska Kate [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Marcillo, Omar Eduardo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-04-14

    The purpose of this document is to define the schema used for the operation of the infrasound analysis tool, infrapy. The tables described by this document extend the CSS3.0 or KB core schema to include information required for the operation of infrapy. This document is divided into three sections, the first being this introduction. Section two defines eight new, infrasonic data processing-specific database tables. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams.
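
    The actual table definitions are given in the schema document itself; purely as a hypothetical illustration of the kind of attributes a CSS3.0-style extension record for infrasound processing might carry (every field name below is invented and is not the infrapy schema), a single row could be modeled as:

      # Hypothetical record structure for an infrasound-detection table row.
      # Field names are invented for illustration; they are NOT the infrapy schema.
      from dataclasses import dataclass

      @dataclass
      class HypotheticalInfrasoundDetection:
          detection_id: int        # primary key, analogous to CSS3.0 integer ids
          array_name: str          # station/array code
          time: float              # epoch seconds of detection onset
          back_azimuth: float      # degrees clockwise from north
          trace_velocity: float    # km/s across the array
          f_statistic: float       # detector statistic
          author: str              # analyst or automated-process tag
          lddate: str              # load date, following CSS3.0/KB core conventions

      row = HypotheticalInfrasoundDetection(
          1, "I57US", 1490000000.0, 245.3, 0.34, 12.7, "auto", "2017-04-14")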

  20. Photographic Documentation of Emerald Spreadwing at TA-3, LANL

    Energy Technology Data Exchange (ETDEWEB)

    Foy, Bernard R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-06-20

    Los Alamos National Laboratory has a considerable amount of suitable habitat for odonates, or dragonflies and damselflies. Few of these have been properly documented, however. With photographic documentation, the quality and size of odonate habitat on land owned by the Department of Energy will become more apparent to land managers.

  1. Evaluation of LANL Capabilities for Fabrication of TREAT Conversion Fuel

    Energy Technology Data Exchange (ETDEWEB)

    Luther, Erik Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Leckie, Rafael M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dombrowski, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-03-06

    This report estimates the costs and schedule associated with scale-up and fabrication of a low-enriched uranium (LEU) core for the Transient Reactor Test Facility (TREAT) reactor. The study considers facilities available at Los Alamos National Laboratory, facility upgrades, equipment, installation, and staffing costs. Not included are costs associated with raw materials and off-site shipping. These estimates are considered rough order of magnitude. At this time, no specifications for the LEU core have been made, nor has the final schedule needed by the national program been established. The estimate range (+/-100%) reflects this large uncertainty and is subject to change as the project scope becomes more defined.

  2. LANL technical progress update for US HJPRR working group

    Energy Technology Data Exchange (ETDEWEB)

    Dombrowski, David E [Los Alamos National Laboratory

    2011-01-06

    The outline of this presentation is: (1) Collaboration on Master Alloy Melting; (2) Data for Safety Analysis; (3) HIP can development status; (4) Bond strength quality; (5) Plasma spraying results; and (6) Bare Rolling of Larger Rolling Ingots. Significant near-term progress has been made in five areas: (1) Collaboration on Master Alloy Melting; (2) HIP can development status; (3) Bond strength quality; (4) Plasma spraying results; and (5) Bare Rolling of Larger Rolling Ingots. Significant progress is expected in the next month in several important areas: (1) Intrinsic bond strength of plasma sprayed Zr; (2) Advanced Cleaning; (3) Residual Stress Collaboration with INL; and (4) Cost Metric Assessment.

  3. High-level assessment of LANL ABC Design

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-15

    An annual weapons-grade Pu disposition goal should be stated and related to the amount of Pu that needs to be disposed of. It needs to be determined to what extent it is possible to destroy Pu without building up any new Pu, i.e., how realistic this goal is. The strong positive Doppler coefficient for a Pu core might require the addition of some fertile material to ensure a negative Doppler coefficient. This in turn will affect the net Pu disposition rate. If a fertile material is required throughout the life of the ABC to ensure a negative Doppler coefficient, the difference between the molten salt ABC and other reactors in regard to Pu disposition is no longer a difference of principle but one of degree. A rationale then has to be developed that explains why "x" kg of fissile material production is acceptable but "y" kg is not. It is important to determine how a requirement for electricity production will impact the ABC design choices. It is conceivable that DOE will not insist on electricity generation. In this case, advantage should be taken of design simplifications and relaxed operating conditions.

  4. Teaching internet use to adult learners: The LANL experience

    Energy Technology Data Exchange (ETDEWEB)

    Smith, S.; Comstock, D.

    1995-12-01

    The Research Library at Los Alamos National Laboratory has been teaching an Internet class to adult learners since May 1994. The class is a team effort, combining lecture/demo with hands-on practice using Gopher and the World Wide Web. What started out as a small short-term project has become a weekly class available to any Lab employee or associate. More than 250 people have been taught to find basic reference materials and to navigate the Internet using Gopher and the World Wide Web. The class is one of the first classes offered by the Research Library to be filled every month, and one Laboratory group has recommended that their staff attend this class in preparation for more advanced Internet and HTML classes as part of their group training. The success of this class spurred development by the Research Library of more specific subject classes using Internet resources, specifically business and general science resources.

  5. Dispersive internal long wave models

    Energy Technology Data Exchange (ETDEWEB)

    Camassa, R.; Choi, W.; Holm, D.D. [Los Alamos National Lab., NM (United States); Levermore, C.D.; Lvov, Y. [Univ. of Arizona, Tucson, AZ (United States)

    1998-11-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). This work is a joint analytical and numerical study of internal dispersive water wave propagation in a stratified two-layer fluid, a problem that has important geophysical fluid dynamics applications. Two-layer models can capture the main density-dependent effects because they can support, unlike homogeneous fluid models, the observed large amplitude internal wave motion at the interface between layers. The authors have derived new model equations using multiscale asymptotics in combination with the method they have developed for vertically averaging velocity and vorticity fields across fluid layers within the original Euler equations. The authors have found new exact conservation laws for layer-mean vorticity that have exact counterparts in the models. With this approach, they have derived a class of equations that retain the full nonlinearity of the original Euler equations while preserving the simplicity of known weakly nonlinear models, thus providing the theoretical foundation for experimental results so far unexplained.
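
    For orientation only, the dispersionless two-layer shallow-water system that such derivations commonly start from can be written as follows, with layer 1 on top, layer thicknesses h_i, layer-mean velocities u_i, densities rho_1 < rho_2, and a flat bottom; the new models reported here add dispersive corrections, obtained by vertically averaging the Euler equations, that are not reproduced in this sketch.

      \begin{aligned}
      &\partial_t h_i + \partial_x (h_i u_i) = 0, \qquad i = 1,2, \\
      &\partial_t u_1 + u_1\,\partial_x u_1 + g\,\partial_x (h_1 + h_2) = 0, \\
      &\partial_t u_2 + u_2\,\partial_x u_2 + g\,\frac{\rho_1}{\rho_2}\,\partial_x h_1 + g\,\partial_x h_2 = 0 .
      \end{aligned}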

  6. dfnWorks: A HPC Workflow for Discrete Fracture Network Modeling with Subsurface Flow and Transport Applications

    Science.gov (United States)

    Gable, C. W.; Hyman, J.; Karra, S.; Makedonska, N.; Painter, S. L.; Viswanathan, H. S.

    2015-12-01

    dfnWorks generates discrete fracture networks (DFN) of planar polygons, creates a high quality conforming Delaunay triangulation of the intersecting DFN polygons, assigns properties (aperture, permeability) using geostatistics, sets boundary and initial conditions, solves pressure/flow in single or multi-phase fluids (water, air, CO2) using the parallel PFLOTRAN or serial FEHM, and solves for transport using Lagrangian particle tracking. We outline the dfnWorks workflow and present applications from a range of fractured rock systems. dfnWorks (http://www.lanl.gov/expertise/teams/view/dfnworks) is composed of three main components, all of which are freely available. dfnGen generates a distribution of fracture polygons from site characterization data (statistics or deterministic fractures) and utilizes the FRAM (Feature Rejection Algorithm for Meshing) to guarantee the mesh generation package LaGriT (lagrit.lanl.gov) will generate a high quality conforming Delaunay triangular mesh. dfnWorks links the mesh to either PFLOTRAN (pflotran.org) or FEHM (fehm.lanl.gov) for solving flow and transport. The various physics options available in FEHM and PFLOTRAN such as single and multi-phase flow and reactive transport are all available with appropriate initial and boundary conditions and material property models. dfnTrans utilizes explicit Lagrangian particle tracking on the DFN using a velocity field reconstructed from the steady state pressure/flow field solution obtained in PFLOTRAN or FEHM. Applications are demonstrated for nuclear waste repository in fractured granite, CO2 sequestration and extraction of unconventional hydrocarbon resources.

  7. Influence of Sea Ice on Arctic Marine Sulfur Biogeochemistry in the Community Climate System Model

    Energy Technology Data Exchange (ETDEWEB)

    Deal, Clara [Univ. of Alaska, Fairbanks, AK (United States); Jin, Meibing [Univ. of Alaska, Fairbanks, AK (United States)

    2013-06-30

    Global climate models (GCMs) have not effectively considered how responses of arctic marine ecosystems to a warming climate will influence the global climate system. A key response of arctic marine ecosystems that may substantially influence energy exchange in the Arctic is a change in dimethylsulfide (DMS) emissions, because DMS emissions influence cloud albedo. This response is closely tied to sea ice through its impacts on marine ecosystem carbon and sulfur cycling, and the ice-albedo feedback implicated in accelerated arctic warming. To reduce the uncertainty in predictions from coupled climate simulations, important model components of the climate system, such as feedbacks between arctic marine biogeochemistry and climate, need to be reasonably and realistically modeled. This research first involved model development to improve the representation of marine sulfur biogeochemistry simulations to understand/diagnose the control of sea-ice-related processes on the variability of DMS dynamics. This study will help build GCM predictions that quantify the relative current and possible future influences of arctic marine ecosystems on the global climate system. Our overall research objective was to improve arctic marine biogeochemistry in the Community Climate System Model (CCSM, now CESM). Working closely with the Climate Ocean Sea Ice Model (COSIM) team at Los Alamos National Laboratory (LANL), we added sea-ice algae and arctic DMS production and related biogeochemistry to the global Parallel Ocean Program model (POP) coupled to the LANL sea ice model (CICE). Both CICE and POP are core components of CESM. Our specific research objectives were: 1) Develop a state-of-the-art ice-ocean DMS model for application in climate models, using observations to constrain the most crucial parameters; 2) Improve the global marine sulfur model used in CESM by including DMS biogeochemistry in the Arctic; and 3) Assess how sea ice influences DMS dynamics in the arctic marine

  8. NSR&D FY15 Final Report. Modeling Mechanical, Thermal, and Chemical Effects of Impact

    Energy Technology Data Exchange (ETDEWEB)

    Long, Christopher Curtis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ma, Xia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zhang, Duan Zhong [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-02

    The main goal of this project is to develop a computer model that explains and predicts coupled mechanical, thermal and chemical responses of HE under impact and friction insults. The modeling effort is based on the LANL-developed CartaBlanca code, which is implemented with the dual domain material point (DDMP) method to calculate complex and coupled thermal, chemical and mechanical effects among fluids, solids and the transitions between the states. In FY 15, we have implemented the TEPLA material model for metal and performed preliminary can penetration simulation and begun to link with experiment. Currently, we are working on implementing a shock to detonation transition (SDT) model (SURF) and JWL equation of state.

  9. Progress Report 2008: A Scalable and Extensible Earth System Model for Climate Change Science

    Energy Technology Data Exchange (ETDEWEB)

    Drake, John B [ORNL; Worley, Patrick H [ORNL; Hoffman, Forrest M [ORNL; Jones, Phil [Los Alamos National Laboratory (LANL)

    2009-01-01

    This project employs multi-disciplinary teams to accelerate development of the Community Climate System Model (CCSM), based at the National Center for Atmospheric Research (NCAR). A consortium of eight Department of Energy (DOE) National Laboratories collaborates with NCAR and the NASA Global Modeling and Assimilation Office (GMAO). The laboratories are Argonne (ANL), Brookhaven (BNL), Los Alamos (LANL), Lawrence Berkeley (LBNL), Lawrence Livermore (LLNL), Oak Ridge (ORNL), Pacific Northwest (PNNL), and Sandia (SNL). The work plan focuses on scalability for petascale computation and extensibility to a more comprehensive earth system model. Our stated goal is to support the DOE mission in climate change research by helping ... to determine the range of possible climate changes over the 21st century and beyond through simulations using a more accurate climate system model that includes the full range of human and natural climate feedbacks with increased realism and spatial resolution.

  10. Models

    DEFF Research Database (Denmark)

    Juel-Christiansen, Carsten

    2005-01-01

    The article highlights visual rotation - images, drawings, models, works - as the privileged medium in the communication of ideas between creative architects.

  11. Modelling

    CERN Document Server

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models to describe their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H$^-$ sources), together with some remarks on beam transport.

  12. model

    African Journals Online (AJOL)

    the neural construction of individual and communal identities in ... occurs, including models based on information processing, ... Applying the DSM descriptive approach to dissociation in the ... a personal, narrative path that connects personal to ethnic ... managed the problem in the context of the community, using a ...

  13. The Los Alamos National Laboratory Chemistry and Metallurgy Research Facility upgrades project - A model for waste minimization

    Energy Technology Data Exchange (ETDEWEB)

    Burns, M.L.; Durrer, R.E.; Kennicott, M.A.

    1996-07-01

    The Los Alamos National Laboratory (LANL) Chemistry and Metallurgy Research (CMR) Facility, constructed in 1952, is currently undergoing a major, multi-year construction project. Many of the operations required under this project (i.e., design, demolition, decontamination, construction, and waste management) mimic the processes required of a large-scale decontamination and decommissioning (D&D) job and are identical to the requirements of any of several upgrades projects anticipated for LANL and other Department of Energy (DOE) sites. For these reasons the CMR Upgrades Project is seen as an ideal model facility in which to test the application, and measure the success, of waste minimization techniques that could be brought to bear on any of the similar projects. The purpose of this paper is to discuss the past, present, and anticipated waste minimization applications at the facility, focusing on the development and execution of the project's "Waste Minimization/Pollution Prevention Strategic Plan."

  14. Modeling aeolian transport of soil-bound plutonium: considering infrequent but normal environmental disturbances is critical in estimating future dose.

    Science.gov (United States)

    Michelotti, Erika A; Whicker, Jeffrey J; Eisele, William F; Breshears, David D; Kirchner, Thomas B

    2013-06-01

    Dose assessments typically consider environmental systems as static through time, but environmental disturbances such as drought and fire are normal, albeit infrequent, events that can impact dose-influential attributes of many environmental systems. These phenomena occur over time frames of decades or longer, and are likely to be exacerbated under projected warmer, drier climate. As with other types of dose assessment, the impacts of environmental disturbances are often overlooked when evaluating dose from aeolian transport of radionuclides and other contaminants. Especially lacking are predictions that account for potential changing vegetation cover effects on radionuclide transport over the long time frames required by regulations. A recently developed dynamic wind-transport model that included vegetation succession and environmental disturbance provides more realistic long-term predictability. This study utilized the model to estimate emission rates for aeolian transport, and compare atmospheric dispersion and deposition rates of airborne plutonium-contaminated soil into neighboring areas with and without environmental disturbances. Specifically, the objective of this study was to utilize the model results as input for a widely used dose assessment model (CAP-88). Our case study focused on low levels of residual plutonium found in soils from past operations at Los Alamos National Laboratory (LANL), in Los Alamos, NM, located in the semiarid southwestern USA. Calculations were conducted for different disturbance scenarios based on conditions associated with current climate, and a potential future drier and warmer climate. Known soil and sediment concentrations of plutonium were used to model dispersal and deposition of windblown residual plutonium, as a function of distance and direction. Environmental disturbances that affected vegetation cover included ground fire, crown fire, and drought, with reoccurrence rates for current climate based on site historical
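
    As a back-of-the-envelope illustration of how a wind-driven emission rate feeds an inhalation dose estimate (a schematic chain, not CAP-88), the dose scales as emission rate times an atmospheric dilution factor, a breathing rate, and a dose coefficient; every value below is illustrative only.

      # Schematic inhalation-dose chain (illustrative values only; not CAP-88).
      emission_rate_bq_per_s = 2.0e-3   # hypothetical wind-driven Pu resuspension rate
      chi_over_q_s_per_m3 = 1.0e-6      # hypothetical atmospheric dilution factor at the receptor
      breathing_rate_m3_per_s = 2.7e-4  # roughly 23 m3/day for an adult at light activity
      dose_coeff_sv_per_bq = 1.6e-5     # order-of-magnitude inhalation dose coefficient for Pu aerosols
      seconds_per_year = 3.15e7

      air_concentration_bq_per_m3 = emission_rate_bq_per_s * chi_over_q_s_per_m3
      annual_dose_sv = (air_concentration_bq_per_m3 * breathing_rate_m3_per_s
                        * dose_coeff_sv_per_bq * seconds_per_year)
      print(f"annual inhalation dose ~ {annual_dose_sv:.2e} Sv")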

  15. Modeling the dynamic behavior of proton-exchange membrane fuel cell

    Energy Technology Data Exchange (ETDEWEB)

    Llapade, Peter O [Los Alamos National Laboratory; Mukundan, Rangachary [Los Alamos National Laboratory; Davey, John R [Los Alamos National Laboratory; Borup, Rodney L [Los Alamos National Laboratory; Meyers, Jeremy P [UNIV OF TEXAS-AUSTIN

    2010-01-01

    A two-phase transient model has been developed that incorporates the permanent hysteresis observed in the experimentally measured capillary pressure of the GDL. The model provides an explanation for the difference in time constant between membrane hydration and dehydration observed in the HFR experiment conducted at LANL. When there is liquid water at the cathode catalyst layer, the time constant of the water content in the membrane is closely tied to that of the liquid water saturation in the CCL, as the vapor is already saturated. The water content in the membrane will not reach steady state as long as the liquid water flow in the CCL is not at steady state. Also, increased resistance to proton transport in the membrane is observed when the cell voltage is stepped down to a very low value.

  16. Forward model theoretical basis for a superconducting imaging surface magnetoencephalography system

    Energy Technology Data Exchange (ETDEWEB)

    Maharajh, K [University of New Mexico, Albuquerque, NM (United States); Volegov, P L [Los Alamos National Laboratory, Los Alamos, NM (United States); Kraus, R H [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2004-02-21

    A novel magnetoencephalography (MEG) system was designed at Los Alamos National Laboratory (LANL) that incorporates a helmet-shaped superconductor in order to increase the signal to noise ratio. The magnetic field perturbations caused by the superconducting surface must be included in the forward physics for accurate source localization. In this paper, the theoretical basis for the forward model that calculates the field of any magnetic source in the presence of an arbitrarily shaped superconducting surface is presented. Appropriate magnetic field integral equations are derived that provide a description of the physics of the forward model. These equations are derived starting from Maxwell's equations in the presence of inhomogeneous media, with the appropriate boundary conditions for a superconductor. A discretized version of this equation is then compared with known analytic solutions for simple superconducting surface geometries.
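
    In generic terms (not necessarily the exact formulation of this paper), the forward field is the source field plus the field of the shielding surface currents K on the superconducting surface S, and K is fixed by the flux-exclusion boundary condition; discretizing the resulting integral equation gives a linear system for K:

      \begin{aligned}
      &\mathbf{B}(\mathbf{r}) = \mathbf{B}_{\mathrm{src}}(\mathbf{r})
        + \frac{\mu_0}{4\pi}\int_S \frac{\mathbf{K}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|^{3}}\, dS', \\
      &\hat{\mathbf{n}}(\mathbf{r})\cdot\mathbf{B}(\mathbf{r}) = 0 \quad \text{for } \mathbf{r} \in S .
      \end{aligned}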

  17. Modeling Chemical Detection Sensitivities of Active and Passive Remote Sensing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Scharlemann, E T

    2003-07-28

    During nearly a decade of remote sensing programs under the auspices of the U. S. Department of Energy (DOE), LLNL has developed a set of performance modeling codes--called APRS--for both Active and Passive Remote Sensing systems. These codes emphasize chemical detection sensitivity in the form of minimum detectable quantities with and without background spectral clutter and in the possible presence of other interfering chemicals. The codes have been benchmarked against data acquired in both active and passive remote sensing programs at LLNL and Los Alamos National Laboratory (LANL). The codes include, as an integral part of the performance modeling, many of the data analysis techniques developed in the DOE's active and passive remote sensing programs (e.g., "band normalization" for an active system, principal component analysis for a passive system).

  18. The Cerro Grande Fire - From Wildlife Modeling Through the Fire Aftermath

    Energy Technology Data Exchange (ETDEWEB)

    Rudell, T. M. (Theresa M.); Gille, R. W. (Roland W.)

    2001-01-01

    The Cerro Grande Fire developed from a prescribed burn by the National Park Service at Bandelier National Monument near Los Alamos, New Mexico. When the burn went out of control and became a wildfire, it attracted worldwide attention because it threatened the birthplace of the atomic bomb, Los Alamos National Laboratory (LANL). Was LANL prepared for a fire? What lessons have been learned?

  19. A Predictive Model of Geosynchronous Magnetopause Crossings

    CERN Document Server

    Dmitriev, A; Chao, J -K

    2013-01-01

    We have developed a model that predicts whether or not the magnetopause crosses geosynchronous orbit at a given location for a given solar wind pressure Psw, Bz component of the interplanetary magnetic field (IMF), and geomagnetic conditions characterized by the 1-min SYM-H index. The model is based on more than 300 geosynchronous magnetopause crossings (GMCs) and about 6000 minutes when geosynchronous satellites of the GOES and LANL series were located in the magnetosheath (so-called MSh intervals) from 1994 to 2001. Minimizing the Psw required for GMCs and MSh intervals at various locations, Bz, and SYM-H allows us to describe both the effect of magnetopause dawn-dusk asymmetry and the saturation of the Bz influence for very large southward IMF. The asymmetry is strong for large negative Bz and almost disappears when Bz is positive. We found that the larger the amplitude of negative SYM-H, the lower the solar wind pressure required for GMCs. We attribute this effect to a depletion of the dayside magnetic field by a storm-time intensification of t...

  20. Non-adiabatic Excited State Molecule Dynamics Modeling of Photochemistry and Photophysics of Materials

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, Tammie Renee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Tretiak, Sergei [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    Understanding and controlling excited state dynamics lies at the heart of all our efforts to design photoactive materials with desired functionality. This tailor-design approach has become the standard for many technological applications (e.g., solar energy harvesting) including the design of organic conjugated electronic materials with applications in photovoltaic and light-emitting devices. Over the years, our team has developed efficient LANL-based codes to model the relevant photophysical processes following photoexcitation (spatial energy transfer, excitation localization/delocalization, and/or charge separation). The developed approach allows the non-radiative relaxation to be followed on up to ~10 ps timescales for large realistic molecules (hundreds of atoms in size) in the realistic solvent dielectric environment. The Collective Electronic Oscillator (CEO) code is used to compute electronic excited states, and the Non-adiabatic Excited State Molecular Dynamics (NA-ESMD) code is used to follow the non-adiabatic dynamics on multiple coupled Born-Oppenheimer potential energy surfaces. Our preliminary NA-ESMD simulations have revealed key photoinduced mechanisms controlling competing interactions and relaxation pathways in complex materials, including organic conjugated polymer materials, and have provided a detailed understanding of photochemical products and intermediates and the internal conversion process during the initiation of energetic materials. This project will be using LANL-based CEO and NA-ESMD codes to model nonradiative relaxation in organic and energetic materials. The NA-ESMD and CEO codes belong to a class of electronic structure/quantum chemistry codes that require large memory, “long-queue-few-core” distribution of resources in order to make useful progress. The NA-ESMD simulations are trivially parallelizable requiring ~300 processors for up to one week runtime to reach a meaningful restart point.

  1. Improving and Testing Regional Attenuation and Spreading Models Using Well-Constrained Source Terms, Multiple Methods and Datasets

    Science.gov (United States)

    2013-07-03

    ... and ABKT for event 13117 (right), using Q corrections from fitting source-corrected spectra (labeled MDF) and from tomography by LANL and LLNL (see legends). 2.2. Data Quality Control. Second, data quality directly impacts ...

  2. GASFLOW: A Computational Fluid Dynamics Code for Gases, Aerosols, and Combustion, Volume 1: Theory and Computational Model

    Energy Technology Data Exchange (ETDEWEB)

    Nichols, B.D.; Mueller, C.; Necker, G.A.; Travis, J.R.; Spore, J.W.; Lam, K.L.; Royl, P.; Redlinger, R.; Wilson, T.L.

    1998-10-01

    Los Alamos National Laboratory (LANL) and Forschungszentrum Karlsruhe (FzK) are developing GASFLOW, a three-dimensional (3D) fluid dynamics field code as a best-estimate tool to characterize local phenomena within a flow field. Examples of 3D phenomena include circulation patterns; flow stratification; hydrogen distribution mixing and stratification; combustion and flame propagation; effects of noncondensable gas distribution on local condensation and evaporation; and aerosol entrainment, transport, and deposition. An analysis with GASFLOW will result in a prediction of the gas composition and discrete particle distribution in space and time throughout the facility and the resulting pressure and temperature loadings on the walls and internal structures with or without combustion. A major application of GASFLOW is for predicting the transport, mixing, and combustion of hydrogen and other gases in nuclear reactor containments and other facilities. It has been applied to situations involving transporting and distributing combustible gas mixtures. It has been used to study gas dynamic behavior (1) in low-speed, buoyancy-driven flows, as well as sonic flows or diffusion dominated flows; and (2) during chemically reacting flows, including deflagrations. The effects of controlling such mixtures by safety systems can be analyzed. The code version described in this manual is designated GASFLOW 2.1, which combines previous versions of the United States Nuclear Regulatory Commission code HMS (for Hydrogen Mixing Studies) and the Department of Energy and FzK versions of GASFLOW. The code was written in standard Fortran 90. This manual comprises three volumes. Volume I describes the governing physical equations and computational model. Volume II describes how to use the code to set up a model geometry, specify gas species and material properties, define initial and boundary conditions, and specify different outputs, especially graphical displays. Sample problems are included

  3. Cooling tower and plume modeling for satellite remote sensing applications

    Energy Technology Data Exchange (ETDEWEB)

    Powers, B.J.

    1995-05-01

    It is often useful in nonproliferation studies to be able to remotely estimate the power generated by a power plant. Such information is indirectly available through an examination of the power dissipated by the plant. Power dissipation is generally accomplished either by transferring the excess heat generated into the atmosphere or into bodies of water. It is the former method with which we are exclusively concerned in this report. We discuss in this report the difficulties associated with such a task. In particular, we primarily address the remote detection of the temperature associated with the condensed water plume emitted from the cooling tower. We find that the effective emissivity of the plume is of fundamental importance for this task. Having examined the dependence of the plume emissivity in several IR bands and with varying liquid water content and droplet size distributions, we conclude that the plume emissivity, and consequently the plume brightness temperature, is dependent upon not only the liquid water content and band, but also upon the droplet size distribution. Finally, we discuss models dependent upon a detailed point-by-point description of the hydrodynamics and thermodynamics of the plume dynamics and those based upon spatially integrated models. We describe in detail a new integral model, the LANL Plume Model, which accounts for the evolution of the droplet size distribution. Some typical results obtained from this model are discussed.
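
    A drastically simplified, scattering-free illustration of why plume emissivity controls the retrieved brightness temperature (this is not the LANL Plume Model): treat the plume as a grey layer whose emissivity grows with liquid water path in front of a colder background; the absorption coefficient below is an order-of-magnitude value for liquid water in the thermal infrared, and all other inputs are invented.

      # Grey-layer plume brightness temperature (simplified; scattering neglected).
      import numpy as np

      def plume_brightness_temp(lwp_kg_m2, t_plume_k, t_background_k, kappa_m2_per_kg=130.0):
          # kappa is a band-averaged mass absorption coefficient for liquid water
          # (order-of-magnitude thermal-infrared value, used here illustratively).
          emissivity = 1.0 - np.exp(-kappa_m2_per_kg * lwp_kg_m2)
          return emissivity * t_plume_k + (1.0 - emissivity) * t_background_k

      for lwp in (0.002, 0.02, 0.2):  # liquid water path in kg/m^2
          tb = plume_brightness_temp(lwp, t_plume_k=285.0, t_background_k=230.0)
          print(f"LWP = {lwp:5.3f} kg/m2 -> brightness temperature ~ {tb:5.1f} K")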

  4. The Biosurveillance Analytics Resource Directory (BARD: Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance.

    Directory of Open Access Journals (Sweden)

    Kristen J Margevicius

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  5. The Biosurveillance Analytics Resource Directory (BARD): Facilitating the Use of Epidemiological Models for Infectious Disease Surveillance.

    Science.gov (United States)

    Margevicius, Kristen J; Generous, Nicholas; Abeyta, Esteban; Althouse, Ben; Burkom, Howard; Castro, Lauren; Daughton, Ashlynn; Del Valle, Sara Y; Fairchild, Geoffrey; Hyman, James M; Kiang, Richard; Morse, Andrew P; Pancerella, Carmen M; Pullum, Laura; Ramanathan, Arvind; Schlegelmilch, Jeffrey; Scott, Aaron; Taylor-McCabe, Kirsten J; Vespignani, Alessandro; Deshpande, Alina

    2016-01-01

    Epidemiological modeling for infectious disease is important for disease management and its routine implementation needs to be facilitated through better description of models in an operational context. A standardized model characterization process that allows selection or making manual comparisons of available models and their results is currently lacking. A key need is a universal framework to facilitate model description and understanding of its features. Los Alamos National Laboratory (LANL) has developed a comprehensive framework that can be used to characterize an infectious disease model in an operational context. The framework was developed through a consensus among a panel of subject matter experts. In this paper, we describe the framework, its application to model characterization, and the development of the Biosurveillance Analytics Resource Directory (BARD; http://brd.bsvgateway.org/brd/), to facilitate the rapid selection of operational models for specific infectious/communicable diseases. We offer this framework and associated database to stakeholders of the infectious disease modeling field as a tool for standardizing model description and facilitating the use of epidemiological models.

  6. Groundwater Pathway Model for the Los Alamos National Laboratory Technical Area 54, Area G, Revision 1

    Energy Technology Data Exchange (ETDEWEB)

    Stauffer, Philip H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chu, Shaoping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Miller, Terry A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Strobridge, Daniel M. [Neptune Inc., Los Alamos, NM (United States); Cole, Gregory L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Birdsell, Kay H. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Robinson, Bruce Alan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gable, Carl Walter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Broxton, David E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Springer, Everett P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schofield, Tracy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-10

    This report consists of four major sections, including this introductory section. Section 2 provides an overview of previous investigations related to the development of the current site-scale model. The methods and data used to develop the 3-D groundwater model and the techniques used to distill that model into a form suitable for use in the GoldSim models are discussed in Section 3. Section 4 presents the results of the model development effort and discusses some of the uncertainties involved. Eight attachments that provide details about the components and data used in this groundwater pathway model are also included with this report. The groundwater modeling effort reported here is a revision of the work that was conducted in 2005 (Stauffer et al., 2005a) in support of the 2008 Area G performance assessment and composite analysis (LANL, 2008). The revision effort was undertaken primarily to incorporate new geologic information that has been collected since 2003 at, and in the vicinity of, Area G. The new data were used to create a more accurate geologic framework model (GFM) that forms the basis of the numerical modeling of the site’s long-term performance. The groundwater modeling uses mean hydrologic properties of the geologic strata underlying Area G; this revision includes an evaluation of the impacts that natural variability in these properties may have on the model projections.

  7. Site-Scale Saturated Zone Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    G. Zyvoloski

    2003-12-17

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure (AP)-SIII.10Q, ''Models''. This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the ''Site-Scale Saturated Zone Transport'', MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County wells lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca

  8. Red Storm usage model: Version 1.12.

    Energy Technology Data Exchange (ETDEWEB)

    Jefferson, Karen L.; Sturtevant, Judith E.

    2005-12-01

    Red Storm is an Advanced Simulation and Computing (ASC) funded massively parallel supercomputer located at Sandia National Laboratories (SNL). The Red Storm Usage Model (RSUM) documents the capabilities and the environment provided for the FY05 Tri-Lab Level II Limited Availability Red Storm User Environment Milestone and the FY05 SNL Level II Limited Availability Red Storm Platform Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Los Alamos National Laboratory (LANL), Lawrence Livermore National Laboratory (LLNL), and SNL. Additionally, the Red Storm Usage Model maps the provided capabilities to the Tri-Lab ASC Computing Environment (ACE) requirements. The ACE requirements reflect the high performance computing requirements for the ASC community and have been updated in FY05 to reflect the community's needs. For each section of the RSUM, Appendix I maps the ACE requirements to the Limited Availability User Environment capabilities and includes a description of ACE requirements met and those requirements that are not met in that particular section. The Red Storm Usage Model, along with the ACE mappings, has been issued and vetted throughout the Tri-Lab community.

  9. A framework for hierarchical, object-oriented simulation modeling of a steel manufacturing enterprise

    Energy Technology Data Exchange (ETDEWEB)

    Henriksen, A.D.; Joyce, E.L.; Lally, B.R. [and others

    1997-10-01

    This is the final report of a two-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of the project is to combine detailed physical models of industrial processes with unit operations and business-level models. This would allow global and individual process control schemes to be implemented that would facilitate improved overall system performance. Intelligent decision support that employs expert system concepts (knowledge base and rules) could then also be incorporated. This project is innovative because it attempts to incorporate all levels of production-related activities from atoms to enterprise, and to integrate those activities into one comprehensive decision support tool. This project is an interdisciplinary effort requiring enterprise modeling and simulation model integration; process modeling and control; process control and optimization; chemical process modeling; and detailed molecular-level models. It represents the state of the art in enterprise modeling and simulation and incorporates cutting edge process modeling, process control, and system optimization techniques.
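
    The record itself contains no code; purely as an illustrative sketch of what a hierarchical, object-oriented simulation structure can look like, the classes below nest unit-operation models inside plant- and enterprise-level models that all advance through a common time step. The class names, rates, and the series-line throughput rule are invented for this example and are not taken from the LDRD project.

        class UnitOperation:
            """Lowest level: a single process step with a simple throughput model."""
            def __init__(self, name, rate_tph):
                self.name = name
                self.rate_tph = rate_tph        # tonnes per hour

            def step(self, hours):
                return self.rate_tph * hours    # tonnes produced in this interval


        class Plant:
            """Middle level: a set of unit operations run as a series line."""
            def __init__(self, name, units):
                self.name = name
                self.units = units

            def step(self, hours):
                # A series line is limited by its slowest unit operation.
                return min(u.step(hours) for u in self.units)


        class Enterprise:
            """Top level: aggregates plants and accumulates business-level totals."""
            def __init__(self, plants):
                self.plants = plants
                self.total_output = 0.0

            def step(self, hours):
                self.total_output += sum(p.step(hours) for p in self.plants)


        caster = UnitOperation("caster", 120.0)
        mill = UnitOperation("hot_mill", 100.0)
        enterprise = Enterprise([Plant("plant_A", [caster, mill])])
        enterprise.step(8.0)            # simulate one shift
        print(enterprise.total_output)  # 800.0 tonnes, limited by the hot mill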

  10. Physical Modelling of Sedimentary Basin

    Energy Technology Data Exchange (ETDEWEB)

    Yuen, David A.

    2003-04-24

    The main goals of the first three years have been achieved, i.e., the development of particle-based and continuum-based algorithms for cross-scale/up-scale analysis of complex fluid flows. The U. Minnesota team has focused on particle-based methods, wavelets (Rustad et al., 2001), and visualization, and has had great success with the dissipative and fluid particle dynamics algorithms as applied to colloidal, polymeric and biological systems, as well as with its wavelet filtering and visualization efforts. We have organized two sessions in nonlinear geophysics at the A.G.U. Fall Meeting (2000, 2002), which have synergistically stimulated the community and promoted cross-disciplinary efforts in the geosciences. The LANL team has succeeded with continuum-based algorithms, in particular, fractal interpolating functions (fif). These have been applied to 1-D flow and transport equations (Travis, 2000; 2002) as a proof of principle, providing solutions that capture dynamics at all scales. In addition, the fif representations can be integrated to provide sub-grid-scale homogenization, which can be used in more traditional finite difference or finite element solutions of porous flow and transport. Another useful tool for fluid flow problems is the ability to solve inverse problems, that is, given present-time observations of a fluid flow, what was the initial state of that fluid system? We have demonstrated this capability for a large-scale problem of 3-D flow in the Earth's crust (Bunge, Hagelberg & Travis, 2002). Use of the adjoint method for sensitivity analysis (Marchuk, 1995) to compute derivatives of models makes the large-scale inversion feasible in 4-D, space and time. Further, a framework for simulating complex fluid flow in the Earth's crust has been implemented (Dutrow et al., 2001). The remaining task of the first three-year campaign is to extend the implementation of the fif formalism to our 2-D and 3-D computer codes, which is straightforward, but involved.
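
    For readers unfamiliar with fractal interpolating functions, the sketch below samples points on an affine fractal interpolation function through a few data points using the chaos game; it follows the common textbook (Barnsley-style) construction rather than the LANL implementation described above, and the data points and vertical scaling factors are arbitrary.

        import numpy as np

        def fif_points(xs, ys, d, n_iter=20000, seed=0):
            """Chaos-game sampling of the affine fractal interpolation function
            through the points (xs[i], ys[i]) with vertical scalings d[i]."""
            xs, ys, d = (np.asarray(v, dtype=float) for v in (xs, ys, d))
            span = xs[-1] - xs[0]
            a = (xs[1:] - xs[:-1]) / span
            e = (xs[-1] * xs[:-1] - xs[0] * xs[1:]) / span
            c = (ys[1:] - ys[:-1] - d * (ys[-1] - ys[0])) / span
            f = (xs[-1] * ys[:-1] - xs[0] * ys[1:] - d * (xs[-1] * ys[0] - xs[0] * ys[-1])) / span

            rng = np.random.default_rng(seed)
            x, y = xs[0], ys[0]
            pts = np.empty((n_iter, 2))
            for k in range(n_iter):
                i = rng.integers(len(xs) - 1)          # pick one affine map at random
                x, y = a[i] * x + e[i], c[i] * x + d[i] * y + f[i]
                pts[k] = x, y
            return pts

        # Interpolation through four arbitrary points; |d| < 1 keeps the maps contractive.
        pts = fif_points([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, -0.5, 0.5], [0.3, -0.4, 0.3])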

  11. Revisions to the hydrogen gas generation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Jerrell, J.W.

    1992-08-31

    Waste Management Technology has requested SRTC to maintain and extend a previously developed computer model, TRUGAS, which calculates hydrogen gas concentrations within transuranic (TRU) waste drums. TRUGAS was written by Frank G. Smith using the BASIC language and is described in the report A Computer Model of Gas Generation and Transport within TRU Waste Drums (DP-1754). The computer model has been partially validated by yielding results similar to experimental data collected at SRL and LANL over a wide range of conditions. The model was created to provide the capability of predicting conditions that could potentially lead to the formation of flammable gas concentrations within drums, and to assess proposed drum venting methods. The model has served as a tool in determining how gas concentrations are affected by parameters such as filter vent sizes, waste composition, gas generation values, the number and types of enclosures, water intrusion into the drum, and curie loading. The success of the TRUGAS model has prompted an interest in the program's maintenance and enhancement. Experimental data continues to be collected at various sites on such parameters as permeability values, packaging arrangements, filter designs, and waste contents. Information provided by this data is used to improve the accuracy of the model's predictions. Also, several modifications to the model have been made to enlarge the scope of problems which can be analyzed. For instance, the model has been used to calculate hydrogen concentrations inside steel cabinets containing retired glove boxes (WSRC-RP-89-762). The revised TRUGAS computer model, H2GAS, is described in this report. This report summarizes all modifications made to the TRUGAS computer model and provides documentation useful for making future updates to H2GAS.
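
    As context for the kind of balance such a model solves (this is a single-volume illustration, not the TRUGAS/H2GAS algorithm), a well-mixed drum with hydrogen generation rate G and a filter vent characterized by a release conductance k (moles per second per unit mole fraction) approaches a steady-state mole fraction G/k with time constant n_total/k; every number below is a placeholder.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        def hydrogen_mole_fraction(t_s, gen_mol_s, vent_mol_s_per_mf,
                                   free_vol_m3, temp_k=298.0, pres_pa=101325.0):
            """Hydrogen mole fraction vs time for one well-mixed drum volume:
            n_total * dx/dt = G - k * x, starting from x = 0."""
            n_total = pres_pa * free_vol_m3 / (R * temp_k)   # ideal-gas moles in the drum
            x_ss = gen_mol_s / vent_mol_s_per_mf             # steady-state mole fraction
            tau = n_total / vent_mol_s_per_mf                # approach time constant (s)
            return x_ss * (1.0 - np.exp(-np.asarray(t_s) / tau))

        # Placeholder values: 1e-9 mol/s generation, a vent conductance of 1e-6 mol/s
        # per mole fraction, and 0.15 m3 of free volume, evaluated after 30 days.
        print(hydrogen_mole_fraction(30 * 86400.0, 1e-9, 1e-6, 0.15))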

  13. Structural damage detection using ARMAX time series models and cepstral distances

    Indian Academy of Sciences (India)

    K LAKSHMI; A RAMA MOHAN RAO

    2016-09-01

    A novel damage detection algorithm for structural health monitoring using time series models is presented. The proposed algorithm uses output-only acceleration time series obtained from sensors on the structure, which are fitted with an auto-regressive moving-average with exogenous inputs (ARMAX) model. The algorithm uses cepstral distances between the ARMAX models of decorrelated data obtained from the healthy state and any other current condition of the structure as the damage indicator. A numerical model of a simply supported beam with variations due to temperature and operating conditions, along with measurement noise, is used to demonstrate the effectiveness of the proposed damage diagnostic technique using the ARMAX time series models and their cepstral distances with novelty indices. The effectiveness of the proposed method is validated using the benchmark data of the 8-DOF system made available to the public by the Engineering Institute of LANL and the simulated vibration data obtained from the FEM model of the IASC-ASCE 12-DOF steel frame. The results of the studies indicate that the proposed algorithm is robust in identifying the damage from acceleration data contaminated with noise under varied environmental and operational conditions.
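
    To make the damage indicator concrete, the sketch below fits plain AR models (a simplification of the ARMAX models used in the paper) to two acceleration records, converts the coefficients to cepstral coefficients with the standard LPC-to-cepstrum recursion, and uses the Euclidean distance between the cepstra as the indicator; the model order, cepstrum length, and synthetic signals are illustrative.

        import numpy as np

        def fit_ar(y, order):
            """Least-squares AR fit: y[t] = sum_k a[k] * y[t-k-1] + e[t]."""
            y = np.asarray(y, dtype=float)
            cols = [y[order - k - 1:len(y) - k - 1] for k in range(order)]
            a, *_ = np.linalg.lstsq(np.column_stack(cols), y[order:], rcond=None)
            return a

        def ar_cepstrum(a, n_ceps):
            """Cepstral coefficients of an AR model via the LPC-to-cepstrum recursion."""
            p = len(a)
            c = np.zeros(n_ceps + 1)
            for n in range(1, n_ceps + 1):
                acc = a[n - 1] if n <= p else 0.0
                for k in range(max(1, n - p), n):
                    acc += (k / n) * c[k] * a[n - k - 1]
                c[n] = acc
            return c[1:]

        def cepstral_distance(y_ref, y_cur, order=10, n_ceps=20):
            """Damage indicator: distance between the cepstra of two fitted AR models."""
            c_ref = ar_cepstrum(fit_ar(y_ref, order), n_ceps)
            c_cur = ar_cepstrum(fit_ar(y_cur, order), n_ceps)
            return np.linalg.norm(c_ref - c_cur)

        # Synthetic records with different spectral content stand in for the healthy
        # and damaged states; the distance should be larger across states than within.
        rng = np.random.default_rng(1)
        healthy = np.convolve(rng.standard_normal(5000), [1.0, 0.6, 0.3], mode="same")
        damaged = np.convolve(rng.standard_normal(5000), [1.0, 0.2, 0.1], mode="same")
        print(cepstral_distance(healthy, healthy[::-1]), cepstral_distance(healthy, damaged))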

  14. Efficient Computation of Info-Gap Robustness for Finite Element Models

    Energy Technology Data Exchange (ETDEWEB)

    Stull, Christopher J. [Los Alamos National Laboratory; Hemez, Francois M. [Los Alamos National Laboratory; Williams, Brian J. [Los Alamos National Laboratory

    2012-07-05

    A recent research effort at LANL proposed info-gap decision theory as a framework by which to measure the predictive maturity of numerical models. Info-gap theory explores the trade-offs between accuracy, that is, the extent to which predictions reproduce the physical measurements, and robustness, that is, the extent to which predictions are insensitive to modeling assumptions. Both accuracy and robustness are necessary to demonstrate predictive maturity. However, conducting an info-gap analysis can present a formidable challenge, from the standpoint of the required computational resources. This is because a robustness function requires the resolution of multiple optimization problems. This report offers an alternative, adjoint methodology to assess the info-gap robustness of Ax = b-like numerical models solved for a solution x. Two situations that can arise in structural analysis and design are briefly described and contextualized within the info-gap decision theory framework. The treatments of the info-gap problems, using the adjoint methodology are outlined in detail, and the latter problem is solved for four separate finite element models. As compared to statistical sampling, the proposed methodology offers highly accurate approximations of info-gap robustness functions for the finite element models considered in the report, at a small fraction of the computational cost. It is noted that this report considers only linear systems; a natural follow-on study would extend the methodologies described herein to include nonlinear systems.
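
    The adjoint idea for Ax = b systems can be shown in a few lines: for a scalar output J = g^T x, a single adjoint solve A^T lambda = g yields the sensitivity of J to every entry of A and b without re-solving the forward problem. The sketch below uses random placeholder matrices and shows only this sensitivity step, not the full info-gap robustness search described in the report.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned placeholder
        b = rng.standard_normal(n)
        g = rng.standard_normal(n)                        # output of interest: J(x) = g @ x

        x = np.linalg.solve(A, b)              # forward solve
        lam = np.linalg.solve(A.T, g)          # single adjoint solve

        dJ_dA = -np.outer(lam, x)              # dJ/dA[i, j] = -lam[i] * x[j]
        dJ_db = lam                            # dJ/db = lambda

        # Finite-difference check on one entry of A.
        eps = 1e-6
        A_pert = A.copy()
        A_pert[3, 7] += eps
        fd = (g @ np.linalg.solve(A_pert, b) - g @ x) / eps
        print(fd, dJ_dA[3, 7])                 # the two values should agree closely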

  15. Kinetic study of run-away burn in ICF capsule using a quasi-1D model

    Science.gov (United States)

    Huang, Chengkun; Molvig, K.; Albright, B. J.; Dodd, E. S.; Hoffman, N. M.; Vold, E. L.; Kagan, G.

    2016-10-01

    The effect of reduced fusion reactivity resulting from the loss of fuel ions in the Gamow peak in the ignition, run-away burn and disassembly stages of an inertial confinement fusion D-T capsule is investigated with a quasi-1D hybrid model that includes kinetic ions, fluid electrons and Planckian radiation photons. The fuel ion loss through the Knudsen effect at the fuel-pusher interface is accounted for by a local-loss model developed by Molvig et al. The tail refilling and relaxation of the fuel ion distribution are evolved with a nonlinear Fokker-Planck solver. The Krokhin & Rozanov model is used for the finite alpha range beyond the fuel region, while alpha heating to the fuel ions and the fluid electrons is modeled kinetically. For an energetic pusher (40 kJ), the simulation shows that the reduced fusion reactivity can lead to substantially lower ion temperature during run-away burn, while the final yield decreases more modestly. Possible improvements to the present model, including non-Planckian radiation emission and alpha-driven fuel disassembly, are discussed. Work performed under the auspices of the U.S. DOE by the LANS, LLC, Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396. Work supported by the ASC TBI project at LANL.

  16. Modeling laser produced plasmas with smoothed particle hydrodynamics for next generation advanced light sources

    Science.gov (United States)

    Holladay, Robert; Griffith, Alec; Murillo, Michael S.

    2016-10-01

    A computational model has been developed to study the evolution of a plasma generated by next-generation advanced light sources such as SLAC's LCLS and LANL's proposed MaRIE. Smoothed Particle Hydrodynamics (SPH) is used to model the plasma evolution because of the ease with which it handles the open boundary conditions and large deformations associated with these experiments. Our work extends the basic SPH method by utilizing a two-fluid model of an electron-ion plasma that also incorporates time dependent ionization and recombination by allowing the SPH fluid particles to have an evolving mass based on the mean ionization state of the plasma. Additionally, inter-species heating, thermal conduction, and electric fields are also accounted for. The effects of various initial conditions and model parameters will be presented, with the goal of using this framework to develop a model that can be used in the design and interpretation of future experiments. This work was supported by the Los Alamos National Laboratory Computational Physics Workshop.
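
    As a reminder of the core SPH operation underlying such a model, the sketch below evaluates the standard summation density with a 1-D cubic-spline kernel; it is generic SPH, not the two-fluid, variable-ionization scheme described in the abstract, and the particle setup is arbitrary.

        import numpy as np

        def w_cubic_1d(r, h):
            """M4 cubic-spline SPH kernel in 1-D (compact support of radius 2h)."""
            q = np.abs(r) / h
            w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
            return (2.0 / (3.0 * h)) * w       # 1-D normalization

        def sph_density(x, m, h):
            """Summation density: rho_i = sum_j m_j W(x_i - x_j, h)."""
            dx = x[:, None] - x[None, :]
            return (m[None, :] * w_cubic_1d(dx, h)).sum(axis=1)

        # Uniformly spaced unit-mass particles should recover density ~ 1/spacing
        # away from the ends of the domain.
        x = np.linspace(0.0, 1.0, 101)
        rho = sph_density(x, np.ones_like(x), h=2.0 * (x[1] - x[0]))
        print(rho[50])                         # interior value close to 100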

  18. THE LOS ALAMOS NATIONAL LABORATORY ATMOSPHERIC TRANSPORT AND DIFFUSION MODELS

    Energy Technology Data Exchange (ETDEWEB)

    M. WILLIAMS [and others

    1999-08-01

    The LANL atmospheric transport and diffusion models are composed of two state-of-the-art computer codes. The first is an atmospheric wind model called HOTMAC, Higher Order Turbulence Model for Atmospheric Circulation. HOTMAC generates wind and turbulence fields by solving a set of atmospheric dynamic equations. The second is an atmospheric diffusion model called RAPTAD, Random Particle Transport And Diffusion. RAPTAD uses the wind and turbulence output from HOTMAC to compute particle trajectories and concentration at any location downwind from a source. Both of these models, originally developed as research codes on supercomputers, have been modified to run on microcomputers. Because the capability of microcomputers is advancing so rapidly, the expectation is that they will eventually become as good as today's supercomputers. Now both models are run on desktop or deskside computers, such as an IBM PC/AT with an Opus Pm 350 32-bit coprocessor board and a SUN workstation. Codes have also been modified so that high level graphics, NCAR Graphics, of the output from both models are displayed on the desktop computer monitors and plotted on a laser printer. Two programs, HOTPLT and RAPLOT, produce wind vector plots of the output from HOTMAC and particle trajectory plots of the output from RAPTAD, respectively. A third, CONPLT, provides concentration contour plots. Section II describes step-by-step operational procedures, specifically for a SUN-4 deskside computer, on how to run the main programs HOTMAC and RAPTAD, and the graphics programs to display the results. Governing equations, boundary conditions and initial values of HOTMAC and RAPTAD are discussed in Section III. Finite-difference representations of the governing equations, numerical solution procedures, and a grid system are given in Section IV.
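
    As a minimal illustration of the random-particle idea behind a code such as RAPTAD (this is a generic zeroth-order random-walk dispersion sketch, not the RAPTAD formulation, and the wind speed and eddy diffusivity are invented), particles are advected by the mean wind and given Gaussian displacements consistent with an eddy diffusivity K.

        import numpy as np

        def disperse(n_particles, n_steps, dt, u=(5.0, 0.0), k_eddy=10.0, seed=0):
            """Advect particles with a mean wind u (m/s) and add random displacements
            with variance 2*K*dt per step (eddy diffusivity K in m^2/s)."""
            rng = np.random.default_rng(seed)
            pos = np.zeros((n_particles, 2))   # all particles released at the origin
            for _ in range(n_steps):
                pos += np.asarray(u) * dt
                pos += np.sqrt(2.0 * k_eddy * dt) * rng.standard_normal(pos.shape)
            return pos

        pos = disperse(10000, n_steps=600, dt=1.0)   # ten minutes of transport
        # Crosswind spread grows like sqrt(2*K*t): here about sqrt(2*10*600) ~ 110 m.
        print(pos[:, 0].mean(), pos[:, 1].std())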

  19. Artificial Neural Network L* from different magnetospheric field models

    Science.gov (United States)

    Yu, Y.; Koller, J.; Zaharia, S. G.; Jordanova, V. K.

    2011-12-01

    The third adiabatic invariant L* plays an important role in modeling and understanding the radiation belt dynamics. The popular way to numerically obtain the L* value follows the recipe described by Roederer [1970], which is, however, slow and computationally expensive. This work focuses on a new technique, which can compute the L* value in microseconds without losing much accuracy: artificial neural networks. Since L* is related to the magnetic flux enclosed by a particle drift shell, global magnetic field information needed to trace the drift shell is required. A series of currently popular empirical magnetic field models are applied to create the L* data pool using 1 million data samples which are randomly selected within a solar cycle and within the global magnetosphere. The networks, trained from the above L* data pool, can thereby be used for fairly efficient L* calculation given input parameters valid within the trained temporal and spatial range. Besides the empirical magnetospheric models, a physics-based self-consistent inner magnetosphere model (RAM-SCB) developed at LANL is also utilized to calculate L* values and then to train the L* neural network. This model better predicts the magnetospheric configuration and therefore can significantly improve the computed L*. The above neural network L* technique will enable, for the first time, comprehensive solar-cycle-long studies of radiation belt processes. However, neural networks trained from different magnetic field models can result in different L* values, which could cause misinterpretation of radiation belt dynamics, such as where the source of the radiation belt charged particles is and which mechanism is dominant in accelerating the particles. This calls for care in choosing a magnetospheric field model for the L* calculation.
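
    A minimal sketch of the regression step is given below; it illustrates the surrogate idea only, with synthetic inputs and a made-up target standing in for the real field-model parameters and Roederer L* values, and uses a small feed-forward network whose evaluation cost after training is just a few matrix multiplies per sample.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in: five "geomagnetic/solar-wind" inputs and a fake L* target.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1.0, 1.0, size=(20000, 5))
        y = 4.0 + X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.2 * np.sin(3.0 * X[:, 3])

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
        net.fit(X_tr, y_tr)

        print(net.score(X_te, y_te))           # R^2 on held-out samples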

  20. Development of an automated core model for nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Mosteller, R.D.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objective of this project was to develop an automated package of computer codes that can model the steady-state behavior of nuclear-reactor cores of various designs. As an added benefit, data produced for steady-state analysis also can be used as input to the TRAC transient-analysis code for subsequent safety analysis of the reactor at any point in its operating lifetime. The basic capability to perform steady-state reactor-core analysis already existed in the combination of the HELIOS lattice-physics code and the NESTLE advanced nodal code. In this project, the automated package was completed by (1) obtaining cross-section libraries for HELIOS, (2) validating HELIOS by comparing its predictions to results from critical experiments and from the MCNP Monte Carlo code, (3) validating NESTLE by comparing its predictions to results from numerical benchmarks and to measured data from operating reactors, and (4) developing a linkage code to transform HELIOS output into NESTLE input.

  1. MCNPX Cosmic Ray Shielding Calculations with the NORMAN Phantom Model

    Science.gov (United States)

    James, Michael R.; Durkee, Joe W.; McKinney, Gregg; Singleterry, Robert

    2008-01-01

    The United States is planning manned lunar and interplanetary missions in the coming years. Shielding from cosmic rays is a critical aspect of manned spaceflight. These ventures will present exposure issues involving the interplanetary Galactic Cosmic Ray (GCR) environment. GCRs are comprised primarily of protons (approx. 84.5%) and alpha-particles (approx. 14.7%), while the remainder is comprised of massive, highly energetic nuclei. The National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) has commissioned a joint study with Los Alamos National Laboratory (LANL) to investigate the interaction of the GCR environment with humans using high-fidelity, state-of-the-art computer simulations. The simulations involve shielding and dose calculations in order to assess radiation effects in various organs. The simulations are being conducted using high-resolution voxel-phantom models and the MCNPX[1] Monte Carlo radiation-transport code. Recent advances in MCNPX physics packages now enable the simulated transport of over 2200 types of ions of widely varying energies in large, intricate geometries. We report here initial results obtained using a GCR spectrum and a NORMAN[3] phantom.

  2. Improved Intranuclear Cascade Models for the Codes CEM2k and LAQGSM

    CERN Document Server

    Mashnik, S G; Sierk, A J; Prael, R E

    2005-01-01

    An improved version of the Cascade-Exciton Model (CEM) of nuclear reactions implemented in the codes CEM2k and the Los Alamos version of the Quark-Gluon String Model (LAQGSM) has been developed recently at LANL to describe reactions induced by particles and nuclei at energies up to hundreds of GeV/nucleon for a number of applications. We present several improvements to the intranuclear cascade models used in CEM2k and LAQGSM developed recently to better describe the physics of nuclear reactions. First, we incorporate the photonuclear mode from CEM2k into LAQGSM to allow it to describe photonuclear reactions, not previously modeled there. Then, we develop new approximations to describe more accurately experimental elementary energy and angular distributions of secondary particles from hadron-hadron and photon-hadron interactions using available data and approximations published by other authors. Finally, to consider reactions involving very highly excited nuclei (E* > 2-3 MeV/A), we have incorporated into CEM2...

  3. The critical thinking curriculum model

    Science.gov (United States)

    Robertson, William Haviland

    The Critical Thinking Curriculum Model (CTCM) utilizes a multidisciplinary approach that integrates effective learning and teaching practices with computer technology. The model is designed to be flexible within a curriculum, an example for teachers to follow, where they can plug in their own critical issue. This process engages students in collaborative research that can be shared in the classroom, across the country or around the globe. The CTCM features open-ended and collaborative activities that deal with current, real-world issues which leaders are attempting to solve. As implemented in the Critical Issues Forum (CIF), an educational program administered by Los Alamos National Laboratory (LANL), the CTCM encompasses the political, social/cultural, economic, and scientific realms in the context of a current global issue. In this way, students realize the importance of their schooling by applying their efforts to an endeavor that ultimately will affect their future. This study measures student attitudes toward science and technology and the changes that result from immersion in the CTCM. It also assesses the differences in student learning in science content and problem solving for students involved in the CTCM. A sample of 24 students participated in classrooms at two separate high schools in New Mexico. The evaluation results were analyzed using SPSS in a MANOVA format in order to determine the significance of the between- and within-subjects effects. A comparison ANOVA was done for each two-way MANOVA to see if the comparison groups were equal. Significant findings were validated using the Scheffe test in a Post Hoc analysis. Demographic information for the sample population was recorded and tracked, including self-assessments of computer use and availability. Overall, the results indicated that the CTCM did help to increase science content understanding and problem-solving skills for students, thereby positively affecting critical thinking. No matter if the

  4. Analyzing Electric Field Morphology Through Data-Model Comparisons of the GEM IM/S Assessment Challenge Events

    Science.gov (United States)

    Liemohn, Michael W.; Ridley, Aaron J.; Kozyra, Janet U.; Gallagher, Dennis L.; Thomsen, Michelle F.; Henderson, Michael G.; Denton, Michael H.; Brandt, Pontus C.; Goldstein, Jerry

    2006-01-01

    The storm-time inner magnetospheric electric field morphology and dynamics are assessed by comparing numerical modeling results of the plasmasphere and ring current with many in situ and remote sensing data sets. Two magnetic storms are analyzed, April 22, 2001, and October 21-23, 2001, which are the events selected for the Geospace Environment Modeling (GEM) Inner Magnetosphere/Storms (IM/S) Assessment Challenge (IMSAC). The IMSAC seeks to quantify the accuracy of inner magnetospheric models as well as synthesize our understanding of this region. For each storm, the ring current-atmosphere interaction model (RAM) and the dynamic global core plasma model (DGCPM) were run together with various settings for the large-scale convection electric field and the nightside ionospheric conductance. DGCPM plasmaspheric parameters were compared with IMAGE-EUV plasmapause extractions and LANL-MPA plume locations and velocities. RAM parameters were compared with Dst*, LANL-MPA fluxes and moments, IMAGE-MENA images, and IMAGE-HENA images. Both qualitative and quantitative comparisons were made to determine the electric field morphology that allows the model results to best fit the plasma data at various times during these events. The simulations with self-consistent electric fields were, in general, better than those with prescribed field choices. This indicates that the time-dependent modulation of the inner magnetospheric electric fields by the nightside ionosphere is quite significant for accurate determination of these fields (and their effects). It was determined that a shielded Volland-Stern field description driven by the 3-hour Kp index yields accurate results much of the time, but can be quite inconsistent. The modified McIlwain field description clearly lagged in overall accuracy compared to the other fields, but matched some data sets (like Dst*) quite well. The rankings between the simulations varied depending on the storm and the individual data sets, indicating that

  5. Molecular modeling, FTIR spectral characterization and mechanical properties of carbonated-hydroxyapatite prepared by mechanochemical synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Youness, Rasha A. [Spectroscopy Department, National Research Centre, El-Bohouth Str., 12622, Dokki, Giza (Egypt); Taha, Mohammed A. [Solid-State Physics Department, National Research Centre, El-Bohouth Str., 12622, Dokki, Giza (Egypt); Elhaes, Hanan [Physics Department, Faculty of Women for Arts, Science, and Education, Ain Shams University, 11757 Cairo (Egypt); Ibrahim, Medhat, E-mail: medahmed6@yahoo.com [Spectroscopy Department, National Research Centre, El-Bohouth Str., 12622, Dokki, Giza (Egypt)

    2017-04-01

    Nanocrystalline B-type carbonate-substituted hydroxyapatite (B-CHA) powder has been successfully synthesized by a mechanochemical method. The effect of milling time on the formation of B-CHA was investigated by Fourier transform infrared spectroscopy, X-ray diffraction and scanning electron microscopy. Moreover, physical as well as mechanical properties were examined as a function of milling time. Furthermore, a theoretical model was presented for hydroxyapatite (HA). Semiempirical calculations at the PM6 level were used to calculate thermal parameters, including entropy, enthalpy, heat capacity, free energy and heat of formation, in the temperature range from 200 up to 500 K. The results revealed that single-phase B-CHA was successfully formed after 8 h of milling when the ball-to-powder ratio (BPR) was 10:1. Entropy, enthalpy and heat capacity gradually increased as a function of temperature, while free energy and heat of formation decreased with increasing temperature. Comparison with higher levels of theory was conducted at the HF and DFT levels using the HF/3-21G**, B3LYP/6-31G(d,p) and B3LYP/LANL2DZ models, respectively, and indicated that PM6 could be utilized with appropriate accuracy and time to study physical and thermochemical parameters of HA. - Highlights: • Preparation of nanocrystalline B-type carbonate-substituted hydroxyapatite (B-CHA) powder by a mechanochemical method. • Characterization of CHA. • Semiempirical and DFT models for CHA.
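
    For readers who want to see where such temperature-dependent thermal parameters come from, the sketch below evaluates the standard harmonic-oscillator (Einstein) vibrational heat capacity from a list of vibrational wavenumbers over 200-500 K; the frequencies are placeholders, not the PM6 results of the paper, and the electronic and translational/rotational contributions are omitted.

        import numpy as np

        H = 6.626e-34     # Planck constant (J s)
        C_CM = 2.998e10   # speed of light (cm/s), since frequencies are in cm^-1
        KB = 1.381e-23    # Boltzmann constant (J/K)
        R = 8.314         # gas constant, J/(mol K)

        def vibrational_cv(wavenumbers_cm, temp_k):
            """Harmonic-oscillator vibrational heat capacity, J/(mol K)."""
            x = H * C_CM * np.asarray(wavenumbers_cm) / (KB * temp_k)
            return R * np.sum(x**2 * np.exp(x) / (np.exp(x) - 1.0) ** 2)

        # Placeholder frequencies (cm^-1), evaluated over the 200-500 K range.
        modes = [450.0, 600.0, 960.0, 1040.0, 3570.0]
        for t in (200.0, 300.0, 400.0, 500.0):
            print(t, round(vibrational_cv(modes, t), 2))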

  6. A new international geostationary electron model: IGE-2006, from 1 keV to 5.2 MeV

    Science.gov (United States)

    Sicard-Piet, A.; Bourdarie, S.; Boscher, D.; Friedel, R. H. W.; Thomsen, M.; Goka, T.; Matsumoto, H.; Koshiishi, H.

    2008-07-01

    Département Environnement Spatial, Office National d'Etudes et de Recherches Aérospatiales (ONERA) has been developing a model for the geostationary electron environment since 2003. Until now, this model was called Particle ONERA-LANL Environment (POLE), and it is valid from 30 keV up to 5.2 MeV. POLE is based on the full complement of Los Alamos National Laboratory geostationary satellites, covers the period 1976-2005, and takes into account the solar cycle variation. Over the period 1976 to present, four different detectors were flown: charged particle analyzer (CPA), synchronous orbit particle analyzer (SOPA), energetic spectra for particles (ESP), and magnetospheric plasma analyzer (MPA). Only the first three were used to develop the POLE model. Here we extend the energy coverage of the model to low energies using MPA measurements. We further include the data from the Japanese geostationary spacecraft, Data Relay Test Satellite (DRTS). These data are now combined into an extended geostationary electron model which we call IGE-2006.

  7. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
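
    For orientation, each damped Levenberg-Marquardt step solves (J^T J + lambda*I) dp = -J^T r for the parameter update dp; the toy exponential-fit problem below and all names are illustrative, and the Krylov-subspace projection and recycling that give the paper its speed-up are not reproduced here.

        import numpy as np

        def lm_fit(residual, jacobian, p0, lam=1e-2, n_iter=50):
            """Basic Levenberg-Marquardt: damped Gauss-Newton steps on r(p)."""
            p = np.asarray(p0, dtype=float)
            for _ in range(n_iter):
                r, J = residual(p), jacobian(p)
                dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
                if np.sum(residual(p + dp) ** 2) < np.sum(r**2):
                    p, lam = p + dp, lam * 0.5   # accept the step, relax the damping
                else:
                    lam *= 10.0                  # reject the step, increase the damping
            return p

        # Toy problem: fit y = a * exp(-b * t) to noisy data.
        t = np.linspace(0.0, 5.0, 100)
        rng = np.random.default_rng(0)
        y = 2.0 * np.exp(-0.7 * t) + 0.01 * rng.standard_normal(t.size)
        res = lambda p: p[0] * np.exp(-p[1] * t) - y
        jac = lambda p: np.column_stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)])
        print(lm_fit(res, jac, [1.0, 1.0]))      # should approach [2.0, 0.7]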

  8. Test Plan for Godiva Move from LANL TA-18 to Nevada Test Site Device Assembly Facility

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, M

    2005-08-01

    Godiva is an unshielded, pulsed nuclear reactor, used to produce bursts of neutrons and gamma rays for irradiating test samples. The Godiva reactor is part of the TA-18 Facility at Los Alamos National Laboratory. The Godiva reactor is to be moved to the Device Assembly Facility (DAF) at the Nevada Test Site, northwest of Las Vegas, Nevada. Bursts of ionizing radiation from Godiva have been found to produce radio waves and electrical interference in circuits and electrical equipment (e.g., alarm systems, interlocks, recording devices) near Godiva. Safety and security concerns regarding Godiva at the DAF are: (1) Can Godiva pulses induce detonators elsewhere in the DAF to explode? (2) What is the expected strength of the electrical signal from Godiva elsewhere in the DAF? (3) Will Godiva pulses trigger security alarms, requiring additional administrative controls? This report addresses these issues, and describes a brief set of electrical measurements intended to verify that electromagnetic emissions from Godiva are unchanged by its relocation, and below a threshold of safety for detonators that are outside the actual room Godiva resides in. The following points will be described: the nature of Godiva electrical emissions, predicted electric field at a given distance, electromagnetic frequency, safety threshold for detonators, recommended ''stay out'' zone around Godiva for detonators, and recommended measurements to be made once Godiva has been installed at DAF.

  9. Lithology and Stratigraphy of Holes Drilled in LANL-Use Areas of the Nevada Test Site

    Energy Technology Data Exchange (ETDEWEB)

    Lance B. Prothro; Sigmund L. Drellack, Jr.; Brian M. Allen

    1999-07-01

    Geologic data for ten holes drilled in areas used by Los Alamos National Laboratory at the Nevada Test Site are presented in this report. The holes include emplacement holes, instrumentation holes, and Underground Test Area wells drilled during calendar years 1991 through 1995. For each hole a stratigraphic log, a detailed lithologic log, and one or two geologic cross sections are presented, along with a supplemental data sheet containing information about the drilling operations, geology, or references. For three of the holes, graphic data summary sheets with geologic and geophysical data are provided as plates.

  10. Performance of the prototype LANL solid deuterium ultra-cold neutron source

    CERN Document Server

    Hill, R E; Bowles, T J; Greene, G L; Hogan, G; Lamoreaux, S; Marek, L; Mortenson, R; Morris, C L; Saunders, A; Seestrom, S J; Teasdale, W A; Hoedl, S; Liu, C Y; Smith, D A; Young, A; Filippone, B W; Hua, J; Ito, T; Pasyuk, E A; Geltenbort, P; García, A; Fujikawa, B; Baessler, S; Serebrov, A

    2000-01-01

    A prototype of a solid deuterium (SD{sub 2}) source of Ultra-Cold Neutrons (UCN) is currently being tested at LANSCE. The source is contained within an assembly consisting of a 4 K polyethylene moderator surrounded by a 77 K beryllium flux trap in which is embedded a spallation target. Time-of-flight measurements have been made of the cold neutron spectrum emerging directly from the flux trap assembly. A comparison is presented of these measurements with results of Monte Carlo (LAHET/MCNP) calculations of the cold neutron fluxes produced in the prototype assembly by a beam of 800 MeV protons incident on the tungsten target. A UCN detector was coupled to the assembly through a guide system with a critical velocity of 8 m/s ({sup 58}Ni). The rates and time-of-flight data from this detector are compared with calculated values. Measurements of UCN production as a function of SD{sub 2} volume (thickness) are compared with predicted values. The dependence of UCN production on SD{sub 2} temperature and proton beam...

  11. A history of neutrons in biology: the development of neutron protein crystallography at BNL and LANL.

    Science.gov (United States)

    Schoenborn, Benno P

    2010-11-01

    The first neutron diffraction data were collected from crystals of myoglobin almost 42 years ago using a step-scan diffractometer with a single detector. Since then, major advances have been made in neutron sources, instrumentation and data collection and analysis, and in biochemistry. Fundamental discoveries about enzyme mechanisms, biological complex structures, protein hydration and H-atom positions have been and continue to be made using neutron diffraction. The promise of neutrons has not changed since the first crystal diffraction data were collected. Today, with the developments of beamlines at spallation neutron sources and the use of the Laue method for data collection, the field of neutrons in structural biology has renewed vitality.

  12. Robotics for Nuclear Material Handling at LANL: Capabilities and Needs

    Energy Technology Data Exchange (ETDEWEB)

    Harden, Troy A [Los Alamos National Laboratory; Lloyd, Jane A [Los Alamos National Laboratory; Turner, Cameron J [CO SCHOOL OF MINES/PMT-4

    2009-01-01

    Nuclear material processing operations present numerous challenges for effective automation. Confined spaces, hazardous materials and processes, particulate contamination, radiation sources, and corrosive chemical operations are but a few of the significant hazards. However, automated systems represent a significant safety advance when deployed in place of manual tasks performed by human workers. The replacement of manual operations with automated systems has been desirable for nearly 40 years, yet only recently are automated systems becoming increasingly common for nuclear materials handling applications. This paper reviews several automation systems which are deployed or about to be deployed at Los Alamos National Laboratory for nuclear material handling operations. Highlighted are the current social and technological challenges faced in deploying automated systems into hazardous material handling environments and the opportunities for future innovations.

  13. Targetry at the LANL 100 MeV isotope production facility: lessons learned from facility commissioning

    Energy Technology Data Exchange (ETDEWEB)

    Nortier, F. M. (Francois M.); Fassbender, M. E. (Michael E.); DeJohn, M.; Hamilton, V. T. (Virginia T.); Heaton, R. C. (Richard C.); Jamriska, David J.; Kitten, J. J. (Jason J.); Lenz, J. W.; Lowe, C. E.; Moddrell, C. F.; McCurdy, L. M. (Lisa M.); Peterson, E. J. (Eugene J.); Pitt, L. R. (Lawrence R.); Phillips, D. R. (Dennis R.); Salazar, L. L. (Louie L.); Smith, P. A. (Paul A.); Valdez, Frank O.

    2004-01-01

    The new Isotope Production Facility (IPF) at Los Alamos National Laboratory was commissioned during the spring of 2004. Commissioning activities focused on the establishment of a radionuclide database, the review and approval of two specific target stack designs, and four trial runs with subsequent chemical processing and data analyses. This paper highlights some aspects of the facility and the targetry of the two approved target stacks used during the commissioning process. Since one niobium-encapsulated gallium target developed a blister after an extended 4-day irradiation, further evaluation of the gallium targets is required. Besides this gallium target, no other target showed any sign of thermal failure. Considering the uncertainties involved, the production yields obtained for targets irradiated in the same energy slot are consistent for all three 'Prototype' stacks. A careful analysis of the temperature profile in the RbCl targets shows that energy shifts occur in the RbCl and Ga targets. Energy shifts are a result of density variations in the RbCl disk under bombardment. Thickness adjustments of targets in the prototype stack are required to ensure maximum production yields of {sup 82}Sr and {sup 68}Ge in the design energy windows. The {sup 68}Ge yields obtained are still consistently lower than the predicted yield value, which requires further investigation. After recalculation of the energy windows for the RbCl and Ga targets, the measured {sup 82}Sr production yields compare rather well with values predicted on the basis of evaluated experimental excitation function data.

  14. Summaries of FY16 LANL experimental campaigns at the OMEGA and EP Laser Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Loomis, Eric Nicholas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Merritt, Elizabeth Catherine [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Montgomery, David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kim, Yong Ho [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Murphy, Thomas Joseph [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Johns, Heather Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kline, John L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shah, Rahul C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zylstra, Alex [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Herrmann, Hans W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Schmitt, Mark J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Flippo, Kirk Adler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rasmus, Alexander Martin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-25

    In FY16, Los Alamos National Laboratory carried out 22 shot days on the OMEGA and OMEGA-EP laser facilities in the areas of High Energy Density (HED) Science and Inertial Confinement Fusion (ICF). In HED, our focus areas were radiation flow, hydrodynamic turbulent mix and burn, warm dense matter equations of state, and coupled Kelvin-Helmholtz (KH)/Richtmyer-Meshkov (RM) instability growth. For ICF, our campaigns focused on the Priority Research Directions (PRD) of implosion phase mix and stagnation and burn, specifically as they pertain to Laser Direct Drive (LDD). We also had several focused shot days on transport properties in the kinetic regime. We continue to develop advanced diagnostics such as Neutron Imaging, Gamma Reaction History, and Gas Cherenkov Detectors. Below is a summary of our campaigns, their motivation, and the main results from this year.

  15. LANL Q2 2016 Quarterly Progress Report. Science Campaign and ICF

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, Melissa Rae [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-04-07

    This progress report includes highlights for the Science Campaign and ICF about Advanced Certification and Assessment Methodologies, Implosion Hydrodynamics (C-1, SCE), Materials and Nuclear Science (C-1, C-2), Capabilities for Nuclear Intelligence, and High Energy Density Science (C-1, C-4, C-10). Upcoming meetings, briefings, and experiments are then listed for April and May.

  16. LANL/Green Star Tests of the Green Star SBS-60 Spectrometer

    Energy Technology Data Exchange (ETDEWEB)

    T. E. Sampson; D. T. Vo; T. L. Cremers; P. A. Hypes; Y. P. Seldiakov; A. B. Dorin; M. V. Kondrashov; V. I. Timoshin

    2001-06-01

    We report on joint testing of the Russian-designed and manufactured single board spectrometer SBS-60 from Green Star Ltd. of Moscow. The SBS-60 will be used to make material control and accountability measurements on plutonium in the Russian plutonium disposition program. We compared three SBS-60 units of two different designs with three commonly used commercial US data acquisition systems by making measurements with three different HPGe detector systems. The measurements were performed to test whether the gamma-ray spectral data of plutonium samples from the SBS-60 were suitable for analysis of the isotopic composition of plutonium using the Los Alamos FRAM isotopic analysis software. Each detector fed its signal to two data acquisition systems, one SBS-60 and one commercial US system. The data from the two systems were analyzed by FRAM and compared. In addition, we characterized the throughput, resolution, and stability of the SBS-60 data acquisition system in comparison with the commercial US systems. This report presents detailed results of all the tests performed.

  17. Neutron-Induced Fission Measurements at the DANCE and LSDS Facilities at LANL

    Science.gov (United States)

    Jandel, M.; Bredeweg, T. A.; Bond, E. M.; Chadwick, M. B.; Couture, A.; O'Donnell, J. M.; Fowler, M. M.; Haight, R. C.; Hayes-Sterbenz, A. C.; Rundberg, R. S.; Rusev, G. Y.; Ullmann, J. L.; Vieira, D. J.; Wilhelmy, J. B.; Wu, C. Y.; Becker, J. A.; Alexander, C. W.; Belier, G.

    2014-09-01

    New results from neutron-induced fission measurements performed at the Detector for Advanced Neutron Capture Experiments (DANCE) and Lead Slowing Down Spectrometer (LSDS) are presented. New correlated data on prompt fission γ-ray (PFG) distributions were measured using the DANCE array for resonant neutron-induced fission of 233U, 235U and 239Pu. The deduced properties of PFG emission are presented using a simple parametrization. An accurate knowledge of fission γ-ray spectra enables us to analyze the isomeric states of 236U created after neutron capture on 235U. We briefly discuss these new results. Finally, we review details and preliminary results of the challenging 237U(n,f) cross section measurement at the LSDS facility.

  18. RADIONUCLIDE INVENTORY MANAGEMENT AT THE NEW 100 MeV ISOTOPE PRODUCTION FACILITY AT LANL

    Energy Technology Data Exchange (ETDEWEB)

    Fassbender, M.E.; Phillips, D.R.; Nortier, F.M.; Trellue, H.R.; Hamilton, V.T.; Heaton, R.C.; Jamriska, D.J.; Kitten, J.J.; Lowe, C.E.; McCurdy, L.M.; Pitt, L.R.; Salazar, L.L.; Sullivan, J.W.; Valdez, F.O.; Peterson, E.J.

    2004-10-03

    The Isotope Production Facility (IPF) at Los Alamos is operated on the authorization basis of a radiological facility with an inventory limit of a Category 3 Nuclear Facility. For the commissioning of IPF, a ''dummy'' target stack containing Zn, Nb and Al disks, and a ''prototype'' stack were irradiated with a proton beam. The ''prototype'' stack contained two pressed RbCl disks, encapsulated in stainless steel, and a Ga metal target. Typical ''prototype'' stack beam parameters were 88.9 {micro}A, 101.3 h. Operation procedures require the projection of all generated radionuclide activities. This is mandatory in order to determine both maximum beam current and maximum beam exposure time. The Monte Carlo code MCNPX and the burn-up code CINDER90 were used to determine maximum beam parameters prior to irradiation. After irradiation, activity estimates were calculated assuming actual average beam parameters. They were entered into an online inventory database, and were later, after chemical separation and radioactive assays, replaced by experimental values. A comparison of ''prototype'' stack experimental yield data to Monte Carlo calculation results showed that the computer codes provide realistic, conservative estimates.
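
    As a simple illustration of the kind of projection stored in such an inventory database (a saturation-yield scaling, not the MCNPX/CINDER90 calculation itself; the saturation yield is a placeholder and the roughly 25.3-day half-life used for {sup 82}Sr is approximate), the end-of-bombardment activity for irradiation at constant current I for time t is A = I * Y_sat * (1 - exp(-lambda * t)).

        import numpy as np

        def eob_activity(current_ua, hours, half_life_h, sat_yield_per_ua):
            """End-of-bombardment activity from a saturation-yield scaling:
            A = I * Y_sat * (1 - exp(-lambda * t_irr))."""
            lam = np.log(2.0) / half_life_h
            return current_ua * sat_yield_per_ua * (1.0 - np.exp(-lam * hours))

        # The beam parameters quoted above (88.9 uA for 101.3 h), a placeholder
        # saturation yield of 1.0 (arbitrary activity units per uA), and an
        # approximate 25.3-day half-life.
        print(eob_activity(88.9, 101.3, 25.3 * 24.0, 1.0))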

  19. Technical Basis for the Removal of Unremediated Nitrate Salt Sampling (UNS) to Support LANL Treatment Studies

    Energy Technology Data Exchange (ETDEWEB)

    Funk, David John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    The sampling of unremediated nitrate salts (UNS) was originally proposed by the U.S. Department of Energy (DOE) and Los Alamos National Security, LLC (LANS) (collectively, the Permittees) as a means to ensure adequate understanding and characterization of the problematic waste stream created when the Permittees remediated this nitrate salt-bearing waste with an organic absorbent. The proposal to sample the UNS was driven by a lack of understanding with respect to the radioactive contamination release that occurred within the underground repository at the Waste Isolation Pilot Plant (WIPP) on February 14, 2014, as well as recommendations made by a Peer Review Team. As discussed, the Permittees believe that current knowledge and understanding of the waste have sufficiently matured such that this additional sampling is not required. Perhaps more importantly, the risk of both chemical and radiological exposure to the workers sampling the UNS drum material is unwarranted. This memo provides the technical justification and rationale for excluding the UNS sampling from the treatment studies.

  20. Systematic approach to verification and validation: High explosive burn models

    Energy Technology Data Exchange (ETDEWEB)

    Menikoff, Ralph [Los Alamos National Laboratory; Scovel, Christina A. [Los Alamos National Laboratory

    2012-04-16

    Most material models used in numerical simulations are based on heuristics and empirically calibrated to experimental data. For a specific model, key questions are determining its domain of applicability and assessing its relative merits compared to other models. Answering these questions should be a part of model verification and validation (V and V). Here, we focus on V and V of high explosive models. Typically, model developers implement their model in their own hydro code and use different sets of experiments to calibrate model parameters. Rarely can one find in the literature simulation results for different models of the same experiment. Consequently, it is difficult to assess objectively the relative merits of different models. This situation results in part from the fact that experimental data is scattered through the literature (articles in journals and conference proceedings) and that the printed literature does not allow the reader to obtain data from a figure in electronic form needed to make detailed comparisons among experiments and simulations. In addition, it is very time-consuming to set up and run simulations to compare different models over sufficiently many experiments to cover the range of phenomena of interest. The first difficulty could be overcome if the research community were to support an online web-based database. The second difficulty can be greatly reduced by automating procedures to set up and run simulations of similar types of experiments. Moreover, automated testing would be greatly facilitated if the data files obtained from a database were in a standard format that contained key experimental parameters as meta-data in a header to the data file. To illustrate our approach to V and V, we have developed a high explosive database (HED) at LANL. It now contains a large number of shock initiation experiments. Utilizing the header information in a data file from HED, we have written scripts to generate an input file for a hydro code

  1. B3LYP study of water adsorption on cluster models of Pt(1 1 1), Pt(1 0 0) and Pt(1 1 0): Effect of applied electric field

    Energy Technology Data Exchange (ETDEWEB)

    Blanco, Raquel; Orts, Jose Manuel [Departamento de Quimica Fisica e Instituto Universitario de Electroquimica, Universidad de Alicante, Apartado 99, E-03080 Alicante (Spain)

    2008-11-01

    A density functional theory (DFT) study of the adsorption of a water molecule on Pt(1 1 1), Pt(1 0 0) and Pt(1 1 0) surfaces has been carried out using cluster models, at the B3LYP/LANL2DZ,6-311++G(d,p) level. The water molecule binds preferentially at the top site on Pt(1 1 1) and Pt(1 0 0) with adsorption energy around -27 kJ mol{sup -1}, and is oriented with the molecular plane nearly parallel to the metal surface and the H atoms pointing away from it. On Pt(1 1 0) a hollow site is preferred, with adsorption energy of -32 kJ mol{sup -1}. Potential energy barriers for the rotation around an axis normal to the surface have been estimated to be below 1 kJ mol{sup -1} for Pt(1 1 1) and Pt(1 0 0) when water is adsorbed on top. Upon application of an external electric field (inducing positive charge density on the metal) adsorbed water is additionally stabilized on the three surfaces, especially at the top adsorption site, and adsorption on Pt(1 1 1) and Pt(1 0 0) becomes more favoured than on Pt(1 1 0). Good agreement has been found between harmonic vibrational frequencies calculated at the B3LYP/LANL2DZ,6-311++G(d,p) level and experimental frequencies for adsorbed water monomers on Pt(h k l) surfaces. (author)
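
    For reference, adsorption energies of the kind quoted above are normally obtained from total-energy differences of the bare cluster, the free molecule, and the adsorbate/cluster complex. One common convention (assumed here; the record does not state it explicitly) is

```latex
\[
E_{\mathrm{ads}} \;=\; E(\mathrm{Pt}_n\!\cdot\!\mathrm{H_2O}) \;-\; E(\mathrm{Pt}_n) \;-\; E(\mathrm{H_2O}),
\]
```

    so that negative values such as the -27 and -32 kJ mol{sup -1} reported above correspond to a bound adsorbate.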

  2. The Ahuachapan geothermal field, El Salvador: Exploitation model, performance predictions, economic analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ripperda, M.; Bodvarsson, G.S.; Lippmann, M.J.; Witherspoon, P.A.; Goranson, C.

    1991-05-01

    The Earth Sciences Division of Lawrence Berkeley Laboratory (LBL) is conducting a reservoir evaluation study of the Ahuachapan geothermal field in El Salvador. This work is being performed in cooperation with the Comision Ejecutiva Hidroelectrica del Rio Lempa (CEL) and the Los Alamos National Laboratory (LANL) with funding from the US Agency for International Development (USAID). This report describes the work done during the second year of the study (FY89--90). The first year's report included (1) the development of geological and conceptual models of the field, (2) the evaluation of the reservoir's initial thermodynamic and chemical conditions and their changes during exploitation, (3) the evaluation of interference test data and the observed reservoir pressure decline and (4) the development of a natural state model for the field. In the present report the results of reservoir engineering studies to evaluate different production-injection scenarios for the Ahuachapan geothermal field are discussed. The purpose of the work was to evaluate possible reservoir management options to enhance as well as to maintain the productivity of the field during a 30-year period (1990--2020). The ultimate objective was to determine the feasibility of increasing the electrical power output at Ahuachapan from the current level of about 50 MW{sub e} to the total installed capacity of 95 MW{sub e}. 20 refs., 75 figs., 10 tabs.

  3. Development of Extended Period Pressure-Dependent Demand Water Distribution Models

    Energy Technology Data Exchange (ETDEWEB)

    Judi, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Mcpherson, Timothy N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-03-20

    Los Alamos National Laboratory (LANL) has used modeling and simulation of water distribution systems for N-1 contingency analyses to assess criticality of water system assets. Critical components considered in these analyses include pumps, tanks, and supply sources, in addition to critical pipes or aqueducts. A contingency represents the complete removal of the asset from system operation. For each contingency, an extended period simulation (EPS) is run using EPANET. An EPS simulates water system behavior over a time period, typically at least 24 hours. It assesses the ability of a system to respond and recover from asset disruption through distributed storage in tanks throughout the system. Contingencies of concern are identified as those in which some portion of the water system has unmet delivery requirements. A delivery requirement is defined as an aggregation of water demands within a service area, similar to an electric power demand. The metric used to identify areas of unmet delivery requirement in these studies is a pressure threshold of 15 pounds per square inch (psi). This pressure threshold is used because it is below the required pressure for fire protection. Any location in the model with pressure that drops below this threshold at any time during an EPS is considered to have unmet service requirements and is used to determine cascading consequences. The outage area for a contingency is the aggregation of all service areas with a pressure below the threshold at any time during the EPS.
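
    A minimal sketch of the N-1 screening loop described above is given below. It assumes a hypothetical run_extended_period_simulation helper that returns, for each service area, the minimum pressure (in psi) seen over the 24-hour EPS; neither that function nor the asset list reflects the actual LANL/EPANET tooling.

```python
# Sketch of an N-1 contingency screen against a 15 psi service threshold.
# run_extended_period_simulation() is a hypothetical stand-in for an EPANET EPS run.
PRESSURE_THRESHOLD_PSI = 15.0

def run_extended_period_simulation(model, removed_asset):
    """Hypothetical: run a 24-h EPS with one asset removed and return
    {service_area: minimum pressure in psi over the simulation}."""
    raise NotImplementedError("replace with an EPANET/WNTR call")

def n_minus_1_outage_areas(model, assets):
    """Return {asset: [service areas with unmet delivery requirements]}."""
    outages = {}
    for asset in assets:                       # pumps, tanks, supply sources, critical pipes
        min_pressure = run_extended_period_simulation(model, removed_asset=asset)
        failed = [area for area, p in min_pressure.items()
                  if p < PRESSURE_THRESHOLD_PSI]
        if failed:                             # a contingency of concern
            outages[asset] = failed
    return outages
```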

  4. A kinetic model for corrosion and precipitation in non-isothermal LBE flow loop

    Science.gov (United States)

    He, By Xiaoyi; Li, Ning; Mineev, Mark

    2001-08-01

    A kinetic model was developed to estimate the corrosion/precipitation rate in a non-isothermal liquid lead-bismuth eutectic (LBE) flow loop. The model was based on solving the mass transport equation with the assumptions that convective transport dominates in the longitudinal flow direction and diffusion dominates in the transverse direction. The species concentration at the wall is assumed to be determined either by the solubility of species in LBE in the absence of oxygen or by the reduction reaction of the protective oxide film when active oxygen control is applied. Analyses show that the corrosion/precipitation rate depends on the flow velocity, the species diffusion rate, the oxygen concentration in LBE, as well as the temperature distribution along a loop. Active oxygen control can significantly reduce the corrosion/precipitation of the structural materials. It is shown that the highest corrosion/precipitation does not necessarily occur at the locations with the highest/lowest temperature. For a material testing loop being constructed at the Los Alamos National Laboratory (LANL), the highest corrosion occurs at the end of the heater zone, while the highest precipitation occurs in the return flow in the recuperator.
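
    The qualitative behaviour described above (corrosion where the local solubility exceeds the bulk concentration, precipitation where it falls below it, with the extremes displaced downstream of the temperature extremes) can be reproduced with a very reduced mass-balance march around the loop. The sketch below is an illustrative toy model with made-up temperature profile, solubility law, and mass-transfer coefficient; it is not the LANL kinetic model.

```python
# Toy steady-state corrosion/precipitation march around a closed non-isothermal loop.
# Wall concentration is taken as the local solubility c_sat(T); the wall flux is
# q = k * (c_sat(T) - c_bulk): q > 0 means corrosion (dissolution), q < 0 precipitation.
import math

def c_sat(T_kelvin):
    """Placeholder Arrhenius-type solubility of the corroding species in LBE."""
    return 1.0e4 * math.exp(-5000.0 / T_kelvin)   # arbitrary units

def loop_profile(temps, velocity=1.0, k=1.0e-2, ds=0.1, n_laps=200):
    """March the bulk concentration around the loop to a periodic steady state
    and return the wall flux per segment (corrosion > 0, precipitation < 0)."""
    c = c_sat(sum(temps) / len(temps))            # initial guess for the bulk concentration
    flux = [0.0] * len(temps)
    for _ in range(n_laps):
        for i, T in enumerate(temps):
            q = k * (c_sat(T) - c)                # wall flux driven by (c_sat - c_bulk)
            flux[i] = q
            c += q * ds / velocity                # bulk mass balance along the flow
    return flux

# Heater ramp -> hot leg -> recuperator/cooler ramp -> cold leg (temperatures in K):
heater   = [623.0 + (773.0 - 623.0) * i / 9 for i in range(10)]
hot_leg  = [773.0] * 10
cooler   = [773.0 - (773.0 - 623.0) * i / 9 for i in range(10)]
cold_leg = [623.0] * 10
profile = loop_profile(heater + hot_leg + cooler + cold_leg)
print("peak corrosion segment:", profile.index(max(profile)),
      "peak precipitation segment:", profile.index(min(profile)))
```

    With the ramped heater, the peak dissolution falls at the last heater segment, mirroring the "end of the heater zone" result quoted above.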

  5. Effectiveness and Utility of a Case-Based Model for Delivering Engineering Ethics Professional Development Units

    Directory of Open Access Journals (Sweden)

    Heidi Ann Hahn

    2015-04-01

    This article describes an action research project conducted at Los Alamos National Laboratory (LANL) to resolve a problem with the ability of licensed and/or certified engineers to obtain the ethics-related professional development units or hours (PDUs or PDHs) needed to maintain their credentials. Because of the recurring requirement and the static nature of the information, an initial, in-depth training followed by annually updated refresher training was proposed. A case model approach, with online delivery, was selected as the optimal pedagogical model for the refresher training. In the first two years, the only data that was collected was throughput and information retention. Response rates indicated that the approach was effective in helping licensed professional engineers obtain the needed PDUs. The rates of correct responses suggested that knowledge transfer regarding ethical reasoning had occurred in the initial training and had been retained in the refresher. In FY13, after completing the refresher, learners received a survey asking their opinion of the effectiveness and utility of the course, as well as their impressions of the case study format vs. the typical presentation format. Results indicate that the courses have been favorably received and that the case study method supports most of the pedagogical needs of adult learners as well as, if not better than, presentation-based instruction. Future plans for improvement are focused on identifying and evaluating methods for enriching online delivery of the engineering ethics cases.

  6. Preliminary Modeling of Accident Tolerant Fuel Concepts under Accident Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Gamble, Kyle A.; Hales, Jason D.

    2016-12-01

    The catastrophic events that occurred at the Fukushima-Daiichi nuclear power plant in 2011 have led to widespread interest in research of alternative fuels and claddings that are proposed to be accident tolerant. Thus, the United States Department of Energy through its NEAMS (Nuclear Energy Advanced Modeling and Simulation) program has funded an Accident Tolerant Fuel (ATF) High Impact Problem (HIP). The ATF HIP is funded for a three-year period. The purpose of the HIP is to perform research into two potential accident tolerant concepts and provide an in-depth report to the Advanced Fuels Campaign (AFC) describing the behavior of the concepts, both of which are being considered for inclusion in a lead test assembly scheduled for placement into a commercial reactor in 2022. The initial focus of the HIP is on uranium silicide fuel and iron-chromium-aluminum (FeCrAl) alloy cladding. Utilizing the expertise of three national laboratory participants (INL, LANL, and ANL), a comprehensive multiscale approach to modeling is being used, including atomistic modeling, molecular dynamics, rate theory, phase-field, and fuel performance simulations. In this paper, we present simulations of two proposed accident tolerant fuel systems: U3Si2 fuel with Zircaloy-4 cladding, and UO2 fuel with FeCrAl cladding. The simulations investigate the fuel performance response of the proposed ATF systems under Loss of Coolant and Station Blackout conditions using the BISON code. Sensitivity analyses are completed using Sandia National Laboratories’ DAKOTA software to determine which input parameters (e.g., fuel specific heat) have the greatest influence on the output metrics of interest (e.g., fuel centerline temperature). Early results indicate that each concept has significant advantages as well as areas of concern. Further work is required prior to formulating the proposition report for the Advanced Fuels Campaign.

  7. Model-driven decision support for monitoring network design: methods and applications

    Science.gov (United States)

    Vesselinov, V. V.; Harp, D. R.; Mishra, P. K.; Katzman, D.

    2012-12-01

    A crucial aspect of any decision-making process for environmental management of contaminated sites and protection of groundwater resources is the identification of scientifically defensible remediation scenarios. The selected scenarios are ranked based on both their protective and cost effectiveness. The decision-making process is facilitated by implementation of site-specific data- and model-driven analyses for decision support (DS) taking into account existing uncertainties to evaluate alternative characterization and remedial activities. However, due to lack of data and/or complex interdependent uncertainties (conceptual elements, model parameters, measurement/computational errors, etc.), the DS optimization problem is ill-posed (non-unique) and the model-prediction uncertainties are difficult to quantify. Recently, we have developed and implemented several novel theoretical approaches and computational algorithms for model-driven decision support. New and existing DS tools have been employed for model analyses of the fate and extent of a chromium plume in the regional aquifer at Sandia Canyon Site, LANL. Since 2007, we have performed three iterations of DS analyses implementing different models, decision-making tools, and data sets providing guidance on design of a subsurface monitoring network for (1) characterization of the flow and transport processes, and (2) protection of the water users. The monitoring network is augmented by new wells at locations where newly acquired data can effectively reduce uncertainty in model-predicted contaminant concentrations. A key component of the DS analyses is contaminant source identification. Due to data and conceptual uncertainties, subsurface processes controlling the contaminant arrival at the top of the regional aquifer are not well defined. Nevertheless, the model-based analyses of the existing data and conceptual knowledge, including respective uncertainties, provide constrained probabilistic estimates of the

  8. Interpretation of Urinary Excretion Data From Plutonium Wound Cases Treated With DTPA: Application of Different Models and Approaches.

    Science.gov (United States)

    Poudel, Deepesh; Bertelli, Luiz; Klumpp, John A; Waters, Tom L

    2017-07-01

    After a chelation treatment, assessment of intake and doses is the primary concern of an internal dosimetrist. Using the urinary excretion data from two actual wound cases encountered at Los Alamos National Laboratory (LANL), this paper discusses several methods that can be used to interpret intakes from the urinary data collected after one or multiple chelation treatments. One of the methods uses only the data assumed to be unaffected by chelation (data collected beyond 100 d after the last treatment). This method, used by many facilities for official dose records, was implemented by employing maximum likelihood analysis and Bayesian analysis methods. The impacts of an improper assumption about the physicochemical behavior of a radioactive material and the importance of the use of a facility-specific biokinetic model when available have also been demonstrated. Another method analyzed both the affected and unaffected urinary data using an empirical urinary excretion model. This method, although case-specific, was useful in determining the actual intakes and the doses averted or the reduction in body burdens due to chelation treatments. This approach was important in determining the enhancement factors, the behavior of the chelate, and other observations that may be pertinent to several DTPA compartmental modeling approaches being conducted by the health physics community.
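
    As a schematic of the first method described above (fitting only the bioassay data judged to be unaffected by DTPA), the sketch below performs a one-parameter maximum-likelihood fit of the intake assuming lognormally distributed measurement errors. The excretion function iref, the scatter parameter, and the data are invented placeholders, not the ICRP or LANL-specific biokinetic model or the actual cases.

```python
# One-parameter maximum-likelihood intake fit using only chelation-unaffected bioassay data
# (samples collected > 100 d after the last DTPA administration).
# iref(t) is a placeholder excretion fraction; real assessments use a biokinetic model.
import math
from scipy.optimize import minimize_scalar

def iref(t_days):
    """Placeholder: fraction of the intake excreted in urine per day at time t."""
    return 2e-4 * math.exp(-t_days / 500.0) + 5e-6

# (time after intake in days, measured 24-h urine activity in Bq) -- invented values
data = [(120, 0.0031), (180, 0.0026), (365, 0.0019), (540, 0.0014)]
sigma_ln = 0.4   # assumed lognormal measurement scatter (geometric SD ~ 1.5)

def neg_log_likelihood(intake_bq):
    if intake_bq <= 0:
        return float("inf")
    nll = 0.0
    for t, m in data:
        predicted = intake_bq * iref(t)
        z = (math.log(m) - math.log(predicted)) / sigma_ln
        nll += 0.5 * z * z + math.log(sigma_ln * m * math.sqrt(2.0 * math.pi))
    return nll

fit = minimize_scalar(neg_log_likelihood, bounds=(1e-3, 1e3), method="bounded")
print(f"Maximum-likelihood intake estimate: {fit.x:.1f} Bq")
```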

  9. Development of Improved Algorithms and Multiscale Modeling Capability with SUNTANS

    Science.gov (United States)

    2013-09-30

    [Abstract text fragmentary in this record.] The recoverable excerpt states that continuity of volume and mass are guaranteed for the hydrostatic solver, and cites a related presentation: Vitousek, S., and Fringer, O.B., “Internal tides over a submerged ridge,” Ocean Turbulence Conference Abstract, CNLS, LANL, Santa Fe, NM, June 2013.

  10. Dynamic Experiments and Constitutive Model Performance for Polycarbonate

    Science.gov (United States)

    2014-07-01

    [Abstract text fragmentary in this record; only report boilerplate and acknowledgments were extracted.] The acknowledgments credit collaboration with Carl Trujillo and Daniel T. Martinez of LANL MST-8.

  11. Development of a multimedia radionuclide exposure model for low-level waste management

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Y.; Whelan, G.; Skaggs, R.L.

    1982-03-01

    A method is being developed for assessing exposures of the air, water, and plants to low-level waste (LLW) as a part of an overall development effort of a LLW site evaluation methodology. The assessment methodology will predict LLW exposure levels in the environment by simulating dominant mechanisms of LLW migration and fate. The methodology consists of a series of physics-based models with proven histories of success; the models interact with each other to simulate LLW transport in the ecosystem. A scaled-down version of the methodology was developed first by combining the terrestrial ecological model, BIOTRAN; the overland transport model, ARM; the instream hydrodynamic model, DKWAV; and the instream sediment-contaminant transport model, TODAM (a one-dimensional version of SERATRA). The methodology was used to simulate the migration of /sup 239/Pu from a shallow-land disposal site (known as Area C) located near the head of South Mortandad Canyon on the LANL site in New Mexico. The scenario assumed that /sup 239/Pu would be deposited on the land surface through the natural processes of plant growth, LLW uptake, dryfall, and litter decomposition. Runoff events would then transport /sup 239/Pu to and in the canyon. The model provided sets of simulated LLW levels in soil, water and terrestrial plants in the region surrounding the site under a specified land-use and a waste management option. Over a 100-yr simulation period, only an extremely small quantity (6 x 10/sup -9/ times the original concentration) of buried /sup 239/Pu was taken up by plants and deposited on the land surface. Only a small fraction (approximately 1%) of that contamination was further removed by soil erosion from the site and carried to the canyon, where it remained. Hence, the study reveals that the environment around Area C has integrity high enough to curtail LLW migration under recreational land use.

  12. Magnetic Local Time dependency in modeling of the Earth radiation belts

    Science.gov (United States)

    Herrera, Damien; Maget, Vincent; Bourdarie, Sébastien; Rolland, Guy

    2017-04-01

    For many years, ONERA has been at the forefront of the modeling of the Earth radiation belts thanks to the Salammbô model, which accurately reproduces their dynamics over a time scale of the particles' drift period. This implies that we implicitly assume a homogeneous distribution of the trapped particles along a given drift shell. However, radiation belts are inhomogeneous in Magnetic Local Time (MLT). So, we need to take this new coordinate into account to model rigorously the dynamical structures, particularly those induced during a geomagnetic storm. For this purpose, we are working on both the numerical resolution of the Fokker-Planck diffusion equation included in the model and on the MLT dependency of physics-based processes acting in the Earth radiation belts. The aim of this talk is first to present the 4D equation used and the different steps we followed to build the Salammbô 4D model, before focusing on the physical processes taken into account in the Salammbô code, especially transport due to the convection electric field. Firstly, we will briefly introduce the Salammbô 4D code developed, describing its numerical scheme and the physics-based processes modeled. Then, we will focus our attention on the impact of the outer boundary condition (localisation and spectrum) at lower L∗ shells by comparing model results with geosynchronous data from LANL-GEO satellites. Finally, we will discuss the prime importance of the convection electric field to the radial and drift transport of low-energy particles around the Earth.
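
    For orientation, when the MLT (drift phase φ) coordinate is retained, the bounce- and drift-averaged transport equation takes, schematically, the form below. The record does not give the exact operators used in Salammbô 4D, so this display only shows the generic structure assumed here: radial diffusion, azimuthal transport by gradient-curvature drift and convection, local pitch-angle/energy diffusion, and source and loss terms.

```latex
\[
\frac{\partial f}{\partial t}
  = L^{2}\,\frac{\partial}{\partial L}\!\left(\frac{D_{LL}}{L^{2}}\,\frac{\partial f}{\partial L}\right)
  \;-\; \frac{\partial}{\partial \varphi}\!\left[\bigl(\omega_{d} + \omega_{E\times B}\bigr)\,f\right]
  \;+\; \left.\frac{\partial f}{\partial t}\right|_{\alpha,\,E}
  \;+\; S \;-\; \frac{f}{\tau_{\mathrm{loss}}}
\]
```

    Here f is the phase-space density, D_LL the radial diffusion coefficient, ω_d the gradient-curvature drift rate, ω_{E×B} the azimuthal transport rate driven by the convection electric field, and S and τ_loss generic source and loss terms.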

  13. First principles justification of a ``single wave model'' for a general electrostatic instability

    Science.gov (United States)

    Crawford, J. D.; Jayaraman, A.

    1997-11-01

    The coefficients in the amplitude equation for an unstable mode in a multi-species Vlasov plasma are singular as the growth rate γ approaches zero. Rescaling the mode amplitude |A(t)| = γ^{5/2} r(γt) cancels these singularities to all orders. (J.D. Crawford and A. Jayaraman, submitted to J. Math. Phys.; available from http://xxx.lanl.gov/abs/patt-sol/9706001) In addition, singularities arise in the asymptotic form of f(x,v,t); there are poles in the complex-velocity plane that approach the real velocity axis at the phase velocity v_p as γ → 0^+. However the numerators contain factors of A(t), and we analyze the resulting product by introducing a singular velocity variable u = (v - v_p)/γ. In an O(γ) neighborhood of v_p, the weighted coefficients have finite, non-zero limits; outside this neighborhood, the coefficients vanish at γ=0. The complete asymptotic description of the instability contains non-resonant particles driven linearly by a monochromatic electric field E while the resonant particles at v_p remain strongly nonlinear and yield a density spectrum with many wavenumbers. This picture recalls the single wave model of O'Neil et al. introduced for a cold beam-plasma instability with fixed ions.
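
    The two rescalings quoted in the abstract can be collected into a single display (this restates the source; no additional assumptions are introduced):

```latex
\[
|A(t)| = \gamma^{5/2}\, r(\gamma t), \qquad
u = \frac{v - v_p}{\gamma}, \qquad \gamma \to 0^{+},
\]
```

    where r is the rescaled amplitude and u is the stretched velocity variable used to resolve the O(γ) resonant layer around the phase velocity v_p.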

  14. Evaluation of the Regional Arctic System Model (RASM) - Process-resolving Arctic Climate Simulation

    Science.gov (United States)

    Maslowski, Wieslaw

    2016-04-01

    The Regional Arctic System Model (RASM) has been developed to better understand the past and present operation of the Arctic System at process scale and to predict its change at time scales from days to decades, in support of US environmental assessment and prediction needs. RASM is a limited-area, fully coupled ice-ocean-atmosphere-land model that uses the Community Earth System Model (CESM) framework. It includes the Weather Research and Forecasting (WRF) model, the LANL Parallel Ocean Program (POP) and Community Ice Model (CICE) and the Variable Infiltration Capacity (VIC) land hydrology model. The ocean and sea ice models used in RASM are regionally configured versions of those used in CESM, while WRF replaces the Community Atmospheric Model (CAM). In addition, a streamflow routing (RVIC) model was recently implemented in RASM to transport the freshwater flux from the land surface to the Arctic Ocean. The model domain is configured at an eddy-permitting resolution of 1/12° (or ~9 km) for the ice-ocean and 50 km for the atmosphere-land model components. It covers the entire Northern Hemisphere marine cryosphere, terrestrial drainage to the Arctic Ocean and its major inflow and outflow pathways, with optimal extension into the North Pacific / Atlantic to model the passage of cyclones into the Arctic. In addition, a 1/48° (or ~2.4 km) grid for the ice-ocean model components has been recently configured. All RASM components are coupled at high frequency (currently at 20-minute intervals) to allow realistic representation of inertial interactions among the model components. In addition to an overview of RASM technical details, model results are presented from both fully coupled configurations and subsets of RASM, where the atmospheric and land components are replaced with prescribed realistic atmospheric reanalysis data to evaluate model skill in representing seasonal climatology as well as interannual and multidecadal climate variability. Selected physical processes and resulting

  15. Realistic modeling and analysis of synchronization dynamics in power-grid networks

    Science.gov (United States)

    Nishikawa, Takashi

    2015-03-01

    An imperative condition for the functioning of a power-grid network is that its power generators remain synchronized. Disturbances can prompt desynchronization, which is a process that has been involved in large power outages. In this talk I will first give a comparative review of three leading models of synchronization in power-grid networks. Each of these models can be derived from first principles under a common framework and represents a power grid as a complex network of coupled second-order phase oscillators with both forcing and damping terms. Since these models require dynamical parameters that are unavailable in typical power-grid datasets, I will discuss an approach to estimate these parameters. The models will be used to show that if the network structure is not homogeneous, generators with identical parameters need to be treated as non-identical oscillators in general. For one of the models, which describes the dynamics of coupled generators through a network of effective interactions, I will derive a condition under which the desired synchronous state is stable. This condition gives rise to a methodology to specify parameter assignments that can enhance synchronization of any given network, which I will demonstrate for a selection of both test systems and real power grids. These parameter assignments can be realized through very fast control loops, and this may help devise new control schemes that offer an additional layer of protection, thus contributing to the development of smart grids that can recover from failures in real time. Funded by ISEN, NSF, and LANL LDRD.
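
    The "coupled second-order phase oscillators with both forcing and damping terms" referred to above are commonly written in the swing-equation form below. This is only the generic structure assumed here for illustration, not the specific parametrization of any of the three models compared in the talk.

```latex
\[
\frac{2H_i}{\omega_R}\,\ddot{\delta}_i \;+\; D_i\,\dot{\delta}_i
  \;=\; P_i \;-\; \sum_{j=1}^{N} K_{ij}\,\sin\!\left(\delta_i - \delta_j\right),
  \qquad i = 1,\dots,N,
\]
```

    where δ_i is the rotor phase of generator i relative to the synchronously rotating frame, H_i its inertia constant, ω_R the reference angular frequency, D_i a damping coefficient, P_i the net injected power (the forcing term), and K_ij the effective coupling strengths of the network.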

  16. Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1

    Energy Technology Data Exchange (ETDEWEB)

    Vigil,Benny Manuel [Los Alamos National Laboratory; Ballance, Robert [SNL; Haskell, Karen [SNL

    2012-08-09

    Cielo is a massively parallel supercomputer funded by the DOE/NNSA Advanced Simulation and Computing (ASC) program, and operated by the Alliance for Computing at Extreme Scale (ACES), a partnership between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL). The primary Cielo compute platform is physically located at Los Alamos National Laboratory. This Cielo Computational Environment Usage Model documents the capabilities and the environment to be provided for the Q1 FY12 Level 2 Cielo Capability Computing (CCC) Platform Production Readiness Milestone. This document describes specific capabilities, tools, and procedures to support both local and remote users. The model is focused on the needs of the ASC user working in the secure computing environments at Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory, or Sandia National Laboratories, but also addresses the needs of users working in the unclassified environment. The Cielo Computational Environment Usage Model maps the provided capabilities to the tri-Lab ASC Computing Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment capabilities of the ASC community. A description of ACE requirements met, and those requirements that are not met, are included in each section of this document. The Cielo Computing Environment, along with the ACE mappings, has been issued and reviewed throughout the tri-Lab community.

  17. The engineering institute of Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, Charles R [Los Alamos National Laboratory; Park, Gyuhae [Los Alamos National Laboratory; Cornwell, Phillip J [Los Alamos National Laboratory; Todd, Michael D [UCSD

    2008-01-01

    Los Alamos National Laboratory (LANL) and the University of California, San Diego (UCSD) have taken the unprecedented step of creating a collaborative, multi-disciplinary graduate education program and associated research agenda called the Engineering Institute. The mission of the Engineering Institute is to develop a comprehensive approach for conducting LANL mission-driven, multidisciplinary engineering research and to improve recruiting, revitalization, and retention of the current and future staff necessary to support LANL's national security responsibilities. The components of the Engineering Institute are (1) a joint LANL/UCSD degree program, (2) joint LANL/UCSD research projects, (3) the Los Alamos Dynamic Summer School, (4) an annual workshop, and (5) industry short courses. This program is a possible model for future industry/government interactions with university partners.

  18. Multiscale Analysis, Modeling, and Processing of Higher-Dimensional Geometric Data

    Science.gov (United States)

    2007-08-31

    [Abstract text fragmentary in this record; the extracted excerpt lists project activities and honors.] These include presentations at IMA, the TI Developers Conference, Google, Michigan State, Boston U., Toledo, LANL, and the AMD Global Vision Conference; work on Connexions (cnx.org); the George R. Brown Award for Superior Teaching at Rice (third time) in 2006; and, in 2004, the elevation of Richard Baraniuk to the Victor E…

  19. Spacecraft Charging Modeling - NASCAP-2K 2013 Annual Report

    Science.gov (United States)

    2013-09-20

    [Abstract text fragmentary in this record.] The recoverable excerpt notes that the modeling approach is documented in References 4 and 5, and that the measurements used in the study were taken by the LANL (Los Alamos National Laboratory) MPA (Magnetospheric Plasma Analyzer) instruments; the remainder of the extracted text is report documentation boilerplate.

  20. Leadership Models.

    Science.gov (United States)

    Freeman, Thomas J.

    This paper discusses six different models of organizational structure and leadership, including the scalar chain or pyramid model, the continuum model, the grid model, the linking pin model, the contingency model, and the circle or democratic model. Each model is examined in a separate section that describes the model and its development, lists…

  1. Calculations in Support of JAEA ZEUS Experiments

    Energy Technology Data Exchange (ETDEWEB)

    James, Michael R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-21

    A comparison of Los Alamos National Laboratory’s model for measuring Pb void reactivity with the JAEA model. Points of comparison:
    • Stacking of HEU/Pb is slightly off, based on a different “unit stack” composition.
    • The LANL model has no “top plate”.
    • The LANL model does not have …
    • Small difference in Pb/Al plates: 1.212 cm (JAEA) vs. 1.2 cm (LANL).
    • Material uncertainties: the composition of Pb is the largest uncertainty. Other issues could be in the composition of the upper reflector, lower reflector, and corner and side reflectors.

  2. Numerical modeling of fluid flow in a fault zone: a case of study from Majella Mountain (Italy).

    Science.gov (United States)

    Romano, Valentina; Battaglia, Maurizio; Bigi, Sabina; De'Haven Hyman, Jeffrey; Valocchi, Albert J.

    2017-04-01

    The study of fluid flow in fractured rocks plays a key role in reservoir management, including CO2 sequestration and waste isolation. We present a numerical model of fluid flow in a fault zone, based on field data acquired in Majella Mountain, in the Central Apennines (Italy). This fault zone is considered a good analogue because of the massive presence of fluid migration in the form of tar. Faults are mechanical features and cause permeability heterogeneities in the upper crust, so they strongly influence fluid flow. The distribution of the main components (core, damage zone) can lead the fault zone to act as a conduit, a barrier, or a combined conduit-barrier system. We integrated existing information and our own structural surveys of the area to better identify the major fault features (e.g., type of fractures, statistical properties, geometrical and petro-physical characteristics). In our model the damage zones of the fault are described as a discretely fractured medium, while the core of the fault is treated as a porous one. Our model utilizes the dfnWorks code, a parallelized computational suite developed at Los Alamos National Laboratory (LANL) that generates a three-dimensional Discrete Fracture Network (DFN) of the damage zones of the fault and characterizes its hydraulic parameters. The challenge of the study is the coupling between the discrete domain of the damage zones and the continuum one of the core. The field investigations and the basic computational workflow will be described, along with preliminary results of fluid flow simulation at the scale of the fault.
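
    To make the discrete-fracture-network idea concrete, the sketch below generates a toy three-dimensional network of disc-shaped fractures with truncated power-law radii and random orientations and reports a simple intensity statistic. It is purely illustrative: it does not use the dfnWorks API or the Majella field statistics, and all parameters are placeholders.

```python
# Toy 3-D discrete fracture network: disc-shaped fractures with truncated
# power-law radii and uniformly random orientations inside a cubic damage-zone block.
# Purely illustrative; not the dfnWorks workflow or the Majella data.
import numpy as np

rng = np.random.default_rng(42)

def sample_power_law_radii(n, r_min=0.5, r_max=20.0, alpha=2.5):
    """Inverse-CDF sampling of a truncated power law p(r) ~ r**(-alpha)."""
    u = rng.uniform(size=n)
    a = 1.0 - alpha
    return (u * (r_max**a - r_min**a) + r_min**a) ** (1.0 / a)

def sample_unit_normals(n):
    """Uniformly distributed fracture-plane normals on the unit sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

n_fractures = 500
domain = 100.0                                   # edge length of the damage-zone block (m)
centers = rng.uniform(0.0, domain, size=(n_fractures, 3))
radii = sample_power_law_radii(n_fractures)
normals = sample_unit_normals(n_fractures)

# A crude intensity measure (P32: fracture area per unit volume).
p32 = np.sum(np.pi * radii**2) / domain**3
print(f"P32 fracture intensity: {p32:.4f} m^2/m^3, largest radius: {radii.max():.1f} m")
```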

  3. Model Transformations? Transformation Models!

    NARCIS (Netherlands)

    Bézivin, J.; Büttner, F.; Gogolla, M.; Jouault, F.; Kurtev, I.; Lindow, A.

    2006-01-01

    Much of the current work on model transformations seems essentially operational and executable in nature. Executable descriptions are necessary from the point of view of implementation. But from a conceptual point of view, transformations can also be viewed as descriptive models by stating only the

  4. Modelling business models

    NARCIS (Netherlands)

    Simonse, W.L.

    2014-01-01

    Business model design does not always produce a “design” or “model” as the expected result. However, when designers are involved, a visual model or artifact is produced. To assist strategic managers in thinking about how they can act, the designers’ challenge is to combine both strategy and design n

  5. A case study testing the cavity mode model of the magnetosphere

    Directory of Open Access Journals (Sweden)

    D. V. Sarafopoulos

    2005-07-01

    Based on a case study we test the cavity mode model of the magnetosphere, looking for eigenfrequencies via multi-satellite and multi-instrument measurements. Geotail and ACE provide information on the interplanetary medium that dictates the input parameters of the system; the four Cluster satellites monitor the magnetopause surface waves; the POLAR (L=9.4) and LANL 97A (L=6.6) satellites reveal two in-situ monochromatic field line resonances (FLRs) with T=6 and 2.5 min, respectively; and the IMAGE ground magnetometers demonstrate latitude-dependent delays in signature arrival times, as inferred by Sarafopoulos (2004b). Similar dispersive structures showing systematic delays are also extensively scrutinized by Sarafopoulos (2005) and interpreted as tightly associated with the so-called pseudo-FLRs, which show almost the same observational characteristics as an authentic FLR. In particular for this episode, successive solar wind pressure pulses produce recurring ionosphere twin vortex Hall currents which are identified on the ground as pseudo-FLRs. The BJN ground magnetometer records the pseudo-FLR (like the other IMAGE station responses) associated with an intense power spectral density ranging from 8 to 12 min and, in addition, two discrete resonant lines with T=3.5 and 7 min. In this case study, even though the magnetosphere is evidently affected by a broad-band compressional wave originating upstream of the bow shock, nevertheless, we do not identify any cavity mode oscillation within the magnetosphere. We fail, also, to identify any of the cavity mode frequencies proposed by Samson (1992).

    Keywords. Magnetospheric physics (Magnetosphere-ionosphere interactions; Solar wind-magnetosphere interactions; MHD waves and instabilities)

  6. Statistical Analysis of Demographic and Temporal Differences in LANL's 2014 Voluntary Protection Program Survey

    Energy Technology Data Exchange (ETDEWEB)

    Davis, Adam Christopher [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Booth, Steven Richard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-08-20

    Voluntary Protection Program (VPP) surveys were conducted in 2013 and 2014 to assess the degree to which workers at Los Alamos National Laboratory feel that their safety is valued by their management and peers. The goal of this analysis is to determine whether the difference between the VPP survey scores in 2013 and 2014 is significant, and to present the data in a way such that it can help identify either positive changes or potential opportunities for improvement. Data for several questions intended to identify the demographic groups of the respondent are included in both the 2013 and 2014 VPP survey results. These can be used to identify any significant differences among groups of employees as well as to identify any temporal trends in these cohorts.
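
    A minimal sketch of the kind of year-over-year comparison described above is given below, using a two-sample Mann-Whitney U test within each demographic group. The data layout, group labels, and choice of test are assumptions made for illustration, not the method actually used in the LANL report.

```python
# Compare 2013 vs. 2014 VPP survey scores within each demographic group.
# Data layout, group labels and the nonparametric test choice are illustrative assumptions.
from scipy.stats import mannwhitneyu

# {group: {year: [individual Likert-scale responses]}} -- invented example data
scores = {
    "Technical staff": {2013: [4, 5, 3, 4, 4, 5, 2, 4], 2014: [5, 5, 4, 4, 5, 4, 3, 5]},
    "Support staff":   {2013: [3, 4, 4, 2, 3, 4, 3, 3], 2014: [3, 4, 3, 3, 4, 3, 2, 4]},
}

for group, by_year in scores.items():
    stat, p_value = mannwhitneyu(by_year[2013], by_year[2014], alternative="two-sided")
    mean_2013 = sum(by_year[2013]) / len(by_year[2013])
    mean_2014 = sum(by_year[2014]) / len(by_year[2014])
    direction = "up" if mean_2014 > mean_2013 else "down"
    print(f"{group}: mean shift {direction}, U={stat:.1f}, p={p_value:.3f}")
```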

  7. Effects of Various Blowout Panel Configurations on the Structural Response of LANL Building 16-340 to Internal Explosions

    Energy Technology Data Exchange (ETDEWEB)

    Wilke, Jason P. [New Mexico Inst. of Mining and Technology, Socorro, NM (United States)

    2005-09-01

    The risk of accidental detonation is present whenever any type of high explosives processing activity is performed. These activities are typically carried out indoors to protect processing equipment from the weather and to hide possibly secret processes from view. Often, highly strengthened reinforced concrete buildings are employed to house these activities. These buildings may incorporate several design features, including the use of lightweight frangible blowout panels, to help mitigate blast effects. These panels are used to construct walls that are durable enough to withstand the weather, but are of minimal weight to provide overpressure relief by quickly moving outwards and creating a vent area during an accidental explosion. In this study the behavior of blowout panels under various blast loading conditions was examined. External loadings from explosions occurring in nearby rooms were of primary interest. Several reinforcement systems were designed to help blowout panels resist failure from external blast loads while still allowing them to function as vents when subjected to internal explosions. The reinforcements were studied using two analytical techniques, yield-line analysis and modal analysis, and the hydrocode AUTODYN. A blowout panel reinforcement design was created that could prevent panels from being blown inward by external explosions. This design was found to increase the internal loading of the building by 20%, as compared with nonreinforced panels. Nonreinforced panels were found to increase the structural loads by 80% when compared to an open wall at the panel location.

  8. LANL Virtual Center for Chemical Hydrogen Storage: Chemical Hydrogen Storage Using Ultra-high Surface Area Main Group Materials

    Energy Technology Data Exchange (ETDEWEB)

    Susan M. Kauzlarich; Phillip P. Power; Doinita Neiner; Alex Pickering; Eric Rivard; Bobby Ellis, T. M.; Atkins, A. Merrill; R. Wolf; Julia Wang

    2010-09-05

    The focus of the project was to design and synthesize light element compounds and nanomaterials that will reversibly store molecular hydrogen for hydrogen storage materials. The primary targets investigated during the last year were amine and hydrogen terminated silicon (Si) nanoparticles, Si alloyed with lighter elements (carbon (C) and boron (B)) and boron nanoparticles. The large surface area of nanoparticles should facilitate a favorable weight to volume ratio, while the low molecular weight elements such as B, nitrogen (N), and Si exist in a variety of inexpensive and readily available precursors. Furthermore, small NPs of Si are nontoxic and non-corrosive. Insights gained from these studies will be applied toward the design and synthesis of hydrogen storage materials that meet the DOE 2010 hydrogen storage targets: cost, hydrogen capacity and reversibility. Two primary routes were explored for the production of nanoparticles smaller than 10 nm in diameter. The first was the reduction of the elemental halides to achieve nanomaterials with chloride surface termination that could subsequently be replaced with amine or hydrogen. The second was the reaction of alkali metal Si or Si alloys with ammonium halides to produce hydrogen capped nanomaterials. These materials were characterized via X-ray powder diffraction, TEM, FTIR, TG/DSC, and NMR spectroscopy.

  9. Improved Technologies for Decontamination of Crated Large Metal Objects LANL Release No: LA-UR-02-0072

    Energy Technology Data Exchange (ETDEWEB)

    McFee, J.; Stallings, E.; Barbour, K.

    2002-02-26

    The Los Alamos Large Scale Demonstration and Deployment Project (LSDDP), in support of the US Department of Energy (DOE) Deactivation and Decommissioning Focus Area (DDFA), is identifying and demonstrating technologies to reduce the cost and risk of management of transuranic-element-contaminated large metal objects, i.e., gloveboxes. The previously conducted demonstrations supported characterization and ''front end'' aspects of the Los Alamos Decontamination and Volume Reduction System (DVRS) project. The first demonstration was shown to save the DVRS project approximately $200,000 per year, and characterization technologies have been estimated to save a month of DVRS operation per year. In FY01, demonstrations of decontamination technologies, communication systems, and waste data collection systems provided additional savings equivalent to another $200K per year of operation. The Los Alamos Large Scale Demonstration and Deployment Project continues to provide substantial cost savings to the DVRS process in this second round of demonstrations. DVRS cost savings of $400K per year can now be counted, with additional efficiency savings of up to 30% on many tasks.

  10. Multi-Decadal Variability in the Bering Sea: A Synthesis of Model Results and Observations from 1948 to the Present

    Science.gov (United States)

    2013-12-01

    [Abstract text fragmentary in this record.] The recoverable excerpts identify the thesis co-advisors as Wieslaw Maslowski and Jaclyn Clement Kinney, list acronyms used in the work (LANL, Los Alamos National Laboratory; MDA, maritime domain awareness; METOC, meteorology and oceanography; MIZ, marginal ice zone), and note that, to address United States involvement in the Arctic region (USCG 2013), numerous guiding documents have been published and subsequently…

  11. Modelling SDL, Modelling Languages

    Directory of Open Access Journals (Sweden)

    Michael Piefel

    2007-02-01

    Today's software systems are too complex to implement and model using only one language. As a result, modern software engineering uses different languages for different levels of abstraction and different system aspects. Thus, handling an increasing number of related or integrated languages is the most challenging task in the development of tools. We use object-oriented metamodelling to describe languages. Object orientation allows us to derive abstract reusable concept definitions (concept classes) from existing languages. This language definition technique concentrates on semantic abstractions rather than syntactical peculiarities. We present a set of common concept classes that describe structure, behaviour, and data aspects of high-level modelling languages. Our models contain syntax modelling using the OMG MOF as well as static semantic constraints written in OMG OCL. We derive metamodels for subsets of SDL and UML from these common concepts, and we show for parts of these languages that they can be modelled and related to each other through the same abstract concepts.

  12. Actant Models

    DEFF Research Database (Denmark)

    Poulsen, Helle

    1996-01-01

    This paper presents a functional modelling method called Actant Modelling, rooted in linguistics and semiotics. Actant modelling can be integrated with Multilevel Flow Modelling (MFM) in order to give an interpretation of actants.

  13. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known &...

  14. Modelling Practice

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter deals with the practicalities of building, testing, deploying and maintaining models. It gives specific advice for each phase of the modelling cycle. To do this, a modelling framework is introduced which covers: problem and model definition; model conceptualization; model data requirements; model construction; model solution; model verification; model validation and finally model deployment and maintenance. Within the adopted methodology, each step is discussed through the consideration of key issues and questions relevant to the modelling activity. Practical advice, based on many years of experience, is provided to direct the reader in their activities. Traps and pitfalls are discussed and strategies are also given to improve model development towards “fit-for-purpose” models. The emphasis in this chapter is the adoption and exercise of a modelling methodology that has proven very

  15. Summary Report of Working Group 2: Computation

    Science.gov (United States)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite-element particle-in-cell (PIC) code, a many-order-of-magnitude speedup, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one trillion particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  16. Groundwater Annual Status Report for Fiscal Year 1998

    Energy Technology Data Exchange (ETDEWEB)

    A. K. Stoker; A. S. Johnson; B. D. Newman; B. M. Gallaher; C. L. Nylander; D. B. Rogers; D. E. Broxton; D. Katzman; E. H. Keating; G. L. Cole; K. A. Bitner; K. I. Mullen; P. Longmire; S. G. McLin; W. J. Stone

    1999-04-01

    Groundwater protection activities and hydrogeologic characterization studies are conducted at LANL annually. A summary of fiscal year 1998 results and findings shows increased understanding of the hydrogeologic environment beneath the Pajarito Plateau and significant refinement to elements of the LANL Hydrogeologic Conceptual Model pertaining to areas and sources of recharge to the regional aquifer. Modeling, drilling, monitoring, and data collection activities are proposed for fiscal year 1999.

  17. Promoting Models

    Science.gov (United States)

    Li, Qin; Zhao, Yongxin; Wu, Xiaofeng; Liu, Si

    There can be multitudinous models specifying aspects of the same system. Each model has a bias towards one aspect. These models often override in specific aspects though they have different expressions. A specification written in one model can be refined by introducing additional information from other models. The paper proposes a concept of promoting models which is a methodology to obtain refinements with support from cooperating models. It refines a primary model by integrating the information from a secondary model. The promotion principle is not merely an academic point, but also a reliable and robust engineering technique which can be used to develop software and hardware systems. It can also check the consistency between two specifications from different models. A case of modeling a simple online shopping system with the cooperation of the guarded design model and CSP model illustrates the practicability of the promotion principle.

  18. Cadastral Modeling

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to...

  19. ITM-Related Data and Model Services at the Sun Earth Connection Active Archive (SECAA)

    Science.gov (United States)

    McGuire, R.; Bilitza, D.; Kovalick, T.; Papitashvili, N.; Candey, R.; Han, D.

    2004-12-01

    NASA's Sun Earth Connection Active Archive (SECAA) provides access to a large volume of data and models that are of relevance to Ionospheric, Thermospheric and Mesospheric (ITM) physics. SECAA has developed a number of web systems to facilitate user access to this important data source and is making these services available through Web Services (or Application Programming Interfaces, API) directly to applications such as VxOs. The Coordinated Data Analysis web (CDAWeb) lets user plot data using a wide range of parameter display options including mapped images and movies. Capabilities also include parameter listings and data downloads in CDF and ASCII format. CDAWeb provides access to data from most of NASA's currently operating space science satellites and many of the earlier missions; of special ITM interest are DE-2, ISIS, FAST, Equator-S, and TIMED. SECAA maintains and supports the Common Data Format (CDF) including software to read and write CDF files. Most recently translator services have been added for CDF translations to/from netCDF, FITS, CDFXML, and ASCII. The SSCWeb interface enables users to plot orbits for the majority of space physics satellites (including TIMED, UARS, DMSP, NOAA, LANL etc.) and to query for magnetic field line conjunctions between multiple spacecraft and ground stations and for magnetic region occupancy. Recently an Interactive 3-D orbit viewer was added to SSCWeb. Access to legacy data from older ITM satellite missions is provided through the ATMOWeb system with the ability to generate plots and download data subsets in ASCII format. Recently added capabilities include the option to filter the data using an upper and lower boundary for any one of the data set parameters. We will also present the newest version of the web portal to SECAA's models catalog, ftp archive, and web interfaces. The web interfaces (Fortran, C, Java) let users compute, list, plot, and download model parameters for selected models (IRI, IGRF, MSIS/CIRA, AE

  20. Integration experiences and performance studies of A COTS parallel archive systems

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsing-bung [Los Alamos National Laboratory; Scott, Cody [Los Alamos National Laboratory; Grider, Gary [Los Alamos National Laboratory; Torres, Aaron [Los Alamos National Laboratory; Turley, Milton [Los Alamos National Laboratory; Sanchez, Kathy [Los Alamos National Laboratory; Bremer, John [Los Alamos National Laboratory

    2010-01-01

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf(COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner, and demonstrated its capability to address requirements of
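
    The core idea above, moving a single large striped parallel file to many tape targets in parallel, can be illustrated with the toy sketch below, which splits one file into fixed-size stripes and copies each stripe to its own destination with a process pool. The stripe size, paths, and pool size are placeholders; this is not the actual COTS archive integration code.

```python
# Toy illustration of striping one large file across several "tape" targets in parallel.
# Paths, stripe size and worker count are placeholders, not the production configuration.
import os
from multiprocessing import Pool

STRIPE_SIZE = 64 * 1024 * 1024           # 64 MiB per stripe (placeholder)
SOURCE = "/parallel_fs/big_dataset.bin"  # single large striped parallel file (placeholder)
TARGET_DIR = "/archive/tape_buffers"     # stand-in for per-drive tape buffer areas

def copy_stripe(stripe_index):
    """Copy one stripe of SOURCE to its own target file."""
    offset = stripe_index * STRIPE_SIZE
    target = os.path.join(TARGET_DIR, f"stripe_{stripe_index:06d}")
    with open(SOURCE, "rb") as src, open(target, "wb") as dst:
        src.seek(offset)
        dst.write(src.read(STRIPE_SIZE))
    return stripe_index

if __name__ == "__main__":
    n_stripes = (os.path.getsize(SOURCE) + STRIPE_SIZE - 1) // STRIPE_SIZE
    with Pool(processes=8) as pool:       # e.g., one worker per tape drive
        for done in pool.imap_unordered(copy_stripe, range(n_stripes)):
            print(f"stripe {done} archived")
```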

  1. Integration experiments and performance studies of a COTS parallel archive system

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsing-bung [Los Alamos National Laboratory; Scott, Cody [Los Alamos National Laboratory; Grider, Gary [Los Alamos National Laboratory; Torres, Aaron [Los Alamos National Laboratory; Turley, Milton [Los Alamos National Laboratory; Sanchez, Kathy [Los Alamos National Laboratory; Bremer, John [Los Alamos National Laboratory

    2010-06-16

    Current and future Archive Storage Systems have been asked to (a) scale to very high bandwidths, (b) scale in metadata performance, (c) support policy-based hierarchical storage management capability, (d) scale in supporting changing needs of very large data sets, (e) support standard interface, and (f) utilize commercial-off-the-shelf (COTS) hardware. Parallel file systems have been asked to do the same thing but at one or more orders of magnitude faster in performance. Archive systems continue to move closer to file systems in their design due to the need for speed and bandwidth, especially metadata searching speeds such as more caching and less robust semantics. Currently the number of extreme highly scalable parallel archive solutions is very small especially those that will move a single large striped parallel disk file onto many tapes in parallel. We believe that a hybrid storage approach of using COTS components and innovative software technology can bring new capabilities into a production environment for the HPC community much faster than the approach of creating and maintaining a complete end-to-end unique parallel archive software solution. In this paper, we relay our experience of integrating a global parallel file system and a standard backup/archive product with a very small amount of additional code to provide a scalable, parallel archive. Our solution has a high degree of overlap with current parallel archive products including (a) doing parallel movement to/from tape for a single large parallel file, (b) hierarchical storage management, (c) ILM features, (d) high volume (non-single parallel file) archives for backup/archive/content management, and (e) leveraging all free file movement tools in Linux such as copy, move, ls, tar, etc. We have successfully applied our working COTS Parallel Archive System to the current world's first petaflop/s computing system, LANL's Roadrunner machine, and demonstrated its capability to address

  2. Model Warehouse

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    This paper puts forward a new conception: the model warehouse. It analyzes the reason why the model warehouse has appeared and introduces the characteristics and architecture of the model warehouse. Finally, the paper points out that the model warehouse is an important part of WebGIS.

  3. Constitutive Models

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic procedure is introduced for the analysis and solution of property models. Models that capture and represent the temperature dependent behaviour of physical properties are introduced, as well as equation of state models (EOS) such as the SRK EOS. Modelling of liquid phase activity coefficients is also covered, illustrating several models such as the Wilson equation and NRTL equation, along with their solution strategies. A section shows how to use experimental data to regress the property model parameters using a least squares approach. A full model analysis is applied in each example that discusses...
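
    For reference, the SRK equation of state mentioned above is usually written as follows; this is the standard textbook form, not necessarily the exact parameterisation used in the chapter.

```latex
% Soave-Redlich-Kwong (SRK) equation of state, standard textbook form
\[
P = \frac{RT}{V_m - b} - \frac{a\,\alpha(T)}{V_m (V_m + b)}, \qquad
a = 0.42748\,\frac{R^2 T_c^2}{P_c}, \qquad
b = 0.08664\,\frac{R T_c}{P_c},
\]
\[
\alpha(T) = \Big[1 + m\big(1 - \sqrt{T/T_c}\big)\Big]^2, \qquad
m = 0.480 + 1.574\,\omega - 0.176\,\omega^2 ,
\]
% V_m: molar volume, T_c, P_c: critical temperature and pressure, omega: acentric factor.
```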

  4. Model cities

    OpenAIRE

    Batty, M.

    2007-01-01

    The term 'model' is now central to our thinking about how we understand and design cities. We suggest a variety of ways in which we use 'models', linking these ideas to Abercrombie's exposition of Town and Country Planning which represented the state of the art fifty years ago. Here we focus on using models as physical representations of the city, tracing the development of symbolic models where the focus is on simulating how function generates form, to iconic models where the focus is on representi...

  5. Model theory

    CERN Document Server

    Chang, CC

    2012-01-01

    Model theory deals with a branch of mathematical logic showing connections between a formal language and its interpretations or models. This is the first and most successful textbook in logical model theory. Extensively updated and corrected in 1990 to accommodate developments in model theoretic methods - including classification theory and nonstandard analysis - the third edition added entirely new sections, exercises, and references. Each chapter introduces an individual method and discusses specific applications. Basic methods of constructing models include constants, elementary chains, Sko

  6. A theoretical quantum study on the distribution of electrophilic and nucleophilic active sites on Cu(100) surfaces modeled as finite clusters; Un estudio teorico cuantico sobre la distribucion de sitios activos electrofilicos y nucleofilicos sobre superficies de Cu(100) modeladas como cumulos finitos

    Energy Technology Data Exchange (ETDEWEB)

    Rios R, C.H.; Romero R, M. [Universidad Autonoma Metropolitana-Azcapotzalco, Departamento de Materiales, Av. San Pablo 180, Col. Reynosa Tamaulipas, 02200 Mexico D.F. (Mexico); Ponce R, A.; Mendoza H, L.H. [Universidad Autonoma del Estado de Hidalgo, Centro de Investigaciones Quimicas, Carretera Pachuca-Tulancingo km. 4.5, 42181 Pachuca, Hidalgo (Mexico)]. e-mail: clara_hrr@yahoo.es

    2008-07-01

    In this work, a theoretical quantum study of the distribution of active sites on a monocrystalline Cu(100) surface is presented. The copper surface was modeled as finite clusters of 14, 23, 38 and 53 atoms. We performed Hartree-Fock and Density Functional Theory (B3LYP) ab initio calculations employing the pseudopotentials of Hay and Wadt (LANL2MB and LANL2DZ). From the calculations, we found a work function value of 4.1 eV. The mapping of the HOMO and LUMO in the frozen core approximation allowed us to find the electrophilic and nucleophilic active site distributions, respectively. The results indicated that the electrophilic sites on the Cu(100) surface were located at hollow positions and their numerical density was 8.6 x 10{sup 16} sites cm{sup -2}. From the nucleophilic local softness study, it was found that the nucleophilic sites were formed by groups of atoms and had a numerical density of 2.4 x 10{sup 16} sites cm{sup -2}. These last results indicated that adsorption with 2 x 2 and 3 x 3 distributions can be favored on a Cu(100) surface for the electrophilic and nucleophilic cases, respectively. (Author)

  7. Early Student Support to Investigate the Role of Sea Ice-Albedo Feedback in Sea Ice Predictions

    Science.gov (United States)

    2014-09-30

    all its versions employs the Los Alamos National Laboratory (LANL) sea ice model, known as CICE. The sea ice in CESM1 has been documented in a ... their method so successful and yet a nonlocal relationship exists between sea ice meltponds and the location ... LANL, who is the chief developer of CICE. Dr. Hunke is a partner with the sea ice prediction network and has a postdoc working with her to improve CICE

  8. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  9. Event Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2001-01-01

    The purpose of this chapter is to discuss conceptual event modeling within a context of information modeling. Traditionally, information modeling has been concerned with the modeling of a universe of discourse in terms of information structures. However, most interesting universes of discourse...... are dynamic and we present a modeling approach that can be used to model such dynamics. We characterize events as both information objects and change agents (Bækgaard 1997). When viewed as information objects events are phenomena that can be observed and described. For example, borrow events in a library can...

  10. Finite Element Modeling of Transient Head Field Associated with Partially Penetrating, Slug Tests in a Heterogeneous Aquifer with Low Permeability, Stratigraphic Zones and Faults

    Science.gov (United States)

    Cheng, J.; Johnson, B.; Everett, M.

    2003-12-01

    Preliminary field work shows that slug interference tests using an array of multilevel active and monitoring wells have the potential of permitting enhanced aquifer characterization. Analysis of these test data, however, ultimately will rely on numerical geophysical inverse models. In order to gain insight as well as to provide synthetic data sets, we use a 3-D finite element analysis (code: FEHM-LANL) to explore the effect of idealized, low permeability, stratigraphical and structural (faults) heterogeneities on the transient head field associated with a slug test in a packer-isolated interval of an open borehole. The borehole and packers are modeled explicitly; wellbore storage is selected to match values of field tests. The homogeneous model exhibits excellent agreement with that of the semi-analytical model of Liu and Butler (1995). Models are axisymmetric with a centrally located slugged interval within a homogeneous, isotropic, confined aquifer with embedded, horizontal or vertical zones of lower permeability that represent low permeability strata or faults, respectively. Either one or two horizontal layers are located opposite the borehole packers, which is a common situation at the field site; layer thickness (0.15-0.75 m), permeability contrast (up to 4 orders of magnitude contrast) and lateral continuity of layers are varied between models. The effect of a "hole" in a layer also is assessed. Fault models explore effects of thickness (0.05-0.75 m) and permeability contrast as well as additional effects associated with the offset of low permeability strata. Results of models are represented most clearly by contour maps of time of arrival and normalized amplitude of peak head perturbation, but transient head histories at selected locations provide additional insight. Synthesis of the models is on-going but a few points can be made at present. Spatial patterns are distinctive and allow easy discrimination between stratigraphic and structural impedance features. Time

  11. Modeling Aeolian Transport of Contaminated Sediments at Los Alamos National Laboratory, Technical Area 54, Area G: Sensitivities to Succession, Disturbance, and Future Climate

    Energy Technology Data Exchange (ETDEWEB)

    Whicker, Jeffrey J. [Los Alamos National Laboratory; Kirchner, Thomas B. [New Mexico State University; Breshears, David D. [University of Arizona; Field, Jason P. [University of Arizona

    2012-03-27

    The Technical Area 54 (TA-54) Area G disposal facility is used for the disposal of radioactive waste at Los Alamos National Laboratory (LANL). U.S. Department of Energy (DOE) Order 435.1 (DOE, 2001) requires that radioactive waste be managed in a manner that protects public health and safety and the environment. In compliance with that requirement, DOE field sites must prepare and maintain site-specific radiological performance assessments for facilities that receive waste after September 26, 1988. Sites are also required to conduct composite analyses for facilities that receive waste after this date; these analyses account for the cumulative impacts of all waste that has been (and will be) disposed of at the facilities and other sources of radioactive material that may interact with these facilities. LANL issued Revision 4 of the Area G performance assessment and composite analysis in 2008. In support of those analyses, vertical and horizontal sediment flux data were collected at two analog sites, each with different dominant vegetation characteristics, and used to estimate rates of vertical resuspension and wind erosion for Area G. The results of that investigation indicated that there was no net loss of soil at the disposal site due to wind erosion, and suggested minimal impacts of wind on the long-term performance of the facility. However, that study did not evaluate the potential for contaminant transport caused by the horizontal movement of soil particles over long time frames. Since that time, additional field data have been collected to estimate wind threshold velocities for initiating sediment transport due to saltation and rates of sediment transport once those thresholds are reached. Data such as these have been used in the development of the Vegetation Modified Transport (VMTran) model. This model is designed to estimate patterns and long-term rates of contaminant redistribution caused by winds at the site, taking into account the impacts of plant

  12. Numerical models

    Digital Repository Service at National Institute of Oceanography (India)

    Unnikrishnan, A; Manoj, N.T.

    Various numerical models used to study the dynamics and horizontal distribution of salinity in the Mandovi-Zuari estuaries, Goa, India, are discussed in this chapter. Earlier, a one-dimensional network model was developed for representing the complex...

  13. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  14. Transforming How Climate System Models are Used: A Global, Multi-Resolution Approach to Regional Ocean Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max

    2013-03-14

    We review the results obtained under grant support. Details are given in the publications listed at the end of the review. We also provide lists of the personnel funded by the grant and of other collaborators on grant-related research and of the talks delivered, also under grant related research. We collaborated closely with geophysicists at the Los Alamos National Laboratory and the National Center for Atmospheric Research; especially noteworthy is our collaboration with Todd Ringler of LANL who was an active partner on much of our work.

  15. MODELING CONSCIOUSNESS

    OpenAIRE

    Taylor, J G

    2009-01-01

    We present tentative answers to three questions: firstly, what is to be assumed about the structure of the brain in attacking the problem of modeling consciousness; secondly, what it is about consciousness that is being modeled; and finally, what, if anything, the modeling enterprise takes on board from the vast body of work by philosophers about the nature of mind.

  16. Zeebrugge Model

    DEFF Research Database (Denmark)

    Sclütter, Flemming; Frigaard, Peter; Liu, Zhou

    This report presents the model test results on wave run-up on the Zeebrugge breakwater under the simulated prototype storms. The model test was performed in January 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University. The detailed description of the model is given...

  17. Interface models

    DEFF Research Database (Denmark)

    Ravn, Anders P.; Staunstrup, Jørgen

    1994-01-01

    This paper proposes a model for specifying interfaces between concurrently executing modules of a computing system. The model does not prescribe a particular type of communication protocol and is aimed at describing interfaces between both software and hardware modules or a combination of the two....... The model describes both functional and timing properties of an interface...

  18. Constitutive Models

    DEFF Research Database (Denmark)

    2011-01-01

    This chapter presents various types of constitutive models and their applications. There are 3 aspects dealt with in this chapter, namely: creation and solution of property models, the application of parameter estimation and finally application examples of constitutive models. A systematic...

  19. Model Experiments and Model Descriptions

    Science.gov (United States)

    Jackman, Charles H.; Ko, Malcolm K. W.; Weisenstein, Debra; Scott, Courtney J.; Shia, Run-Lie; Rodriguez, Jose; Sze, N. D.; Vohralik, Peter; Randeniya, Lakshman; Plumb, Ian

    1999-01-01

    The Second Workshop on Stratospheric Models and Measurements (M&M II) is the continuation of the effort previously started in the first Workshop (M&M I, Prather and Remsberg [1993]) held in 1992. As originally stated, the aim of M&M is to provide a foundation for establishing the credibility of stratospheric models used in environmental assessments of the ozone response to chlorofluorocarbons, aircraft emissions, and other climate-chemistry interactions. To accomplish this, a set of measurements of the present day atmosphere was selected. The intent was that successful simulations of the set of measurements should become the prerequisite for the acceptance of these models as having a reliable prediction for future ozone behavior. This section is divided into two parts: model experiments and model descriptions. In the model experiment part, participants were given the charge to design a number of experiments that would use observations to test whether models are using the correct mechanisms to simulate the distributions of ozone and other trace gases in the atmosphere. The purpose is closely tied to the need to reduce the uncertainties in the model-predicted responses of stratospheric ozone to perturbations. The specifications for the experiments were sent out to the modeling community in June 1997. Twenty-eight modeling groups responded to the requests for input. The first part of this section discusses the different modeling groups, along with the experiments performed. The second part of this section gives brief descriptions of each model as provided by the individual modeling groups.

  20. Modeling mesoscopic phenomena in extended dynamical systems

    Energy Technology Data Exchange (ETDEWEB)

    Bishop, A.; Lomdahl, P.; Jensen, N.G.; Cai, D.S. [Los Alamos National Lab., NM (United States); Mertenz, F. [Bayreuth Univ. (Germany); Konno, Hidetoshi [Tsukuba Univ., Ibaraki (Japan); Salkola, M. [Stanford Univ., CA (United States)

    1997-08-01

    This is the final report of a three-year, Laboratory-Directed Research and Development project at the Los Alamos National Laboratory (LANL). We have obtained classes of nonlinear solutions on curved geometries that demonstrate a novel interplay between topology and geometric frustration relevant for nanoscale systems. We have analyzed the nature and stability of localized oscillatory nonlinear excitations (multi-phonon bound states) on discrete nonlinear chains, including demonstrations of successful perturbation theories, existence of quasiperiodic excitations, response to external statistical time-dependent fields and point impurities, robustness in the presence of quantum fluctuations, and effects of boundary conditions. We have demonstrated multi-timescale effects for nonlinear Schroedinger descriptions and shown the success of memory function approaches for going beyond these approximations. In addition we have developed a generalized rate-equation framework that allows analysis of the important creation/annihilation processes in driven nonlinear, nonequilibrium systems.

  1. Scalable Models Using Model Transformation

    Science.gov (United States)

    2008-07-13

    and the following companies: Agilent, Bosch, HSBC, Lockheed-Martin, National Instruments, and Toyota. Scalable Models Using Model Transformation ... parametrization, and workflow automation. (AFRL), the State of California Micro Program, and the following companies: Agilent, Bosch, HSBC, Lockheed

  2. Cadastral Modeling

    DEFF Research Database (Denmark)

    Stubkjær, Erik

    2005-01-01

    Modeling is a term that refers to a variety of efforts, including data and process modeling. The domain to be modeled may be a department, an organization, or even an industrial sector. E-business presupposes the modeling of an industrial sector, a substantial task. Cadastral modeling compares to the modeling of an industrial sector, as it aims at rendering the basic concepts that relate to the domain of real estate and the pertinent human activities. The palpable objects are pieces of land and buildings, documents, data stores and archives, as well as persons in their diverse roles as owners, holders ... to land. The paper advances the position that cadastral modeling has to include not only the physical objects, agents, and information sets of the domain, but also the objectives or requirements of cadastral systems.

  3. Modelling in Business Model design

    NARCIS (Netherlands)

    Simonse, W.L.

    2013-01-01

    It appears that business model design might not always produce a design or model as the expected result. However, when designers are involved, a visual model or artefact is produced. To assist strategic managers in thinking about how they can act, the designers' challenge is to combine strategy and

  4. Climate Models

    Science.gov (United States)

    Druyan, Leonard M.

    2012-01-01

    Climate models is a very broad topic, so a single volume can only offer a small sampling of relevant research activities. This volume of 14 chapters includes descriptions of a variety of modeling studies for a variety of geographic regions by an international roster of authors. The climate research community generally uses the rubric climate models to refer to organized sets of computer instructions that produce simulations of climate evolution. The code is based on physical relationships that describe the shared variability of meteorological parameters such as temperature, humidity, precipitation rate, circulation, radiation fluxes, etc. Three-dimensional climate models are integrated over time in order to compute the temporal and spatial variations of these parameters. Model domains can be global or regional and the horizontal and vertical resolutions of the computational grid vary from model to model. Considering the entire climate system requires accounting for interactions between solar insolation, atmospheric, oceanic and continental processes, the latter including land hydrology and vegetation. Model simulations may concentrate on one or more of these components, but the most sophisticated models will estimate the mutual interactions of all of these environments. Advances in computer technology have prompted investments in more complex model configurations that consider more phenomena interactions than were possible with yesterday's computers. However, not every attempt to add to the computational layers is rewarded by better model performance. Extensive research is required to test and document any advantages gained by greater sophistication in model formulation. One purpose for publishing climate model research results is to present purported advances for evaluation by the scientific community.

  5. Reduced-Order Model for the Geochemical Impacts of Carbon Dioxide, Brine and Trace Metal Leakage into an Unconfined, Oxidizing Carbonate Aquifer, Version 2.1

    Energy Technology Data Exchange (ETDEWEB)

    Bacon, Diana H.

    2013-03-31

    The National Risk Assessment Partnership (NRAP) consists of five U.S. DOE national laboratories collaborating to develop a framework for predicting the risks associated with carbon sequestration. The approach taken by NRAP is to divide the system into components, including injection target reservoirs, wellbores, natural pathways including faults and fractures, groundwater and the atmosphere. Next, develop a detailed, physics- and chemistry-based model of each component. Using the results of the detailed models, develop efficient, simplified models, termed reduced order models (ROMs), for each component. Finally, integrate the component ROMs into a system model that calculates risk profiles for the site. This report details the development of the Groundwater Geochemistry ROM for the Edwards Aquifer at PNNL. The Groundwater Geochemistry ROM for the Edwards Aquifer uses a Wellbore Leakage ROM developed at LANL as input. The detailed model, using the STOMP simulator, covers a 5x8 km area of the Edwards Aquifer near San Antonio, Texas. The model includes heterogeneous hydraulic properties, and equilibrium, kinetic and sorption reactions between groundwater, leaked CO2 gas, brine, and the aquifer carbonate and clay minerals. Latin Hypercube sampling was used to generate 1024 samples of input parameters. For each of these input samples, the STOMP simulator was used to predict the flux of CO2 to the atmosphere, and the volume, length and width of the aquifer where pH was less than the MCL standard, and TDS, arsenic, cadmium and lead exceeded MCL standards. In order to decouple the Wellbore Leakage ROM from the Groundwater Geochemistry ROM, the response surface was transformed to replace Wellbore Leakage ROM input parameters with instantaneous and cumulative CO2 and brine leakage rates. The most sensitive parameters proved to be the CO2 and brine leakage rates from the well, with equilibrium coefficients for calcite and dolomite, as well as the number of illite and kaolinite
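
    As a rough illustration of the sampling step mentioned above, a Latin Hypercube design of 1024 input samples can be generated as follows. This is a generic sketch, not the actual NRAP/STOMP workflow; the parameter names and ranges below are hypothetical.

```python
"""Minimal Latin Hypercube sampling sketch (generic illustration; the parameter
names and ranges are hypothetical, not the actual NRAP/STOMP inputs)."""
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical uniform ranges for three illustrative input parameters.
param_ranges = {
    "co2_leak_rate":   (1e-6, 1e-2),
    "brine_leak_rate": (1e-7, 1e-3),
    "calcite_logK":    (-9.0, -7.0),
}

n_samples = 1024
n_params = len(param_ranges)

# One stratified uniform draw per (sample, parameter): row i falls in stratum
# [i/n, (i+1)/n); then permute each column so strata pair up at random.
u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
for j in range(n_params):
    u[:, j] = rng.permutation(u[:, j])

# Scale the unit-cube design onto the physical parameter ranges.
lows = np.array([lo for lo, hi in param_ranges.values()])
highs = np.array([hi for lo, hi in param_ranges.values()])
design = lows + u * (highs - lows)

print(design.shape)  # (1024, 3): one row of inputs per detailed-model run
```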

  6. Mathematical modelling

    CERN Document Server

    2016-01-01

    This book provides a thorough introduction to the challenge of applying mathematics in real-world scenarios. Modelling tasks rarely involve well-defined categories, and they often require multidisciplinary input from mathematics, physics, computer sciences, or engineering. In keeping with this spirit of modelling, the book includes a wealth of cross-references between the chapters and frequently points to the real-world context. The book combines classical approaches to modelling with novel areas such as soft computing methods, inverse problems, and model uncertainty. Attention is also paid to the interaction between models, data and the use of mathematical software. The reader will find a broad selection of theoretical tools for practicing industrial mathematics, including the analysis of continuum models, probabilistic and discrete phenomena, and asymptotic and sensitivity analysis.

  7. Turbulence Model

    DEFF Research Database (Denmark)

    Nielsen, Mogens Peter; Shui, Wan; Johansson, Jens

    2011-01-01

    In this report a new turbulence model is presented. In contrast to the bulk of modern work, the model is a classical continuum model with a relatively simple constitutive equation. The constitutive equation is, as usual in continuum mechanics, entirely empirical. It has the usual Newton or Stokes term with stresses depending linearly on the strain rates. This term takes into account the transfer of linear momentum from one part of the fluid to another. Besides there is another term, which takes into account the transfer of angular momentum. Thus the model implies a new definition of turbulence. The model is in a virgin state, but a number of numerical tests have been carried out with good results. It is published to encourage other researchers to study the model in order to find its merits and possible limitations.

  8. Mathematical modelling

    DEFF Research Database (Denmark)

    Blomhøj, Morten

    2004-01-01

    Developing competences for setting up, analysing and criticising mathematical models are normally seen as relevant only from and above upper secondary level. The general belief among teachers is that modelling activities presuppose conceptual understanding of the mathematics involved. Mathematical modelling, however, can be seen as a practice of teaching that places the relation between real life and mathematics into the centre of teaching and learning mathematics, and this is relevant at all levels. Modelling activities may motivate the learning process and help the learner to establish cognitive roots for the construction of important mathematical concepts. In addition, competences for setting up, analysing and criticising modelling processes and the possible use of models is a formative aim in its own right for mathematics teaching in general education. The paper presents a theoretical...

  9. Spherical models

    CERN Document Server

    Wenninger, Magnus J

    2012-01-01

    Well-illustrated, practical approach to creating star-faced spherical forms that can serve as basic structures for geodesic domes. Complete instructions for making models from circular bands of paper with just a ruler and compass. Discusses tessellation, or tiling, and how to make spherical models of the semiregular solids and concludes with a discussion of the relationship of polyhedra to geodesic domes and directions for building models of domes. ". . . very pleasant reading." - Science. 1979 edition.

  10. Zeebrugge Model

    DEFF Research Database (Denmark)

    Liu, Zhou; Frigaard, Peter

    This report presents the model on wave run-up and run-down on the Zeebrugge breakwater under short-crested oblique wave attacks. The model test was performed in March-April 2000 at the Hydraulics & Coastal Engineering Laboratory, Aalborg University.

  11. Stream Modelling

    DEFF Research Database (Denmark)

    Vestergaard, Kristian

    the engineers, but as the scale and the complexity of the hydraulic works increased, the mathematical models became so complex that a mathematical solution could not be obtained. This created a demand for new methods and again the experimental investigation became popular, but this time as measurements on small-scale models. But still the scale and complexity of hydraulic works were increasing, and soon even small-scale models reached a natural limit for some applications. In the mean time the modern computer was developed, and it became possible to solve complex mathematical models by use of computer-based numerical...

  12. Ventilation Model

    Energy Technology Data Exchange (ETDEWEB)

    V. Chipman

    2002-10-05

    The purpose of the Ventilation Model is to simulate the heat transfer processes in and around waste emplacement drifts during periods of forced ventilation. The model evaluates the effects of emplacement drift ventilation on the thermal conditions in the emplacement drifts and surrounding rock mass, and calculates the heat removal by ventilation as a measure of the viability of ventilation to delay the onset of peak repository temperature and reduce its magnitude. The heat removal by ventilation is temporally and spatially dependent, and is expressed as the fraction of heat carried away by the ventilation air compared to the fraction of heat produced by radionuclide decay. One minus the heat removal is called the wall heat fraction, or the remaining amount of heat that is transferred via conduction to the surrounding rock mass. Downstream models, such as the ''Multiscale Thermohydrologic Model'' (BSC 2001), use the wall heat fractions as outputted from the Ventilation Model to initialize their post-closure analyses. The Ventilation Model report was initially developed to analyze the effects of preclosure continuous ventilation in the Engineered Barrier System (EBS) emplacement drifts, and to provide heat removal data to support EBS design. Revision 00 of the Ventilation Model included documentation of the modeling results from the ANSYS-based heat transfer model. The purposes of Revision 01 of the Ventilation Model are: (1) To validate the conceptual model for preclosure ventilation of emplacement drifts and verify its numerical application in accordance with new procedural requirements as outlined in AP-SIII-10Q, Models (Section 7.0). (2) To satisfy technical issues posed in KTI agreement RDTME 3.14 (Reamer and Williams 2001a). Specifically to demonstrate, with respect to the ANSYS ventilation model, the adequacy of the discretization (Section 6.2.3.1), and the downstream applicability of the model results (i.e. wall heat fractions) to
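
    In symbols, the relationship between heat removal and the wall heat fraction described above is simply the following (the notation is ours, not necessarily the report's):

```latex
% Ventilation heat-removal efficiency and the complementary wall heat fraction
\[
\eta_{\mathrm{vent}}(x,t) = \frac{Q_{\mathrm{vent}}(x,t)}{Q_{\mathrm{decay}}(x,t)},
\qquad
f_{\mathrm{wall}}(x,t) = 1 - \eta_{\mathrm{vent}}(x,t),
\]
% Q_vent: heat carried away by the ventilation air, Q_decay: heat produced by
% radionuclide decay, f_wall: fraction conducted into the surrounding rock mass.
```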

  13. Final Report on Institutional Computing Project s15_hilaserion, “Kinetic Modeling of Next-Generation High-Energy, High-Intensity Laser-Ion Accelerators as an Enabling Capability”

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Brian James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Yin, Lin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stark, David James [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-06

    This proposal sought of order 1M core-hours of Institutional Computing time intended to enable computing by a new LANL Postdoc (David Stark) working under LDRD ER project 20160472ER (PI: Lin Yin) on laser-ion acceleration. The project was “off-cycle,” initiating in June of 2016 with a postdoc hire.

  14. Modeling Documents with Event Model

    Directory of Open Access Journals (Sweden)

    Longhui Wang

    2015-08-01

    Currently deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical mode in which the brain deals with images and speech. In the field of NLP, a topic model is one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanism of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning is to learn the semantics of words based on deep learning. Dimensionality reduction is the process of representing a document as a low-dimensional vector by a linear mode that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.

  15. Model Selection for Geostatistical Models

    Energy Technology Data Exchange (ETDEWEB)

    Hoeting, Jennifer A.; Davis, Richard A.; Merton, Andrew A.; Thompson, Sandra E.

    2006-02-01

    We consider the problem of model selection for geospatial data. Spatial correlation is typically ignored in the selection of explanatory variables and this can influence model selection results. For example, the inclusion or exclusion of particular explanatory variables may not be apparent when spatial correlation is ignored. To address this problem, we consider the Akaike Information Criterion (AIC) as applied to a geostatistical model. We offer a heuristic derivation of the AIC in this context and provide simulation results that show that using AIC for a geostatistical model is superior to the often used approach of ignoring spatial correlation in the selection of explanatory variables. These ideas are further demonstrated via a model for lizard abundance. We also employ the principle of minimum description length (MDL) to variable selection for the geostatistical model. The effect of sampling design on the selection of explanatory covariates is also explored.
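
    For a Gaussian geostatistical model with mean X*beta and spatial covariance Sigma(theta), the criterion discussed above takes the usual form. This is the standard AIC definition written in our notation, not necessarily the authors' exact formulation.

```latex
% AIC for a geostatistical model Y ~ N(X\beta, \Sigma(\theta))
\[
\mathrm{AIC} = -2\log L(\hat\beta,\hat\theta \mid y) + 2(p + q),
\]
\[
\log L = -\tfrac12\Big[\, n\log 2\pi + \log\det\Sigma(\hat\theta)
 + (y - X\hat\beta)^{\mathsf T}\Sigma(\hat\theta)^{-1}(y - X\hat\beta) \Big],
\]
% p: mean (covariate) parameters, q: spatial covariance parameters;
% candidate covariate sets are ranked by their AIC values.
```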

  16. Comprehensive Approaches to Multiphase Flows in Geophysics - Application to nonisothermal, nonhomogenous, unsteady, large-scale, turbulent dusty clouds I. Hydrodynamic and Thermodynamic RANS and LES Models

    Energy Technology Data Exchange (ETDEWEB)

    S. Dartevelle

    2005-09-05

    The objective of this manuscript is to fully derive a geophysical multiphase model able to ''accommodate'' different multiphase turbulence approaches; viz., the Reynolds Averaged Navier-Stokes (RANS), the Large Eddy Simulation (LES), or hybrid RANS-LES. This manuscript is the first part of a larger geophysical multiphase project--led by LANL--that aims to develop comprehensive modeling tools for large-scale, atmospheric, transient-buoyancy dusty jets and plumes (e.g., plinian clouds, nuclear ''mushrooms'', ''supercell'' forest fire plumes) and for boundary-dominated geophysical multiphase gravity currents (e.g., dusty surges, diluted pyroclastic flows, dusty gravity currents in street canyons). LES is a partially deterministic approach constructed on either a spatial- or a temporal-separation between the large and small scales of the flow, whereas RANS is an entirely probabilistic approach constructed on a statistical separation between an ensemble-averaged mean and higher-order statistical moments (the so-called ''fluctuating parts''). Within this specific multiphase context, both turbulence approaches are built up upon the same phasic binary-valued ''function of presence''. This function of presence formally describes the occurrence--or not--of any phase at a given position and time and, therefore, allows one to derive the same basic multiphase Navier-Stokes model for either the RANS or the LES frameworks. The only differences between these turbulence frameworks are the closures for the various ''turbulence'' terms involving the unknown variables from the fluctuating (RANS) or from the subgrid (LES) parts. Even though the hydrodynamic and thermodynamic models for RANS and LES have the same set of Partial Differential Equations, the physical interpretations of these PDEs cannot be the same, i.e., RANS models an averaged field, while LES simulates a
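
    The phasic "function of presence" referred to above is, in the standard ensemble-averaging formalism for multiphase flow, simply an indicator function; a sketch of how RANS and LES act on it (our notation, not necessarily the manuscript's) is:

```latex
% Function of presence (phase indicator) X_k and the two averaging operations
\[
X_k(\mathbf{x},t) =
\begin{cases}
1, & \text{if phase $k$ occupies $(\mathbf{x},t)$},\\[2pt]
0, & \text{otherwise},
\end{cases}
\]
\[
\alpha_k(\mathbf{x},t) = \langle X_k \rangle \quad \text{(RANS: ensemble average)},
\qquad
\bar\alpha_k(\mathbf{x},t) = \int G(\mathbf{x}-\mathbf{x}')\,X_k(\mathbf{x}',t)\,
  \mathrm{d}\mathbf{x}' \quad \text{(LES: spatial filter $G$)} .
\]
```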

  17. Didactical modelling

    DEFF Research Database (Denmark)

    Højgaard, Tomas; Hansen, Rune

    2016-01-01

    The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to...

  18. Didactical modelling

    OpenAIRE

    Højgaard, Tomas; Hansen, Rune

    2016-01-01

    The purpose of this paper is to introduce Didactical Modelling as a research methodology in mathematics education. We compare the methodology with other approaches and argue that Didactical Modelling has its own specificity. We discuss the methodological “why” and explain why we find it useful to construct this approach in mathematics education research.

  19. Animal models

    DEFF Research Database (Denmark)

    Gøtze, Jens Peter; Krentz, Andrew

    2014-01-01

    In this issue of Cardiovascular Endocrinology, we are proud to present a broad and dedicated spectrum of reviews on animal models in cardiovascular disease. The reviews cover most aspects of animal models in science from basic differences and similarities between small animals and the human...

  20. Martingale Model

    OpenAIRE

    Giandomenico, Rossano

    2006-01-01

    The model determines a stochastic continuous process as the continuous limit of a stochastic discrete process, so as to show that the stochastic continuous process converges to the stochastic discrete process such that we can integrate it. Furthermore, the model determines the expected volatility and the expected mean, so as to show that the volatility and the mean are increasing functions of time.

  1. Dispersion Modeling.

    Science.gov (United States)

    Budiansky, Stephen

    1980-01-01

    This article discusses the need for more accurate and complete input data and field verification of the various models of air pollutant dispersion. Consideration should be given to changing the form of air quality standards based on enhanced dispersion modeling techniques. (Author/RE)

  2. Education models

    NARCIS (Netherlands)

    Poortman, Sybilla; Sloep, Peter

    2006-01-01

    "Educational models" describes a case study on a complex learning object. Possibilities are investigated for using this learning object, which is based on a particular educational model, outside of its original context. Furthermore, this study provides advice that might lead to an increase in

  3. Battery Modeling

    NARCIS (Netherlands)

    Jongerden, M.R.; Haverkort, Boudewijn R.H.M.

    2008-01-01

    The use of mobile devices is often limited by the capacity of the employed batteries. The battery lifetime determines how long one can use a device. Battery modeling can help to predict, and possibly extend this lifetime. Many different battery models have been developed over the years. However,

  4. Centering in-the-large Computing referential discourse segments

    CERN Document Server

    Hahn, U; Hahn, Udo; Strube, Michael

    1997-01-01

    We specify an algorithm that builds up a hierarchy of referential discourse segments from local centering data. The spatial extension and nesting of these discourse segments constrain the reachability of potential antecedents of an anaphoric expression beyond the local level of adjacent center pairs. Thus, the centering model is scaled up to the level of the global referential structure of discourse. An empirical evaluation of the algorithm is supplied.

  5. Linguistic models and linguistic modeling.

    Science.gov (United States)

    Pedryez, W; Vasilakos, A V

    1999-01-01

    The study is concerned with a linguistic approach to the design of a new category of fuzzy (granular) models. In contrast to numerically driven identification techniques, we concentrate on building meaningful linguistic labels (granules) in the space of experimental data and forming the ensuing model as a web of associations between such granules. As such models are designed at the level of information granules and generate results in the same granular rather than pure numeric format, we refer to them as linguistic models. Furthermore, as there are no detailed numeric estimation procedures involved in the construction of the linguistic models carried out in this way, their design mode can be viewed as that of rapid prototyping. The underlying algorithm used in the development of the models utilizes an augmented version of the clustering technique (context-based clustering) that is centered around a notion of linguistic contexts: a collection of fuzzy sets or fuzzy relations defined in the data space (more precisely, a space of input variables). The detailed design algorithm is provided and contrasted with the standard modeling approaches commonly encountered in the literature. The usefulness of the linguistic mode of system modeling is discussed and illustrated with the aid of numeric studies including both synthetic data as well as some time series dealing with modeling traffic intensity over a broadband telecommunication network.

  6. OSPREY Model

    Energy Technology Data Exchange (ETDEWEB)

    Veronica J. Rutledge

    2013-01-01

    The absence of industrial scale nuclear fuel reprocessing in the U.S. has precluded the necessary driver for developing the advanced simulation capability now prevalent in so many other countries. Thus, it is essential to model complex series of unit operations to simulate, understand, and predict inherent transient behavior and feedback loops. A capability of accurately simulating the dynamic behavior of advanced fuel cycle separation processes will provide substantial cost savings and many technical benefits. The specific fuel cycle separation process discussed in this report is the off-gas treatment system. The off-gas separation consists of a series of scrubbers and adsorption beds to capture constituents of interest. Dynamic models are being developed to simulate each unit operation involved so each unit operation can be used as a stand-alone model and in series with multiple others. Currently, an adsorption model has been developed within Multi-physics Object Oriented Simulation Environment (MOOSE) developed at the Idaho National Laboratory (INL). Off-gas Separation and REcoverY (OSPREY) models the adsorption of off-gas constituents for dispersed plug flow in a packed bed under non-isothermal and non-isobaric conditions. Inputs to the model include gas, sorbent, and column properties, equilibrium and kinetic data, and inlet conditions. The simulation outputs component concentrations along the column length as a function of time from which breakthrough data is obtained. The breakthrough data can be used to determine bed capacity, which in turn can be used to size columns. It also outputs temperature along the column length as a function of time and pressure drop along the column length. Experimental data and parameters were input into the adsorption model to develop models specific for krypton adsorption. The same can be done for iodine, xenon, and tritium. The model will be validated with experimental breakthrough curves. Customers will be given access to
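
    The dispersed plug-flow adsorption problem described above is commonly written in the following generic form. This is a textbook sketch under isothermal, isobaric assumptions; OSPREY itself also treats non-isothermal and non-isobaric conditions, and the symbols here are ours, not necessarily its own.

```latex
% Generic dispersed plug-flow balance for gas species i in a packed adsorption bed
\[
\frac{\partial C_i}{\partial t}
  = D_L \frac{\partial^2 C_i}{\partial z^2}
  - v \frac{\partial C_i}{\partial z}
  - \frac{1-\varepsilon}{\varepsilon}\,\rho_p\,\frac{\partial q_i}{\partial t},
\qquad
\frac{\partial q_i}{\partial t} = k_i\big(q_i^{*}(C_i, T) - q_i\big),
\]
% C_i: gas-phase concentration, q_i: adsorbed loading, q_i^*: equilibrium isotherm,
% D_L: axial dispersion, v: interstitial velocity, eps: bed voidage, rho_p: particle
% density, k_i: linear-driving-force rate constant. The outlet history C_i(L, t)
% is the breakthrough curve used to size the bed.
```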

  7. Model hydrographs

    Science.gov (United States)

    Mitchell, W.D.

    1972-01-01

    Model hydrographs are composed of pairs of dimensionless ratios, arrayed in tabular form, which, when modified by the appropriate values of rainfall excess and by the time and areal characteristics of the drainage basin, satisfactorily represent the flood hydrograph for the basin. Model hydrographs are developed from a dimensionless translation hydrograph, having a time base of T hours and appropriately modified for storm duration by routing through reservoir storage, S = kO^x. Models fall into two distinct classes: (1) those for which the value of x is unity and which have all the characteristics of true unit hydrographs and (2) those for which the value of x is other than unity and to which the unit-hydrograph principles of proportionality and superposition do not apply. Twenty-six families of linear models and eight families of nonlinear models in tabular form form the principal subject of this report. Supplemental discussions describe the development of the models and illustrate their application. Other sections of the report, supplemental to the tables, describe methods of determining the hydrograph characteristics, T, k, and x, both from observed hydrographs and from the physical characteristics of the drainage basin. Five illustrative examples of use show that the models, when properly converted to incorporate actual rainfall excess and the time and areal characteristics of the drainage basins, do indeed satisfactorily represent the observed flood hydrographs for the basins.
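
    The storage routing quoted above can be written out explicitly; a brief sketch in standard reservoir-routing notation is:

```latex
% Reservoir routing of the translation hydrograph: continuity plus S = k O^x
\[
\frac{dS}{dt} = I(t) - O(t), \qquad S = k\,O^{x}.
\]
% For x = 1 (linear storage) this reduces to k\,dO/dt = I - O, and the resulting
% models behave as true unit hydrographs (proportionality and superposition hold);
% for x different from 1 the routing is nonlinear and those principles no longer apply.
```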

  8. Stereometric Modelling

    Science.gov (United States)

    Grimaldi, P.

    2012-07-01

    The stereometric modelling means modelling achieved with: - the use of a pair of virtual cameras, with parallel axes and positioned at a mutual distance averaging 1/10 of the camera-object distance (in practice, the realization and use of a stereometric camera in the modelling program); - the shot visualization in two distinct windows; - the stereoscopic viewing of the shot while modelling. Since the definition of "3D vision" is inaccurately referred to as the simple perspective of an object, it is required to add the word stereo so that "3D stereo vision" shall stand for "three-dimensional view" and, therefore, measure the width, height and depth of the surveyed image. This is made possible thanks to the development of a stereometric model, either real or virtual, through the "materialization", either real or virtual, of the optical-stereometric model made visible with a stereoscope. A continuous on-line updating of the cultural heritage is feasible with the help of photogrammetry and stereometric modelling. The catalogue of the Architectonic Photogrammetry Laboratory of Politecnico di Bari is available on line at: http://rappresentazione.stereofot.it:591/StereoFot/FMPro?-db=StereoFot.fp5&-lay=Scheda&-format=cerca.htm&-view

  9. Modeling complexes of modeled proteins.

    Science.gov (United States)

    Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A

    2017-03-01

    Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C(α) RMSD. Many template-based docking predictions fall into acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and the template-based docking is much less sensitive to inaccuracies of protein models than the free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.

  10. A Model for Math Modeling

    Science.gov (United States)

    Lin, Tony; Erfan, Sasan

    2016-01-01

    Mathematical modeling is an open-ended research subject where no definite answers exist for any problem. Math modeling enables thinking outside the box to connect different fields of studies together including statistics, algebra, calculus, matrices, programming and scientific writing. As an integral part of society, it is the foundation for many…

  11. Modelling survival

    DEFF Research Database (Denmark)

    Ashauer, Roman; Albert, Carlo; Augustine, Starrlight

    2016-01-01

    The General Unified Threshold model for Survival (GUTS) integrates previously published toxicokinetic-toxicodynamic models and estimates survival with explicitly defined assumptions. Importantly, GUTS accounts for time-variable exposure to the stressor. We performed three studies to test the ability of GUTS to predict survival of aquatic organisms across different pesticide exposure patterns, time scales and species. Firstly, using synthetic data, we identified experimental data requirements which allow for the estimation of all parameters of the GUTS proper model. Secondly, we assessed how...
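
    For orientation, one commonly used reduced GUTS variant (stochastic death) can be sketched as follows; this is the published reduced formalism written in our notation, not a complete statement of the full GUTS framework.

```latex
% Reduced GUTS, stochastic-death variant (sketch): scaled damage D(t) driven by
% the time-variable exposure C(t), with hazard above a threshold z
\[
\frac{dD}{dt} = k_d\,\big(C(t) - D(t)\big), \qquad
h(t) = b\,\max\big(D(t) - z,\, 0\big) + h_b, \qquad
S(t) = \exp\!\Big(-\int_0^{t} h(\tau)\,\mathrm{d}\tau\Big).
\]
% k_d: dominant rate constant, b: killing rate, z: threshold, h_b: background hazard;
% S(t) is the predicted survival probability under the exposure profile C(t).
```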

  12. Modelling Constructs

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2009-01-01

    There are many different notations and formalisms for modelling business processes and workflows. These notations and formalisms have been introduced with different purposes and objectives. Later, influenced by other notations, comparisons with other tools, or by standardization efforts, these no...

  13. Linear Models

    CERN Document Server

    Searle, Shayle R

    2012-01-01

    This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.

  14. Modeling Arcs

    CERN Document Server

    Insepov, Zeke; Veitzer, Seth; Mahalingam, Sudhakar

    2011-01-01

    Although vacuum arcs were first identified over 110 years ago, they are not yet well understood. We have since developed a model of breakdown and gradient limits that tries to explain, in a self-consistent way: arc triggering, plasma initiation, plasma evolution, surface damage and gradient limits. We use simple PIC codes for modeling plasmas, molecular dynamics for modeling surface breakdown and surface damage, and mesoscale surface thermodynamics and finite element electrostatic codes to evaluate surface properties. Since any given experiment seems to have more variables than data points, we have tried to consider a wide variety of arcing (rf structures, e-beam welding, laser ablation, etc.) to help constrain the problem, and concentrate on common mechanisms. While the mechanisms can be comparatively simple, modeling can be challenging.

  15. Paleoclimate Modeling

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Computer simulations of past climate. Variables provided as model output are described by parameter keyword. In some cases the parameter keywords are a subset of all...

  16. Anchor Modeling

    Science.gov (United States)

    Regardt, Olle; Rönnbäck, Lars; Bergholtz, Maria; Johannesson, Paul; Wohed, Petia

    Maintaining and evolving data warehouses is a complex, error prone, and time consuming activity. The main reason for this state of affairs is that the environment of a data warehouse is in constant change, while the warehouse itself needs to provide a stable and consistent interface to information spanning extended periods of time. In this paper, we propose a modeling technique for data warehousing, called anchor modeling, that offers non-destructive extensibility mechanisms, thereby enabling robust and flexible management of changes in source systems. A key benefit of anchor modeling is that changes in a data warehouse environment only require extensions, not modifications, to the data warehouse. This ensures that existing data warehouse applications will remain unaffected by the evolution of the data warehouse, i.e. existing views and functions will not have to be modified as a result of changes in the warehouse model.

  17. Model theory

    CERN Document Server

    Hodges, Wilfrid

    1993-01-01

    An up-to-date and integrated introduction to model theory, designed to be used for graduate courses (for students who are familiar with first-order logic), and as a reference for more experienced logicians and mathematicians.

  18. Accelerated life models modeling and statistical analysis

    CERN Document Server

    Bagdonavicius, Vilijandas

    2001-01-01

    Failure Time Distributions: Introduction; Parametric Classes of Failure Time Distributions. Accelerated Life Models: Introduction; Generalized Sedyakin's Model; Accelerated Failure Time Model; Proportional Hazards Model; Generalized Proportional Hazards Models; Generalized Additive and Additive-Multiplicative Hazards Models; Changing Shape and Scale Models; Generalizations; Models Including Switch-Up and Cycling Effects; Heredity Hypothesis; Summary. Accelerated Degradation Models: Introduction; Degradation Models; Modeling the Influence of Explanatory Varia
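
    Two of the model classes listed above, the accelerated failure time (AFT) and proportional hazards (PH) models, have the familiar forms below (textbook notation for a constant stress/covariate vector x, not necessarily the book's own):

```latex
% AFT: stress rescales time; PH: stress rescales the hazard
\[
\text{AFT:}\quad S_x(t) = S_0\big(r(x)\,t\big),
\qquad
\text{PH:}\quad \lambda_x(t) = r(x)\,\lambda_0(t),
\]
% S_0, \lambda_0: baseline survival and hazard functions; r(x) is the acceleration /
% hazard-ratio function, often taken as exp(beta' x).
```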

  19. Do stroke models model stroke?

    Directory of Open Access Journals (Sweden)

    Philipp Mergenthaler

    2012-11-01

    Stroke is one of the leading causes of death worldwide and the biggest reason for long-term disability. Basic research has formed the modern understanding of stroke pathophysiology, and has revealed important molecular, cellular and systemic mechanisms. However, despite decades of research, most translational stroke trials that aim to introduce basic research findings into clinical treatment strategies – most notably in the field of neuroprotection – have failed. Among other obstacles, poor methodological and statistical standards, negative publication bias, and incomplete preclinical testing have been proposed as ‘translational roadblocks’. In this article, we introduce the models commonly used in preclinical stroke research, discuss some of the causes of failed translational success and review potential remedies. We further introduce the concept of modeling ‘care’ of stroke patients, because current preclinical research models the disorder but does not model care or state-of-the-art clinical testing. Stringent statistical methods and controlled preclinical trials have been suggested to counteract weaknesses in preclinical research. We conclude that preclinical stroke research requires (1) appropriate modeling of the disorder, (2) appropriate modeling of the care of stroke patients and (3) an approach to preclinical testing that is similar to clinical testing, including Phase 3 randomized controlled preclinical trials as necessary additional steps before new therapies enter clinical testing.

  20. Persistent Modelling

    DEFF Research Database (Denmark)

    2012-01-01

    The relationship between representation and the represented is examined here through the notion of persistent modelling. This notion is not novel to the activity of architectural design if it is considered as describing a continued active and iterative engagement with design concerns – an evident characteristic of architectural practice. But the persistence in persistent modelling can also be understood to apply in other ways, reflecting and anticipating extended roles for representation. This book identifies three principal areas in which these extensions are becoming apparent within contemporary....... It also provides critical insight into the use of contemporary modelling tools and methods, together with an examination of the implications their use has within the territories of architectural design, realisation and experience.

  1. Mathematical modeling

    CERN Document Server

    Eck, Christof; Knabner, Peter

    2017-01-01

    Mathematical models are the decisive tool to explain and predict phenomena in the natural and engineering sciences. With this book readers will learn to derive mathematical models which help to understand real world phenomena. At the same time a wealth of important examples for the abstract concepts treated in the curriculum of mathematics degrees are given. An essential feature of this book is that mathematical structures are used as an ordering principle and not the fields of application. Methods from linear algebra, analysis and the theory of ordinary and partial differential equations are thoroughly introduced and applied in the modeling process. Examples of applications in the fields of electrical networks, chemical reaction dynamics, population dynamics, fluid dynamics, elasticity theory and crystal growth are treated comprehensively.

  2. Inflatable Models

    Institute of Scientific and Technical Information of China (English)

    Ling Li; Vasily Volkov

    2006-01-01

    A physically-based model is presented for the simulation of a new type of deformable object: inflatable objects, such as shaped balloons, which consist of pressurized air enclosed by an elastic surface. These objects have properties inherent in both 3D and 2D elastic bodies, as they demonstrate the behaviour of 3D shapes using 2D formulations. As there is no internal structure in them, their behaviour is substantially different from the behaviour of deformable solid objects. We use one of the few available models for deformable surfaces, and enhance it to include the forces of internal and external pressure. These pressure forces may also incorporate buoyancy forces, to allow objects filled with a low density gas to float in denser media. The obtained models demonstrate rich dynamic behaviour, such as bouncing, floating, deflation and inflation.
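
    As a rough illustration of the pressure idea described in this record, the following Python sketch applies a net (internal minus external) pressure force to the edges of a closed 2D polygon standing in for the elastic surface; the 2D ideal-gas pressure law and all parameters are assumptions for illustration, not the paper's formulation.

```python
# Minimal 2D sketch of the pressure idea: a closed polygon of mass points stands
# in for the elastic surface, and the net (internal minus external) pressure
# pushes each edge outward along its normal. The 2D ideal-gas pressure law and
# all parameters are illustrative assumptions, not the paper's formulation; a
# full simulation would add these forces to the surface's elastic spring forces.
import numpy as np

def polygon_area(pts):
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def pressure_forces(pts, nRT=5.0, p_external=1.0):
    """Per-vertex forces from net pressure acting on each edge of a CCW polygon."""
    p_internal = nRT / polygon_area(pts)            # 2D ideal-gas analogue: P = nRT / A
    dp = p_internal - p_external
    forces = np.zeros_like(pts)
    n = len(pts)
    for i in range(n):
        j = (i + 1) % n
        edge = pts[j] - pts[i]
        outward = np.array([edge[1], -edge[0]])     # |outward| equals the edge length
        f = dp * outward                            # pressure * length * unit normal
        forces[i] += 0.5 * f                        # split between the edge endpoints
        forces[j] += 0.5 * f
    return forces

angles = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
balloon = np.column_stack([np.cos(angles), np.sin(angles)])   # counter-clockwise circle
print(pressure_forces(balloon)[:3])
```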

  3. Lens Model

    DEFF Research Database (Denmark)

    Nash, Ulrik William

    2014-01-01

    Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory of probabil......Firms consist of people who make decisions to achieve goals. How do these people develop the expectations which underpin the choices they make? The lens model provides one answer to this question. It was developed by cognitive psychologist Egon Brunswik (1952) to illustrate his theory...... of probabilistic functionalism, and concerns the environment and the mind, and adaptation by the latter to the former. This entry is about the lens model, and probabilistic functionalism more broadly. Focus will mostly be on firms and their employees, but, to fully appreciate the scope, we have to keep in mind...

  5. Molecular modeling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-01-01

    Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease the harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modeling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.

  6. Smashnova Model

    CERN Document Server

    Sivaram, C

    2007-01-01

    An alternate model for gamma ray bursts is suggested. For a white dwarf (WD) and neutron star (NS) very close binary system, the WD (close to Mch) can detonate due to tidal heating, leading to a SN. Material falling on to the NS at relativistic velocities can cause its collapse to a magnetar or quark star or black hole leading to a GRB. As the material smashes on to the NS, it is dubbed the Smashnova model. Here the SN is followed by a GRB. An NS impacting an RG (or RSG) (like in Thorne-Zytkow objects) can also cause a SN outburst followed by a GRB. Other variations are explored.

  7. Modelling language

    CERN Document Server

    Cardey, Sylviane

    2013-01-01

    In response to the need for reliable results from natural language processing, this book presents an original way of decomposing a language(s) in a microscopic manner by means of intra/inter‑language norms and divergences, going progressively from languages as systems to the linguistic, mathematical and computational models, which being based on a constructive approach are inherently traceable. Languages are described with their elements aggregating or repelling each other to form viable interrelated micro‑systems. The abstract model, which contrary to the current state of the art works in int

  8. Building Models and Building Modelling

    DEFF Research Database (Denmark)

    Jørgensen, Kaj Asbjørn; Skauge, Jørn

    In the report's introductory chapter, the primary concepts relating to building models are described and some fundamental conditions for computer-based modelling are set out. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of......modelling and building models. It is emphasised that modelling should be carried out at several levels of abstraction and in two dimensions in the so-called modelling matrix. From this, the primary phases of building modelling are identified. Next, the basic characteristics of building models are described. This...... includes a clarification of the concepts of object-oriented software and object-oriented models. It is stressed that the notion of object-based modelling provides a sufficient and better understanding. Finally, the idea of the ideal building model is described as being one unified model that is used through...

  9. Zeebrugge Model

    DEFF Research Database (Denmark)

    Jensen, Morten S.; Frigaard, Peter

    In the following, results from model tests with the Zeebrugge breakwater are presented. The objective of these tests is partly to investigate the influence on wave run-up due to a changing water level during a storm. Finally, the influence on wave run-up due to an introduced longshore current...

  10. Why Model?

    Directory of Open Access Journals (Sweden)

    Olaf eWolkenhauer

    2014-01-01

    Full Text Available Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of gene and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question Why model?

  11. Why model?

    Science.gov (United States)

    Wolkenhauer, Olaf

    2014-01-01

    Next generation sequencing technologies are bringing about a renaissance of mining approaches. A comprehensive picture of the genetic landscape of an individual patient will be useful, for example, to identify groups of patients that do or do not respond to certain therapies. The high expectations may however not be satisfied if the number of patient groups with similar characteristics is going to be very large. I therefore doubt that mining sequence data will give us an understanding of why and when therapies work. For understanding the mechanisms underlying diseases, an alternative approach is to model small networks in quantitative mechanistic detail, to elucidate the role of gene and proteins in dynamically changing the functioning of cells. Here an obvious critique is that these models consider too few components, compared to what might be relevant for any particular cell function. I show here that mining approaches and dynamical systems theory are two ends of a spectrum of methodologies to choose from. Drawing upon personal experience in numerous interdisciplinary collaborations, I provide guidance on how to model by discussing the question "Why model?"

  12. Model CAPM

    OpenAIRE

    Burianová, Eva

    2008-01-01

    The aim of the first part of this bachelor thesis is, by analysing the source texts, to give a theoretical summary of the economic models and theories on which the CAPM model is built: Markowitz's portfolio theory model (the analysis of expected utility maximization and the model of optimal portfolio selection based on it), Tobin's extension of Markowitz's model (splitting the selection of the optimal portfolio into two phases: first determining the optimal combination of risky instruments and then allocating the available capital between this optimal ...

  13. Transport modeling

    Institute of Scientific and Technical Information of China (English)

    R.E. Waltz

    2007-01-01

    There has been remarkable progress during the past decade in understanding and modeling turbulent transport in tokamaks. With some exceptions the progress is derived from the huge increases in computational power and the ability to simulate tokamak turbulence with ever more fundamental and physically realistic dynamical equations, e.g.

  14. Painting models

    Science.gov (United States)

    Baart, F.; Donchyts, G.; van Dam, A.; Plieger, M.

    2015-12-01

    The emergence of interactive art has blurred the line between electronics, computer graphics and art. Here we apply this art form to numerical models and show how the transformation of a numerical model into an interactive painting can both provide insights and solve real world problems. The cases that are used as examples include forensic reconstructions, dredging optimization, and barrier design. The system can be fed using any source of time varying vector fields, such as hydrodynamic models. The cases used here, the Indian Ocean (HYCOM), the Wadden Sea (Delft3D Curvilinear), San Francisco Bay (3Di subgrid and Delft3D Flexible Mesh), show that the method used is suitable for different time and spatial scales. High resolution numerical models become interactive paintings by exchanging their velocity fields with a high resolution (>=1M cells) image based flow visualization that runs in an HTML5-compatible web browser. The image based flow visualization combines three images into a new image: the current image, a drawing, and a uv + mask field. The advection scheme that computes the resultant image is executed on the graphics card using WebGL, allowing for 1M grid cells at 60Hz performance on mediocre graphics cards. The software is provided as open source software. By using different sources for a drawing one can gain insight into several aspects of the velocity fields. These aspects include not only the commonly represented magnitude and direction, but also divergence, topology and turbulence.
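
    The core advection step the record describes (carrying a "drawing" along a velocity field and blending it with the current frame) can be sketched on the CPU with NumPy as below; the grid size, time step, and blending weight are assumptions, and the published system runs this as a WebGL shader rather than Python.

```python
# CPU sketch of the core image-based flow visualization step: semi-Lagrangian
# advection carries the "drawing" along the velocity field (u, v) and the result
# is blended with the current frame. Grid size, time step, and blending weight
# are illustrative assumptions; the published system runs this as a WebGL shader.
import numpy as np

def advect(image, u, v, dt=1.0):
    """Sample each pixel from its upstream location (nearest neighbour for brevity)."""
    ny, nx = image.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    src_x = np.clip(np.round(xs - dt * u).astype(int), 0, nx - 1)
    src_y = np.clip(np.round(ys - dt * v).astype(int), 0, ny - 1)
    return image[src_y, src_x]

ny = nx = 128
drawing = np.random.rand(ny, nx)                  # the "paint" being injected each frame
u, v = np.ones((ny, nx)), np.zeros((ny, nx))      # uniform flow to the right
frame = np.zeros((ny, nx))
for _ in range(10):                               # a few animation steps
    frame = 0.9 * advect(frame, u, v) + 0.1 * drawing
print(frame.mean())
```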

  15. Modeling Muscles

    Science.gov (United States)

    Goodwyn, Lauren; Salm, Sarah

    2007-01-01

    Teaching the anatomy of the muscle system to high school students can be challenging. Students often learn about muscle anatomy by memorizing information from textbooks or by observing plastic, inflexible models. Although these mediums help students learn about muscle placement, the mediums do not facilitate understanding regarding integration of…

  16. Entrepreneurship Models.

    Science.gov (United States)

    Finger Lakes Regional Education Center for Economic Development, Mount Morris, NY.

    This guide describes seven model programs that were developed by the Finger Lakes Regional Center for Economic Development (New York) to meet the training needs of female and minority entrepreneurs to help their businesses survive and grow and to assist disabled and dislocated workers and youth in beginning small businesses. The first three models…

  17. Quality modelling

    NARCIS (Netherlands)

    Tijskens, L.M.M.

    2003-01-01

    For modelling product behaviour, with respect to quality for users and consumers, its essential to have at least a fundamental notion what quality really is, and which product properties determine the quality assigned by the consumer to a product. In other words: what is allowed and what is to be

  18. Criticality Model

    Energy Technology Data Exchange (ETDEWEB)

    A. Alsaed

    2004-09-14

    The ''Disposal Criticality Analysis Methodology Topical Report'' (YMP 2003) presents the methodology for evaluating potential criticality situations in the monitored geologic repository. As stated in the referenced Topical Report, the detailed methodology for performing the disposal criticality analyses will be documented in model reports. Many of the models developed in support of the Topical Report differ from the definition of models as given in the Office of Civilian Radioactive Waste Management procedure AP-SIII.10Q, ''Models'', in that they are procedural, rather than mathematical. These model reports document the detailed methodology necessary to implement the approach presented in the Disposal Criticality Analysis Methodology Topical Report and provide calculations utilizing the methodology. Thus, the governing procedure for this type of report is AP-3.12Q, ''Design Calculations and Analyses''. The ''Criticality Model'' is of this latter type, providing a process for evaluating the criticality potential of in-package and external configurations. The purpose of this analysis is to lay out the process for calculating the criticality potential for various in-package and external configurations and to calculate lower-bound tolerance limit (LBTL) values and determine range of applicability (ROA) parameters. The LBTL calculations and the ROA determinations are performed using selected benchmark experiments that are applicable to various waste forms and various in-package and external configurations. The waste forms considered in this calculation are pressurized water reactor (PWR), boiling water reactor (BWR), Fast Flux Test Facility (FFTF), Training Research Isotope General Atomic (TRIGA), Enrico Fermi, Shippingport pressurized water reactor, Shippingport light water breeder reactor (LWBR), N-Reactor, Melt and Dilute, and Fort Saint Vrain Reactor spent nuclear fuel (SNF). The scope of
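
    As a hedged illustration of the statistical step behind an LBTL, the sketch below computes a one-sided 95%/95% lower tolerance bound on a set of calculated benchmark k-eff values using SciPy; the normality assumption, the coverage and confidence levels, and the sample values are illustrative, and this is not the report's documented procedure.

```python
# Sketch of one common statistical ingredient of an LBTL: a one-sided 95%/95%
# lower tolerance bound on calculated benchmark k-eff values, assuming they are
# approximately normally distributed. Coverage, confidence, and the sample
# values are illustrative; this is not the report's documented procedure.
import numpy as np
from scipy import stats

def lower_tolerance_limit(keff, coverage=0.95, confidence=0.95):
    k = np.asarray(keff, dtype=float)
    n = k.size
    # one-sided normal tolerance factor from the noncentral t distribution
    noncentrality = stats.norm.ppf(coverage) * np.sqrt(n)
    factor = stats.nct.ppf(confidence, df=n - 1, nc=noncentrality) / np.sqrt(n)
    return k.mean() - factor * k.std(ddof=1)

benchmarks = [0.9968, 1.0003, 0.9981, 0.9990, 0.9975, 1.0012, 0.9960, 0.9987]
print(round(lower_tolerance_limit(benchmarks), 4))
```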

  19. Information Model for Product Modeling

    Institute of Scientific and Technical Information of China (English)

    焦国方; 刘慎权

    1992-01-01

    The key problems in product modeling for integrated CAD/CAM systems are the information structures and representations of products. They are taking on more and more important roles in engineering applications. Based on an investigation of engineering product information and from the viewpoint of the industrial process, this paper proposes information models and gives definitions of the framework of product information. The integration and consistency of product information are then discussed by introducing the entity and its instance. In summary, the information structures described in this paper have many advantages and properties helpful in engineering design.

  20. Building Models and Building Modelling

    DEFF Research Database (Denmark)

    Jørgensen, Kaj; Skauge, Jørn

    2008-01-01

    In the report's introductory chapter, the primary concepts relating to building models are described and some fundamental conditions for computer-based modelling are set out. In addition, the difference between drawing programs and building modelling programs is described. Important aspects of comp...

  1. Molecular Modelling

    Directory of Open Access Journals (Sweden)

    Aarti Sharma

    2009-12-01

    Full Text Available The use of computational chemistry in the development of novel pharmaceuticals is becoming an increasingly important tool. In the past, drugs were simply screened for effectiveness. The recent advances in computing power and the exponential growth of the knowledge of protein structures have made it possible for organic compounds to be tailored to decrease harmful side effects and increase the potency. This article provides a detailed description of the techniques employed in molecular modeling. Molecular modelling is a rapidly developing discipline, and has been supported by the dramatic improvements in computer hardware and software in recent years.

  2. Cheating models

    DEFF Research Database (Denmark)

    Arnoldi, Jakob

    The article discusses the use of algorithmic models for so-called High Frequency Trading (HFT) in finance. HFT is controversial yet widespread in modern financial markets. It is a form of automated trading technology which critics among other things claim can lead to market manipulation. Drawing...... on two cases, this article shows that manipulation more likely happens in the reverse way, meaning that human traders attempt to make algorithms ‘make mistakes’ or ‘mislead’ algos. Thus, it is algorithmic models, not humans, that are manipulated. Such manipulation poses challenges for security exchanges....... The article analyses these challenges and argues that we witness a new post-social form of human-technology interaction that will lead to a reconfiguration of professional codes for financial trading....

  3. Acyclic models

    CERN Document Server

    Barr, Michael

    2002-01-01

    Acyclic models is a method heavily used to analyze and compare various homology and cohomology theories appearing in topology and algebra. This book is the first attempt to put together in a concise form this important technique and to include all the necessary background. It presents a brief introduction to category theory and homological algebra. The author then gives the background of the theory of differential modules and chain complexes over an abelian category to state the main acyclic models theorem, generalizing and systemizing the earlier material. This is then applied to various cohomology theories in algebra and topology. The volume could be used as a text for a course that combines homological algebra and algebraic topology. Required background includes a standard course in abstract algebra and some knowledge of topology. The volume contains many exercises. It is also suitable as a reference work for researchers.

  4. Model-free data analysis for source separation based on Non-Negative Matrix Factorization and k-means clustering (NMFk)

    Science.gov (United States)

    Vesselinov, V. V.; Alexandrov, B.

    2014-12-01

    The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones is recording a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r, and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site. We identify the sources, which are barometric-pressure and water-supply pumping effects, and estimate their impact. We also estimate the
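
    A minimal sketch of the NMFk idea (NMF restarts plus k-means clustering of the resulting source signatures to pick a robust number of sources) can be written with scikit-learn as below; the matrix shapes, restart counts, and silhouette criterion are assumptions for illustration, not LANL's implementation.

```python
# Minimal sketch of the NMFk idea with scikit-learn: factorize the mixed signals
# with NMF for several candidate numbers of sources r, then k-means-cluster the
# source signatures obtained from repeated random restarts and keep the r whose
# clusters are most reproducible. Shapes, restart counts, and the silhouette
# criterion are illustrative assumptions, not LANL's implementation.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF
from sklearn.metrics import silhouette_score

def nmfk(X, r_candidates=(2, 3, 4), n_restarts=10, seed=0):
    """X: (n_observation_points, n_times) non-negative matrix of mixed signals."""
    rng = np.random.RandomState(seed)
    scores = {}
    for r in r_candidates:
        signatures = []                                   # mixing vectors from all restarts
        for _ in range(n_restarts):
            model = NMF(n_components=r, init="random", max_iter=1000,
                        random_state=rng.randint(1 << 30))
            W = model.fit_transform(X)                    # (n_points, r) mixing matrix
            W = W / (np.linalg.norm(W, axis=0, keepdims=True) + 1e-12)
            signatures.extend(W.T)                        # r normalized vectors per restart
        signatures = np.array(signatures)
        labels = KMeans(n_clusters=r, n_init=10, random_state=seed).fit_predict(signatures)
        scores[r] = silhouette_score(signatures, labels)  # tight clusters -> reproducible r
    best_r = max(scores, key=scores.get)
    return best_r, scores
```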

  5. Nuclear Models

    Science.gov (United States)

    Fossión, Rubén

    2010-09-01

    The atomic nucleus is a typical example of a many-body problem. On the one hand, the number of nucleons (protons and neutrons) that constitute the nucleus is too large to allow for exact calculations. On the other hand, the number of constituent particles is too small for the individual nuclear excitation states to be explained by statistical methods. Another problem, particular to the atomic nucleus, is that the nucleon-nucleon (n-n) interaction is not one of the fundamental forces of Nature, and is hard to put in a single closed equation. The nucleon-nucleon interaction also behaves differently between two free nucleons (bare interaction) and between two nucleons in the nuclear medium (dressed interaction). For the above reasons, specific nuclear many-body models have been devised, each of which sheds light on some selected aspects of nuclear structure. Only by combining the viewpoints of different models can a global insight into the atomic nucleus be gained. In this chapter, we review the Nuclear Shell Model as an example of the microscopic approach, and the Collective Model as an example of the geometric approach. Finally, we study the statistical properties of nuclear spectra, based on symmetry principles, to find out whether there is quantum chaos in the atomic nucleus. All three major approaches have been rewarded with the Nobel Prize in Physics. In the text, we will stress how each approach introduces its own series of approximations to reduce the prohibitively large number of degrees of freedom of the full many-body problem to a smaller, manageable number of effective degrees of freedom.

  6. Modelling Behaviour

    DEFF Research Database (Denmark)

    2015-01-01

    This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while....... The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015....

  7. Modeling Minds

    DEFF Research Database (Denmark)

    Michael, John

    others' minds. Then (2), in order to bring to light some possible justifications, as well as hazards and criticisms of the methodology of looking time tests, I will take a closer look at the concept of folk psychology and will focus on the idea that folk psychology involves using oneself as a model...... of other people in order to predict and understand their behavior. Finally (3), I will discuss the historical location and significance of the emergence of looking time tests...

  8. Modeling biomembranes.

    Energy Technology Data Exchange (ETDEWEB)

    Plimpton, Steven James; Heffernan, Julieanne; Sasaki, Darryl Yoshio; Frischknecht, Amalie Lucile; Stevens, Mark Jackson; Frink, Laura J. Douglas

    2005-11-01

    Understanding the properties and behavior of biomembranes is fundamental to many biological processes and technologies. Microdomains in biomembranes or ''lipid rafts'' are now known to be an integral part of cell signaling, vesicle formation, fusion processes, protein trafficking, and viral and toxin infection processes. Understanding how microdomains form, how they depend on membrane constituents, and how they act not only has biological implications, but also will impact Sandia's effort in development of membranes that structurally adapt to their environment in a controlled manner. To provide such understanding, we created physically-based models of biomembranes. Molecular dynamics (MD) simulations and classical density functional theory (DFT) calculations using these models were applied to phenomena such as microdomain formation, membrane fusion, pattern formation, and protein insertion. Because lipid dynamics and self-organization in membranes occur on length and time scales beyond atomistic MD, we used coarse-grained models of double tail lipid molecules that spontaneously self-assemble into bilayers. DFT provided equilibrium information on membrane structure. Experimental work was performed to further help elucidate the fundamental membrane organization principles.

  9. Progress of the LANL Low Temperature/Low Frequency Air Opacity Project - Optical Theory for HET-project: an update, April 2015

    Energy Technology Data Exchange (ETDEWEB)

    Timmermans, Eddy Marcel Elvire [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nisoli, Cristiano [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mozyrsky, Dima [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Hakel, Peter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sherrill, Manolo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Duffy, Leanne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-05-20

    Light radiated from a hot, opaque thermal emitter originates mostly from near the surface at which the object becomes opaque (the surface of last scattering). To be specific, we define the “optical surface” as the surface at which the optical depth, as observed from a detector, takes on the value of 1. The optical depth along a line of sight is wavelength dependent. Accumulating light in different spectral bands, a spectral detector therefore records light from different surfaces, a structure that we can picture somewhat like the layers of an onion. The theoretical framework that predicts the emitted spectral signal is radiative transfer.
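
    The "optical surface" definition above (optical depth equal to 1 as seen from the detector) can be illustrated with a short NumPy sketch that accumulates optical depth along a line of sight and interpolates the depth at which it reaches 1; the absorption-coefficient profile is an arbitrary assumption.

```python
# Sketch: locate the "optical surface" (optical depth tau = 1) along a line of
# sight, given a wavelength-dependent absorption coefficient kappa tabulated at
# path positions s measured from the detector. The kappa profile is an
# arbitrary assumption for illustration.
import numpy as np

def optical_surface(s, kappa, tau_surface=1.0):
    """s: path positions from the detector; kappa: absorption coefficient [1/length]."""
    # cumulative optical depth tau(s) = integral of kappa ds' from the detector (trapezoid rule)
    tau = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
    if tau[-1] < tau_surface:
        return None                                   # optically thin along this ray
    return float(np.interp(tau_surface, tau, s))      # depth at which tau reaches 1

s = np.linspace(0.0, 10.0, 1001)                      # path coordinate
kappa = 0.05 + 0.03 * s                               # absorber gets denser deeper in
print("optical surface at s =", optical_surface(s, kappa))
```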

  10. Sampling and Analysis Plan (SAP) for Assessment of LANL-Derived Residual Radionuclides in Soils within Tract A-16-d for Land Conveyance and Transfer

    Energy Technology Data Exchange (ETDEWEB)

    Gillis, Jessica Mcdonnel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-18

    The A-16-D tract consists of the easternmost portion of DP mesa and is bounded on the North by the canyon bottom of DP canyon and on the South by the edge of the initial slope into Los Alamos Canyon (see Figure 1).

  11. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a kind of model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC-based modeling approach, a case study set in one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.

  12. DTN Modeling in OPNET Modeler

    Directory of Open Access Journals (Sweden)

    PAPAJ Jan

    2014-05-01

    Full Text Available Traditional wireless networks use the concept of point-to-point forwarding inherited from reliable wired networks, which is not ideal for the wireless environment. New emerging applications and networks operate mostly disconnected. So-called Delay-Tolerant Networks (DTNs) are receiving increasing attention from both academia and industry. DTNs introduced a store-carry-and-forward concept that solves the problem of intermittent connectivity. The behavior of such networks is verified by real models, computer simulation, or a combination of both approaches. Computer simulation has become the primary and cost-effective tool for evaluating the performance of DTNs. OPNET Modeler is our target simulation tool, and we wanted to extend OPNET’s simulation capabilities towards DTNs. We implemented the bundle protocol in OPNET Modeler, allowing the simulation of cases based on the bundle concept, such as epidemic forwarding, which relies on flooding the network with messages, and a forwarding algorithm based on the history of past encounters (PRoPHET). The implementation details are provided in the article.
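
    The epidemic forwarding concept mentioned in this record can be illustrated with a toy store-carry-and-forward simulation in Python; the contact schedule, node count, and single-bundle scenario are assumptions, and this is a conceptual sketch rather than OPNET Modeler code or the PRoPHET variant.

```python
# Toy store-carry-and-forward simulation of epidemic forwarding: whenever two
# nodes meet, each copies every bundle the other is missing. The contact
# schedule, node count, and single-bundle scenario are arbitrary assumptions;
# this is a conceptual sketch, not OPNET Modeler code or the PRoPHET variant.
import random

random.seed(1)
NODES, STEPS = 20, 50
buffers = {n: set() for n in range(NODES)}     # per-node bundle store
buffers[0].add("bundle-1")                     # the source injects one bundle
delivered_at = None

for t in range(1, STEPS + 1):
    a, b = random.sample(range(NODES), 2)      # one random pairwise contact per step
    union = buffers[a] | buffers[b]            # epidemic exchange in both directions
    buffers[a], buffers[b] = set(union), set(union)
    if delivered_at is None and "bundle-1" in buffers[NODES - 1]:
        delivered_at = t                       # destination is node NODES-1

copies = sum("bundle-1" in store for store in buffers.values())
print(f"delivered at contact {delivered_at}; copies in the network: {copies}")
```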

  13. A Model

    Institute of Scientific and Technical Information of China (English)

    Liu Zhiyang

    2011-01-01

    Similar to ISO Technical Committees, SAC Technical Committees undertake the management and coordination of standards development and amendments in various sectors of industry, playing the role of a bridge among enterprises, research institutions and the governmental standardization administration. How to fully play this essential role is the vital issue SAC has been committed to resolving. Among hundreds of SAC TCs, one stands out in knitting together isolated, scattered, but highly competitive enterprises in the same industry with the "Standards" thread, and in achieving remarkable results in promoting industry development through standardization. It sets a role model for other TCs.

  14. Potential Release Site Sediment Concentrations Correlated to Storm Water Station Runoff through GIS Modeling

    Energy Technology Data Exchange (ETDEWEB)

    C.T. McLean

    2005-06-01

    This research examined the relationship between sediment sample data taken at Potential Release Sites (PRSs) and storm water samples taken at selected sites in and around Los Alamos National Laboratory (LANL). The PRSs had been evaluated for erosion potential and a matrix scoring system implemented. It was assumed that there would be a stronger relationship between the high erosion PRSs and the storm water samples. To establish the relationship, the research was broken into two areas. The first area was raster-based modeling, and the second area was data analysis utilizing the raster based modeling results and the sediment and storm water sample results. Two geodatabases were created utilizing raster modeling functions and the Arc Hydro program. The geodatabase created using only Arc Hydro functions contains very fine catchment drainage areas in association with the geometric network and can be used for future contaminant tracking. The second geodatabase contains sub-watersheds for all storm water stations used in the study along with a geometric network. The second area of the study focused on data analysis. The analytical sediment data table was joined to the PRSs spatial data in ArcMap. All PRSs and PRSs with high erosion potential were joined separately to create two datasets for each of 14 analytes. Only the PRSs above the background value were retained. The storm water station spatial data were joined to the table of analyte values that were either greater than the National Pollutant Discharge Elimination System (NPDES) Multi-Sector General Permit (MSGP) benchmark value, or the Department of Energy (DOE) Drinking Water Defined Contribution Guideline (DWDCG). Only the storm water stations were retained that had sample values greater than the NPDES MSGP benchmark value or the DOE DWDCG. Separate maps were created for each analyte showing the sub-watersheds, the PRSs over background, and the storm water stations greater than the NPDES MSGP benchmark value or the
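
    A hedged sketch of the data-analysis step (joining sample points to sub-watersheds and keeping records above a screening value) is shown below using geopandas in place of ArcMap; the file names, column names, and threshold are hypothetical placeholders, not the study's actual data.

```python
# Hedged sketch of the analysis step using geopandas in place of ArcMap: join
# PRS sediment sample points to sub-watershed polygons and keep only analyte
# results above a screening value. File names, column names ("Pb",
# "station_id"), and the threshold are hypothetical placeholders.
import geopandas as gpd

prs = gpd.read_file("prs_sediment_samples.shp")    # sample points with analyte results
watersheds = gpd.read_file("subwatersheds.shp")    # one polygon per storm water station

analyte, background = "Pb", 19.7                   # hypothetical background value (mg/kg)
hits = prs[prs[analyte] > background]              # keep only samples above background

# Spatial join: which sub-watershed does each exceeding sample fall in?
joined = gpd.sjoin(hits, watersheds, how="inner", predicate="within")
print(joined.groupby("station_id")[analyte].agg(["count", "max"]))
```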

  15. Colloid-Facilitated Transport of Low-Solubility Radionuclides: A Field, Experimental, and Modeling Investigation

    Energy Technology Data Exchange (ETDEWEB)

    Kersting, A B; Reimus, P W; Abdel-Fattah, A; Allen, P G; Anghel, I; Benedict, F C; Esser, B K; Lu, N; Kung, K S; Nelson, J; Neu, M P; Reilly, S D; Smith, D K; Sylwester, E R; Wang, L; Ware, S D; Warren, RG; Williams, R W; Zavarin, M; Zhao, P

    2003-02-01

    For the last several years, the Underground Test Area (UGTA) program has funded a series of studies carried out by scientists to investigate the role of colloids in facilitating the transport of low-solubility radionuclides in groundwater, specifically plutonium (Pu). Although the studies were carried out independently, the overarching goals of these studies has been to determine if colloids in groundwater at the NTS can and will transport low-solubility radionuclides such as Pu, define the geochemical mechanisms under which this may or may not occur, determine the hydrologic parameters that may or may not enhance transport through fractures and provide recommendations for incorporating this information into future modeling efforts. The initial motivation for this work came from the observation in 1997 and 1998 by scientists from Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL) that low levels of Pu originally from the Benham underground nuclear test were detected in groundwater from two different aquifers collected from wells 1.3 km downgradient (Kersting et al., 1999). Greater than 90% of the Pu and other radionuclides were associated with the naturally occurring colloidal fraction (< 1 micron particles) in the groundwater. The colloids consisted mainly of zeolite (mordenite, clinoptilolite/heulandite), clays (illite, smectite) and cristobalite (SiO{sub 2}). These minerals were also identified as alteration mineral components in the host rock aquifer, a rhyolitic tuff. The observation that Pu can and has migrated in the subsurface at the NTS has forced a rethinking of our basic assumptions regarding the mechanical and geochemical transport pathways of low-solubility radionuclides. If colloid-facilitated transport is the primary mechanism for transporting low-solubility radionuclides in the subsurface, then current transport models based solely on solubility arguments and retardation estimates may underestimate the flux and

  16. Modelling Behaviour

    DEFF Research Database (Denmark)

    2015-01-01

    This book reflects and expands on the current trend in the building industry to understand, simulate and ultimately design buildings by taking into consideration the interlinked elements and forces that act on them. This approach overcomes the traditional, exclusive focus on building tasks, while posing new challenges in all areas of the industry from material and structural to the urban scale. Contributions from invited experts, papers and case studies provide the reader with a comprehensive overview of the field, as well as perspectives from related disciplines, such as computer science. The chapter authors were invited speakers at the 5th Symposium "Modelling Behaviour", which took place at the CITA in Copenhagen in September 2015.

  17. Econometric modelling

    Directory of Open Access Journals (Sweden)

    M. Alguacil Marí

    2017-08-01

    Full Text Available The current economic environment, together with the low scores obtained by our students in recent years, makes it necessary to incorporate new teaching methods. In this sense, econometric modelling provides a unique opportunity offering to the student with the basic tools to address the study of Econometrics in a deeper and novel way. In this article, this teaching method is described, presenting also an example based on a recent study carried out by two students of the Degree of Economics. Likewise, the success of this method is evaluated quantitatively in terms of academic performance. The results confirm our initial idea that the greater involvement of the student, as well as the need for a more complete knowledge of the subject, suppose a stimulus for the study of this subject. As evidence of this, we show how those students who opted for the method we propose here obtained higher qualifications than those that chose the traditional method.

  18. Modelling Defiguration

    DEFF Research Database (Denmark)

    Bork Petersen, Franziska

    2013-01-01

    For the presentation of his autumn/winter 2012 collection in Paris and subsequently in Copenhagen, Danish designer Henrik Vibskov installed a mobile catwalk. The article investigates the choreographic impact of this scenography on those who move through it. Drawing on Dance Studies, the analytical...... advantageous manner. Stepping on the catwalk’s sloping, moving surfaces decelerates the models’ walk and makes it cautious, hesitant and shaky: suddenly the models lack exactly the affirmative, staccato, striving quality of motion, and the condescending expression that they perform on most contemporary...... catwalks. Vibskov’s catwalk induces what the dance scholar Gabriele Brandstetter has labelled a ‘defigurative choreography’: a straying from definitions, which exist in ballet as in other movement-based genres, of how a figure should move and appear (1998). The catwalk scenography in this instance...

  19. On Activity modelling in process modeling

    Directory of Open Access Journals (Sweden)

    Dorel Aiordachioaie

    2001-12-01

    Full Text Available The paper looks at the dynamic feature of the meta-models of the process modelling process: time. Some principles are considered and discussed as main dimensions of any modelling activity: the compatibility of the substances, the equipresence of phenomena and the solvability of the model. The activity models are considered and represented at the meta-level.

  20. Towards a Multi Business Model Innovation Model

    DEFF Research Database (Denmark)

    Lindgren, Peter; Jørgensen, Rasmus

    2012-01-01

    This paper studies the evolution of business model (BM) innovations related to a multi business model framework. The paper tries to answer the research questions: • What are the requirements for a multi business model innovation model (BMIM)? • How should a multi business model innovation model...... look like? Different generations of BMIMs are initially studied in the context of laying the baseline for how next generation multi BM Innovation model (BMIM) should look like. All generations of models are analyzed with the purpose of comparing the characteristics and challenges of previous...

  1. Better Language Models with Model Merging

    CERN Document Server

    Brants, T

    1996-01-01

    This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.
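
    A toy sketch of the merging idea, not the paper's exact algorithm, is shown below: start a bigram (Markov) model with one state per context and greedily merge the pair of states that degrades held-out perplexity the least; the greedy search and add-one smoothing are illustrative assumptions.

```python
# Toy sketch of Markov-model state merging: start a bigram model with one state
# per word context and greedily merge the pair of states that degrades held-out
# perplexity the least, giving a smaller and more general model. The greedy
# search and add-one smoothing are illustrative choices, not the paper's method.
import math
from collections import Counter, defaultdict
from itertools import combinations

def train_counts(tokens):
    counts = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def perplexity(heldout, counts, state_of, vocab):
    logp = 0.0
    for a, b in zip(heldout, heldout[1:]):
        c = counts[state_of.get(a, a)]
        p = (c[b] + 1) / (sum(c.values()) + len(vocab))     # add-one smoothing
        logp += math.log(p)
    return math.exp(-logp / max(len(heldout) - 1, 1))

def merge_once(counts, state_of, heldout, vocab):
    """Try every pair of states; keep the merge with the best held-out perplexity."""
    best = None
    for s1, s2 in combinations(list(counts), 2):
        merged = defaultdict(Counter, {k: Counter(v) for k, v in counts.items()})
        merged[s1] += merged.pop(s2)                        # pool the two states' counts
        new_map = {w: (s1 if state_of.get(w, w) == s2 else state_of.get(w, w))
                   for w in vocab}
        pp = perplexity(heldout, merged, new_map, vocab)
        if best is None or pp < best[0]:
            best = (pp, merged, new_map)
    return best

train = "the cat sat on the mat the dog sat on the rug".split()
heldout = "the dog sat on the mat".split()
vocab = set(train) | set(heldout)
counts, state_of = train_counts(train), {}
for _ in range(3):                                          # three greedy merges
    pp, counts, state_of = merge_once(counts, state_of, heldout, vocab)
    print(len(counts), "states, held-out perplexity", round(pp, 2))
```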

  2. Final Report, Center for Programming Models for Scalable Parallel Computing: Co-Array Fortran, Grant Number DE-FC02-01ER25505

    Energy Technology Data Exchange (ETDEWEB)

    Robert W. Numrich

    2008-04-22

    The major accomplishment of this project is the production of CafLib, an 'object-oriented' parallel numerical library written in Co-Array Fortran. CafLib contains distributed objects such as block vectors and block matrices along with procedures, attached to each object, that perform basic linear algebra operations such as matrix multiplication, matrix transpose and LU decomposition. It also contains constructors and destructors for each object that hide the details of data decomposition from the programmer, and it contains collective operations that allow the programmer to calculate global reductions, such as global sums, global minima and global maxima, as well as vector and matrix norms of several kinds. CafLib is designed to be extensible in such a way that programmers can define distributed grid and field objects, based on vector and matrix objects from the library, for finite difference algorithms to solve partial differential equations. A very important extra benefit that resulted from the project is the inclusion of the co-array programming model in the next Fortran standard called Fortran 2008. It is the first parallel programming model ever included as a standard part of the language. Co-arrays will be a supported feature in all Fortran compilers, and the portability provided by standardization will encourage a large number of programmers to adopt it for new parallel application development. The combination of object-oriented programming in Fortran 2003 with co-arrays in Fortran 2008 provides a very powerful programming model for high-performance scientific computing. Additional benefits from the project, beyond the original goal, include a program to provide access to the co-array model through access to the Cray compiler as a resource for teaching and research. Several academics, for the first time, included the co-array model as a topic in their courses on parallel computing. A separate collaborative project with LANL and PNNL showed how to

  3. Does a Social Network Based Model of Journal Metrics Improve Ranking? A review of: Bollen, Johan, Herbert Van de Sompel, Joan A. Smith and Rick Luce. “Toward Alternative Metrics of Journal Impact: A Comparison of Download and Citation Data.” Information Processing and Management 41.6 (2005): 1419-40.

    Directory of Open Access Journals (Sweden)

    Carol Perryman

    2007-06-01

    Full Text Available Objective – To test a new model for measuring journal impact by using principles of social networking. Research questions are as follows: 1. Can valid networks of journal relationships be derived from reader article download patterns registered in a digital library’s server logs? 2. Can social network metrics of journal impact validly be calculated from the structure of such networks? 3. If so, how do the resulting journal impact rankings relate to the ISI impact factor (IF)?
    Design – Bibliometric, social network centrality analysis.
    Setting – Los Alamos National Laboratory (LANL), New Mexico.
    Subjects – 40,847 full-text articles downloaded from a large digital library by 1,858 unique users over a 6-month period.
    Methods – Full-text article downloads from a large digital library for a six-month period were examined using social networking analysis methods. ISSNs for journals in which the retrieved articles were published were paired based upon the proximity of use by the same user, based on the supposition that proximal downloads are related in some way. Reader-Generated Networks (RGNs) were then tested for small-world characteristics. The resulting RGN data were then compared with Author-Generated Networks (AGNs) for the same journals indexed in the Institute of Scientific Information (ISI) annual impact factor (IF) rankings, in the Journal Citation Reports (JCR) database. Next, a sample of the AGN-derived pairings was examined by a team of 22 scientists, who were asked to rate the strength of relationships between journals on a five-point scale. Centrality ratings were calculated for the AGN and RGN sets of journals, as well as for the ISI IF.
    Main results – Closeness and centrality rankings for the ISI IF and the AGN metrics were low, but significant, suggesting that centrality metrics are an acceptable impact metric. Comparison between the RGN and ISI IF data found marked differences, with RGN mirroring local population needs
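
    The reader-generated-network idea can be sketched with networkx: link journals whose articles are downloaded in succession by the same user and rank them by a centrality measure. The pairing rule, weights, sample log, and the choice of closeness centrality are assumptions for illustration, not the study's exact procedure.

```python
# Sketch of a reader-generated journal network: journals are linked when the
# same user downloads articles from them in succession, and journal "impact" is
# read off a centrality measure. The pairing rule, weights, sample log, and the
# choice of closeness centrality are illustrative assumptions.
import networkx as nx

download_log = [                     # (user_id, ISSN of downloaded article), in time order
    ("u1", "0036-8075"), ("u1", "0028-0836"), ("u1", "0036-8075"),
    ("u2", "0028-0836"), ("u2", "1529-6466"),
]

G = nx.Graph()
last_seen = {}
for user, issn in download_log:
    prev = last_seen.get(user)
    if prev and prev != issn:        # consecutive downloads by the same user
        weight = G.get_edge_data(prev, issn, {"weight": 0})["weight"]
        G.add_edge(prev, issn, weight=weight + 1)
    last_seen[user] = issn

ranking = sorted(nx.closeness_centrality(G).items(), key=lambda kv: -kv[1])
for issn, score in ranking:
    print(issn, round(score, 3))
```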

  4. Model Selection Principles in Misspecified Models

    CERN Document Server

    Lv, Jinchi

    2010-01-01

    Model selection is of fundamental importance to high dimensional modeling featured in many contemporary applications. Classical principles of model selection include the Kullback-Leibler divergence principle and the Bayesian principle, which lead to the Akaike information criterion and Bayesian information criterion when models are correctly specified. Yet model misspecification is unavoidable when we have no knowledge of the true model or when we have the correct family of distributions but miss some true predictor. In this paper, we propose a family of semi-Bayesian principles for model selection in misspecified models, which combine the strengths of the two well-known principles. We derive asymptotic expansions of the semi-Bayesian principles in misspecified generalized linear models, which give the new semi-Bayesian information criteria (SIC). A specific form of SIC admits a natural decomposition into the negative maximum quasi-log-likelihood, a penalty on model dimensionality, and a penalty on model miss...
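
    For orientation, the two classical criteria the semi-Bayesian principles build on can be sketched as below; the paper's SIC adds a misspecification-aware penalty that is not reproduced here, and the example numbers are arbitrary.

```python
# The two classical criteria that the semi-Bayesian principles build on, as a
# minimal sketch: both penalize the maximized log-likelihood by model size.
# (The paper's SIC adds a misspecification-aware penalty, not reproduced here;
# the example log-likelihoods are arbitrary.)
import math

def aic(loglik, k):
    """Akaike information criterion; k = number of fitted parameters."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion; n = sample size."""
    return -2.0 * loglik + k * math.log(n)

candidates = {"small": (-520.4, 3), "large": (-512.1, 9)}   # model -> (loglik, k)
best = min(candidates, key=lambda m: bic(*candidates[m], n=200))
print("preferred by BIC:", best)
```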

  5. CROSS SECTION EVALUATIONS FOR ENDF/B-VII.

    Energy Technology Data Exchange (ETDEWEB)

    HERMAN, M.; ROCHMAN, D.; OBLOZINSKY, P.

    2006-06-05

    This is the final report of the work performed under the LANL contract on neutron cross section evaluations for ENDF/B-VII (April 2005-May 2006). The purpose of the contract was to ensure seamless integration of the LANL neutron cross section evaluations in the new ENDF/B-VII library. The following work was performed: (1) LANL evaluated data files submitted for inclusion in ENDF/B-VII were checked and, when necessary, formal formatting errors were corrected. As a consequence, ENDF checking codes, run on all LANL files, do not report any errors that would raise concern. (2) LANL dosimetry evaluations for {sup 191}Ir and {sup 193}Ir were completed to match ENDF requirements for the general purpose library suitable for transport calculations. A set of covariances for both isotopes is included in the ENDF files. (3) A library of fission products was assembled and successfully tested with ENDF checking codes, with NJOY-99.125 processing, and with simple MCNP calculations. (4) The KALMAN code has been integrated with the EMPIRE system to allow estimation of covariances based on the combination of measurements and model calculations. Covariances were produced for 155,157-Gd and also for the 6 remaining isotopes of Gd.

  6. Pajarito Aerosol Couplings to Ecosystems (PACE) Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Dubey, M [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-01

    Los Alamos National Laboratory (LANL) worked on the Pajarito Aerosol Couplings to Ecosystems (PACE) intensive operational period (IOP). PACE’s primary goal was to demonstrate routine Mobile Aerosol Observing System (MAOS) field operations and improve instrumental and operational performance. LANL operated the instruments efficiently and effectively with remote guidance by the instrument mentors. This was the first time a complex suite of instruments had been operated under the ARM model, and it proved to be a very successful and cost-effective model to build upon.

  7. Comparison of different theory models and basis sets in the calculations of structures and 13C NMR spectra of [Pt(en)(CBDCA-O, O')], an analogue of the antitumor drug carboplatin.

    Science.gov (United States)

    Gao, Hongwei; Wei, Xiujuan; Liu, Xuting; Yan, Tingxia

    2010-03-25

    Comparisons of various density functional theory (DFT) methods at different basis sets in predicting the molecular structures and (13)C NMR spectra for [Pt(en)(CBDCA-O, O')], an analogue of the antitumor drug carboplatin, are reported. DFT methods including B3LYP, B3PW91, mPW1PW91, PBE1PBE, BPV86, PBEPBE, and LSDA are examined. Different basis sets including LANL2DZ, SDD, LANL2MB, CEP-4G, CEP-31G, and CEP-121G are also considered. It is remarkable that the LSDA/SDD level is clearly superior to all of the remaining density functional methods in predicting the structure of [Pt(en)(CBDCA-O, O')]. The results also indicate that the B3LYP/SDD level is the best to predict (13)C NMR spectra for [Pt(en)(CBDCA-O, O')] among all DFT methods.

  8. The IMACLIM model; Le modele IMACLIM

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2003-07-01

    This document provides annexes to the IMACLIM model, which give an updated description of IMACLIM, a model designed as a tool for evaluating greenhouse gas reduction policies. The model is described in a version coupled with POLES, a technical and economic model of the energy industry. Notations, equations, sources, processing and specifications are proposed and detailed. (A.L.B.)

  9. Building Mental Models by Dissecting Physical Models

    Science.gov (United States)

    Srivastava, Anveshna

    2016-01-01

    When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions require greater supervision to…

  11. Modelling live forensic acquisition

    CSIR Research Space (South Africa)

    Grobler, MM

    2009-06-01

    Full Text Available This paper discusses the development of a South African model for Live Forensic Acquisition - Liforac. The Liforac model is a comprehensive model that presents a range of aspects related to Live Forensic Acquisition. The model provides forensic...

  12. Continuous Time Model Estimation

    OpenAIRE

    Carl Chiarella; Shenhuai Gao

    2004-01-01

    This paper introduces an easy to follow method for continuous time model estimation. It serves as an introduction on how to convert a state space model from continuous time to discrete time, how to decompose a hybrid stochastic model into a trend model plus a noise model, how to estimate the trend model by simulation, and how to calculate standard errors from estimation of the noise model. It also discusses the numerical difficulties involved in discrete time models that bring about the unit ...
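
    The continuous-to-discrete conversion step described in this record can be sketched with the standard matrix-exponential trick; the example system and step size below are arbitrary illustrations, not the paper's model.

```python
# Sketch of exact discretization of a linear continuous-time state-space model,
#   dx/dt = A x + B u   ->   x[k+1] = Ad x[k] + Bd u[k]  (u held constant over dt),
# using the standard matrix-exponential trick on an augmented matrix.
# The example system and step size are arbitrary illustrations.
import numpy as np
from scipy.linalg import expm

def c2d(A, B, dt):
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * dt)                # upper blocks are [Ad, Bd]; lower-right block is I
    return Md[:n, :n], Md[:n, n:]

A = np.array([[0.0, 1.0], [-2.0, -0.5]])    # a damped oscillator
B = np.array([[0.0], [1.0]])
Ad, Bd = c2d(A, B, dt=0.1)
print(Ad, Bd, sep="\n")
```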

  13. Comparative Protein Structure Modeling Using MODELLER.

    Science.gov (United States)

    Webb, Benjamin; Sali, Andrej

    2016-06-20

    Comparative protein structure modeling predicts the three-dimensional structure of a given protein sequence (target) based primarily on its alignment to one or more proteins of known structure (templates). The prediction process consists of fold assignment, target-template alignment, model building, and model evaluation. This unit describes how to calculate comparative models using the program MODELLER and how to use the ModBase database of such models, and discusses all four steps of comparative modeling, frequently observed errors, and some applications. Modeling lactate dehydrogenase from Trichomonas vaginalis (TvLDH) is described as an example. The download and installation of the MODELLER software is also described. © 2016 by John Wiley & Sons, Inc.
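
    A minimal comparative-modeling script in the spirit of the MODELLER basic tutorial for the TvLDH example is sketched below; the alignment file name and template code are assumptions here, and a real run needs a licensed MODELLER installation plus the prepared target-template alignment and template PDB file on disk.

```python
# Minimal comparative-modeling script in the spirit of the MODELLER basic
# tutorial for the TvLDH example mentioned above. Requires a licensed MODELLER
# installation; the alignment file 'TvLDH-1bdmA.ali' and template code '1bdmA'
# are assumptions here and must exist on disk for a real run.
from modeller import *              # noqa: F401,F403 (tutorial-style imports)
from modeller.automodel import *    # noqa: F401,F403

env = environ()
env.io.atom_files_directory = ['.']            # where the template PDB file lives

a = automodel(env,
              alnfile='TvLDH-1bdmA.ali',       # target-template alignment (PIR format)
              knowns='1bdmA',                  # template of known structure
              sequence='TvLDH',                # target sequence to model
              assess_methods=(assess.DOPE,     # score the candidate models
                              assess.GA341))
a.starting_model = 1
a.ending_model = 5                             # build five candidate models
a.make()
```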

  14. Multi-scale Measurements and Modeling to Verify and Attribute Carbon Dioxide Emissions from Four Corners Power Plants

    Science.gov (United States)

    Dubey, M. K.; Love, S. P.; Henderson, B. G.; Lee, S.; Costigan, K. R.; Reisner, J.; Flowers, B. A.; Chylek, P.

    2011-12-01

    The Four Corners region of New Mexico contains two large coal-fired power plants with real-time in-stack CO2 and pollutant monitors, in a semi-arid region with a feeble natural carbon cycle, making it an ideal site to evaluate remote sensing top-down verification methods. LANL has developed a test-bed site that includes a high-resolution solar tracking Fourier Transform Spectrometer (Bruker 125 HR) to monitor column abundance of greenhouse gases and pollutants (CO2, CH4, N2O and CO), and in situ cavity ring-down (CRDS, Picarro) and standard EPA sensors that measure CO2, CH4, CO, NOx, SO2 and particulates. We also have deployed a meteorological station, a ceilometer to measure boundary layer heights and an AERONET system to measure aerosol optical depths. We have been making continuous measurements since 11 March 2011. Our system's retrievals were validated against airborne in situ vertical gas profiles measured by NCAR's HIPPO system on 7 June 2011. We report observed power-plant signals, their diurnal cycles, and how they depend on local meteorology. Typically, the total-column FTS data show 2 to 8 ppm increases in CO2 when a power-plant plume is blowing towards our site, while the in situ CRDS sensor measures increases of 10 to 50 ppm. In situ CH4 measurements reveal large nocturnal increases of 4-5 ppm that could be from extensive gas and coal mining activities in the region. In contrast, in situ CO2 increases at night are small, likely because the power plant stacks are higher than the nocturnal boundary layer. Furthermore, our site sampled long range transport of pollutants from the Wallow fire that we distinguish from power plant emissions. To analyze our observations, we have developed a customized ultra-high-resolution plume model (HIGRAD) and coupled it with the Weather Research and Forecasting Model with Chemistry (WRF-Chem) in the Four Corners area. Hourly real-time emissions are taken from EPA's in-stack monitors and other spatio-temporally resolved

  15. Concept Modeling vs. Data modeling in Practice

    DEFF Research Database (Denmark)

    Madsen, Bodil Nistrup; Erdman Thomsen, Hanne

    2015-01-01

    This chapter shows the usefulness of terminological concept modeling as a first step in data modeling. First, we introduce terminological concept modeling with terminological ontologies, i.e. concept systems enriched with characteristics modeled as feature specifications. This enables a formal account of the inheritance of characteristics and allows us to introduce a number of principles and constraints which render concept modeling more coherent than earlier approaches. Second, we explain how terminological ontologies can be used as the basis for developing conceptual and logical data models...

  16. Business Model Innovation

    OpenAIRE

    Dodgson, Mark; Gann, David; Phillips, Nelson; Massa, Lorenzo; Tucci, Christopher

    2014-01-01

    The chapter offers a broad review of the literature at the nexus between Business Models and innovation studies, and examines the notion of Business Model Innovation in three different situations: Business Model Design in newly formed organizations, Business Model Reconfiguration in incumbent firms, and Business Model Innovation in the broad context of sustainability. Tools and perspectives to make sense of Business Models and support managers and entrepreneurs in dealing with Business Model ...

  17. Modeling cholera outbreaks.

    Science.gov (United States)

    Chao, Dennis L; Longini, Ira M; Morris, J Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios.

  18. Modeling cholera outbreaks

    Science.gov (United States)

    Longini, Ira M.; Morris, J. Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating models and the need to tailor models for different outbreak scenarios. PMID:23412687

  19. Model Manipulation for End-User Modelers

    DEFF Research Database (Denmark)

    Acretoaie, Vlad

    of these proposals. To achieve its first goal, the thesis presents the findings of a Systematic Mapping Study showing that human factors topics are scarcely and relatively poorly addressed in model transformation research. Motivated by these findings, the thesis explores the requirements of end-user modelers......End-user modelers are domain experts who create and use models as part of their work. They are typically not Software Engineers, and have little or no programming and meta-modeling experience. However, using model manipulation languages developed in the context of Model-Driven Engineering often...... requires such experience. These languages are therefore only used by a small subset of the modelers that could, in theory, benefit from them. The goals of this thesis are to substantiate this observation, introduce the concepts and tools required to overcome it, and provide empirical evidence in support...

  20. Air Quality Dispersion Modeling - Alternative Models

    Science.gov (United States)

    Models, not listed in Appendix W, that can be used in regulatory applications with case-by-case justification to the Reviewing Authority as noted in Section 3.2, Use of Alternative Models, in Appendix W.

  1. From Product Models to Product State Models

    DEFF Research Database (Denmark)

    Larsen, Michael Holm

    1999-01-01

    A well-known technology designed to handle product data is Product Models. Product Models are in their current form not able to handle all types of product state information. Hence, the concept of a Product State Model (PSM) is proposed. The PSM and in particular how to model a PSM is the Research...... Object for this project. In the presentation, benefits and challenges of the PSM will be presented as a basis for the discussion....

  2. Measurement and Modeling: Infectious Disease Modeling

    NARCIS (Netherlands)

    Kretzschmar, MEE

    2016-01-01

    After some historical remarks about the development of mathematical theory for infectious disease dynamics we introduce a basic mathematical model for the spread of an infection with immunity. The concepts of the model are explained and the model equations are derived from first principles. Using th

  3. (U) A Gruneisen Equation of State for TPX. Application in FLAG

    Energy Technology Data Exchange (ETDEWEB)

    Fredenburg, David A. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Aslam, Tariq Dennis [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bennett, Langdon Stanford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-11-02

    A Gruneisen equation of state (EOS) is developed for the polymer TPX (poly 4-methyl-1-pentene) within the LANL hydrocode FLAG. Experimental shock Hugoniot data for TPX is fit to a form of the Gruneisen EOS, and the necessary parameters for implementing the TPX EOS in FLAG are presented. The TPX EOS is further validated through one-dimensional simulations of recent double-shock experiments, and a comparison is made between the new Gruneisen EOS for TPX and the EOS representation for TPX used in the LANL Common Model.
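
    The abstract does not give the fitted coefficients, but the Gruneisen form referred to is very likely the standard Mie-Gruneisen construction built on a linear shock-velocity/particle-velocity Hugoniot; a sketch with generic symbols (not the TPX values from the report) is

      \[ U_s = c_0 + s\,u_p, \qquad \mu = \frac{\rho}{\rho_0} - 1, \]
      \[ P_H(\mu) = \frac{\rho_0 c_0^2\,\mu(1+\mu)}{\bigl[1 + (1-s)\mu\bigr]^2}, \qquad
         E_H(\mu) = \frac{P_H(\mu)\,\mu}{2\rho_0(1+\mu)}, \]
      \[ P(\rho,E) = P_H(\mu) + \Gamma(\rho)\,\rho\,\bigl[E - E_H(\mu)\bigr], \qquad \Gamma(\rho)\,\rho \approx \Gamma_0\,\rho_0, \]

    where c_0 and s are the Hugoniot fit coefficients and Gamma_0 the ambient Gruneisen parameter; the actual FLAG parameterization for TPX is documented in the report itself.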

  4. Internship at Los Alamos National Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Ryan Q. [Los Alamos National Laboratory

    2012-07-11

    Los Alamos National Laboratory (LANL) is located in Los Alamos, New Mexico. It provides support for our country's nuclear weapon stockpile as well as many other scientific research projects. I am an Undergraduate Student Intern in the Systems Design and Analysis group within the Nuclear Nonproliferation division of the Global Security directorate at LANL. I have been tasked with data analysis and modeling of particles in a fluidized bed system for the capture of carbon dioxide from power plant flue gas.

  5. Modelling of Hydraulic Robot

    DEFF Research Database (Denmark)

    Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik

    1997-01-01

    This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...... of the laws of physics on the system. The unknown (or uncertain) parameters are estimated with Maximum Likelihood (ML) parameter estimation. The identified model has been evaluated by comparing the measurements with simulation of the model. The identified model was much more capable of describing the dynamics...... of the system than the deterministic model....

  6. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models. These ...

  7. "Bohr's Atomic Model."

    Science.gov (United States)

    Willden, Jeff

    2001-01-01

    "Bohr's Atomic Model" is a small interactive multimedia program that introduces the viewer to a simplified model of the atom. This interactive simulation lets students build an atom using an atomic construction set. The underlying design methodology for "Bohr's Atomic Model" is model-centered instruction, which means the central model of the…

  8. Modelling of Hydraulic Robot

    DEFF Research Database (Denmark)

    Madsen, Henrik; Zhou, Jianjun; Hansen, Lars Henrik

    1997-01-01

    This paper describes a case study of identifying the physical model (or the grey box model) of a hydraulic test robot. The obtained model is intended to provide a basis for model-based control of the robot. The physical model is formulated in continuous time and is derived by application...

  9. Forest-fire models

    Science.gov (United States)

    Haiganoush Preisler; Alan Ager

    2013-01-01

    For applied mathematicians forest fire models refer mainly to a non-linear dynamic system often used to simulate spread of fire. For forest managers forest fire models may pertain to any of the three phases of fire management: prefire planning (fire risk models), fire suppression (fire behavior models), and postfire evaluation (fire effects and economic models). In...

  10. Solicited abstract: Global hydrological modeling and models

    Science.gov (United States)

    Xu, Chong-Yu

    2010-05-01

    The origins of rainfall-runoff modeling in the broad sense can be found in the middle of the 19th century, arising in response to three types of engineering problems: (1) urban sewer design, (2) land reclamation drainage systems design, and (3) reservoir spillway design. Since then, numerous empirical, conceptual and physically based models have been developed, including event-based models using the unit hydrograph concept, Nash's linear reservoir models, the HBV model, TOPMODEL, the SHE model, etc. From the late 1980s, the evolution of global and continental-scale hydrology has placed new demands on hydrologic modellers. The macro-scale hydrological (global and regional scale) models were developed on the basis of the following motivations (Arnell, 1999). First, for a variety of operational and planning purposes, water resource managers responsible for large regions need to estimate the spatial variability of resources over large areas, at a spatial resolution finer than can be provided by observed data alone. Second, hydrologists and water managers are interested in the effects of land-use and climate variability and change over a large geographic domain. Third, there is an increasing need to use hydrologic models as a basis to estimate point and non-point sources of pollution loading to streams. Fourth, hydrologists and atmospheric modellers have perceived weaknesses in the representation of hydrological processes in regional and global climate models, and developed global hydrological models to overcome the weaknesses of global climate models. Considerable progress in the development and application of global hydrological models has been achieved to date; however, large uncertainties still exist in model structure, including large-scale flow routing, parameterization, input data, etc. This presentation will focus on the global hydrological models, and the discussion includes (1) types of global hydrological models, (2) procedure of global hydrological model development

  11. Bayesian Model Selection and Statistical Modeling

    CERN Document Server

    Ando, Tomohiro

    2010-01-01

    Bayesian model selection is a fundamental part of the Bayesian statistical modeling process. The quality of these solutions usually depends on the goodness of the constructed Bayesian model. Realizing how crucial this issue is, many researchers and practitioners have been extensively investigating the Bayesian model selection problem. This book provides comprehensive explanations of the concepts and derivations of the Bayesian approach for model selection and related criteria, including the Bayes factor, the Bayesian information criterion (BIC), the generalized BIC, and the pseudo marginal lik
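
    For orientation, the two criteria named here have standard closed forms; as a sketch in generic notation (not quoted from the book),

      \[ B_{12} = \frac{p(\mathbf{y}\mid M_1)}{p(\mathbf{y}\mid M_2)}
               = \frac{\int p(\mathbf{y}\mid\theta_1, M_1)\,p(\theta_1\mid M_1)\,d\theta_1}
                      {\int p(\mathbf{y}\mid\theta_2, M_2)\,p(\theta_2\mid M_2)\,d\theta_2}, \qquad
         \mathrm{BIC} = -2\ln L(\hat\theta) + k\ln n, \]

    where k is the number of free parameters and n the sample size; the model with the larger marginal likelihood, or approximately the smaller BIC, is preferred.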

  12. From Numeric Models to Granular System Modeling

    Directory of Open Access Journals (Sweden)

    Witold Pedrycz

    2015-03-01

    To make this study self-contained, we briefly recall the key concepts of granular computing and demonstrate how this conceptual framework and its algorithmic fundamentals give rise to granular models. We discuss several representative formal setups used in describing and processing information granules including fuzzy sets, rough sets, and interval calculus. Key architectures of models dwell upon relationships among information granules. We demonstrate how information granularity and its optimization can be regarded as an important design asset to be exploited in system modeling and giving rise to granular models. With this regard, an important category of rule-based models along with their granular enrichments is studied in detail.

  13. Geologic Framework Model Analysis Model Report

    Energy Technology Data Exchange (ETDEWEB)

    R. Clayton

    2000-12-19

    The purpose of this report is to document the Geologic Framework Model (GFM), Version 3.1 (GFM3.1) with regard to data input, modeling methods, assumptions, uncertainties, limitations, and validation of the model results, qualification status of the model, and the differences between Version 3.1 and previous versions. The GFM represents a three-dimensional interpretation of the stratigraphy and structural features of the location of the potential Yucca Mountain radioactive waste repository. The GFM encompasses an area of 65 square miles (170 square kilometers) and a volume of 185 cubic miles (771 cubic kilometers). The boundaries of the GFM were chosen to encompass the most widely distributed set of exploratory boreholes (the Water Table or WT series) and to provide a geologic framework over the area of interest for hydrologic flow and radionuclide transport modeling through the unsaturated zone (UZ). The depth of the model is constrained by the inferred depth of the Tertiary-Paleozoic unconformity. The GFM was constructed from geologic map and borehole data. Additional information from measured stratigraphy sections, gravity profiles, and seismic profiles was also considered. This interim change notice (ICN) was prepared in accordance with the Technical Work Plan for the Integrated Site Model Process Model Report Revision 01 (CRWMS M&O 2000). The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The GFM is one component of the Integrated Site Model (ISM) (Figure 1), which has been developed to provide a consistent volumetric portrayal of the rock layers, rock properties, and mineralogy of the Yucca Mountain site. The ISM consists of three components: (1) Geologic Framework Model (GFM); (2) Rock Properties Model (RPM); and (3) Mineralogic Model (MM). The ISM merges the detailed project stratigraphy into model stratigraphic units that are most useful for the primary downstream models and the

  14. Model Theory and Applications

    CERN Document Server

    Mangani, P

    2011-01-01

    This title includes: Lectures - G.E. Sacks - Model theory and applications, and H.J. Keisler - Constructions in model theory; and, Seminars - M. Servi - SH formulas and generalized exponential, and J.A. Makowski - Topological model theory.

  15. Wildfire Risk Main Model

    Data.gov (United States)

    Earth Data Analysis Center, University of New Mexico — The model combines three modeled fire behavior parameters (rate of spread, flame length, crown fire potential) and one modeled ecological health measure (fire regime...

  16. Energy modelling software

    CSIR Research Space (South Africa)

    Osburn, L

    2010-01-01

    Full Text Available The construction industry has turned to energy modelling in order to assist them in reducing the amount of energy consumed by buildings. However, while the energy loads of buildings can be accurately modelled, energy models often under...

  17. Computational neurogenetic modeling

    CERN Document Server

    Benuskova, Lubica

    2010-01-01

    Computational Neurogenetic Modeling is a student text, introducing the scope and problems of a new scientific discipline - Computational Neurogenetic Modeling (CNGM). CNGM is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. These include neural network models and their integration with gene network models. This new area brings together knowledge from various scientific disciplines, such as computer and information science, neuroscience and cognitive science, genetics and molecular biol

  18. Predictive Models for Music

    OpenAIRE

    Paiement, Jean-François; Grandvalet, Yves; Bengio, Samy

    2008-01-01

    Modeling long-term dependencies in time series has proved very difficult to achieve with traditional machine learning methods. This problem occurs when considering music data. In this paper, we introduce generative models for melodies. We decompose melodic modeling into two subtasks. We first propose a rhythm model based on the distributions of distances between subsequences. Then, we define a generative model for melodies given chords and rhythms based on modeling sequences of Narmour featur...

  19. TRACKING CLIMATE MODELS

    Data.gov (United States)

    National Aeronautics and Space Administration — CLAIRE MONTELEONI*, GAVIN SCHMIDT, AND SHAILESH SAROHA* Climate models are complex mathematical models designed by meteorologists, geophysicists, and climate...

  20. Environmental Modeling Center

    Data.gov (United States)

    Federal Laboratory Consortium — The Environmental Modeling Center provides the computational tools to perform geostatistical analysis, to model ground water and atmospheric releases for comparison...

  1. Multilevel modeling using R

    CERN Document Server

    Finch, W Holmes; Kelley, Ken

    2014-01-01

    A powerful tool for analyzing nested designs in a variety of fields, multilevel/hierarchical modeling allows researchers to account for data collected at multiple levels. Multilevel Modeling Using R provides you with a helpful guide to conducting multilevel data modeling using the R software environment.After reviewing standard linear models, the authors present the basics of multilevel models and explain how to fit these models using R. They then show how to employ multilevel modeling with longitudinal data and demonstrate the valuable graphical options in R. The book also describes models fo
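
    As a minimal sketch of the model class the book treats, a two-level random-intercept regression for observation i nested in group j can be written as

      \[ y_{ij} = \beta_0 + \beta_1 x_{ij} + u_j + \varepsilon_{ij}, \qquad
         u_j \sim N(0,\tau^2), \qquad \varepsilon_{ij} \sim N(0,\sigma^2), \]

    where the group-level intercepts u_j absorb the nesting structure; random slopes and further levels extend the same construction.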

  2. Global Business Models

    DEFF Research Database (Denmark)

    Rask, Morten

    insight from the literature about business models, international product policy, international entry modes and globalization into a conceptual model of relevant design elements of global business models, enabling global business model innovation to deal with differences in a downstream perspective...... regarding the customer interface and in an upstream perspective regarding the supply infrastructure. The paper offers a coherent conceptual dynamic meta-model of global business model innovation. Students, scholars and managers within the field of international business can use this conceptualization...... to understand, to study, and to create global business model innovation. Managerial and research implications draw on the developed ideal type of global business model innovation....

  3. Continuous system modeling

    Science.gov (United States)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.

  4. Understandings of 'Modelling'

    DEFF Research Database (Denmark)

    Andresen, Mette

    2007-01-01

    This paper meets the common critique of the teaching of non-authentic modelling in school mathematics. In the paper, non-authentic modelling is related to a change of view on the intentions of modelling from knowledge about applications of mathematical models to modelling for concept formation. Non......-authentic modelling is also linked with the potentials of exploration of ready-made models as a forerunner for more authentic modelling processes. The discussion includes analysis of an episode of students' work in the classroom, which serves to illustrate how concept formation may be linked to explorations of a non...

  5. Interfacing materials models with fire field models

    Energy Technology Data Exchange (ETDEWEB)

    Nicolette, V.F.; Tieszen, S.R.; Moya, J.L.

    1995-12-01

    For flame spread over solid materials, there has traditionally been a large technology gap between fundamental combustion research and the somewhat simplistic approaches used for practical, real-world applications. Recent advances in computational hardware and computational fluid dynamics (CFD)-based software have led to the development of fire field models. These models, when used in conjunction with material burning models, have the potential to bridge the gap between research and application by implementing physics-based engineering models in a transient, multi-dimensional tool. This paper discusses the coupling that is necessary between fire field models and burning material models for the simulation of solid material fires. Fire field models are capable of providing detailed information about the local fire environment. This information serves as an input to the solid material combustion submodel, which subsequently calculates the impact of the fire environment on the material. The response of the solid material (in terms of thermal response, decomposition, charring, and off-gassing) is then fed back into the field model as a source of mass, momentum and energy. The critical parameters which must be passed between the field model and the material burning model have been identified. Many computational issues must be addressed when developing such an interface. Some examples include the ability to track multiple fuels and species, local ignition criteria, and the need to use local grid refinement over the burning material of interest.
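
    The exchange described above amounts to a co-simulation loop; the sketch below is illustrative only, with hypothetical class and method names rather than the paper's actual interfaces (Python).

      # Illustrative coupling loop between a fire field (CFD) model and a solid
      # material burning submodel; all names and numbers are placeholders.
      class FieldModelStub:
          def advance(self, dt):                  # gas-phase flow / heat-transfer step
              pass
          def local_environment(self):            # local fire environment at the solid surface
              return {"heat_flux_W_m2": 25e3, "gas_temp_K": 600.0}
          def add_sources(self, mass, momentum, energy):
              pass                                # solid response fed back as source terms

      class MaterialModelStub:
          def respond(self, env, dt):             # thermal response, decomposition, off-gassing
              return {"mass": 0.0, "momentum": 0.0, "energy": 0.0}   # placeholder values

      def couple(field_model, material_model, t_end, dt):
          t = 0.0
          while t < t_end:
              field_model.advance(dt)
              env = field_model.local_environment()
              resp = material_model.respond(env, dt)
              field_model.add_sources(resp["mass"], resp["momentum"], resp["energy"])
              t += dt

      couple(FieldModelStub(), MaterialModelStub(), t_end=10.0, dt=0.1)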

  6. Combustion modeling in a model combustor

    Institute of Scientific and Technical Information of China (English)

    L.Y.Jiang; I.Campbell; K.Su

    2007-01-01

    The flow-field of a propane-air diffusion flame combustor with interior and exterior conjugate heat transfers was numerically studied. Results obtained from four combustion models, combined with the re-normalization group (RNG) k-ε turbulence model, discrete ordinates radiation model and enhanced wall treatment, are presented and discussed. The results are compared with a comprehensive database obtained from a series of experimental measurements. The flow patterns and the recirculation zone length in the combustion chamber are accurately predicted, and the mean axial velocities are in fairly good agreement with the experimental data, particularly at downstream sections, for all four combustion models. The mean temperature profiles are captured fairly well by the eddy dissipation (EDS), probability density function (PDF), and laminar flamelet combustion models. However, the EDS-finite-rate combustion model fails to provide an acceptable temperature field. In general, the flamelet model illustrates little superiority over the PDF model, and to some extent the PDF model shows better performance than the EDS model.

  7. Comparing current cluster, massively parallel, and accelerated systems

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Kevin J [Los Alamos National Laboratory; Davis, Kei [Los Alamos National Laboratory; Hoisie, Adolfy [Los Alamos National Laboratory; Kerbyson, Darren J [Los Alamos National Laboratory; Pakin, Scott [Los Alamos National Laboratory; Lang, Mike [Los Alamos National Laboratory; Sancho Pitarch, Jose C [Los Alamos National Laboratory

    2010-01-01

    Currently there is large architectural diversity in high performance computing systems. They include 'commodity' cluster systems that optimize per-node performance for small jobs, massively parallel processors (MPPs) that optimize aggregate performance for large jobs, and accelerated systems that optimize both per-node and aggregate performance but only for applications custom-designed to take advantage of such systems. Because of these dissimilarities, meaningful comparisons of achievable performance are not straightforward. In this work we utilize a methodology that combines both empirical analysis and performance modeling to compare clusters (represented by a 4,352-core IB cluster), MPPs (represented by a 147,456-core BG/P), and accelerated systems (represented by the 129,600-core Roadrunner) across a workload of four applications. Strengths of our approach include the ability to compare architectures - as opposed to specific implementations of an architecture - to attribute each application's performance bottlenecks to characteristics unique to each system, and to explore performance scenarios in advance of their availability for measurement. Our analysis illustrates that application performance is essentially unrelated to relative peak performance but that application performance can be both predicted and explained using modeling.

  8. Modeling Aeolian Transport of Contaminated Sediments at Los Alamos National Laboratory, Technical Area 54, Area G: Sensitivities to Succession, Disturbance, and Future Climate

    Energy Technology Data Exchange (ETDEWEB)

    Whicker, Jeffrey J. [Los Alamos National Laboratory; Kirchner, Thomas B. [New Mexico State University; Breshears, David D. [University of Arizona; Field, Jason P. [University of Arizona

    2012-03-27

    The Technical Area 54 (TA-54) Area G disposal facility is used for the disposal of radioactive waste at Los Alamos National Laboratory (LANL). U.S. Department of Energy (DOE) Order 435.1 (DOE, 2001) requires that radioactive waste be managed in a manner that protects public health and safety and the environment. In compliance with that requirement, DOE field sites must prepare and maintain site-specific radiological performance assessments for facilities that receive waste after September 26, 1988. Sites are also required to conduct composite analyses for facilities that receive waste after this date; these analyses account for the cumulative impacts of all waste that has been (and will be) disposed of at the facilities and other sources of radioactive material that may interact with these facilities. LANL issued Revision 4 of the Area G performance assessment and composite analysis in 2008. In support of those analyses, vertical and horizontal sediment flux data were collected at two analog sites, each with different dominant vegetation characteristics, and used to estimate rates of vertical resuspension and wind erosion for Area G. The results of that investigation indicated that there was no net loss of soil at the disposal site due to wind erosion, and suggested minimal impacts of wind on the long-term performance of the facility. However, that study did not evaluate the potential for contaminant transport caused by the horizontal movement of soil particles over long time frames. Since that time, additional field data have been collected to estimate wind threshold velocities for initiating sediment transport due to saltation and rates of sediment transport once those thresholds are reached. Data such as these have been used in the development of the Vegetation Modified Transport (VMTran) model. This model is designed to estimate patterns and long-term rates of contaminant redistribution caused by winds at the site, taking into account the impacts of plant

  9. Regularized Structural Equation Modeling.

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J; McArdle, John J

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
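
    Read as a penalized estimation problem, the fit function implied by the abstract can be sketched as (notation assumed, not quoted from the paper)

      \[ F_{\mathrm{RegSEM}}(\theta) = F_{\mathrm{ML}}(\theta) + \lambda\,P(\theta), \qquad
         P(\theta) = \lVert \theta_{\mathrm{pen}} \rVert_1 \ \text{(lasso)} \quad\text{or}\quad
         \lVert \theta_{\mathrm{pen}} \rVert_2^2 \ \text{(ridge)}, \]

    where F_ML is the usual maximum-likelihood SEM discrepancy function, theta_pen the subset of parameters chosen for penalization, and lambda the tuning parameter controlling how strongly those parameters are shrunk toward zero.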

  10. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically activated sludge models – are introduced since these define...

  11. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise....... Efficient and good modeling practice therefore requires the use of a proper set of guidelines, thus grounding the modeling studies on a general and systematic framework. Last but not least, general limitations of WWTP models – more specifically, activated sludge models – are introduced since these define...

  12. ROCK PROPERTIES MODEL ANALYSIS MODEL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Clinton Lum

    2002-02-04

    The purpose of this Analysis and Model Report (AMR) is to document Rock Properties Model (RPM) 3.1 with regard to input data, model methods, assumptions, uncertainties and limitations of model results, and qualification status of the model. The report also documents the differences between the current and previous versions and validation of the model. The rock properties models are intended principally for use as input to numerical physical-process modeling, such as of ground-water flow and/or radionuclide transport. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. This work was conducted in accordance with the following planning documents: WA-0344, "3-D Rock Properties Modeling for FY 1998" (SNL 1997, WA-0358), "3-D Rock Properties Modeling for FY 1999" (SNL 1999), and the technical development plan, Rock Properties Model Version 3.1 (CRWMS M&O 1999c). Interim Change Notices (ICNs) 02 and 03 of this AMR were prepared as part of activities being conducted under the Technical Work Plan, TWP-NBS-GS-000003, "Technical Work Plan for the Integrated Site Model, Process Model Report, Revision 01" (CRWMS M&O 2000b). The purpose of ICN 03 is to record changes in data input status due to data qualification and verification activities. These work plans describe the scope, objectives, tasks, methodology, and implementing procedures for model construction. The constraints, caveats, and limitations associated with this model are discussed in the appropriate text sections that follow. The work scope for this activity consists of the following: (1) Conversion of the input data (laboratory measured porosity data, x-ray diffraction mineralogy, petrophysical calculations of bound water, and petrophysical calculations of porosity) for each borehole into stratigraphic coordinates; (2) Re-sampling and merging of data sets; (3

  13. Model Reduction of Nonlinear Fire Dynamics Models

    OpenAIRE

    Lattimer, Alan Martin

    2016-01-01

    Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order mod...

  14. Better models are more effectively connected models

    Science.gov (United States)

    Nunes, João Pedro; Bielders, Charles; Darboux, Frederic; Fiener, Peter; Finger, David; Turnbull-Lloyd, Laura; Wainwright, John

    2016-04-01

    The concept of hydrologic and geomorphologic connectivity describes the processes and pathways which link sources (e.g. rainfall, snow and ice melt, springs, eroded areas and barren lands) to accumulation areas (e.g. foot slopes, streams, aquifers, reservoirs), and the spatial variations thereof. There are many examples of hydrological and sediment connectivity on a watershed scale; in consequence, a process-based understanding of connectivity is crucial to help managers understand their systems and adopt adequate measures for flood prevention, pollution mitigation and soil protection, among others. Modelling is often used as a tool to understand and predict fluxes within a catchment by complementing observations with model results. Catchment models should therefore be able to reproduce the linkages, and thus the connectivity of water and sediment fluxes within the systems under simulation. In modelling, a high level of spatial and temporal detail is desirable to ensure taking into account a maximum number of components, which then enables connectivity to emerge from the simulated structures and functions. However, computational constraints and, in many cases, lack of data prevent the representation of all relevant processes and spatial/temporal variability in most models. In most cases, therefore, the level of detail selected for modelling is too coarse to represent the system in a way in which connectivity can emerge; a problem which can be circumvented by representing fine-scale structures and processes within coarser scale models using a variety of approaches. This poster focuses on the results of ongoing discussions on modelling connectivity held during several workshops within COST Action Connecteur. It assesses the current state of the art of incorporating the concept of connectivity in hydrological and sediment models, as well as the attitudes of modellers towards this issue. The discussion will focus on the different approaches through which connectivity

  15. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    on the ease with which prior knowledge can be incorporated. It is interesting to note that researchers in Control Theory, Neural Networks, Statistics, Artificial Intelligence and Fuzzy Logic have more or less independently developed very similar modelling methods, calling them Local Model Networks, Operating...... of introduction of existing knowledge, as well as the ease of model interpretation. This book attempts to outline much of the common ground between the various approaches, encouraging the transfer of ideas. Recent progress in algorithms and analysis is presented, with constructive algorithms for automated model...

  16. Integrity modelling of tropospheric delay models

    Science.gov (United States)

    Rózsa, Szabolcs; Bastiaan Ober, Pieter; Mile, Máté; Ambrus, Bence; Juni, Ildikó

    2017-04-01

    The effect of the neutral atmosphere on signal propagation is routinely estimated by various tropospheric delay models in satellite navigation. Although numerous studies can be found in the literature investigating the accuracy of these models, for safety-of-life applications it is crucial to study and model the worst-case performance of these models using very low recurrence frequencies. The main objective of the INTegrity of TROpospheric models (INTRO) project funded by the ESA PECS programme is to establish a model (or models) of the residual error of existing tropospheric delay models for safety-of-life applications. Such models are required to overbound rare tropospheric delays and should thus include the tails of the error distributions. Their use should lead to safe error bounds on the user position and should allow computation of protection levels for the horizontal and vertical position errors. The current tropospheric model from the RTCA SBAS Minimal Operational Standards has an associated residual error that equals 0.12 meters in the vertical direction. This value is derived by simply extrapolating the observed distribution of the residuals into the tail (where no data is present) and then taking the point where the cumulative distribution reaches an exceedance level of 10^-7. While the resulting standard deviation is much higher than the estimated standard deviation that best fits the data (0.05 meters), it surely is conservative for most applications. In the context of the INTRO project some widely used and newly developed tropospheric delay models (e.g. RTCA MOPS, ESA GALTROPO and GPT2W) were tested using 16 years of daily ERA-INTERIM Reanalysis numerical weather model data and the raytracing technique. The results showed that the performance of some of the widely applied models has a clear seasonal dependency and is also affected by geographical position. In order to provide a more realistic, but still conservative estimation of the residual

  17. Non-preconditioned conjugate gradient on cell and FPGA-based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-03-10

    This work presents a detailed implementation of a double-precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture in conjunction with x86 Opteron processors from AMD. We implement a common Conjugate Gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations wall-clock time is measured, including all transfer overhead and compute timings.
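
    For reference, the mathematical kernel being ported in these studies is the textbook non-preconditioned CG iteration; a minimal double-precision NumPy sketch is given below (the Cell SPE offload, host/accelerator transfers and blocking that dominate the hybrid implementations are deliberately not shown).

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
          # Solve A x = b for a symmetric positive-definite matrix A (dense sketch).
          x = np.zeros_like(b)
          r = b - A @ x                    # initial residual
          p = r.copy()                     # initial search direction
          rs_old = r @ r
          for _ in range(max_iter or b.size):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)    # step length along p
              x = x + alpha * p
              r = r - alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:    # converged on residual norm
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x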

  18. Non-preconditioned conjugate gradient on cell and FPGA based hybrid supercomputer nodes

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, David H [Los Alamos National Laboratory; Dubois, Andrew J [Los Alamos National Laboratory; Boorman, Thomas M [Los Alamos National Laboratory; Connor, Carolyn M [Los Alamos National Laboratory

    2009-01-01

    This work presents a detailed implementation of a double-precision, non-preconditioned Conjugate Gradient algorithm on a Roadrunner heterogeneous supercomputer node. These nodes utilize the Cell Broadband Engine Architecture in conjunction with x86 Opteron processors from AMD. We implement a common Conjugate Gradient algorithm on a variety of systems to compare and contrast performance. Implementation results are presented for the Roadrunner hybrid supercomputer, the SRC Computers, Inc. MAPStation SRC-6 FPGA-enhanced hybrid supercomputer, and an AMD Opteron-only system. In all hybrid implementations wall-clock time is measured, including all transfer overhead and compute timings.

  19. Numerical Modelling of Streams

    DEFF Research Database (Denmark)

    Vestergaard, Kristian

    In recent years there has been a sharp increase in the use of numerical water quality models. Numeric water quality modeling can be divided into three steps: Hydrodynamic modeling for the determination of stream flow and water levels. Modelling of transport and dispersion of a conservative...

  20. Graphical Models with R

    DEFF Research Database (Denmark)

    Højsgaard, Søren; Edwards, David; Lauritzen, Steffen

    , the book provides examples of how more advanced aspects of graphical modeling can be represented and handled within R. Topics covered in the seven chapters include graphical models for contingency tables, Gaussian and mixed graphical models, Bayesian networks and modeling high dimensional data...

  1. Dynamic Latent Classification Model

    DEFF Research Database (Denmark)

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...... in the process as well as modeling dependences between attributes....

  2. HRM: HII Region Models

    Science.gov (United States)

    Wenger, Trey V.; Kepley, Amanda K.; Balser, Dana S.

    2017-07-01

    HII Region Models fits HII region models to observed radio recombination line and radio continuum data. The algorithm includes the calculations of departure coefficients to correct for non-LTE effects. HII Region Models has been used to model star formation in the nucleus of IC 342.

  3. Multilevel IRT Model Assessment

    NARCIS (Netherlands)

    Fox, Jean-Paul; Ark, L. Andries; Croon, Marcel A.

    2005-01-01

    Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT

  4. Models for Dynamic Applications

    DEFF Research Database (Denmark)

    2011-01-01

    be applied to formulate, analyse and solve these dynamic problems and how, in the case of the fuel cell problem, the model consists of coupled meso- and micro-scale models. It is shown how data flows are handled between the models and how the solution is obtained within the modelling environment....

  5. Multivariate GARCH models

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...... in which several multivariate GARCH models are fitted to the same data set and the results compared....
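
    As one concrete member of the class such a review covers, the BEKK(1,1) specification (shown here only as an illustration, not as the article's focus) models the conditional covariance matrix H_t of the innovation vector as

      \[ \boldsymbol{\varepsilon}_t = \mathbf{H}_t^{1/2}\mathbf{z}_t, \qquad
         \mathbf{H}_t = \mathbf{C}\mathbf{C}' + \mathbf{A}'\boldsymbol{\varepsilon}_{t-1}\boldsymbol{\varepsilon}_{t-1}'\mathbf{A} + \mathbf{B}'\mathbf{H}_{t-1}\mathbf{B}, \]

    where epsilon_t is the vector of return innovations, z_t is i.i.d. with identity covariance, C is lower triangular, and A, B are square parameter matrices; the quadratic form keeps H_t positive definite by construction.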

  6. The Model Confidence Set

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Lunde, Asger; Nason, James M.

    The paper introduces the model confidence set (MCS) and applies it to the selection of models. A MCS is a set of models that is constructed such that it will contain the best model with a given level of confidence. The MCS is in this sense analogous to a confidence interval for a parameter. The M...

  7. Modelling Railway Interlocking Systems

    DEFF Research Database (Denmark)

    Lindegaard, Morten Peter; Viuf, P.; Haxthausen, Anne Elisabeth

    2000-01-01

    In this report we present a model of interlocking systems, and describe how the model may be validated by simulation. Station topologies are modelled by graphs in which the nodes denote track segments, and the edges denote connectivity for train traffic. Points and signals are modelled by annotatio...

  8. AIDS Epidemiological models

    Science.gov (United States)

    Rahmani, Fouad Lazhar

    2010-11-01

    The aim of this paper is to present mathematical modelling of the spread of infection in the context of the transmission of the human immunodeficiency virus (HIV) and the acquired immune deficiency syndrome (AIDS). These models are based in part on the models suggested in the field of AIDS mathematical modelling as reported by Isham [6].

  9. Multivariate GARCH models

    DEFF Research Database (Denmark)

    Silvennoinen, Annastiina; Teräsvirta, Timo

    This article contains a review of multivariate GARCH models. Most common GARCH models are presented and their properties considered. This also includes nonparametric and semiparametric models. Existing specification and misspecification tests are discussed. Finally, there is an empirical example...... in which several multivariate GARCH models are fitted to the same data set and the results compared....

  10. Multilevel IRT Model Assessment

    NARCIS (Netherlands)

    Fox, Gerardus J.A.; Ark, L. Andries; Croon, Marcel A.

    2005-01-01

    Modelling complex cognitive and psychological outcomes in, for example, educational assessment led to the development of generalized item response theory (IRT) models. A class of models was developed to solve practical and challenging educational problems by generalizing the basic IRT models. An IRT

  11. Biomass Scenario Model

    Energy Technology Data Exchange (ETDEWEB)

    2015-09-01

    The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain which explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability, physical/technological/economic constraints, behavior, and policy. The model uses a system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.

  12. Lumped-parameter models

    Energy Technology Data Exchange (ETDEWEB)

    Ibsen, Lars Bo; Liingaard, M.

    2006-12-15

    A lumped-parameter model represents the frequency dependent soil-structure interaction of a massless foundation placed on or embedded into an unbounded soil domain. In this technical report the steps of establishing a lumped-parameter model are presented. Following sections are included in this report: Static and dynamic formulation, Simple lumped-parameter models and Advanced lumped-parameter models. (au)

  13. Plant development models

    NARCIS (Netherlands)

    Chuine, I.; Garcia de Cortazar-Atauri, I.; Kramer, K.; Hänninen, H.

    2013-01-01

    In this chapter we provide a brief overview of plant phenology modeling, focusing on mechanistic phenological models. After a brief history of plant phenology modeling, we present the different models which have been described in the literature so far and highlight the main differences between them,

  14. Generic Market Models

    NARCIS (Netherlands)

    R. Pietersz (Raoul); M. van Regenmortel

    2005-01-01

    Currently, there are two market models for valuation and risk management of interest rate derivatives, the LIBOR and swap market models. In this paper, we introduce arbitrage-free constant maturity swap (CMS) market models and generic market models featuring forward rates that span perio

  15. A Model for Conversation

    DEFF Research Database (Denmark)

    Ayres, Phil

    2012-01-01

    This essay discusses models. It examines what models are, the roles models perform and suggests various intentions that underlie their construction and use. It discusses how models act as a conversational partner, and how they support various forms of conversation within the conversational activity...... of design. Three distinctions are drawn through which to develop this discussion of models in an architectural context. An examination of these distinctions serves to nuance particular characteristics and roles of models, the modelling activity itself and those engaged in it....

  16. Talk about toy models

    Science.gov (United States)

    Luczak, Joshua

    2017-02-01

    Scientific models are frequently discussed in philosophy of science. A great deal of the discussion is centred on approximation, idealisation, and on how these models achieve their representational function. Despite the importance, distinct nature, and high presence of toy models, they have received little attention from philosophers. This paper hopes to remedy this situation. It aims to elevate the status of toy models: by distinguishing them from approximations and idealisations, by highlighting and elaborating on several ways the Kac ring, a simple statistical mechanical model, is used as a toy model, and by explaining why toy models can be used to successfully carry out important work without performing a representational function.

  17. Latent classification models

    DEFF Research Database (Denmark)

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

    One of the simplest, and yet most consistently well-performing, sets of classifiers is the naive Bayes (NB) models. These models rely on two assumptions: (i) all the attributes used to describe an instance are conditionally independent given the class of that instance, and (ii) all attributes follow a specific...... parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the NB model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  18. Wastewater treatment models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2011-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...

  19. Wastewater Treatment Models

    DEFF Research Database (Denmark)

    Gernaey, Krist; Sin, Gürkan

    2008-01-01

    The state-of-the-art level reached in modeling wastewater treatment plants (WWTPs) is reported. For suspended growth systems, WWTP models have evolved from simple description of biological removal of organic carbon and nitrogen in aeration tanks (ASM1 in 1987) to more advanced levels including...... the practice of WWTP modeling by linking the wastewater treatment line with the sludge handling line in one modeling platform. Application of WWTP models is currently rather time consuming and thus expensive due to the high model complexity, and requires a great deal of process knowledge and modeling expertise...

  20. The Hospitable Meal Model

    DEFF Research Database (Denmark)

    Justesen, Lise; Overgaard, Svend Skafte

    2017-01-01

    This article presents an analytical model that aims to conceptualize how meal experiences are framed when taking into account a dynamic understanding of hospitality: the meal model is named The Hospitable Meal Model. The idea behind The Hospitable Meal Model is to present a conceptual model...... that can serve as a frame for developing hospitable meal competencies among professionals working within the area of institutional foodservices as well as a conceptual model for analysing meal experiences. The Hospitable Meal Model transcends and transforms existing meal models by presenting a more open......-ended approach towards meal experiences. The underlying purpose of The Hospitable Meal Model is to provide the basis for creating value for the individuals involved in institutional meal services. The Hospitable Meal Model was developed on the basis of an empirical study on hospital meal experiences explored...

  1. Protein Models Comparator

    CERN Document Server

    Widera, Paweł

    2011-01-01

    The process of comparison of computer generated protein structural models is an important element of protein structure prediction. It has many uses including model quality evaluation, selection of the final models from a large set of candidates or optimisation of parameters of energy functions used in template free modelling and refinement. Although many protein comparison methods are available online on numerous web servers, their ability to handle a large scale model comparison is often very limited. Most of the servers offer only a single pairwise structural comparison, and they usually do not provide a model-specific comparison with a fixed alignment between the models. To bridge the gap between the protein and model structure comparison we have developed the Protein Models Comparator (pm-cmp). To be able to deliver the scalability on demand and handle large comparison experiments the pm-cmp was implemented "in the cloud". Protein Models Comparator is a scalable web application for a fast distributed comp...

  2. Nonuniform Markov models

    CERN Document Server

    Ristad, E S; Ristad, Eric Sven; Thomas, Robert G.

    1996-01-01

    A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform Markov model is that it makes predictions of varying lengths using contexts of varying lengths. Experiments on the Wall Street Journal reveal that the nonuniform model performs slightly better than the classic interpolated Markov model. This result is somewhat remarkable because both models contain identical numbers of parameters whose values are estimated in a similar manner. The only difference between the two models is how they combine the statistics of longer and shorter strings. Keywords: nonuniform Markov model, interpolated Markov m...
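
    The interpolated baseline the report compares against can be sketched as a fixed-weight mixture of conditional estimates from contexts of length 0 through k; the Python below is illustrative only (the nonuniform model summarized above instead lets the context length vary with the prediction).

      from collections import defaultdict

      class InterpolatedMarkovModel:
          # Mixes maximum-likelihood estimates from contexts of length 0..order
          # using fixed interpolation weights (lambdas are assumed to sum to 1).
          def __init__(self, order=2, lambdas=(0.2, 0.3, 0.5)):
              assert len(lambdas) == order + 1
              self.order, self.lambdas = order, lambdas
              self.counts = [defaultdict(lambda: defaultdict(int)) for _ in range(order + 1)]

          def train(self, symbols):
              for i, s in enumerate(symbols):
                  for k in range(self.order + 1):
                      if i - k >= 0:                       # full-length context available
                          ctx = tuple(symbols[i - k:i])
                          self.counts[k][ctx][s] += 1

          def prob(self, context, symbol):
              p = 0.0
              for k in range(self.order + 1):
                  if len(context) < k:
                      continue
                  ctx = tuple(context[len(context) - k:])
                  total = sum(self.counts[k][ctx].values())
                  if total:
                      p += self.lambdas[k] * self.counts[k][ctx][symbol] / total
              return p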

  3. Lumped Thermal Household Model

    DEFF Research Database (Denmark)

    Biegel, Benjamin; Andersen, Palle; Stoustrup, Jakob

    2013-01-01

    a lumped model approach as an alternative to the individual models. In the lumped model, the portfolio is seen as baseline consumption superimposed with an ideal storage of limited power and energy capacity. The benefit of such a lumped model is that the computational effort of flexibility optimization......In this paper we discuss two different approaches to model the flexible power consumption of heat pump heated households: individual household modeling and lumped modeling. We illustrate that a benefit of individual modeling is that we can overview and optimize the complete flexibility of a heat...... pump portfolio. Following, we illustrate two disadvantages of individual models, namely that it requires much computational effort to optimize over a large portfolio, and second that it is difficult to accurately model the houses in certain time periods due to local disturbances. Finally, we propose...
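
    Read literally, the lumped portfolio model described above can be written as baseline consumption plus an ideal power- and energy-constrained storage; the symbols below are illustrative assumptions, not taken from the paper:

      \[ P^{\mathrm{total}}_t = P^{\mathrm{base}}_t + p_t, \qquad
         e_{t+1} = e_t + \Delta t\,p_t, \qquad
         \lvert p_t \rvert \le \bar{P}, \qquad 0 \le e_t \le \bar{E}, \]

    where p_t is the flexible charge/discharge power of the aggregated storage, e_t its energy state, and \bar{P}, \bar{E} the portfolio's power and energy capacity limits over which the flexibility optimization runs.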

  4. Calibrated Properties Model

    Energy Technology Data Exchange (ETDEWEB)

    C. Ahlers; H. Liu

    2000-03-12

    The purpose of this Analysis/Model Report (AMR) is to document the Calibrated Properties Model that provides calibrated parameter sets for unsaturated zone (UZ) flow and transport process models for the Yucca Mountain Site Characterization Project (YMP). This work was performed in accordance with the "AMR Development Plan for U0035 Calibrated Properties Model REV00". These calibrated property sets include matrix and fracture parameters for the UZ Flow and Transport Model (UZ Model), drift seepage models, drift-scale and mountain-scale coupled-processes models, and Total System Performance Assessment (TSPA) models, as well as Performance Assessment (PA) and other participating national laboratories and government agencies. These process models provide the necessary framework to test conceptual hypotheses of flow and transport at different scales and predict flow and transport behavior under a variety of climatic and thermal-loading conditions.

  5. Introduction to Adjoint Models

    Science.gov (United States)

    Errico, Ronald M.

    2015-01-01

    In this lecture, some fundamentals of adjoint models will be described. This includes a basic derivation of tangent linear and corresponding adjoint models from a parent nonlinear model, the interpretation of adjoint-derived sensitivity fields, a description of methods of automatic differentiation, and the use of adjoint models to solve various optimization problems, including singular vectors. Concluding remarks will attempt to correct common misconceptions about adjoint models and their utilization.
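
    As a concrete illustration of the tangent linear / adjoint relationship described in the lecture summary, the toy two-variable model below (our own example, not taken from the lecture) shows that the adjoint is the transpose of the tangent linear operator, verified through the identity <L dx, y> = <dx, L^T y>.

        import numpy as np

        def model(x):
            """Toy nonlinear 'parent' model."""
            return np.array([x[0] * x[1], np.sin(x[1])])

        def tangent_linear(x, dx):
            """Jacobian of the model at x applied to a perturbation dx."""
            J = np.array([[x[1], x[0]],
                          [0.0, np.cos(x[1])]])
            return J @ dx

        def adjoint(x, y):
            """Transpose of the Jacobian applied to y (backward sensitivity propagation)."""
            J = np.array([[x[1], x[0]],
                          [0.0, np.cos(x[1])]])
            return J.T @ y

        x = np.array([1.0, 0.5])
        dx = np.array([1e-3, -2e-3])
        y = np.array([0.7, -0.2])
        # adjoint identity: <L dx, y> should equal <dx, L^T y>
        print(np.dot(tangent_linear(x, dx), y), np.dot(dx, adjoint(x, y)))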

  6. Modeling cholera outbreaks

    OpenAIRE

    Chao, Dennis L.; Ira M Longini; Morris, J. Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating mo...
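
    The abstract does not fix a particular model, but much of the cholera-modelling literature extends an SIR model with a water compartment. The sketch below is such an SIWR-style toy with invented parameter values, integrated with a simple Euler step; it is meant only to show the structure, not to reproduce any model discussed by the authors.

        def siwr_step(S, I, W, R, beta_i, beta_w, gamma, xi, delta, dt):
            """One Euler step of a simple SIR-with-water-compartment cholera model."""
            new_inf = (beta_i * I + beta_w * W) * S
            dS = -new_inf
            dI = new_inf - gamma * I
            dW = xi * I - delta * W      # shedding into, and decay of, the water reservoir
            dR = gamma * I
            return S + dS * dt, I + dI * dt, W + dW * dt, R + dR * dt

        # hypothetical parameters and initial conditions (fractions of the population)
        S, I, W, R = 0.999, 0.001, 0.0, 0.0
        for _ in range(200):
            S, I, W, R = siwr_step(S, I, W, R,
                                   beta_i=0.3, beta_w=0.4, gamma=0.2,
                                   xi=0.1, delta=0.3, dt=0.5)
        print(round(R, 3))               # cumulative recovered fraction after the run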

  7. Business Model Visualization

    OpenAIRE

    Zagorsek, Branislav

    2013-01-01

    A business model describes the company's most important activities, its proposed value, and the compensation received for that value. Business model visualization makes it possible to capture and describe the most important components of the business model simply and systematically, while standardization of the concept allows comparison between companies. There are several possible ways to visualize the model. The aim of this paper is to describe the options for business model visualization and business mod...

  8. Diffeomorphic Statistical Deformation Models

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Hansen, Mads/Fogtman; Larsen, Rasmus

    2007-01-01

    In this paper we present a new method for constructing diffeomorphic statistical deformation models in arbitrary dimensional images with a nonlinear generative model and a linear parameter space. Our deformation model is a modified version of the diffeomorphic model introduced by Cootes et al. Th...... with ground truth in form of manual expert annotations, and compared to Cootes's model. We anticipate applications in unconstrained diffeomorphic synthesis of images, e.g. for tracking, segmentation, registration or classification purposes....

  9. Dimension of linear models

    DEFF Research Database (Denmark)

    Høskuldsson, Agnar

    1996-01-01

    Determination of the proper dimension of a given linear model is one of the most important tasks in the applied modeling work. We consider here eight criteria that can be used to determine the dimension of the model, or equivalently, the number of components to use in the model. Four...... the basic problems in determining the dimension of linear models. Then each of the eight measures are treated. The results are illustrated by examples....

  10. Modeling cholera outbreaks

    OpenAIRE

    Dennis L Chao; Longini, Ira M.; Morris, J. Glenn

    2014-01-01

    Mathematical modeling can be a valuable tool for studying infectious disease outbreak dynamics and simulating the effects of possible interventions. Here, we describe approaches to modeling cholera outbreaks and how models have been applied to explore intervention strategies, particularly in Haiti. Mathematical models can play an important role in formulating and evaluating complex cholera outbreak response options. Major challenges to cholera modeling are insufficient data for calibrating mo...

  11. Multiple Model Approaches to Modelling and Control,

    DEFF Research Database (Denmark)

    Why Multiple Models? This book presents a variety of approaches which produce complex models or controllers by piecing together a number of simpler subsystems. This divide-and-conquer strategy is a long-standing and general way of coping with complexity in engineering systems, nature and human probl...

  12. Model Checking of Boolean Process Models

    CERN Document Server

    Schneider, Christoph

    2011-01-01

    In the field of Business Process Management, formal models for the control flow of business processes have been designed for more than 15 years. Which methods are best suited to verify the bulk of these models? The first step is to select a formal language which fixes the semantics of the models. We adopt the language of Boolean systems as the reference language for Boolean process models. Boolean systems form a simple subclass of coloured Petri nets. Their characteristics are low token counts, to model explicitly states with a subsequent skipping of activations, and arbitrary logical rules of type AND, XOR, OR etc. to model the split and join of the control flow. We apply model checking as a verification method for the safeness and liveness of Boolean systems. Model checking of Boolean systems uses the elementary theory of propositional logic; no modal operators are needed. Our verification builds on a finite complete prefix of a certain T-system attached to the Boolean system. It splits the processes of the Boolean sy...
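
    A brute-force flavour of such verification can be sketched on a toy Boolean system (our own three-place example with an XOR split followed by an AND join; not taken from the paper, which works on finite complete prefixes rather than full state-space enumeration).

        # Each place holds 0/1 (low token counts); transitions fire according to
        # logical rules such as XOR splits and AND joins. Hypothetical example.

        def successors(state):
            a, b, c = state
            nxt = set()
            if a and not b and not c:        # XOR split: a -> b  or  a -> c
                nxt.add((0, 1, 0))
                nxt.add((0, 0, 1))
            if b and c:                      # AND join: b, c -> a
                nxt.add((1, 0, 0))
            return nxt

        def reachable(initial):
            """Brute-force exploration of the Boolean state space."""
            seen, frontier = {initial}, [initial]
            while frontier:
                for s in successors(frontier.pop()):
                    if s not in seen:
                        seen.add(s)
                        frontier.append(s)
            return seen

        states = reachable((1, 0, 0))
        # The AND join after an XOR split can never fire (b and c are never both
        # marked): the kind of control-flow defect such a liveness check exposes.
        print(sorted(states), any(b and c for _, b, c in states))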

  13. Pavement Aging Model by Response Surface Modeling

    Directory of Open Access Journals (Sweden)

    Manzano-Ramírez A.

    2011-10-01

    In this work, surface course aging was modeled by Response Surface Methodology (RSM). The Marshall specimens were placed in a conventional oven for time and temperature conditions established on the basis of the environmental factors of the region where the surface course is constructed, using AC-20 from the Ing. Antonio M. Amor refinery. Volatilized material (VM), load resistance increment (ΔL) and flow resistance increment (ΔF) models were developed by the RSM. Cylindrical specimens with real aging were extracted from the surface course pilot to evaluate the error of the models. The VM model was adequate; in contrast, the ΔL and ΔF models were only almost adequate, with an error of 20 %, which was associated with the other environmental factors that were not considered at the beginning of the research.
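
    A response surface of the kind described is typically a second-order polynomial in the factors (here aging time and temperature) fitted by least squares. The design points and responses below are invented for illustration and are not the paper's measurements.

        import numpy as np

        # hypothetical (time [h], temperature [deg C]) settings and a measured response
        X = np.array([[24, 60], [24, 80], [48, 60], [48, 80],
                      [36, 70], [36, 60], [36, 80]], dtype=float)
        y = np.array([0.8, 1.4, 1.1, 2.0, 1.2, 0.9, 1.6])   # e.g. volatilized material, %

        def quadratic_design(X):
            """Second-order response-surface design matrix: 1, x1, x2, x1*x2, x1^2, x2^2."""
            x1, x2 = X[:, 0], X[:, 1]
            return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

        coeffs, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
        predicted = quadratic_design(X) @ coeffs
        print(coeffs.round(4), predicted.round(2))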

  14. Software quality and process improvement in scientific simulation codes

    Energy Technology Data Exchange (ETDEWEB)

    Ambrosiano, J.; Webster, R. [Los Alamos National Lab., NM (United States)

    1997-11-01

    This report contains viewgraphs on the quest to develop better simulation code quality through process modeling and improvement. This study is based on the experience of the authors and interviews with ten subjects chosen from simulation code development teams at LANL. This study is descriptive rather than scientific.

  15. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  16. PAGOSA physics manual

    Energy Technology Data Exchange (ETDEWEB)

    Weseloh, Wayne N.; Clancy, Sean P.; Painter, James W.

    2010-08-01

    PAGOSA is a computational fluid dynamics computer program developed at Los Alamos National Laboratory (LANL) for the study of high-speed compressible flow and high-rate material deformation. PAGOSA is a three-dimensional Eulerian finite difference code, solving problems with a wide variety of equations of state (EOSs), material strength, and explosive modeling options.

  17. MCNP{trademark} Software Quality Assurance plan

    Energy Technology Data Exchange (ETDEWEB)

    Abhold, H.M.; Hendricks, J.S.

    1996-04-01

    MCNP is a computer code that models the interaction of radiation with matter. MCNP is developed and maintained by the Transport Methods Group (XTM) of the Los Alamos National Laboratory (LANL). This plan describes the Software Quality Assurance (SQA) program applied to the code. The SQA program is consistent with the requirements of IEEE-730.1 and the guiding principles of ISO 900.

  18. An Innovative Network to Improve Sea Ice Prediction in a Changing Arctic

    Science.gov (United States)

    2014-09-30

    ...Hunke (LANL, model improvements), Lawrence Hamilton (UNH, interfacing with stakeholders), Walt Meier (NASA, satellite observations), and Helen Wiggins...

  19. Adapting wave-front algorithms to efficiently utilize systems with deep communication hierarchies

    Energy Technology Data Exchange (ETDEWEB)

    Kerbyson, Darren J [Los Alamos National Laboratory; Lang, Michael [Los Alamos National Laboratory; Pakin, Scott [Los Alamos National Laboratory

    2009-01-01

    Large-scale systems increasingly exhibit a differential between intra-chip and inter-chip communication performance. Processor-cores on the same socket are able to communicate at lower latencies, and with higher bandwidths, than cores on different sockets either within the same node or between nodes. A key challenge is to efficiently use this communication hierarchy and hence optimize performance. We consider here the class of applications that contain wave-front processing. In these applications data can only be processed after their upstream neighbors have been processed. Similar dependencies result between processors in which communication is required to pass boundary data downstream and whose cost is typically impacted by the slowest communication channel in use. In this work we develop a novel hierarchical wave-front approach that reduces the use of slower communications in the hierarchy but at the cost of additional computation and higher use of on-chip communications. This tradeoff is explored using a performance model and an implementation on the Petascale Roadrunner system demonstrates a 27% performance improvement at full system-scale on a kernel application. The approach is generally applicable to large-scale multi-core and accelerated systems where a differential in system communication performance exists.
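
    The basic wave-front ordering that the hierarchical approach builds on can be sketched as follows; the grid size and update rule are arbitrary placeholders, and the sketch ignores the parallel decomposition and communication hierarchy that the report actually studies.

        import numpy as np

        def wavefront_sweep(n, m, update):
            """Process an n x m grid in wave-front order: cell (i, j) is computed only
            after its upstream neighbours (i-1, j) and (i, j-1)."""
            grid = np.zeros((n, m))
            for d in range(n + m - 1):                     # anti-diagonals form the wave front
                for i in range(max(0, d - m + 1), min(n, d + 1)):
                    j = d - i
                    up = grid[i - 1, j] if i > 0 else 0.0
                    left = grid[i, j - 1] if j > 0 else 0.0
                    grid[i, j] = update(i, j, up, left)
            return grid

        # toy update rule: each cell accumulates its upstream values plus one
        print(wavefront_sweep(4, 5, lambda i, j, up, left: up + left + 1.0))

    Cells on the same anti-diagonal are mutually independent, which is what allows them to be distributed across cores; the hierarchical variant described in the abstract trades extra on-chip work for fewer of the slow inter-node exchanges.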

  20. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  1. Product and Process Modelling

    DEFF Research Database (Denmark)

    Cameron, Ian T.; Gani, Rafiqul

    This book covers the area of product and process modelling via a case study approach. It addresses a wide range of modelling applications with emphasis on modelling methodology and the subsequent in-depth analysis of mathematical models to gain insight via structural aspects of the models....... These approaches are put into the context of life cycle modelling, where multiscale and multiform modelling is increasingly prevalent in the 21st century. The book commences with a discussion of modern product and process modelling theory and practice followed by a series of case studies drawn from a variety...... to biotechnology applications, food, polymer and human health application areas. The book highlights the important nature of modern product and process modelling in the decision making processes across the life cycle. As such it provides an important resource for students, researchers and industrial practitioners....

  2. Practical Marginalized Multilevel Models.

    Science.gov (United States)

    Griswold, Michael E; Swihart, Bruce J; Caffo, Brian S; Zeger, Scott L

    2013-01-01

    Clustered data analysis is characterized by the need to describe both systematic variation in a mean model and cluster-dependent random variation in an association model. Marginalized multilevel models (MMMs) embrace the robustness and interpretations of a marginal mean model, while retaining the likelihood inference capabilities and flexible dependence structures of a conditional association model. Although there has been increasing recognition of the attractiveness of marginalized multilevel models, there has been a gap in their practical application arising from a lack of readily available estimation procedures. We extend the marginalized multilevel model to allow for nonlinear functions in both the mean and association aspects. We then formulate marginal models through conditional specifications to facilitate estimation with mixed model computational solutions already in place. We illustrate the MMM and approximate MMM approaches on a cerebrovascular deficiency crossover trial using SAS and an epidemiological study on race and visual impairment using R. Datasets, SAS and R code are included as supplemental materials.

  3. Modelling Foundations and Applications

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 8th European Conference on Modelling Foundations and Applications, held in Kgs. Lyngby, Denmark, in July 2012. The 20 revised full foundations track papers and 10 revised full applications track papers presented were carefully reviewed...... and selected from 81 submissions. Papers on all aspects of MDE were received, including topics such as architectural modelling and product lines, code generation, domain-specific modeling, metamodeling, model analysis and verification, model management, model transformation and simulation. The breadth of topics...

  4. On Communication Models

    Institute of Scientific and Technical Information of China (English)

    蒋娜; 谢有琪

    2012-01-01

    With the development of human society, the social sphere has expanded beyond a single community, to the extent that the world as a whole can be regarded as one community. Communication therefore plays an increasingly important role in our daily life. A communication model, or its definition, is consequently not so much a definition as a guide to communication. However, some existing communication models are no longer as practical as they once were. This paper makes an overall comparison of three communication models, the Coded Model, the Gable Communication Model and the Ostensive Inferential Model, to see how they help people comprehend verbal and non-verbal communication.

  5. Modeling worldwide highway networks

    Science.gov (United States)

    Villas Boas, Paulino R.; Rodrigues, Francisco A.; da F. Costa, Luciano

    2009-12-01

    This Letter addresses the problem of modeling the highway systems of different countries by using complex networks formalism. More specifically, we compare two traditional geographical models with a modified geometrical network model where paths, rather than edges, are incorporated at each step between the origin and the destination vertices. Optimal configurations of parameters are obtained for each model and used for the comparison. The highway networks of Australia, Brazil, India, and Romania are considered and shown to be properly modeled by the modified geographical model.
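
    A toy version of the "paths rather than edges" growth rule might look like the following; the coordinates, hop count and routing heuristic are all assumptions made for illustration and do not reproduce the authors' model.

        import random
        import math

        def geographical_path_model(n_vertices, n_paths, hops, seed=0):
            """Toy network growth: at each step add a path (a chain of intermediate
            vertices) between a random origin and destination, rather than one edge."""
            rng = random.Random(seed)
            pos = [(rng.random(), rng.random()) for _ in range(n_vertices)]
            edges = set()
            for _ in range(n_paths):
                origin, dest = rng.sample(range(n_vertices), 2)
                # order intermediate stops so the route moves toward the destination
                stops = sorted(rng.sample(range(n_vertices), hops),
                               key=lambda v: math.dist(pos[v], pos[dest]), reverse=True)
                route = [origin] + stops + [dest]
                for a, b in zip(route, route[1:]):
                    if a != b:
                        edges.add(frozenset((a, b)))
            return pos, edges

        pos, edges = geographical_path_model(n_vertices=50, n_paths=20, hops=3)
        print(len(edges))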

  6. THE IMPROVED XINANJIANG MODEL

    Institute of Scientific and Technical Information of China (English)

    LI Zhi-jia; YAO Cheng; KONG Xiang-guang

    2005-01-01

    To improve the Xinanjiang model, runoff generation from infiltration excess is added to the model, together with another six parameters. In principle, the improved Xinanjiang model can be used to simulate runoff in humid, semi-humid and also semi-arid regions. The application to the Yi River shows that the improved Xinanjiang model can forecast discharge with higher accuracy and satisfy practical requirements. It also shows that the improved model is reasonable.

  7. Microsoft tabular modeling cookbook

    CERN Document Server

    Braak, Paul te

    2013-01-01

    This book follows a cookbook style with recipes explaining the steps for developing analytic data using Business Intelligence Semantic Models. This book is designed for developers who wish to develop powerful and dynamic models for users as well as those who are responsible for the administration of models in corporate environments. It is also targeted at analysts and users of Excel who wish to advance their knowledge of Excel through the development of tabular models or who wish to analyze data through tabular modeling techniques. We assume no prior knowledge of tabular modeling.

  8. Five models of capitalism

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Bresser-Pereira

    2012-03-01

    Besides analyzing capitalist societies historically and thinking of them in terms of phases or stages, we may compare different models or varieties of capitalism. In this paper I survey the literature on this subject, and distinguish the classification that has a production or business approach from those that use a mainly political criterion. I identify five forms of capitalism: among the rich countries, the liberal democratic or Anglo-Saxon model, the social or European model, and the endogenous social integration or Japanese model; among developing countries, I distinguish the Asian developmental model from the liberal-dependent model that characterizes most other developing countries, including Brazil.

  9. Holographic twin Higgs model.

    Science.gov (United States)

    Geller, Michael; Telem, Ofri

    2015-05-15

    We present the first realization of a "twin Higgs" model as a holographic composite Higgs model. Uniquely among composite Higgs models, the Higgs potential is protected by a new standard model (SM) singlet elementary "mirror" sector at the sigma model scale f and not by the composite states at m_{KK}, naturally allowing for m_{KK} beyond the LHC reach. As a result, naturalness in our model cannot be constrained by the LHC, but may be probed by precision Higgs measurements at future lepton colliders, and by direct searches for Kaluza-Klein excitations at a 100 TeV collider.

  10. Energy-consumption modelling

    Energy Technology Data Exchange (ETDEWEB)

    Reiter, E.R.

    1980-01-01

    A highly sophisticated and accurate approach is described to compute on an hourly or daily basis the energy consumption for space heating by individual buildings, urban sectors, and whole cities. The need for models and specifically weather-sensitive models, composite models, and space-heating models are discussed. Development of the Colorado State University Model, based on heat-transfer equations and on a heuristic, adaptive, self-organizing computation learning approach, is described. Results of modeling energy consumption by the city of Minneapolis and Cheyenne are given. Some data on energy consumption in individual buildings are included.
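
    A minimal weather-sensitive baseline of the kind referred to above is a heating degree-day regression; the temperatures, meter readings and base temperature below are invented, and the sketch is far simpler than the Colorado State University model described in the abstract.

        import numpy as np

        # hypothetical daily mean outdoor temperatures (deg C) and metered heat use (kWh)
        temps = np.array([-5.0, 0.0, 3.0, 8.0, 12.0, 15.0, 18.0])
        usage = np.array([310.0, 240.0, 200.0, 130.0, 80.0, 55.0, 40.0])

        base_temp = 17.0                                 # assumed heating base temperature
        hdd = np.maximum(base_temp - temps, 0.0)         # heating degree-days
        A = np.column_stack([np.ones_like(hdd), hdd])
        (base_load, slope), *_ = np.linalg.lstsq(A, usage, rcond=None)
        print(round(base_load, 1), round(slope, 2))      # kWh/day and kWh per degree-day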

  11. Empirical Model Building Data, Models, and Reality

    CERN Document Server

    Thompson, James R

    2011-01-01

    Praise for the First Edition "This...novel and highly stimulating book, which emphasizes solving real problems...should be widely read. It will have a positive and lasting effect on the teaching of modeling and statistics in general." - Short Book Reviews This new edition features developments and real-world examples that showcase essential empirical modeling techniques Successful empirical model building is founded on the relationship between data and approximate representations of the real systems that generated that data. As a result, it is essential for researchers who construct these m

  12. Develop a Model Component

    Science.gov (United States)

    Ensey, Tyler S.

    2013-01-01

    During my internship at NASA, I was a model developer for Ground Support Equipment (GSE). The purpose of a model developer is to develop and unit test model component libraries (fluid, electrical, gas, etc.). The models are designed to simulate software for GSE (Ground Special Power, Crew Access Arm, Cryo, Fire and Leak Detection System, Environmental Control System (ECS), etc.) before they are implemented into hardware. These models support verifying local control and remote software for End-Item Software Under Test (SUT). The model simulates the physical behavior (function, state, limits and I/O) of each end-item and its dependencies as defined in the Subsystem Interface Table, Software Requirements & Design Specification (SRDS), Ground Integrated Schematic (GIS), and System Mechanical Schematic (SMS). The software of each specific model component is simulated through MATLAB's Simulink program. The intensive model development life cycle is as follows: identify source documents; identify model scope; update schedule; preliminary design review; develop model requirements; update model scope; update schedule; detailed design review; create/modify library component; implement library component references; implement subsystem components; develop a test script; run the test script; develop a users guide; send the model out for peer review; the model is sent out for verification/validation; if there is empirical data, a validation data package is generated; if there is not empirical data, a verification package is generated; the test results are then reviewed; and finally, the user requests accreditation, and a statement of accreditation is prepared. Once each component model is reviewed and approved, they are intertwined together into one integrated model. This integrated model is then tested itself, through a test script and autotest, so that it can be concluded that all models work conjointly, for a single purpose. The component I was assigned, specifically, was a

  13. Biosphere Model Report

    Energy Technology Data Exchange (ETDEWEB)

    D.W. Wu; A.J. Smith

    2004-11-08

    The purpose of this report is to document the biosphere model, the Environmental Radiation Model for Yucca Mountain, Nevada (ERMYN), which describes radionuclide transport processes in the biosphere and associated human exposure that may arise as the result of radionuclide release from the geologic repository at Yucca Mountain. The biosphere model is one of the process models that support the Yucca Mountain Project (YMP) Total System Performance Assessment (TSPA) for the license application (LA), TSPA-LA. The ERMYN provides the capability of performing human radiation dose assessments. This report documents the biosphere model, which includes: (1) Describing the reference biosphere, human receptor, exposure scenarios, and primary radionuclides for each exposure scenario (Section 6.1); (2) Developing a biosphere conceptual model using site-specific features, events, and processes (FEPs) (Section 6.2), the reference biosphere (Section 6.1.1), the human receptor (Section 6.1.2), and approximations (Sections 6.3.1.4 and 6.3.2.4); (3) Building a mathematical model using the biosphere conceptual model (Section 6.3) and published biosphere models (Sections 6.4 and 6.5); (4) Summarizing input parameters for the mathematical model, including the uncertainty associated with input values (Section 6.6); (5) Identifying improvements in the ERMYN compared with the model used in previous biosphere modeling (Section 6.7); (6) Constructing an ERMYN implementation tool (model) based on the biosphere mathematical model using GoldSim stochastic simulation software (Sections 6.8 and 6.9); (7) Verifying the ERMYN by comparing output from the software with hand calculations to ensure that the GoldSim implementation is correct (Section 6.10); (8) Validating the ERMYN by corroborating it with published biosphere models; comparing conceptual models, mathematical models, and numerical results (Section 7).

  14. Major Differences between the Jerome Model and the Horace Model

    Institute of Scientific and Technical Information of China (English)

    朱艳

    2014-01-01

    There are three famous translation models in the field of translation: the Jerome model, the Horace model and the Schleiermacher model. The production and development of the three models have had a significant influence on translation. To find the major differences between the two western classical translation models, we discuss the Jerome model and the Horace model in depth in this paper.

  15. Modelling cointegration in the vector autoregressive model

    DEFF Research Database (Denmark)

    Johansen, Søren

    2000-01-01

    A survey is given of some results obtained for the cointegrated VAR. The Granger representation theorem is discussed and the notions of cointegration and common trends are defined. The statistical model for cointegrated I(1) variables is defined, and it is shown how hypotheses on the cointegrating...... relations can be estimated under suitable identification conditions. The asymptotic theory is briefly mentioned and a few economic applications of the cointegration model are indicated....
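
    For reference, the cointegrated VAR for I(1) variables is usually written in its vector error-correction form (standard notation, not quoted from the survey):

        \[
          \Delta X_t = \alpha \beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \, \Delta X_{t-i} + \mu + \varepsilon_t ,
        \]

    where the columns of \(\beta\) are the cointegrating relations, \(\alpha\) contains the adjustment coefficients, and the reduced-rank product \(\Pi = \alpha\beta'\) is what distinguishes the cointegrated model from an unrestricted VAR.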

  16. Emissions Modeling Clearinghouse

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Emissions Modeling Clearinghouse (EMCH) supports and promotes emissions modeling activities both internal and external to the EPA. Through this site, the EPA...

  17. ASC Champ Orbit Model

    DEFF Research Database (Denmark)

    Riis, Troels; Jørgensen, John Leif

    1999-01-01

    This document describes a test of the implementation of the ASC orbit model for the Champ satellite.

  18. World Magnetic Model 2015

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The World Magnetic Model is the standard model used by the U.S. Department of Defense, the U.K. Ministry of Defence, the North Atlantic Treaty Organization (NATO)...

  19. Laboratory of Biological Modeling

    Data.gov (United States)

    Federal Laboratory Consortium — The Laboratory of Biological Modeling is defined by both its methodologies and its areas of application. We use mathematical modeling in many forms and apply it to...

  20. Model comparison in ANOVA.

    Science.gov (United States)

    Rouder, Jeffrey N; Engelhardt, Christopher R; McCabe, Simon; Morey, Richard D

    2016-12-01

    Analysis of variance (ANOVA), the workhorse analysis of experimental designs, consists of F-tests of main effects and interactions. Yet, testing, including traditional ANOVA, has been recently critiqued on a number of theoretical and practical grounds. In light of these critiques, model comparison and model selection serve as an attractive alternative. Model comparison differs from testing in that one can support a null or nested model vis-a-vis a more general alternative by penalizing more flexible models. We argue this ability to support more simple models allows for more nuanced theoretical conclusions than provided by traditional ANOVA F-tests. We provide a model comparison strategy and show how ANOVA models may be reparameterized to better address substantive questions in data analysis.
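
    One simple way to penalize more flexible models, used here only as a stand-in for the comparison strategy the authors develop, is an information criterion evaluated on nested linear models; the simulated data and effect sizes below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(1)
        # simulated two-factor design: factor A has a real effect, factor B does not
        a = np.repeat([0, 1], 40)
        b = np.tile(np.repeat([0, 1], 20), 2)
        y = 1.0 + 0.5 * a + rng.normal(0, 1, 80)

        def bic(y, X):
            """BIC of a Gaussian linear model fitted by least squares (smaller is better)."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            n, k = len(y), X.shape[1]
            sigma2 = resid @ resid / n
            return n * np.log(sigma2) + k * np.log(n)

        null     = np.ones((80, 1))
        model_a  = np.column_stack([np.ones(80), a])
        model_ab = np.column_stack([np.ones(80), a, b, a * b])
        print(bic(y, null), bic(y, model_a), bic(y, model_ab))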