SASSE MODELING OF A URANIUM MOLYBDENUM SEPARATION FLOWSHEET
Energy Technology Data Exchange (ETDEWEB)
Laurinat, J
2007-05-31
H-Canyon Engineering (HCE) is evaluating the feasibility of processing material from the Super Kukla Prompt Burst Reactor, which operated at the Nevada Test Site from 1964 to 1978. This material comprises 90 wt % uranium (U) (at approximately 20% 235U enrichment) alloyed with 10 wt % molybdenum (Mo). The objective is to dissolve the material in nitric acid (HNO3) in the H-Canyon dissolvers and then to process the dissolved material through H-Canyon First and Second Cycle solvent extraction. The U product from Second Cycle will be sent to the highly enriched uranium (HEU) blend down program. In the blend down program, enriched U from the 1EU product stream will be blended with natural U at a ratio of 1 part enriched U per 3.5 parts natural U to meet a reactor fuel specification of 4.95% 235U before being shipped for use by the Tennessee Valley Authority (TVA) in its nuclear plants. The TVA specification calls for <200 µg Mo/g U (200 ppm). Since natural U has about 10 µg Mo/g U, the required purity of the 1EU product prior to blending is about 800 µg Mo/g U, allowing for uncertainties. HCE requested that the Savannah River National Laboratory (SRNL) define a flowsheet for the safe and efficient processing of the U-10Mo material. This report presents a computational model of the solvent extraction portion of the proposed flowsheet. The two main objectives of the computational model are to demonstrate that the Mo impurity requirement can be met and to show that the solvent feed rates in the proposed flowsheet, in particular to 1A and 1D Banks, are adequate to prevent refluxing of U and thereby ensure nuclear criticality safety. SASSE (Spreadsheet Algorithm for Stagewise Solvent Extraction), a Microsoft Excel spreadsheet that supports Argonne National Laboratory's proprietary AMUSE (Argonne Model for Universal Solvent Extraction) code, was selected to model the U/Mo separation flowsheet. SASSE spreadsheet models of H-Canyon First and Second Cycle
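The blend-down arithmetic above can be checked with a simple Mo mass balance (a minimal sketch; the 1:3.5 blend ratio, 200 ppm spec, and ~10 ppm natural-U value are from the abstract, and 200 µg Mo per g U is the same quantity as 200 ppm):

```python
# Mo impurity blend-down check for the U-10Mo flowsheet.
# Spec: blended product must carry < 200 ug Mo per g U (200 ppm).
# Blend: 1 part enriched U (1EU product) per 3.5 parts natural U.
# Natural U carries ~10 ug Mo per g U.

def max_enriched_mo(spec_ppm=200.0, natural_ppm=10.0, blend_ratio=3.5):
    """Largest Mo content (ppm, i.e. ug Mo / g U) allowed in the
    enriched stream so the blended product just meets the spec.

    Mo balance over 1 g enriched U plus blend_ratio g natural U:
        (x + blend_ratio * natural_ppm) / (1 + blend_ratio) = spec_ppm
    """
    return spec_ppm * (1.0 + blend_ratio) - blend_ratio * natural_ppm

limit = max_enriched_mo()
print(f"Allowable Mo in 1EU product: {limit:.0f} ppm")  # ~865 ppm
```

The balance limit of about 865 ppm is consistent with the abstract's "about 800 ppm" target once margin is left for uncertainties.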
Report on the flowsheet model for the electrochemical treatment of liquid radioactive wastes
International Nuclear Information System (INIS)
Hobbs, D.T.
1995-01-01
The objective of this report is to describe the modeling and optimization procedure for the electrochemical removal of nitrates and nitrites from low level radioactive wastes. The simulation is carried out in SPEEDUP (TM), which is a state-of-the-art flowsheet modeling package. The flowsheet model will provide a better understanding of the process and aid in the scale-up of the system. For example, the flowsheet model has shown that the electrochemical cell must be operated in batch mode to achieve 95% destruction. The present status of the flowsheet model is detailed in this report along with a systematic description of the batch optimization of the electrochemical cell. Results from two batch runs and one optimization run are also presented
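The finding that batch operation is needed to reach 95% destruction can be illustrated with a first-order kinetics sketch (the rate constant is invented for illustration; the abstract does not give kinetic data):

```python
import math

# First-order destruction of nitrate/nitrite with rate constant k (1/h).
# Batch: remaining fraction C/C0 = exp(-k t), so destruction climbs
#   toward 100% as the batch runs longer.
# Single well-mixed continuous pass (CSTR): C/C0 = 1 / (1 + k * tau),
#   which needs a far larger residence time tau for the same destruction.

k = 0.5  # 1/h, illustrative

def batch_destruction(t_h):
    return 1.0 - math.exp(-k * t_h)

def cstr_destruction(tau_h):
    return 1.0 - 1.0 / (1.0 + k * tau_h)

t95_batch = -math.log(1 - 0.95) / k    # ~6 h in batch
tau95_cstr = (1 / (1 - 0.95) - 1) / k  # 38 h residence time in one pass
print(round(t95_batch, 1), round(tau95_cstr, 1))
```

The same 95% target costs roughly six hours in batch but a 38 h residence time in a single continuous pass, which is why recirculating batch operation is the practical choice.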
Group Contribution Based Process Flowsheet Synthesis, Design and Modelling
DEFF Research Database (Denmark)
d'Anterroches, Loïc; Gani, Rafiqul
2005-01-01
the flowsheet structure. Just as a functional group is a collection of atoms, a process-group is a collection of operations forming an "unit" operation or a set of "unit" operations. The link between the process-groups are the streams similar to the bonds that are attachments to atoms/groups. Each process...
Modelling Template for the Development of the Process Flowsheet
DEFF Research Database (Denmark)
Fedorova, Marina; Gani, Rafiqul
2015-01-01
Models are playing important roles in design and analysis of chemicals/bio-chemicals based products and the processes that manufacture them. Model-based methods and tools have the potential to decrease the number of experiments, which can be expensive and time consuming, and point to candidates, where the experimental effort could be focused. In this contribution a general modelling framework for systematic model building through modelling templates, which supports the reuse of existing models via its tools integration and model import and export capabilities, is presented. Modelling templates...... provides building blocks for the templates (generic models previously developed); 3) computer aided methods and tools, that include procedures to perform model translation, model analysis, model verification/validation, model solution and model documentation. In this work, the integrated use of all three......
Technetium removal: preliminary flowsheet options
International Nuclear Information System (INIS)
Eager, K.M.
1995-01-01
This document presents the results of a preliminary investigation into options for preliminary flowsheets for 99Tc removal from Hanford Site tank waste. A model is created to show the path of 99Tc through pretreatment to disposal. The Tank Waste Remediation System (TWRS) flowsheet (Orme 1995) is used as a baseline. Ranges of important inputs to the model are developed, such as the 99Tc inventory in the tanks and important splits through the TWRS flowsheet. Several technetium removal options are discussed along with sensitivities of the removal schemes to important model parameters
Energy Technology Data Exchange (ETDEWEB)
Jantzen, C. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Zamecnik, J. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Missimer, D. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-08
Control of the REDuction/OXidation (REDOX) state of glasses containing high concentrations of transition metals, such as High Level Waste (HLW) glasses, is critical in order to eliminate processing difficulties caused by overly reduced or overly oxidized melts. Operating an HLW melter at Fe^{2+}/ΣFe ratios between 0.09 and 0.33, a range that is neither overly oxidizing nor overly reducing, helps retain radionuclides in the melt: long-lived radioactive ^{99}Tc species in the less volatile reduced Tc^{4+} state, ^{104}Ru in the melt as insoluble RuO_{2} in the reduced Ru^{4+} state, and hazardous, volatile Cr^{6+} held in the glass in the less soluble and less volatile Cr^{3+} state. The melter REDOX control balances the oxidants and reductants from the feed and from processing additives such as antifoam. Currently, the Defense Waste Processing Facility (DWPF) is running a formic acid-nitric acid (FN) flowsheet where formic acid is the main reductant and nitric acid is the main oxidant. During decomposition, formate and formic acid release H_{2} gas, which requires close control of the melter vapor space flammability. A switch to a nitric acid-glycolic acid (GN) flowsheet is desired because the glycolic acid flowsheet releases considerably less H_{2} gas upon decomposition. This would greatly simplify DWPF processing. Development of an EE term for glycolic acid in the GN flowsheet is documented in this study.
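The idea of balancing feed reductants against oxidants can be sketched on an electron-equivalent basis (the electron counts below are textbook half-reaction values under assumed end states, not the coefficients of the DWPF REDOX model itself, and the feed composition is invented):

```python
# Net electron equivalents of a feed (illustrative sketch).
# Reductants donate electrons on full oxidation of carbon to CO2;
# oxidants accept electrons on reduction of N(+5)/N(+3) toward N2.
ELECTRONS = {
    "formate":   +2,  # HCOO-: C(+2) -> CO2, 2 e- donated
    "glycolate": +6,  # C2 at avg +1 -> 2 CO2, 6 e- donated
    "nitrate":   -5,  # N(+5) -> N2 (0), 5 e- accepted (assumed end state)
    "nitrite":   -3,  # N(+3) -> N2 (0), 3 e- accepted (assumed end state)
}

def net_electron_equivalents(moles):
    """moles: dict of species -> mol; positive result = net reducing feed."""
    return sum(ELECTRONS[s] * n for s, n in moles.items())

feed = {"glycolate": 0.30, "nitrate": 0.25, "nitrite": 0.05}
print(round(net_electron_equivalents(feed), 2))  # 0.4, mildly reducing
```

A mildly positive balance corresponds to the desired slightly reduced melt; strongly positive or negative totals flag overly reduced or overly oxidized feeds.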
International Nuclear Information System (INIS)
Davison, J.E.; Eldershaw, C.E.
1992-01-01
This document forms the final report on a study of a coal-fired magnetohydrodynamic (MHD)/steam electric power generation system carried out by British Coal Corporation for the Commission of the European Communities. The study objective was to provide mass and energy balances and overall plant efficiency predictions for MHD to assist the Commission in their evaluation of advanced power generation technologies. In early 1990 the British Coal Corporation completed a study for the Commission in which a computer flowsheet modelling package was used to predict the performance of a conceptual air blown MHD plant. Since that study was carried out, increasing emphasis has been placed on the possible need to reduce CO2 emissions to counter the so-called greenhouse effect. Air blown MHD could greatly reduce CO2 emissions per kWh by virtue of its high thermal efficiency. However, if even greater reductions in CO2 emissions were required, the CO2 produced by coal combustion may have to be disposed of, for example into the deep ocean or underground caverns. To achieve this at minimum cost a concentrated CO2 flue gas would be required. This could be achieved in an MHD plant by using a mixture of high purity oxygen and recycled CO2 flue gas in the combustor. To assess this plant concept the European Commission awarded British Coal a contract to produce performance predictions using the ACCESS computer program
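Why oxygen/recycled-CO2 firing concentrates the flue gas can be seen from simple combustion stoichiometry (a sketch assuming an idealized pure-carbon fuel and 10% excess oxidant; real coal and excess levels differ):

```python
# Dry flue gas CO2 mole fraction: air-blown vs O2/CO2-recycle firing.
# Fuel idealized as pure carbon: C + O2 -> CO2.

def dry_co2_fraction_air(excess_air=0.1):
    """Air-blown: each mol O2 drags in 79/21 mol N2 as diluent."""
    o2_fed = 1.0 * (1.0 + excess_air)   # mol O2 per mol C
    n2 = o2_fed * 79.0 / 21.0
    co2, o2_left = 1.0, o2_fed - 1.0
    return co2 / (co2 + n2 + o2_left)

def dry_co2_fraction_oxyfuel(excess_o2=0.1):
    """Oxy-fuel with CO2 recycle: no N2, and the recycled diluent is
    itself CO2, so it drops out of the mole fraction."""
    co2, o2_left = 1.0, excess_o2
    return co2 / (co2 + o2_left)

print(round(dry_co2_fraction_air(), 2))      # ~0.19
print(round(dry_co2_fraction_oxyfuel(), 2))  # ~0.91
```

Roughly 19% CO2 in the air-blown case versus over 90% with oxy-fuel firing, which is what makes low-cost CO2 capture for disposal plausible.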
Process Options Description for Vitrification Flowsheet Model of INEEL Sodium Bearing Waste
International Nuclear Information System (INIS)
Nichols, T.T.; Taylor, D.D.; Lauerhass, L.; Barnes, C.M.
2002-01-01
The technical information required for the development of a basic steady-state process simulation of the vitrification treatment train of sodium bearing waste (SBW) at Idaho National Engineering and Environmental Laboratory (INEEL) is presented. The objective of the modeling effort is to provide the predictive capability required to optimize an entire treatment train and assess system-wide impacts of local changes at individual unit operations, with the aim of reducing the schedule and cost of future process/facility design efforts. All the information required a priori for engineers to construct and link unit operation modules in a commercial software simulator to represent the alternative treatment trains is presented. The information is of a mid- to high-level nature and consists of the following: (1) a description of twenty-four specific unit operations--their operating conditions and constraints, primary species and key outputs, and the initial modeling approaches that will be used in the first year of the simulation's development; (2) three potential configurations of the unit operations (trains) and their interdependencies via stream connections; and (3) representative stream compositional makeups
A new flowsheeting tool for flue gas treating
van Elk, E. P.; Arendsen, A. R. J.; Versteeg, G. F.
2009-01-01
A new flowsheeting tool, specifically designed for steady-state simulation of acid gas treating processes, has been developed. The models implemented in the new tool combine all issues relevant for the design, optimization and analysis of acid gas treating processes, including post-combustion and
International Nuclear Information System (INIS)
Homma, Shunji; Sakamoto, Susumu; Takanashi, Mitsuhiro; Nammo, Akihiko; Satoh, Yoshihiro; Soejima, Takayuki; Koga, Jiro; Matsumoto, Shiro
1995-01-01
A computer aided process flowsheet design and analysis system, COMPAS, has been developed in order to carry out flowsheet calculations on the process flow diagram of nuclear fuel reprocessing. All equipment items in the process flowsheet diagram, such as dissolvers and mixer-settlers, are graphically visualized as icons on the bitmap display of a UNIX workstation. Drawing of a flowsheet can be carried out easily by mouse operation. Both published numerical simulation codes and a user's own codes can be used with COMPAS. Equipment specifications and stream component concentrations, displayed as tables, can be edited by the user. Calculation results can also be displayed graphically. Two examples show that COMPAS is applicable to determining operating conditions for the Purex process and to analyzing extraction behavior in a mixer-settler extractor. (author)
International Nuclear Information System (INIS)
Williams, D.F.; Knauer, J.B.; O'Kelley, G.D.; Wiggins, J.T.; Porter, C.E.
1992-01-01
The production of large quantities of high specific activity 63Ni (>10 Ci/g) requires both a highly enriched 62Ni target and a long irradiation period at high neutron flux. Trace impurities in the nickel and associated target materials are also activated and account for a significant fraction of the discharged activity and essentially all of the gamma activity. While most of these undesirable activation products (mainly transition metals) can be easily removed as chloride complexes during anion exchange, chromium, present as 51Cr, and manganese, present as 54Mn, are exceptions and require solvent extraction of the in-cell product to achieve the desired purity. In addition to summarizing the current development and production experience, optimized flowsheets are discussed
Application of structured flowsheets to global evaluation of tank waste processing alternatives
International Nuclear Information System (INIS)
Jansen, G.; Knutson, B.J.; Niccoli, L.G.; Frank, D.D.
1994-01-01
Remediation of the Hanford waste tanks requires integration of chemical technologies and evaluation of alternatives from the perspective of the overall Hanford cleanup purpose. The use of Design/IDEF (R) logic to connect chemical process functions to the overall cleanup mission in the Hanford Strategic Analysis (HSA) and to Aspen Plus (R) process models can show the effect of each process step on global performance measures such as safety, cost, and public perception. This hybrid of chemical process analysis and systems engineering produces structured material balance flowsheets at any level of process aggregation within the HSA. Connectivity and consistent process and stream nomenclature are automatically transferred between detailed process models, the HSA top purpose, and the global material balance flowsheet evaluation. Application to separation processes is demonstrated for a generic TRUEX-Sludge Wash flowsheet with many process options and for the aggregation of a Clean Option flowsheet from a detailed chemical process level to a global evaluation level
ON-LINE CALCULATOR: FORWARD CALCULATION JOHNSON ETTINGER MODEL
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
SLUDGE BATCH 6 PHASE II FLOWSHEET SIMULATIONS
Energy Technology Data Exchange (ETDEWEB)
Koopman, D.; Best, D.
2010-03-30
Two Sludge Receipt and Adjustment Tank (SRAT) runs were used to demonstrate that a fairly wide window of acid stoichiometry was available for processing SB6 Phase II flowsheet simulant (Tank 40 simulant) while still meeting the dual goals of acceptable nitrite destruction and controlled hydrogen generation. Phase II was an intermediate flowsheet study for the projected composition of Tank 40 after transfer of SB6/Tank 51 sludge to the heel of SB5. The composition was based on August 2009 projections. A window of about 50% in total acid was found between acceptable nitrite destruction and excessive hydrogen generation.
Computer Aided Flowsheet Design using Group Contribution Methods
DEFF Research Database (Denmark)
Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul
2011-01-01
In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... different unit operations in the system. Feasible flowsheet configurations are generated using efficient combinatorial algorithms and the performance of each candidate flowsheet is evaluated using a set of flowsheet properties. A systematic notation system called SFILES is used to store the structural...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....
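The idea of storing each flowsheet as a compact string of process-groups can be sketched as follows (the notation here is a simplified stand-in invented for illustration, not the actual SFILES grammar):

```python
# A flowsheet as an ordered sequence of process-groups, serialized to
# a compact SFILES-like string for storage and comparison.

flowsheet = [
    ("rA", "reactor performing reaction A"),
    ("flash", "vapor-liquid flash"),
    ("dAB", "distillation splitting A/B"),
]

def to_string(groups):
    """Serialize: parenthesized group codes in stream order."""
    return "".join(f"({code})" for code, _ in groups)

def from_string(s):
    """Recover the ordered list of group codes from the string."""
    return s.strip("()").split(")(")

s = to_string(flowsheet)
print(s)  # (rA)(flash)(dAB)
```

Storing only such strings, rather than full simulation files, is what keeps the computational load and information storage small during combinatorial generation of candidate flowsheets.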
Low temperature dissolution flowsheet for plutonium metal
Energy Technology Data Exchange (ETDEWEB)
Daniel, W. E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Almond, P. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Rudisill, T. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-05-01
The H-Canyon flowsheet used to dissolve Pu metal for PuO_{2} production utilizes boiling HNO_{3}. SRNL was requested to develop a complementary dissolution flowsheet at two reduced temperature ranges. The dissolution and H_{2} generation rates of Pu metal were investigated using a dissolving solution at ambient temperature (20-30 °C) and for an intermediate temperature of 50-60 °C. Additionally, the testing included an investigation of the dissolution rates and characterization of the off-gas generated from the ambient temperature dissolution of carbon steel cans and the nylon bags that contain the Pu metal when charged to the dissolver.
Flowsheet optimization of a lubricant base oil hydrotreatment process
Directory of Open Access Journals (Sweden)
Medeiros J. L.
2004-01-01
Unsaturated, nitrogen-containing, and sulfur-containing compounds may reach undesirable levels in lubricant base oils, requiring hydrotreatment (HDT) at high temperatures and pressures. HDT processes are well known for their high capital and operational costs due to the use of hydrogen, compressors, and multistage heterogeneous reactors. Process costs are thus highly dependent on the applied conditions. An oversized process entails unnecessary costs and capital investment. On the other hand, mild reaction conditions lead to products that do not meet specifications. This work analyzes the process using compositional modeling and a kinetic framework developed previously (Barbosa et al., 2002) for the HDT of lubricant base oils. The industrial reactor is rigorously modeled in an adiabatic, multistage configuration. For the remaining equipment in the flowsheet, we adopt shortcut models for compressors and exchangers. The process is then optimized over its operating variables subject to product specification constraints.
Method for innovative synthesis-design of chemical process flowsheets
DEFF Research Database (Denmark)
Kumar Tula, Anjan; Gani, Rafiqul
...... of chemical processes, where chemical process flowsheets could be synthesized in the same way as atoms or groups of atoms are synthesized to form molecules in computer aided molecular design (CAMD) techniques [4]. That, from a library of building blocks (functional process-groups) and a set of rules to join...... and selected for further analysis. In the next stage, the design parameters for the operations of the flowsheet are established through reverse engineering approaches based on driving forces available for each operation. In the final stage, when all the necessary information for a rigorous process simulation is available, rigorous simulation is performed to validate the synthesis-design. Note that since the flowsheet is synthesized and the operations in the flowsheet designed to match a set of design targets, there are no iterations involved as the final flowsheet is among the best, if not the best. In this paper......
Evaluation of quartz melt rate furnace with the nitric-glycolic flowsheet
Energy Technology Data Exchange (ETDEWEB)
Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Miller, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-08-03
The Savannah River National Laboratory (SRNL) was tasked to support validation of the Defense Waste Processing Facility (DWPF) melter offgas flammability model for the Nitric-Glycolic (NG) flowsheet. The work is supplemental to the Cold Cap Evaluation Furnace (CEF) testing conducted in 2014 [1] and the Slurry-fed Melt Rate Furnace (SMRF) testing conducted in 2016 [2] that supported Deliverable 4 of the DWPF & Saltstone Facility Engineering Technical Task Request (TTR) [3]. The Quartz Melt Rate Furnace (QMRF) was evaluated as a bench-scale scoping tool to potentially be used in lieu of or simply prior to the use of the larger-scale SMRF or CEF. The QMRF platform has been used previously to evaluate melt rate behavior and offgas compositions of DWPF glasses prepared from the Nitric-Formic (NF) flowsheet but not for the NG flowsheet and not with continuous feeding [4]. The overall objective of the 2016-2017 testing was to evaluate the efficacy of the QMRF as a lab-scale platform for steady state, continuously fed melter testing with the NG flowsheet as an alternative to more expensive and complex testing with the SMRF or CEF platforms.
An equation oriented approach to steady state flowsheeting of methanol synthesis loop
International Nuclear Information System (INIS)
Fathikalajahi, J.; Baniadam, M.; Rahimpour, M.R.
2008-01-01
An equation-oriented approach was developed for steady state flowsheeting of a commercial methanol plant. The loop consists of a fixed bed reactor, a flash separator, a preheater, coolers, and a compressor. For steady state flowsheeting of the plant, mathematical models of the reactor and other units are needed. The reactor used in the loop is a Lurgi type, and its configuration is rather complex. The reactor and flash separator, the two most important units of the plant, were modeled previously. The model is based on mass and energy balances for each piece of equipment, together with auxiliary equations such as reaction rate expressions and a thermodynamic model for liquid-phase activity coefficients. In order to validate the mathematical model for the synthesis loop, simulations were performed using the operating conditions and characteristics of the commercial plant. The good agreement between the steady state simulation results and the plant data shows the validity of the model.
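The equation-oriented idea, writing all unit balances as one residual system and solving them simultaneously rather than unit by unit, can be sketched on a toy reactor/separator/purge loop (all numbers invented; a real methanol loop has many more equations):

```python
# Equation-oriented solution of a toy synthesis loop.
# Fresh feed F mixes with recycle R; the reactor converts fraction x
# per pass; the separator removes product; unconverted gas is recycled
# with purge fraction p. Unknown: steady-state recycle flow R.
#
# Residual form g(R) = 0, collapsing the unit balances:
#   reactor inlet: N = F + R
#   unconverted:   U = (1 - x) * N
#   recycle:       R = (1 - p) * U
# => g(R) = (1 - p) * (1 - x) * (F + R) - R

F, x, p = 100.0, 0.25, 0.05  # mol/h, per-pass conversion, purge fraction

def residual(R):
    return (1 - p) * (1 - x) * (F + R) - R

# Bisection as a pure-Python stand-in for the Newton-type solvers
# real equation-oriented flowsheeters apply to the full system.
lo, hi = 0.0, 1e4
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
R = 0.5 * (lo + hi)
print(round(R, 1))  # analytic check: R = a*F/(1-a) with a = (1-p)*(1-x)
```

The sequential-modular alternative would instead sweep the loop unit by unit and iterate on the recycle tear stream until it converges to this same value.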
Integrated HLW Conceptual Process Flowsheet(s) for the Crystalline Silicotitanate Process SRDF-98-04
International Nuclear Information System (INIS)
Jacobs, R.A.
1998-01-01
The Strategic Research and Development Fund (SRDF) provided funds to develop integrated conceptual flowsheets and material balances for a CST process as a potential replacement for, or second generation to, the ITP process. This task directly supports another SRDF task, Glass Form for HLW Sludge with CST (SRDF-98-01, M. K. Andrews), which seeks to further develop sludge/CST glasses that could be used if the ITP process were replaced by CST ion exchange. The objective of the proposal was to provide flowsheet support for development and evaluation of a High Level Waste Division process to replace ITP. The flowsheets would provide a conceptual integrated material balance showing the impact on the HLW division. The evaluation would incorporate information to be developed by Andrews and Harbour on CST/DWPF glass formulations and provide the bases for evaluating the economic impact of the proposed replacement process. Coincident with this study, the Salt Disposition Team began its evaluation of alternatives for disposition of the HLW salts in the SRS waste tanks. During that time, the CST IX process was selected as one of four alternatives (of eighteen Phase II alternatives) for further evaluation during Phase III
Flowsheet finalisation for immobilisation of SGHWR wastes
International Nuclear Information System (INIS)
Lee, D.J.
1984-09-01
This report summarises research and development work carried out during the year ended March 1983 on the programme for cementing the Winfrith Reactor (SGHWR) sludge. Further results from the characterisation programme are reported, together with data from the cementation programme. Formulations based on Ordinary Portland Cement (OPC), ground granulated blast furnace slag (BFS) and Pulverised Fuel Ash (PFA) have been tested. The results show that a blend of 90% BFS/10% OPC by weight, gives the best properties. Chemical pretreatment as a method for producing a stable waste form is discussed. A dewatering pretreatment to provide a sludge suitable for direct cementation is also outlined. A flowsheet for cementing the SGHWR sludge is proposed based on these laboratory and pilot scale studies. The major components required for the active plant are identified and provisional plant layouts are given. (author)
Mixed Waste Treatment Project: LLNL and LANL computer simulations of integrated flowsheets
International Nuclear Information System (INIS)
Camp, D.W.; Dietsche, L.J.; Upadhye, R.S.; Borduin, L.C.; Pendergrass, J.A.; Thompson, T.K.
1994-01-01
Computer simulations of mixed waste processing flowsheets using ASPEN PLUS process simulation software were completed by a joint Lawrence Livermore National Laboratory/Los Alamos National Laboratory (LLNL/LANL) effort for the US Department of Energy Mixed Waste Treatment Project. The LLNL model used relatively detailed synthesized chemical "cocktails" to simulate waste streams. The LANL approach used less detail but made extensive use of simple stream splitters and thermodynamic coal models for combustible waste compositions. The two modeling approaches agreed within 16% for the product streams and within 25% for the auxiliary fuel rate. The discrepancy between the auxiliary fuel rates was traced to different methods of handling organics in lab packs and scintillation vials with the process models. The ASPEN models are valuable tools for evaluating waste processing flowsheets
GLYCOLIC-FORMIC ACID FLOWSHEET FINAL REPORT FOR DOWNSELECTION DECISION
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Pickenheim, B.; Stone, M.; Newell, J.; Best, D.
2011-03-10
Flowsheet testing was performed to develop the nitric-glycolic-formic acid flowsheet (referred to as the glycolic-formic flowsheet throughout the rest of the report) as an alternative to the nitric/formic flowsheet currently being processed at the DWPF. This new flowsheet has shown that mercury can be removed in the Sludge Receipt and Adjustment Tank (SRAT) with minimal hydrogen generation. All processing objectives were also met, including greatly reducing the Slurry Mix Evaporator (SME) product yield stress as compared to the baseline nitric/formic flowsheet. Forty-six runs were performed in total, including the baseline run and the melter feed preparation runs. Significant results are summarized. The baseline nitric/formic flowsheet run, using the SB6 simulant produced by Harrell, was extremely difficult to process successfully under existing DWPF acceptance criteria with this simulant at the HM levels of noble metals. While nitrite was destroyed and mercury was removed to near the DWPF limit, the rheologies of the SRAT and SME products were well above the design basis and hydrogen generation far exceeded the DWPF SRAT limit. In addition, mixing during the SME cycle was very poor. In this sense, the glycolic-formic flowsheet represents a significant upgrade over the current flowsheet. Mercury was successfully removed with almost no hydrogen generation, and the SRAT and SME product yield stresses were within process limits or previously processed ranges. The glycolic-formic flowsheet has a very wide processing window. Testing was completed from 100% to 200% of acid stoichiometry and using a glycolic-formic mixture from 40% to 100% glycolic acid. The testing met all processing requirements throughout these processing windows. This should allow processing at an acid stoichiometry of 100% and a glycolic-formic mixture of 80% glycolic acid with minimal hydrogen generation. It should also allow processing endpoints in the SRAT and SME at significantly higher
Carbon cycle modeling calculations for the IPCC
International Nuclear Information System (INIS)
Wuebbles, D.J.; Jain, A.K.
1993-01-01
We carried out essentially all the carbon cycle modeling calculations that were required by the IPCC Working Group 1. Specifically, IPCC required two types of calculations, namely, "inverse calculations" (input was CO2 concentrations and the output was CO2 emissions), and "forward calculations" (input was CO2 emissions and output was CO2 concentrations). In particular, we have derived carbon dioxide concentrations and/or emissions for several scenarios using our coupled climate-carbon cycle modelling system
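The forward/inverse duality can be demonstrated with a one-box atmosphere model (the box parameters below are illustrative stand-ins; the IPCC calculations used full carbon cycle models):

```python
# One-box atmosphere: dC/dt = BETA * E - (C - C0) / TAU
#   C   atmospheric CO2 (ppm), C0 preindustrial level
#   E   emissions (GtC/yr), BETA converts GtC to ppm
#   TAU effective uptake time scale (yr, illustrative)
BETA = 1.0 / 2.12   # ppm per GtC
TAU = 50.0          # yr
C0 = 280.0          # ppm

def forward(emissions, c_start=C0, dt=1.0):
    """Forward calculation: emissions -> concentration trajectory."""
    c, traj = c_start, []
    for e in emissions:
        c += dt * (BETA * e - (c - C0) / TAU)
        traj.append(c)
    return traj

def inverse(traj, c_start=C0, dt=1.0):
    """Inverse calculation: concentration trajectory -> emissions."""
    ems, prev = [], c_start
    for c in traj:
        ems.append(((c - prev) / dt + (prev - C0) / TAU) / BETA)
        prev = c
    return ems

E = [6.0 + 0.1 * i for i in range(10)]   # a simple emissions ramp
recovered = inverse(forward(E))
print(all(abs(a - b) < 1e-6 for a, b in zip(E, recovered)))  # True
```

Running the inverse calculation on the forward model's output recovers the input emissions, which is the consistency check the two IPCC calculation types imply.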
Recommendation of ruthenium source for sludge batch flowsheet studies
Energy Technology Data Exchange (ETDEWEB)
Woodham, W. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-09-13
Included herein is a preliminary analysis of previously-generated data from sludge batch 7a, 7b, 8, and 9 simulant and real-waste testing, performed to recommend a form of ruthenium for future sludge batch simulant testing under the nitric-formic flowsheet. Focus is given to reactions present in the Sludge Receipt and Adjustment Tank cycle, given that this cycle historically produces the most changes in chemical composition during Chemical Process Cell processing. Data are presented and analyzed for several runs performed under the nitric-formic flowsheet, with consideration given to effects on the production of hydrogen gas and nitrous oxide gas, consumption of formate, conversion of nitrite to nitrate, and the removal and recovery of mercury during processing. Additionally, a brief discussion is given of the effect of ruthenium source selection under the nitric-glycolic flowsheet. An analysis of data generated from scaled demonstration testing, sludge batch 9 qualification testing, and antifoam degradation testing under the nitric-glycolic flowsheet is presented. Experimental parameters of interest under the nitric-glycolic flowsheet include N2O production, glycolate destruction, conversion of glycolate to formate and oxalate, and the conversion of nitrite to nitrate. To date, the number of real-waste experiments that have been performed under the nitric-glycolic flowsheet is insufficient to provide a complete understanding of the effects of ruthenium source selection in simulant experiments with regard to fidelity to real-waste testing. Therefore, a determination of comparability between the two ruthenium sources as employed under the nitric-glycolic flowsheet is made based on available data in order to inform ruthenium source selection for future testing under the nitric-glycolic flowsheet.
Precipitates/Salts Model Sensitivity Calculation
International Nuclear Information System (INIS)
Mariner, P.
2001-01-01
The objective and scope of this calculation are to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation on potential seepage waters within a potential repository drift. This work is developed and documented using procedure AP-3.12Q, "Calculations", in support of "Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities" (BSC 2001a). The specific objective of this calculation is to examine the sensitivity and uncertainties of the Precipitates/Salts model. The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), "In-Drift Precipitates/Salts Analysis" (BSC 2001b). The calculation in the current document examines the effects of starting water composition, mineral suppressions, and the fugacity of carbon dioxide (CO2) on the chemical evolution of water in the drift
Nitric-glycolic flowsheet testing for maximum hydrogen generation rate
Energy Technology Data Exchange (ETDEWEB)
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is developing for implementation a flowsheet with a new reductant to replace formic acid. Glycolic acid has been tested over the past several years and found to effectively replace the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the chemical generation of hydrogen and ammonia, allows purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective adjustment of the SRAT/SME rheology, and is favorable with respect to melter flammability. The objective of this work was to perform DWPF Chemical Process Cell (CPC) testing at conditions that would bound the catalytic hydrogen production for the nitric-glycolic flowsheet.
Procafd: Computer Aided Tool for Synthesis-Design & Analysis of Chemical Process Flowsheets
DEFF Research Database (Denmark)
Kumar Tula, Anjan; Eden, Mario R.; Gani, Rafiqul
2015-01-01
unit operations (reactor, distillation, flash, crystallization, etc.), bonds representing streams and/or recycles, rules for chemical feasibility also representing process flowsheet feasibility and sum of group contributions representing the performance of the flowsheet. In the next stage, the design...
A revised calculational model for fission
International Nuclear Information System (INIS)
Atchison, F.
1998-09-01
A semi-empirical parametrization has been developed to calculate the fission contribution to evaporative de-excitation of nuclei with a very wide range of charge, mass and excitation-energy and also the nuclear states of the scission products. The calculational model reproduces measured values (cross-sections, mass distributions, etc.) for a wide range of fissioning systems: Nuclei from Ta to Cf, interactions involving nucleons up to medium energy and light ions. (author)
Model calculations in correlated finite nuclei
Energy Technology Data Exchange (ETDEWEB)
Guardiola, R.; Ros, J. (Granada Univ. (Spain). Dept. de Fisica Nuclear); Polls, A. (Tuebingen Univ. (Germany, F.R.). Inst. fuer Theoretische Physik)
1980-10-21
In order to study the convergence condition of the FAHT cluster expansion several model calculations are described and numerically tested. It is concluded that this cluster expansion deals properly with the central part of the two-body distribution function, but presents some difficulties for the exchange part.
EARTHWORK VOLUME CALCULATION FROM DIGITAL TERRAIN MODELS
Directory of Open Access Journals (Sweden)
JANIĆ Milorad
2015-06-01
Accurate calculation of cut and fill volume is of essential importance in many fields. This article presents a new method, based on Digital Terrain Models, that involves no approximation. A relatively new mathematical model was developed for this purpose and implemented in a software solution. Both have been tested and verified in practice on several large opencast mines. The application is developed in the AutoLISP programming language and works in the AutoCAD environment.
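The general idea of TIN-based cut/fill computation can be sketched with a simple prism method (an illustrative simplification, not the article's exact no-approximation algorithm; averaging vertex height differences is approximate for triangles that straddle the cut/fill boundary):

```python
def triangle_plan_area(tri):
    """Plan-view (2-D) area of a TIN triangle from its (x, y, dz) vertices."""
    (x1, y1, _), (x2, y2, _), (x3, y3, _) = tri
    return abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0

def cut_and_fill(tin):
    """tin: list of triangles, each a tuple of three (x, y, dz) vertices,
    where dz = existing elevation minus design elevation at that vertex.
    Each triangle contributes a prism of volume = plan area * mean dz."""
    cut = fill = 0.0
    for tri in tin:
        volume = triangle_plan_area(tri) * sum(v[2] for v in tri) / 3.0
        if volume >= 0.0:
            cut += volume    # terrain above design grade: material to excavate
        else:
            fill -= volume   # terrain below design grade: material to place
    return cut, fill
```

A production method such as the article's would additionally split triangles along the zero-dz contour so that cut and fill within one triangle are separated exactly.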
SRNL report for the tank waste disposition integrated flowsheet: Corrosion testing
Energy Technology Data Exchange (ETDEWEB)
Wyrwas, R. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2015-09-30
A series of cyclic potentiodynamic polarization (CPP) tests were performed in support of the Tank Waste Disposition Integrated Flowsheet (TWDIF). The focus of the testing was to assess the effectiveness of the SRNL model for predicting the amount of nitrite inhibitor needed to prevent pitting induced by increasing halide concentrations. The testing conditions were selected to simulate the dilute process stream that is proposed to be returned to tank farms from treating the off-gas from the low activity waste melter in the Waste Treatment and Immobilization Plant.
ON-LINE CALCULATOR: JOHNSON ETTINGER VAPOR INTRUSION MODEL
On-Site was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind On-Site is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
International Nuclear Information System (INIS)
Cardona Alzate, C.A.; Sanchez Toro, O.J.
2006-01-01
Fuel ethanol is considered one of the most important renewable fuels due to the economic and environmental benefits of its use. Lignocellulosic biomass is the most promising feedstock for producing bioethanol due to its global availability and to the energy gain that can be obtained when non-fermentable materials from biomass are used for cogeneration of heat and power. In this work, several process configurations for fuel ethanol production from lignocellulosic biomass were studied through process simulation using Aspen Plus. Some flowsheets considering the possibilities of reaction-reaction integration were taken into account among the studied process routes. The flowsheet variants were analyzed from the energy point of view, using as the comparison criterion the energy consumption needed to produce 1 L of anhydrous ethanol. A simultaneous saccharification and cofermentation process with water recycling showed the best results, with an energy consumption of 41.96 MJ/L EtOH. If pervaporation is used as the dehydration method instead of azeotropic distillation, further energy savings can be obtained. In addition, an energy balance was estimated using the results from the simulation and literature data. A net energy value of 17.65-18.93 MJ/L EtOH was calculated, indicating the energy efficiency of lignocellulosic ethanol.
ALTERNATIVE FLOWSHEETS FOR THE SULFUR-IODINE THERMOCHEMICAL HYDROGEN CYCLE
Energy Technology Data Exchange (ETDEWEB)
BROWN,LC; LENTSCH,RD; BESENBRUCH,GE; SCHULTZ,KR; FUNK,JE
2003-02-01
OAK-B135 A hydrogen economy will need significant new sources of hydrogen. Unless large-scale carbon sequestration can be economically implemented, the use of hydrogen reduces greenhouse gases only if the hydrogen is produced with non-fossil energy sources. Nuclear energy is one of the limited options available. One of the promising approaches to producing large quantities of hydrogen from nuclear energy efficiently is the Sulfur-Iodine (S-I) thermochemical water-splitting cycle, driven by high-temperature heat from a helium Gas-Cooled Reactor. The authors have completed a study of nuclear-driven thermochemical water-splitting processes. The final task of this study was the development of a flowsheet for a prototype S-I production plant. An important element of this effort was the evaluation of alternative flowsheets and selection of the reference design.
Plutonium-uranium partitioning: alternate flowsheet, Plutonium Reclamation Facility [SEPHIS]
Energy Technology Data Exchange (ETDEWEB)
Fort, L.A.
1975-12-01
The SEPHIS computer program was used to predict the transient and steady-state concentrations in a stage-wise scheme for the Pu reclamation solvent extraction system. With the aid of the computer, an alternative flowsheet for Pu-U partitioning was constructed. The goal of the alternative flowsheet is to reduce Pu losses from the initial stripping column and reduce the quantity of Pu-bearing wastes from the solvent extraction system. (JSR)
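SEPHIS itself is a detailed transient code, but the core of any stage-wise solvent extraction calculation is a per-stage mass balance closed by an equilibrium relation. A minimal steady-state sketch, assuming a constant distribution ratio D (all parameters here are illustrative, not SEPHIS inputs):

```python
def countercurrent_cascade(n_stages, x_feed, aq_flow, org_flow, D, sweeps=2000):
    """Solve steady-state aqueous concentrations x[i] in a countercurrent
    extraction cascade. Aqueous feed enters stage 0; fresh solvent enters
    the last stage. Stage equilibrium: y = D * x (organic/aqueous)."""
    x = [x_feed] * n_stages
    for _ in range(sweeps):                                    # Gauss-Seidel sweeps
        for i in range(n_stages):
            x_in = x_feed if i == 0 else x[i - 1]              # aqueous from upstream
            y_in = 0.0 if i == n_stages - 1 else D * x[i + 1]  # organic from downstream
            # Stage balance: A*x_in + O*y_in = A*x[i] + O*D*x[i]
            x[i] = (aq_flow * x_in + org_flow * y_in) / (aq_flow + org_flow * D)
    return x
```

For an extraction factor E = D*O/A = 2 and four stages, the raffinate should retain (E - 1)/(E^5 - 1) = 1/31 of the feed concentration, matching the analytical Kremser result.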
Dissolution flowsheet for high flux isotope reactor fuel
Energy Technology Data Exchange (ETDEWEB)
Foster, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-09-27
As part of the Spent Nuclear Fuel (SNF) processing campaign, H-Canyon is planning to begin dissolving High Flux Isotope Reactor (HFIR) fuel in late FY17 or early FY18. Each HFIR fuel core contains inner and outer fuel elements which were fabricated from uranium oxide (U3O8) dispersed in a continuous Al phase using traditional powder metallurgy techniques. Fuels fabricated in this manner, like other SNFs processed in H-Canyon, dissolve by the same general mechanisms, with similar gas generation rates and the production of H2. The HFIR fuel cores will be dissolved and the recovered U will be down-blended into low-enriched U. HFIR fuel was previously processed in H-Canyon using a unique insert in both the 6.1D and 6.4D dissolvers. Multiple cores will be charged to the same dissolver solution, maximizing the concentration of dissolved Al. The objective of this study was to identify flowsheet conditions, through literature review and laboratory experimentation, to safely and efficiently dissolve the HFIR fuel in H-Canyon. Laboratory-scale experiments were performed to evaluate the dissolution of HFIR fuel using both Al 1100 and Al 6061 T6 alloy coupons. The Al 1100 alloy was considered a representative surrogate which provided an upper bound on the generation of flammable (i.e., H2) gas during the dissolution process. The dissolution of the Al 6061 T6 alloy proceeded at a slower rate than the Al 1100 alloy and was used to verify that the target Al concentration in solution could be achieved for the selected Hg concentration. Mass spectrometry and Raman spectroscopy were used to provide continuous monitoring of the concentration of H2 and other permanent gases in the dissolution offgas, allowing the development of H2 generation rate profiles. The H2 generation rates were subsequently used to evaluate whether a full HFIR core could be dissolved in an H-Canyon dissolver without exceeding 60% of the
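The H2 generation-rate profiles described above come from combining offgas composition measurements with the offgas flow. A minimal sketch of that data reduction (function names and numbers are illustrative, not from the report; cumulative H2 via trapezoidal integration):

```python
def h2_rates(offgas_flow_mol_min, h2_mole_fraction):
    """Instantaneous H2 generation rate (mol/min) from total offgas molar
    flow and the measured H2 mole fraction at each sample time."""
    return [f * y for f, y in zip(offgas_flow_mol_min, h2_mole_fraction)]

def cumulative_h2(times_min, rates_mol_min):
    """Trapezoidal integration of the rate profile -> total moles of H2."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        total += 0.5 * (rates_mol_min[i] + rates_mol_min[i - 1]) * dt
    return total
```

In practice the peak instantaneous rate, not the integral, is what gets compared against a flammability limit in the dissolver vapor space.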
Matrix model calculations beyond the spherical limit
International Nuclear Information System (INIS)
Ambjoern, J.; Chekhov, L.; Kristjansen, C.F.; Makeenko, Yu.
1993-01-01
We propose an improved iterative scheme for calculating higher genus contributions to the multi-loop (or multi-point) correlators and the partition function of the hermitian one matrix model. We present explicit results up to genus two. We develop a version which gives directly the result in the double scaling limit and present explicit results up to genus four. Using the latter version we prove that the hermitian and the complex matrix model are equivalent in the double scaling limit and that in this limit they are both equivalent to the Kontsevich model. We discuss how our results away from the double scaling limit are related to the structure of moduli space. (orig.)
Cost Calculation Model for Logistics Service Providers
Directory of Open Access Journals (Sweden)
Zoltán Bokor
2012-11-01
The exact calculation of logistics costs has become a real challenge in logistics and supply chain management. It is essential to gain reliable and accurate costing information to attain efficient resource allocation within logistics service provider companies. Traditional costing approaches, however, may not be sufficient to reach this aim in the case of complex and heterogeneous logistics service structures. This paper therefore explores ways of improving the cost calculation regimes of logistics service providers and shows how to adopt the multi-level full cost allocation technique in logistics practice. After determining the methodological framework, a sample cost calculation scheme is developed and tested using estimated input data. Based on the theoretical findings and the experiences of the pilot project, it can be concluded that the improved costing model contributes to making logistics costing more accurate and transparent. Moreover, the relations between costs and performances also become more visible, which significantly enhances the effectiveness of logistics planning and controlling.
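The multi-level full cost allocation idea can be sketched in a few lines (cost centers, activities, usage shares, and volumes are invented for illustration, not the paper's data):

```python
# Level 1: allocate indirect cost-center totals to service activities
indirect = {"warehouse_ops": 100_000.0, "fleet_admin": 60_000.0}  # EUR/year
usage_share = {  # share of each center consumed by each activity (rows sum to 1)
    "warehouse_ops": {"storage": 0.7, "transport": 0.3},
    "fleet_admin":   {"storage": 0.1, "transport": 0.9},
}
activity_cost = {"storage": 0.0, "transport": 0.0}
for center, total in indirect.items():
    for activity, share in usage_share[center].items():
        activity_cost[activity] += total * share

# Level 2: divide each activity's cost by its performance volume to get
# a unit cost that can be attached to individual services
volume = {"storage": 38_000, "transport": 20_000}  # e.g. pallet-days, ton-km
unit_cost = {a: activity_cost[a] / volume[a] for a in activity_cost}
# storage comes to about 2.0, transport about 4.2 EUR per performance unit
```

The point of the multi-level structure is that cause-and-effect usage shares, rather than a single overhead percentage, drive each allocation step.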
Nitric-glycolic flowsheet evaluation with the slurry-fed melt rate furnace
Energy Technology Data Exchange (ETDEWEB)
Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Miller, D. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Fowley, M. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Zamecnik, J. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-03-01
The Savannah River National Laboratory (SRNL) was tasked to support validation of the Defense Waste Processing Facility (DWPF) melter offgas flammability model for the nitric-glycolic (NG) flowsheet. The work supports Deliverable 4 of the DWPF & Saltstone Facility Engineering Technical Task Request (TTR) [1] and is supplemental to the Cold Cap Evaluation Furnace (CEF) testing conducted in 2014 [2]. The Slurry-fed Melt Rate Furnace (SMRF) was selected for the supplemental testing as it requires significantly fewer resources than the CEF and could provide a tool for more rapid analysis of melter feeds in the future. The SMRF platform has been used previously to evaluate the melt rate behavior of DWPF glasses, but was modified to accommodate analysis of the offgas stream. Additionally, the Melt Rate Furnace (MRF) and Quartz Melt Rate Furnace (QMRF) were utilized for evaluations. MRF data was used exclusively for melt behavior observations and REDuction/OXidation (REDOX) prediction comparisons and will be briefly discussed in conjunction with its support of the SMRF testing. The QMRF was operated similarly to the SMRF for the same TTR task, but will be discussed in a separate future report. The overall objectives of the SMRF testing were to: (1) evaluate the efficacy of the SMRF as a platform for steady-state melter testing with continuous feeding and offgas analysis; and (2) generate supplemental melter offgas flammability data to support the melter offgas flammability modeling effort for DWPF implementation of the NG flowsheet.
Sen, Maitraye; Chaudhury, Anwesha; Singh, Ravendra; John, Joyce; Ramachandran, Rohit
2013-03-10
Properties of active pharmaceutical ingredients (APIs) influence the critical quality attributes (CQAs) of final solid dosage forms (e.g. tablets). In the last decade, continuous manufacturing has been shown to be a promising alternative to batch processing in the pharmaceutical industry. Therefore, a quantitative model-based analysis of the influence of upstream API properties on downstream processing quality metrics will lead to enhanced QbD in pharmaceutical drug product manufacturing (Benyahia et al., 2012). In this study, a dynamic flowsheet simulation of an integrated API purification step (crystallization), followed by filtration and drying, with a downstream process (powder mixing) is presented. Results show that the temperature profile of a cooling crystallization process influences the crystal size distribution, which in turn impacts the RSD and API concentration of the powder mixing process, which in turn has a direct effect on tablet properties (Boukouvala et al., 2012). A hybrid PBM-DEM model is also presented to demonstrate the coupling of particle-scale information with process-scale information, leading to enhanced elucidation of the dynamics of the overall flowsheet simulation.
Mercury Phase II Study - Mercury Behavior in Salt Processing Flowsheet
Energy Technology Data Exchange (ETDEWEB)
Jain, V. [Savannah River Remediation, LLC., Aiken, SC (United States); Shah, H. [Savannah River Remediation, LLC., Aiken, SC (United States). Sludge and Salt Planning; Bannochie, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Wilmarth, W. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-07-25
Mercury (Hg) in the Savannah River Site Liquid Waste System (LWS) originated from decades of canyon processing where it was used as a catalyst for dissolving the aluminum cladding of reactor fuel. Approximately 60 metric tons of mercury is currently present throughout the LWS. Mercury has long been a consideration in the LWS, from both hazard and processing perspectives. In February 2015, a Mercury Program Team was established at the request of the Department of Energy to develop a comprehensive action plan for long-term management and removal of mercury. Evaluation was focused in two Phases. Phase I activities assessed the Liquid Waste inventory and chemical processing behavior using a system-by-system review methodology, and determined the speciation of the different mercury forms (Hg+, Hg++, elemental Hg, organomercury, and soluble versus insoluble mercury) within the LWS. Phase II activities are building on the Phase I activities, and results of the LWS flowsheet evaluations will be summarized in three reports: Mercury Behavior in the Salt Processing Flowsheet (i.e. this report); Mercury Behavior in the Defense Waste Processing Facility (DWPF) Flowsheet; and Mercury behavior in the Tank Farm Flowsheet (Evaporator Operations). The evaluation of the mercury behavior in the salt processing flowsheet indicates, inter alia, the following: (1) In the assembled Salt Batches 7, 8 and 9 in Tank 21, the total mercury is mostly soluble with methylmercury (MHg) contributing over 50% of the total mercury. Based on the analyses of samples from 2H Evaporator feed and drop tanks (Tanks 38/43), the source of MHg in Salt Batches 7, 8 and 9 can be attributed to the 2H evaporator concentrate used in assembling the salt batches. The 2H Evaporator is used to evaporate DWPF recycle water. (2) Comparison of data between Tank 21/49, Salt Solution Feed Tank (SSFT), Decontaminated Salt Solution Hold Tank (DSSHT), and Tank 50 samples suggests that the total mercury as well as speciated
Shell model calculations for exotic nuclei
Energy Technology Data Exchange (ETDEWEB)
Brown, B.A. (Michigan State Univ., East Lansing, MI (USA)); Warburton, E.K. (Brookhaven National Lab., Upton, NY (USA)); Wildenthal, B.H. (New Mexico Univ., Albuquerque, NM (USA). Dept. of Physics and Astronomy)
1990-02-01
In this paper we review the progress of the shell-model approach to understanding the properties of light exotic nuclei (A < 40). By "shell model" we mean the consistent and large-scale application of the classic methods discussed, for example, in the book of de-Shalit and Talmi. Modern calculations incorporate as many of the important configurations as possible and make use of realistic effective interactions for the valence nucleons. Properties such as the nuclear densities depend on the mean-field potential, which is usually treated separately from the valence interaction. We discuss results for radii which are based on a standard Hartree-Fock approach with Skyrme-type interactions.
Effective hamiltonian calculations using incomplete model spaces
International Nuclear Information System (INIS)
Koch, S.; Mukherjee, D.
1987-01-01
It appears that the danger of encountering "intruder states" is substantially reduced if an effective Hamiltonian formalism is developed for incomplete model spaces (IMS). In a Fock-space approach, the proof of a "connected diagram theorem" is fairly straightforward with exponential-type ansätze for the wave-operator W, provided the normalization chosen for W is separable. Operationally, one just needs a suitable categorization of the Fock-space operators into "diagonal" and "non-diagonal" parts that is a generalization of the corresponding procedure for the complete model space. The formalism is applied to prototypical 2-electron systems. The calculations have been performed on the Cyber 205 supercomputer. The authors paid special attention to efficient vectorization for the construction and solution of the resulting coupled non-linear equations.
Technology development in support of the TWRS process flowsheet. Revision 1
International Nuclear Information System (INIS)
Washenfelder, D.J.
1995-01-01
The Tank Waste Remediation System (TWRS) is to treat and dispose of Hanford's Single-Shell and Double-Shell Tank waste. The TWRS Process Flowsheet (WHC-SD-WM-TI-613, Rev. 1) described a flowsheet based on a large number of assumptions and engineering judgments that require verification or further definition through process and technology development activities. This document builds on the TWRS Process Flowsheet to identify and prioritize tasks that should be completed to strengthen the technical foundation for the flowsheet.
HYBRID SULFUR FLOWSHEETS USING PEM ELECTROLYSIS AND A BAYONET DECOMPOSITION REACTOR
Energy Technology Data Exchange (ETDEWEB)
Gorensek, M; William Summers, W
2008-05-30
A conceptual design is presented for a Hybrid Sulfur process for the production of hydrogen using a high-temperature nuclear heat source to split water. The process combines proton exchange membrane-based SO2-depolarized electrolyzer technology being developed at Savannah River National Laboratory with silicon carbide bayonet decomposition reactor technology being developed at Sandia National Laboratories. Both are part of the US DOE Nuclear Hydrogen Initiative. The flowsheet otherwise uses only proven chemical process components. Electrolyzer product is concentrated from 50 wt % sulfuric acid to 75 wt % via recuperative vacuum distillation. Pinch analysis is used to predict the high-temperature heat requirement for sulfuric acid decomposition. An Aspen Plus model of the flowsheet indicates 340.3 kJ high-temperature heat, 75.5 kJ low-temperature heat, 1.31 kJ low-pressure steam, and 120.9 kJ electric power are consumed per mole of H2 product, giving an LHV efficiency of 35.3% (41.7% HHV efficiency) if electric power is available at a conversion efficiency of 45%.
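The quoted efficiencies can be reproduced from the stated energy inputs. The sketch below charges the electricity back to primary heat at the stated 45% conversion efficiency; counting the steam duty at face value and the standard heating values of hydrogen are my assumptions:

```python
# Energy inputs per mole of H2 product, from the flowsheet figures above
high_t_heat = 340.3   # kJ, high-temperature nuclear heat
low_t_heat = 75.5     # kJ, low-temperature heat
steam = 1.31          # kJ, low-pressure steam (counted directly -- an assumption)
electricity = 120.9   # kJ of electric power
eta_gen = 0.45        # heat-to-electricity conversion efficiency

# Primary (thermal) energy charged to one mole of H2
primary = high_t_heat + low_t_heat + steam + electricity / eta_gen

LHV_H2, HHV_H2 = 241.8, 285.8  # kJ/mol, standard heating values of hydrogen
print(round(100 * LHV_H2 / primary, 1))  # → 35.3
print(round(100 * HHV_H2 / primary, 1))  # → 41.7
```

Both figures match the abstract, confirming that the stated efficiencies are on a primary-heat basis.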
Acceleration methods and models in Sn calculations
International Nuclear Information System (INIS)
Sbaffoni, M.M.; Abbate, M.J.
1984-01-01
In some neutron transport problems solved by the discrete ordinates method, it is relatively common to observe particularities such as the generation of negative fluxes, slow and insecure convergence, and solution instabilities. The models commonly used for neutron flux calculation and the acceleration methods included in the most widely used codes were analyzed with respect to their use in problems characterized by a strong upscattering effect. Some conclusions derived from this analysis are presented, as well as a new method to perform the upscattering scaling for solving the aforementioned problems in such cases. This method has been included in the DOT3.5 code (two-dimensional discrete ordinates radiation transport code), generating a new version of wider application. (Author)
An Improved Plutonium Trifluoride Precipitation Flowsheet
Energy Technology Data Exchange (ETDEWEB)
Harmon, H.D.
2001-06-26
This report discusses results of the plutonium trifluoride two-stage precipitation study. A series of precipitation experiments was used to identify the significant process variables affecting precipitation performance. A mathematical model of the precipitation process was developed which is based on the formation of plutonium fluoride complexes. The precipitation model relates all process variables, in a single equation, to a single parameter which can be used to control the performance of the plutonium trifluoride precipitation process. Recommendations have been made which will optimize the FB-Line plutonium trifluoride precipitation process.
International Nuclear Information System (INIS)
Seymour, R.G.
1995-01-01
High-level waste (HLW) feed preparation flowsheet development was initiated in fiscal year (FY) 1994 to evaluate alternative flowsheets for preparing melter feed for high-temperature melter (HTM) vitrification testing. Three flowsheets were proposed that might lead to increased processing capacity relative to the Hanford Waste Vitrification Plant (HWVP) and that were flexible enough to use with other HLW melter technologies. This document describes the decision path that led to the selection of flowsheets to be tested in the FY 1994 small-scale HTM tests. Feed preparation flowsheet development for the HLW HTM was based on the feed preparation flowsheet that was developed for the HWVP. This approach allowed the HLW program to build upon the extensive feed preparation flowsheet database developed under the HWVP Project. Primary adjustments to the HWVP flowsheet were to the acid adjustment and glass component additions. Developmental background regarding the individual features of the HLW feed preparation flowsheets is provided. Applicability of the HWVP flowsheet features to the new HLW vitrification mission is discussed. The proposed flowsheets were tested at the laboratory-scale at Pacific Northwest Laboratory. Based on the results of this testing and previously established criteria, a reductant-based flowsheet using glycolic acid and a nitric acid-based flowsheet were selected for the FY 1994 small-scale HTM testing.
Flowsheets and source terms for radioactive waste projections
International Nuclear Information System (INIS)
Forsberg, C.W.
1985-03-01
Flowsheets and source terms used to generate radioactive waste projections in the Integrated Data Base (IDB) Program are given. Volumes of each waste type generated per unit product throughput have been determined for the following facilities: uranium mining, UF6 conversion, uranium enrichment, fuel fabrication, boiling-water reactors (BWRs), pressurized-water reactors (PWRs), and fuel reprocessing. Source terms for DOE/defense wastes have been developed. Expected wastes from typical decommissioning operations for each facility type have been determined. All wastes are also characterized by isotopic composition at time of generation and by general chemical composition. 70 references, 21 figures, 53 tables.
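Once per-unit-throughput source terms exist, a waste projection reduces to multiplying them by planned throughputs. A minimal sketch (facility names follow the list above; every numeric factor is invented for illustration and is NOT an IDB source term):

```python
# Hypothetical unit source terms: m^3 of waste per unit of product throughput
unit_source_term = {
    "uranium_mining": 0.50,
    "uf6_conversion": 0.02,
    "enrichment": 0.01,
    "fuel_fabrication": 0.05,
}
# Hypothetical planned product throughput per facility (same unit basis)
throughput = {
    "uranium_mining": 1000.0,
    "uf6_conversion": 800.0,
    "enrichment": 600.0,
    "fuel_fabrication": 400.0,
}
# Projected waste volume per facility = unit source term * throughput
projected_waste = {f: unit_source_term[f] * throughput[f] for f in unit_source_term}
print(projected_waste["uranium_mining"])  # → 500.0
```

A full projection would carry isotopic and chemical composition vectors along with each volume, as the report describes.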
Preliminary evaluation of Am/Cm melter feed preparation process upset recovery flowsheets
International Nuclear Information System (INIS)
Stone, M.E.
2000-01-01
This document summarizes the results from the development of flowsheets to recover from credible processing errors specified in TTR 99-MNSS/SE-006. The proposed flowsheets were developed in laboratory-scale equipment and will be utilized, with minor modifications, for full-scale demonstrations in the Am/Cm Pilot Facility.
Sludge batch 9 follow-on actual-waste testing for the nitric-glycolic flowsheet
Energy Technology Data Exchange (ETDEWEB)
Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Crawford, C. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Pareizs, J. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-03-23
An actual-waste Sludge Batch 9 qualification run with the nitric-glycolic flowsheet (SC-18) was performed in FY16. In order to supplement the knowledge base for the nitric-glycolic flowsheet, additional testing was performed on the product slurries, condensates, and intermediate samples from run SC-18.
Energy Technology Data Exchange (ETDEWEB)
Bugrov, K.V.; Korotaev, V.G.; Korchenkin, K.K.; Logunov, M.V.; Ludin, S.A.; Mashkin, A.N.; Melentev, A.B.; Samarina, N.S. [FSUE 'PA Mayak', Lenin st., 35, Ozersk 456780 (Russian Federation)
2016-07-01
In order to improve the extraction flowsheet of the RT-1 Plant, two versions of the plutonium purification unit flowsheet were developed: a flowsheet with stabilization of the Pu(IV)-Np(IV) valence pair and Pu-Np co-recovery, and a flowsheet with stabilization of the Pu(IV)-Np(V) valence pair and Pu recovery. The task of stabilizing the valence pair of the target components in the required state was solved with the use of reagents already applied at the RT-1 Plant, namely hydrogen peroxide, hydrazine nitrate, and a catalyst (Fe). Both flowsheets were adapted to the plant purification facility with minimal modifications of the equipment, and passed full-scale industrial testing. As a result of this work, a reduction in the volume and salt content of the raffinate was achieved. (authors)
A Component Model for Cable System Calculations
Nijs, J.M.M. de; Boschma, J.J.
2012-01-01
Unfortunately, no method yet exists for cable system calculations to support cable engineers facing the technical challenge of increasing digital loads and ever-increasing capacity demands from commercial departments. This article introduces a reliable method for cable system calculations.
DWPF nitric-glycolic flowsheet chemical process cell chemistry. Part 1
Energy Technology Data Exchange (ETDEWEB)
Zamecnik, J. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-02-01
The conversion of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the nitric-glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by SRNL from 2011 to 2015. The goal of this work was to develop empirical correlations for these quantities versus measurable variables from the chemical process, so that they could be predicted a priori from the sludge composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the initial work on these correlations based on the aforementioned data. Further refinement of the models as additional data are collected is recommended.
Dissolution Flowsheet for High Flux Isotope Reactor Fuel
Energy Technology Data Exchange (ETDEWEB)
Daniel, W. E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Rudisill, T. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); O'Rourke, P. E. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Karay, N. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-09-27
As part of the Spent Nuclear Fuel (SNF) processing campaign, H-Canyon is planning to begin dissolving High Flux Isotope Reactor (HFIR) fuel in late FY17 or early FY18. Each HFIR fuel core contains inner and outer fuel elements fabricated from uranium oxide (U3O8) dispersed in a continuous Al phase using traditional powder metallurgy techniques. Fuels fabricated in this manner, like other SNFs processed in H-Canyon, dissolve by the same general mechanisms, with similar gas generation rates and the production of H2. The HFIR fuel cores will be dissolved, and the recovered U will be down-blended into low-enriched U. HFIR fuel was previously processed in H-Canyon using a unique insert in both the 6.1D and 6.4D dissolvers. Multiple cores will be charged to the same dissolver solution, maximizing the concentration of dissolved Al. The objective of this study was to identify flowsheet conditions, through literature review and laboratory experimentation, to safely and efficiently dissolve the HFIR fuel in H-Canyon. Laboratory-scale experiments were performed to evaluate the dissolution of HFIR fuel using both Al 1100 and Al 6061 T6 alloy coupons. The Al 1100 alloy was considered a representative surrogate which provided an upper bound on the generation of flammable (i.e., H2) gas during the dissolution process. The dissolution of the Al 6061 T6 alloy proceeded at a slower rate than the Al 1100 alloy and was used to verify that the target Al concentration in solution could be achieved for the selected Hg concentration. Mass spectrometry and Raman spectroscopy were used to provide continuous monitoring of the concentration of H2 and other permanent gases in the dissolution offgas, allowing the development of H2 generation rate profiles. The H2 generation rates were subsequently used to evaluate whether a full HFIR core could be dissolved in an H-Canyon dissolver without exceeding 60% of the composite lower flammability limit.
Neutron transport model for standard calculation experiment
International Nuclear Information System (INIS)
Lukhminskij, B.E.; Lyutostanskij, Yu.S.; Lyashchuk, V.I.; Panov, I.V.
1989-01-01
The neutron transport calculation algorithms for complex-composition media with a predetermined geometry are realized by multigroup representations within the Monte Carlo method in the MAMONT code. The accuracy of the code was evaluated by comparison with benchmark experiments. Neutron leakage spectra were calculated in spherically symmetric geometry for iron and polyethylene. Use of the MAMONT code for metrological support of geophysical tasks is proposed. The code is oriented towards calculations of neutron transport and secondary nuclide accumulation in blankets and in geophysical media. 7 refs.; 2 figs
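The leakage calculation described above can be illustrated with a toy one-group Monte Carlo estimate of transmission through a slab. This is a minimal sketch of the sampling loop only, not the MAMONT algorithm; the geometry, cross sections, and thickness are all illustrative.

```python
import math, random

def slab_leakage(n, thickness, sigma_t, sigma_s):
    """Fraction of normally incident neutrons transmitted through a slab.
    thickness in cm, cross sections in 1/cm."""
    random.seed(1)
    leaked = 0
    for _ in range(n):
        x, mu = 0.0, 1.0                      # start at entry face, moving inward
        while True:
            # sample a flight path from the exponential free-path distribution
            x += mu * (-math.log(1.0 - random.random()) / sigma_t)
            if x >= thickness:                # escaped through far face: leakage
                leaked += 1
                break
            if x < 0.0:                       # escaped back through entry face
                break
            if random.random() < sigma_s / sigma_t:
                mu = 2.0 * random.random() - 1.0   # isotropic scatter in the lab frame
            else:
                break                         # absorbed
    return leaked / n

p = slab_leakage(20000, thickness=5.0, sigma_t=1.0, sigma_s=0.5)
print(f"transmission fraction: {p:.4f}")
```

A production code like MAMONT replaces the single energy group with multigroup cross sections and the slab with spherical shells, but the sample-collide-tally loop has the same shape.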
Countercurrent reactor design and flowsheet for iodine-sulfur thermochemical water splitting process
International Nuclear Information System (INIS)
Leybros, J.; Carles, Ph.; Borgard, J.M.
2009-01-01
A conceptual design is presented for the I/S process for the production of hydrogen using a high-temperature nuclear heat source to split water. The process includes a countercurrent reactor being developed by CEA within the framework of an international collaboration (I-NERI project) with DOE at General Atomics (San Diego, CA). A ProsimPlus (TM) model of the flowsheet indicates that 600 kJ of high-temperature heat and 69 kJ of electric power are consumed per mole of H2 product (at an assumed pressure of 120 bar). The net thermal efficiency would be 38% (HHV basis) if electric power is available at a conversion efficiency of 45%. (authors)
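The quoted efficiency follows directly from the stated energy inputs. A minimal check, assuming the standard higher heating value of hydrogen (286 kJ/mol), which the abstract does not state:

```python
# Check of the quoted net thermal efficiency for the I/S flowsheet.
# Assumption: HHV of H2 = 286 kJ/mol (standard value, not given in the abstract).
HHV_H2 = 286.0          # kJ per mol H2 (higher heating value)
q_heat = 600.0          # kJ of high-temperature heat per mol H2 (from the abstract)
w_elec = 69.0           # kJ of electric power per mol H2 (from the abstract)
eta_conv = 0.45         # heat-to-electricity conversion efficiency (from the abstract)

# Express the electricity as the primary heat needed to generate it, then
# divide the product's heating value by the total primary heat input.
total_heat = q_heat + w_elec / eta_conv
eta_net = HHV_H2 / total_heat
print(f"{eta_net:.0%}")  # → 38%
```

The result reproduces the 38% (HHV basis) figure reported in the abstract.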
International Nuclear Information System (INIS)
Shekhar Kumar; Koganti, S.B.
2003-07-01
Benchmarking and application of the computer code SIMPSEX for high-plutonium FBR flowsheets were reported recently in an earlier report (IGC-234). Improvements and recompilation of the code (Version 4.01, March 2003) required re-validation against the existing benchmarks as well as additional benchmark flowsheets. Improvements in the high-Pu region (aqueous Pu > 30 g/L) gave better results for the 75% Pu flowsheet benchmark. Below 30 g/L aqueous Pu, results were identical to those from the earlier version (SIMPSEX Version 3, compiled in 1999). In addition, 13 published flowsheets were taken as additional benchmarks. Eleven of these flowsheets cover a wide range of feed concentrations, and a few of them are β-γ active runs with FBR fuels having a wide distribution of burnup and Pu ratios. A published total partitioning flowsheet using externally generated U(IV) was also simulated using SIMPSEX. SIMPSEX predictions were compared with published predictions from the conventional codes SEPHIS, PUMA, PUNE, and PUBG, and were found to be comparable to or better than their results. In addition, recently reported UREX demo results, along with AMUSE simulations, are compared with SIMPSEX predictions. Results of benchmarking SIMPSEX against these 14 benchmark flowsheets are discussed in this report. (author)
Shell model calculations for exotic nuclei
International Nuclear Information System (INIS)
Brown, B.A.; Wildenthal, B.H.
1991-01-01
A review of the shell-model approach to understanding the properties of light exotic nuclei is given. Topics discussed include binding energies in the p and p-sd model spaces and in the sd and sd-pf model spaces; cross-shell excitations around 32Mg, including weak-coupling aspects and mechanisms for lowering the nℏω excitations; beta-decay properties in the neutron-rich sd, p-sd, and sd-pf model spaces and in the proton-rich sd model space; and Coulomb break-up cross sections. (G.P.) 76 refs.; 12 figs
Impact of scaling on the nitric-glycolic acid flowsheet
Energy Technology Data Exchange (ETDEWEB)
Lambert, D. [Savannah River Site (SRS), Aiken, SC (United States)
2016-02-01
Savannah River Remediation (SRR) is considering using glycolic acid as a replacement for formic acid in Sludge Receipt and Adjustment Tank (SRAT) processing in the Defense Waste Processing Facility (DWPF). Catalytic decomposition of formic acid is responsible for the generation of hydrogen, a potentially flammable gas, during processing. To prevent the formation of a flammable mixture in the offgas, an air purge is used to dilute the hydrogen concentration below 60% of the Composite Lower Flammability Limit (CLFL). The offgas is continuously monitored for hydrogen using gas chromatographs (GCs). Since formic acid is much more volatile and toxic than glycolic acid, a formic acid spill would release much larger quantities to the environment. Switching from formic acid to glycolic acid is expected to eliminate the hydrogen flammability hazard, leading to lower air purges, allowing the Safety Significant GCs to be downgraded to Process Support GCs, and minimizing the consequence of a glycolic acid tank leak in DWPF. Overall, this leads to a reduction in process operation costs and an increase in safety margin. Experiments were completed at three different scales to demonstrate that the nitric-glycolic acid flowsheet scales from the 4-L lab scale to the 22-L bench scale and the 220-L engineering scale. Ten process demonstrations of the sludge-only flowsheet for SRAT and Slurry Mix Evaporator (SME) cycles were performed using Sludge Batch 8 (SB8)-Tank 40 simulant. No Actinide Removal Process (ARP) product or strip effluent was added during the runs. Six experiments were completed at the 4-L scale, two at the 22-L scale, and two at the 220-L scale. Experiments completed at the 4-L scale (100 and 110% acid stoichiometry) were repeated at the 22-L and 220-L scales for scale comparisons.
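The 60%-of-CLFL criterion rests on a composite flammability limit for the fuel mixture in the offgas, conventionally computed with Le Chatelier's mixing rule. A sketch using standard handbook LFL values and a purely hypothetical offgas fuel split (the abstract gives neither):

```python
# Illustrative 60%-of-CLFL action level via Le Chatelier's mixing rule.
# LFL values are standard handbook numbers (vol% in air); the H2/CO fuel
# split below is hypothetical, chosen only to show the calculation.
def composite_lfl(fuel_fractions, lfls):
    """Le Chatelier: LFL_mix = 1 / sum(y_i / LFL_i), y_i = normalized fuel fractions."""
    total = sum(fuel_fractions)
    return 1.0 / sum((y / total) / lfl for y, lfl in zip(fuel_fractions, lfls))

lfl = composite_lfl([0.7, 0.3], [4.0, 12.5])   # H2 (LFL 4.0), CO (LFL 12.5), vol% in air
limit = 0.60 * lfl                             # 60% of CLFL action level
print(f"CLFL = {lfl:.2f} vol%, 60% limit = {limit:.2f} vol%")  # → CLFL = 5.03 vol%, 60% limit = 3.02 vol%
```

The air purge is then sized so that the measured total fuel concentration stays below `limit` under worst-case generation rates.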
Uncertainty calculation in transport models and forecasts
DEFF Research Database (Denmark)
Manzo, Stefano; Prato, Carlo Giacomo
Forthcoming: European Journal of Transport and Infrastructure Research, 15-3, 64-72. The last paper examined uncertainty in the spatial composition of residence and workplace locations in the Danish National Transport Model. Despite the evidence that spatial structure influences travel behaviour [...], accounting for uncertainty can increase the quality of the decision process and help develop robust or adaptive plans. In fact, project evaluation processes that do not take model uncertainty into account produce results that are not fully informative and are potentially misleading, increasing the risk inherent in the decision to be taken [...]
Temperature Calculations in the Coastal Modeling System
2017-04-01
[...] with the change of water turbidity in coastal and estuarine systems. Water quality and ecological models often require input of water temperature [...]
Shafiee, Alireza
2016-06-25
In optimization-based process flowsheet synthesis, optimization methods, including genetic algorithms (GAs), are used as advantageous tools to select a high-performance flowsheet by 'screening' large numbers of possible flowsheets. In this study, we expand the role of the GA to include flowsheet generation by proposing a modified Greedy sub-tour crossover operator. Performance of the proposed crossover operator is compared with four other commonly used operators. The proposed GA optimization-based process synthesis method is applied to generate the optimum process flowsheet for a multicomponent membrane-based CO2 capture process. Within the defined constraints and using the random-point crossover, a CO2 purity of 0.827 (equivalent to 0.986 on a dry basis) is achieved, a 3.4% improvement over the simplest crossover operator applied. In addition, the least variability in the converged flowsheet and CO2 purity is observed for the random-point crossover operator, which approximately implies closeness of the solution to the global optimum and hence the consistency of the algorithm. The proposed crossover operator is found to improve the convergence speed of the algorithm by 77.6%.
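The wet and dry-basis purities quoted above are related by removing water vapour from the denominator of the mole fraction. A small check of that conversion; the water fraction is inferred from the two reported numbers, not reported in the abstract:

```python
# Wet-to-dry basis conversion for a reported CO2 permeate purity.
# Dry basis excludes water vapour from the denominator of the mole fraction.
def dry_basis(x_co2_wet, x_h2o):
    """Convert a wet-basis CO2 mole fraction to dry basis given the water fraction."""
    return x_co2_wet / (1.0 - x_h2o)

# The abstract reports 0.827 wet and 0.986 dry; together these imply the
# water mole fraction in the permeate (an inferred, not reported, value).
x_h2o = 1.0 - 0.827 / 0.986
print(f"implied water fraction: {x_h2o:.3f}")              # → implied water fraction: 0.161
print(f"dry-basis purity: {dry_basis(0.827, x_h2o):.3f}")  # → dry-basis purity: 0.986
```

The two reported purities are therefore mutually consistent for a permeate that is about 16% water by mole.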
Development of new model for high explosives detonation parameters calculation
Directory of Open Access Journals (Sweden)
Jeremić Radun
2012-01-01
A simple semi-empirical model for calculating the detonation pressure and velocity of CHNO explosives has been developed, based on experimental values of detonation parameters. The model uses Avakyan's method to determine the chemical composition of the detonation products and is applicable over a wide range of densities. Compared with the well-known Kamlet method and a numerical detonation model based on the BKW equation of state, the values calculated with the proposed model are significantly more accurate.
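For reference, the "well-known Kamlet method" the abstract compares against is usually written as the Kamlet-Jacobs relations. A sketch with illustrative inputs (the values below are representative of a TNT-like explosive, not taken from the paper):

```python
import math

def kamlet_jacobs(rho0, N, M, Q):
    """Kamlet-Jacobs estimates of detonation velocity and C-J pressure.
    rho0: loading density, g/cm3; N: moles of gaseous products per gram;
    M: mean molar mass of gaseous products, g/mol; Q: heat of detonation, cal/g.
    Returns (D in km/s, P in kbar)."""
    phi = N * math.sqrt(M) * math.sqrt(Q)          # the Kamlet "phi" parameter
    D = 1.01 * math.sqrt(phi) * (1.0 + 1.30 * rho0)  # detonation velocity
    P = 15.58 * rho0**2 * phi                        # Chapman-Jouguet pressure
    return D, P

# Illustrative inputs only (roughly TNT-like; not from the paper)
D, P = kamlet_jacobs(rho0=1.64, N=0.025, M=27.0, Q=1250.0)
print(f"D = {D:.2f} km/s, P = {P:.0f} kbar")
```

Semi-empirical models of the kind the abstract describes aim to beat these closed-form estimates by computing the product composition (here via Avakyan's method) rather than assuming it.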
Precipitates/Salts Model Calculations for Various Drift Temperature Environments
International Nuclear Information System (INIS)
Marnier, P.
2001-01-01
The objective and scope of this calculation is to assist Performance Assessment Operations and the Engineered Barrier System (EBS) Department in modeling the geochemical effects of evaporation within a repository drift. This work is developed and documented using procedure AP-3.12Q, Calculations, in support of ''Technical Work Plan For Engineered Barrier System Department Modeling and Testing FY 02 Work Activities'' (BSC 2001a). The primary objective of this calculation is to predict the effects of evaporation on the abstracted water compositions established in ''EBS Incoming Water and Gas Composition Abstraction Calculations for Different Drift Temperature Environments'' (BSC 2001c). A secondary objective is to predict evaporation effects on observed Yucca Mountain waters for subsequent cement interaction calculations (BSC 2001d). The Precipitates/Salts model is documented in an Analysis/Model Report (AMR), ''In-Drift Precipitates/Salts Analysis'' (BSC 2001b)
International Nuclear Information System (INIS)
Bell, J.T.; Burch, W.D.; Collins, E.D.; Forsberg, C.W.; Prince, B.E.; Bond, W.D.; Campbell, D.O.; Delene, J.G.; Mailen, J.C.
1990-08-01
A preliminary study by a group of experts at ORNL has generated and evaluated a number of aqueous and non-aqueous flowsheets for recovering transuranium actinides from LWR fuel for use as fuel in an LMR and, at the same time, for transmutation of the wastes to less hazardous materials. The need for proliferation resistance was a consideration in the flowsheets. The current state of development of the flowsheets was evaluated and recommendations for additional study were made. 3 refs., 6 figs
An analysis of alternative New Technical Strategy flowsheets for tank waste remediation system
International Nuclear Information System (INIS)
Booker, C.P.
1994-01-01
The Hanford tank remediation plans have gone through several revisions in search of the best waste processing system. Some designs have been complex, while others have been fairly simple. One of the key means of understanding and selecting among the various proposed systems is discrete-event modeling of the system. This modeling provides insight into (1) the total required size of the system; (2) the amount of material, such as reagents and other added materials, that must be supplied; (3) the final mass of waste that must be stored; and (4) areas within the system where a small change can greatly affect the total system. Discrete-event modeling also provides the means by which various proposed systems may be compared. It is the framework in which variations within a particular system may be explored and compared to other instantiations. This study examines the current New Technical Strategy flowsheet system with discrete-event modeling. Some of the possible variations within that system are examined and compared. Further, a previously proposed, more complex system is examined.
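A discrete-event model of the kind described can be sketched in a few lines. The batch masses, processing time, and reagent ratio below are hypothetical, chosen only to show how insights (2) and (3), added-material demand and final stored mass, fall out of the event loop:

```python
import heapq

def simulate(batches, process_h_per_batch, reagent_per_mass):
    """Serve waste batches in arrival order through a single process step.
    batches: masses (t) arriving one hour apart; reagent_per_mass: t of
    reagent added per t of waste. Returns (finish time h, stored mass t, reagent t)."""
    events = [(float(i), mass) for i, mass in enumerate(batches)]  # (arrival, mass)
    heapq.heapify(events)                         # event queue ordered by time
    clock, total_out, reagent = 0.0, 0.0, 0.0
    while events:
        t, mass = heapq.heappop(events)           # next arrival
        clock = max(clock, t) + process_h_per_batch   # wait if busy, then process
        reagent += mass * reagent_per_mass            # insight (2): added material
        total_out += mass + mass * reagent_per_mass   # insight (3): final stored mass
    return clock, total_out, reagent

clock, out, reagent = simulate([10.0, 12.0, 8.0], 4.0, 0.25)
print(clock, out, reagent)   # → 12.0 37.5 7.5
```

Varying `reagent_per_mass` or the per-batch processing time then shows insight (4): where a small change propagates strongly through the whole system.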
International Nuclear Information System (INIS)
Penwell, D.L.
1994-01-01
This preliminary flowsheet document describes an ion exchange process which uses resorcinol-formaldehyde (R-F) resin to remove cesium from Hanford tank waste. The flowsheet describes one possible equipment configuration, and contains mass balances based on that configuration with feeds of Neutralized Current Acid Waste, and Double Shell Slurry Feed. The flowsheet also discusses process alternatives, unresolved issues, and development needs associated with the ion exchange process. It is expected that this flowsheet will evolve as open issues are resolved and progress is made on development needs. This is part of the Tank Waste Remediation Program at Hanford. 26 refs, 6 figs, 25 tabs
Defense Waste Processing Facility Nitric-Glycolic Flowsheet Chemical Process Cell Chemistry: Part 2
Energy Technology Data Exchange (ETDEWEB)
Zamecnik, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-06-06
The conversion of nitrite to nitrate, the destruction of glycolate, and the conversion of glycolate to formate and oxalate were modeled for the nitric-glycolic flowsheet using data from Chemical Process Cell (CPC) simulant runs conducted by the Savannah River National Laboratory (SRNL) from 2011 to 2016. The goal of this work was to develop empirical correlations to predict these quantities a priori from the sludge or simulant composition and measurable processing variables. The need for these predictions arises from the need to predict the REDuction/OXidation (REDOX) state of the glass from the Defense Waste Processing Facility (DWPF) melter. This report summarizes the work on these correlations based on the aforementioned data. Previous work on these correlations was documented in a technical report covering data from 2011-2015; the current report supersedes that report. Further refinement of the models as additional data are collected is recommended.
In-Drift Microbial Communities Model Validation Calculations
Energy Technology Data Exchange (ETDEWEB)
D. M. Jolley
2001-09-24
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with scientific measurements of microbial populations at the site and in the laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculation
Energy Technology Data Exchange (ETDEWEB)
D. M. Jolley
2001-10-31
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with scientific measurements of microbial populations at the site and in the laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
In-Drift Microbial Communities Model Validation Calculations
International Nuclear Information System (INIS)
Jolley, D.M.
2001-01-01
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with scientific measurements of microbial populations at the site and in the laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
Energy Technology Data Exchange (ETDEWEB)
D.M. Jolley
2001-12-18
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with scientific measurements of microbial populations at the site and in the laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.
IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS
International Nuclear Information System (INIS)
D.M. Jolley
2001-01-01
The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with scientific measurements of microbial populations at the site and in the laboratory and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System (EBS) Department process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data
The accuracy of heavy ion optical model calculations
International Nuclear Information System (INIS)
Kozik, T.
1980-01-01
The sources and magnitude of numerical errors in heavy-ion optical-model calculations are investigated in detail and illustrated with the example of 20Ne + 24Mg scattering at E(LAB) = 100 MeV. (author)
Modeling and Calculator Tools for State and Local Transportation Resources
Air quality models, calculators, guidance and strategies are offered for estimating and projecting vehicle air pollution, including ozone or smog-forming pollutants, particulate matter and other emissions that pose public health and air quality concerns.
Chemical flowsheet conditions for preparing urania spheres by internal gelation
International Nuclear Information System (INIS)
Haas, P.A.; Begovich, J.M.; Ryon, A.D.; Vavruska, J.S.
1979-01-01
Small, ceramic urania spheres can be prepared for use as nuclear fuel by internal chemical gelation of uranyl nitrate solution droplets. Decomposition of hexamethylenetetramine (HMTA) dissolved in the uranyl nitrate solution releases ammonia to precipitate hydrated UO3. Previously established flowsheet conditions have been improved and modified at ORNL and have been applied to prepare dense UO2 spheres with average diameters of 1200, 300, and 30 μm. Acid-deficient uranyl nitrate (ADUN) solutions up to 3.4 M in uranium with NO3-/U mole ratios of 1.5 to 1.7 are prepared by dissolution of U3O8 or UO3. Continuous mixing of metered, cooled ADUN (containing urea) and HMTA solutions provides a smooth, regulated flow of the temperature-sensitive feed solution. The gelation times for solution drops in organic liquids at 45 to 95°C depend on both the chemical reaction rates and the rates of heat transfer. The gel properties vary with temperature and other gelation variables. Gelation conditions were determined which allow easy washing, drying, firing, and sintering to produce dense UO2 spheres of all three sizes. The 1200- and 300-μm UO2 spheres were prepared by gelation in trichloroethylene at 50 to 65°C; 2-ethyl-1-hexanol was used as the gelation medium to prepare the 30-μm UO2 spheres. Washing and drying requirements were determined. The gel dried to 225°C contains about 95% UO3; the remaining components are H2O and NH3, which are volatilized during firing to UO2.
A methodology for constructing the calculation model of scientific spreadsheets
Vos, de M.; Wielemaker, J.; Schreiber, G.; Wielinga, B.; Top, J.L.
2015-01-01
Spreadsheet models are frequently used by scientists to analyze research data. These models are typically described in a paper or report, which serves as the single source of information on the underlying research project. As the calculation workflow in these models is not made explicit, readers are [...]
Mathematical models for calculating radiation dose to the fetus
International Nuclear Information System (INIS)
Watson, E.E.
1992-01-01
Estimates of radiation dose from radionuclides inside the body are calculated on the basis of energy deposition in mathematical models representing the organs and tissues of the human body. Complex models may be used with radiation transport codes to calculate the fraction of emitted energy that is absorbed in a target tissue even at a distance from the source. Other models may be simple geometric shapes for which absorbed fractions of energy have already been calculated. Models of Reference Man, the 15-year-old (Reference Woman), the 10-year-old, the five-year-old, the one-year-old, and the newborn have been developed and used for calculating specific absorbed fractions (absorbed fractions of energy per unit mass) for several different photon energies and many different source-target combinations. The Reference Woman model is adequate for calculating energy deposition in the uterus during the first few weeks of pregnancy. During the course of pregnancy, the embryo/fetus increases rapidly in size and thus requires several models for calculating absorbed fractions. In addition, the increases in size and changes in shape of the uterus and fetus result in the repositioning of the maternal organs and in different geometric relationships among the organs and the fetus. This is especially true of the excretory organs such as the urinary bladder and the various sections of the gastrointestinal tract. Several models have been developed for calculating absorbed fractions of energy in the fetus, including models of the uterus and fetus for each month of pregnancy and complete models of the pregnant woman at the end of each trimester. In this paper, the available models and the appropriate use of each will be discussed. (Author) 19 refs., 7 figs
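The dose quantity these phantom models are used to tabulate combines, in the MIRD formalism, the cumulated activity with energy-weighted specific absorbed fractions: D = Ã Σᵢ Eᵢ yᵢ Φᵢ. A sketch of that sum; the activity, emission, and SAF values are illustrative, not from any tabulation:

```python
# MIRD-style mean absorbed dose from specific absorbed fractions (SAFs).
# All numerical inputs below are illustrative, not taken from the paper.
def mean_dose(A_tilde, emissions, saf):
    """A_tilde: cumulated activity, Bq*s (decays); emissions: list of
    (energy_MeV, yield_per_decay); saf: specific absorbed fraction, 1/kg,
    for each emission (target <- source). Returns mean dose in mGy."""
    MEV_TO_J = 1.602e-13
    # D = A_tilde * sum_i E_i * y_i * SAF_i   gives J/kg = Gy; report mGy
    return A_tilde * sum(E * y * f for (E, y), f in zip(emissions, saf)) * MEV_TO_J * 1e3

# One hypothetical 364 keV photon emission with 82% yield, SAF = 1e-2 /kg
d = mean_dose(A_tilde=1e9, emissions=[(0.364, 0.82)], saf=[1e-2])
print(f"{d:.2e} mGy")  # → 4.78e-04 mGy
```

The phantom-specific work in the abstract is entirely in producing the SAF values for each source-target pair and stage of pregnancy; the dose sum itself stays this simple.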
Effective UV radiation from model calculations and measurements
Feister, Uwe; Grewe, Rolf
1994-01-01
Model calculations have been made to simulate the effect of atmospheric ozone and geographical as well as meteorological parameters on solar UV radiation reaching the ground. Total ozone values as measured by Dobson spectrophotometer and Brewer spectrometer, as well as turbidity, were used as input to the model calculation. The performance of the model was tested by spectroradiometric measurements of solar global UV radiation at Potsdam. There are small differences that can be explained by the uncertainty of the measurements, by the uncertainty of the input data to the model, and by the uncertainty of the radiative transfer algorithms of the model itself. Some effects of solar radiation on the biosphere and on air chemistry are discussed. Model calculations and spectroradiometric measurements can be used to study variations of the effective radiation in space and time. The comparability of action spectra and their uncertainties are also addressed.
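The biologically effective radiation discussed here is obtained by weighting the spectral irradiance with an action spectrum and integrating over wavelength. A minimal sketch of that weighting, using hypothetical spectral values rather than Potsdam measurements:

```python
def effective_irradiance(wavelengths_nm, irradiance, weights):
    """Trapezoidal integration of E(lambda) * s(lambda) over wavelength."""
    e_eff = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = irradiance[i] * weights[i]
        f1 = irradiance[i + 1] * weights[i + 1]
        e_eff += 0.5 * (f0 + f1) * dw
    return e_eff

wl = [300.0, 310.0, 320.0]   # nm
e = [0.02, 0.10, 0.25]       # spectral irradiance, W m^-2 nm^-1 (hypothetical)
s = [0.65, 0.07, 0.01]       # relative action-spectrum weights (hypothetical)

print(effective_irradiance(wl, e, s))  # biologically effective irradiance, W m^-2
```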
Model for calculating the boron concentration in PWR type reactors
International Nuclear Information System (INIS)
Reis Martins Junior, L.L. dos; Vanni, E.A.
1986-01-01
A PWR boron concentration model has been developed for use with the RETRAN code. The concentration model calculates the boron mass balance in the primary circuit as the injected boron mixes and is transported through the circuit. RETRAN control blocks are used to calculate the boron concentration in fluid volumes during steady-state and transient conditions. The boron reactivity worth is obtained from the core concentration and used in the RETRAN point kinetics model. An FSAR-type analysis of a Steam Line Break Accident in the Angra 1 plant was selected to test the model, and the results obtained indicate successful performance. (Author) [pt]
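The mass balance such control blocks evaluate can be illustrated with a perfectly mixed volume: injection flow q at concentration c_in displaces an equal outflow, giving dC/dt = (q/V)(c_in - C). This is a hedged sketch under those simple assumptions; the numbers are illustrative, not Angra 1 data:

```python
def boron_concentration(c0, c_in, q, v, t, dt=0.01):
    """Explicit-Euler integration of the mixed-volume boron balance (ppm).

    c0: initial concentration, c_in: injection concentration (ppm),
    q: injection flow (m^3/s), v: mixed volume (m^3), t: time (s).
    """
    c = c0
    for _ in range(int(t / dt)):
        c += dt * (q / v) * (c_in - c)
    return c

# 10 m^3/s injection of 2000 ppm borated water into a 200 m^3 mixed loop:
print(boron_concentration(c0=0.0, c_in=2000.0, q=10.0, v=200.0, t=60.0))
```

After 60 s the result approaches the analytic value 2000·(1 − e^(−qt/V)) ≈ 1900 ppm.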
HOM study and parameter calculation of the TESLA cavity model
Zeng, Ri-Hua; Gerigk, Frank; Wang, Guang-Wei; Wegner, Rolf; Liu, Rong; Schuh, Marcel
2010-01-01
The Superconducting Proton Linac (SPL) is the project for a superconducting, high-current H⁻ accelerator at CERN. To find dangerous higher order modes (HOMs) in the SPL superconducting cavities, simulation and analysis of the cavity model using simulation tools are necessary. The existing TESLA 9-cell cavity geometry data have been used for the initial construction of the models in HFSS. Monopole, dipole and quadrupole modes have been obtained by applying different symmetry boundaries on various cavity models. Scripting in HFSS was used to automatically calculate the parameters of the modes in these cavity models (the scripts are also applicable to other cavities with different cell numbers and geometric structures). The automatically calculated results are then compared with the values given in the TESLA paper. The optimized cavity model with the minimum error will be taken as the base for further simulation of the SPL cavities.
Microbial Communities Model Parameter Calculation for TSPA/SR
Energy Technology Data Exchange (ETDEWEB)
D. Jolley
2001-07-16
This calculation has several purposes. First, the calculation reduces the information contained in ''Committed Materials in Repository Drifts'' (BSC 2001a) to useable parameters required as input to MING V1.0 (CRWMS M&O 1998, CSCI 30018 V1.0) for calculation of the effects of potential in-drift microbial communities as part of the microbial communities model. The calculation is intended to replace the parameters found in Attachment II of the current In-Drift Microbial Communities Model revision (CRWMS M&O 2000c), with the exception of Section 11-5.3. Second, this calculation provides the information necessary to supersede DTN: M09909SPAMING1.003 and replace it with a new qualified dataset (see Table 6.2-1). The purpose of this calculation is to create the revised qualified parameter input for MING that will allow ΔG (Gibbs free energy) to be corrected for long-term changes to the temperature of the near-field environment. Calculated herein are the quadratic, or second-order, regression relationships that are used in the energy-limiting calculations of potential growth of microbial communities in the in-drift geochemical environment. Third, the calculation performs an impact review of a new DTN: M00012MAJIONIS.000 that is intended to replace the currently cited DTN: GS9809083 12322.008 for the water chemistry data used in the current ''In-Drift Microbial Communities Model'' revision (CRWMS M&O 2000c). Finally, the calculation updates the material lifetimes reported in Table 32 in Section 6.5.2.3 of the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000c) based on the inputs reported in BSC (2001a). Changes include adding new specified materials and updating old materials information that has changed.
batman: BAsic Transit Model cAlculatioN in Python
Kreidberg, Laura
2015-11-01
I introduce batman, a Python package for modeling exoplanet transit light curves. The batman package supports calculation of light curves for any radially symmetric stellar limb darkening law, using a new integration algorithm for models that cannot be quickly calculated analytically. The code uses C extension modules to speed up model calculation and is parallelized with OpenMP. For a typical light curve with 100 data points in transit, batman can calculate one million quadratic limb-darkened models in 30 seconds with a single 1.7 GHz Intel Core i5 processor. The same calculation takes seven minutes using the four-parameter nonlinear limb darkening model (computed to 1 ppm accuracy). Maximum truncation error for integrated models is an input parameter that can be set as low as 0.001 ppm, ensuring that the community is prepared for the precise transit light curves we anticipate measuring with upcoming facilities. The batman package is open source and publicly available at https://github.com/lkreidberg/batman .
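The analytic core of such transit models, the flux drop of a uniform stellar disk occulted by a planet, can be sketched in a few lines. This stdlib sketch shows only the circle-overlap geometry; batman itself adds limb darkening and its fast integration scheme on top of it:

```python
import math

def uniform_disk_flux(p, z):
    """Relative flux for planet/star radius ratio p at projected separation z
    (in units of the stellar radius), for a uniform stellar disk."""
    if z >= 1.0 + p:                      # planet entirely off the stellar disk
        return 1.0
    if z <= 1.0 - p:                      # planet entirely inside the disk
        return 1.0 - p * p
    # partial overlap: lens-shaped intersection area of the two circles
    k0 = math.acos((p * p + z * z - 1.0) / (2.0 * p * z))
    k1 = math.acos((1.0 - p * p + z * z) / (2.0 * z))
    area = (p * p * k0 + k1
            - 0.5 * math.sqrt(4.0 * z * z - (1.0 + z * z - p * p) ** 2))
    return 1.0 - area / math.pi

print(uniform_disk_flux(0.1, 2.0))   # 1.0   (out of transit)
print(uniform_disk_flux(0.1, 0.0))   # 0.99  (full transit depth p^2)
```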
Energy Technology Data Exchange (ETDEWEB)
Lambert, Dan P.; Stone, Michael E.; Newell, J. David; Fellinger, Terri L.; Bricker, Jonathan M.
2012-09-14
The Defense Waste Processing Facility (DWPF) processes legacy nuclear waste generated at the Savannah River Site (SRS) during Cold War production of plutonium and tritium. The nuclear waste is first treated via a complex sequence of controlled chemical reactions and then vitrified into a borosilicate glass form and poured into stainless steel canisters. Converting the nuclear waste into borosilicate glass canisters is a safe, effective way to reduce the volume of the waste and stabilize the radionuclides. Testing was initiated to determine whether eliminating formic acid from the DWPF's chemical processing flowsheet would eliminate catalytic hydrogen generation. Historically, hydrogen has been generated during chemical processing of alkaline High Level Waste sludge in DWPF. In current processing, sludge is combined with nitric and formic acid to neutralize the waste, reduce mercury and manganese, destroy nitrite, and modify (thin) the slurry rheology. Noble-metal-catalyzed formic acid decomposition produces hydrogen and carbon dioxide. Replacing formic acid with glycolic acid has the potential to eliminate catalytic hydrogen production. Flowsheet testing was performed to develop the nitric-glycolic acid flowsheet as an alternative to the nitric-formic acid flowsheet currently processed at the DWPF. The new flowsheet has shown that mercury can be reduced and removed by steam stripping in DWPF with no catalytic hydrogen generation. All processing objectives were also met, including greatly reducing the Slurry Mix Evaporator (SME) product yield stress compared to the baseline nitric-formic flowsheet. Ten DWPF tests were performed with nonradioactive simulants designed to cover a broad compositional range. No hydrogen was generated in testing without formic acid.
Microscopic interacting boson model calculations for even–even ...
Indian Academy of Sciences (India)
One of the goals of the present study is to test interacting boson model calculations in the mass region of A ≈ 130 by comparing them with some previous experimental and theoretical results. The interacting boson model offers a simple Hamiltonian, capable of describing collective nuclear properties across a wide range of ...
Calculating gait kinematics using MR-based kinematic models.
Scheys, Lennart; Desloovere, Kaat; Spaepen, Arthur; Suetens, Paul; Jonkers, Ilse
2011-02-01
Rescaling generic models is the most frequently applied approach to generating biomechanical models for inverse kinematics. Nevertheless, it is well known that this procedure introduces errors in calculated gait kinematics due to: (1) errors associated with palpation of anatomical landmarks, and (2) inaccuracies in the definition of joint coordinate systems. Based on magnetic resonance (MR) images, more accurate, subject-specific kinematic models can be built that are significantly less sensitive to both error types. We studied the difference between the two modelling techniques by quantifying differences in calculated hip and knee joint kinematics during gait. In a clinically relevant patient group of 7 pediatric cerebral palsy (CP) subjects with increased femoral anteversion, gait kinematics were calculated using (1) rescaled generic kinematic models and (2) subject-specific MR-based models. In addition, both sets of kinematics were compared to those obtained using the standard clinical data processing workflow. Inverse kinematics calculated using rescaled generic models or the standard clinical workflow differed substantially from kinematics calculated using subject-specific MR-based kinematic models. The kinematic differences were most pronounced in the sagittal and transverse planes (hip and knee flexion, hip rotation). This study shows that MR-based kinematic models improve the reliability of gait kinematics compared to generic models based on normal subjects. This is especially the case in CP subjects, where bony deformations may alter the relative configuration of joint coordinate systems. Whilst high cost impedes the implementation of this modelling technique, our results demonstrate that efforts should be made to improve the level of subject-specific detail in joint axes determination. Copyright © 2010 Elsevier B.V. All rights reserved.
Optimizing the calculation grid for atmospheric dispersion modelling.
Van Thielen, S; Turcanu, C; Camps, J; Keppens, R
2015-04-01
This paper presents three approaches to finding optimized grids for atmospheric dispersion measurements and calculations in emergency planning. This can be useful for deriving optimal positions for mobile monitoring stations, or help to reduce discretization errors and improve recommendations. Indeed, threshold-based recommendations or conclusions may differ strongly depending on the shape and size of the grid on which atmospheric dispersion measurements or calculations of pollutants are based. Therefore, relatively sparse grids that retain as much information as possible are required. The grid optimization procedure proposed here is first demonstrated with a simple Gaussian plume model, as adopted in atmospheric dispersion calculations, which provides fast calculations. The optimized grids are compared to the Noodplan grid, currently used for emergency planning in Belgium, and to the exact solution. We then demonstrate how the procedure can be used in more realistic dispersion models. Copyright © 2015 Elsevier Ltd. All rights reserved.
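The Gaussian plume evaluations that such a grid must support reduce, at ground level with total reflection, to a closed-form expression. A hedged sketch: the dispersion parameters sigma_y and sigma_z are passed in directly rather than derived from stability classes, and all numbers are illustrative:

```python
import math

def plume_ground_conc(q, u, sigma_y, sigma_z, y, h):
    """Ground-level concentration (g/m^3) at crosswind offset y (m), for a
    release of q (g/s) at effective height h (m) in wind speed u (m/s),
    assuming total reflection at the ground."""
    return (q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y * y / (2.0 * sigma_y ** 2))
            * math.exp(-h * h / (2.0 * sigma_z ** 2)))

# Centreline vs. 100 m off-axis for a 1 g/s release from a 50 m stack:
c0 = plume_ground_conc(q=1.0, u=5.0, sigma_y=80.0, sigma_z=40.0, y=0.0, h=50.0)
c1 = plume_ground_conc(q=1.0, u=5.0, sigma_y=80.0, sigma_z=40.0, y=100.0, h=50.0)
print(c0, c1)  # concentration falls off away from the plume centreline
```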
FLOWSHEET FOR ALUMINUM REMOVAL FROM SLUDGE BATCH 6
International Nuclear Information System (INIS)
Pike, J.; Gillam, J.
2008-01-01
Samples of Tank 12 sludge slurry show a substantially larger fraction of aluminum than originally identified in sludge batch planning. The Liquid Waste Organization (LWO) plans to formulate Sludge Batch 6 (SB6) with about one half of the sludge slurry in Tank 12 and one half of the sludge slurry in Tank 4. LWO identified aluminum dissolution as a method to mitigate the effect of having about 50% more solids in High Level Waste (HLW) sludge than previously planned. A previous aluminum dissolution in an HLW tank in 1982 was performed at approximately 85 C for 5 days and dissolved nearly 80% of the aluminum in the sludge slurry. In 2008, LWO successfully dissolved 64% of the aluminum at approximately 60 C in 46 days, with minimal tank modifications and using only slurry pumps as a heat source. This report establishes the technical basis and flowsheet for performing an aluminum removal process in Tank 51 for SB6 that incorporates the lessons learned from previous aluminum dissolution evolutions. For SB6, the aluminum dissolution process temperature will be held at a minimum of 65 C for at least 24 days, but for as long as practical or until as much as 80% of the aluminum is dissolved. As planned, an aluminum removal process can reduce the aluminum in SB6 from about 84,500 kg to as little as 17,900 kg, with a corresponding reduction of total insoluble solids in the batch from 246,000 kg to 131,000 kg. The extent of the reduction may be limited by the time available to maintain Tank 51 at dissolution temperature. Based on the known variability in dissolution kinetics, the extent of dissolution in four weeks can range from 44% to more than 80%. At 44% of the aluminum dissolved, the mass reduction is approximately half of the mass noted above, i.e., 33,300 kg of aluminum instead of 66,600 kg. Planning to reach 80% of the aluminum dissolved should allow a maximum of 81 days for dissolution, with the allowance reduced if test data show faster kinetics. 47,800 kg of the dissolved
Approximate dynamic fault tree calculations for modelling water supply risks
International Nuclear Information System (INIS)
Lindhe, Andreas; Norberg, Tommy; Rosén, Lars
2012-01-01
Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome its limitations, dynamic fault tree (DFT) analysis has been suggested in the literature, along with different approaches for solving DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and, in particular, the calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex, with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and for one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems, since they provide additional and important information and are simple and practically applicable.
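For the simplest steady-state view of the OR- and AND-gates the paper extends, a standard Monte Carlo estimate can be sketched as follows. This reduces failure rates and downtimes to plain unavailabilities q_i, so it illustrates the simulation idea only, not the paper's Markov-based gates:

```python
import random

def mc_gate_probability(qs, gate, n=200_000, seed=1):
    """Monte Carlo estimate of top-event probability for an OR- or AND-gate
    over basic events with steady-state unavailabilities qs."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n):
        states = [rng.random() < q for q in qs]   # True = component down
        failures += any(states) if gate == "OR" else all(states)
    return failures / n

q = [0.05, 0.02]
or_p = mc_gate_probability(q, "OR")    # exact value: 1 - (1-0.05)(1-0.02) = 0.069
and_p = mc_gate_probability(q, "AND")  # exact value: 0.05 * 0.02 = 0.001
print(or_p, and_p)
```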
Summary of Calculation Performed with NPIC's New FGR Model
International Nuclear Information System (INIS)
Jiao Yongjun; Li Wenjie; Zhou Yi; Xing Shuo
2013-01-01
1. Introduction The NPIC modeling group has performed calculations on both real cases and idealized cases in the FUMEX II and III data packages. The performance code used is COPERNIC 2.4, developed by AREVA, to which a new FGR model has been added. A comparison study has therefore been made between the Bernard model (V2.2) and the new model, in order to evaluate the performance of the new model. As mentioned before, the focus of our study lies in thermal fission gas release, or more specifically grain boundary bubble behavior. 2. Calculation method There are some differences between the calculated burnup and the measured burnup in many real cases. Since FGR is significantly dependent on rod average burnup, a multiplicative factor on fuel rod linear power, i.e. FQE, is applied and adjusted in the calculations to ensure the calculated burnup generally equals the measured burnup. Also, a multiplicative factor on upper plenum volume, i.e. AOPL, is applied and adjusted in the calculations to ensure the calculated free volume equals the pre-irradiation data for total free volume in the rod. Cladding temperatures were entered if they were provided; otherwise the cladding temperatures are calculated from the inlet coolant temperature. The results are presented in Excel form as an attachment to this paper, including thirteen real cases and three idealized cases. Three real cases (BK353, BK370, US PWR TSQ022) are excluded from validation of the new model, because the predicted athermal release is greater than the measured release, which implies a negative thermal release. Obviously this is not reasonable for validation, but the results are also listed in the Excel file (sheet 'Cases excluded from validation'). 3. Results The results of 10 real cases are listed in sheet 'Steady case summary', which summarizes measured and predicted values of Bu and FGR for each case, and plots the M/P ratio of the FGR calculation by different models in COPERNIC. A statistical comparison was also made with three indexes, i
Model calculations of groundwater conditions on Sternoe peninsula
International Nuclear Information System (INIS)
Axelsson, C.-L.; Carlsson, L.
1979-09-01
The groundwater conditions within the bedrock of Sternoe were calculated using a two-dimensional FEM model. Five sections were laid out over the area. The sections had a depth of five km and lengths between two and six km. First the piezometric head was calculated in two major tectonic zones, where the hydraulic conductivity was set to 10⁻⁶ m/s. In the other sections, two of which cross the tectonic zones, the bedrock was assumed to have hydraulic conductivities of 10⁻⁸ m/s in the uppermost 300 m and 10⁻¹¹ m/s in the rest. From the maps of the piezometric head obtained, the flow time was calculated for the groundwater from 500 meters depth to a tectonic zone or to the 300 meter level below the sea. This calculation was performed for two sections, both with and without tectonic zones. The influence of groundwater discharge from a well at one point in one of the tectonic zones was also calculated. The kinematic porosity was assumed to be 10⁻⁴. The results showed that the flow time varied between 1,000 and 500,000 years within the area, with the exception of the zone within 100 m of any of the tectonic zones. For further calculations the use of three-dimensional models was proposed. (Auth.)
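The flow-time estimate behind such results follows from Darcy's law: seepage velocity v = K·i/n_e and travel time t = L/v. A back-of-envelope sketch with illustrative values (not the Sternoe input data) lands inside the report's 1,000 to 500,000 year range:

```python
SECONDS_PER_YEAR = 3.156e7

def travel_time_years(length_m, k_m_s, gradient, porosity):
    """Travel time along a flow path: t = L / (K*i/n_e), in years."""
    seepage_velocity = k_m_s * gradient / porosity   # m/s
    return length_m / seepage_velocity / SECONDS_PER_YEAR

# 200 m path in tight rock (K = 1e-11 m/s), 1% head gradient,
# kinematic porosity 1e-4 as assumed in the report:
print(travel_time_years(200.0, 1e-11, 0.01, 1e-4))  # several thousand years
```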
Precision calculations in supersymmetric extensions of the Standard Model
International Nuclear Information System (INIS)
Slavich, P.
2013-01-01
This dissertation is organized as follows: in the next chapter I will summarize the structure of the supersymmetric extensions of the Standard Model (SM), namely the MSSM (Minimal Supersymmetric Standard Model) and the NMSSM (Next-to-Minimal Supersymmetric Standard Model), provide a brief overview of different patterns of SUSY (supersymmetry) breaking, and discuss some issues on the renormalization of the input parameters that are common to all calculations of higher-order corrections in SUSY models. In chapter 3 I will review and describe computations on the production of MSSM Higgs bosons in gluon fusion. In chapter 4 I will review results on the radiative corrections to the Higgs boson masses in the NMSSM. In chapter 5 I will review the calculation of BR(B → X_s γ) in the MSSM with Minimal Flavor Violation (MFV). Finally, in chapter 6 I will briefly summarize the outlook of my future research. (author)
Ab initio calculations and modelling of atomic cluster structure
DEFF Research Database (Denmark)
Solov'yov, Ilia; Lyalin, Andrey G.; Solov'yov, Andrey V.
2004-01-01
A framework for modelling the fusion process of noble gas clusters is presented. We report the striking correspondence of the peaks in the experimentally measured abundance mass spectra with the peaks in the size dependence of the second derivative of the binding energy per atom, calculated for the chain of noble gas clusters up to 150 atoms.
TTS-Polttopuu - cost calculation model for fuelwood
International Nuclear Information System (INIS)
Naett, H.; Ryynaenen, S.
1999-01-01
The TTS Institute's Forestry Department has developed a computer-based cost calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs in harvesting systems for wood chips and split firewood. The model makes it possible to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and the chipping and chopping of longwood at storage. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. (orig.)
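The unit-cost arithmetic such a spreadsheet performs per working stage is hourly cost divided by hourly productivity, summed over the chain. A sketch with hypothetical stage names and figures, not TTS-Polttopuu data:

```python
def chain_unit_cost(stages):
    """Sum of (cost per hour / m^3 per hour) over all stages, in EUR/m^3."""
    return sum(cost_per_h / prod_m3_per_h
               for cost_per_h, prod_m3_per_h in stages.values())

stages = {
    "cutting":        (35.0, 2.5),    # EUR/h, solid m^3/h (hypothetical)
    "forest_haulage": (45.0, 6.0),
    "chipping":       (90.0, 15.0),
    "road_transport": (60.0, 12.0),
}

print(chain_unit_cost(stages))  # unit cost of the whole system, EUR/m^3
```

Because each stage contributes cost/productivity, the model can show how a productivity change in one stage propagates to the unit cost of the whole chain.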
A modified calculation model for groundwater flowing to horizontal ...
Indian Academy of Sciences (India)
All these valleys are located in the Loess Plateau of northern Shaanxi, China. The existing calculation model for a single horizontal seepage well was built by Wang and Zhang (2007) based on the theory of coupled seepage-pipe flow and equivalent hydraulic conductivity (Chen 1995; Chen and Lin 1998a, 1998b; Chen and.
A kinematic model for calculating the magnitude of angular ...
African Journals Online (AJOL)
Keplerian velocity laws imply the existence of velocity shear and shear viscosity within an accretion disk. Due to this viscosity, angular momentum is transferred from the faster moving inner regions to the slower-moving outer regions of the disk. Here we have formulated a model for calculating the magnitude of angular ...
Black Hole Entropy Calculation in a Modified Thin Film Model
Indian Academy of Sciences (India)
2016-01-27
The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emitted particles is taken into account. In terms of our improvement, if the entropy is still proportional to the ...
The role of hand calculations in ground water flow modeling.
Haitjema, Henk
2006-01-01
Most ground water modeling courses focus on the use of computer models and pay little or no attention to traditional analytic solutions to ground water flow problems. This shift in education seems logical. Why waste time to learn about the method of images, or why study analytic solutions to one-dimensional or radial flow problems? Computer models solve much more realistic problems and offer sophisticated graphical output, such as contour plots of potentiometric levels and ground water path lines. However, analytic solutions to elementary ground water flow problems do have something to offer over computer models: insight. For instance, an analytic one-dimensional or radial flow solution, in terms of a mathematical expression, may reveal which parameters affect the success of calibrating a computer model and what to expect when changing parameter values. Similarly, solutions for periodic forcing of one-dimensional or radial flow systems have resulted in a simple decision criterion to assess whether or not transient flow modeling is needed. Basic water balance calculations may offer a useful check on computer-generated capture zones for wellhead protection or aquifer remediation. An easily calculated "characteristic leakage length" provides critical insight into surface water and ground water interactions and flow in multi-aquifer systems. The list goes on. Familiarity with elementary analytic solutions and the capability of performing some simple hand calculations can promote appropriate (computer) modeling techniques, avoids unnecessary complexity, improves reliability, and is likely to save time and money. Training in basic hand calculations should be an important part of the curriculum of ground water modeling courses.
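One of the hand calculations mentioned, the characteristic leakage length of a semi-confined aquifer, is simply lambda = sqrt(T·c), with transmissivity T = k·H and hydraulic resistance c of the confining layer; head perturbations decay over a few lambda. A sketch with illustrative values:

```python
import math

def leakage_length(k_m_per_d, thickness_m, resistance_d):
    """Characteristic leakage length lambda = sqrt(k*H*c), in meters.

    k: aquifer hydraulic conductivity (m/day), H: aquifer thickness (m),
    c: hydraulic resistance of the confining layer (days).
    """
    return math.sqrt(k_m_per_d * thickness_m * resistance_d)

# k = 10 m/day, aquifer 20 m thick, confining-layer resistance 500 days:
print(leakage_length(10.0, 20.0, 500.0))  # ~316 m
```

A number like this immediately tells the modeler over what distance surface water and ground water interact, before any computer model is built.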
Preliminary flowsheet for the conversion of Hanford high-level waste to glass
International Nuclear Information System (INIS)
Beary, M.M.; Chick, L.A.; Ely, P.C.; Gott, S.A.
1977-06-01
The flowsheets describe a process for converting waste removed from the Hanford underground waste tanks to a more immobile form. The process involves chemical separation of the radionuclides from industrial chemicals, followed by making glass from the resulting small volume of highly radioactive waste. Removal of strontium, actinides, cesium, and technetium is discussed.
Prioritized List of Research Needs to support MRWFD Case Study Flowsheet Advancement
Energy Technology Data Exchange (ETDEWEB)
Law, Jack Douglas [Idaho National Lab. (INL), Idaho Falls, ID (United States); Soelberg, Nicholas Ray [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-06-17
In FY-13, a case study evaluation was performed of full recycle technologies for both the processing of light-water reactor (LWR) used nuclear fuels as well as fast reactor (FR) fuel in the full recycle option. This effort focused on the identification of the case study processes and the initial preparation of material balance flowsheets for the identified technologies. In identifying the case study flowsheets, it was decided that two cases would be developed: one which identifies the flowsheet as currently developed and another near-term target flowsheet which identifies the flowsheet as envisioned within two years, pending the results of ongoing research. The case study focus is on homogeneous aqueous recycle of the U/TRU resulting from the processing of LWR fuel as feed for metal fuel fabrication. The metal fuel is utilized in a sodium-cooled fast reactor, and the used fast reactor fuel is processed using electrochemical separations. The recovered U/TRU from electrochemical separations is recycled to fuel fabrication and the fast reactor. Waste streams from the aqueous and electrochemical processing are treated and prepared for disposition. Off-gas from the separations and waste processing are also treated. As part of the FY-13 effort, preliminary process unknowns and research needs to advance the near-term target flowsheets were identified. In FY-14, these research needs were updated, expanded and prioritized. This report again updates the prioritized list of research needs based upon results to date in FY-15. The research needs are listed for each of the main portions of the flowsheet: 1) Aqueous headend, 2) Headend tritium pretreatment off-gas, 3) Aqueous U/Pu/Np recovery, 4) Aqueous TRU product solidification, 5) Aqueous actinide/lanthanide separation, 6) Aqueous off-gas treatment, 7) Aqueous HLW management, 8) Treatment of aqueous process wastes, 9) E-chem actinide separations, 10) E-chem off-gas, 11) E-chem HLW management. The identified research needs
Nuclear reaction matrix calculations with a shell-model Q
International Nuclear Information System (INIS)
Barrett, B.R.; McCarthy, R.J.
1976-01-01
The Barrett-Hewitt-McCarthy (BHM) method for calculating the nuclear reaction matrix G is used to compute shell-model matrix elements for A = 18 nuclei. The energy denominators in intermediate states containing one unoccupied single-particle (s.p.) state and one valence s.p. state are treated correctly, in contrast to previous calculations. These corrections are not important for valence-shell matrix elements but are found to lead to relatively large changes in cross-shell matrix elements involved in core-polarization diagrams. (orig.) [de]
Reactor burning calculations for a model reversed field pinch
International Nuclear Information System (INIS)
Yeung, B.C.; Long, J.W.; Newton, A.A.
1976-01-01
An outline pinch reactor scheme and a study of the electrical engineering problems of cyclic operation have been further developed, and a comparison of physics aspects and capital cost has been made with the Tokamak, which has many similar features. Since the properties of reversed field pinches (RFP) are now better understood, more detailed studies have been made and first results of burn calculations are given. Results of the burn calculations are summarised. These are based on a D-T burning model used for the Tokamak, with changes appropriate for the RFP. (U.K.)
Modelling and parallel calculation of a kinetic boundary layer
International Nuclear Information System (INIS)
Perlat, Jean Philippe
1998-01-01
This research thesis addresses reliability and cost issues in the numerical simulation of flows in the transition regime. The first step was to reduce the calculation cost and memory requirements of the Monte Carlo method, which is known to be efficient and reliable for rarefied regimes. Vector and parallel computers allow this objective to be reached. Here, a MIMD (multiple instructions, multiple data) machine was used, which implements parallel calculation at different levels of parallelization. Parallelization procedures were adapted, and results showed that parallelization by calculation-domain decomposition was far more efficient. Because of the reliability issues related to the statistical nature of Monte Carlo methods, a new deterministic model was necessary to simulate gas molecules in the transition regime. New models and hyperbolic systems were therefore studied. The one chosen allows the thermodynamic quantities (density, average velocity, temperature, deformation tensor, heat flow) present in the Navier-Stokes equations to be determined, and the equations of evolution of these thermodynamic quantities are described for the mono-atomic case. Numerical resolution of this system is reported. A kinetic scheme is developed which complies with the structure of all the systems and which naturally expresses the boundary conditions. The validation of the obtained 14-moment model is performed on shock problems and on Couette flows. [fr
Modelling of Control Bars in Calculations of Boiling Water Reactors
International Nuclear Information System (INIS)
Khlaifi, A.; Buiron, L.
2004-01-01
The core of a nuclear reactor is generally composed of an array of fissile-material assemblies in which neutrons are produced. In general, the fission energy is extracted by a fluid that cools the assemblies. A reflector is arranged around the assemblies, outside the reactor core, to reduce neutron leakage. Different reactivity mechanisms are generally necessary to control the chain reaction. A Boiling Water Reactor is manoeuvred by controlling the insertion of absorber rods at various places in the core. While unrodded assembly calculations are well known and mastered, rodded-assembly neutronic calculations are delicate and are often treated case by case in present studies [1]. Answering the question of how to model the control rods of a boiling water reactor requires choosing a level of representation for every chain of variables, the physical model, its representative equations, etc. The aim of this study is to select the best applicable parameters for calculating rodded assemblies of a Boiling Water Reactor. This is done through a range of configurations representative of these reactors and of the absorbing media used, in order to illustrate modelling strategies in the case of an industrial calculation. (authors)
Application of nuclear models to neutron nuclear cross section calculations
International Nuclear Information System (INIS)
Young, P.G.
1983-01-01
Nuclear theory is used increasingly to supplement and extend the nuclear data base that is available for applied studies. Areas where theoretical calculations are most important include the determination of neutron cross sections for unstable fission products and transactinide nuclei in fission reactor or nuclear waste calculations and for meeting the extensive dosimetry, activation, and neutronic data needs associated with fusion reactor development, especially for neutron energies above 14 MeV. Considerable progress has been made in the use of nuclear models for data evaluation and, particularly, in the methods used to derive physically meaningful parameters for model calculations. Theoretical studies frequently involve use of spherical and deformed optical models, Hauser-Feshbach statistical theory, preequilibrium theory, direct-reaction theory and often make use of gamma-ray strength function models and phenomenological (or microscopic) level density prescriptions. The development, application and limitations of nuclear models for data evaluation are discussed in this paper, with emphasis on the 0.1 to 50 MeV energy range. (Auth.)
Sludge Batch 4 Simulant Flowsheet Studies with ARP and MCU: Impact of MCU Organics
Energy Technology Data Exchange (ETDEWEB)
Baich, M. A.; Herman, C. C.; Eibling, R. E.; Williams, M. F.; Smith, F. G.
2005-07-01
Two facilities for treating the salt currently stored in the High Level Waste (HLW) tanks are planned to begin operations during the processing of Sludge Batch 4 (SB4). The Immobilization Technology Section (ITS) of the Savannah River National Laboratory (SRNL) was requested by the Defense Waste Processing Facility (DWPF), via Technical Task Request (TTR) HLW/DWPF/TTR-2004-0031 (Washburn, 2004), to evaluate the impacts on DWPF processing of streams from the Actinide Removal Process (ARP) and the Modular Caustic-Side Solvent Extraction (CSSX) Unit (MCU). In particular, the TTR requests SRNL to validate the existing process flowsheet and establish a coupled-operations flowsheet for use with SB4. The flowsheet runs are required so that potential chemical processing issues can be evaluated, potential hydrogen generation rates quantified, and the required acid stoichiometry estimated. Previous testing (Baich et al., 2003) was performed for incorporating ARP/MST in Sludge Batch 3 (SB3), and recommendations on possible flowsheet options were made to DWPF. However, since that time, some changes have occurred to the ARP facility processing strategy, and material balances have been revised (Subosits, 2004). Thus, testing with updated compositions was necessary. Since the MCU is a new design and project, no CPC flowsheet studies have been performed for this stream. This testing will validate the previously recommended ARP stream addition methods based on the new information and on the need to also incorporate the MCU stream. The basic principle of solvent extraction is to use a sparingly soluble diluent that carries an extractant that complexes with the cesium ions in the caustic HLW solution. The decontaminated aqueous stream (raffinate) is then sent to Saltstone for disposal. The cesium contained in the organic phase (solvent) can then be stripped into an aqueous phase ready for transfer to the DWPF. The solvent is
Investigation of Transformer Model for TRV Calculation by EMTP
Thein, Myo Min; Ikeda, Hisatoshi; Harada, Katsuhiko; Ohtsuka, Shinya; Hikita, Masayuki; Haginomori, Eiichi; Koshiduka, Tadashi
Analysis of the EMTP transformer model was performed with a 4 kVA two-winding low-voltage transformer using the current injection (CIJ) measurement method, to study the transient recovery voltage (TRV) under the transformer limited fault (TLF) current interrupting condition. The tested transformer's impedance was measured with a frequency response analyzer (FRA). From the FRA measurement graphs, the leakage inductance, stray capacitance and resistance were calculated, and the EMTP transformer model was constructed with those values. The EMTP simulation was performed for a current injection circuit using the transformer model. The experiment and simulation results show reasonable agreement.
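In a lumped model of this kind, the dominant TRV oscillation frequency follows directly from the leakage inductance and stray capacitance. A minimal sketch; the parameter values below are hypothetical illustrations, not the measured values for the 4 kVA transformer in the study:

```python
import math

# Hypothetical lumped parameters of the kind extracted from an FRA sweep;
# not the measured values reported for the tested transformer.
L_leak = 25e-3    # leakage inductance, H
C_stray = 2e-9    # stray capacitance, F

# Natural frequency of the equivalent L-C circuit seen from the breaker,
# which sets the dominant TRV oscillation frequency:
f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_leak * C_stray))

# Surge (characteristic) impedance; together with the loss resistance it
# determines how strongly the TRV oscillation is damped:
Z0 = math.sqrt(L_leak / C_stray)
```

With these illustrative values the oscillation lies in the tens of kilohertz, the range where TLF-type TRVs are typically steep.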
A note on vector flux models for radiation dose calculations
International Nuclear Information System (INIS)
Kern, J.W.
1994-01-01
This paper reviews and extends the modelling of anisotropic fluxes for radiation belt protons to provide closed-form equations for vector proton fluxes and proton flux anisotropy in terms of standard omnidirectional flux models. These equations provide a flexible alternative to the data-based vector flux models currently available. At higher energies, the anisotropy of trapped proton flux in the upper atmosphere depends strongly on the variation of atmospheric density with altitude. Calculations of proton flux anisotropies using present models require specification of the average atmospheric density along trapped particle trajectories and its variation with mirror point altitude. For an isothermal atmosphere, calculations show that in a dipole magnetic field the scale height of this trajectory-averaged density closely approximates the scale height of the atmosphere at the mirror point of the trapped particle. However, for the earth's magnetic field, the altitudes of mirror points vary for protons drifting in longitude. This results in a small increase in longitude-averaged scale heights compared to the atmospheric scale heights at minimum mirror point altitudes. The trajectory-averaged scale heights are increased by about 10-20% over scale heights from standard atmosphere models for protons mirroring at altitudes less than 500 km in the South Atlantic Anomaly. Atmospheric losses of protons in the geomagnetic field minimum in the South Atlantic Anomaly control the proton flux anisotropies of interest for radiation studies in low earth orbit. Standard atmosphere models provide corrections for diurnal, seasonal and solar activity-driven variations. Thus, determination of an ''equilibrium'' model of trapped proton fluxes of a given energy requires using a scale height that is time-averaged over the lifetime of the protons. The trajectory-averaged atmospheric densities calculated here lead to estimates for trapped proton lifetimes. These lifetimes provide appropriate time
TTS-Polttopuu - cost calculation model for fuelwood
International Nuclear Information System (INIS)
Naett, H.; Ryynaenen, S.
1998-01-01
The Forestry Department of the TTS Institute has developed a computer-based cost calculation model, 'TTS-Polttopuu', for calculating unit costs and resource needs of harvesting systems for wood chips and split firewood. The model enables the user to determine the productivity and device cost per operating hour for each working stage of the harvesting system. The calculation model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit cost of the whole system. The harvesting chain includes the cutting of delimbed and non-delimbed fuelwood, forest haulage, road transportation, and chipping and chopping of longwood at the storage site. This stand-alone software was originally developed to serve research needs, but it also serves the needs of forestry and agricultural education, training and extension, as well as individual firewood producers. The system requirements for this cost calculation model are at least a 486-level processor with the Windows 95/98 operating system, 16 MB of memory (RAM) and 5 MB of available hard-disk space. This development work was carried out in conjunction with the nation-wide BIOENERGY Research Programme. (orig.)
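The unit-cost logic described above (device cost per operating hour divided by stage productivity, summed over the chain) can be sketched as follows; the stage names and figures are hypothetical, not values from TTS-Polttopuu:

```python
# Unit-cost sketch for a multi-stage fuelwood harvesting chain.
# Stage costs (EUR/h) and productivities (m^3/h) are hypothetical figures.
stages = {
    "cutting":        {"cost_per_hour": 35.0, "productivity_m3_per_hour": 2.5},
    "forest_haulage": {"cost_per_hour": 45.0, "productivity_m3_per_hour": 6.0},
    "road_transport": {"cost_per_hour": 60.0, "productivity_m3_per_hour": 12.0},
    "chipping":       {"cost_per_hour": 80.0, "productivity_m3_per_hour": 15.0},
}

def stage_unit_cost(cost_per_hour, productivity):
    """EUR per solid cubic metre for one working stage."""
    return cost_per_hour / productivity

def chain_unit_cost(stages):
    """Unit cost of the whole harvesting system: stage unit costs are
    additive because every cubic metre passes through every stage."""
    return sum(stage_unit_cost(s["cost_per_hour"], s["productivity_m3_per_hour"])
               for s in stages.values())

total = chain_unit_cost(stages)
```

Raising the productivity of any single stage lowers only that stage's contribution, which is how the model shows the influence of one chain link on the whole-system unit cost.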
The EDF/SEPTEN crisis team calculation tools and models
International Nuclear Information System (INIS)
De Magondeaux, B.; Grimaldi, X.
1993-01-01
Electricite de France (EDF) has developed a set of simplified tools and models, called TOUTEC and CRISALIDE, intended for use by the French utility's National Crisis Team to perform diagnosis and prognosis during an emergency situation. As a severe accident could have important radiological consequences, this method focuses on diagnosing the state of the safety barriers and on forecasting their behaviour. These tools allow the crisis team to provide public authorities with information on the radiological risk and to give advice on managing the accident on the damaged unit. At a first level, TOUTEC is intended to complement the handbook with simplified calculation models and predefined relationships. It can avoid tedious calculation under stress conditions. The main items are the calculation of the primary circuit breach size and the evaluation of hydrogen overpressurization. The set of models called CRISALIDE is devoted to evaluating the following critical parameters: the delay before core uncovery, which would signify more severe consequences if it occurred; containment pressure behaviour; and finally the source term. With these models, the crisis team becomes able to take into account combinations of boundary conditions according to the availability of safety and auxiliary systems
Use of the Strong Collision Model to Calculate Spin Relaxation
Wang, D.; Chow, K. H.; Smadella, M.; Hossain, M. D.; MacFarlane, W. A.; Morris, G. D.; Ofer, O.; Morenzoni, E.; Salman, Z.; Saadaoui, H.; Song, Q.; Kiefl, R. F.
The strong collision model is used to calculate the spin relaxation of a muon or polarized radioactive nucleus in contact with a fluctuating environment. We show that on a time scale much longer than the mean time between collisions (fluctuations), the longitudinal polarization decays exponentially with a relaxation rate equal to a sum of Lorentzians, one for each frequency component in the static polarization function p_s(t).
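The sum-of-Lorentzians result can be sketched numerically. The specific Lorentzian form used below (each component of amplitude a_i at angular frequency w_i contributing a_i ν w_i² / (ν² + w_i²) at fluctuation rate ν) is a commonly quoted strong-collision expression and should be treated as an assumption here, as are the parameter values:

```python
def relaxation_rate(components, nu):
    """Longitudinal relaxation rate in a strong-collision picture.

    components: list of (a_i, w_i) pairs for the static polarization
    p_s(t) = sum_i a_i cos(w_i t); nu: mean fluctuation (collision) rate.
    Assumed form: each component contributes a Lorentzian in nu,
    a_i * nu * w_i**2 / (nu**2 + w_i**2).
    """
    return sum(a * nu * w**2 / (nu**2 + w**2) for a, w in components)

# Hypothetical two-component static polarization function
comps = [(2.0 / 3.0, 1.0e6), (1.0 / 3.0, 2.0e6)]

# Slow-fluctuation limit (nu << w_i): rate ~ nu * sum(a_i), i.e. each
# collision destroys the memory of the precessing components.
slow = relaxation_rate(comps, nu=1.0e4)

# Fast-fluctuation limit (nu >> w_i): motional narrowing,
# rate ~ sum(a_i * w_i**2) / nu.
fast = relaxation_rate(comps, nu=1.0e9)
```

The two limits reproduce the familiar behaviour: the rate first grows with ν, peaks when ν is comparable to the precession frequencies, then narrows as ν increases further.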
Model and calculation of in situ stresses in anisotropic formations
Energy Technology Data Exchange (ETDEWEB)
Yuezhi, W.; Zijun, L.; Lixin, H. [Jianghan Petroleum Institute, (China)
1997-08-01
In situ stresses in transversely isotropic material in relation to wellbore stability have been investigated. Equations for the three horizontal in-situ stresses and a new formation fracture pressure model were described, and the methodology for determining the elastic parameters of anisotropic rocks in the laboratory was outlined. Results indicate significantly smaller differences between theoretically calculated pressures and actual formation pressures than results obtained by using the isotropic method. Implications for improvements in drilling efficiency were reviewed. 13 refs., 6 figs.
Calculation of relativistic model stars using Regge calculus
International Nuclear Information System (INIS)
Porter, J.
1987-01-01
A new approach to the Regge calculus, developed in a previous paper, is used in conjunction with the velocity potential version of relativistic fluid dynamics due to Schutz [1970, Phys. Rev. D, 2, 2762] to calculate relativistic model stars. The results are compared with those obtained when the Tolman-Oppenheimer-Volkoff equations are solved by other numerical methods. The agreement is found to be excellent. (author)
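The comparison baseline, the Tolman-Oppenheimer-Volkoff equations, can be integrated directly. A minimal Euler sketch in geometric units (G = c = 1) for a polytropic equation of state; the polytropic constants and central pressure are hypothetical, chosen only to give a well-behaved star:

```python
import math

def tov_star(P_c, K=100.0, gamma=2.0, dr=1e-3):
    """Integrate the TOV equations outward for a polytrope P = K*eps**gamma
    (eps = energy density), starting from central pressure P_c.
    Returns (radius R, gravitational mass M) in geometric units."""
    eps = (P_c / K) ** (1.0 / gamma)
    r = dr
    m = (4.0 / 3.0) * math.pi * dr**3 * eps   # mass inside the first shell
    P = P_c
    while P > 1e-12 * P_c:                    # stop at the stellar surface
        eps = (P / K) ** (1.0 / gamma)
        # TOV hydrostatic equilibrium:
        dPdr = -(eps + P) * (m + 4.0 * math.pi * r**3 * P) / (r * (r - 2.0 * m))
        dmdr = 4.0 * math.pi * r**2 * eps
        P += dPdr * dr
        m += dmdr * dr
        r += dr
    return r, m

R, M = tov_star(1e-3)
```

A finer step (or a higher-order integrator) would be needed for the accuracy quoted in the paper's comparison; this only illustrates the structure of the equations.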
Energy Technology Data Exchange (ETDEWEB)
Elder, H.H.
2001-07-11
The HLW salt waste (salt cake and supernate) now stored at the SRS must be treated to remove insoluble sludge solids and to reduce the soluble concentrations of radioactive cesium, radioactive strontium, and transuranic contaminants (principally Pu and Np). These treatments will enable the salt solution to be processed for disposal as saltstone, a solid low-level waste.
Energy Technology Data Exchange (ETDEWEB)
Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.; Campbell, S.G.
2001-03-26
The High Level Waste (HLW) Salt Disposition Systems Engineering Team was formed on March 13, 1998, and chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted waste form. This requirement arises because the existing In-Tank Precipitation process at the Savannah River Site, as currently configured, cannot simultaneously meet the HLW production and Authorization Basis safety requirements. This engineering study was performed in four phases. This document provides the technical bases, assumptions, and results of this engineering study.
International Nuclear Information System (INIS)
Dimenna, R.A.; Jacobs, R.A.; Taylor, G.A.; Durate, O.E.; Paul, P.K.; Elder, H.H.; Pike, J.A.; Fowler, J.R.; Rutland, P.L.; Gregory, M.V.; Smith III, F.G.; Hang, T.; Subosits, S.G.
2000-01-01
The High Level Waste Salt Disposition Systems Engineering Team was formed on March 13, 1998, under the sponsorship of the Westinghouse Savannah River Company High Level Waste (HLW) Vice President and General Manager. The Team is chartered to identify options, evaluate alternatives, and recommend a selected alternative(s) for processing HLW salt to a permitted waste form
Structure-dynamic model verification calculation of PWR 5 tests
International Nuclear Information System (INIS)
Engel, R.
1980-02-01
Within reactor safety research project RS 16 B of the German Federal Ministry of Research and Technology (BMFT), blowdown experiments are conducted at Battelle Institut e.V. Frankfurt/Main using a model reactor pressure vessel with a height of 11.2 m and internals corresponding to those in a PWR. In the present report, the dynamic loadings on the pressure vessel internals (upper perforated plate and barrel suspension) during the DWR 5 experiment are calculated by means of a vertical and a horizontal dynamic model using the CESHOCK code. The equations of motion are resolved by direct integration. (orig./RW) [de
Mathematical model of kinetostatic calculation of flat lever mechanisms
Directory of Open Access Journals (Sweden)
A. S. Sidorenko
2016-01-01
Full Text Available The graphical-analytical methods of analysis still in wide use are largely obsolete, having been replaced by analytical methods implemented on computers. Of particular interest, therefore, is the development of a mathematical model of the kinetostatic calculation of mechanisms in the form of a library of calculation procedures for all second-class Assur groups (AG) and for the primary link. Before calling the procedure that computes all the forces in the kinematic pairs, one must compute the inertia forces, the moments of the inertia forces, and all external forces and moments acting on the given AG. To this end, design diagrams for the force analysis of each type of second-class AG, as well as of the initial link, are presented. The reactions in the internal and external kinematic pairs are found from equilibrium conditions that take into account the inertia forces and the moments of the inertia forces (d'Alembert's principle). The kinetostatic equations thus obtained, because of their uniform structure, are solved by Cramer's rule. In this way, all 6 unknowns are found for each second-class AG: the forces in the kinematic pairs, the directions of these forces, and their moment arms. If the kinetostatics of a mechanism with two AGs attached in parallel to the initial link is studied, the force acting on the primary link is the geometric sum of the forces transmitted from the detached AGs. Thus, a mathematical model of kinetostatic calculation is obtained in the form of a library of mathematical procedures for determining the reactions of all second-class AGs. The mathematical model of kinetostatic calculation is relatively simple to implement in software.
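The solution step described above, Cramer's rule applied to the uniform equilibrium equations, can be sketched as follows. The 3x3 system below is a hypothetical stand-in (two force balances and one moment balance) for the paper's 6-unknown Assur-group systems:

```python
def det(m):
    """Determinant by Laplace expansion along the first row
    (adequate for the small systems of kinetostatic analysis)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def cramer_solve(a, b):
    """Solve a*x = b via Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    d = det(a)
    if d == 0:
        raise ValueError("singular system")
    n = len(a)
    x = []
    for i in range(n):
        ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        x.append(det(ai) / d)
    return x

# Hypothetical equilibrium system: force balance in x, in y, and a moment balance
A = [[1.0, 0.0, 2.0],
     [0.0, 1.0, -1.0],
     [3.0, -2.0, 0.0]]
b = [5.0, -1.0, 4.0]
x = cramer_solve(A, b)
```

For the 6x6 systems of a real second-class Assur group the same code applies unchanged, though beyond that size a factorization-based solver would be preferable to cofactor expansion.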
Freight Calculation Model: A Case Study of Coal Distribution
Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.
2018-03-01
Coal has been known as one of the energy alternatives used as the source for several power plants in Indonesia. Transporting coal from the mine sites to the power plant locations requires shipping services that can provide the best freight rate. Therefore, this study aims to obtain standardized formulations for determining the ocean freight, especially for coal distribution, based on theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel, and self-propelled barge. The result shows that two cost components are dominant in determining the freight, with a combined share reaching 90% or more: time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, time charter rate, and fuel oil price.
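The cost structure described above can be sketched as total voyage cost spread over the cargo carried. The breakdown and all figures below are hypothetical illustrations, not the paper's formulation:

```python
def ocean_freight_per_tonne(charter_rate_per_day, voyage_days, waiting_days,
                            fuel_tonnes_per_day, fuel_price_per_tonne,
                            port_costs, cargo_tonnes):
    """Freight rate (currency per tonne of coal) as total voyage cost per
    tonne of cargo. Waiting time at port extends the charter hire but is
    assumed here not to burn main-engine fuel."""
    hire = charter_rate_per_day * (voyage_days + waiting_days)
    fuel = fuel_tonnes_per_day * voyage_days * fuel_price_per_tonne
    total = hire + fuel + port_costs
    return total / cargo_tonnes

# Hypothetical tug-barge shipment
freight = ocean_freight_per_tonne(
    charter_rate_per_day=8_000.0, voyage_days=6.0, waiting_days=2.0,
    fuel_tonnes_per_day=4.0, fuel_price_per_tonne=550.0,
    port_costs=15_000.0, cargo_tonnes=7_500.0)
```

Even in this toy breakdown the hire and fuel terms dominate the total, which mirrors the paper's finding that those two components and the waiting time drive the freight rate.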
Improved SVR Model for Multi-Layer Buildup Factor Calculation
International Nuclear Information System (INIS)
Trontl, K.; Pevec, D.; Smuc, T.
2006-01-01
The accuracy of the point kernel method applied to gamma ray dose rate calculations in shielding design and radiation safety analysis is limited by the accuracy of the buildup factors used in the calculations. Although buildup factors for single-layer shields are well defined and understood, buildup factors for stratified shields represent a complex physical problem that is hard to express in mathematical terms. The traditional approach to expressing buildup factors of multi-layer shields is through semi-empirical formulas obtained by fitting the results of transport theory or Monte Carlo calculations. Such an approach requires an ad-hoc definition of the fitting function and often results in numerous, and usually inadequately explained and defined, correction factors added to the final empirical formula. Moreover, the finally obtained formulas are generally limited to a small number of predefined combinations of materials within a relatively small range of gamma ray energies and shield thicknesses. Recently, a new approach has been suggested by the authors involving one of the machine learning techniques, Support Vector Machines, i.e., Support Vector Regression (SVR). Preliminary investigations performed for double-layer shields revealed the great potential of the method, but also pointed out some drawbacks of the developed model, mostly related to the selection of one of the parameters describing the problem (the material atomic number) and to the way in which the model was designed to evolve during the learning process. It is the aim of this paper to introduce a new parameter (the single-material buildup factor) to replace the existing material atomic number as an input parameter. A comparison of the two models generated with the different input parameters has been performed. The second goal is to improve the evolution process of learning, i.e., the experimental computational procedure that provides a framework for automated construction of complex regression models of predefined
2HDMC — two-Higgs-doublet model calculator
Eriksson, David; Rathsman, Johan; Stål, Oscar
2010-04-01
We describe version 1.0.6 of the public C++ code 2HDMC, which can be used to perform calculations in a general, CP-conserving, two-Higgs-doublet model (2HDM). The program features simple conversion between different parametrizations of the 2HDM potential, a flexible Yukawa sector specification with choices of different Z2-symmetries or more general couplings, a decay library including all two-body and some three-body decay modes for the Higgs bosons, and the possibility to calculate observables of interest for constraining the 2HDM parameter space, as well as theoretical constraints from positivity and unitarity. The latest version of the 2HDMC code and full documentation is available from: http://www.isv.uu.se/thep/MC/2HDMC. New version program summary. Program title: 2HDMC. Catalogue identifier: AEFI_v1_1. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFI_v1_1.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU GPL. No. of lines in distributed program, including test data, etc.: 12 110. No. of bytes in distributed program, including test data, etc.: 92 731. Distribution format: tar.gz. Programming language: C++. Computer: Any computer running Linux. Operating system: Linux. RAM: 5 Mb. Catalogue identifier of previous version: AEFI_v1_0. Journal reference of previous version: Comput. Phys. Comm. 180 (2010) 189. Classification: 11.1. External routines: GNU Scientific Library (http://www.gnu.org/software/gsl/). Does the new version supersede the previous version?: Yes. Nature of problem: Determining properties of the potential, calculation of mass spectrum, couplings, decay widths, oblique parameters, muon g-2, and collider constraints in a general two-Higgs-doublet model. Solution method: From an arbitrary potential and Yukawa sector, tree-level relations are used to determine Higgs masses and couplings. Decay widths are calculated at leading order, including FCNC decays when applicable. Decays to off
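One of the theoretical constraints the code checks is positivity (boundedness from below) of the potential. For illustration, a sketch of the commonly quoted tree-level conditions for the Z2-symmetric, CP-conserving potential with real lambda_5; both the convention and the conditions are assumptions here, since 2HDMC itself works with a more general parametrization:

```python
import math

def potential_bounded_below(l1, l2, l3, l4, l5):
    """Tree-level positivity conditions for a Z2-symmetric, CP-conserving
    2HDM potential in the standard lambda_1..lambda_5 convention (real
    lambda_5). Returns True if the quartic potential is bounded from below."""
    return (l1 > 0 and l2 > 0
            and l3 > -math.sqrt(l1 * l2)
            and l3 + l4 - abs(l5) > -math.sqrt(l1 * l2))
```

A parameter-space scan would call such a predicate point by point, discarding parameter sets whose potential has no stable vacuum before computing spectra or decays.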
EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION
Directory of Open Access Journals (Sweden)
André Carlos Silva
2012-12-01
Full Text Available Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, has no moving parts, and is capable of separating granular material in a pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most widely used model for the hydrocyclone corrected cut size was proposed by Plitt. Over the years, many revisions and corrections to Plitt's model have been proposed. The present paper presents a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation of 88.2% between experimental and calculated corrected cut sizes, while the correlation obtained using Plitt's model is 11.5%.
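For reference, a sketch of a commonly quoted form of Plitt's corrected cut size equation. The constant 39.7, the exponents, and the units convention below are assumptions to be verified against the original publication, and the hydrocyclone dimensions are hypothetical:

```python
import math

def plitt_d50c(Dc, Di, Do, Du, h, Q, phi, rho_s, rho_l, F1=1.0):
    """Corrected cut size (micrometres) from a commonly quoted form of
    Plitt's model. Dc (cyclone), Di (inlet), Do (vortex finder), Du (apex)
    and free vortex height h in cm; flow rate Q in L/min; phi = volumetric
    solids content in %; densities in g/cm^3. F1 is the adjustable
    material/geometry constant that the paper refits by regression."""
    num = F1 * 39.7 * Dc**0.46 * Di**0.6 * Do**1.21 * math.exp(0.063 * phi)
    den = Du**0.71 * h**0.38 * Q**0.45 * (rho_s - rho_l)**0.5
    return num / den

# Hypothetical 25 cm hydrocyclone treating a phosphate-like pulp
d50c = plitt_d50c(Dc=25.0, Di=7.0, Do=8.5, Du=5.0, h=100.0,
                  Q=400.0, phi=10.0, rho_s=2.7, rho_l=1.0)
```

The structure makes the qualitative trends visible: enlarging the apex (Du) or raising the flow rate lowers the cut size, while a larger vortex finder raises it.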
Choice of flowsheet for heat storage plant of nuclear stations for distant heat and power supply
International Nuclear Information System (INIS)
Larin, E.A.; Lut'yanov, A.F.
1986-01-01
A rational flowsheet for utilizing the heat of converted gas in the heat-accumulating part (HAP) of an NPP for district heat supply (NPDH) is suggested, in which the high-temperature helium heat of an HTGR-type reactor is accumulated in a chemically bound state through steam reforming of methane and then transported over long distances. Technical and economic analysis has shown that the most effective NPDH HAP flowsheet uses dry saturated steam, generated from the converted-gas heat, as the heating medium for the turbine's intermediate superheater. The high-temperature heat can be used more effectively for gas-side heating of the feed water and main condensate of the turbine installation, and the low-temperature heat for supplying heat to nearby users
Absorbed Dose and Dose Equivalent Calculations for Modeling Effective Dose
Welton, Andrew; Lee, Kerry
2010-01-01
While in orbit, astronauts are exposed to a much higher dose of ionizing radiation than when on the ground. It is important to model how shielding designs on spacecraft reduce the radiation effective dose pre-flight, and to determine whether or not a danger to humans is presented. However, in order to calculate effective dose, dose equivalent calculations are needed. Dose equivalent takes into account the absorbed dose of radiation and the biological effectiveness of the ionizing radiation. This is important in preventing long-term, stochastic radiation effects in humans spending time in space. Monte Carlo simulations run with the particle transport code FLUKA give absorbed dose and dose equivalent data for the relevant shielding. The shielding geometry used in the dose calculations is a layered slab design consisting of aluminum, polyethylene, and water. Water is used to simulate the soft tissues that compose the human body. The results obtained will provide information on how the shielding performs with many thicknesses of each material in the slab, making them directly applicable to modern spacecraft shielding geometries.
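The absorbed-dose-to-dose-equivalent step described above can be sketched as a weighted sum over radiation components. The weighting factors follow the ICRP-style convention, and the per-component doses are hypothetical, not FLUKA output:

```python
# Dose equivalent H (Sv) = sum over radiation types R of w_R * D_R, where
# D_R is the absorbed dose (Gy) from type R and w_R its radiation
# weighting factor (biological effectiveness).
weighting_factors = {   # ICRP-style radiation weighting factors
    "photons": 1.0,
    "protons": 2.0,
    "alphas": 20.0,
}
absorbed_dose_gy = {    # hypothetical per-component absorbed doses
    "photons": 1.0e-3,
    "protons": 5.0e-4,
    "alphas": 1.0e-5,
}

def dose_equivalent_sv(doses, w):
    """Sieverts = grays weighted by biological effectiveness per component."""
    return sum(w[r] * d for r, d in doses.items())

H = dose_equivalent_sv(absorbed_dose_gy, weighting_factors)
```

Effective dose then applies a further tissue-weighted sum of such dose equivalents over the organs of interest.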
Actual waste demonstration of the nitric-glycolic flowsheet for sludge batch 9 qualification
Energy Technology Data Exchange (ETDEWEB)
Newell, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Pareizs, J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Martino, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Reboul, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Johnson, F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-03-09
For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs qualification testing to demonstrate that the sludge batch is processable. Based on the results of this actual-waste qualification and previous simulant studies, SRNL recommends implementation of the nitric-glycolic acid flowsheet in DWPF. Other recommendations resulting from this demonstration are reported in section 5.0.
A modified microdosimetric kinetic model for relative biological effectiveness calculation
Chen, Yizheng; Li, Junli; Li, Chunyan; Qiu, Rui; Wu, Zhen
2018-01-01
In heavy ion therapy, not only the distribution of physical absorbed dose but also the relative biological effectiveness (RBE) weighted dose needs to be taken into account. The microdosimetric kinetic model (MKM) can predict the RBE value of heavy ions from the saturation-corrected dose-mean specific energy, and it has been used in clinical treatment planning at the National Institute of Radiological Sciences. In the theoretical assumptions of the MKM, the yield of the primary lesion is independent of the radiation quality, while experimental data show that the DNA double strand break (DSB) yield, considered the main primary lesion, depends on the LET of the particle. Moreover, the β parameter of the MKM is constant with LET as a result of this assumption, which also differs from the experimental conclusion. In this study, a modified MKM, named the MMKM, was developed. Based on the experimental DSB yields of mammalian cells under irradiation by ions with different LETs, an RBEDSB (RBE for the induction of DSB)-LET curve was fitted as the correction factor to modify the primary lesion yield in the MKM, so the variation of the primary lesion yield with LET is considered in the MMKM. Compared with the present MKM, not only does the α parameter of the MMKM for mono-energetic ions agree with the experimental data, but the β parameter also varies with LET, and the variation trend of the experimental results can be reproduced on the whole. Then a spread-out Bragg peak (SOBP) distribution of physical dose was simulated with the Geant4 Monte Carlo code, and the biological and clinical dose distributions under carbon ion irradiation were calculated. The results show that the distribution of clinical dose calculated with the MMKM is close to that calculated with the MKM in the SOBP, while the discrepancies before and after the SOBP are both within 10%. Moreover, the MKM might overestimate the clinical dose at the distal end of the SOBP by more than 5% because of its
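The MKM's link between specific energy and cell survival can be sketched with the linear-quadratic (LQ) model: the α coefficient grows with the saturation-corrected dose-mean specific energy z1D*, and RBE follows from equal-survival doses. All parameter values below are hypothetical, not fitted values from the paper:

```python
import math

def mkm_alpha(alpha0, beta, z1d_star):
    """MKM-style ansatz: the linear LQ coefficient increases with the
    saturation-corrected dose-mean specific energy z1D* of the ion."""
    return alpha0 + beta * z1d_star

def survival(alpha, beta, dose):
    """Linear-quadratic cell survival S = exp(-alpha*D - beta*D^2)."""
    return math.exp(-alpha * dose - beta * dose**2)

def rbe_at_survival(alpha_ion, beta_ion, alpha_ref, beta_ref, s_target):
    """RBE = D_ref / D_ion at equal survival, from the positive root of
    the LQ quadratic a*D + b*D^2 = -ln(S)."""
    def dose_for(a, b):
        c = -math.log(s_target)
        return (-a + math.sqrt(a * a + 4.0 * b * c)) / (2.0 * b)
    return dose_for(alpha_ref, beta_ref) / dose_for(alpha_ion, beta_ion)

# Hypothetical parameters (Gy^-1, Gy^-2) for a reference photon beam and
# a carbon-like ion with z1D* = 2 Gy
alpha0, beta = 0.13, 0.05
alpha_c = mkm_alpha(alpha0, beta, z1d_star=2.0)
rbe = rbe_at_survival(alpha_c, beta, alpha_ref=alpha0, beta_ref=beta,
                      s_target=0.1)
```

The MMKM modification described above would additionally scale the primary lesion yield (and hence α) by an LET-dependent RBEDSB factor instead of leaving β fixed.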
Sludge batch 9 simulant runs using the nitric-glycolic acid flowsheet
Energy Technology Data Exchange (ETDEWEB)
Lambert, D. P. [Savannah River Site (SRS), Aiken, SC (United States); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States); Brandenburg, C. H. [Savannah River Site (SRS), Aiken, SC (United States); Luther, M. C. [Savannah River Site (SRS), Aiken, SC (United States); Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States); Woodham, W. H. [Savannah River Site (SRS), Aiken, SC (United States)
2016-11-01
Testing was completed to develop a Sludge Batch 9 (SB9) nitric-glycolic acid chemical process flowsheet for the Defense Waste Processing Facility’s (DWPF) Chemical Process Cell (CPC). CPC simulations were completed using SB9 sludge simulant, Strip Effluent Feed Tank (SEFT) simulant, and Precipitate Reactor Feed Tank (PRFT) simulant. Ten sludge-only Sludge Receipt and Adjustment Tank (SRAT) cycles, four SRAT/Slurry Mix Evaporator (SME) cycles, and one SRAT/SME cycle with actual SB9 sludge were completed. As has been demonstrated in over 100 simulations, the replacement of formic acid with glycolic acid virtually eliminates the CPC’s largest flammability hazards, hydrogen and ammonia. Recommended processing conditions are summarized in section 3.5.1. Testing demonstrated that the interim chemistry and Reduction/Oxidation (REDOX) equations are sufficient to predict the composition of DWPF SRAT product and SME product. Additional reports will finalize the chemistry and REDOX equations. Further testing developed an antifoam strategy to minimize the hexamethyldisiloxane (HMDSO) peak at boiling while controlling foam, based on testing with simulant and actual waste. Implementation of the nitric-glycolic acid flowsheet in DWPF is recommended. This flowsheet not only eliminates the hydrogen and ammonia hazards but will also lead to shorter processing times, higher elemental mercury recovery, and more concentrated SRAT and SME products. The steady pH profile is expected to provide flexibility in processing the high volume of strip effluent expected once the Salt Waste Processing Facility starts up.
Energy Technology Data Exchange (ETDEWEB)
Peterman, Dean R. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Olson, Lonnie G. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groenewold, Gary S. [Idaho National Lab. (INL), Idaho Falls, ID (United States); McDowell, Rocklan G. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Tillotson, Richard D. [Idaho National Lab. (INL), Idaho Falls, ID (United States); Law, Jack D. [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2013-08-01
This report presents a summary of the work performed to meet the FCRD level 2 milestone M3FT-13IN0302053, “Identification of TRUEX Strip Degradation.” The INL radiolysis test loop has been used to identify radiolytically generated degradation products in the strip section of the TRUEX flowsheet. These data were used to evaluate the impact of the formation of radiolytic degradation products in the strip section upon the efficacy of the TRUEX flowsheet for the recovery of trivalent actinides and lanthanides from acidic solution. The nominal composition of the TRUEX solvent used in this study is 0.2 M CMPO and 1.4 M TBP dissolved in n-dodecane, and the nominal composition of the TRUEX strip solution is 1.5 M lactic acid and 0.050 M diethylenetriaminepentaacetic acid. Gamma irradiation of a mixture of TRUEX process solvent and stripping solution in the test loop does not adversely impact flowsheet performance as measured by americium stripping distribution ratios. The observed increase in americium stripping distribution ratios with increasing absorbed dose indicates the radiolytic production of organic-soluble degradation compounds.
Development and Testing of an Americium/Lanthanide Separation Flowsheet Using Sodium Bismuthate
Energy Technology Data Exchange (ETDEWEB)
Jack Law; Bruce Mincher; Troy Garn; Mitchell Greenhalgh; Nicholas Schmitt; Veronica Rutledge
2014-04-01
The separation of Am from the lanthanides and curium is a key step in proposed advanced fuel cycle scenarios. The partitioning and transmutation of Am is desirable to minimize the long-term heat load of material interred in a future high-level waste repository. A separation process amenable to process scale-up remains elusive. Given the only subtle chemical differences within and between the ions of the trivalent actinide and lanthanide series, this separation is challenging; however, higher oxidation states of americium can be prepared using sodium bismuthate and separated via solvent extraction using diamyl amylphosphonate (DAAP). Among the other trivalent metals, only Ce is also oxidized and extracted. Owing to the long-term instability of Am(VI), the loaded organic phase is readily and selectively stripped to partition the actinide to a new acidic aqueous phase. Batch extraction distribution ratio measurements were used to design a flowsheet to accomplish this separation. Additionally, crossflow filtration was investigated as a method to filter the bismuthate solids from the feed solution prior to extraction. Results of the filtration studies, the flowsheet development work, and flowsheet performance testing using a centrifugal contactor are detailed.
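Batch distribution ratios such as those measured here are typically turned into a multistage flowsheet with a Kremser-type stage calculation. A minimal sketch, assuming a constant distribution ratio and ideal countercurrent stages (the numbers are hypothetical, not the DAAP data from the report):

```python
def kremser_recovery(d_ratio, oa_ratio, n_stages):
    """Fraction of solute moved to the organic phase after n ideal
    countercurrent stages (Kremser equation), assuming a constant
    distribution ratio d_ratio and organic-to-aqueous flow ratio oa_ratio."""
    e_factor = d_ratio * oa_ratio  # extraction factor
    if abs(e_factor - 1.0) < 1e-12:
        raffinate_fraction = 1.0 / (n_stages + 1)
    else:
        raffinate_fraction = (e_factor - 1.0) / (e_factor ** (n_stages + 1) - 1.0)
    return 1.0 - raffinate_fraction
```

With a hypothetical D of 5 and equal flows, four stages already recover over 99% of the extractable metal, the kind of margin a flowsheet design targets.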
HYBRID SULFUR CYCLE FLOWSHEETS FOR HYDROGEN PRODUCTION USING HIGH-TEMPERATURE GAS-COOLED REACTORS
Energy Technology Data Exchange (ETDEWEB)
Gorensek, M.
2011-07-06
Two hybrid sulfur (HyS) cycle process flowsheets intended for use with high-temperature gas-cooled reactors (HTGRs) are presented. The flowsheets were developed for the Next Generation Nuclear Plant (NGNP) program, and couple a proton exchange membrane (PEM) electrolyzer for the SO2-depolarized electrolysis step with a silicon carbide bayonet reactor for the high-temperature decomposition step. One presumes an HTGR reactor outlet temperature (ROT) of 950 °C, the other 750 °C. Performance was improved over earlier flowsheets by assuming that use of a more acid-tolerant PEM, such as acid-doped poly[2,2'-(m-phenylene)-5,5'-bibenzimidazole] (PBI) instead of Nafion®, would allow higher anolyte acid concentrations. The lower ROT was accommodated by adding a direct-contact exchange/quench column upstream of the bayonet reactor and dropping the decomposition pressure. Aspen Plus was used to develop material and energy balances. A net thermal efficiency of 44.0% to 47.6% (higher heating value basis) is projected for the 950 °C case, dropping to 39.9% for the 750 °C case.
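The HHV-basis efficiency quoted above is, in essence, the hydrogen heating value divided by the total equivalent thermal input. A sketch of that bookkeeping, with hypothetical heat and electrical duties and an assumed power-conversion efficiency (none of these numbers are taken from the flowsheets):

```python
HHV_H2 = 286.0  # kJ/mol, higher heating value of hydrogen

def net_thermal_efficiency(q_thermal, w_electric, eta_power=0.45):
    """HHV-basis net thermal efficiency of a hybrid cycle, per mole of H2.
    q_thermal: direct heat duty (kJ/mol); w_electric: electric work (kJ/mol),
    charged against the reactor at an assumed power-conversion efficiency."""
    return HHV_H2 / (q_thermal + w_electric / eta_power)
```

For instance, 350 kJ/mol of heat plus 120 kJ/mol of electricity at 45% conversion gives an efficiency in the mid-40% range, consistent in spirit with the values reported.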
Basic theory and model calculations of the Venus ionosphere
Nagy, A. F.; Cravens, T. E.; Gombosi, T. I.
1983-01-01
An assessment is undertaken of the current understanding of the physical and chemical processes that control the behavior of the Venus ionosphere, in view of the data made available by the Venera and Pioneer Venus missions. Attention is given to the theoretical framework used in general planetary ionosphere studies, especially the equations describing the controlling physical and chemical processes, and to the current status of the ion composition, density, and thermal structure models developed to reproduce the observed ionospheric behavior. No truly comprehensive and successful model of the nightside ionosphere has been published. Furthermore, although dayside energy balance calculations yield electron and ion temperature values that are in close agreement with measured values, the energetics of the nightside eludes understanding.
Determination of appropriate models and parameters for premixing calculations
Energy Technology Data Exchange (ETDEWEB)
Park, Ik-Kyu; Kim, Jong-Hwan; Min, Beong-Tae; Hong, Seong-Wan
2008-03-15
The purpose of the present work is to use experiments performed at Forschungszentrum Karlsruhe over roughly the last ten years to determine the most appropriate models and parameters for premixing calculations. The results of a QUEOS experiment are used to fix the parameters concerning heat transfer. The QUEOS experiments are especially suited for this purpose because they were performed with small hot solid spheres, so the area of heat exchange is known. With the heat transfer parameters fixed in this way, a PREMIX experiment is recalculated. These experiments were performed with molten alumina (Al2O3), at an initial temperature of 2600 K, as a simulant of corium. With these experiments the models and parameters for jet and drop break-up are tested.
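For hot solid spheres like those in QUEOS, the quench history implied by an assumed heat transfer coefficient can be sketched with a lumped-capacitance balance. This is illustrative only: it neglects radiation, assumes a small Biot number, and the property values are rough alumina figures, not the experiment's:

```python
import math

def sphere_temperature(t, h, radius, t_inf=373.0, t0=2600.0,
                       rho=3950.0, cp=1300.0):
    """Lumped-capacitance temperature (K) of a hot sphere quenched in water.
    h: assumed film-boiling heat transfer coefficient (W/m^2.K); rho, cp are
    rough alumina values used purely for illustration."""
    volume = 4.0 / 3.0 * math.pi * radius ** 3
    area = 4.0 * math.pi * radius ** 2
    tau = rho * cp * volume / (h * area)  # time constant = rho*cp*r/(3h)
    return t_inf + (t0 - t_inf) * math.exp(-t / tau)
```

Fitting h so that such a curve matches the measured sphere cooldown is, loosely, the parameter-fixing step the abstract describes.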
Recent Developments in No-Core Shell-Model Calculations
Energy Technology Data Exchange (ETDEWEB)
Navratil, P; Quaglioni, S; Stetcu, I; Barrett, B R
2009-03-20
We present an overview of recent results and developments of the no-core shell model (NCSM), an ab initio approach to the nuclear many-body problem for light nuclei. In this approach, we start from realistic two-nucleon or two- plus three-nucleon interactions. Many-body calculations are performed using a finite harmonic-oscillator (HO) basis. To facilitate convergence for realistic inter-nucleon interactions that generate strong short-range correlations, we derive effective interactions by unitary transformations that are tailored to the HO basis truncation. For soft realistic interactions this might not be necessary; if that is the case, the NCSM calculations are variational. In either case, the ab initio NCSM preserves translational invariance of the nuclear many-body problem. In this review we highlight, in particular, results obtained with the chiral two- plus three-nucleon interactions. We discuss efforts to extend the applicability of the NCSM to heavier nuclei and larger model spaces using importance-truncation schemes and/or effective interactions with a core. We outline an extension of the ab initio NCSM to the description of nuclear reactions by the resonating group method technique. A future direction of the approach, the ab initio NCSM with continuum, which will provide a complete description of nuclei as open systems with coupling of bound and continuum states, is presented in the concluding part of the review.
Modeling and calculation of open carbon dioxide refrigeration system
International Nuclear Information System (INIS)
Cai, Yufei; Zhu, Chunling; Jiang, Yanlong; Shi, Hong
2015-01-01
Highlights: • A model of an open refrigeration system is developed. • The state of CO2 has a great effect on the refrigeration capacity loss by heat transfer. • The refrigeration capacity loss by remaining CO2 has little relation to the state of CO2. • Calculation results are in agreement with the test results. - Abstract: Based on an analysis of the properties of carbon dioxide, an open carbon dioxide refrigeration system is proposed for situations in which no external electricity supply is available. A model of the open refrigeration system is developed, and the relationship between the carbon dioxide storage environment and the refrigeration capacity is derived. Meanwhile, a test platform is developed to simulate the performance of the open carbon dioxide refrigeration system. By comparing the theoretical calculations and the experimental results, several conclusions are obtained: the refrigeration capacity loss by heat transfer in the supercritical state is much greater than that in the two-phase region, and the refrigeration capacity loss by remaining carbon dioxide has little relation to the state of the carbon dioxide. The results will be helpful to the use of open carbon dioxide refrigeration systems.
Energy Technology Data Exchange (ETDEWEB)
Kneur, J.L
2006-06-15
This document is divided into two parts. The first part describes a particular re-summation technique for perturbative series that can give non-perturbative results in some cases. We detail some applications in field theory and in condensed matter, such as the calculation of the effective temperature of Bose-Einstein condensates. The second part deals with the minimal supersymmetric standard model. We present an accurate calculation of the mass spectrum of the supersymmetric particles, a calculation of the relic density of supersymmetric dark matter, and the constraints that we can infer from these models.
Development of nuclear models for higher energy calculations
International Nuclear Information System (INIS)
Bozoian, M.; Siciliano, E.R.; Smith, R.D.
1988-01-01
Two nuclear models for higher energy calculations have been developed for the regions of high and low energy transfer, respectively. In the former, a relativistic hybrid-type preequilibrium model is compared with data ranging from 60 to 800 MeV. Also, the GNASH exciton preequilibrium-model code with higher energy improvements is compared with data at 200 and 318 MeV. In the region of low energy transfer, nucleon-nucleus scattering is predominantly a direct reaction involving quasi-elastic collisions with one or more target nucleons. We discuss various aspects of quasi-elastic scattering which are important in understanding features of cross sections and spin observables. These include (1) contributions from multi-step processes; (2) damping of the continuum response from 2p-2h excitations; (3) the ''optimal'' choice of frame in which to evaluate the nucleon-nucleon amplitudes; and (4) the effect of optical and spin-orbit distortions, which are included in a model based on the RPA, the DWIA, and the eikonal approximation. 33 refs., 15 figs
Quantum plasmonics: from jellium models to ab initio calculations
Directory of Open Access Journals (Sweden)
Varas Alejandro
2016-08-01
Light-matter interaction in plasmonic nanostructures is often treated within the realm of classical optics. However, recent experimental findings show the need to go beyond classical models to explain and predict the plasmonic response at the nanoscale. A prototypical system is the nanoparticle dimer, extensively studied using both classical and quantum prescriptions. However, only very recently have fully ab initio time-dependent density functional theory (TDDFT) calculations of the optical response of these dimers been carried out. Here, we review recent work on the impact of the atomic structure on the optical properties of such systems. We show that TDDFT can be an invaluable tool to simulate the time evolution of plasmonic modes, providing fundamental insight into the underlying microscopic mechanisms.
International Nuclear Information System (INIS)
Fitzgerald, C.L.; Vaughen, V.C.A.; Lamb, C.E.
1977-07-01
Experiments using High-Temperature Gas-Cooled Reactor (HTGR) fuel material, TRISO-coated (2.75 Th/U)C2 with TRISO-coated ThC2 and TRISO-coated UO2 with BISO-coated ThO2, were performed in Building 4507 (the High-Level Chemical Development Facility) to determine the inventory and transport behavior of fission products and heavy metals from a grind-burn-leach process flowsheet. In addition, values calculated by the ORNL Isotope Generation and Depletion Code (ORIGEN, a computer program used for predicting quantities of activation products, actinides, and fission products from irradiation data and nuclear data libraries) are compared with values derived by chemical analyses (CA) and those measured by a gamma-scan nondestructive analytical (NDA) technique. Reasonable agreement was obtained between ORIGEN and NDA results for one of the tests, but the values obtained by chemical analysis were lower than either of the two other sets of values. With the exception of 234U, isotopic uranium values determined by chemical analysis (mass spectrometry) agreed within 15 percent of the ORIGEN predictions.
Selection of models to calculate the LLW source term
International Nuclear Information System (INIS)
Sullivan, T.M.
1991-10-01
Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab
Calculational models of close-spaced thermionic converters
International Nuclear Information System (INIS)
McVey, J.B.
1983-01-01
Two new calculational models have been developed in conjunction with the SAVTEC experimental program. These models have been used to analyze data from experimental close-spaced converters, providing values for spacing, electrode work functions, and converter efficiency. They have also been used to make performance predictions for such converters over a wide range of conditions. Both models are intended for use in the collisionless (Knudsen) regime. They differ from each other in that the simpler one uses a Langmuir-type formulation which considers only electrons emitted from the emitter. This approach is implemented in the LVD (Langmuir Vacuum Diode) computer program, which has the virtue of being both simple and fast. The more complex model also includes both Saha-Langmuir emission of positive cesium ions from the emitter and collector back emission. Computer implementation is by the KMD1 (Knudsen Mode Diode) program. The KMD1 model derives the particle distribution functions from the Vlasov equation. From these, the particle densities are found for various interelectrode motive shapes. Substituting the particle densities into Poisson's equation gives a second-order differential equation for the potential. This equation can be integrated once analytically. The second integration, which gives the interelectrode motive, is performed numerically by the KMD1 program. This is complicated by the fact that the integrand is often singular at one end point of the integration interval. The program performs a transformation on the integrand to make it finite over the entire interval. Once the motive has been computed, the output voltage, current density, power density, and efficiency are found. The program is presently unable to operate when the ion richness ratio β is between about 0.8 and 1.0, due to the occurrence of oscillatory motives.
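The simpler Langmuir-type picture has a well-known limiting case: the space-charge-limited (Child-Langmuir) current of a planar vacuum diode. A sketch of that limit (the LVD program itself also handles emission saturation and motive shapes, which this omits):

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19  # elementary charge, C
M_E = 9.1093837015e-31      # electron mass, kg

def child_langmuir_j(voltage, gap):
    """Space-charge-limited current density (A/m^2) of a planar vacuum diode:
    J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage ** 1.5 / gap ** 2
```

At a 10 micrometer gap and 1 V, this gives a few A/cm^2, which illustrates why close spacing is attractive for vacuum-mode thermionic converters.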
Calculation of extreme wind atlases using mesoscale modeling. Final report
Energy Technology Data Exchange (ETDEWEB)
Larsen, X.G.; Badger, J.
2012-06-15
The objective of this project is to develop new methodologies for extreme wind atlases using mesoscale modeling. Three independent methodologies have been developed. All three are aimed at confronting and solving the problems and drawbacks of existing methods for extreme wind estimation regarding the use of modeled data (coarse resolution, limited representation of storms) and measurements (short periods and technical issues). The first methodology is called the selective dynamical downscaling method. For a chosen area, we identify the yearly strongest storms through global reanalysis data at each model grid point and run a mesoscale model, here the Weather Research and Forecasting (WRF) model, for all storms identified. Annual maximum winds and corresponding directions from each mesoscale grid point are then collected, post-processed, and fitted to a Gumbel distribution to obtain the 50-year wind. The second methodology is called the statistical-dynamical downscaling method. For a chosen area, the geostrophic winds at a representative grid point from the global reanalysis data are used to obtain the annual maximum winds in 12 sectors for a period of 30 years. This results in 360 extreme geostrophic winds. Each of the 360 winds is used as a stationary forcing in a mesoscale model, here KAMM. For each mesoscale grid point, the annual maximum winds are post-processed and used in a Gumbel fit to obtain the 50-year wind. For the above two methods, the post-processing is an essential part. It calculates the speedup effects using a linear computation model (LINCOM) and corrects the winds from the mesoscale modeling to a standard condition, i.e., 10 m above a homogeneous surface with a roughness length of 5 cm. Winds of the standard condition can then be put into a microscale model to resolve the local terrain and roughness effects around particular turbine sites. By converting both the measured and modeled winds to the same surface conditions through the post
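The Gumbel step used by both methodologies can be sketched directly: fit location and scale to the annual maxima (method of moments here, for simplicity; other estimators are common) and read off the 50-year return level. The sample data are invented:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def gumbel_fit_moments(annual_maxima):
    """Fit Gumbel location and scale to annual maxima by the method of moments."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - EULER_GAMMA * scale
    return loc, scale

def return_level(loc, scale, period=50.0):
    """T-year return value from the fitted Gumbel distribution."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / period))
```

For a short series of invented annual maxima around 18-25 m/s, the fitted 50-year wind lands a few m/s above the largest observed value, as expected for a Gumbel tail.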
Computational techniques used in the development of coprocessing flowsheets
International Nuclear Information System (INIS)
Groenier, W.S.; Mitchell, A.D.; Jubin, R.T.
1979-01-01
The computer program SEPHIS, developed to aid in determining optimum solvent extraction conditions for the reprocessing of nuclear power reactor fuels by the Purex method, is described. The program employs a combination of approximate mathematical equilibrium expressions and a transient, stagewise-process calculational method that allows stage and product-stream concentrations to be predicted with accuracy and reliability. The possible applications to inventory control for nuclear material safeguards, nuclear criticality analysis, and process analysis and control are of special interest. The method is also applicable to other countercurrent liquid-liquid solvent extraction processes with known chemical kinetics, possibly involving multiple solutes, that are performed in conventional contacting equipment.
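The transient, stagewise calculational approach can be illustrated with a toy mixer-settler bank: each stage holds both phases at equilibrium and the streams advance countercurrently until a steady state is reached. SEPHIS uses fitted Purex equilibrium expressions; this sketch substitutes a constant distribution ratio and made-up flows and holdups:

```python
def simulate_bank(n_stages, d_ratio, fa, fo, x_feed,
                  va=1.0, vo=1.0, dt=0.005, n_steps=20000):
    """Time-march a bank of ideal mixer-settler stages to steady state.
    Aqueous (flow fa) enters stage 0 at concentration x_feed; fresh organic
    (flow fo) enters the last stage. Phases are assumed equilibrated within
    each stage: y = d_ratio * x. Returns (aqueous, organic) concentration
    profiles along the bank."""
    inv = [0.0] * n_stages  # solute inventory per stage
    for _ in range(n_steps):
        x = [m / (va + vo * d_ratio) for m in inv]  # aqueous concentrations
        y = [d_ratio * xi for xi in x]              # organic concentrations
        nxt = []
        for i in range(n_stages):
            x_in = x_feed if i == 0 else x[i - 1]
            y_in = 0.0 if i == n_stages - 1 else y[i + 1]
            nxt.append(inv[i] + dt * (fa * (x_in - x[i]) + fo * (y_in - y[i])))
        inv = nxt
    x = [m / (va + vo * d_ratio) for m in inv]
    return x, [d_ratio * xi for xi in x]
```

At steady state this reproduces the classical countercurrent equilibrium-stage (Kremser) solution; the transient march is what lets a SEPHIS-style code follow start-up and upset behavior as well.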
Calculating ε'/ε in the standard model
International Nuclear Information System (INIS)
Sharpe, S.R.
1988-01-01
The ingredients needed in order to calculate ε' and ε are described. Particular emphasis is given to the non-perturbative calculations of matrix elements by lattice methods. The status of the electromagnetic contribution to ε' is reviewed. 15 refs
Comparative analysis of calculation models of railway subgrade
Directory of Open Access Journals (Sweden)
I.O. Sviatko
2013-08-01
Purpose. In the design of transport engineering structures, the primary task is to determine the parameters of the foundation soil and the nuances of its behavior under load. When calculating the interaction between the soil subgrade and the upper track structure, it is very important to determine the shear resistance parameters and the parameters governing the development of deep deformations in foundation soils. The aim is to find generalized numerical modeling methods for embankment foundation soil behavior that cover not only the analysis of the foundation's stress state but also of its deformed state. Methodology. Existing modern and classical methods of numerical simulation of soil samples under static load were analyzed. Findings. Under traditional methods of analyzing the behavior of soil masses, limiting and qualitatively estimating subgrade deformations is possible only indirectly, by estimating stresses and comparing the resulting values with boundary values. Originality. A new computational model is proposed that applies not only the classical analysis of the soil subgrade stress state but also takes its deformed state into account. Practical value. The analysis showed that an accurate analysis of soil mass behavior requires a generalized methodology for analyzing the rolling stock-railway subgrade interaction, one that uses not only the classical approach of analyzing the soil subgrade stress state but also takes its deformed state into account.
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
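The probabilistic step can be sketched with a discrete set of candidate configurations: Bayes' theorem turns a forward predictor plus measurements into a posterior over the candidates. Everything here is hypothetical: a linear two-detector response stands in for the radiation transport code, and the prior is uniform.

```python
import math

def posterior(candidates, predict, measured, sigma):
    """Posterior probability of each candidate holdup configuration given
    measured detector responses, assuming Gaussian measurement noise with
    standard deviation sigma and a uniform prior over candidates."""
    likelihoods = []
    for cand in candidates:
        predicted = predict(cand)
        sq_err = sum((p - m) ** 2 for p, m in zip(predicted, measured))
        likelihoods.append(math.exp(-0.5 * sq_err / sigma ** 2))
    norm = sum(likelihoods)
    return [lk / norm for lk in likelihoods]
```

Ranking the candidates by posterior probability is exactly the "plausibility rating" described above, and the spread of the posterior is the estimate of confidence in the inferred configuration.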
Full waveform modelling and misfit calculation using the VERCE platform
Garth, Thomas; Spinuso, Alessandro; Casarotti, Emanuele; Magnoni, Federica; Krischner, Lion; Igel, Heiner; Schwichtenberg, Horst; Frank, Anton; Vilotte, Jean-Pierre; Rietbrock, Andreas
2016-04-01
simulated and recorded waveforms, enabling seismologists to specify and steer their misfit analyses using existing python tools and libraries such as Pyflex and the dispel4py data-intensive processing library. All these processes, including simulation, data access, pre-processing and misfit calculation, are presented to the users of the gateway as dedicated and interactive workspaces. The VERCE platform can also be used to produce animations of seismic wave propagation through the velocity model, and synthetic shake maps. We demonstrate the functionality of the VERCE platform with two case studies, using the pre-loaded velocity model and mesh for Chile and Northern Italy. It is envisioned that this tool will allow a much greater range of seismologists to access these full waveform inversion tools, and aid full waveform tomographic and source inversion, synthetic shake map production and other full waveform applications, in a wide range of tectonic settings.
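At its simplest, a misfit between simulated and recorded traces is a normalized L2 residual; tools like Pyflex refine this with time windowing and phase measurements, which the sketch below omits:

```python
def l2_misfit(synthetic, observed):
    """Normalized L2 waveform misfit between a synthetic and an observed
    trace (sampled at the same times): ||s - o|| / ||o||."""
    num = sum((s - o) ** 2 for s, o in zip(synthetic, observed))
    den = sum(o * o for o in observed)
    return (num / den) ** 0.5
```

A misfit of 0 means the traces are identical; values near 1 mean the synthetic carries as much residual energy as the data themselves, which is roughly where full waveform inversion starts.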
Characterization of Neptunium Oxide Generated Using the HB-Line Phase II Flowsheet
Energy Technology Data Exchange (ETDEWEB)
Duffey, J
2003-08-29
Approximately 98 grams of neptunium(IV) oxide (NpO2) were produced at the Savannah River Technology Center (SRTC) for use in gas generation tests to support the neptunium stabilization program at the Savannah River Site (SRS). The NpO2 was produced according to the anticipated HB-Line flowsheet consisting of anion exchange, oxalate precipitation, filtration, and calcination. Characterization of the NpO2 product to be used in gas generation tests included bulk and tap density measurements, X-ray diffraction, particle size distribution, specific surface area measurements, and moisture analysis.
Studies on the behaviour of heat transfer flowsheets with fluids
International Nuclear Information System (INIS)
Munser, H.
1989-01-01
A model is presented which provides a dynamic description of plant components in the secondary coolant circuit of WWER-1000-type nuclear power stations. These components employ combined heat exchangers as their main constituents, which themselves are composed of simple water-water and steam-water heat exchangers. The simple heat exchangers are described by means of linearized sets of partial differential equations that can be solved by Laplace transformation. The code system DAW has been established for the dynamic modelling of plant components. (author)
Improvements in the model of neutron calculations for research reactors
International Nuclear Information System (INIS)
Calzetta, O.; Leszczynski, F.
1987-01-01
Within the research program in the field of neutron physics calculations carried out in the Nuclear Engineering Division at the Centro Atomico Bariloche, the errors that some typical approximations introduce into the final results are being investigated. For MTR-type research reactors, two approximations are examined, for both high and low enrichment: the treatment of the geometry and the method of calculating few-group cell cross sections, particularly in the resonance energy region. Commonly, the cell constants used for the full-reactor calculation are obtained by homogenizing the complete fuel elements by means of one-dimensional calculations. An improvement is made that explicitly includes the fuel element frames in the core calculation geometry. In addition, a detailed treatment in energy and space is used to find the resonance few-group cross sections, and the results of detailed and approximated calculations are compared. The minimum number and the best mesh of energy groups needed for cell calculations are also determined. (Author)
40 CFR 600.207-93 - Calculation of fuel economy values for a model type.
2010-07-01
... Values § 600.207-93 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... those intended for sale in other states, he will calculate fuel economy values for each model type for...
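The model-type fuel economy value in this part of the regulation is built up from a sales-weighted harmonic average of configuration values; a minimal sketch of that averaging (the sales figures and mpg values are invented):

```python
def model_type_mpg(sales, mpgs):
    """Sales-weighted harmonic-mean fuel economy, the averaging scheme used
    in 40 CFR Part 600 to roll configuration values up to a model type.
    sales: projected sales per configuration; mpgs: fuel economy values."""
    total = sum(sales)
    return total / sum(s / m for s, m in zip(sales, mpgs))
```

The harmonic mean is used because fuel consumption (gallons per mile), not fuel economy, adds linearly across a fleet; an arithmetic mean would overstate the combined mpg.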
Actual Waste Demonstration of the Nitric-Glycolic Flowsheet for Sludge Batch 9 Qualification
Energy Technology Data Exchange (ETDEWEB)
Newell, J. D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Pareizs, J. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Martino, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Reboul, S. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Coleman, C. J. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Johnson, F. C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-09-01
For each sludge batch that is processed in the Defense Waste Processing Facility (DWPF), the Savannah River National Laboratory (SRNL) performs qualification testing to demonstrate that the sludge batch is processable. Testing performed by the Savannah River National Laboratory has shown glycolic acid to be effective in replacing the function of formic acid in the DWPF chemical process. The nitric-glycolic flowsheet reduces mercury, significantly lowers the catalytic generation of hydrogen and ammonia which could allow purge reduction in the Sludge Receipt and Adjustment Tank (SRAT), stabilizes the pH and chemistry in the SRAT and the Slurry Mix Evaporator (SME), allows for effective rheology adjustment, and is favorable with respect to melter flammability. In order to implement the new flowsheet, SRAT and SME cycles, designated SC-18, were performed using a Sludge Batch (SB) 9 slurry blended from SB8 Tank 40H and Tank 51H samples. The SRAT cycle involved adding nitric and glycolic acids to the sludge, refluxing to steam strip mercury, and dewatering to a targeted solids concentration. Data collected during the SRAT cycle included offgas analyses, process temperatures, heat transfer, and pH measurements. The SME cycle demonstrated the addition of glass frit and the replication of six canister decontamination additions. The demonstration concluded with dewatering to a targeted solids concentration. Data collected during the SME cycle included offgas analyses, process temperatures, heat transfer, and pH measurements. Slurry and condensate samples were collected for subsequent analysis
Hydrogen generation in SRAT with nitric acid and late washing flowsheets
International Nuclear Information System (INIS)
Hsu, C.W.
1992-01-01
Melter feed preparation processes, incorporating a final wash of the precipitate slurry feed to the Defense Waste Processing Facility (DWPF) and a partial substitution of the SRAT formic acid requirement with nitric acid, should not produce peak hydrogen generation rates during Cold Chemical Runs (CCRs) and radioactive operation greater than their current, respective hydrogen design bases of 0.024 lb/hr and 1.5 lb/hr. A single SRAT bench-scale process simulation for CCRs produced a DWPF-equivalent peak hydrogen generation rate of 0.004 lb/hr. During radioactive operation, the peak hydrogen generation rate will depend on the extent to which DWPF deviates from the nominal precipitate hydrolysis and melter feed preparation process operating parameters. Two actual radioactive sludges were treated according to the new flowsheets. The peak hydrogen evolution rates were equivalent to 0.038 and 0.20 lb/hr (DWPF scale), respectively. Compared to the formic acid -- HAN hydrolysis flowsheets, these peak rates were reduced by factors of 2.5 and 3.4 for Tank 15 and Tank 11 sludges, respectively
Flowsheet Validation For The Permanganate Digestion Of REILLEX(trademark) HPQ Anion Resin
International Nuclear Information System (INIS)
Kyser, E.
2009-01-01
The flowsheet for the digestion of Reillex(trademark) HPQ was validated both under the traditional alkaline conditions and under strongly acidic conditions. Due to the difficulty of performing a pH adjustment in the large tank where this flowsheet must be carried out, the recommended digestion conditions were changed from pH 8-10 to 8 M HNO3. Thus, no pH adjustment of the solution is required prior to performing the permanganate addition and digestion, and the need to sample the digestion tank to confirm the appropriate pH range may be avoided. Neutralization of the acidic digestion solution will be performed after completion of the resin digestion cycle. The amount of permanganate required for this type of resin (Reillex(trademark) HPQ) was increased from 1 kg/L resin to 4 kg/L resin to reduce the residual resin solids to a minimal amount. The digestion generates manganese dioxide (MnO2) solids (1.71 kg/L resin) and involves the generation of a significant liquid volume due to the low solubility of permanganate. However, since only two batches of resin (40 L each) are expected to be digested, the total waste generated is limited.
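The batch quantities quoted in the abstract imply the totals below. This is an illustrative back-of-the-envelope check using only the figures given above; the variable names are ours, not the report's.

```python
# Totals implied by the flowsheet figures quoted in the abstract.
RESIN_BATCH_L = 40       # liters of resin per batch
N_BATCHES = 2            # batches expected to be digested
KMNO4_PER_L = 4.0        # kg permanganate required per liter of resin (revised basis)
MNO2_PER_L = 1.71        # kg MnO2 solids generated per liter of resin

total_resin = RESIN_BATCH_L * N_BATCHES    # 80 L of resin in total
total_kmno4 = total_resin * KMNO4_PER_L    # permanganate demand, kg
total_mno2 = total_resin * MNO2_PER_L      # MnO2 solids generated, kg

print(total_resin, total_kmno4, total_mno2)
```

So the two 40 L batches would consume on the order of 320 kg of permanganate and leave roughly 137 kg of MnO2 solids, which is consistent with the abstract's point that the total waste generated remains limited.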
DWPF FLOWSHEET STUDIES WITH SIMULANTS TO DETERMINE MCU SOLVENT BUILD-UP IN CONTINUOUS RUNS
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Williams, F.; Crump, S.; Eibling, R.; White, T.; Best, D.
2006-05-25
The Actinide Removal Process (ARP) facility and the Modular Caustic Side Solvent Extraction Unit (MCU) are scheduled to begin processing salt waste in fiscal year 2007. A portion of the streams generated in these salt processing facilities will be transferred to the Defense Waste Processing Facility (DWPF) to be incorporated in the glass matrix. Before the streams are introduced, a combination of impact analyses and research and development studies must be performed to quantify the impacts on DWPF processing. The Process Science & Engineering (PS&E) section of the Savannah River National Laboratory (SRNL) was requested via Technical Task Request (TTR) HLW/DWPF/TTR-2004-0031 to evaluate the impacts on DWPF processing. Simulant Chemical Process Cell (CPC) flowsheet studies have been performed using previous composition and projected volume estimates for the ARP sludge/monosodium titanate (MST) stream. Initial MCU incorporation testing for the DWPF flowsheet indicated unacceptable levels of Isopar{reg_sign}L were collecting in the Sludge Receipt and Adjustment Tank (SRAT) condenser system and unanticipated quantities of modifier were carrying over into the SRAT condenser system. This work was performed as part of Sludge Batch 4 (SB4) flowsheet testing and was reported by Baich et al. Due to changes in the flammability control strategy for DWPF for salt processing, the incorporation strategy for ARP changed and additional ARP flowsheet tests were necessary to validate the new processing strategy. The last round of ARP testing included the incorporation of the MCU stream and identified potential processing issues with the MCU solvent. The identified issues included the potential carry-over and accumulation of the MCU solvent components in the CPC condensers and in the recycle stream to the Tank Farm. Solvent retention in the DWPF condensers contradicts the DWPF solvent control strategy. Therefore, DWPF requested SRNL to perform additional MCU flowsheet studies to better
Calculational advance in the modeling of fuel-coolant interactions
International Nuclear Information System (INIS)
Bohl, W.R.
1982-01-01
A new technique is applied to numerically simulate a fuel-coolant interaction. The technique is based on the ability to calculate separate space- and time-dependent velocities for each of the participating components. In the limiting case of a vapor explosion, this framework allows calculation of the pre-mixing phase of film boiling and interpenetration of the working fluid by hot liquid, which is required for extrapolating from experiments to a hypothetical reactor accident. Qualitative results compare favorably to published experimental data in which an iron-alumina mixture was poured into water. Differing results are predicted with LMFBR materials
Comparison of Calculation Models for Bucket Foundation in Sand
DEFF Research Database (Denmark)
Vaitkunaite, Evelina; Molina, Salvador Devant; Ibsen, Lars Bo
The possibility of fast and rather precise preliminary offshore foundation design is desirable. The ultimate limit state of bucket foundation is investigated using three different geotechnical calculation tools: [Ibsen 2001] an analytical method, LimitState:GEO and Plaxis 3D. The study has focuse...
National Stormwater Calculator - Version 1.1 (Model)
EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico). The SWC estimates runoff at a site based on available information ...
Perturbation theory calculations of model pair potential systems
Energy Technology Data Exchange (ETDEWEB)
Gong, Jianwu [Iowa State Univ., Ames, IA (United States)
2016-01-01
Helmholtz free energy is one of the most important thermodynamic properties for condensed matter systems. It is closely related to other thermodynamic properties such as chemical potential and compressibility. It is also the starting point for studies of interfacial properties and phase coexistence if free energies of different phases can be obtained. In this thesis, we will use an approach based on the Weeks-Chandler-Andersen (WCA) perturbation theory to calculate the free energy of both solid and liquid phases of Lennard-Jones pair potential systems and the free energy of liquid states of Yukawa pair potentials. Our results indicate that the perturbation theory provides an accurate approach to the free energy calculations of liquid and solid phases based upon comparisons with results from molecular dynamics (MD) and Monte Carlo (MC) simulations.
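The WCA approach named in the abstract can be summarized schematically. The split of the Lennard-Jones potential and the first-order free-energy estimate below are the textbook form of the theory, not the specific working equations of this thesis:

```latex
% WCA split of the Lennard-Jones pair potential into a purely repulsive
% reference part U_0 and a slowly varying attractive perturbation U_1:
U_0(r) =
\begin{cases}
U_{\mathrm{LJ}}(r) + \varepsilon, & r < 2^{1/6}\sigma \\
0, & r \ge 2^{1/6}\sigma
\end{cases}
\qquad
U_1(r) =
\begin{cases}
-\varepsilon, & r < 2^{1/6}\sigma \\
U_{\mathrm{LJ}}(r), & r \ge 2^{1/6}\sigma
\end{cases}

% First-order perturbation estimate of the Helmholtz free energy,
% averaged over configurations of the reference system:
A \approx A_0 + \Big\langle \textstyle\sum_{i<j} U_1(r_{ij}) \Big\rangle_0
```

The reference free energy \(A_0\) is what the solid- and liquid-phase calculations must supply; comparison against MD and MC results then tests the accuracy of the first-order truncation.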
Performance Calculations - and Appendix I - Model XC-120 (M-107)
1950-09-25
Mission profiles include: drop pack and return to base; take off with cargo and return to base from the halfway point; gross weight defined at base without pack. Performance presented herein is that required by reference for Standard Aircraft Characteristics. The thrust horsepower available, as used in the performance calculations of this report, is defined in terms of BHP, the engine brake horsepower from engine charts.
International Nuclear Information System (INIS)
Barnes, C.M.; Lauerhass, L.; Olson, A.L.; Taylor, D.D.; Valentine, J.H.; Lockie, K.A.
2002-01-01
The High-Level Waste (HLW) Program at the Idaho National Engineering and Environmental Laboratory (INEEL) must implement technologies and processes to treat and qualify radioactive wastes located at the Idaho Nuclear Technology and Engineering Center (INTEC) for permanent disposal. This paper describes the approach and accomplishments to date for completing development of a baseline vitrification treatment flowsheet for sodium-bearing waste (SBW), including development of a relational database used to manage the associated process assumptions. A process baseline has been developed that includes process requirements, basis and assumptions, process flow diagrams, a process description, and a mass balance. In the absence of actual process or experimental results, mass and energy balance data for certain process steps are based on assumptions. Identification, documentation, validation, and overall management of the flowsheet assumptions are critical to ensuring an integrated, focused program. The INEEL HLW Program initially used a roadmapping methodology, developed through the INEEL Environmental Management Integration Program, to identify, document, and assess the uncertainty and risk associated with the SBW flowsheet process assumptions. However, the mass balance assumptions, process configuration and requirements should be accessible to all program participants. This need resulted in the creation of a relational database that provides formal documentation and tracking of the programmatic uncertainties related to the SBW flowsheet
Calculation of single chain cellulose elasticity using fully atomistic modeling
Xiawa Wu; Robert J. Moon; Ashlie Martini
2011-01-01
Cellulose nanocrystals, a potential base material for green nanocomposites, are ordered bundles of cellulose chains. The properties of these chains have been studied for many years using atomic-scale modeling. However, model predictions are difficult to interpret because of the significant dependence of predicted properties on model details. The goal of this study is...
A modified calculation model for groundwater flowing to horizontal ...
Indian Academy of Sciences (India)
The simulation models for groundwater flowing to horizontal seepage wells proposed by Wang and Zhang (2007) are based on the theory of coupled seepage-pipe flow model which treats the well pipe as a highly permeable medium. However, the limitations of the existing model were found during applications. Specifically ...
Comparison of the performance of net radiation calculation models
DEFF Research Database (Denmark)
Kjærsgaard, Jeppe Hvelplund; Cuenca, R.H.; Martinez-Cob, A.
2009-01-01
Daily values of net radiation are used in many applications of crop-growth modeling and agricultural water management. Measurements of net radiation are not part of the routine measurement program at many weather stations and are commonly estimated based on other meteorological parameters. ... The performance of the empirical models was nearly identical at all sites. Since the empirical models were easier to use and simpler to calibrate than the physically based models, the results indicate that the empirical models can be used as a good substitute for the physically based ones when available...
Energy Technology Data Exchange (ETDEWEB)
Meeks, E.; Chou, C. -P.; Garratt, T.
2013-03-31
Engineering simulations of coal gasifiers are typically performed using computational fluid dynamics (CFD) software, where a 3-D representation of the gasifier equipment is used to model the fluid flow in the gasifier and source terms from the coal gasification process are captured using discrete-phase model source terms. Simulations using this approach can be very time consuming, making it difficult to embed such models into overall system simulations for plant design and optimization. For such system-level designs, process flowsheet software is typically used, such as Aspen Plus® [1], where each component is modeled using a reduced-order model. For advanced power-generation systems, such as integrated gasifier/gas-turbine combined-cycle systems (IGCC), the critical components determining overall process efficiency and emissions are usually the gasifier and combustor. Providing more accurate and more computationally efficient reduced-order models for these components therefore enables much more effective plant-level design optimization and design for control. Based on the CHEMKIN-PRO and ENERGICO software, we have developed an automated methodology for generating an advanced form of reduced-order model for gasifiers and combustors. The reduced-order model offers representation of key unit operations in flowsheet simulations, while allowing simulation that is fast enough to be used in iterative flowsheet calculations. Using high-fidelity fluid-dynamics models as input, Reaction Design's ENERGICO® [2] software can automatically extract equivalent reactor networks (ERNs) from a CFD solution. For the advanced reduced-order concept, we introduce into the ERN a much more detailed kinetics model than can be included practically in the CFD simulation. The state-of-the-art chemistry solver technology within CHEMKIN-PRO allows that to be accomplished while still maintaining a very fast model turn-around time. In this way, the ERN becomes the basis for
International Nuclear Information System (INIS)
Law, J.D.; Wood, D.J.; Todd, T.A.
1999-01-01
Laboratory experimentation has indicated that the SREX process is effective for partitioning 90Sr from acidic radioactive waste solutions located at the Idaho Nuclear Technology and Engineering Center. These laboratory results were used to develop a flowsheet for countercurrent testing of the SREX process with dissolved pilot plant calcine. Testing was performed using 24 stages of 2-cm diameter centrifugal contactors installed in the Remote Analytical Laboratory hot cell. Dissolved Run No. 64 pilot plant calcine spiked with 85Sr was used as the feed solution. The flowsheet tested consisted of an extraction section (0.15 M 4',4'(5')-di-(tert-butylcyclohexano)-18-crown-6 and 1.5 M TBP in Isopar-L), a 1.0 M NaNO3 scrub section to remove extracted K from the SREX solvent, a 0.01 M HNO3 strip section for the removal of Sr from the SREX solvent, a 0.25 M Na2CO3 wash section to remove degradation products from the solvent, and a 0.1 M HNO3 rinse section. The behavior of 85Sr, Na, K, Al, B, Ca, Cr, Fe, Ni, and Zr was evaluated. The described flowsheet successfully extracted 85Sr from the dissolved pilot plant calcine with a removal efficiency of 99.6%. Distribution coefficients for 85Sr ranged from 3.6 to 4.5 in the extraction section. With these distribution coefficients, a removal efficiency of greater than 99.99% was expected. The lower than expected removal efficiency can be attributed to a stage efficiency of only 60% in the extraction section. Extracted K was effectively scrubbed from the SREX solvent with the 1.0 M NaNO3, resulting in only 6.4% of the K in the HLW strip product. Sodium was not extracted from the dissolved calcine by the SREX solvent; however, the use of a 1.0 M NaNO3 scrub solution resulted in a Na concentration of 70 mg/L (12.3% of the feed concentration) in the HLW strip product. Al, B, Ca, Cr, Fe, Ni, and Zr were determined to be essentially inextractable
International Nuclear Information System (INIS)
Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II
1982-03-01
This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
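The abstract says CLEAR computes the travel velocity on a road segment as a function of its vehicle density. The sketch below uses the classic Greenshields linear speed-density relation as a stand-in; this is an assumption for illustration, since the abstract does not state which relation CLEAR actually uses, and the parameter values are hypothetical.

```python
def segment_speed(density, free_flow_speed=55.0, jam_density=200.0):
    """Speed on a road segment (mph) as a linear function of vehicle
    density (vehicles/mile). Greenshields' model -- an illustrative
    assumption, not necessarily CLEAR's actual relation."""
    density = min(max(density, 0.0), jam_density)  # clamp to physical range
    return free_flow_speed * (1.0 - density / jam_density)

# Flow (vehicles/hour) = density * speed; it peaks at half the jam density,
# which is why queues form once a segment loads past that point.
def segment_flow(density):
    return density * segment_speed(density)
```

A simulation in this style would step each segment forward in time, moving vehicles between segments at the flow rate and holding back the excess in queues at intersections, as the abstract describes.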
Extraproximal approach to calculating equilibriums in pure exchange models
Antipin, A. S.
2006-10-01
Models of economic equilibrium are a powerful tool of mathematical modeling of various markets. However, according to many publications, there are as yet no universal techniques for finding equilibrium prices that are solutions to such models. A technique of this kind that is a natural implementation of the Walras idea of tatonnements (i.e., groping for equilibrium prices) is proposed, and its convergence is proved.
Expanding of reactor power calculation model of RELAP5 code
International Nuclear Information System (INIS)
Lin Meng; Yang Yanhua; Chen Yuqing; Zhang Hong; Liu Dingming
2007-01-01
To better analyze nuclear power transients in a rod-controlled reactor core with RELAP5, a best-estimate nuclear reactor thermal-hydraulic system code, it is desirable to compute the nuclear power using not only the point neutron kinetics model but also a one-dimensional neutron kinetics model. Thus an existing one-dimensional reactor physics code was modified to couple its neutron kinetics model with the RELAP5 thermal-hydraulic model. A detailed example test shows that the coupling is valid and correct. (authors)
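For context, the point kinetics model that RELAP5 provides by default reduces the core to a single power amplitude plus delayed-neutron precursors. A minimal one-delayed-group sketch, with illustrative parameter values that are not RELAP5's, looks like this:

```python
def point_kinetics_step(n, c, rho, dt, beta=0.0065, lam=0.08, Lambda=1e-4):
    """One explicit-Euler step of the one-delayed-group point kinetics
    equations:
        dn/dt = ((rho - beta)/Lambda) * n + lam * c
        dc/dt = (beta/Lambda) * n - lam * c
    n: neutron (power) amplitude, c: precursor concentration,
    rho: reactivity. Parameter values are illustrative only."""
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

# At zero reactivity with equilibrium precursors, power stays flat:
n, c = 1.0, 0.0065 / (0.08 * 1e-4)   # c = beta*n/(lam*Lambda)
for _ in range(1000):
    n, c = point_kinetics_step(n, c, rho=0.0, dt=1e-5)
```

The limitation motivating the paper is visible here: the model has no spatial dimension, so it cannot represent an axial flux shift as control rods move, which is what coupling a one-dimensional kinetics code to the thermal-hydraulics addresses.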
A Monte Carlo model of complex spectra of opacity calculations
International Nuclear Information System (INIS)
Klapisch, M.; Duffy, P.; Goldstein, W.H.
1991-01-01
We are developing a Monte Carlo method for calculating opacities of complex spectra. It should be faster than atomic structure codes and is more accurate than the UTA method. We use the idea that wavelength-averaged opacities depend on the overall properties, but not the details, of the spectrum; our spectra have the same statistical properties as real ones but the strength and energy of each line is random. In preliminary tests we can get Rosseland mean opacities within 20% of actual values. (orig.)
Carbon dioxide fluid-flow modeling and injectivity calculations
Burke, Lauri
2011-01-01
At present, the literature lacks a geologic-based assessment methodology for numerically estimating injectivity, lateral migration, and subsequent long-term containment of supercritical carbon dioxide that has undergone geologic sequestration into subsurface formations. This study provides a method for and quantification of first-order approximations for the time scale of supercritical carbon dioxide lateral migration over a one-kilometer distance through a representative volume of rock. These calculations provide a quantified foundation for estimating injectivity and geologic storage of carbon dioxide.
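A first-order migration-time estimate of the kind the abstract describes can be sketched with Darcy's law. All property values below are illustrative assumptions for a generic reservoir, not figures from the study:

```python
# Order-of-magnitude estimate of lateral CO2 migration time over 1 km.
# Every value here is an assumed, illustrative reservoir property.
k = 1e-13        # permeability, m^2 (about 100 millidarcy)
mu = 5e-5        # supercritical CO2 viscosity, Pa*s
dP = 1e6         # driving pressure difference over the path, Pa
L = 1000.0       # lateral migration distance, m
phi = 0.2        # porosity

q = (k / mu) * (dP / L)          # Darcy flux, m/s
v = q / phi                      # average interstitial velocity, m/s
t_years = L / v / (3600 * 24 * 365)
```

With these assumed values the travel time over one kilometer comes out on the order of a few years; changing permeability or pressure gradient by an order of magnitude shifts the estimate proportionally, which is why such calculations are framed as first-order approximations.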
Long-Term Calculations with Large Air Pollution Models
DEFF Research Database (Denmark)
Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.
1999-01-01
Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998
Numerical calculation of path integrals : The small-polaron model
Raedt, Hans De; Lagendijk, Ad
1983-01-01
The thermodynamic properties of the small-polaron model are studied by means of a discrete version of the Feynman path-integral representation of the partition function. This lattice model describes a fermion interacting with a boson field. The bosons are treated analytically, the fermion
A review of Higgs mass calculations in supersymmetric models
DEFF Research Database (Denmark)
Draper, P.; Rzehak, H.
2016-01-01
The discovery of the Higgs boson is both a milestone achievement for the Standard Model and an exciting probe of new physics beyond the SM. One of the most important properties of the Higgs is its mass, a number that has proven to be highly constraining for models of new physics, particularly those related to the electroweak hierarchy problem. Perhaps the most extensively studied examples are supersymmetric models, which, while capable of producing a 125 GeV Higgs boson with SM-like properties, do so in non-generic parts of their parameter spaces. We review the computation of the Higgs mass...
FY13 GLYCOLIC-NITRIC ACID FLOWSHEET DEMONSTRATIONS OF THE DWPF CHEMICAL PROCESS CELL WITH SIMULANTS
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Zamecnik, J.; Best, D.
2014-03-13
Savannah River Remediation is evaluating changes to its current Defense Waste Processing Facility flowsheet to replace formic acid with glycolic acid in order to improve processing cycle times and decrease by approximately 100x the production of hydrogen, a potentially flammable gas. Higher throughput is needed in the Chemical Processing Cell since the installation of the bubblers into the melter has increased melt rate. Due to the significant maintenance required for the safety significant gas chromatographs and the potential for production of flammable quantities of hydrogen, eliminating the use of formic acid is highly desirable. Previous testing at the Savannah River National Laboratory has shown that replacing formic acid with glycolic acid allows the reduction and removal of mercury without significant catalytic hydrogen generation. Five back-to-back Sludge Receipt and Adjustment Tank (SRAT) cycles and four back-to-back Slurry Mix Evaporator (SME) cycles were successful in demonstrating the viability of the nitric/glycolic acid flowsheet. The testing was completed in FY13 to determine the impact of process heels (approximately 25% of the material is left behind after transfers). In addition, back-to-back experiments might identify longer-term processing problems. The testing was designed to be prototypic by including sludge simulant, Actinide Removal Product simulant, nitric acid, glycolic acid, and Strip Effluent simulant containing Next Generation Solvent in the SRAT processing and SRAT product simulant, decontamination frit slurry, and process frit slurry in the SME processing. A heel was produced in the first cycle and each subsequent cycle utilized the remaining heel from the previous cycle. Lower SRAT purges were utilized due to the low hydrogen generation. Design basis addition rates and boilup rates were used so the processing time was shorter than current processing rates.
What do business models do? Narratives, calculation and market exploration
Liliana Doganova; Marie Renault
2008-01-01
http://www.csi.ensmp.fr/Items/WorkingPapers/Download/DLWP.php?wp=WP_CSI_012.pdf; CSI WORKING PAPERS SERIES 012; International audience; Building on a case study of an entrepreneurial venture, we investigate the role played by business models in the innovation process. Rather than debating their accuracy and efficiency, we adopt a pragmatic approach to business models -- we examine them as market devices, focusing on their materiality, use and dynamics. Taking into account the variety of its f...
A simple model for calculating air pollution within street canyons
Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.
2014-04-01
This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is good agreement between estimated and observed hourly concentrations (e.g. the fractional bias is -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
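The scaling named in the abstract (emission rate, canyon width, dispersive velocity scale, background concentration) can be sketched as follows. The specific functional form C = c_b + Q/(W*u_d) is an assumed implementation of that scaling for illustration, not SEUS's published equation:

```python
def canyon_concentration(Q, W, u_d, c_b=0.0):
    """Street-canyon pollutant concentration from a box-model scaling:
    Q   -- line-source emission rate per unit street length (g m^-1 s^-1)
    W   -- canyon width (m)
    u_d -- dispersive velocity scale, combining wind- and traffic-induced
           turbulence (m/s)
    c_b -- background concentration (g/m^3)
    The form c_b + Q/(W*u_d) is an illustrative assumption."""
    return c_b + Q / (W * u_d)
```

The structure makes the sensitivity result plausible: the excess concentration is inversely proportional to u_d, so errors in the two empirical parameters inside u_d enter only through that single velocity scale, while errors in Q or W act on the result directly.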
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by an adopted algorithm for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' error. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. Especially, how to quantify the uncertainty of VCs is proposed by a confidence interval based on truncation error (TE). In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte-Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
An hydrodynamic model for the calculation of oil spills trajectories
Energy Technology Data Exchange (ETDEWEB)
Paladino, Emilio Ernesto; Maliska, Clovis Raimundo [Santa Catarina Univ., Florianopolis, SC (Brazil). Dept. de Engenharia Mecanica. Lab. de Dinamica dos Fluidos Computacionais]. E-mails: emilio@sinmec.ufsc.br; maliska@sinmec.ufsc.br
2000-07-01
The aim of this paper is to present a mathematical model and its numerical treatment for forecasting oil spill trajectories at sea. Knowledge of the trajectory followed by an oil slick spilled on the sea is of fundamental importance in estimating potential risks for pipeline and tanker route selection, and in combating pollution using floating barriers, detergents, etc. To estimate these slick trajectories, a new model based on the mass and momentum conservation equations is presented. The model considers spreading in the regimes where inertial and viscous forces counterbalance gravity and takes into account the effects of winds and water currents. The inertial forces are considered for both the spreading and the displacement of the oil slick, i.e., their effects on the movement of the mass center of the slick are included. The mass loss caused by oil evaporation is also taken into account. The numerical model is developed in generalized coordinates, making it easily applicable to complex coastal geographies. (author)
Uncertain hybrid model for the response calculation of an alternator
International Nuclear Information System (INIS)
Kuczkowiak, Antoine
2014-01-01
The complex structural dynamic behavior of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision support tools in order to assess the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity-to-data and robustness-to-uncertainties, which expresses that robustness improves as fidelity deteriorates, is demonstrated on an industrial structure using both reduced order model and surrogate model techniques. (author)
40 CFR 600.207-86 - Calculation of fuel economy values for a model type.
2010-07-01
... Values § 600.207-86 Calculation of fuel economy values for a model type. (a) Fuel economy values for a... update sales projections at the time any model type value is calculated for a label value. (iii) The... the projected sales and fuel economy values for each base level within the model type. (1) If the...
Model for calculation of concentration and load on behalf of accidents with radioactive materials
International Nuclear Information System (INIS)
Janssen, L.A.M.; Heugten, W.H.H. van
1987-04-01
In the project 'Information- and calculation-system for disaster combatment', commissioned by the Dutch government, a demonstration model has been developed for a diagnosis system for accidents. In this demonstration, a model is used to calculate the concentration and dose distributions caused by incidental emissions of limited duration. This model is described in this report. 4 refs.; 2 figs.; 3 tabs
A modified calculation model for groundwater flowing to horizontal ...
Indian Academy of Sciences (India)
well pipe and aquifer couples the turbulent flow inside the horizontal seepage well with laminar flow in the aquifer. .... In the well pipe, the relationship between hydraulic head loss and flow velocity .... the steady-state mathematic model is developed for groundwater flowing to the horizontal seepage well under a river valley.
Source data for modeling of thermal engineering calculations
Directory of Open Access Journals (Sweden)
Charvátová Pavlína
2018-01-01
Full Text Available Demands on thermal insulation are increasing. Its more accurate assessment by computers leads to increasingly large differences between computational models and reality. The result is an increasingly problematic optimization of building design. One of the key initial parameters is climatological data.
A calculation model for a HTR core seismic response
International Nuclear Information System (INIS)
Buland, P.; Berriaud, C.; Cebe, E.; Livolant, M.
1975-01-01
The paper presents the experimental results obtained at Saclay on an HTGR core model and comparisons with analytical results. Two series of horizontal tests have been performed on the shaking table VESUVE: sinusoidal tests and time-history response tests. The acceleration of the graphite blocks, forces on the boundaries, relative displacement of the core and PCRV model, and impact velocity of the blocks on the boundaries were recorded. These tests have shown the strongly non-linear dynamic behaviour of the core: the resonant frequency of the core depends on the level of the excitation. These phenomena have been explained by a computer code, which is a lumped-mass non-linear model. Good correlation between experimental and analytical results was obtained for impact velocities and forces on the boundaries. This comparison has shown that the damping of the core is a critical parameter for the estimation of forces and velocities. The time-history displacement at the level of the PCRV was reproduced on the shaking table. The analytical model was applied to this excitation and good agreement was obtained for forces and velocities. (orig./HP) [de
Calculation of benchmarks with a shear beam model
Hendriks, M.A.N.; Boer, A.; Rots, J.G.; Ferreira, D.
2015-01-01
Fiber models for beam and shell elements allow for relatively rapid finite element analysis of concrete structures and structural elements. This project aims at the development of the formulation of such elements and a pilot implementation. Standard nonlinear fiber beam formulations do not account
Reactor accident calculation models in use in the Nordic countries
International Nuclear Information System (INIS)
Tveten, U.
1984-01-01
The report relates to a subproject under a Nordic project called ''Large reactor accidents - consequences and mitigating actions''. In the first part of the report short descriptions of the various models are given. A systematic list by subject is then given. In the main body of the report chapter and subchapter headings are by subject. (Auth.)
Semiclassical calculation for collision induced dissociation. II. Morse oscillator model
International Nuclear Information System (INIS)
Rusinek, I.; Roberts, R.E.
1978-01-01
A recently developed semiclassical procedure for calculating collision-induced dissociation probabilities P(diss) is applied to the collinear collision between a particle and a Morse-oscillator diatomic. The particle-diatom interaction is described with a repulsive exponential potential function. P(diss) is reported for a system of three identical particles as a function of the collision energy E(t) and the initial vibrational state of the diatomic, n1. The results are compared with the previously reported values for the collision between a particle and a truncated harmonic oscillator. The two studies show similar features, namely: (a) there is an oscillatory structure in the P(diss) energy profiles, which is directly related to n1; (b) P(diss) becomes noticeable (approximately 10^-3 or larger) for E(t) values appreciably higher than the energetic threshold; (c) vibrational enhancement (inhibition) of collision-induced dissociation persists at low (high) energies; and (d) good agreement between the classical and semiclassical results is found above the classical dynamic threshold. Finally, the convergence of P(diss) for increasing box length is shown to be rapid and satisfactory.
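As an aside, the bound-level structure of the Morse oscillator referred to above can be sketched directly from its standard closed-form spectrum. The snippet below is illustrative only (arbitrary reduced units, not the paper's semiclassical procedure):

```python
import math

def morse_levels(De, a, mu, hbar=1.0):
    """Bound vibrational energies of a Morse oscillator.

    E_n = hbar*w0*(n + 1/2) - (hbar*w0)**2 / (4*De) * (n + 1/2)**2,
    with w0 = a*sqrt(2*De/mu).  The well supports only quantum numbers
    with n + 1/2 < lambda, where lambda = sqrt(2*mu*De) / (a*hbar);
    higher states lie in the dissociation continuum.
    """
    w0 = a * math.sqrt(2.0 * De / mu)
    lam = math.sqrt(2.0 * mu * De) / (a * hbar)
    n_max = math.floor(lam - 0.5)   # highest bound quantum number
    levels = []
    for n in range(n_max + 1):
        x = n + 0.5
        levels.append(hbar * w0 * x - (hbar * w0) ** 2 / (4.0 * De) * x * x)
    return levels
```

With well depth De = 10, range parameter a = 1 and reduced mass mu = 1 (all in the same reduced units), the well holds four bound levels below the dissociation energy.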
Approximate models for neutral particle transport calculations in ducts
International Nuclear Information System (INIS)
Ono, Shizuca
2000-01-01
The problem of neutral particle transport in evacuated ducts of arbitrary, but axially uniform, cross-sectional geometry and isotropic reflection at the wall is studied. The model makes use of basis functions to represent the transverse and azimuthal dependences of the particle angular flux in the duct. For the approximation in terms of two basis functions, an improvement in the method is implemented by decomposing the problem into uncollided and collided components. A new quadrature set, more suitable to the problem, is developed and generated by one of the techniques of the constructive theory of orthogonal polynomials. The approximation in terms of three basis functions is developed and implemented to improve the precision of the results. For both models of two and three basis functions, the energy dependence of the problem is introduced through the multigroup formalism. The results of sample problems are compared to literature results and to results of the Monte Carlo code, MCNP. (author)
Generic model for calculating carbon footprint of milk using four different LCA modelling approaches
DEFF Research Database (Denmark)
Dalgaard, Randi; Schmidt, Jannick Højrup; Flysjö, Anna
2014-01-01
The aim of the study is to develop a tool which can be used for calculation of the carbon footprint (using a life cycle assessment (LCA) approach) of milk, both at a farm level and at a national level. The functional unit is '1 kg energy corrected milk (ECM) at farm gate' and the applied methodology is LCA. The model includes switches that enable, within the same scope, transforming the results to comply with 1) consequential LCA, 2) allocation/average modelling (or 'attributional LCA'), 3) PAS 2050 and 4) the International Dairy Federation's (IDF) guide to standard life cycle assessment...
The curvature calculation mechanism based on simple cell model.
Yu, Haiyang; Fan, Xingyu; Song, Aiqi
2017-07-20
A conclusion has not yet been reached on how exactly the human visual system detects curvature. This paper demonstrates how orientation-selective simple cells can be used to construct curvature-detecting neural units. Through fixed arrangements, multiple plurality cells were constructed to simulate curvature cells with an output proportional to their curvature. In addition, this paper offers a solution to the problem of a narrow detection range under fixed resolution by selecting an output value across multiple resolutions. Curvature cells can be treated as concrete models of an end-stopped mechanism, and they can be used to further understand "curvature-selective" characteristics and to explain basic psychophysical findings and perceptual phenomena in current studies.
Accurate modeling of defects in graphene transport calculations
Linhart, Lukas; Burgdörfer, Joachim; Libisch, Florian
2018-01-01
We present an approach for embedding defect structures modeled by density functional theory into large-scale tight-binding simulations. We extract local tight-binding parameters for the vicinity of the defect site using Wannier functions. In the transition region between the bulk lattice and the defect the tight-binding parameters are continuously adjusted to approach the bulk limit far away from the defect. This embedding approach allows for an accurate high-level treatment of the defect orbitals using as many as ten nearest neighbors while keeping a small number of nearest neighbors in the bulk to render the overall computational cost reasonable. As an example of our approach, we consider an extended graphene lattice decorated with Stone-Wales defects, flower defects, double vacancies, or silicon substitutes. We predict distinct scattering patterns mirroring the defect symmetries and magnitude that should be experimentally accessible.
User Guide for GoldSim Model to Calculate PA/CA Doses and Limits
International Nuclear Information System (INIS)
Smith, F.
2016-01-01
A model to calculate doses for solid waste disposal at the Savannah River Site (SRS) and corresponding disposal limits has been developed using the GoldSim commercial software. The model implements the dose calculations documented in SRNL-STI-2015-00056, Rev. 0 ''Dose Calculation Methodology and Data for Solid Waste Performance Assessment (PA) and Composite Analysis (CA) at the Savannah River Site''.
Investigation of a model to verify software for 3-D static force calculation
Takahashi, Norio; Nakata, Takayoshi; Morishige, H.
1994-01-01
Requirements for a model to verify software for 3-D static force calculation are examined, and a 3-D model for static force calculation is proposed. Some factors affecting the analysis and experiments are investigated in order to obtain accurate and reproducible results
A model for calculating expected performance of the Apollo unified S-band (USB) communication system
Schroeder, N. W.
1971-01-01
A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.
Cost calculation model concerning small-scale production of chips and split firewood
International Nuclear Information System (INIS)
Ryynaenen, S.; Naett, H.; Valkonen, J.
1995-01-01
The TTS-Institute's Forestry Department has developed a computer-based cost calculation model for the production of wood chips and split firewood. This development work was carried out in conjunction with the nation-wide BIOENERGY research programme. The calculation model eases and speeds up the calculation of unit costs and resource needs in harvesting systems for wood chips and split firewood. The model also enables the user to find out how changes in the productivity and cost bases of different harvesting chains influence the unit costs of the system as a whole. The undertaking was composed of the following parts: clarification and modification of productivity bases for application in the model as mathematical models, clarification of machine and device cost bases, design of the structure and functions of the calculation model, construction and testing of the model's 0-version, model calculations concerning typical chains, review of calculation bases, and charting of development needs focusing on the model. The calculation model was developed to serve research needs, but with further development it could be a useful tool in forestry and agricultural extension work, in related schools and colleges, and in the hands of firewood producers. (author)
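The unit-cost logic such a model rests on (hourly cost divided by productivity, summed over the phases of a harvesting chain) can be sketched minimally. The structure and numbers below are illustrative assumptions, not the TTS model's actual cost bases:

```python
def unit_cost(hourly_cost, productivity_m3_per_h):
    """Unit cost (currency units per solid m^3) of one work phase:
    total hourly cost of the machine/labour divided by productivity."""
    return hourly_cost / productivity_m3_per_h

def chain_unit_cost(phases):
    """Total unit cost of a harvesting chain as the sum of its phases.

    phases: iterable of (hourly_cost, productivity_m3_per_h) pairs,
    e.g. felling, chipping, transport.  Changing one phase's
    productivity immediately shows its effect on the whole chain.
    """
    return sum(unit_cost(c, p) for c, p in phases)
```

For example, a two-phase chain costing 60/h at 4 m3/h and 45/h at 9 m3/h gives a system unit cost of 20 per m3.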
Energy Technology Data Exchange (ETDEWEB)
Eager, K.M.; Penwell, D.L.; Knutson, B.J.
1994-12-01
This preliminary flowsheet document describes an ion exchange process which uses Duolite{trademark} CS-100 resin to remove cesium from Hanford tank waste. The flowsheet describes one possible equipment configuration and contains mass balances based on that configuration with feeds of Neutralized Current Acid Waste and Double Shell Slurry Feed. Process alternatives, unresolved issues, and development needs related to the process are discussed.
Er, Li; Xiangying, Zeng
2014-01-01
To simulate the variation of biochemical oxygen demand (BOD) in the tidal Foshan River, inverse calculations based on the time domain are applied to the longitudinal dispersion coefficient (E(x)) and BOD decay rate (K(x)) in the BOD model for the tidal Foshan River. The formulations of the inverse calculation were established separately for the different flow directions in the tidal river. The results of this paper indicate that the values of BOD calculated with the inverse calculation developed for the tidal Foshan River match the measured ones well. According to the calibration and verification of the inversely calculated BOD models, K(x) is more sensitive to the models than E(x), and different data sets of E(x) and K(x) hardly affect the precision of the models.
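For orientation, the forward problem behind such a model has a textbook steady-state solution when u, E(x) and K(x) are held constant (a simplification for illustration, not the authors' tidal, time-domain inverse scheme):

```python
import math

def bod_downstream(C0, x, u, Ex, Kx):
    """Steady-state BOD at distance x downstream of a constant source.

    Analytic solution of  u dC/dx = Ex d2C/dx2 - Kx C  with C(0) = C0
    and C -> 0 far downstream:

        C(x) = C0 * exp( u * (1 - sqrt(1 + 4*Kx*Ex/u**2)) * x / (2*Ex) )

    u: velocity (m/s), Ex: dispersion coefficient (m^2/s),
    Kx: first-order decay rate (1/s).
    """
    j = math.sqrt(1.0 + 4.0 * Kx * Ex / (u * u))
    return C0 * math.exp(u * (1.0 - j) * x / (2.0 * Ex))
```

With Kx = 0 the expression reduces to C(x) = C0, and any positive decay rate gives monotone attenuation downstream, which is a convenient sanity check.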
Directory of Open Access Journals (Sweden)
Darya Sergeevna Simonenkova
2013-09-01
Full Text Available The subject of the research is the analysis of various models of information systems constructed using cloud computing technologies. The analysis of the models is required to construct a new reference model, which will be used to develop a model of security threats.
Implementation of the neutronics model of HEXTRAN/HEXBU-3D into APROS for WWER calculations
International Nuclear Information System (INIS)
Rintala, J.
2008-01-01
A new three-dimensional nodal model for neutronics calculation is currently being implemented in APROS - the Advanced PROcess Simulation environment - to meet increasing accuracy requirements. The new model is based on the advanced nodal code HEXTRAN and its static version HEXBU-3D by VTT, the Technical Research Centre of Finland. The new APROS is currently under a testing programme; later, a systematic validation will be performed. In the first phase, the goal is to obtain a fully validated model for VVER-440 calculations, so all the current test calculations are performed using the Loviisa NPP's VVER-440 model of APROS. In the future, the model is planned to be applied to calculations of VVER-1000 type reactors as well as to rectangular fuel geometry. The paper first outlines the general aspects of the method and then the current status of the implementation. Because the model is identical to those of HEXTRAN and HEXBU-3D, the results of the test calculations are compared to the results of those codes. In the paper, results of two static test calculations are shown. The model already works well in static analyses. Only minor problems with the control assemblies of the VVER-440 type reactor still exist, but the reasons are known and will be corrected in the near future. The dynamic characteristics of the model have so far been tested only by some empirical tests. (author)
AFS-2 FLOWSHEET MODIFICATIONS TO ADDRESS THE INGROWTH OF PU(VI) DURING METAL DISSOLUTION
Energy Technology Data Exchange (ETDEWEB)
Crapse, K.; Rudisill, T.; O'Rourke, P.; Kyser, E.
2014-07-02
In support of the Alternate Feed Stock Two (AFS-2) PuO{sub 2} production campaign, Savannah River National Laboratory (SRNL) conducted a series of experiments concluding that dissolving Pu metal at 95°C using a 6–10 M HNO{sub 3} solution containing 0.05–0.2 M KF and 0–2 g/L B could reduce the oxidation of Pu(IV) to Pu(VI) as compared to dissolving Pu metal under the same conditions but at or near the boiling temperature. This flowsheet was demonstrated by conducting Pu metal dissolutions at 95°C to ensure that PuO{sub 2} solids were not formed during the dissolution. These dissolution parameters can be used for dissolving both Aqueous Polishing (AP) and MOX Process (MP) specification materials. Preceding the studies reported herein, two batches of Pu metal were dissolved in the H-Canyon 6.1D dissolver to prepare feed solution for the AFS-2 PuO{sub 2} production campaign. While in storage, UV-visible spectra obtained from an at-line spectrophotometer indicated the presence of Pu(VI). Analysis of the solutions also showed the presence of Fe, Ni, and Cr. Oxidation of the Pu(IV) produced during metal dissolution to Pu(VI) is a concern for anion exchange purification. Anion exchange requires Pu in the +4 oxidation state for formation of the anionic plutonium(IV) hexanitrato complex, which adsorbs onto the resin. The presence of Pu(VI) in the anion feed solution would require a valence adjustment step to prevent losses. In addition, the presence of Cr(VI) would result in adsorption of chromate ion onto the resin and could limit the purification of Pu from Cr, which may challenge the purity specification of the final PuO{sub 2} product. Initial experiments were performed to quantify the rate of oxidation of Pu(IV) to Pu(VI) (presumed to be facilitated by Cr(VI)) as functions of the HNO{sub 3} concentration and temperature in simulated dissolution solutions containing Cr, Fe, and Ni. In these simulated Pu dissolution studies, lowering the temperature from near boiling
Significance of mineralogy in the development of flowsheets for processing uranium ores
International Nuclear Information System (INIS)
1980-01-01
This report has been prepared from material developed at and subsequent to a consultants' meeting held in Vienna in January 1978. The main purpose of the meeting was to prepare a document in the form of a guide for planning and developing treatment flowsheets for uranium ore processing. It was apparent that ore mineralogy, analysed, described and interpreted in ways most meaningful to the metallurgist, is the most essential information required for forming the basis of such planning. This topic, here termed metallurgical mineralogy, is therefore a major theme of this publication. In preparing the report the Agency has borne in mind the important need to impart the experience and knowledge gained in the more developed countries to those who are in the early stages of exploiting their uranium resources. The contents may be criticized as lacking, in some respects, the requisite depth and detail of treatment. The Agency and the consultants are conscious of the need to expand the information in a number of ways. However, the report is presented in its present form in the belief that, as the first attempt to correlate, on a world-wide basis, ore type with processing, it will be considered as a useful basis for future development of these themes
Calculation of DC Arc Plasma Torch Voltage-Current Characteristics Based on the Steenbeck Model
International Nuclear Information System (INIS)
Gnedenko, V.G.; Ivanov, A.A.; Pereslavtsev, A.V.; Tresviatsky, S.S.
2006-01-01
The work is devoted to the problem of determining plasma torch parameters and power source parameters (working voltage and current of the plasma torch) at the predesign stage. A sequence for calculating the voltage-current characteristics of a DC arc plasma torch is proposed. It is shown that the simple Steenbeck model of an arc discharge in a cylindrical channel makes it possible to carry out this calculation. The results of the calculation are confirmed by experiments
International Nuclear Information System (INIS)
Gasco, C.; Anton, M. P.; Ampudia, J.
2003-01-01
The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported {sup 210}Pb data from a database. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet, and c) the organization and structure of the database. (Author) 4 refs
Energy Technology Data Exchange (ETDEWEB)
Zhang, Zhen [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Xia, Changliang [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Yan, Yan, E-mail: yanyan@tju.edu.cn [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China); Geng, Qiang [Tianjin Engineering Center of Electric Machine System Design and Control, Tianjin 300387 (China); Shi, Tingna [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2017-08-01
Highlights: • A hybrid analytical model is developed for field calculation of multilayer IPM machines. • The rotor magnetic field is calculated by the magnetic equivalent circuit method. • The field in the stator and air-gap is calculated by the subdomain technique. • The magnetic scalar potential on the rotor surface is modeled as a trapezoidal distribution. - Abstract: Due to the complicated rotor structure and nonlinear saturation of rotor bridges, it is difficult to build a fast and accurate analytical field calculation model for multilayer interior permanent magnet (IPM) machines. In this paper, a hybrid analytical model suitable for the open-circuit field calculation of multilayer IPM machines is proposed by coupling the magnetic equivalent circuit (MEC) method and the subdomain technique. In the proposed analytical model, the rotor magnetic field is calculated by the MEC method based on Kirchhoff's law, while the field in the stator slot, slot opening and air-gap is calculated by the subdomain technique based on Maxwell's equations. To solve the whole field distribution of multilayer IPM machines, the coupled boundary conditions on the rotor surface are deduced for the coupling of the rotor MEC and the analytical field distribution of the stator slot, slot opening and air-gap. The hybrid analytical model can be used to calculate the open-circuit air-gap field distribution, back electromotive force (EMF) and cogging torque of multilayer IPM machines. Compared with finite element analysis (FEA), it has the advantages of faster modeling, lower computational resource usage and shorter computation time, while achieving comparable accuracy. The analytical model is helpful and applicable for the open-circuit field calculation of multilayer IPM machines of any size and pole/slot number combination.
Calculation Method of Kinetic Constants for a Mathematical Model of Peat Pyrolysis
Directory of Open Access Journals (Sweden)
Plakhova Tatyana
2014-01-01
Full Text Available The relevance of the work is related to the need to simplify the calculation of kinetic constants for a mathematical model of peat pyrolysis. Transformations of the Arrhenius law formula are carried out. The degree of conversion is expressed in terms of the mass change of the sample. The obtained formulas help to calculate the kinetic constants for any type of solid organic fuel.
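The linearized form that such Arrhenius-law transformations typically lead to, ln k = ln A - Ea/(R*T), together with a mass-based degree of conversion, can be sketched as follows. This is a generic illustration under assumed formulas, not the paper's specific derivation:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def conversion(m0, mf, m):
    """Degree of conversion from sample mass:
    alpha = (m0 - m) / (m0 - mf), with m0 initial and mf final mass."""
    return (m0 - m) / (m0 - mf)

def fit_arrhenius(temps_K, rates):
    """Least-squares fit of ln k = ln A - Ea/(R*T).

    Linear regression of y = ln k on x = 1/T; the slope is -Ea/R and
    the intercept is ln A.  Returns (A, Ea).
    """
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return math.exp(intercept), -slope * R
```

On noiseless synthetic rate data the fit recovers the pre-exponential factor A and activation energy Ea exactly, which is a quick self-test before applying it to measured thermogravimetric data.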
Modeling for Dose Rate Calculation of the External Exposure to Gamma Emitters in Soil
International Nuclear Information System (INIS)
Allam, K. A.; El-Mongy, S. A.; El-Tahawy, M. S.; Mohsen, M. A.
2004-01-01
Based on the model proposed and developed in the Ph.D. thesis of the first author of this work, the dose rate conversion factors (absorbed dose rate in air per specific activity of soil, in nGy/h per Bq/kg) are calculated 1 m above the ground for photon emitters of natural radionuclides uniformly distributed in the soil. This new and simple dose rate calculation software was used to calculate the dose rate in air 1 m above the ground. The results were then compared with those obtained by five different groups. Although the developed model is extremely simple, the results of calculations based on it show excellent agreement with those obtained by the above-mentioned models, especially the one adopted by UNSCEAR. (authors)
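How such conversion factors are applied is simple: the dose rate is a weighted sum of the soil activity concentrations. The sketch below uses commonly quoted UNSCEAR-style coefficients; treat the numerical values as illustrative assumptions, not the factors derived in this report:

```python
# Widely quoted UNSCEAR-type dose-rate conversion coefficients,
# nGy/h in air at 1 m height per Bq/kg in soil (illustrative values).
COEFF = {"U238": 0.462, "Th232": 0.604, "K40": 0.0417}

def absorbed_dose_rate(activity_bq_per_kg):
    """Outdoor absorbed dose rate in air (nGy/h) 1 m above ground for
    natural gamma emitters uniformly distributed in soil.

    activity_bq_per_kg: mapping of nuclide (series) name to its
    specific activity in Bq/kg, keyed as in COEFF.
    """
    return sum(COEFF[nuc] * a for nuc, a in activity_bq_per_kg.items())
```

For example, soil with 35 Bq/kg of the 238U series, 30 Bq/kg of the 232Th series and 400 Bq/kg of 40K gives roughly 51 nGy/h.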
Model to Calculate the Effectiveness of an Airborne Jammer on Analog Communications
National Research Council Canada - National Science Library
Vingson, Narciso A., Jr; Muhammad, Vaqar
2005-01-01
The objective of this study is to develop a statistical model to calculate the effectiveness of an airborne jammer on analog communication and broadcast receivers, such as AM and FM Broadcast Radio...
On thermal vibration effects in diffusion model calculations of blocking dips
International Nuclear Information System (INIS)
Fuschini, E.; Ugozzoni, A.
1983-01-01
In the framework of the diffusion model, a method for calculating blocking dips is suggested that takes into account thermal vibrations of the crystal lattice. Results of calculations of the diffusion factor and the transverse energy distribution, taking into account scattering of the channeled particles by thermal vibrations of lattice nuclei, are presented. Calculations are performed for α-particles with an energy of 2.12 MeV at 300 K scattered by an Al crystal. It is shown that calculations performed according to the above method prove the necessity of taking into account effects of multiple scattering under blocking conditions
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
Calculation of delayed-neutron energy spectra in a QRPA-Hauser-Feshbach model
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Los Alamos National Laboratory; Moller, Peter [Los Alamos National Laboratory; Wilson, William B [Los Alamos National Laboratory
2008-01-01
Theoretical {beta}-delayed-neutron spectra are calculated based on the Quasiparticle Random-Phase Approximation (QRPA) and the Hauser-Feshbach statistical model. Neutron emissions from an excited daughter nucleus after {beta} decay to the granddaughter residual are more accurately calculated than in previous evaluations, including all the microscopic nuclear structure information, such as a Gamow-Teller strength distribution and discrete states in the granddaughter. The calculated delayed-neutron spectra agree reasonably well with those evaluations in the ENDF decay library, which are based on experimental data. The model was adopted to generate the delayed-neutron spectra for all 271 precursors.
Shell model calculations for the mass 18 nuclei in the sd-shell
International Nuclear Information System (INIS)
Hamoudi, A.
1997-01-01
A simple effective nucleon-nucleon interaction for shell model calculations in the sd-shell is derived from the Reid soft-core potential folded with two-body correlation functions which take account of the strong short-range repulsion and the large tensor component in the Reid force. Calculations of binding energies and low-lying spectra are performed for the mass A=18, T=0 and 1 nuclei using this interaction. The results of these shell model calculations show reasonable agreement with experiment
Nuclear model calculations below 200 MeV and evaluation prospects
International Nuclear Information System (INIS)
Koning, A.J.; Bersillon, O.; Delaroche, J.P.
1994-08-01
A computational method is outlined for the quantum-mechanical prediction of the whole double-differential energy spectrum. Cross sections as calculated with the code system MINGUS are presented for (n,xn) and (p,xn) reactions on 208Pb and 209Bi. Our approach involves a dispersive optical model, comprehensive discrete state calculations, renormalized particle-hole state densities, a combined MSD/MSC model for pre-equilibrium reactions and compound nucleus calculations. The relation with the evaluation of nuclear data files is discussed. (orig.)
Thermal-hydraulic feedback model to calculate the neutronic cross-section in PWR reactions
International Nuclear Information System (INIS)
Santiago, Daniela Maiolino Norberto
2011-01-01
In neutronic codes it is important to have a thermal-hydraulic feedback module. This module calculates the thermal-hydraulic feedback of the fuel, which feeds the neutronic cross sections. In the neutronic code developed at PEN/COPPE/UFRJ, the fuel temperature is obtained through an empirical model. This work presents a physical model to calculate this temperature. We used the finite volume technique to discretize the equation of the temperature distribution, while the calculation of the moderator heat transfer coefficient was carried out using the ASME table, with some of its routines incorporated into our program. The model allows one to calculate an average radial temperature per node, since the thermal-hydraulic feedback must follow the conditions imposed by the neutronic code. The results were compared with the empirical model. Our results show that for the fuel elements near the periphery, the empirical model overestimates the temperature in the fuel compared to our model, which may indicate that the physical model is more appropriate for calculating the thermal-hydraulic feedback temperatures. The proposed model was validated with the neutronic simulator developed at PEN/COPPE/UFRJ for the analysis of PWR reactors. (author)
DEFF Research Database (Denmark)
Vermesi, Izabella; Rein, Guillermo; Colella, Francesco
2017-01-01
in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200m long tunnel with a rectangular cross...... processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established...
International Nuclear Information System (INIS)
Thykier-Nielsen, S.
1980-07-01
A brief description is given of the model used at Risoe for calculating the consequences of releases of radioactive material to the atmosphere. The model is based on the Gaussian plume model, and it provides possibilities for calculation of: doses to individuals, collective doses, contamination of the ground, probability distributions of doses, and the consequences of doses for given dose-risk relationships. The model is implemented in the computer program PLUCON2, written in ALGOL for the Burroughs B6700 computer at Risoe. A short description of PLUCON2 is given. (author)
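The core of a Gaussian plume calculation of this kind can be sketched as follows. This is the generic ground-reflected plume formula, not PLUCON2's actual implementation; the dispersion parameters sigma_y and sigma_z are assumed to be supplied by the caller for the downwind distance of interest:

```python
import math

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Gaussian plume concentration with ground reflection.

    Q: source strength (e.g. Bq/s), u: wind speed (m/s),
    y: crosswind offset (m), z: receptor height (m),
    H: effective release height (m),
    sigma_y, sigma_z: dispersion parameters (m) at the downwind
    distance of interest (from a stability-class scheme).
    Returns concentration in the corresponding units (e.g. Bq/m^3).
    """
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```

A useful check: for a ground-level release (H = 0) at the centerline (y = z = 0), the reflection term doubles the unreflected value, giving Q / (pi * u * sigma_y * sigma_z).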
Energy Technology Data Exchange (ETDEWEB)
Scheel, Joerg; Dib, Ramzi [Fachhochschule Giessen-Friedberg, Friedberg (Germany); Sassmannshausen, Achim [DB Energie GmbH, Frankfurt (Main) (Germany). Arbeitsgebiet Bahnstromleitungen Energieerzeugungs- und Uebertragungssysteme; Riedl, Markus [Eon Netz GmbH, Bayreuth (Germany). Systemtechnik Leitungen
2010-12-13
Increasingly, high-temperature cables are used in high-voltage grids. Beyond a given temperature level, their slack span cannot be calculated accurately by conventional simple linear methods. The contribution investigates the behaviour of composite cables at high operating temperatures and its influence on the slack span, and presents a more accurate bilinear calculation method. (orig.)
A heterogeneous model for burnup calculation in high temperature gas-cooled reactors
International Nuclear Information System (INIS)
Perfetti, C. M.; Angahie, S.; Baxter, A.; Ellis, C.
2008-01-01
A high-resolution MCNPX model is developed to simulate the nuclear design characteristics and fuel cycle features of High Temperature Gas-Cooled Reactors. Contrary to the conventional approach, fuel regions containing TRISO particles are not homogenized in the MCNPX model. A cube corner distribution approximation is used to directly model the randomly dispersed TRISO fuel particles in a graphite matrix, and the universe filling technique is used to cover the entire range of fuel particles in the core. The heterogeneous MCNPX model is applied to simulate and analyze the complete fuel cycle of the General Atomics Plutonium-Consumption Modular Helium Reactor (PC-MHR). The PC-MHR design is a variation of the General Atomics MHR design intended for the consumption, or burning, of excess Russian weapons plutonium. The MCNPX burnup calculation of the PC-MHR includes the simulation of a 260 effective full-power day fuel cycle at 600 MWt. Results of the MCNPX calculations suggest that during this cycle a 40% reduction in the whole-core Pu-239 inventory could be achieved. Results of the heterogeneous MCNPX burnup calculations for the PC-MHR are compared with deterministic values obtained from the DIF3D code. For the 260 effective full-power day cycle, the difference between the Pu-239 mass reductions calculated with the heterogeneous MCNPX and homogeneous DIF3D models is 6%; for the higher actinides, the differences are mostly larger than 6%. (authors)
Formation of decontamination cost calculation model for severe accident consequence assessment
International Nuclear Information System (INIS)
Silva, Kampanart; Promping, Jiraporn; Okamoto, Koji; Ishiwatari, Yuki
2014-01-01
In previous studies, the authors developed an index, "cost per severe accident", to perform a severe accident consequence assessment that can cover various kinds of accident consequences, namely health effects and economic, social and environmental impacts. Though decontamination cost was identified as a major component, it was taken into account using simple and conservative assumptions, which made further discussion difficult. The decontamination cost calculation model was therefore reconsidered. Ninety-nine parameters were selected to take into account all decontamination-related issues, and the decontamination cost calculation model was formed. The distributions of all parameters were determined. A sensitivity analysis using the Morris method was performed in order to identify important parameters, i.e. those that have a large influence on the cost per severe accident and interact strongly with other parameters. Twenty-five important parameters were identified, and the negligible parameters were fixed at the medians of their distributions to form a simplified decontamination cost calculation model. Calculations of the cost per severe accident with the full model (all parameters distributed) and with the simplified model were performed and compared. The differences in the cost per severe accident and its components were not significant, which supports the validity of the simplified model. The simplified model was then used to perform a full-scope calculation of the cost per severe accident, and the results were compared with the previous study. The decontamination cost increased in importance significantly. (author)
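The Morris screening step can be illustrated with a minimal elementary-effects sketch in NumPy; the toy three-factor model below is invented for illustration and is unrelated to the study's 99-parameter decontamination cost model.

```python
import numpy as np

def morris_screening(f, k, r=20, delta=0.5, rng=None):
    """Elementary-effects (Morris) screening for a model f on [0,1]^k.
    Returns mu_star (mean |EE|, overall influence) and sigma (std of EE,
    a proxy for nonlinearity/interactions) for each of the k factors."""
    rng = np.random.default_rng(rng)
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # leave room to step by +delta
        y0 = f(x)
        for i in range(k):
            x_step = x.copy()
            x_step[i] += delta                      # one-at-a-time perturbation
            ee[t, i] = (f(x_step) - y0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: factor 0 strong, factor 1 weak, factor 2 inert.
model = lambda x: 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]
mu_star, sigma = morris_screening(model, k=3, r=30, rng=1)
```

Factors with large mu_star (and large sigma) are the "important" ones to keep distributed; the rest can be fixed at their medians, as the study does.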
Report for Treating Hanford LAW and WTP SW Simulants: Pilot Plant Mineralizing Flowsheet
International Nuclear Information System (INIS)
Olson, Arlin
2012-01-01
The US Department of Energy is responsible for managing the disposal of radioactive liquid waste in underground storage tanks at the Hanford site in Washington State. The Hanford Waste Treatment and Immobilization Plant (WTP) will separate the waste into a small volume of high level waste (HLW), containing most of the radioactive constituents, and a larger volume of low activity waste (LAW), containing most of the non-radioactive chemical and hazardous constituents. The HLW and LAW will be converted into immobilized waste forms for disposal. Currently there is inadequate LAW vitrification capacity planned at the WTP to complete the mission within the required timeframe, so additional LAW treatment capacity is required. One candidate supplemental treatment technology is the fluidized bed steam reformer (FBSR) process. This report describes the demonstration testing of the FBSR process using a mineralizing flowsheet for treating simulated Hanford LAW and secondary waste from the WTP (WTP SW). The FBSR testing project produced leach-resistant solid products and environmentally compliant gaseous effluents. The solid products incorporated normally soluble ions into an alkali alumino-silicate (NAS) mineral matrix, and gaseous emissions were found to be within regulatory limits. Cesium and rhenium were captured in the mineralized products with system removal efficiencies of 99.999% and 99.998%, respectively. The durability and leach performance of the FBSR granular solid were superior to those of the low-activity reference material (LRM) glass standard. Normalized product consistency test (PCT) release rates for the constituents of concern were approximately two orders of magnitude less than that of sodium in the Hanford glass standard.
Report for Treating Hanford LAW and WTP SW Simulants: Pilot Plant Mineralizing Flowsheet
Energy Technology Data Exchange (ETDEWEB)
Arlin Olson
2012-02-28
The US Department of Energy is responsible for managing the disposal of radioactive liquid waste in underground storage tanks at the Hanford site in Washington State. The Hanford Waste Treatment and Immobilization Plant (WTP) will separate the waste into a small volume of high level waste (HLW), containing most of the radioactive constituents, and a larger volume of low activity waste (LAW), containing most of the non-radioactive chemical and hazardous constituents. The HLW and LAW will be converted into immobilized waste forms for disposal. Currently there is inadequate LAW vitrification capacity planned at the WTP to complete the mission within the required timeframe, so additional LAW treatment capacity is required. One candidate supplemental treatment technology is the fluidized bed steam reformer (FBSR) process. This report describes the demonstration testing of the FBSR process using a mineralizing flowsheet for treating simulated Hanford LAW and secondary waste from the WTP (WTP SW). The FBSR testing project produced leach-resistant solid products and environmentally compliant gaseous effluents. The solid products incorporated normally soluble ions into an alkali alumino-silicate (NAS) mineral matrix, and gaseous emissions were found to be within regulatory limits. Cesium and rhenium were captured in the mineralized products with system removal efficiencies of 99.999% and 99.998%, respectively. The durability and leach performance of the FBSR granular solid were superior to those of the low-activity reference material (LRM) glass standard. Normalized product consistency test (PCT) release rates for the constituents of concern were approximately two orders of magnitude less than that of sodium in the Hanford glass standard.
Directory of Open Access Journals (Sweden)
M. Ridolfi
2014-12-01
Full Text Available We review the main factors driving the calculation of the tangent height of spaceborne limb measurements: the ray-tracing method, the refractive index model and the assumed atmosphere. We find that commonly used ray-tracing and refraction models are very accurate, at least in the mid-infrared. The factor with the largest effect on the tangent height calculation is the assumed atmosphere. Using a climatological model in place of the real atmosphere may cause tangent height errors of up to ± 200 m. Depending on the adopted retrieval scheme, these errors may have a significant impact on the derived profiles.
Power Loss Calculation and Thermal Modelling for a Three Phase Inverter Drive System
Directory of Open Access Journals (Sweden)
Z. Zhou
2005-12-01
Full Text Available Power loss calculation and thermal modelling for a three-phase inverter power system are presented in this paper. Aiming at long real-time thermal simulation, an accurate average power loss calculation based on a PWM reconstruction technique is proposed. To carry out the thermal simulation, a compact thermal model of a three-phase inverter power module is built. The thermal interference of adjacent heat sources is analysed using 3D thermal simulation. The proposed model provides accurate power losses at a large simulation time-step and is suitable for long real-time thermal simulation of a three-phase inverter drive system for hybrid vehicle applications.
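A compact thermal model of the kind described is commonly a Foster RC network driven by the averaged losses. The sketch below shows why a large time-step stays accurate: each RC stage has an exact discrete update for piecewise-constant power. All R/tau values are assumed, illustrative numbers, not datasheet data.

```python
import math

def simulate_junction_temp(p_loss, dt, stages, t_amb=25.0):
    """Step a Foster-network thermal model of a power-module junction.
    p_loss: per-step average power loss [W]; stages: list of (R [K/W], tau [s]).
    Each RC stage uses the exact discrete update
        T_i <- T_i * a + R_i * P * (1 - a),  a = exp(-dt / tau_i),
    which is stable and exact for piecewise-constant P at any dt."""
    temps = [0.0] * len(stages)
    history = []
    for p in p_loss:
        for i, (r, tau) in enumerate(stages):
            a = math.exp(-dt / tau)
            temps[i] = temps[i] * a + r * p * (1.0 - a)
        history.append(t_amb + sum(temps))
    return history

# Illustrative 2-stage Foster pair; constant 100 W average loss for 60 s.
stages = [(0.05, 0.01), (0.20, 1.0)]
tj = simulate_junction_temp([100.0] * 600, dt=0.1, stages=stages)
# Steady state approaches t_amb + P * (R1 + R2) = 25 + 100 * 0.25 = 50 C.
```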
3D Printing of Molecular Models with Calculated Geometries and p Orbital Isosurfaces
Carroll, Felix A.; Blauch, David N.
2017-01-01
3D printing was used to prepare models of the calculated geometries of unsaturated organic structures. Incorporation of π orbital isosurfaces into the models enables students in introductory organic chemistry courses to have hands-on experience with the concept of orbital alignment in strained and unstrained π systems.
A model for bootstrap current calculations with bounce averaged Fokker-Planck codes
Westerhof, E.; Peeters, A.G.
1996-01-01
A model is presented that allows the calculation of the neoclassical bootstrap current originating from the radial electron density and pressure gradients in standard (2+1)D bounce averaged Fokker-Planck codes. The model leads to an electron momentum source located almost exclusively at the
Development of a risk-based mine closure cost calculation model
CSIR Research Space (South Africa)
Du
2006-06-01
Full Text Available The study summarised in this paper focused on expanding existing South African mine closure cost calculation models to provide a new model that incorporates risks, which could have an effect on the closure costs during the life cycle of the mine...
On the applicability of nearly free electron model for resistivity calculations in liquid metals
International Nuclear Information System (INIS)
Gorecki, J.; Popielawski, J.
1982-09-01
The calculations of resistivity based on the nearly free electron model are presented for many noble and transition liquid metals. The triple-ion correlation is included in the resistivity formula according to the SCQCA approximation. Two different methods for describing the conduction band are used, and the applicability of the nearly free electron model to the different metals is discussed. (author)
Diameter structure modeling and the calculation of plantation volume of black poplar clones
Directory of Open Access Journals (Sweden)
Andrašev Siniša
2004-01-01
Full Text Available A method of diameter structure modeling was applied in the calculation of the plantation (stand) volume of two black poplar clones in the section Aigeiros (Duby): 618 (Lux) and S1-8. Diameter structure modeling by the Weibull function makes it possible to calculate the plantation volume by the volume line. Based on a comparison of the proposed method with the existing methods, the error of the obtained plantation volume was less than 2%. Diameter structure modeling, together with the calculation of plantation volume from the diameter structure model through the regularity of the diameter distribution, enables a better analysis of the production level and assortment structure, and it can be used in the construction of yield and increment tables.
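The Weibull-plus-volume-line approach can be sketched as follows. The Weibull parameters, stem count, diameter range and volume-line coefficients below are invented for illustration and are not the paper's values; the volume line here is taken as linear in single-tree basal area.

```python
import numpy as np

def stand_volume_from_weibull(shape, scale, n_trees, vol_line,
                              d_min=5.0, d_max=60.0, n_classes=55):
    """Estimate stand volume (m^3/ha) from a Weibull diameter model.
    Diameters d (cm) follow Weibull(shape, scale); single-tree volume comes
    from an assumed volume line v = b0 + b1 * g, where g is the tree's
    basal area (m^2). Class frequencies come from CDF differences."""
    b0, b1 = vol_line
    edges = np.linspace(d_min, d_max, n_classes + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    cdf = 1.0 - np.exp(-(edges / scale) ** shape)   # Weibull CDF
    freq = n_trees * np.diff(cdf)                   # trees per diameter class
    g = np.pi / 4.0 * (mids / 100.0) ** 2           # basal area per tree, m^2
    v = b0 + b1 * g                                 # volume line, m^3 per tree
    return float(np.sum(freq * v))

# Example: 400 stems/ha, Weibull(2.5, 30 cm), assumed line v = 0.02 + 9.0*g.
V = stand_volume_from_weibull(2.5, 30.0, 400, vol_line=(0.02, 9.0))
```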
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Stone, M.; Newell, J.; Best, D.
2012-05-07
Savannah River Remediation (SRR) is evaluating changes to its current DWPF flowsheet to improve processing cycle times. This will enable the facility to support higher canister production while maximizing waste loading. Higher throughput is needed in the CPC since the installation of the bubblers into the melter has increased the melt rate. Because of the significant maintenance required for the DWPF gas chromatographs (GCs) and the potential for production of flammable quantities of hydrogen, a flowsheet that reduces or eliminates the formic acid used in the CPC is being developed. Earlier work at the Savannah River National Laboratory has shown that replacing formic acid with an 80:20 molar blend of glycolic and formic acids has the potential to remove mercury in the SRAT without any significant catalytic hydrogen generation. This report summarizes the research completed to determine the feasibility of processing without formic acid. In the earlier development of the glycolic-formic acid flowsheet, one run (GF8) was completed without formic acid. It is of particular interest that mercury was successfully removed in GF8, which used no formic acid, at 125% acid stoichiometry. Glycolic acid did not show the ability to reduce mercury to elemental mercury in initial screening studies, which is why previous testing focused on the formic/glycolic blend. The objective of the testing detailed in this document is to determine the viability of the nitric-glycolic acid flowsheet for processing sludge over the wide compositional range requested by DWPF. This work was performed under the guidance of a Task Technical and Quality Assurance Plan (TT&QAP). The details of the simulant preparation and analysis have been documented previously.
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Stone, M.; Newell, J.; Best, D.; Zamecnik, J.
2012-08-28
Savannah River Remediation (SRR) is evaluating changes to its current Defense Waste Processing Facility (DWPF) flowsheet to improve processing cycle times. This will enable the facility to support higher canister production while maximizing waste loading. Higher throughput is needed in the Chemical Process Cell (CPC) since the installation of the bubblers into the melter has increased the melt rate. Because of the significant maintenance required for the DWPF gas chromatographs (GCs) and the potential for production of flammable quantities of hydrogen, a flowsheet that reduces or eliminates the formic acid used in the CPC is being developed. Earlier work at the Savannah River National Laboratory has shown that replacing formic acid with an 80:20 molar blend of glycolic and formic acids has the potential to remove mercury in the SRAT without any significant catalytic hydrogen generation. This report summarizes the research completed to determine the feasibility of processing without formic acid. In the earlier development of the glycolic-formic acid flowsheet, one run (GF8) was completed without formic acid. It is of particular interest that mercury was successfully removed in GF8, which used no formic acid, at 125% acid stoichiometry. Glycolic acid did not show the ability to reduce mercury to elemental mercury in initial screening studies, which is why previous testing focused on the formic/glycolic blend. The objective of the testing detailed in this document is to determine the viability of the nitric-glycolic acid flowsheet for processing sludge over the wide compositional range requested by DWPF. This work was performed under the guidance of a Task Technical and Quality Assurance Plan (TT&QAP). The details of the simulant preparation and analysis have been documented previously.
Liu, Long; Liu, Wei
2018-04-01
A forward modeling and inversion algorithm is adopted to determine the water injection plan for an oilfield water injection network. The main idea of the algorithm is as follows: first, the water injection network is calculated inversely and the pumping station demand flow is obtained. Then, a forward calculation is carried out to judge whether all water injection wells meet the injection allocation requirements. If all wells meet the requirements, the calculation stops; otherwise, the demand injection allocation flow of the wells that do not meet the requirements is reduced by a fixed step size, and the next iteration is started. The algorithm does not need to be embedded in a water injection network system solver and is easy to realize; being iterative, it is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
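The inverse/forward iteration described above can be sketched as a simple loop. The feasibility rule (a single station capacity) and the choice of which well to cut (the largest demand) are stand-in assumptions; a real network model would solve the hydraulics in both steps.

```python
def allocate_injection(demands, capacity, step=0.05):
    """Iterative allocation sketch for a water-injection network.
    Inverse step: sum the well demands to get the pumping-station flow.
    Forward step: check feasibility against the station capacity; if the
    network cannot serve every well, cut the unmet well's demand by a fixed
    step and iterate. Names and rules are simplified assumptions."""
    alloc = dict(demands)
    while True:
        station_flow = sum(alloc.values())          # inverse calculation
        if station_flow <= capacity:                # forward feasibility check
            return alloc, station_flow
        # treat the largest-demand well as the unmet one (stand-in rule)
        worst = max(alloc, key=alloc.get)
        alloc[worst] = alloc[worst] * (1.0 - step)  # reduce by the step size

wells = {"W1": 120.0, "W2": 90.0, "W3": 60.0}
alloc, q = allocate_injection(wells, capacity=250.0)
# After a few reductions of W1's demand, the plan fits the 250-unit capacity.
```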
Skolubovich, Yuriy; Skolubovich, Aleksandr; Voitov, Evgeniy; Soppa, Mikhail; Chirkunov, Yuriy
2017-10-01
The article considers current questions in the technological modeling and calculation of a new facility for the treatment of natural waters, the clarifier reactor, for its optimal operating mode; the facility was developed at Novosibirsk State University of Architecture and Civil Engineering (SibSTRIN). A calculation technique based on well-known hydraulic relationships is presented, and a calculation example using experimental data is considered. The maximum possible velocity of the ascending flow of purified water was determined on the basis of a 24-hour clarification cycle. The fractional composition of the contact mass was determined for minimal expansion of the contact-mass layer, which ensured the elimination of stagnant zones. The duration of the clarification cycle was refined from the parameters of the technological modeling by recalculating the maximum possible upward flow velocity of the clarified water, and the thickness of the contact-mass layer was determined. Clarifier reactors for other clarification conditions can be calculated in the same way.
Efficient matrix-vector products for large-scale nuclear Shell-Model calculations
Toivanen, J.
2006-01-01
A method to accelerate the matrix-vector products of j-scheme nuclear Shell-Model Configuration Interaction (SMCI) calculations is presented. The method takes advantage of the matrix product form of the j-scheme proton-neutron Hamiltonian matrix. It is shown that the method can speed up unrestricted large-scale pf-shell calculations by up to two orders of magnitude compared to a previously existing related j-scheme method. The new method allows unrestricted SMCI calculations up to j-scheme dime...
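The payoff of a matrix product (Kronecker-factored) Hamiltonian can be illustrated generically: for H = sum_k A_k (x) B_k, the product H v never requires forming H, because (A (x) B) vec(V) = vec(A V B^T) for a row-major reshape of v. The dimensions and random terms below are arbitrary stand-ins, not a realistic shell-model Hamiltonian.

```python
import numpy as np

rng = np.random.default_rng(0)
dp, dn = 40, 50                      # schematic proton / neutron dimensions
terms = [(rng.standard_normal((dp, dp)), rng.standard_normal((dn, dn)))
         for _ in range(3)]          # H = sum_k A_k (x) B_k

v = rng.standard_normal(dp * dn)

# Naive product: build the full (dp*dn) x (dp*dn) matrix, O((dp*dn)^2) work.
H = sum(np.kron(A, B) for A, B in terms)
w_naive = H @ v

# Factored product: reshape v to dp x dn and use vec(A V B^T),
# costing only O(dp*dn*(dp+dn)) per term.
V = v.reshape(dp, dn)
w_fast = sum(A @ V @ B.T for A, B in terms).reshape(-1)

assert np.allclose(w_naive, w_fast)
```

For large bases the factored form is what turns an intractable dense product into something a Lanczos-type iteration can afford.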
International Nuclear Information System (INIS)
Collins, E.D.; Bigelow, J.E.; Campbell, D.O.; King, L.J.; Knauer, J.B.
1980-01-01
Several chemical processing flowsheets were considered for the decontamination of high-activity-level water at Three Mile Island (TMI) Unit 2. A zeolite ion exchange process was evaluated and recommended for absorption of the bulk of the highly radioactive cesium and strontium. Standard organic ion-exchange resins were selected to remove the remaining traces of radioactive nuclides (except tritium, which cannot be removed by any practical process). Process conditions were evaluated using both synthetic, tracer-level solutions and samples of actual high-activity-level water from TMI Unit 2.
SITE-94. Adaptation of mechanistic sorption models for performance assessment calculations
International Nuclear Information System (INIS)
Arthur, R.C.
1996-10-01
Sorption is considered in most predictive models of radionuclide transport in geologic systems. Most models simulate the effects of sorption in terms of empirical parameters, which can be criticized because the data are only strictly valid under the experimental conditions at which they were measured. An alternative is to adopt a more mechanistic modeling framework based on recent advances in understanding the electrical properties of oxide mineral-water interfaces. It has recently been proposed that these 'surface-complexation' models may be directly applicable to natural systems. A possible approach for adapting mechanistic sorption models for use in performance assessments, using this 'surface-film' concept, is described in this report. Surface-acidity parameters in the Generalized Two-Layer surface complexation model are combined with surface-complexation constants for Np(V) sorption on hydrous ferric oxide to derive an analytical model enabling direct calculation of the corresponding intrinsic distribution coefficients as a function of pH and of the Ca²⁺, Cl⁻, and HCO₃⁻ concentrations. The surface-film concept is then used to calculate whole-rock distribution coefficients for Np(V) sorption by altered granitic rocks coexisting with a hypothetical, oxidized Aespoe groundwater. The calculated results suggest that the distribution coefficients for Np adsorption on these rocks could range from 10 to 100 ml/g. Independent estimates of Kd for Np sorption in similar systems, based on an extensive review of experimental data, are consistent with, though slightly conservative with respect to, the calculated values. 31 refs
International Nuclear Information System (INIS)
Cliffe, K.A.; Morris, S.T.; Porter, J.D.
1998-05-01
NAMMU is a computer program for modelling groundwater flow and transport through porous media. This document provides an overview of the use of the program for geosphere modelling in performance assessment calculations and gives a detailed description of the program itself. The aim of the document is to give an indication of the grounds for having confidence in NAMMU as a performance assessment tool. To achieve this, the following topics are discussed: the basic premises of the assessment approach and the purpose and nature of the calculations that can be undertaken using NAMMU; the concepts of model validation and the considerations that can lead to increased confidence in models; and the physical processes that can be modelled using NAMMU, together with the mathematical models and numerical techniques used to represent them. Finally, the grounds that would lead one to have confidence that NAMMU is fit for purpose are summarised.
Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma
Directory of Open Access Journals (Sweden)
DONG Jing
2015-06-01
Full Text Available Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC based on studies of its natural history, which, to some extent, predict the likelihood of HCC development. Generally, the risk factors in each model include age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significance of currently used predictive models for HBV-associated HCC and assesses the advantages and limitations of the risk calculators. The updated REACH-B model and the LSM-HCC model show better negative predictive values and perform better in predicting the outcomes of patients with chronic hepatitis B (CHB). These models can be applied to stratified screening for HCC and can also serve as assessment tools for the management of CHB patients.
International Nuclear Information System (INIS)
Webb, G.A.M.; Grimwood, P.D.
1976-12-01
This report describes an oceanographic model which has been developed for use in calculating the capacity of the oceans to accept radioactive wastes. One component is a relatively short-term diffusion model based on that described in an earlier report (Webb et al., NRPB-R14 (1973)), but generalised to some extent. Another component is a compartment model used to calculate long-term, widespread water concentrations; this addition overcomes some of the shortcomings of the earlier diffusion model. Incorporation of radioactivity into deep ocean sediments is included in this long-term model as a removal mechanism. The combined model is used to provide a conservative (safe) estimate of the maximum concentrations of radioactivity in water as a function of time after the start of a continuous disposal operation. These results can then be used to assess the limiting capacity of an ocean to accept radioactive waste. (author)
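The long-term compartment component can be sketched as a two-box (deep/surface) model with sediment burial as the removal path. All rate constants below are illustrative assumptions, not NRPB values.

```python
import numpy as np

def ocean_two_box(q, t_end, k_ex=0.01, k_sed=0.002, lam=0.0, dt=0.5):
    """Two-compartment sketch of continuous sea disposal: a deep box
    receiving q (activity/yr) exchanges with a surface box at rate k_ex
    (1/yr), loses activity to sediments at k_sed (1/yr, deep box only)
    and by radioactive decay at lam (1/yr). Explicit Euler in time."""
    n = int(t_end / dt)
    deep = surf = 0.0
    hist = np.empty((n, 2))
    for i in range(n):
        d_deep = q + k_ex * (surf - deep) - (k_sed + lam) * deep
        d_surf = k_ex * (deep - surf) - lam * surf
        deep += dt * d_deep
        surf += dt * d_surf
        hist[i] = deep, surf
    return hist

# A stable pollutant (lam = 0) grows toward the sediment-limited steady
# state q / k_sed = 500, which bounds the maximum water concentration.
h = ocean_two_box(q=1.0, t_end=5000.0)
```

The approach to a burial-limited plateau is exactly the "maximum concentration as a function of time" behaviour the capacity assessment relies on.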
Comparison of Steady-State SVC Models in Load Flow Calculations
DEFF Research Database (Denmark)
Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte
2008-01-01
This paper compares in a load flow calculation three existing steady-state models of static var compensator (SVC), i.e. the generator-fixed susceptance model, the total susceptance model and the firing angle model. The comparison is made in terms of the voltage at the SVC regulated bus, equivalent...... SVC susceptance at the fundamental frequency and the load flow convergence rate both when SVC is operating within and on the limits. The latter two models give inaccurate results of the equivalent SVC susceptance as compared to the generator model due to the assumption of constant voltage when the SVC...... of the calculated SVC susceptance while retaining acceptable load flow convergence rate....
International Nuclear Information System (INIS)
Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.; Sanuki, T.
2007-01-01
Using the 'modified DPMJET-III' model explained in the previous paper [T. Sanuki et al., preceding Article, Phys. Rev. D 75, 043005 (2007).], we calculate the atmospheric neutrino flux. The calculation scheme is almost the same as HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).], but the usage of the 'virtual detector' is improved to reduce the error due to it. Then we study the uncertainty of the calculated atmospheric neutrino flux summarizing the uncertainties of individual components of the simulation. The uncertainty of K-production in the interaction model is estimated using other interaction models: FLUKA'97 and FRITIOF 7.02, and modifying them so that they also reproduce the atmospheric muon flux data correctly. The uncertainties of the flux ratio and zenith angle dependence of the atmospheric neutrino flux are also studied
International Nuclear Information System (INIS)
Allam, Kh. A.
2017-01-01
In this work, a new methodology based on Monte Carlo simulation is developed for external dose calculation in tunnels and mines. The tunnel is modelled as a cylinder of finite thickness with an entrance and with or without an exit, and a photon transport model is applied for the exposure dose calculations. New software based on the Monte Carlo solution was designed and programmed in the Delphi programming language. The deviation between the calculated external dose due to radioactive nuclei in a mine tunnel and the corresponding experimental data lies in the range 7.3-19.9%. The variation of the specific external dose rate with position in the tunnel and with the density and composition of the building material was studied. The new model offers more flexibility for realistic external dose calculations in any cylindrical tunnel structure. (authors)
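The geometry dependence can be illustrated with a heavily simplified Monte Carlo sketch: score an un-collided point kernel from source points sampled on a cylindrical wall. The source model, attenuation coefficient and dimensions are assumptions for illustration, not the paper's photon transport model.

```python
import math, random

def tunnel_dose_mc(det_z, radius=2.0, length=50.0, mu=0.01, n=200_000, seed=1):
    """Relative external dose rate at a point on the axis of a cylindrical
    tunnel whose wall surface is uniformly contaminated. Each sampled wall
    point contributes an un-collided kernel exp(-mu*r) / (4*pi*r^2)
    (mu is an assumed air attenuation coefficient in 1/m)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        z = rng.uniform(0.0, length)            # axial position on the wall
        dx = radius * math.cos(theta)
        dy = radius * math.sin(theta)
        dz = z - det_z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        total += math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return total / n

# Dose rate is highest mid-tunnel and drops near the entrance, where half
# of the surrounding wall is missing.
mid = tunnel_dose_mc(det_z=25.0)
end = tunnel_dose_mc(det_z=0.0)
```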
A model for calculating the quantum potential for time-varying multi-slit systems
Bracken, P
2003-01-01
A model is proposed and applied to the single- and double-slit experiments. The model is designed to take into account a change in the experimental setup, such as opening and closing the slits in some way, or introducing some object which can be thought of as perturbing the space-time background. The single and double slits could be closed simultaneously or one after the other, in such a way as to transform from one arrangement to the other. The model consists of using modified free-particle propagators such that the integrals required for calculating the overall wave function can be evaluated. It is supposed that the constants in these propagators reflect the ambient structure as the experimental situation is modified, and that they might be calculable from a more fundamental theory.
AMORPHOUS SILICON ELECTRONIC STRUCTURE MODELING AND BASIC ELECTRO-PHYSICAL PARAMETERS CALCULATION
Directory of Open Access Journals (Sweden)
B. A. Golodenko
2014-01-01
Full Text Available Summary. Amorphous semiconductors have unique processing characteristics and are promising materials for electronic engineering. However, reliable information about their atomic structure, which is essential for calculating their electronic states and electrophysical properties, has been lacking. The authors' method addresses this problem: it allows calculation of the Cartesian atomic coordinates of an amorphous silicon model cluster, determination of the spectrum and density of its electronic states, and calculation of the basic electrophysical properties of the model cluster. In particular, numerical values of the energy gap, the Fermi energy, and the electron concentrations in the valence and conduction bands were determined for the model cluster. The results provide a real possibility for purposeful control of the type and concentration of charge carriers in amorphous semiconductors, and also relate the atomic structure to other physical properties of amorphous substances, such as heat capacity, magnetic susceptibility and other thermodynamic quantities.
Development of a model for the primary system CAREM reactor's stationary thermohydraulic calculation
International Nuclear Information System (INIS)
Gaspar, C.; Abbate, P.
1990-01-01
The ESCAREM program, oriented to the stationary thermohydraulic calculation of CAREM reactors, is presented. Since CAREM differs from typical BWR (Boiling Water Reactor)/PWR (Pressurized Water Reactor) models, a suitable model was developed which allows calculation of: a) whether the steam generator design is adequate to transfer the required power; b) the circulation flow that develops in the primary system; c) the temperature at the core inlet (cold branch); and d) the contribution of each component to the pressure drop along the circulation path. Results were verified against manual calculations and alternative numerical models. An experimental validation at the Thermohydraulic Essays Laboratory is suggested. A series of parametric analyses of the CAREM 25 reactor is presented, showing operating conditions at different power levels as well as the influence of different design aspects. (Author) [es]
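For a self-pressurized, natural-circulation design like CAREM, the circulation flow of item b) comes from balancing the buoyancy head against the loop friction loss. The fixed-point sketch below uses single-phase relations and invented property and loss values; it is an illustration of the balance, not ESCAREM's model.

```python
def natural_circulation_flow(power, height, k_loss, rho=740.0, beta=3e-3,
                             cp=5500.0, area=1.0, g=9.81, tol=1e-8):
    """Fixed-point iteration for a natural-circulation balance:
        dT = Q / (m cp)                     core temperature rise
        dp_buoy = rho * beta * dT * g * H   buoyancy driving head
        dp_fric = K m^2 / (2 rho A^2)       loop friction loss
    Solve dp_buoy = dp_fric for the loop mass flow m (kg/s).
    All property and loss values are assumed, illustrative numbers."""
    m = 100.0                                  # initial guess, kg/s
    dT = 0.0
    for _ in range(200):
        dT = power / (m * cp)
        dp_buoy = rho * beta * dT * g * height
        m_new = (2.0 * rho * area**2 * dp_buoy / k_loss) ** 0.5
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m, dT

m_dot, dT = natural_circulation_flow(power=100e6, height=5.0, k_loss=20.0)
# Higher power raises dT and the driving head, so the flow self-adjusts.
```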
Tabulation of Mie scattering calculation results for microwave radiative transfer modeling
Yeh, Hwa-Young M.; Prasad, N.
1988-01-01
In microwave radiative transfer model simulations, the Mie calculations usually consume the majority of the computer time necessary for the calculations (70 to 86 percent for frequencies ranging from 6.6 to 183 GHz). For a large array of atmospheric profiles, the repeated calculations of the Mie codes make the radiative transfer computations not only expensive, but sometimes impossible. It is desirable, therefore, to develop a set of Mie tables to replace the Mie codes for the designated ranges of temperature and frequency in the microwave radiative transfer calculation. Results of using the Mie tables in the transfer calculations show that the total CPU time (IBM 3081) used for the modeling simulation is reduced by a factor of 7 to 16, depending on the frequency. The tables are tested by computing the upwelling radiance of 144 atmospheric profiles generated by a 3-D cloud model (Tao, 1986). Results are compared with those using Mie quantities computed from the Mie codes. The bias and root-mean-square deviation (RMSD) of the model results using the Mie tables, in general, are less than 1 K except for 37 and 90 GHz. Overall, neither the bias nor RMSD is worse than 1.7 K for any frequency and any viewing angle.
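The table strategy itself is straightforward to sketch: tabulate the expensive quantity once on a grid and interpolate thereafter. The function below is an oscillatory analytic proxy for a Mie efficiency, not a real Mie series; the grid and error bound are assumptions.

```python
import numpy as np

def expensive_mie_proxy(x):
    """Stand-in for an expensive Mie efficiency calculation; the real code
    sums a slowly converging series, here replaced by an analytic proxy
    of the size parameter x."""
    return 2.0 + 4.0 * np.sin(x) / x - 4.0 * (1.0 - np.cos(x)) / x**2

# Build the lookup table once over the size-parameter range of interest.
x_grid = np.linspace(0.5, 50.0, 2000)
q_table = expensive_mie_proxy(x_grid)

def q_lookup(x):
    """Replace repeated evaluations by linear interpolation in the table,
    analogous to tabulating Mie quantities over temperature and frequency."""
    return np.interp(x, x_grid, q_table)

x_test = np.linspace(1.0, 45.0, 777)
err = np.max(np.abs(q_lookup(x_test) - expensive_mie_proxy(x_test)))
# Interpolation error stays small relative to the O(1) magnitude of the proxy.
```

The trade-off is exactly the one the report quantifies: a one-time tabulation cost and a small interpolation bias (their sub-1 K RMSD) in exchange for a large speedup of the transfer calculation.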
Weck, Philippe F; Kim, Eunja; Wang, Yifeng; Kruichak, Jessica N; Mills, Melissa M; Matteo, Edward N; Pellenq, Roland J-M
2017-08-01
Molecular structures of kerogen control hydrocarbon production in unconventional reservoirs. Significant progress has been made in developing model representations of various kerogen structures, and these models have been widely used for the prediction of gas adsorption and migration in shale matrix. However, using density functional perturbation theory (DFPT) calculations and vibrational spectroscopic measurements, we show here that a large gap may still remain between the existing model representations and actual kerogen structures, calling for new model development. Using DFPT, we calculated Fourier transform infrared (FTIR) spectra for the six most widely used kerogen structure models. The computed spectra were then systematically compared to the FTIR absorption spectra collected for kerogen samples isolated from the Mancos, Woodford and Marcellus formations, representing a wide range of kerogen origins and maturation conditions. Limited agreement between the model predictions and the measurements highlights that the existing kerogen models may still miss some key features of the structural representation. A combination of DFPT calculations with spectroscopic measurements may provide a useful diagnostic tool for assessing the adequacy of a proposed structural model as well as for future model development. This approach may eventually help develop comprehensive infrared (IR) fingerprints for tracing kerogen evolution.
Application of the mathematical modelling and human phantoms for calculation of the organ doses
International Nuclear Information System (INIS)
Kluson, J.; Cechak, T.
2005-01-01
The increasing power of computer hardware and new versions of software for radiation transport simulation and for modelling complex experimental setups and geometries enable dramatic improvements in the calculation of organ or target volume doses (dose distributions) across a wide range of medical physics and radiation protection applications. Larger computer memories and new software features make it possible to use not only analytical (mathematical) phantoms but also voxel models of humans or phantoms with voxels fine enough (e.g. 1 x 1 x 1 mm) to represent all required details; CT data can be used to describe the geometry of such voxel models. Advanced scoring methods are available in the new software versions. This contribution gives an overview of these new possibilities in modelling and dose calculation, discusses the simulation and approximation of dosimetric quantities (especially dose), and considers the interpretation of the calculated data. Some examples of applications are shown, compared and discussed. Present computational tools permit the calculation of organ or target volume doses with a new level of quality using large voxel models/phantoms (including CT-based patient-specific models) that approximate the human body with high precision. These features are therefore of growing importance and use in the fields of medical and radiological physics, radiation protection, etc. (authors)
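The voxel-dose bookkeeping underlying such calculations reduces to dose = energy deposited in the voxel divided by voxel mass. A minimal sketch, with purely illustrative densities and energy tallies (the kind a transport code would score):

```python
# Hypothetical 1 mm cubic voxels: dose (Gy) = deposited energy (J) / mass (kg).
VOXEL_VOLUME_M3 = (1e-3) ** 3

# Densities (kg/m^3) for an illustrative CT-derived segmentation:
# soft tissue, soft tissue, bone, air.
density = [1000.0, 1000.0, 1850.0, 1.2]
# Energy deposited per voxel (J), e.g. tallied by a Monte Carlo transport code.
energy_j = [2.0e-9, 1.0e-9, 3.7e-9, 1.2e-12]

dose_gy = [e / (rho * VOXEL_VOLUME_M3) for e, rho in zip(energy_j, density)]

# 2e-9 J deposited in a 1e-6 kg soft-tissue voxel gives 2 mGy.
assert abs(dose_gy[0] - 2.0e-3) < 1e-12
```

Organ doses then follow by mass-weighted averaging of voxel doses over the organ's voxel set.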
Calculation of the band structure of 2d conducting polymers using the network model
International Nuclear Information System (INIS)
Sabra, M. K.; Suman, H.
2007-01-01
The network model has been used to calculate the band structure, the gap energy and the Fermi level of conducting polymers in two dimensions. For this purpose, a geometrical classification of the possible polymer chain configurations in two dimensions has been introduced, leading to a classification of the unit cells based on the number of bonds they contain. The model has been applied to graphite in 2D, represented by a three-bond unit cell, and, as a new case, to anti-parallel polyacetylene (PA) chains in two dimensions, represented by a four-bond unit cell. The results are in good agreement with first-principles calculations. (author)
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
DEFF Research Database (Denmark)
Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper
2010-01-01
scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program...... CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof-of-concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, Torus...
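The coarse-grained calculation referred to above rests on the Debye formula, I(q) = Σ_ij f_i f_j sin(q·r_ij)/(q·r_ij). A minimal sketch with two dummy scattering bodies per residue and unit form factors (illustrative positions, not the paper's parameterization):

```python
import math

def debye_intensity(q, positions, form_factors):
    # Debye formula: I(q) = sum_ij f_i f_j sin(q r_ij) / (q r_ij),
    # with the i == j terms contributing f_i**2.
    n = len(positions)
    total = 0.0
    for i in range(n):
        for j in range(n):
            r = math.dist(positions[i], positions[j])
            if r == 0.0:
                total += form_factors[i] * form_factors[j]
            else:
                x = q * r
                total += form_factors[i] * form_factors[j] * math.sin(x) / x
    return total

# Two dummy scattering bodies per residue for a toy 3-residue chain (angstroms).
pos = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (3.8, 0.0, 0.0),
       (5.3, 0.0, 0.0), (7.6, 0.0, 0.0), (9.1, 0.0, 0.0)]
f = [1.0] * 6

# Forward limit: I(q -> 0) = (sum_i f_i)**2 = 36.
assert abs(debye_intensity(1e-6, pos, f) - 36.0) < 1e-6
```

The O(n²) pair sum is what makes coarse-graining attractive: halving the number of scattering bodies quarters the cost of each curve evaluation.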
A calculation of the ZH → γ H decay in the Littlest Higgs Model
International Nuclear Information System (INIS)
Aranda, J I; Ramirez-Zavaleta, F; Tututi, E S; Cortés-Maldonado, I
2016-01-01
New heavy neutral gauge bosons are predicted in many extensions of the Standard Model; these new bosons are associated with additional gauge symmetries. We present a preliminary calculation of the branching ratio for the decay of heavy neutral gauge bosons (Z_h) into γH in the most popular version of the Little Higgs models. The calculation includes the main contributions at the one-loop level, induced by fermions, scalars and gauge bosons. Preliminary results show a very suppressed branching ratio, of the order of 10^-6. (paper)
Recoilless fractions calculated with the nearest-neighbour interaction model by Kagan and Maslow
Kemerink, G. J.; Pleiter, F.
1986-08-01
The recoilless fraction is calculated for a number of Mössbauer atoms that are natural constituents of HfC, TaC, NdSb, FeO, NiO, EuO, EuS, EuSe, EuTe, SnTe, PbTe and CsF. The calculations are based on a model developed by Kagan and Maslow for binary compounds with rocksalt structure. With the exception of SnTe and, to a lesser extent, PbTe, the results are in reasonable agreement with the available experimental data and values derived from other models.
Numerical calculation of flashing from long pipes using a two-field model
International Nuclear Information System (INIS)
Rivard, W.C.; Torrey, M.D.
1976-05-01
A two-field model for two-phase flows, in which the vapor and liquid phases have different densities, velocities, and temperatures, has been used to calculate the flashing of water from long pipes. The IMF (Implicit Multifield) technique is used to numerically solve the transient equations that govern the dynamics of each phase. The flow physics is described with finite rate phase transitions, interfacial friction, heat transfer, pipe wall friction, and appropriate state equations. The results of the calculations are compared with measured histories of pressure, temperature, and void fraction. A parameter study indicates the relative sensitivity of the results to the various physical models that are used
Mukundan, Vrinda; Bhardwaj, Anil
2018-01-01
A one-dimensional photochemical model for the dayside ionosphere of Titan has been developed for calculating the density profiles of ions and electrons under the steady-state photochemical equilibrium condition. We concentrated on the T40 flyby of the Cassini orbiter and used in-situ measurements from instruments onboard Cassini as input to the model. An energy deposition model is employed for calculating the attenuated photon flux and photoelectron flux at different altitudes in Titan's ionosphere. We used the Analytical Yield Spectrum approach for calculating the photoelectron fluxes. Volume production rates of the major primary ions, such as N2+, N+, CH4+, and CH3+, due to photon and photoelectron impact are calculated and used as input to the model. The modeled profiles are compared with the Cassini Ion Neutral Mass Spectrometer (INMS) and Langmuir Probe (LP) measurements. The calculated electron density is higher than the observation by a factor of 2 to 3 around the peak. We studied the impact of different model parameters, viz. photoelectron flux, ion production rates, electron temperature, dissociative recombination rate coefficients, neutral densities of minor species, and solar flux, on the calculated electron density to understand the possible reasons for this discrepancy. Recent studies have shown that there is an overestimation in the modeled photoelectron flux and N2+ ion production rates, which may contribute to this disagreement. However, decreasing the photoelectron flux (by a factor of 3) and the N2+ ion production rate (by a factor of 2) decreases the electron density by only 10 to 20%. Reducing the measured electron temperature by a factor of 5 provides good agreement between the modeled and observed electron density. Changes in the HCN and NH3 densities affect the calculated densities of the major ions (HCNH+, C2H5+, and CH5+); however, the overall impact on the electron density is not appreciable (< 20%). Even though increasing the dissociative
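The steady-state photochemical equilibrium condition, in its simplest local form, balances ion production against dissociative recombination, P = α_eff·n_e², so n_e scales as the square root of the production rate. A sketch with illustrative numbers (not T40 values):

```python
import math

def equilibrium_electron_density(production_rate, alpha_eff):
    # Photochemical equilibrium: production balances dissociative
    # recombination, P = alpha_eff * n_e**2, so n_e = sqrt(P / alpha_eff).
    return math.sqrt(production_rate / alpha_eff)

# Illustrative values: P in cm^-3 s^-1, alpha_eff in cm^3 s^-1.
P = 10.0
alpha = 7e-7
n_e = equilibrium_electron_density(P, alpha)

# The sqrt dependence: doubling P raises n_e by only sqrt(2) ~ 41%.
assert abs(equilibrium_electron_density(2 * P, alpha) / n_e
           - math.sqrt(2)) < 1e-12
```

The full model of course couples many ion species with individual recombination coefficients; this local balance only makes the leading-order scaling explicit.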
DEFF Research Database (Denmark)
Mattsson, T.R.; Wahnström, G.; Bengtsson, L.
1997-01-01
First-principles density-functional calculations of hydrogen adsorption on the Ni (001) surface have been performed in order to get a better understanding of adsorption and diffusion of hydrogen on metal surfaces. We find good agreement with experiments for the adsorption energy, binding distance...
A steady-state target calculation method based on "point" model for integrating processes.
Pang, Qiang; Zou, Tao; Zhang, Yanyan; Cong, Qiumei
2015-05-01
To eliminate the influence of model uncertainty on steady-state target calculation for integrating processes, this paper presents an optimization method based on a "point" model, together with a method for determining whether a feasible steady-state target exists. The optimization method solves the steady-state optimization problem for integrating processes within a two-stage framework: a simple "point" model is built for steady-state prediction, and the error between the "point" model and the real process is compensated in each sampling interval. Simulation results illustrate that the outputs of the integrating variables can be kept within their constraints and that the errors between actual outputs and optimal set-points are small, indicating that the steady-state prediction model accurately predicts the future outputs of the integrating variables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
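The per-interval error-compensation idea can be illustrated with a deliberately crude static-gain analogue. This is not the paper's integrating-process formulation; it only shows how correcting a simple ("point") model with the measured model-plant mismatch each interval removes steady-state offset despite model error:

```python
# Toy two-stage sketch: a crude steady-state model y = K_model * u is
# corrected each sampling interval by the measured mismatch d, and the
# steady-state target input is recomputed from the corrected model.
K_model = 1.0      # simple "point" model gain (deliberately wrong)
K_plant = 1.3      # true (unknown) process gain
y_sp = 5.0         # desired steady-state output

u, d = 0.0, 0.0
for _ in range(40):
    y_meas = K_plant * u          # plant output at the current steady state
    d = y_meas - K_model * u      # model-plant mismatch estimate
    u = (y_sp - d) / K_model      # new target from the corrected model

# Despite the 30% gain error, the plant output converges to the set-point.
assert abs(K_plant * u - y_sp) < 1e-3
```

Convergence here requires the relative model error to be below 100%; the paper's feasibility test plays the analogous role for the integrating case.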
Energy Technology Data Exchange (ETDEWEB)
Lu Xiaogang [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)]. E-mail: xiaogang@thermocalc.se; Selleby, Malin [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden); Sundman, Bo [Department of Materials Science and Engineering, Royal Institute of Technology, SE-100 44 Stockholm (Sweden)
2007-02-15
The thermal expansivities and heat capacities of MX (M = Ti, Zr, Hf, V, Nb, Ta; X = C, N) carbides and nitrides with NaCl structure were calculated using the Debye-Grueneisen model combined with ab initio calculations. Two different approximations for the Grueneisen parameter γ were used in the Debye-Grueneisen model, i.e. the expressions proposed by Slater and by Dugdale and MacDonald. The thermal electronic contribution was evaluated from ab initio calculations of the electronic density of states. The calculated results were compared with CALPHAD assessments and experimental data. It was found that the calculations using the Dugdale-MacDonald γ can account for most of the experimental data. By fitting experimental heat capacity and thermal expansivity data below the Debye temperatures, an estimation of Poisson's ratio was obtained and Young's and shear moduli were evaluated. In order to reach a reasonable agreement with experimental data, it was necessary to use the logarithmic averaged mass of the constituent atoms. The agreements between the calculated and the experimental values for the bulk and Young's moduli are generally better than the agreement for shear modulus.
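The Debye part of the Debye-Grueneisen model evaluates the standard Debye heat-capacity integral. A minimal numerical sketch (the Debye temperature below is illustrative, not a fitted value from the paper):

```python
import math

def debye_heat_capacity(T, theta_d, n_atoms=1):
    # Debye model molar heat capacity:
    # C_v = 9 n R (T/theta_D)^3 * int_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx,
    # here integrated numerically with the midpoint rule.
    R = 8.314462618
    x_max = theta_d / T
    m = 2000
    h = x_max / m
    integral = 0.0
    for k in range(m):
        x = (k + 0.5) * h
        ex = math.exp(x)
        integral += x**4 * ex / (ex - 1.0)**2 * h
    return 9.0 * n_atoms * R * (T / theta_d) ** 3 * integral

# High-temperature limit: C_v -> 3R per mole of atoms (Dulong-Petit).
theta = 600.0   # illustrative Debye temperature (K) for a hard carbide
assert abs(debye_heat_capacity(6000.0, theta) - 3 * 8.314462618) < 0.05
```

The Grueneisen parameter then relates this vibrational energy to volume changes to yield the thermal expansivity; the thermal electronic term is added separately, as in the paper.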
Random geometry model in criticality calculations of solutions containing Raschig rings
International Nuclear Information System (INIS)
Teng, S.P.; Lindstrom, D.G.
1979-01-01
The criticality constants of fissile solutions containing borated Raschig rings are evaluated using the Monte Carlo code KENO IV with various geometry models. In addition to the models used by other investigators, a new geometry model, the random geometry model, is presented to simulate a system of randomly oriented Raschig rings in solution. A technique for obtaining the material thickness distribution functions of solution and rings for use in the random geometry model is also presented. Comparison between the experimental data and the results calculated by the Monte Carlo method with the various geometry models indicates that the random geometry model is a reasonable alternative to the models previously used to describe Raschig-ring-filled solutions. The random geometry model also solves the problem of describing an array of Raschig-ring-filled tanks, which is beyond the reach of techniques based on the other models
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
For every physical component of a nuclear system there is an associated uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. Propagating uncertainty through a Monte Carlo code simulation by sampling the input parameters has only recently become practical because of the huge computational effort required. In this work, a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. The reactor input-parameter uncertainties considered included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
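The two-sided Wilks criterion used to size the sample can be computed directly: it is the smallest N for which the interval spanned by the sample extremes covers a fraction γ of the population with confidence β. For the 95%/95% case it gives the classic N = 93:

```python
def wilks_two_sided_sample_size(gamma=0.95, beta=0.95):
    # Smallest N such that the two-sided tolerance interval formed from the
    # sample minimum and maximum covers a fraction gamma of the population
    # with confidence beta:
    #   1 - gamma**N - N*(1 - gamma)*gamma**(N - 1) >= beta
    n = 2
    while 1.0 - gamma**n - n * (1.0 - gamma) * gamma**(n - 1) < beta:
        n += 1
    return n

# Classic 95%/95% two-sided result used to size Monte Carlo input samples.
assert wilks_two_sided_sample_size() == 93
```

The appeal of the formula is that N is independent of the number of uncertain input parameters, which is what makes sampling many uncertainties "together, in the same input" affordable.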
Hydroelastic model of PWR reactor internals SAFRAN 1 - Validation of a vibration calculation method
International Nuclear Information System (INIS)
Epstein, A.; Gibert, R.J.; Jeanpierre, F.; Livolant, M.
1978-01-01
The SAFRAN 1 test loop is a hydroelastic similitude, at 1/8 scale, of a 3-loop PWR. Vibrations of the main internals (thermal shield and core barrel) and pressure fluctuations in the thin water sections between the vessel and the internals, and in the inlet and outlet pipes, have been measured. The calculation method consists of: an evaluation of the main vibration and acoustic sources due to the flow (unsteady jet impingement on the core barrel, turbulent flow in a thin water section); a calculation of the modal parameters of the internals taking into account the inertial effects of the fluid (the computer codes AQUAMODE and TRISTANA were used); and a calculation of the acoustic response of the circuit (the computer code VIBRAPHONE was used). The good agreement between the calculations and the experimental results allows this method to be used with greater confidence for predicting the vibration levels of full-scale PWR internals
Unified description of pf-shell nuclei by the Monte Carlo shell model calculations
Energy Technology Data Exchange (ETDEWEB)
Mizusaki, Takahiro; Otsuka, Takaharu [Tokyo Univ. (Japan). Dept. of Physics; Honma, Michio
1998-03-01
Recent attempts to solve the shell model by new methods are briefly reviewed. The shell model calculation by quantum Monte Carlo diagonalization proposed by the authors is a more practical method, and it has been shown to solve the problem with good accuracy. Regarding the treatment of angular momentum, the authors' method uses deformed Slater determinants as the basis, so a projection operator is used to obtain eigenstates of angular momentum. The dynamically determined space is treated mainly stochastically, and the many-body energies for the resulting basis states are evaluated and the states selectively adopted. The symmetry is discussed, and a method was devised for decomposing the shell model space into a dynamically determined space and the product of spin and isospin spaces. The calculation process is illustrated with the example of {sup 50}Mn nuclei. The level structure of {sup 48}Cr, for which the exact energies are known, can be calculated with an accuracy in the absolute energies of within 200 keV. {sup 56}Ni is the self-conjugate nucleus with Z=N=28. The results of shell model calculations of the {sup 56}Ni nuclear structure using the interactions of nuclear models are reported. (K.I.)
International Nuclear Information System (INIS)
Birdsell, K.H.; Campbell, K.; Eggert, K.G.; Travis, B.J.
1989-01-01
This paper presents preliminary transport calculations for radionuclide movement at Yucca Mountain using preliminary data for mineral distributions, retardation parameter distributions, and hypothetical recharge scenarios. These calculations are not performance assessments, but are used to study the effectiveness of the geochemical barriers at the site at a mechanistic level. The preliminary calculations presented have many shortcomings and should be viewed only as a demonstration of the modeling methodology. The simulations were run with TRACRN, a finite-difference porous flow and radionuclide transport code developed for the Yucca Mountain Project. Approximately 30,000 finite-difference nodes are used to represent the unsaturated and saturated zones underlying the repository in three dimensions. Sorption ratios for the radionuclides modeled are assumed to be functions of the mineralogic assemblages of the underlying rock. These transport calculations use a representative radionuclide cation, 135Cs, and anion, 99Tc. The effects on transport of many of the processes thought to be active at Yucca Mountain may be examined using this approach. The model provides a method for examining the integration of flow scenarios, transport, and retardation processes as currently understood for the site. It will also form the basis for estimates of the sensitivity of transport calculations to retardation processes. 11 refs., 17 figs., 1 tab
Energy Technology Data Exchange (ETDEWEB)
Jantzen, C. M. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, M. S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Edwards, T. B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Trivelpiece, C. L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Ramsey, W. G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-06-14
Control of the REDuction/OXidation (REDOX) state of glasses containing high concentrations of transition metals, such as High Level Waste (HLW) glasses, is critical in order to eliminate processing difficulties caused by overly reduced or overly oxidized melts. Operation of a HLW melter at Fe^{+2}/ΣFe ratios of between 0.09 and 0.33, retains radionuclides in the melt and thus the final glass. Specifically, long-lived radioactive ^{99}Tc species are less volatile in the reduced Tc^{4+} state as TcO_{2} than as NaTcO_{4} or Tc_{2}O_{7}, and ruthenium radionuclides in the reduced Ru^{4+} state are insoluble RuO_{2} in the melt which are not as volatile as NaRuO_{4} where the Ru is in the +7 oxidation state. Similarly, hazardous volatile Cr^{6+} occurs in oxidized melt pools as Na_{2}CrO_{4} or Na_{2}Cr_{2}O_{7}, while the Cr^{+3} state is less volatile and remains in the melt as NaCrO_{2} or precipitates as chrome rich spinels. The melter REDOX control balances the oxidants and reductants from the feed and from processing additives such as antifoam.
Energy Technology Data Exchange (ETDEWEB)
Shripad T. Revankar; Nicholas R. Brown; Cheikhou Kane; Seungmin Oh
2010-05-01
The realization of hydrogen as an energy carrier for future power sources relies on a practical method of producing hydrogen at large scale with no emission of greenhouse gases. Hydrogen is an energy carrier that can be produced by a thermochemical water-splitting process. The Sulfur-Iodine (SI) process is an example of a water-splitting method using iodine and sulfur as recycling agents.
Program realization of mathematical model of kinetostatical calculation of flat lever mechanisms
Directory of Open Access Journals (Sweden)
M. A. Vasechkin
2016-01-01
Full Text Available Global computerization has established the dominant position of analytical methods in the study of mechanisms. As a result, kinetostatic analysis of mechanisms using software packages is an important part of the scientific and practical work of engineers and designers, and software implementation of mathematical models for the kinetostatic calculation of mechanisms is therefore of practical interest. The mathematical model was obtained in [1]. A computer procedure was developed in Turbo Pascal that calculates the forces in the kinematic pairs of Assur groups (AG) and the balancing force at the primary link. Before using the computational procedures it is necessary to know all external forces and moments acting on the AG and to determine the inertial forces and moments of the inertial forces. The process of calculating and constructing the positions of the mechanism can be summarized as follows. A cycle is organized in which the position of the initial link of the mechanism is calculated. The positions of the remaining links are then calculated by calls to the relevant procedures of the module DIADA for each AG [2,3]. Using the graphics mode of the computer, the position of the mechanism is displayed. The inertial forces and moments of the inertial forces are computed. By calls to the corresponding procedures of the module, all forces in the kinematic pairs and the balancing force at the primary link are calculated. In each kinematic pair the forces and their directions are drawn with the help of simple graphical procedures; their magnitudes and directions are displayed in a special text-mode window. This work contains listings of the test program MyTеst, an example of the use of the computing capabilities of the developed module. As a check on the calculation procedures of the module, the program reproduces an example of calculating the balancing force according to the method of Zhukovsky (the Zhukovsky lever).
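The Zhukovsky-lever check mentioned at the end amounts to a virtual-power balance: at equilibrium, the power of the balancing force cancels the total power of the applied forces. A minimal sketch in Python (rather than Turbo Pascal), with made-up forces and velocities:

```python
def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def balancing_force(forces, velocities, e_bal, v_bal_point):
    # Virtual-power (Zhukovsky lever) balance for a mechanism in equilibrium:
    # F_bal * (e_bal . v_bal_point) + sum_i F_i . v_i = 0
    p_ext = sum(dot(f, v) for f, v in zip(forces, velocities))
    return -p_ext / dot(e_bal, v_bal_point)

# Made-up check: a single 100 N load whose application point rises at 0.5 m/s.
forces = [(0.0, -100.0)]        # external load (N)
velocities = [(0.0, 0.5)]       # velocity of its application point (m/s)
e_bal = (1.0, 0.0)              # direction of the balancing force
v_bal = (0.25, 0.0)             # velocity of the balancing-force point (m/s)

# p_ext = -50 W, so the balancing force is 50 / 0.25 = 200 N.
assert abs(balancing_force(forces, velocities, e_bal, v_bal) - 200.0) < 1e-9
```

In the classical construction, the same balance is read off graphically from force moments about the pole of the rotated velocity polygon; here it is evaluated algebraically.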
Huiles usagées : schémas IFP de reraffinage / IFP Flowsheets for Rerefining Used Oils
Directory of Open Access Journals (Sweden)
Audibert F.
2006-11-01
Full Text Available Any licensor of used-oil rerefining processes must be aware of the many problems involved in this field. The proposed flowsheet should ensure the flexibility required by feedstock quality variations and environmental-regulation requirements. The complete flowsheets proposed by the Institut Français du Pétrole (IFP) include a first flash, an in-depth physical purification and a refining step adapted to local demand. Details are given for each sequence along with product analyses.
Microscopic calculation of level densities: the shell model Monte Carlo approach
International Nuclear Information System (INIS)
Alhassid, Yoram
2012-01-01
The shell model Monte Carlo (SMMC) approach provides a powerful technique for the microscopic calculation of level densities in model spaces that are many orders of magnitude larger than those that can be treated by conventional methods. We discuss a number of developments: (i) Spin distribution. We used a spin projection method to calculate the exact spin distribution of energy levels as a function of excitation energy. In even-even nuclei we find an odd-even staggering effect (in spin). Our results were confirmed in a recent analysis of experimental data. (ii) Heavy nuclei. The SMMC approach was extended to heavy nuclei. We have studied the crossover between vibrational and rotational collectivity in families of samarium and neodymium isotopes in model spaces of dimension approximately 10^29. We find good agreement with experimental results for both state densities and ⟨J^2⟩ (where J is the total spin). (iii) Collective enhancement factors. We have calculated microscopically the vibrational and rotational enhancement factors of level densities versus excitation energy. We find that the decay of these enhancement factors in heavy nuclei is correlated with the pairing and shape phase transitions. (iv) Odd-even and odd-odd nuclei. The projection on an odd number of particles leads to a sign problem in SMMC. We discuss a novel method to calculate state densities in odd-even and odd-odd nuclei despite the sign problem. (v) State densities versus level densities. The SMMC approach has been used extensively to calculate state densities. However, experiments often measure level densities (where levels are counted without including their spin degeneracies). A spin projection method enables us to also calculate level densities in SMMC. We have calculated the SMMC level density of 162Dy and found it to agree well with experiments
Spin-splitting calculation for zincblende semiconductors using an atomic bond-orbital model.
Kao, Hsiu-Fen; Lo, Ikai; Chiang, Jih-Chen; Chen, Chun-Nan; Wang, Wan-Tsang; Hsu, Yu-Chi; Ren, Chung-Yuan; Lee, Meng-En; Wu, Chieh-Lung; Gau, Ming-Hong
2012-10-17
We develop a 16-band atomic bond-orbital model (16ABOM) to compute the spin splitting induced by bulk inversion asymmetry in zincblende materials. This model is derived from the linear combination of atomic-orbital (LCAO) scheme such that the characteristics of the real atomic orbitals can be preserved to calculate the spin splitting. The Hamiltonian of 16ABOM is based on a similarity transformation performed on the nearest-neighbor LCAO Hamiltonian with a second-order Taylor expansion in k around the Γ point. The spin-splitting energies in bulk zincblende semiconductors, GaAs and InSb, are calculated, and the results agree with the LCAO and first-principles calculations. However, we find that the spin-orbit coupling between bonding and antibonding p-like states, evaluated by the 16ABOM, dominates the spin splitting of the lowest conduction bands in the zincblende materials.
Calculational model for condensation of water vapor during an underground nuclear detonation
International Nuclear Information System (INIS)
Knox, R.J.
1975-01-01
An empirically derived mathematical model was developed to calculate the pressure and temperature history during condensation of water vapor in an underground-nuclear-explosion cavity. The condensation process is non-isothermal. The Clapeyron-Clausius equation has been used as a basis for development of the model. Analytic fits to the vapor pressure and the latent heat of vaporization for saturated water vapor, together with an estimated value for the heat-transfer coefficient, have been used to describe the phenomena. The calculated pressure history during condensation has been determined to be exponential, with a time constant somewhat less than that observed during the cooling of the superheated steam from the explosion. The behavior of the calculated condensation pressure compares well with the observed pressure record (until just prior to cavity collapse) for a particular nuclear detonation event for which data are available
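The Clapeyron-Clausius basis of the model can be illustrated by the constant-latent-heat integration of the vapor-pressure curve. The water properties below are round textbook values, not the report's fitted analytic forms:

```python
import math

def saturation_pressure(T, T_ref=373.15, P_ref=101325.0,
                        latent_heat=2.26e6, R_v=461.5):
    # Integrated Clapeyron-Clausius relation with constant latent heat:
    # P(T) = P_ref * exp(-(L / R_v) * (1/T - 1/T_ref))
    # (L in J/kg, R_v = specific gas constant of water vapor in J/(kg K)).
    return P_ref * math.exp(-(latent_heat / R_v) * (1.0 / T - 1.0 / T_ref))

# Cooling saturated steam from 100 C to 80 C drops the pressure to roughly
# 47-48 kPa, close to tabulated steam data.
p80 = saturation_pressure(353.15)
assert 44e3 < p80 < 52e3
assert saturation_pressure(363.15) > p80   # pressure falls monotonically on cooling
```

The report refines this picture with fitted vapor-pressure and latent-heat functions plus a heat-transfer coefficient, which is what produces the exponential pressure history.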
International Nuclear Information System (INIS)
Artemov, V.G.; Gusev, V.I.; Zinatullin, R.E.; Karpov, A.S.
2007-01-01
Using modeled WWER scram rod drop experiments performed at the Rostov NPP as an example, the influence of delayed neutron parameters on the modeling results was investigated. The delayed neutron parameter values were taken from both domestic and foreign nuclear databases. Numerical modeling was carried out on the basis of the SAPFIR_95 and WWER program packages. Parameters of delayed neutrons were acquired from the ENDF/B-VI and BNAB-78 validated data files. It was demonstrated that using delayed-neutron fraction data from different databases in reactivity meters leads to significantly different reactivity results. Based on the results of the numerically modeled experiments, delayed neutron parameters providing the best agreement between calculated and measured data were selected and recommended for use in reactor calculations (Authors)
Investigation of the influence of the open cell foam models geometry on hydrodynamic calculation
Soloveva, O. V.; Solovev, S. A.; Khusainov, R. R.; Popkova, O. S.; Panenko, D. O.
2018-01-01
A geometrical model of an open cell foam was created as an ordered set of intersecting spheres. The proposed model closely describes a real porous cellular structure. The hydrodynamic flow was calculated on the basis of this simple model in the ANSYS Fluent software package. The pressure drop was determined, and its value was compared with experimental data of other authors. As a result of the studies, we found that a porous structure with smoothed faces provides the smallest pressure drop at the same packing porosity. Analysis of the calculated data demonstrated that approximating an elementary porous cell substantially distorts the flow field, which is undesirable in detailed modeling of open cell foams.
Calculation model for 16N transit time in the secondary side of steam generators
International Nuclear Information System (INIS)
Liu Songyu; Xu Jijun; Xu Ming
1998-01-01
The 16N transit time is essential for determining the leak rate of steam generator tube leaks with a 16N monitoring system, which is a new technique. A model was developed for calculating the 16N transit time in the secondary side of steam generators. According to the flow characteristics of the secondary-side fluid, the transit path is divided into four sectors, from the tube sheet to the sensor on the steam line. The model assumes that 16N moves with the vapor phase in the secondary side, so a model for the vapor velocity distribution in the tube bundle is presented in detail. The 16N transit times calculated with this model are compared with those of EDF for a steam generator of the Qinshan NPP
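The sector decomposition reduces to summing length-over-velocity contributions, after which the well-known ~7.13 s half-life of 16N sets the decay correction applied to the measured count rate. The sector lengths and velocities below are invented for illustration:

```python
import math

# Total transit time is the sum over the four flow sectors of length / velocity.
# Sector lengths (m) and vapor-phase velocities (m/s) are illustrative only.
sectors = [
    ("tube bundle", 3.0, 1.5),
    ("riser",       2.0, 4.0),
    ("steam dome",  1.5, 10.0),
    ("steam line", 12.0, 40.0),
]
transit_time_s = sum(length / velocity for _, length, velocity in sectors)
assert abs(transit_time_s - 2.95) < 1e-9   # 2.0 + 0.5 + 0.15 + 0.3 s

# 16N decays during transit (half-life about 7.13 s), so the measured count
# rate must be corrected by exp(-ln 2 * t / T_half) to infer the leak rate.
decay_factor = math.exp(-math.log(2.0) * transit_time_s / 7.13)
```

The vapor-velocity model in the paper supplies the per-sector velocities; everything downstream of that is this simple bookkeeping.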
Calculations of Inflaton Decays and Reheating: with Applications to No-Scale Inflation Models
Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A
2015-01-01
We discuss inflaton decays and reheating in no-scale Starobinsky-like models of inflation, calculating the effective equation-of-state parameter, $w$, during the epoch of inflaton decay, the reheating temperature, $T_{\rm reh}$, and the number of inflationary e-folds, $N_*$, comparing analytical approximations with numerical calculations. We then illustrate these results with applications to models based on no-scale supergravity and motivated by generic string compactifications, including scenarios where the inflaton is identified as an untwisted-sector matter field with direct Yukawa couplings to MSSM fields, and where the inflaton decays via gravitational-strength interactions. Finally, we use our results to discuss the constraints on these models imposed by present measurements of the scalar spectral index $n_s$ and the tensor-to-scalar perturbation ratio $r$, converting them into constraints on $N_*$, the inflaton decay rate and other parameters of specific no-scale inflationary models.
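The standard estimate of the reheating temperature follows from equating the inflaton decay rate to the Hubble rate in a radiation-dominated universe, T_reh ≈ (90/(π² g_*))^{1/4} (Γ M_P)^{1/2}. A sketch with an illustrative decay rate (g_* = 915/4 is the MSSM value; the paper's specific Γ depend on the model couplings):

```python
import math

M_P = 2.435e18   # reduced Planck mass in GeV

def reheating_temperature(gamma_inflaton, g_star=915.0 / 4.0):
    # T_reh ~ (90 / (pi^2 g_*))**0.25 * sqrt(Gamma * M_P):
    # the temperature at which the decay rate Gamma matches the Hubble rate.
    return (90.0 / (math.pi**2 * g_star))**0.25 \
        * math.sqrt(gamma_inflaton * M_P)

# T_reh scales as sqrt(Gamma): a 4x larger decay rate doubles T_reh.
g = 1e-24   # illustrative inflaton decay rate in GeV
assert abs(reheating_temperature(4 * g) / reheating_temperature(g)
           - 2.0) < 1e-9
```

This square-root scaling is why Yukawa-coupled and gravitational-strength decay scenarios lead to very different T_reh and hence different N_*.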
A new simulation model for calculating the internal exposure of some radionuclides
Directory of Open Access Journals (Sweden)
Mahrous Ayman
2009-01-01
Full Text Available A new model based on a series of mathematical functions for estimating excretion rates following the intake of nine different radionuclides is presented in this work. The radionuclides under investigation are cobalt, iodine, cesium, strontium, ruthenium, radium, thorium, plutonium, and uranium. The committed effective dose has been calculated with our model so as to obtain the urinary and faecal excretion rates for each radionuclide. The model is further validated by comparison with the widely used Mondal software and a simulation program. The results show good agreement between the Mondal package and the model we have constructed.
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T. A.; Rowe, D. J.
2014-01-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functi...
Calculation of spherical models of lead with a source of 14 MeV-neutrons
International Nuclear Information System (INIS)
Markovskij, D.V.; Borisov, A.A.
1989-01-01
Neutron transport calculations for spherical models of lead have been performed with the one-dimensional code BLANK, which implements the direct Monte Carlo method over the whole range of neutron energies, and the results are compared with experiment. 6 refs, 10 figs, 3 tabs
DEFF Research Database (Denmark)
Blurton, Steven Paul; Kesselmeier, M.; Gondan, Matthias
2012-01-01
We propose an improved method for calculating the cumulative first-passage time distribution in Wiener diffusion models with two absorbing barriers. This distribution function is frequently used to describe response times and error probabilities in choice reaction time tasks. The present work extends ...
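The two-barrier first-passage distribution discussed in this abstract has a standard large-time series representation that can be integrated term by term to give the cumulative distribution directly. The sketch below is the textbook series, not the authors' improved method, for a Wiener process with unit diffusion, drift `v`, barrier separation `a`, and relative starting point `w` (all names and parameter values illustrative):

```python
import math

def fpt_cdf_lower(t, a=1.0, v=0.0, w=0.5, K=2000):
    """Probability that a Wiener process (unit diffusion, drift v),
    started at relative position w between absorbing barriers at 0 and a,
    has been absorbed at the LOWER barrier by time t.
    Large-time series, truncated after K terms."""
    if t <= 0:
        return 0.0
    s = 0.0
    for k in range(1, K + 1):
        lam = 0.5 * (v * v + (k * math.pi / a) ** 2)  # decay rate of term k
        s += k * math.sin(k * math.pi * w) * (1.0 - math.exp(-lam * t)) / lam
    return (math.pi / a ** 2) * math.exp(-v * a * w) * s
```

For zero drift and a central starting point, the limit of this defective CDF is the classical absorption probability 1 - w = 0.5, which provides a quick sanity check of the truncated series.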
Black Hole Entropy Calculation in a Modified Thin Film Model Jingyi ...
Indian Academy of Sciences (India)
Abstract. The thin film model is modified to calculate the black hole entropy. The difference from the original method is that the Parikh–Wilczek tunnelling framework is introduced and the self-gravitation of the emission particles is taken into account. In terms of our improvement, if the entropy is still proportional to the area, ...
Model-Independent Calculation of Radiative Neutron Capture on Lithium-7
Rupak, Gautam; Higa, Renato
2011-01-01
The radiative neutron capture on lithium-7 is calculated model independently using a low-energy halo effective field theory. The cross section is expressed in terms of scattering parameters directly related to the S-matrix elements. It depends on the poorly known p-wave effective range parameter
Eliseev, A. V.; Coumou, D.; Chernokulsky, A. V.; Petoukhov, V.; Petri, S.
2013-01-01
In this study we present a scheme for calculating the characteristics of multi-layer cloudiness and precipitation for Earth system models of intermediate complexity (EMICs). This scheme considers three-layer stratiform cloudiness and single-column convective clouds. It distinguishes between ice and
SHARC, a model for calculating atmospheric infrared radiation under non-equilibrium conditions
Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Matthew, M. W.; Adler-Golden, S. M.; Robertson, D. C.; Sharma, R. D.; Brown, J. H.; Healey, R. J.
A new computer model, SHARC, has been developed by the U.S. Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1 cm-1. Comprehensive coverage of the 2 to 40 μm (250 to 5,000 cm-1) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in state populations by explicitly modeling the detailed production, loss, and energy transfer processes among the contributing molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent width approximation along with a statistical correction for line overlap. This approach calculates LOS radiance values which are accurate to ±10% and is roughly two orders of magnitude faster than the traditional LBL methods which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, this model calculates the auroral production and excitation of CO2, NO, and NO+ in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE and FWI field experiments are presented.
Recursive calculation of matrix elements for the generalized seniority shell model
International Nuclear Information System (INIS)
Luo, F.Q.; Caprio, M.A.
2011-01-01
A recursive calculational scheme is developed for matrix elements in the generalized seniority scheme for the nuclear shell model. Recurrence relations are derived which permit straightforward and efficient computation of matrix elements of one-body and two-body operators and basis state overlaps.
Ab initio calculation of the sound velocity of dense hydrogen: implications for models of Jupiter
Alavi, A.; Parrinello, M.; Frenkel, D.
1995-01-01
First-principles molecular dynamics simulations were used to calculate the sound velocity of dense hydrogen, and the results were compared with extrapolations of experimental data that currently conflict with either astrophysical models or data obtained from recent global oscillation measurements of
Improved method for the cutting coefficients calculation in micromilling force modeling
Li, P.; Oosterling, J.A.J.; Hoogstrate, A.M.; Langen, H.H.
2008-01-01
This paper discusses the influence of runout on the calculation of the coefficients of mechanistic force models in micromilling. A runout model is used to study the change of chip thickness, tool angles, and immersion period of the two cutting edges of micro endmills due to runout. A new method to find
A new timing model for calculating the intrinsic timing resolution of a scintillator detector
International Nuclear Information System (INIS)
Shao Yiping
2007-01-01
The coincidence timing resolution is a critical parameter which to a large extent determines the system performance of positron emission tomography (PET). This is particularly true for time-of-flight (TOF) PET that requires an excellent coincidence timing resolution (<<1 ns) in order to significantly improve the image quality. The intrinsic timing resolution is conventionally calculated with a single-exponential timing model that includes two parameters of a scintillator detector: scintillation decay time and total photoelectron yield from the photon-electron conversion. However, this calculation has led to significant errors when the coincidence timing resolution reaches 1 ns or less. In this paper, a bi-exponential timing model is derived and evaluated. The new timing model includes an additional parameter of a scintillator detector: scintillation rise time. The effect of rise time on the timing resolution has been investigated analytically, and the results reveal that the rise time can significantly change the timing resolution of fast scintillators that have short decay time constants. Compared with measured data, the calculations have shown that the new timing model significantly improves the accuracy in the calculation of timing resolutions
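The effect described in this abstract is easy to reproduce numerically: a bi-exponential pulse shape, exp(-t/τ_decay) - exp(-t/τ_rise), is (up to normalisation) the density of the sum of two independent exponential delays, so photoelectron arrival times can be sampled directly and the spread of the first arrival compared with and without a rise time. A hedged Monte Carlo sketch with illustrative parameter values, not the paper's detector model:

```python
import random
import statistics

def first_photon_sigma(n_pe, tau_rise, tau_decay, n_events=1000, rng=None):
    """Standard deviation of the first-photoelectron arrival time (a
    simple proxy for intrinsic timing resolution). Each photoelectron
    time is sampled as Exp(tau_rise) + Exp(tau_decay), since the
    bi-exponential pulse is the density of the sum of two exponential
    delays; tau_rise = 0 recovers the single-exponential model."""
    rng = rng or random.Random(0)
    firsts = []
    for _ in range(n_events):
        t0 = min(
            (rng.expovariate(1.0 / tau_rise) + rng.expovariate(1.0 / tau_decay))
            if tau_rise > 0
            else rng.expovariate(1.0 / tau_decay)
            for _ in range(n_pe)
        )
        firsts.append(t0)
    return statistics.pstdev(firsts)
```

Comparing, say, `first_photon_sigma(1000, 0.0, 40.0)` against `first_photon_sigma(1000, 0.5, 40.0)` (times in ns) shows the qualitative point of the abstract: even a sub-nanosecond rise time noticeably degrades the achievable timing spread of a fast scintillator.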
On large-scale shell-model calculations in {sup 4}He
Energy Technology Data Exchange (ETDEWEB)
Bishop, R.F.; Flynn, M.F. (Manchester Univ. (UK). Inst. of Science and Technology); Bosca, M.C.; Buendia, E.; Guardiola, R. (Granada Univ. (Spain). Dept. de Fisica Moderna)
1990-03-01
Most shell-model calculations of {sup 4}He require very large basis spaces for the energy spectrum to stabilise. Coupled cluster methods and an exact treatment of the centre-of-mass motion dramatically reduce the number of configurations. We thereby obtain almost exact results with small bases, but which include states of very high excitation energy. (author).
Covariance matrices for nuclear cross sections derived from nuclear model calculations
International Nuclear Information System (INIS)
Smith, D. L.
2005-01-01
The growing need for covariance information to accompany the evaluated cross section data libraries utilized in contemporary nuclear applications is spurring the development of new methods to provide this information. Many of the current general purpose libraries of evaluated nuclear data used in applications are derived either almost entirely from nuclear model calculations or from nuclear model calculations benchmarked by available experimental data. Consequently, a consistent method for generating covariance information under these circumstances is required. This report discusses a new approach to producing covariance matrices for cross sections calculated using nuclear models. The present method involves establishing uncertainty information for the underlying parameters of nuclear models used in the calculations and then propagating these uncertainties through to the derived cross sections and related nuclear quantities by means of a Monte Carlo technique rather than the more conventional matrix error propagation approach used in some alternative methods. The formalism to be used in such analyses is discussed in this report along with various issues and caveats that need to be considered in order to proceed with a practical implementation of the methodology
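The Monte Carlo propagation described in this report can be sketched in a few lines: draw parameter sets from their assumed uncertainty distributions, evaluate the model for each draw, and form the sample covariance of the calculated quantities. The toy two-point "model" below stands in for a real nuclear-model code and is purely illustrative:

```python
import random

def mc_covariance(model, param_means, param_sds, n_samples=5000, rng=None):
    """Propagate model-parameter uncertainties to calculated quantities
    by Monte Carlo: sample parameters, run the model, and return the
    sample covariance matrix of the outputs (a plain-Python sketch)."""
    rng = rng or random.Random(1)
    outputs = []
    for _ in range(n_samples):
        params = [rng.gauss(m, s) for m, s in zip(param_means, param_sds)]
        outputs.append(model(params))
    dim = len(outputs[0])
    means = [sum(o[i] for o in outputs) / n_samples for i in range(dim)]
    return [[sum((o[i] - means[i]) * (o[j] - means[j]) for o in outputs)
             / (n_samples - 1)
             for j in range(dim)] for i in range(dim)]

def toy_cross_sections(p):
    """Two toy 'cross sections' sharing parameter p[0]: the shared
    parameter induces off-diagonal covariance, as in real evaluations."""
    return (p[0] + 0.5 * p[1], p[0] - 0.2 * p[1])
```

Because both outputs depend on the shared parameter, the off-diagonal element of the resulting matrix is non-zero, illustrating how cross-energy correlations arise from common model parameters.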
Model of takeoff and landing operations for calculating aerodrome capacity
Directory of Open Access Journals (Sweden)
I. Yu. Agafonova
2014-01-01
Full Text Available The procedures for takeoff and landing in an aircraft flow are discussed. An approach to constructing a model for calculating aerodrome capacity is proposed. The model is decomposed, and one of its elements, the approach mode, is investigated. An estimate of the time interval for this mode and limitations on the minimum distances between aircraft in the stream are presented.
Significance of predictive models/risk calculators for HBV-related hepatocellular carcinoma
DONG Jing
2015-01-01
Hepatitis B virus (HBV)-related hepatocellular carcinoma (HCC) is a major public health problem in Southeast Asia. In recent years, researchers from Hong Kong and Taiwan have reported predictive models or risk calculators for HBV-associated HCC developed by studying its natural history; to some extent, these models predict the likelihood of HCC development. Generally, the risk factors in each model include age, sex, HBV DNA level, and liver cirrhosis. This article discusses the evolution and clinical significa...
A three-dimensional model for calculating the micro disk laser resonant-modes
International Nuclear Information System (INIS)
Sabetjoo, H.; Bahrampor, A.; Farrahi-Moghaddam, R.
2006-01-01
In this article, a semi-analytical model for the theoretical analysis of micro disk lasers is presented. Using this model, the necessary conditions for the existence of lossless and low-loss modes of micro-resonators are obtained. The resonance frequencies of the resonant modes and the attenuation of the low-loss modes are calculated. The validity of the results is confirmed by comparison with finite-difference calculations.
A mathematical model of the nine-month pregnant woman for calculating specific absorbed fractions
International Nuclear Information System (INIS)
Watson, E.E.; Stabin, M.G.
1986-01-01
Existing models that allow calculation of internal doses from radionuclide intakes by both men and women are based on a mathematical model of Reference Man. No attempt has been made to allow for the changing geometric relationships that occur during pregnancy which would affect the doses to the mother's organs and to the fetus. As pregnancy progresses, many of the mother's abdominal organs are repositioned, and their shapes may be somewhat changed. Estimation of specific absorbed fractions requires that existing mathematical models be modified to accommodate these changes. Specific absorbed fractions for Reference Woman at three, six, and nine months of pregnancy should be sufficient for estimating the doses to the pregnant woman and the fetus. This report describes a model for the pregnant woman at nine months. An enlarged uterus was incorporated into a model for Reference Woman. Several abdominal organs as well as the exterior of the trunk were modified to accommodate the new uterus. This model will allow calculation of specific absorbed fractions for the fetus from photon emitters in maternal organs. Specific absorbed fractions for the repositioned maternal organs from other organs can also be calculated. 14 refs., 2 figs
International Nuclear Information System (INIS)
Androsenko, A.A.; Androsenko, P.A.; Deeva, V.V.; Prokof'eva, Z.A.
1990-01-01
The effect of the accuracy of the mathematical model of the system on the calculation results is analyzed using the BRAND program system. Consideration is given to the impact of the following factors: the accuracy of the description of the neutron source energy-angular characteristics, various degrees of approximation of the system geometry, and the adequacy of the Monte Carlo estimator to a real physical neutron detector. The analysis of the calculated results is based on experiments measuring leakage neutron spectra in spherical lead assemblies with a 14 MeV neutron source at the centre. 4 refs.; 2 figs.; 10 tabs
International Nuclear Information System (INIS)
Strenge, D.L.; Watson, E.C.; Droppo, J.G.
1976-06-01
The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Watson, E.C.; Droppo, J.G.
1976-06-01
The development of technological bases for siting nuclear fuel cycle facilities requires calculational models and computer codes for the evaluation of risks and the assessment of environmental impact of radioactive effluents. A literature search and review of available computer programs revealed that no one program was capable of performing all of the great variety of calculations (i.e., external dose, internal dose, population dose, chronic release, accidental release, etc.). Available literature on existing computer programs has been reviewed and a description of each program reviewed is given.
Shell model calculation for Te and Sn isotopes in the vicinity of {sup 100}Sn
Energy Technology Data Exchange (ETDEWEB)
Yakhelef, A.; Bouldjedri, A. [Physics Department, Farhat abbas University, Setif (Algeria); Physics Department, Hadj Lakhdar University, Batna (Algeria)
2012-06-27
New shell model calculations for the even-even isotopes {sup 104-108}Sn and {sup 106,108}Te, in the vicinity of {sup 100}Sn, have been performed. The calculations were carried out using the Windows version of NuShell-MSU. The two-body matrix elements (TBMEs) of the effective interaction between valence nucleons are obtained from the renormalized two-body effective interaction based on the G-matrix derived from the CD-Bonn nucleon-nucleon potential. The single-particle energies of the proton and neutron valence-space orbitals are defined from the available spectra of the lightest odd isotopes of Sb and Sn, respectively.
A brief look at model-based dose calculation principles, practicalities, and promise.
Sloboda, Ron S; Morrison, Hali; Cawston-Grant, Brie; Menon, Geetha V
2017-02-01
Model-based dose calculation algorithms (MBDCAs) have recently emerged as potential successors to the highly practical, but sometimes inaccurate TG-43 formalism for brachytherapy treatment planning. So named for their capacity to more accurately calculate dose deposition in a patient using information from medical images, these approaches to solve the linear Boltzmann radiation transport equation include point kernel superposition, the discrete ordinates method, and Monte Carlo simulation. In this overview, we describe three MBDCAs that are commercially available at the present time, and identify guidance from professional societies and the broader peer-reviewed literature intended to facilitate their safe and appropriate use. We also highlight several important considerations to keep in mind when introducing an MBDCA into clinical practice, and look briefly at early applications reported in the literature and selected from our own ongoing work. The enhanced dose calculation accuracy offered by an MBDCA comes at the additional cost of modelling the geometry and material composition of the patient in treatment position (as determined from imaging), and the treatment applicator (as characterized by the vendor). The adequacy of these inputs and of the radiation source model, which needs to be assessed for each treatment site, treatment technique, and radiation source type, determines the accuracy of the resultant dose calculations. Although new challenges associated with their familiarization, commissioning, clinical implementation, and quality assurance exist, MBDCAs clearly afford an opportunity to improve brachytherapy practice, particularly for low-energy sources.
A model for the calculation of the radiation dose from natural radionuclides in The Netherlands
International Nuclear Information System (INIS)
Ackers, J.G.
1986-02-01
A model has been developed to calculate the radiation dose incurred from natural radioactivity indoors and outdoors, expressed as effective dose equivalent per year. The model is applied to a three-room dwelling characterized by interconnecting air flows and to a dwelling with a crawl space. The individual parameters of the model can be varied to allow investigation of their relative influence. The effective dose equivalent for an adult in the dwelling was calculated to be about 1.7 mSv/year, composed of 15% from cosmic radiation, 35% from terrestrial radioactivity, 20% from radioactivity in the body and 30% from natural radionuclides in building materials. The calculations show an enhancement of about a factor of two in the radon concentration in the air of a room that is ventilated by air from an adjacent room. It is also shown that the attachment rate of radon decay products to aerosols and the plate-out effect are relatively important parameters influencing the magnitude of the dose rate. (Auth.)
International Nuclear Information System (INIS)
Poston, J.W.
1989-01-01
This presentation will review and describe the development of pediatric phantoms for use in radiation dose calculations. The development of pediatric models for dose calculations essentially paralleled that of the adult. In fact, Snyder and Fisher at the Oak Ridge National Laboratory reported on a series of phantoms for such calculations in 1966, about two years before the first MIRD publication on the adult human phantom. These phantoms, for a newborn and for one-, five-, ten-, and fifteen-year-olds, were derived from the adult phantom. The ''pediatric'' models were obtained through a series of transformations applied to the major dimensions of the adult, which were specified in a Cartesian coordinate system. These phantoms suffered from the fact that no real consideration was given to the influence of these mathematical transformations on the actual organ sizes in the resulting models, nor to the relation of the resulting organ masses to those in humans of the particular age. Later, an extensive effort was invested in designing ''individual'' pediatric phantoms for each age, based upon a careful review of the literature. Unfortunately, these phantoms had limited use, and only a small number of calculations were made available to the user community. Examples of the phantoms, their typical dimensions, common weaknesses, etc. will be discussed
A new model for the accurate calculation of natural gas viscosity
Directory of Open Access Journals (Sweden)
Xiaohong Yang
2017-03-01
Full Text Available Viscosity of natural gas is a basic and important parameter of theoretical and practical significance in the domain of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently and at low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized using a large body of experimental data, a diagram showing the variation of viscosity with temperature and density was prepared, showing that: ① the gas viscosity increases with increasing density, and with increasing temperature in the low-density region; ② the gas viscosity increases with decreasing temperature in the high-density region. With this new model, the viscosity of 9 natural gas samples was calculated precisely. The average relative deviation between the calculated values and 1539 experimental data points measured at 250–450 K and 0.10–140.0 MPa is less than 1.9%. Compared with the 793 experimental data points with a measurement error of less than 0.5%, the maximum relative deviation is less than 0.98%. It is concluded that this new model is more advantageous than the previous 8 models in terms of simplicity, accuracy, speed of calculation, and direct applicability to CO2-bearing gas samples.
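For comparison, the same viscosity(temperature, density) structure underlies the classical Lee-Gonzalez-Eakin (1966) correlation, which is widely used for natural gas. The sketch below implements that standard correlation, not the new model of the paper:

```python
import math

def lge_viscosity(temp_K, rho_kg_m3, mol_wt):
    """Lee-Gonzalez-Eakin (1966) natural gas viscosity correlation.
    Inputs: temperature in K, gas density in kg/m3, molecular weight
    in g/mol. Returns viscosity in cP. The correlation internally uses
    degrees Rankine and g/cm3, hence the unit conversions."""
    t_R = temp_K * 1.8                # Kelvin -> Rankine
    rho = rho_kg_m3 / 1000.0          # kg/m3 -> g/cm3
    k = (9.379 + 0.01607 * mol_wt) * t_R ** 1.5 / (209.2 + 19.26 * mol_wt + t_R)
    x = 3.448 + 986.4 / t_R + 0.01009 * mol_wt
    y = 2.447 - 0.2224 * x
    return 1e-4 * k * math.exp(x * rho ** y)
```

The correlation reproduces the qualitative behaviour noted in the abstract: at low density, viscosity rises with temperature (dilute-gas behaviour), while the exponential density term drives the increase with density at high pressure.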
Development of a model to calculate the economic implications of improving the indoor climate
DEFF Research Database (Denmark)
Jensen, Kasper Lynge
in the indoor environment. Office workers exposed to the same indoor environment conditions will in many cases wear different clothing, have different metabolic rates, experience micro environment differences etc. all factors that make it difficult to estimate the effects of the indoor environment...... have been developed; one model estimating the effects of indoor temperature on mental performance and one model estimating the effects of air quality on mental performance. Combined with dynamic building simulations and dose-response relationships, the derived models were used to calculate the total...... on performance. The Bayesian Network uses a probabilistic approach by which a probability distribution can take this variation of the different indoor variables into account. The result from total building economy calculations indicated that depending on the indoor environmental change (improvement...
OPT13B and OPTIM4 - computer codes for optical model calculations
International Nuclear Information System (INIS)
Pal, S.; Srivastava, D.K.; Mukhopadhyay, S.; Ganguly, N.K.
1975-01-01
OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters which produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
Polystochastic Models for Complexity
Iordache, Octavian
2010-01-01
This book is devoted to complexity understanding and management, considered as the main source of efficiency and prosperity for the coming decades. Divided into six chapters, the book begins with a presentation of basic concepts such as complexity, emergence and closure. The second chapter looks at methods and introduces polystochastic models, the wave equation, possibilities and entropy. The third chapter, focusing on physical and chemical systems, analyzes flow-sheet synthesis, cyclic operations of separation, drug delivery systems and entropy production. Biomimetic systems are the main subject of the fourth chapter. Case studies refer to bio-inspired calculation methods, the role of artificial genetic codes, neural networks and neural codes for evolutionary calculus and for evolvable circuits as biomimetic devices. The fifth chapter, taking its inspiration from systems sciences and cognitive sciences, looks at engineering design, case-based reasoning methods, failure analysis, and multi-agent manufacturing...
A four-equation friction model for water hammer calculation in quasi-rigid pipelines
International Nuclear Information System (INIS)
Ghodhbani, Abdelaziz; Haj Taïeb, Ezzeddine
2017-01-01
Friction coupling affects water hammer evolution in pipelines according to the initial flow regime. Unsteady friction models have only been validated with the uncoupled formulation. On the other hand, coupled models such as the four-equation model provide more accurate predictions of water hammer, since fluid-structure interaction (FSI) is taken into account, but they are limited to steady-state friction formulations. This paper deals with the creation of a “four-equation friction model”, which is based on the incorporation of the unsteady head loss given by an unsteady friction model into the four-equation model. For transient laminar flow cases, the Zielke model is considered. The proposed model is applied to a quasi-rigid pipe with an axially moving valve and solved by the method of characteristics (MOC). The damping and shape of the numerical solution are in good agreement with experimental data. Thus, the proposed model can be incorporated into a new computer code. - Highlights: • Both the Zielke model and the four-equation model alone are insufficient to predict water hammer. • The proposed four-equation friction model is obtained by incorporating the unsteady head loss into the four-equation model. • The solution obtained by the proposed model is in good agreement with experimental data. • The wave-speed adjustment scheme is more efficient than interpolation schemes.
The High Level Mathematical Models in Calculating Aircraft Gas Turbine Engine Parameters
Directory of Open Access Journals (Sweden)
Yu. A. Ezrokhi
2017-01-01
Full Text Available The article describes high-level mathematical models developed to solve special problems arising at later stages of design, with regard to calculating aircraft gas turbine engine (GTE) parameters under real operating conditions. The use of blade-row mathematical models, as well as mathematical models of a higher level, including 2D and 3D descriptions of the working process in the engine units and components, makes it possible to determine the parameters and characteristics of the aircraft engine under conditions significantly different from the design conditions. The paper considers the application of mathematical modelling methods (MMM) for solving a wide range of practical problems, such as forcing the engine by injecting water into the flow path, estimating the effect of thermal instability on GTE characteristics, and simulating engine start-up and windmilling conditions. It shows that using MMM to optimize the laws of compressor stator control, as well as the supply of cooling air to the hot turbine components, can significantly improve the integral thrust and economic characteristics of the engine in terms of its gas-dynamic stability, reliability and service life. It should be borne in mind that blade-row mathematical models of the engine are designed to solve purely "engine" problems and do not replace the existing models of various complexity levels used in the calculation and design of compressors and turbines, because their description of the working processes in these units is inevitably inferior in quality to such specialized models. It is shown that the choice of the mathematical modelling level of an aircraft engine for solving a particular problem arising in its design and computational study is to a large extent a compromise. Despite the significantly higher "resolution" and information content of engine mathematical models containing 2D and 3D approaches to the calculation of flow in blade machines
Energy Technology Data Exchange (ETDEWEB)
Lambert, Dan P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Woodham, Wesley H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Williams, Matthew S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Newell, J. David [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Luther, Michelle C. [Auburn Univ., AL (United States); Brandenburg, Clayton H. [Univ.of South Carolina, Columbia, SC (United States)
2016-09-27
Testing was completed to develop a chemical processing flowsheet for the Defense Waste Processing Facility (DWPF), designed to vitrify and stabilize high-level radioactive waste. DWPF processing uses a reducing acid (formic acid) and an oxidizing acid (nitric acid) to rheologically thin the slurry and complete the necessary acid-base and reduction reactions (primarily of mercury and manganese). Formic acid reduces mercuric oxide to elemental mercury, allowing the mercury to be removed during the boiling phase of processing through steam stripping. In runs with active catalysts, formic acid can decompose to hydrogen and nitrate can be reduced to ammonia, both flammable gases, due to rhodium and ruthenium catalysis. Replacement of formic acid with glycolic acid eliminates the generation of rhodium- and ruthenium-catalyzed hydrogen and ammonia. In addition, mercury reduction is still effective with glycolic acid. Hydrogen, ammonia and mercury are discussed in the body of the report. Ten abbreviated tests were completed to develop the operating window for implementation of the flowsheet and determine the impact of changes in acid stoichiometry and the blend of nitric and glycolic acid as it impacts various processing variables over a wide processing region. Three full-length 4-L lab-scale simulations demonstrated the viability of the flowsheet under planned operating conditions. The flowsheet is planned for implementation in early 2017.
Fast Pencil Beam Dose Calculation for Proton Therapy Using a Double-Gaussian Beam Model.
da Silva, Joakim; Ansorge, Richard; Jena, Rajesh
2015-01-01
The highly conformal dose distributions produced by scanned proton pencil beams (PBs) are more sensitive to motion and anatomical changes than those produced by conventional radiotherapy. The ability to calculate the dose in real-time as it is being delivered would enable, for example, online dose monitoring, and is therefore highly desirable. We have previously described an implementation of a PB algorithm running on graphics processing units (GPUs) intended specifically for online dose calculation. Here, we present an extension to the dose calculation engine employing a double-Gaussian beam model to better account for the low-dose halo. To the best of our knowledge, it is the first such PB algorithm for proton therapy running on a GPU. We employ two different parameterizations for the halo dose, one describing the distribution of secondary particles from nuclear interactions found in the literature and one relying on directly fitting the model to Monte Carlo simulations of PBs in water. Despite the large width of the halo contribution, we show how in either case the second Gaussian can be included while prolonging the calculation of the investigated plans by no more than 16%, or the calculation of the most time-consuming energy layers by about 25%. Furthermore, the calculation time is relatively unaffected by the parameterization used, which suggests that these results should hold also for different systems. Finally, since the implementation is based on an algorithm employed by a commercial treatment planning system, it is expected that with adequate tuning, it should be able to reproduce the halo dose from a general beam line with sufficient accuracy.
Measurement-based aerosol forcing calculations: The influence of model complexity
Directory of Open Access Journals (Sweden)
Manfred Wendisch
2001-03-01
Full Text Available On the basis of ground-based microphysical and chemical aerosol measurements, a simple 'two-layer-single-wavelength' and a complex 'multiple-layer-multiple-wavelength' radiative transfer model are used to calculate the local solar radiative forcing of black carbon (BC) and ammonium sulfate ((NH4)2SO4) particles and of mixtures (external and internal) of both materials. The focal points of our approach are (a) that the radiative forcing calculations are based on detailed aerosol measurements with special emphasis on particle absorption, and (b) that the results of the radiative forcing calculations with two different types of models (with regard to model complexity) are compared using identical input data. The sensitivity of the radiative forcing to key input parameters (type of particle mixture, particle growth due to humidity, surface albedo, solar zenith angle, boundary layer height) is investigated. It is shown that the model results for external particle mixtures (wet and dry) differ only slightly from those of the corresponding internal mixture. This conclusion is valid for the results of both model types and for both surface albedo scenarios considered (grass and snow). Furthermore, it is concluded that the results of the two model types approximately agree if it is assumed that the aerosol particles are composed of pure BC. As soon as a mainly scattering substance is included alone or in (internal or external) mixture with BC, the differences between the radiative forcings of both models become significant. This discrepancy results from neglecting multiple scattering effects in the simple radiative transfer model.
Study on the Calculation Models of Bus Delay at Bays Using Queueing Theory and Markov Chain
Directory of Open Access Journals (Sweden)
Feng Sun
2015-01-01
Full Text Available Traffic congestion at bus bays has seriously decreased the service efficiency of public transit in China, so it is crucial to study its theory and methods systematically. However, the existing studies lack a theoretical model for computing this efficiency. Therefore, calculation models of bus delay at bays are studied. Firstly, the process by which buses are delayed at bays is analyzed, and it is found that the delay can be divided into entering delay and exiting delay. Secondly, queueing models of bus bays are formed, and the equilibrium distribution functions are derived by applying the embedded Markov chain to the traditional queueing-theory model in the steady state; calculation models of the entering delay at bays are then derived. Thirdly, the exiting delay is studied using queueing theory and gap acceptance theory. Finally, the proposed models are validated using field-measured data, and the influencing factors are discussed. With these models the delay is easily assessed from the characteristics of the dwell time distribution and the traffic volume in the curb lane at different locations and in different periods. They can provide a basis for the efficiency evaluation of bus bays.
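As a hedged illustration of the queueing step, the mean entering delay at a single-berth bay can be approximated with the classic M/M/1 waiting-time formula Wq = rho / (mu - lambda). The paper's embedded-Markov-chain model is more general than this; the rates below are made-up examples, not field data.

```python
def mm1_entering_delay(arrival_rate, service_rate):
    """Mean delay (seconds) of a bus waiting to enter a one-berth bay,
    modelled as an M/M/1 queue: Wq = rho / (mu - lambda).

    arrival_rate: buses per second (lambda)
    service_rate: berth clearances per second (mu)
    """
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate
    return rho / (service_rate - arrival_rate)
```

For one bus arriving every 120 s and an effective 40 s berth service time, this gives a mean entering delay of 20 s.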
Zhang, Haiyang; Tan, Tianwei; van der Spoel, David
2015-11-10
Evaluation of solvation (binding) free energies with implicit solvent models in different dielectric environments for biological simulations as well as high-throughput ligand screening remains a challenging endeavor. In order to address how well implicit solvent models approximate explicit ones, we examined four generalized Born models (GB(Still), GB(HCT), GB(OBC)I, and GB(OBC)II) for determining the dimerization free energy (ΔG(0)) of β-cyclodextrin monomers in 17 implicit solvents with dielectric constants (D) ranging from 5 to 80 and compared the results to previous free energy calculations with explicit solvents (Zhang et al., J. Phys. Chem. B 2012, 116, 12684-12693). The comparison indicates that neglecting the environmental dependence of Born radii appears acceptable for such calculations involving cyclodextrin, and that the GB(Still) and GB(OBC)I models yield a reasonable estimation of ΔG(0), although the details of binding are quite different from explicit solvents. Large discrepancies between implicit and explicit solvent models occur in high-dielectric media with strong hydrogen bond (HB) interruption properties. ΔG(0) with the GB models is shown to correlate strongly with 2(D-1)/(2D+1) (R(2) ∼ 0.90), in line with the Onsager reaction field (Onsager, J. Am. Chem. Soc. 1936, 58, 1486-1493), but to be very sensitive to D (D J. Chem. Inf. Model. 2015, 55, 1192-1201) reproduce the weak experimental correlations with 2(D-1)/(2D+1) very well.
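The reported correlation uses the Onsager reaction-field factor 2(D - 1)/(2D + 1). A one-line helper makes its weak dependence on D at high dielectric constants explicit: the factor rises from about 0.73 at D = 5 to only about 0.98 at D = 80, so it saturates quickly.

```python
def onsager_factor(D):
    """Onsager reaction-field factor 2(D - 1)/(2D + 1) for a medium
    with relative dielectric constant D."""
    return 2.0 * (D - 1.0) / (2.0 * D + 1.0)
```

This saturation is consistent with the strong correlation of ΔG(0) with the factor across the 17 solvents studied.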
DIDEM - An integrated model for comparative health damage costs calculation of air pollution
Ravina, Marco; Panepinto, Deborah; Zanetti, Maria Chiara
2018-01-01
Air pollution represents a continuous hazard to human health. Administration, companies and population need efficient indicators of the possible effects given by a change in decision, strategy or habit. The monetary quantification of health effects of air pollution through the definition of external costs is increasingly recognized as a useful indicator to support decision and information at all levels. The development of modelling tools for the calculation of external costs can provide support to analysts in the development of consistent and comparable assessments. In this paper, the DIATI Dispersion and Externalities Model (DIDEM) is presented. The DIDEM model calculates the delta-external costs of air pollution comparing two alternative emission scenarios. This tool integrates CALPUFF's advanced dispersion modelling with the latest WHO recommendations on concentration-response functions. The model is based on the impact pathway method. It was designed to work with a fine spatial resolution and a local or national geographic scope. The modular structure allows users to input their own data sets. The DIDEM model was tested on a real case study, represented by a comparative analysis of the district heating system in Turin, Italy. Additional advantages and drawbacks of the tool are discussed in the paper. A comparison with other existing models worldwide is reported.
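The impact-pathway step at the core of such models can be sketched as population-weighted concentration change times a concentration-response slope times a unit cost. This is a schematic of the method only, not DIDEM's actual code; the CRF slope and unit cost below are placeholders.

```python
def delta_external_cost(delta_conc, population, crf_slope, unit_cost):
    """Monetised delta-health-impact of a concentration change between
    two emission scenarios, summed over grid cells.

    delta_conc: per-cell change in pollutant concentration (ug/m3)
    population: per-cell exposed population
    crf_slope:  extra cases per person per (ug/m3) (placeholder value)
    unit_cost:  monetary value per case (placeholder value)
    """
    cases = sum(dc * pop for dc, pop in zip(delta_conc, population)) * crf_slope
    return cases * unit_cost
```

Comparing two emission scenarios then amounts to running the dispersion model for each, differencing the concentration fields, and applying this step cell by cell.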
Iron-chromium alloys and free surfaces: from ab initio calculations to thermodynamic modeling
International Nuclear Information System (INIS)
Levesque, M.
2010-11-01
Ferritic steels, possibly strengthened by oxide dispersion, are candidates as structural materials for Generation IV and fusion nuclear reactors. Their use is limited by incomplete knowledge of the iron-chromium phase diagram at low temperatures and of the phenomena inducing preferential segregation of one element at grain boundaries or at surfaces. In this context, this work contributes to the multi-scale study of the model iron-chromium alloy and its free surfaces by numerical simulations. The study begins with ab initio calculations of properties related to the mixing of iron and chromium atoms. We highlight a complex dependency of the magnetic moments of the chromium atoms on their local chemical environment. Surface properties also prove sensitive to magnetism; this is the case for the segregation of chromium impurities in iron and for their interactions near the surface. In a second step, we construct a simple energy model with high numerical efficiency. It is based on pair interactions on a rigid lattice, to which local-chemical-environment and temperature dependencies are assigned. With this model, we reproduce the ab initio results at zero temperature and experimental results at high temperature. We also deduce the solubility limits at all intermediate temperatures with mean-field approximations, which we compare to Monte Carlo simulations. The last step of our work is to introduce free surfaces into the model. We then study the effect of ab initio calculated bulk and surface properties on surface segregation. Finally, we calculate segregation isotherms and propose a model for the evolution of the surface composition of iron-chromium alloys as a function of bulk composition.
Optimal Calculation of Residuals for ARMAX Models with Applications to Model Verification
DEFF Research Database (Denmark)
Knudsen, Torben
1997-01-01
Residual tests for sufficient model orders are based on the assumption that prediction errors are white when the model is correct. If an ARMAX system has zeros in the MA part which are close to the unit circle, then the standard predictor can have large transients. Even when the correct model...
Development and application of the PBMR fission product release calculation model
International Nuclear Information System (INIS)
Merwe, J.J. van der; Clifford, I.
2008-01-01
At PBMR, long-lived fission product release from fuel spheres is calculated using the German legacy software product GETTER. GETTER is a good tool for performing calculations for fuel spheres under controlled operating conditions, including irradiation tests and post-irradiation heat-up experiments. It has proved itself a versatile reactor analysis tool, but is rather cumbersome when used for accident and sensitivity analysis. Developments in depressurized loss of forced cooling (DLOFC) accident analysis using GETTER led to the creation of FIssion Product RElease under accident (X) conditions (FIPREX), and later FIPREX-GETTER. FIPREX-GETTER is designed as a wrapper around GETTER so that calculations can be carried out for large numbers of fuel spheres with design and operating parameters that can be varied stochastically. This allows full Monte Carlo sensitivity analyses to be performed for representative cores containing many fuel spheres. The development process and application of FIPREX-GETTER in reactor analysis at PBMR are explained, and the requirements for future developments of the code are discussed. Results are presented for a sample PBMR core design under normal operating conditions as well as for a suite of design-basis accident events, illustrating the functionality of FIPREX-GETTER. Monte Carlo sensitivity analysis principles are explained and presented for each calculation type. The plan and current status of verification and validation (V and V) are described. This is an important and necessary process for all software and calculation model development at PBMR.
MATHEMATICAL MODEL FOR CALCULATION OF INFORMATION RISKS FOR INFORMATION AND LOGISTICS SYSTEM
Directory of Open Access Journals (Sweden)
A. G. Korobeynikov
2015-05-01
Full Text Available Subject of research. The paper deals with a mathematical model for the calculation of information risks arising during the transport and distribution of material resources under conditions of uncertainty. Here, information risks denote the danger of losses or damage resulting from the company's application of information technologies. Method. The solution is based on the transportation problem in a stochastic formulation, drawing on methods of mathematical modeling theory, graph theory, probability theory, and Markov chains. The mathematical model is constructed in several stages. At the initial stage, the capacity of the different sites as a function of time is calculated on the basis of information received from the information and logistics system, the weight matrix is formed, and the digraph is constructed. Then the minimum route covering all specified vertices is found by means of Dijkstra's algorithm. At the second stage, systems of Kolmogorov differential equations are formed using information about the calculated route. The resulting solutions give the probabilities that resources are located at a particular vertex as a function of time. At the third stage, the overall probability of passing the whole route as a function of time is calculated on the basis of the multiplication theorem of probabilities. Information risk, as a function of time, is defined as the product of the greatest possible damage and the overall probability of passing the whole route. In this case information risk is measured in units of damage, corresponding to the monetary unit in which the information and logistics system operates. Main results. The operability of the presented mathematical model is shown in a concrete example of the transportation of material resources, in which the places of shipment and delivery, the routes and their capacities, the greatest possible damage, and the admissible risk are specified. The calculations presented on a diagram showed
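The first and third stages described above (minimum route via Dijkstra's algorithm, then risk as greatest possible damage times the route-passing probability) can be sketched as follows. The graph, leg probabilities, and damage value are hypothetical, and the second stage (Kolmogorov equations) is omitted.

```python
import heapq

def dijkstra(graph, src, dst):
    """Minimum-cost route in a weighted digraph.
    graph: {node: [(neighbour, weight), ...]}. Returns (cost, path)."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    done = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in done:
            continue
        done.add(u)
        if u == dst:
            break
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk predecessors back from dst to recover the route
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return dist[dst], path[::-1]

def information_risk(leg_pass_probabilities, max_damage):
    """Risk = greatest possible damage x probability of passing the whole
    route (multiplication theorem; independent legs assumed)."""
    p = 1.0
    for pi in leg_pass_probabilities:
        p *= pi
    return max_damage * p
```

The risk is then expressed in the same monetary unit as the damage estimate, as in the model above.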
Analytical calculation of detailed model parameters of cast resin dry-type transformers
International Nuclear Information System (INIS)
Eslamian, M.; Vahidi, B.; Hosseinian, S.H.
2011-01-01
Highlights: → In this paper the high-frequency behavior of cast resin dry-type transformers was simulated. → Parameters of the detailed model were calculated using an analytical method and compared with FEM results. → A lab transformer was constructed in order to compare theoretical and experimental results. -- Abstract: The non-flammable characteristics of cast resin dry-type transformers make them suitable for a wide range of applications. This paper presents an analytical method for obtaining the parameters of the detailed model of these transformers. The calculated parameters are compared with and verified against the corresponding FEM results and, where necessary, correction factors are introduced to modify the analytical solutions. Transient voltages under full and chopped test impulses are calculated using the obtained detailed model. In order to validate the model, a setup was constructed for testing the high-voltage winding of a cast resin dry-type transformer. The simulation results were compared with the experimental data measured from FRA and impulse tests.
SHARC, a model for calculating atmospheric and infrared radiation under non-equilibrium conditions
Sundberg, R. L.; Duff, J. W.; Gruninger, J. H.; Bernstein, L. S.; Sharma, R. D.
1994-01-01
A new computer model, SHARC, has been developed by the Air Force for calculating high-altitude atmospheric IR radiance and transmittance spectra with a resolution of better than 1/cm. Comprehensive coverage of the 2 to 40 micron (250/cm to 5,000/cm) wavelength region is provided for arbitrary lines of sight in the 50-300 km altitude regime. SHARC accounts for the deviation from local thermodynamic equilibrium (LTE) in vibrational state populations by explicitly modeling the detailed production, loss, and energy transfer processes among the important molecular vibrational states. The calculated vibrational populations are found to be similar to those obtained from other non-LTE codes. The radiation transport algorithm is based on a single-line equivalent-width approximation along with a statistical correction for line overlap. This approach is reasonably accurate for most applications and is roughly two orders of magnitude faster than traditional line-by-line (LBL) methods, which explicitly integrate over individual line shapes. In addition to quiescent atmospheric processes, the model calculates the auroral production and excitation of CO2, NO, and NO(+) in localized regions of the atmosphere. Illustrative comparisons of SHARC predictions to other models and to data from the CIRRIS, SPIRE, and FWI field experiments are presented.
Modelling of electron contamination in clinical photon beams for Monte Carlo dose calculation
International Nuclear Information System (INIS)
Yang, J; Li, J S; Qin, L; Xiong, W; Ma, C-M
2004-01-01
The purpose of this work is to model electron contamination in clinical photon beams and to commission the source model using measured data for Monte Carlo treatment planning. In this work, a planar source is used to represent the contaminant electrons at a plane above the upper jaws. The source size depends on the dimensions of the field size at the isocentre. The energy spectra of the contaminant electrons are predetermined using Monte Carlo simulations for photon beams from different clinical accelerators. A 'random creep' method is employed to derive the weight of the electron contamination source by matching Monte Carlo calculated monoenergetic photon and electron percent depth-dose (PDD) curves with measured PDD curves. We have integrated this electron contamination source into a previously developed multiple-source model and validated the model for photon beams from Siemens PRIMUS accelerators. The EGS4-based Monte Carlo user codes BEAM and MCSIM were used for linac head simulation and dose calculation. The Monte Carlo calculated dose distributions were compared with measured data. Our results showed good agreement (less than 2% or 2 mm) for 6, 10 and 18 MV photon beams.
Caon, Martin
2013-09-01
The ADELAIDE voxel model of paediatric anatomy was used with the EGSnrc Monte Carlo code to compare effective dose from computed tomography (CT) calculated with both the ICRP103 and ICRP60 definitions, which differ in their tissue weighting factors and in the included tissues. The new tissue weighting factors resulted in a lower effective dose for pelvis CT (than if calculated using ICRP60 tissue weighting factors), by 6.5%, but higher effective doses for all other examinations. The ICRP103-calculated effective dose was higher for CT abdomen + pelvis (by 4.6%), CT abdomen (by 9.5%), CT chest + abdomen + pelvis (by 6%), CT chest + abdomen (by 9.6%), CT chest (by 10.1%) and cardiac CT (by 11.5%). These values, along with published values of effective dose from CT calculated for both sets of tissue weighting factors, were used to determine single values of the ratio of ICRP103- to ICRP60-calculated effective dose for seven CT examinations. The following values of ICRP103:ICRP60 are suggested for converting ICRP60-calculated effective dose to ICRP103-calculated effective dose: pelvis CT, 0.75; abdomen CT, abdomen + pelvis CT, and chest + abdomen + pelvis CT, 1.00; chest + abdomen CT and chest CT, 1.15; cardiac CT, 1.25.
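The suggested conversion ratios amount to a simple lookup table. A minimal sketch follows; the dictionary keys are my own shorthand labels for the examinations, not identifiers from the paper.

```python
# Ratio ICRP103:ICRP60 of CT effective doses, as suggested in the abstract
# (keys are shorthand labels for the seven examinations).
ICRP103_OVER_ICRP60 = {
    "pelvis": 0.75,
    "abdomen": 1.00,
    "abdomen+pelvis": 1.00,
    "chest+abdomen+pelvis": 1.00,
    "chest+abdomen": 1.15,
    "chest": 1.15,
    "cardiac": 1.25,
}

def icrp60_to_icrp103(effective_dose_msv, examination):
    """Convert an ICRP60-calculated CT effective dose (mSv) to the
    corresponding ICRP103-calculated value using the suggested ratio."""
    return effective_dose_msv * ICRP103_OVER_ICRP60[examination]
```

For example, a pelvis CT effective dose of 4.0 mSv under ICRP60 corresponds to 3.0 mSv under ICRP103.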
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
Directory of Open Access Journals (Sweden)
Stovgaard Kasper
2010-08-01
Full Text Available Abstract Background: Genome sequencing projects have expanded the gap between the number of known protein sequences and structures. The limitations of current high-resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small-angle X-ray scattering (SAXS) is an established low-resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can, for example, be used to construct a likelihood function, which is paramount for structure determination based on statistical inference. Results: We present a method for the efficient calculation of accurate SAXS curves based on the Debye formula and a set of scattering form factors for dummy-atom representations of amino acids. Such a method avoids the computationally costly iteration over all atoms. We estimated the form factors using data generated from a set of high-quality protein structures. No ad hoc scaling or correction factors are applied in the calculation of the curves. Two coarse-grained representations of protein structure were investigated; two scattering bodies per amino acid led to significantly better results than a single scattering body. Conclusion: We show that the obtained point estimates allow the calculation of accurate SAXS curves from coarse-grained protein models. The resulting curves are on par with the current state-of-the-art program CRYSOL, which requires full atomic detail. Our method was also comparable to CRYSOL in recognizing native structures among native-like decoys. As a proof of concept, we combined the coarse-grained Debye calculation with a previously described probabilistic model of protein structure, TorusDBN. This resulted in a significant improvement in decoy recognition performance. In conclusion, the presented method shows great promise for
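The Debye formula underlying the method sums pairwise scattering contributions: I(q) = sum_i sum_j f_i f_j sin(q r_ij)/(q r_ij). A direct O(n^2) sketch over point scatterers follows; the coordinates and form factors are illustrative, not the fitted dummy-atom form factors of the paper.

```python
import math

def debye_intensity(q, positions, form_factors):
    """Scattering intensity at momentum transfer q via the Debye formula:
    I(q) = sum_i sum_j f_i f_j sin(q r_ij) / (q r_ij),
    where the i == j (r_ij = 0) terms contribute f_i**2."""
    n = len(positions)
    total = 0.0
    for i in range(n):
        for j in range(n):
            qr = q * math.dist(positions[i], positions[j])
            sinc = 1.0 if qr == 0.0 else math.sin(qr) / qr
            total += form_factors[i] * form_factors[j] * sinc
    return total
```

With one or two scattering bodies per amino acid instead of all atoms, n drops by an order of magnitude, which is what makes the quadratic double loop affordable.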
Application of a Monte Carlo linac model in routine verifications of dose calculations
International Nuclear Information System (INIS)
Linares Rosales, H. M.; Alfonso Laguardia, R.; Lara Mas, E.; Popescu, T.
2015-01-01
The analysis of some parameters of interest in radiotherapy medical physics, based on an experimentally validated Monte Carlo model of an Elekta Precise linear accelerator, was performed for 6 and 15 MV photon beams. The simulations were performed using the EGSnrc code. As reference for the simulations, the optimal beam parameter values (energy and FWHM) previously obtained were used. Dose deposition calculations in water phantoms were performed for the typical complex geometries commonly used in acceptance and quality control tests, such as irregular and asymmetric fields. Parameters such as MLC scatter, maximum opening or closing position, and the separation between them were analyzed from the calculations in water. Similarly, simulations were performed on phantoms obtained from CT studies of real patients, comparing the dose distribution calculated with EGSnrc and the dose distribution obtained from the computerized treatment planning systems (TPS) used in routine clinical plans. All results showed good agreement with measurements, all within tolerance limits. These results open the possibility of using the developed model as a robust verification tool for validating calculations in very complex situations, where the accuracy of the available TPS could be questionable. (Author)
An analytic linear accelerator source model for GPU-based Monte Carlo dose calculations
Tian, Zhen; Li, Yongbao; Folkerts, Michael; Shi, Feng; Jiang, Steve B.; Jia, Xun
2015-10-01
Recently, there has been a lot of research interest in developing fast Monte Carlo (MC) dose calculation methods on graphics processing unit (GPU) platforms. A good linear accelerator (linac) source model is critical for both accuracy and efficiency considerations. In principle, an analytical source model should be more preferred for GPU-based MC dose engines than a phase-space file-based model, in that data loading and CPU-GPU data transfer can be avoided. In this paper, we presented an analytical field-independent source model specifically developed for GPU-based MC dose calculations, associated with a GPU-friendly sampling scheme. A key concept called phase-space-ring (PSR) was proposed. Each PSR contained a group of particles that were of the same type, close in energy and reside in a narrow ring on the phase-space plane located just above the upper jaws. The model parameterized the probability densities of particle location, direction and energy for each primary photon PSR, scattered photon PSR and electron PSR. Models of one 2D Gaussian distribution or multiple Gaussian components were employed to represent the particle direction distributions of these PSRs. A method was developed to analyze a reference phase-space file and derive corresponding model parameters. To efficiently use our model in MC dose calculations on GPU, we proposed a GPU-friendly sampling strategy, which ensured that the particles sampled and transported simultaneously are of the same type and close in energy to alleviate GPU thread divergences. To test the accuracy of our model, dose distributions of a set of open fields in a water phantom were calculated using our source model and compared to those calculated using the reference phase-space files. For the high dose gradient regions, the average distance-to-agreement (DTA) was within 1 mm and the maximum DTA within 2 mm. For relatively low dose gradient regions, the root-mean-square (RMS) dose difference was within 1.1% and the maximum
SU-F-BRD-09: A Random Walk Model Algorithm for Proton Dose Calculation
International Nuclear Information System (INIS)
Yao, W; Farr, J
2015-01-01
Purpose: To develop a random walk model algorithm for calculating proton dose with a balanced computation burden and accuracy. Methods: The random walk (RW) model is sometimes referred to as a density Monte Carlo (MC) simulation. In MC proton dose calculation, the use of a Gaussian angular distribution of protons due to multiple Coulomb scattering (MCS) is convenient, but in RW the use of a Gaussian angular distribution requires extremely large computation time and memory. Thus, our RW model derives a spatial distribution from the angular one to accelerate the computation and decrease the memory usage. From the physics and from comparison with MC simulations, we have determined and analytically expressed the critical variables affecting the dose accuracy in our RW model. Results: Besides variables such as MCS, stopping power, and the energy spectrum after energy absorption, which have been extensively discussed in the literature, the following variables were found to be critical in our RW model: (1) the inverse-square law, which can significantly reduce the computation burden and memory; (2) the non-Gaussian spatial distribution after MCS; and (3) the mean direction of scatters at each voxel. In comparison to MC results, taken as reference, for a water phantom irradiated by mono-energetic proton beams from 75 MeV to 221.28 MeV, the gamma test pass rate was 100% for the 2%/2mm/10% criterion. For a highly heterogeneous phantom consisting of water with an embedded 10 cm cortical bone and a 10 cm lung in the Bragg peak region of the proton beam, the gamma test pass rate was greater than 98% for the 3%/3mm/10% criterion. Conclusion: We have determined the key variables in our RW model for proton dose calculation. Compared with commercial pencil beam algorithms, our RW model much improves the dose accuracy in heterogeneous regions, and is about 10 times faster than MC simulations.
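The x%/ymm/z% gamma criterion quoted in the results can be evaluated, in one dimension, as a minimal sketch: for each reference point above the low-dose threshold z, gamma is the minimum over evaluated points of sqrt((Δdose/dose_tol)^2 + (Δdistance/dist_tol)^2), and the pass rate is the fraction of points with gamma <= 1. This global-gamma sketch simplifies clinical 3D gamma analysis and uses made-up profiles.

```python
import math

def gamma_pass_rate(xs, ref, evl, dose_tol_frac, dist_tol, low_dose_frac):
    """1D global gamma pass rate for an evaluated dose profile `evl`
    against a reference profile `ref`, both sampled at positions `xs` (mm).

    dose_tol_frac: dose tolerance as a fraction of the reference maximum
    dist_tol:      distance-to-agreement tolerance (mm)
    low_dose_frac: reference points below this fraction of max are skipped
    """
    d_tol = dose_tol_frac * max(ref)
    low = low_dose_frac * max(ref)
    passed = total = 0
    for x_r, d_r in zip(xs, ref):
        if d_r < low:
            continue  # low-dose exclusion, the "z%" part of the criterion
        total += 1
        gamma = min(
            math.sqrt(((d_e - d_r) / d_tol) ** 2 + ((x_e - x_r) / dist_tol) ** 2)
            for x_e, d_e in zip(xs, evl)
        )
        if gamma <= 1.0:
            passed += 1
    return passed / total
```

A 2%/2mm/10% criterion corresponds to dose_tol_frac=0.02, dist_tol=2.0, low_dose_frac=0.10.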
Reply to comment on 'Model calculation of the scanned field enhancement factor of CNTs'
International Nuclear Information System (INIS)
Ahmad, Amir; Tripathi, V K
2010-01-01
In the paper (Ahmad and Tripathi 2006 Nanotechnology 17 3798), we derived an expression for the field enhancement factor of CNTs under any positional distribution of CNTs, using the model of a floating sphere between parallel anode and cathode plates. With this expression we can compute the field enhancement factor of a CNT in a cluster (non-uniformly distributed CNTs); it was also used to compute the field enhancement factor of a CNT in an array (uniformly distributed CNTs). We used an approximation to calculate the field enhancement factor, so our expressions are correct only under that assumption. Zhbanov et al (2010 Nanotechnology 21 358001) suggest a correction that calculates the field enhancement factor without this approximation; the correction can therefore improve the applicability of the model. (reply)
Extended wave-packet model to calculate energy-loss moments of protons in matter
Archubi, C. D.; Arista, N. R.
2017-12-01
In this work we introduce modifications to the wave-packet method proposed by Kaneko to calculate the energy-loss moments of a projectile traversing a target that is represented in terms of Gaussian functions for the momentum distributions of electrons in the atomic shells. These modifications are introduced using the Levine and Louie technique to take into account the energy gaps corresponding to the different atomic levels of the target. We use the extended wave-packet model to evaluate the stopping power, the energy straggling, the inverse mean free path, and the ionization cross sections for protons in several targets, obtaining good agreement for all these quantities over an extensive energy range that covers the low-, intermediate-, and high-energy regions. The extended wave-packet model proposed here provides a method to calculate in a very straightforward way all the significant terms of the inelastic interaction of light ions with any element of the periodic table.
Calculation of search volume on cruise-searching planktivorous fish in foraging model.
Park, Bae Kyung; Lee, Yong Seok; Park, Seok Soon
2007-07-01
The search volume of cruising planktivorous fish was calculated based on detailed observations of their behavior. To examine the factors influencing search volume, a series of experiments was conducted by varying ambient conditions such as structural complexity, light intensity, and turbidity. Pseudorasbora parva was used as the predator and Daphnia pulex was selected as the prey. The scanning area of P. parva was elliptic in shape, and the search volume changed drastically depending on ambient conditions. Compared with the results of a previous foraging model, the search volumes reported in the previous study were 1.2 to 2.4 times larger than those measured in our study. These results on changes in feeding rate can be useful in determining the microhabitat requirements of P. parva and other cyprinids with similar foraging behavior. The calculated search volume is compared with other foraging models, and the effect of zooplankton-planktivore interactions on the aquatic ecosystem is discussed.
Völlinger, Christine; Russenschuck, Stephan
2001-01-01
Field variations in the LHC superconducting magnets, e. g. during the ramping of the magnets, induce magnetization currents in the superconducting material, the so-called persistent currents that do not decay but persist due to the lack of resistivity. This paper describes a semi-analytical hysteresis model for hard superconductors, which has been developed for the computation of the total field errors arising from persistent currents. Since the superconducting coil is surrounded by a ferromagnetic yoke structure, the persistent current model is combined with the finite element method (FEM), as the non-linear yoke can only be calculated numerically. The used finite element method is based on a reduced vector potential formulation that avoids the meshing of the coil while calculating the part of the field arising from the source currents by means of the Biot-Savart Law. The combination allows to determine persistent current induced field errors as function of the excitation and for arbitrarily shaped iron yoke...
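The reduced vector potential formulation mentioned above computes the source-current part of the field directly from the Biot-Savart law rather than from a meshed coil. A minimal sketch of a discretized Biot-Savart evaluation (a generic polyline integrator, not CERN's field-computation code; loop geometry and current are assumed values), validated against the analytic field at the centre of a circular loop, B = mu0*I/(2R):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def biot_savart(points, I, path):
    """B field at 'points' (M x 3) from current I along a closed polyline
    'path' (N x 3 vertices), via the discretized Biot-Savart law:
    dB = mu0*I/(4*pi) * dl x r / |r|^3, with r from segment midpoint."""
    B = np.zeros((len(points), 3))
    for a, b in zip(path, np.roll(path, -1, axis=0)):
        dl = b - a
        mid = 0.5 * (a + b)
        r = points - mid                         # source-to-field vectors
        r3 = np.linalg.norm(r, axis=1)**3
        B += MU0 * I / (4 * np.pi) * np.cross(dl, r) / r3[:, None]
    return B

# check against the analytic centre field of a circular loop
R, I = 1.0, 100.0
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
loop = np.stack([R * np.cos(theta), R * np.sin(theta),
                 np.zeros_like(theta)], axis=1)
Bz = biot_savart(np.array([[0.0, 0.0, 0.0]]), I, loop)[0, 2]
analytic = MU0 * I / (2 * R)   # Bz should approach this as segments refine
```

With 200 segments the polygon approximation agrees with the analytic value to better than 0.1%.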
Comparison of inverse dynamics calculated by two- and three-dimensional models during walking
DEFF Research Database (Denmark)
Alkjaer, T; Simonsen, E B; Dyhre-Poulsen, P
2001-01-01
…recorded the subjects as they walked across two force plates. The subjects were invited to approach a walking speed of 4.5 km/h. The ankle, knee and hip joint moments in the sagittal plane were calculated by 2D and 3D inverse dynamics analysis and compared. Despite the uniform walking speed (4.53 km/h) and similar footwear, relatively large inter-individual variations were found in the joint moment patterns during the stance phase. The differences between individuals were present in both the 2D and 3D analysis. For the entire sample of subjects the overall time course pattern of the ankle, knee and hip … the magnitude of the joint moments calculated by 2D and 3D inverse dynamics, but the inter-individual variation was not affected by the different models. The simpler 2D model therefore seems appropriate for human gait analysis. However, comparisons of gait data from different studies are problematic…
A calculation model for X-ray diffraction by curved-graphene nanoparticles
International Nuclear Information System (INIS)
Chernozatonskii, L.A.; Neverov, V.S.; Kukushkin, A.B.
2012-01-01
An approximation of the positions of carbon atoms in a curved graphene sheet is suggested for calculation of X-ray diffraction (XRD) patterns of curved-graphene nanoparticles. The model is tested for carbon nanotubes and newly calculated carbon nanotoroids consisting of several hundreds of atoms. It is shown that the random distribution of carbon atoms with graphene surface-averaged density and the local graphene-like rearrangement of atoms in a curved lattice are sufficient for describing the XRD patterns of an ensemble of respective exact carbon nanoparticles of random isotropic orientation in the range of scattering wave vector's modulus q from several units to several tens of inverse nanometers. The model is of interest to a fast-routine identification of curved-graphene nanoparticles in carbonaceous materials.
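XRD patterns of finite, randomly oriented clusters of atoms like those described above are commonly obtained from the Debye scattering equation, which depends only on interatomic distances. A minimal sketch (unit atomic form factors and a toy atom ring, not the authors' model):

```python
import numpy as np

def debye_intensity(positions, q):
    """Orientation-averaged scattered intensity of an atom cluster from the
    Debye scattering equation, I(q) = sum_{i,j} sin(q*r_ij)/(q*r_ij),
    with unit atomic form factors for brevity."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)            # pair distance matrix
    # np.sinc(x) = sin(pi*x)/(pi*x), so sin(q*r)/(q*r) = sinc(q*r/pi);
    # it also handles the diagonal r_ii = 0 correctly (sinc(0) = 1)
    return np.array([np.sinc(qk * r / np.pi).sum() for qk in q])

# a toy ring of 12 atoms (positions in nm, q in inverse nanometers)
n = 12
ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
pos = np.stack([np.cos(ang), np.sin(ang), np.zeros(n)], axis=1)
q = np.linspace(0.1, 60.0, 300)
I = debye_intensity(pos, q)
I0 = debye_intensity(pos, np.array([1e-9]))[0]   # forward limit -> n**2
```

In the forward-scattering limit q → 0 every term of the double sum tends to 1, so I → N² for N atoms, which is a convenient sanity check.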
Use of shell model calculations in R-matrix studies of neutron-induced reactions
International Nuclear Information System (INIS)
Knox, H.D.
1986-01-01
R-matrix analyses of neutron-induced reactions for many of the lightest p-shell nuclei are difficult due to a lack of distinct resonance structure in the reaction cross sections. Initial values for the required R-matrix parameters, Eλ and γλc, for states in the compound system can be obtained from shell model calculations. In the present work, the results of recent shell model calculations for the lithium isotopes have been used in R-matrix analyses of 6Li+n and 7Li+n reactions for En … The effects of the structure of 7Li and 8Li on the 6Li+n and 7Li+n reaction mechanisms and cross sections are discussed. (author)
Efficient Finite Element Models for Calculation of the No-load losses of the Transformer
Directory of Open Access Journals (Sweden)
Kamran Dawood
2017-10-01
Full Text Available Different transformer models are examined for the calculation of no-load losses using finite element analysis. Two-dimensional and three-dimensional finite element analyses are used for the simulation of the transformer, and the finite element results are compared with experimental results. The results show that the 3-dimensional model provides higher accuracy than the 2-dimensional full and half models. However, the 2-dimensional half model is the least time-consuming method compared with the 3-dimensional and 2-dimensional full models. The simulation time taken by the different transformer models is also compared. The difference between the 3-dimensional finite element results and the experimental results is less than 3%. These numerical methods can help transformer designers minimize the development of prototype transformers.
Implications of imprecision in kinetic rate data for photochemical model calculations
Energy Technology Data Exchange (ETDEWEB)
Stewart, R.W.; Thompson, A.M. [National Aeronautics and Space Administration, Greenbelt, MD (United States). Goddard Space Flight Center
1997-12-31
Evaluation of uncertainties in photochemical model calculations is of great importance to scientists performing assessment modeling. A major source of uncertainty is the measurement imprecision inherent in photochemical reaction rate data that modelers rely on. A rigorous method of evaluating the impact of data imprecision on computational uncertainty is the study of error propagation using Monte Carlo techniques. There are two problems with the current implementation of the Monte Carlo method. First, there is no satisfactory way of accounting for the variation of imprecision with temperature in 1, 2, or 3D models; second, due to its computational expense, it is impractical in 3D model studies. These difficulties are discussed. (author) 4 refs.
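The Monte Carlo propagation described above samples each rate constant within its measurement imprecision; evaluations commonly express that imprecision as a temperature-dependent uncertainty factor. A minimal sketch of sampling one Arrhenius rate with such a factor (all numerical values are assumed for illustration, not from any evaluation):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical Arrhenius parameters, k(T) = A * exp(-E_R / T)
A, E_R = 3.0e-12, 1500.0
# 1-sigma uncertainty factor at 298 K, inflated away from 298 K:
# f(T) = f298 * exp(g * |1/T - 1/298|)   (assumed f298, g values)
f298, g = 1.15, 50.0

def sample_k(T, n):
    """Draw n lognormal samples of k(T) with sigma = ln f(T)."""
    k0 = A * np.exp(-E_R / T)
    f = f298 * np.exp(g * abs(1.0 / T - 1.0 / 298.0))
    return k0 * np.exp(rng.normal(0.0, np.log(f), size=n))

ks = sample_k(250.0, 10_000)
rel_spread = ks.std() / ks.mean()   # ~1-sigma relative imprecision at 250 K
```

Feeding such ensembles of rate constants through the photochemical model and collecting the spread of the outputs is exactly the error-propagation procedure the abstract refers to; the cost is one full model run per sample, which is why it becomes impractical in 3D.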
Isotope-hydrological models and calculational methods for investigation of groundwater flow
International Nuclear Information System (INIS)
Marton, L.
1982-01-01
Recharge of groundwater through a semi-confining bed is a typical hydrogeological phenomenon in quaternary deposits which are elevated to a lesser or greater degree above the surroundings. A simple hydrological model has been introduced in which the aquifer is recharged only by precipitation through a semi-permeable layer. For applying the model, it is necessary to know the age of the water or the radioisotope concentrations in some sections of the groundwater flow system. On the basis of the age, the hydraulic conductivity of the aquifer and of the semi-confining bed and the steady rate of infiltration can be calculated. Other hydraulic parameters can be determined with the help of a mathematical model worked out by Freeze and Witherspoon. The hydrological and mathematical models are used inversely and are complementary. The reliability and applicability of the hydrological model has been proved in practice and good results were obtained in hydrogeological research carried out in Hungary. (author)
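The age-based reasoning above rests on the radioactive decay law: a measured radioisotope activity gives a piston-flow water age, which in turn constrains the steady infiltration rate. A minimal sketch (hypothetical porosity and depth values; not the Freeze and Witherspoon model itself):

```python
import math

def radiocarbon_age(c_meas, c_init, half_life=5730.0):
    """Piston-flow groundwater age from the decay law
    C = C0 * exp(-lambda * t)  =>  t = ln(C0 / C) / lambda  [years]."""
    lam = math.log(2) / half_life
    return math.log(c_init / c_meas) / lam

def infiltration_rate(porosity, depth, age):
    """Steady vertical recharge rate implied by that age at the sampling
    depth (plug-flow approximation): w = n * z / t  [m/year]."""
    return porosity * depth / age

t = radiocarbon_age(50.0, 100.0)     # half the initial activity -> one half-life
w = infiltration_rate(0.3, 40.0, t)  # assumed porosity 0.3, depth 40 m
```

A sample at half its initial 14C activity yields an age of one half-life (5730 years); the implied recharge rate then follows directly from the porosity and sampling depth.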
A numerical model for calculating vibration from a railway tunnel embedded in a full-space
Hussein, M. F. M.; Hunt, H. E. M.
2007-08-01
Vibration generated by underground railways transmits to nearby buildings causing annoyance to inhabitants and malfunctioning to sensitive equipment. Vibration can be isolated through countermeasures by reducing the stiffness of railpads, using floating-slab tracks and/or supporting buildings on springs. Modelling of vibration from underground railways has recently gained more importance on account of the need to evaluate accurately the performance of vibration countermeasures before these are implemented. This paper develops an existing model, reported by Forrest and Hunt, for calculating vibration from underground railways. The model, known as the Pipe-in-Pipe model, has been developed in this paper to account for anti-symmetrical inputs and therefore to model tangential forces at the tunnel wall. Moreover, three different arrangements of supports are considered for floating-slab tracks, one which can be used to model directly-fixed slabs. The paper also investigates the wave-guided solution of the track, the tunnel, the surrounding soil and the coupled system. It is shown that the dynamics of the track have significant effect on the results calculated in the wavenumber-frequency domain and therefore an important role on controlling vibration from underground railways.
Influence of polarization and a source model for dose calculation in MRT
Energy Technology Data Exchange (ETDEWEB)
Bartzsch, Stefan, E-mail: s.bartzsch@dkfz.de; Oelfke, Uwe [The Institute of Cancer Research, 15 Cotswold Road, Belmont, Sutton, Surrey SM2 5NG, United Kingdom and Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, D-69120 Heidelberg (Germany); Lerch, Michael; Petasecca, Marco [Centre for Medical Radiation Physics, University of Wollongong, Northfields Avenue, Wollongong 2522 (Australia); Bräuer-Krisch, Elke [European Synchrotron Radiation Facility, 6 Rue Jules Horowitz, 38000 Grenoble (France)
2014-04-15
Purpose: Microbeam Radiation Therapy (MRT), an alternative preclinical treatment strategy using spatially modulated synchrotron radiation on a micrometer scale, has the great potential to cure malignant tumors (e.g., brain tumors) while having low side effects on normal tissue. Dose measurement and calculation in MRT is challenging because of the spatial accuracy required and the arising high dose differences. Dose calculation with Monte Carlo simulations is time consuming and their accuracy is still a matter of debate. In particular, the influence of photon polarization has been discussed in the literature. Moreover, it is controversial whether a complete knowledge of phase space trajectories, i.e., the simulation of the machine from the wiggler to the collimator, is necessary in order to accurately calculate the dose. Methods: With Monte Carlo simulations in the Geant4 toolkit, the authors investigate the influence of polarization on the dose distribution and the therapeutically important peak to valley dose ratios (PVDRs). Furthermore, the authors analyze in detail phase space information provided by Martínez-Rovira et al. [“Development and commissioning of a Monte Carlo photon model for the forthcoming clinical trials in microbeam radiation therapy,” Med. Phys. 39(1), 119–131 (2012)] and examine its influence on peak and valley doses. A simple source model is developed using parallel beams and its applicability is shown in a semiadjoint Monte Carlo simulation. Results are compared to measurements and previously published data. Results: Polarization has a significant influence on the scattered dose outside the microbeam field. In the radiation field, however, dose and PVDRs deduced from calculations without polarization and with polarization differ by less than 3%. The authors show that the key consequences from the phase space information for dose calculations are inhomogeneous primary photon flux, partial absorption due to inclined beam incidence outside
Fuel models and results from the TRAC-PF1/MIMAS TMI-2 accident calculation
International Nuclear Information System (INIS)
Schwegler, E.C.; Maudlin, P.J.
1983-01-01
A brief description of several fuel models used in the TRAC-PF1/MIMAS analysis of the TMI-2 accident is presented, and some of the significant fuel-rod behavior results from this analysis are given. Peak fuel-rod temperatures, oxidation heat production, and embrittlement and failure behavior calculated for the TMI-2 accident are discussed. Other aspects of fuel behavior, such as cladding ballooning and fuel-cladding eutectic formation, were found not to significantly affect the accident progression
ANLECIS-1: Version of the ANLECIS Program for Calculations with the Asymmetric Rotational Model
International Nuclear Information System (INIS)
Lopez Mendez, R.; Garcia Moruarte, F.
1986-01-01
A new modified version of the ANLECIS code is reported. This version makes it possible to fit simultaneously the cross section of the direct process with the asymmetric rotational model and the cross section of the compound nucleus process with the Hauser-Feshbach formalism including modern statistical corrections. Calculations based on this version show a dependence of the compound nucleus cross section on the asymmetry parameter γ. (author). 19 refs
Process modeling study of the CIF incinerator
International Nuclear Information System (INIS)
Hang, T.
1995-01-01
The Savannah River Site (SRS) plans to begin operating the Consolidated Incineration Facility (CIF) in 1996. The CIF will treat liquid and solid low-level radioactive, mixed and RCRA hazardous wastes generated at SRS. In addition to experimental test programs, process modeling was applied to provide guidance in the areas of safety, environmental regulatory compliance, process improvement and optimization. A steady-state flowsheet model was used to calculate material/energy balances and to track key chemical constituents throughout the process units. Dynamic models were developed to predict the CIF transient characteristics in normal and abnormal operation scenarios. Predictions include the rotary kiln heat transfer, dynamic responses of the CIF to fluctuations in the solid waste feed or upsets in the system equipment, performance of the control system, air inleakage in the kiln, etc. This paper reviews the modeling study performed to assist in the deflagration risk assessment
The Updated BaSTI Stellar Evolution Models and Isochrones. I. Solar-scaled Calculations
Hidalgo, Sebastian L.; Pietrinferni, Adriano; Cassisi, Santi; Salaris, Maurizio; Mucciarelli, Alessio; Savino, Alessandro; Aparicio, Antonio; Silva Aguirre, Victor; Verma, Kuldeep
2018-04-01
We present an updated release of the BaSTI (a Bag of Stellar Tracks and Isochrones) stellar model and isochrone library for a solar-scaled heavy element distribution. The main input physics that have been changed from the previous BaSTI release include the solar metal mixture, electron conduction opacities, a few nuclear reaction rates, bolometric corrections, and the treatment of the overshooting efficiency for shrinking convective cores. The new model calculations cover a mass range between 0.1 and 15 M ⊙, 22 initial chemical compositions between [Fe/H] = ‑3.20 and +0.45, with helium to metal enrichment ratio dY/dZ = 1.31. The isochrones cover an age range between 20 Myr and 14.5 Gyr, consistently take into account the pre-main-sequence phase, and have been translated to a large number of popular photometric systems. Asteroseismic properties of the theoretical models have also been calculated. We compare our isochrones with results from independent databases and with several sets of observations to test the accuracy of the calculations. All stellar evolution tracks, asteroseismic properties, and isochrones are made available through a dedicated web site.
Zhang, Jinhua; Fang, Bin; Hong, Jun; Wan, Shaoke; Zhu, Yongsheng
2017-12-01
Combined angular contact ball bearings are widely used in automation, aerospace, and machine tool applications, but little research on combined angular contact ball bearings has been reported. It is shown that the preload and stiffness of combined bearings are mutually influenced rather than being simply the superposition of multiple single bearings; the characteristics of combined bearings are therefore calculated by coupling the load and deformation analyses of the single bearings. In this paper, based on the Jones quasi-static model and an analytical stiffness model, a new iterative algorithm and model are proposed for calculating the preload and stiffness of combined bearings, with dynamic effects, including centrifugal force and gyroscopic moment, taken into account. It is demonstrated that the new method has general applicability: the preload factors of combined bearings are calculated according to different design preloads, the static and dynamic stiffness for various arrangements of combined bearings are comparatively studied and analyzed, and the influences of the design preload magnitude, axial load and rotating speed are discussed in detail. In addition, the variation of the dynamic contact angles of combined bearings with rotating speed is discussed. The results show that the bearing arrangement mode, rotating speed and design preload magnitude have a significant influence on the preload and stiffness of combined bearings. The proposed formulation provides a useful tool for dynamic analysis of complex bearing-rotor systems.
Directory of Open Access Journals (Sweden)
Xuan L Liu
Full Text Available The phase relations and thermodynamic properties of the condensed Al-Co-Cr ternary alloy system are investigated using first-principles calculations based on density functional theory (DFT) and phase-equilibria experiments with X-ray diffraction (XRD) and electron probe micro-analysis (EPMA) measurements. A thermodynamic description is developed by means of the calculation of phase diagrams (CALPHAD) method, using experimental and computational data from the present work and the literature. Emphasis is placed on modeling the bcc-A2, B2, fcc-γ, and tetragonal-σ phases in the temperature range of 1173 to 1623 K. The liquid, bcc-A2 and fcc-γ phases are modeled using substitutional solution descriptions. First-principles special quasirandom structure (SQS) calculations predict a large bcc-A2 (disordered)/B2 (ordered) miscibility gap, in agreement with experiments. A partitioning model is then used for the A2/B2 phase to effectively describe the order-disorder transitions. The critically assessed thermodynamic description reproduces all phase equilibria data well. A2/B2 transitions are also shown to agree well with previous experimental findings.
International Nuclear Information System (INIS)
Spaic, R.; Ilic, R.; Petrovic, B.; Dragovic, M.; Toskovic, F.
2007-01-01
The average absorbed dose of a tumour calculated with the MIRD formalism does not always correlate well with the clinical response. The basic assumption of the MIRD schema, a uniform spatial dose distribution, is at odds with the heterogeneity of the intratumoral distribution of the administered radionuclide, which can lead to spatial nonuniformity of the absorbed dose. Therefore, in clinical practice, the absorbed dose of the tumour has to be calculated at the cellular level. The aim of this study is to define a referent 3D solid tumour model and, using the direct Monte Carlo radiation transport method, to calculate: a) the absorbed fraction, b) the spatial 3D absorbed dose distribution, c) the absorbed dose and relative absorbed dose of cells or clusters of cells, and d) differential and cumulative dose volume histograms. The referent 3D solid tumour model is defined as a sphere randomly filled with cells and necrosis of defined radii and volumetric density. Radiolabelling of the tumour is defined by the intracellular to extracellular radionuclide concentration ratio and the radiolabelled cell density. All these parameters are input data for software which generates the referent 3D solid tumour model. The modified FOTELP Monte Carlo code was used on this model for a simulation study with beta emitters applied to the tumour. The absorbed fractions of Cu-67, I-131, Re-188 and Y-90 were calculated for different tumour sphere masses and radii. Absorbed doses of cells and spatial distributions of the absorbed doses in the referent 3D solid tumour were calculated for the radionuclides I-131 and Y-90. Dose scintigrams, or voxel presentations of the absorbed dose distributions, showed higher homogeneity for Y-90 than for I-131. The differential dose volume histogram, or spectrum of the relative absorbed dose of cells, was much closer to the average absorbed dose of the tumour for Y-90 than for I-131. The cumulative dose volume histogram showed that most tumour cells received a lower dose than
Energy Technology Data Exchange (ETDEWEB)
Martino, C. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-08-08
An evaluation of previous Chemical Processing Cell (CPC) testing was performed to determine whether the planned concurrent, or “coupled”, operation of the Defense Waste Processing Facility (DWPF) with the Salt Waste Processing Facility (SWPF) has been adequately covered. Tests with the nitric-glycolic acid flowsheet, both coupled and uncoupled with salt waste streams, included several tests that required extended boiling times. This report provides the evaluation of previous testing and the testing recommendation requested by Savannah River Remediation. The focus of the evaluation was the impact on flammability in CPC vessels (i.e., hydrogen generation rate, SWPF solvent components, antifoam degradation products) and processing impacts (i.e., acid window, melter feed target, rheological properties, antifoam requirements, and chemical composition).
Evaluation of Major Online Diabetes Risk Calculators and Computerized Predictive Models.
Stiglic, Gregor; Pajnkihar, Majda
2015-01-01
Classical paper-and-pencil based risk assessment questionnaires are often accompanied by the online versions of the questionnaire to reach a wider population. This study focuses on the loss, especially in risk estimation performance, that can be inflicted by direct transformation from the paper to online versions of risk estimation calculators by ignoring the possibilities of more complex and accurate calculations that can be performed using the online calculators. We empirically compare the risk estimation performance between four major diabetes risk calculators and two, more advanced, predictive models. National Health and Nutrition Examination Survey (NHANES) data from 1999-2012 was used to evaluate the performance of detecting diabetes and pre-diabetes. The American Diabetes Association risk test achieved the best predictive performance in the category of classical paper-and-pencil based tests with an Area Under the ROC Curve (AUC) of 0.699 for undiagnosed diabetes (0.662 for pre-diabetes) and 47% (47% for pre-diabetes) persons selected for screening. Our results demonstrate a significant difference in performance with additional benefits for a lower number of persons selected for screening when statistical methods are used. The best AUC overall was obtained in diabetes risk prediction using logistic regression with AUC of 0.775 (0.734) and an average 34% (48%) persons selected for screening. However, generalized boosted regression models might be a better option from the economical point of view as the number of selected persons for screening of 30% (47%) lies significantly lower for diabetes risk assessment in comparison to logistic regression (p < 0.001), with a significantly higher AUC (p < 0.001) of 0.774 (0.740) for the pre-diabetes group. Our results demonstrate a serious lack of predictive performance in four major online diabetes risk calculators. Therefore, one should take great care and consider optimizing the online versions of questionnaires that were primarily developed as classical paper questionnaires.
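The AUC values compared above can be computed without any ROC-curve plotting via the Mann-Whitney rank statistic: the AUC equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A minimal sketch with made-up scores (not the NHANES data or the study's models):

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic: the
    fraction of positive/negative pairs where the positive scores higher
    (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# a toy risk score: higher values should indicate diabetes (label 1)
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,    0,   0,   0]
a = auc(scores, labels)   # 14 of 15 pairs correctly ordered -> 14/15
```

An AUC of 0.5 corresponds to random guessing and 1.0 to perfect ranking, which is why the 0.699 vs 0.775 difference in the abstract is a meaningful gap.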
The O(N) model: Calculation of the effective potential for arbitrary values of N
International Nuclear Information System (INIS)
Casalbuoni, R.; Castellani, E.; De Curtis, S.; Florence Univ.
1983-01-01
Using the technique of the effective action for composite operators, we present a calculation of the effective potential of the O(N) scalar model for arbitrary values of N. The potential is given as a truncation of a loop expansion, and it reproduces the known results of the N→∞ limit. The potential shows symmetry breaking for ''small'' values of the ''classical fields'', whereas it shows Landau-type singularities in the region of ''large'' values. However, these singularities are clearly an artifact of our approximation and the model is perfectly consistent in the low-energy regime. (orig.)
Directory of Open Access Journals (Sweden)
Riionheimo Janne
2003-01-01
Full Text Available We describe a technique for estimating the control parameters of a plucked string synthesis model using a genetic algorithm. The model has been used intensively for sound synthesis of various string instruments, but fine tuning of its parameters has previously been carried out with a semiautomatic method that requires some hand adjustment and human listening. An automated method for extracting the parameters from recorded tones is described in this paper. The calculation of the fitness function utilizes knowledge of the properties of human hearing.
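The parameter-extraction loop described above can be sketched as a small genetic algorithm: a population of candidate parameter vectors is scored by a fitness function against the recorded tone, and selection, crossover, and mutation refine it. The sketch below uses a plain squared-error fitness and made-up parameter names as stand-ins for the paper's perceptually weighted comparison (it is not the authors' algorithm):

```python
import random

random.seed(1)

def fitness(params, target):
    """Negative squared error between candidate and 'recorded' features
    (a stand-in for a perceptually weighted spectral comparison)."""
    return -sum((p - t)**2 for p, t in zip(params, target))

def evolve(target, pop_size=40, gens=60, mut=0.1):
    dim = len(target)
    pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, target), reverse=True)
        elite = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            child = [random.choice(g) for g in zip(a, b)]   # uniform crossover
            # Gaussian mutation, clipped to the valid parameter range
            child = [min(1.0, max(0.0, g + random.gauss(0, mut))) for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda ind: fitness(ind, target))

# hypothetical "true" decay / pitch / loop-gain parameters to recover
true_params = [0.3, 0.7, 0.5]
best = evolve(true_params)
```

In the real system the fitness would compare synthesized and recorded tones through an auditory model rather than comparing the parameter vectors directly.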
Model for the calculation of pressure loss through heavy fuel oil transfer pipelines
Directory of Open Access Journals (Sweden)
Hector Luis Laurencio-Alfonso
2012-10-01
Full Text Available Considering the limitations of empirical methodologies and correlations in evaluating the simultaneous effects of viscous forces and mixing during the transfer of fluids through pipelines, this article presents functional relationships that describe the pressure variations for non-Newtonian fuel oil flow. The experimental study was based on a characterization of the rheological behavior of fuel oil and on modeling it as a pseudoplastic fluid. The resulting model describes temperature changes, viscous friction effects and the effects of blending between flow layers; it therefore serves as the basis of calculation for the selection, evaluation and rationalization of heavy fuel oil transport by pipelines.
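For a pseudoplastic fluid such as heavy fuel oil, the standard textbook result for laminar pipe flow is the Rabinowitsch-Mooney pressure-drop relation for a power-law fluid; a minimal sketch (the rheological constants below are assumed illustrative values, not this article's fitted model):

```python
def pressure_drop_power_law(L, D, v, K, n):
    """Laminar pressure drop [Pa] of a power-law (pseudoplastic) fluid in a
    circular pipe of length L [m] and diameter D [m] at mean velocity v
    [m/s], from the Rabinowitsch-Mooney relation:
        dP = (4*K*L/D) * ((8*v/D) * (3*n + 1) / (4*n))**n
    K is the consistency index [Pa*s^n] and n the flow behaviour index."""
    return (4.0 * K * L / D) * ((8.0 * v / D) * (3.0 * n + 1.0) / (4.0 * n))**n

# hypothetical heavy fuel oil rheology at some fixed temperature:
# K = 2.0 Pa*s^n, n = 0.9 (mildly pseudoplastic), 1 km of DN200 pipe
dp = pressure_drop_power_law(L=1000.0, D=0.2, v=1.0, K=2.0, n=0.9)
```

A useful sanity check is the Newtonian limit n = 1, K = mu, where the formula reduces exactly to Hagen-Poiseuille, dP = 32*mu*L*v/D².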
Optimal electricity price calculation model for retailers in a deregulated market
International Nuclear Information System (INIS)
Yusta, J.M.; Dominguez-Navarro, J.A.; Ramirez-Rosado, I.J.; Perez-Vidal, J.M.
2005-01-01
Electricity retailing, a new business in deregulated electric power systems, needs efficient tools to optimize its operation. This paper defines a technical-economic model of an electric energy service provider in the environment of the deregulated electricity market in Spain. The model results in an optimization problem for calculating the optimal electric power and energy selling prices that maximize the economic profit obtained by the provider. The problem is applied to different cases, in which the impact on profit of several factors, such as the price strategy, the discount on tariffs and the elasticity of customer demand functions, is studied. (Author)
40 CFR 600.209-08 - Calculation of vehicle-specific 5-cycle fuel economy values for a model type.
2010-07-01
...-cycle fuel economy values for a model type. 600.209-08 Section 600.209-08 Protection of Environment... model type. (a) Base level. 5-cycle fuel economy values for a base level are calculated from vehicle... any model type value is calculated for a label value. (iii) The provisions of this paragraph (a)(3...
Activity-based costing: a practical model for cost calculation in radiotherapy.
Lievens, Yolande; van den Bogaert, Walter; Kesteloot, Katrien
2003-10-01
The activity-based costing method was used to compute radiotherapy costs. This report describes the model developed, the calculated costs, and possible applications for the Leuven radiotherapy department. Activity-based costing is an advanced cost calculation technique that allocates resource costs to products based on activity consumption. In the Leuven model, a complex allocation principle with a large diversity of cost drivers was avoided by introducing an extra allocation step between activity groups and activities. A straightforward principle of time consumption, weighed by some factors of treatment complexity, was used. The model was developed in an iterative way, progressively defining the constituting components (costs, activities, products, and cost drivers). Radiotherapy costs are predominantly determined by personnel and equipment cost. Treatment-related activities consume the greatest proportion of the resource costs, with treatment delivery the most important component. This translates into products that have a prolonged total or daily treatment time being the most costly. The model was also used to illustrate the impact of changes in resource costs and in practice patterns. The presented activity-based costing model is a practical tool to evaluate the actual cost structure of a radiotherapy department and to evaluate possible resource or practice changes.
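The allocation principle described above (resource costs flow to activities, then to products in proportion to complexity-weighted time consumption) can be sketched in a few lines. All figures below are invented for illustration and are not the Leuven department's data:

```python
# illustrative annual resource costs [currency units]
resource_cost = {"personnel": 500_000.0, "equipment": 300_000.0}

# minutes of activity consumed per treatment, plus a complexity weight
treatments = {
    "standard": {"planning": 60,  "delivery": 200, "weight": 1.0},
    "complex":  {"planning": 180, "delivery": 400, "weight": 1.5},
}

def treatment_costs(resource_cost, treatments, volumes):
    """Allocate total resource cost to treatments in proportion to
    complexity-weighted minutes over the whole case mix."""
    total_cost = sum(resource_cost.values())
    load = {
        name: (t["planning"] + t["delivery"]) * t["weight"] * volumes[name]
        for name, t in treatments.items()
    }
    rate = total_cost / sum(load.values())       # cost per weighted minute
    return {
        name: rate * (t["planning"] + t["delivery"]) * t["weight"]
        for name, t in treatments.items()
    }

volumes = {"standard": 900, "complex": 200}
costs = treatment_costs(resource_cost, treatments, volumes)
```

By construction the per-treatment costs multiplied by annual volumes recover the full resource cost, and treatments with prolonged weighted treatment time come out most expensive, consistent with the abstract's conclusion.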
Calculation of electrical potentials on the surface of a realistic head model by finite differences
International Nuclear Information System (INIS)
Lemieux, L.; McBride, A.; Hand, J.W.
1996-01-01
We present a method for the calculation of electrical potentials at the surface of realistic head models from a point dipole generator based on a 3D finite-difference algorithm. The model was validated by comparing calculated values with those obtained algebraically for a three-shell spherical model. For a 1.25 mm cubic grid size, the mean error was 4.9% for a superficial dipole (3.75 mm from the inner surface of the skull) pointing in the radial direction. The effect of generator discretization and node spacing on the accuracy of the model was studied. Three values of the node spacing were considered: 1, 1.25 and 1.5 mm. The mean relative errors were 4.2, 6.3 and 9.3%, respectively. The quality of the approximation of a point dipole by an array of nodes in a spherical neighbourhood did not depend significantly on the number of nodes used. The application of the method to a conduction model derived from MRI data is demonstrated. (author)
The truth is out there: measured, calculated and modelled benthic fluxes.
Pakhomova, Svetlana; Protsenko, Elizaveta
2016-04-01
In modern Earth science, understanding the processes that form benthic fluxes is of great importance, since these fluxes act as element sources or sinks to or from the water body and thus affect the element balance of the water system. There are several ways to assess benthic fluxes, and here we compare the results obtained from chamber experiments, calculated from porewater distributions, and simulated with a model. Benthic fluxes of dissolved elements (oxygen, nitrogen species, phosphate, silicate, alkalinity, iron and manganese species) were studied in the Baltic and Black Seas from 2000 to 2005. Fluxes were measured in situ using chamber incubations (Jch), and at the same time sediment cores were collected to assess the porewater distribution at different depths and calculate diffusive fluxes (Jpw). The model study was carried out with the benthic-pelagic biogeochemical model BROM (an O-N-P-Si-C-S-Mn-Fe redox model), which was applied to simulate the biogeochemical structure of the water column and upper sediment and to assess the vertical fluxes (Jmd). By their behaviour at the water-sediment interface, all studied elements can be divided into three groups: (1) elements whose benthic fluxes are determined by the concentration gradient only (Si, Mn), (2) elements whose fluxes depend on redox conditions in the bottom water (Fe, PO4, NH4), and (3) elements whose fluxes are strongly connected with the fate of organic matter (O2, Alk, NH4). For the first group it was found that measured fluxes are always higher than calculated diffusive fluxes (1.5 […]) […] advantage of a more accurate calculation of diffusive fluxes, especially for redox-dependent elements. Model results showed that vertical fluxes change markedly within 50 cm above the sediment, whereas in chamber experiments they are averaged. As a result, each of the methods has its disadvantages, and the main question facing us is: which value should be taken for calculating the balance? This research is funded by VISTA, a basic research program, and
Directory of Open Access Journals (Sweden)
Bin Liu
2014-01-01
It is pivotal to find an effective mathematical model revealing the galloping mechanism, and it is important to compare the differences between the existing mathematical models of conductor galloping. In this paper, a continuum cable model for transmission lines is proposed using the Hamilton principle. Discrete models with one, two, and three degrees of freedom (DOFs) were derived from the continuum model using the Galerkin method, and the three models were compared by analyzing the galloping vertical amplitude and torsional angle under different influence factors: wind velocity, flow density, span length, damping ratio, and initial tension. The three-DOF model is more accurate at calculating the galloping characteristics than the other two models, but the one-DOF and two-DOF models can also capture the trend of the galloping amplitude from the point of view of qualitative analysis. The change of the galloping amplitude with respect to the main factors was also obtained, which is essential for the anti-galloping design applied in actual engineering.
A pedestal temperature model with self-consistent calculation of safety factor and magnetic shear
International Nuclear Information System (INIS)
Onjun, T; Siriburanon, T; Onjun, O
2008-01-01
A pedestal model based on theory-motivated models for the pedestal width and the pedestal pressure gradient is developed for the temperature at the top of the H-mode pedestal. A pedestal width model based on magnetic shear and flow shear stabilization is used in this study, where the pedestal pressure gradient is assumed to be limited by the first stability limit of the infinite-n ballooning mode. This pedestal model is implemented in the 1.5D BALDUR integrated predictive modeling code, in which the safety factor and magnetic shear are solved self-consistently in both the core and pedestal regions. With this self-consistent approach to calculating the safety factor and magnetic shear, the effect of the bootstrap current can be correctly included in the pedestal model. The pedestal model provides the boundary conditions in the simulations, and the Multi-mode core transport model is used to describe the core transport. This new integrated modeling procedure of the BALDUR code is used to predict the temperature and density profiles of 26 H-mode discharges: 13 discharges in the Joint European Torus and 13 discharges in the DIII-D tokamak. The average root-mean-square deviation between the experimental data and the predicted temperature and density profiles, normalized by their central values, is found to be about 14%.
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock-recruitment model is widely used due to its flexibility and the ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S_MSY) to a fishery, and the harvest corresponding to it (U_MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S_MSY and U_MSY in terms of the productivity and density-dependent parameters of Ricker's model.
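The explicit solution can be written with the Lambert W function. A minimal sketch, assuming the common parameterization R = S·exp(a − b·S) with productivity a and density-dependence b; the pure-Python Newton iteration for W is an illustrative stand-in for a library routine such as `scipy.special.lambertw`:

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch W(x) for x > 0, via Newton iteration."""
    w = math.log1p(x)  # reasonable starting guess for x > 0
    for _ in range(100):
        e = math.exp(w)
        w_next = w - (w * e - x) / (e * (w + 1.0))
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    return w

def ricker_reference_points(a, b):
    """Explicit S_MSY and U_MSY for the Ricker model R = S*exp(a - b*S).
    Setting d/dS [S*exp(a - b*S) - S] = 0 gives (1 - b*S)e^(a - b*S) = 1,
    whose solution is 1 - b*S_MSY = W(e^(1 - a))."""
    w = lambert_w(math.exp(1.0 - a))
    u_msy = 1.0 - w
    s_msy = (1.0 - w) / b
    return s_msy, u_msy

s_msy, u_msy = ricker_reference_points(a=2.0, b=0.001)
```

A quick check: the yield S·exp(a − b·S) − S evaluated at S_MSY should be no smaller than at nearby stock sizes.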
International Nuclear Information System (INIS)
Campolina, D. de A. M.; Lima, C.P.B.; Veloso, M.A.F.
2013-01-01
For every physical component that comprises a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty in a simulation using a Monte Carlo code by sampling the input parameters is recent because of the huge computational effort required. In this work, a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in the input parameters of the reactor included geometry dimensions and densities. The capability of the sampling-based method for burnup was demonstrated when the calculation sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burnup the variance obtained when all parameter uncertainties are considered together is equivalent to the sum of the variances obtained when the parameter uncertainties are sampled separately.
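The Wilks sample-size condition referred to above can be evaluated directly. A sketch, assuming the standard first-order formulas (the abstract does not state which order was used): for a one-sided limit the condition is 1 − β^N ≥ γ, and for a two-sided interval it is 1 − β^N − N(1 − β)β^(N−1) ≥ γ.

```python
def wilks_min_samples(beta=0.95, gamma=0.95, two_sided=True):
    """Smallest Monte Carlo sample size N such that the first-order Wilks
    condition holds: the sample extremes bound a beta-content tolerance
    interval (or one-sided limit) with confidence gamma."""
    n = 1
    while True:
        if two_sided:
            conf = 1.0 - beta**n - n * (1.0 - beta) * beta ** (n - 1)
        else:
            conf = 1.0 - beta**n
        if conf >= gamma:
            return n
        n += 1

n_two_sided = wilks_min_samples()                   # 95/95 two-sided -> 93
n_one_sided = wilks_min_samples(two_sided=False)    # 95/95 one-sided -> 59
```

The well-known 95%/95% results are 59 runs for a one-sided limit and 93 for a two-sided interval, which matches the sample-size optimization described in the abstract.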
GPU-based ultra-fast dose calculation using a finite size pencil beam model
Gu, Xuejun; Choi, Dongju; Men, Chunhua; Pan, Hubert; Majumdar, Amitava; Jiang, Steve B.
2009-10-01
Online adaptive radiation therapy (ART) is an attractive concept that promises the ability to deliver an optimal treatment in response to the inter-fraction variability in patient anatomy. However, it has yet to be realized due to technical limitations. Fast dose deposit coefficient calculation is a critical component of the online planning process that is required for plan optimization of intensity-modulated radiation therapy (IMRT). Computer graphics processing units (GPUs) are well suited to provide the requisite fast performance for the data-parallel nature of dose calculation. In this work, we develop a dose calculation engine based on a finite-size pencil beam (FSPB) algorithm and a GPU parallel computing framework. The developed framework can accommodate any FSPB model. We test our implementation in the case of a water phantom and the case of a prostate cancer patient with varying beamlet and voxel sizes. All testing scenarios achieved speedup ranging from 200 to 400 times when using a NVIDIA Tesla C1060 card in comparison with a 2.27 GHz Intel Xeon CPU. The computational time for calculating dose deposition coefficients for a nine-field prostate IMRT plan with this new framework is less than 1 s. This indicates that the GPU-based FSPB algorithm is well suited for online re-planning for adaptive radiotherapy.
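The core of any FSPB dose engine is a superposition loop, dose(voxel) = Σ over beamlets of weight × kernel(distance), which is data-parallel across voxels; that is what the GPU implementation above parallelizes with one thread per voxel. The serial toy sketch below uses a hypothetical Gaussian lateral kernel, not the authors' fitted FSPB kernel or their CUDA code:

```python
import math

def gaussian_kernel(r, sigma=0.5):
    """Toy lateral dose kernel (distances in cm). A real FSPB kernel is
    fitted to measured or Monte Carlo pencil-beam dose data."""
    return math.exp(-0.5 * (r / sigma) ** 2)

def superpose_dose(voxels, beamlets):
    """dose[i] = sum_j w_j * k(|x_i - x_j|): the data-parallel inner loop
    that a GPU evaluates with one thread per voxel."""
    dose = []
    for (vx, vy) in voxels:
        d = 0.0
        for (bx, by, w) in beamlets:
            d += w * gaussian_kernel(math.hypot(vx - bx, vy - by))
        dose.append(d)
    return dose

# A row of voxels across two equally weighted beamlets at x = -1 and +1 cm
voxels = [(0.2 * i - 3.0, 0.0) for i in range(31)]
beamlets = [(-1.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
dose = superpose_dose(voxels, beamlets)
```

Because each voxel's sum is independent, the speedups reported in the abstract come essentially for free once this loop is mapped onto GPU threads.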
Development of sump model for containment hydrogen distribution calculations using CFD code
Energy Technology Data Exchange (ETDEWEB)
Ravva, Srinivasa Rao, E-mail: srini@aerb.gov.in [Indian Institute of Technology-Bombay, Mumbai (India); Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India); Iyer, Kannan N. [Indian Institute of Technology-Bombay, Mumbai (India); Gaikwad, A.J. [Nuclear Safety Analysis Division, Atomic Energy Regulatory Board, Mumbai (India)
2015-12-15
Highlights: • Sump evaporation model was implemented in FLUENT using three different approaches. • The implemented sump evaporation models were validated against the TOSQAN facility. • Predictions were found to be in good agreement with the data. • A diffusion-based model would be able to predict both condensation and evaporation. - Abstract: Computational Fluid Dynamics (CFD) simulations are necessary for obtaining accurate predictions of local behaviour in containment hydrogen distribution studies. However, commercially available CFD codes do not have all the models necessary for hydrogen distribution analysis. One such model is the sump, or suppression pool, evaporation model. The water in the sump may evaporate during accident progression and affect the mixture concentrations in the containment; hence, it is imperative to study sump evaporation and its effect. Sump evaporation is modelled using three different approaches in the present work. The first approach calculates the evaporation flow rate and sump liquid temperature and supplies these quantities as boundary conditions through user-defined functions; the mean values of the domain are used. In the second approach, the mass, momentum, energy and species sources arising from sump evaporation are added to the domain through user-defined functions, using the cell values adjacent to the sump interface; heat transfer between gas and liquid is calculated automatically by the code itself. In these two approaches, the evaporation rate is computed from an experimental correlation. In the third approach, the evaporation rate is estimated directly using a diffusion approximation. The performance of these three models is compared against the sump behaviour experiment conducted in the TOSQAN facility.
International Nuclear Information System (INIS)
2002-01-01
Calculations with the linear-quadratic model for medium dose rate using the dose-effect equation. Several calculations are presented: for systems combining low-dose-rate brachytherapy with teletherapy, for medium-dose-rate brachytherapy combined with teletherapy, and for the dose per fraction and the number of fractions at medium dose rate.
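For reference, the fraction-size and fraction-number arithmetic in such linear-quadratic calculations rests on the standard biologically effective dose, BED = n·d·(1 + d/(α/β)). The sketch below uses illustrative numbers and omits the dose-rate (protraction) correction factor that a full medium-dose-rate calculation would include:

```python
import math

def bed_fractionated(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy (LQ model,
    full repair between fractions, no dose-rate correction)."""
    return n * d * (1.0 + d / alpha_beta)

def isoeffective_fractions(bed_target, d, alpha_beta):
    """Smallest number of fractions of size d reaching the target BED."""
    per_fraction = bed_fractionated(1, d, alpha_beta)
    return math.ceil(bed_target / per_fraction)

# Illustrative example: match the BED of 30 x 2 Gy (alpha/beta = 10 Gy)
# with 4 Gy fractions.
bed_ref = bed_fractionated(30, 2.0, 10.0)        # 72 Gy_10
n4 = isoeffective_fractions(bed_ref, 4.0, 10.0)  # fewer, larger fractions
```

Combining brachytherapy and teletherapy components, as in the abstract, amounts to summing the BED contributions of each component before solving for the remaining dose per fraction or number of fractions.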
Espel, Federico Puente
The main objective of this PhD research is to develop a high accuracy modeling tool using a Monte Carlo based coupled system. The presented research comprises the development of models to include the thermal-hydraulic feedback to the Monte Carlo method and speed-up mechanisms to accelerate the Monte Carlo criticality calculation. Presently, deterministic codes based on the diffusion approximation of the Boltzmann transport equation, coupled with channel-based (or sub-channel based) thermal-hydraulic codes, carry out the three-dimensional (3-D) reactor core calculations of the Light Water Reactors (LWRs). These deterministic codes utilize nuclear homogenized data (normally over large spatial zones, consisting of fuel assembly or parts of fuel assembly, and in the best case, over small spatial zones, consisting of pin cell), which is functionalized in terms of thermal-hydraulic feedback parameters (in the form of off-line pre-generated cross-section libraries). High accuracy modeling is required for advanced nuclear reactor core designs that present increased geometry complexity and material heterogeneity. Such high-fidelity methods take advantage of the recent progress in computation technology and coupled neutron transport solutions with thermal-hydraulic feedback models on pin or even on sub-pin level (in terms of spatial scale). The continuous energy Monte Carlo method is well suited for solving such core environments with the detailed representation of the complicated 3-D problem. The major advantages of the Monte Carlo method over the deterministic methods are the continuous energy treatment and the exact 3-D geometry modeling. However, the Monte Carlo method involves vast computational time. The interest in Monte Carlo methods has increased thanks to the improvements of the capabilities of high performance computers. Coupled Monte-Carlo calculations can serve as reference solutions for verifying high-fidelity coupled deterministic neutron transport methods
MODELING AND CALCULATION OF FLOW AMPLIFIER PARAMETERS IN STEERING OF HEAVY TRUCKS
Directory of Open Access Journals (Sweden)
V. P. Avtushko
2008-01-01
The paper analyzes prospects for the development of methods for the dynamic calculation of hydraulic control units with various types of couplings. A calculation diagram of a steering hydraulic drive with a flow amplifier and a turning cylinder is given, and its dynamic model has been developed. The hydraulic drive is considered as a system with lumped parameters. It is assumed that the properties of the working fluid are unchanged during the transient process; leakage and cavitation do not occur; the fluid is compressible; and the resistance of the service drain line is taken into account. The model accounts for the resistance of the manifolds and internal channels of the flow amplifier, the hydrodynamic forces acting on the amplifier control valves, and the friction forces of the movable elements. A multi-variant dynamic calculation has been performed and some results of the investigations are presented in the paper. The paper also contains an analysis showing the influence of various design and component parameters of the flow amplifier on the drive dynamics.
International Nuclear Information System (INIS)
Ma, Ting; Zhang, Zhongjie
2014-01-01
Irregular surface topography has revolutionized how seismic traveltime is calculated and the data are processed. There are two main schemes for dealing with an irregular surface in the seismic first-arrival traveltime calculation: (1) expanding the model and (2) flattening the surface irregularities. In the first scheme, a notional infill medium is added above the surface to expand the physical space into a regular space, as required by the eikonal equation solver. Here, we evaluate the chosen propagation velocity in the infill medium through ray path tracking with the eikonal equation-solved traveltime field, and observe that the ray paths will be physically unrealistic for some values of this propagation velocity. The choice of a suitable propagation velocity in the infill medium is crucial for seismic processing of irregular topography. Our model expansion criterion for dealing with surface topography in the calculation of traveltime and ray paths using the eikonal equation highlights the importance of both the propagation velocity of the infill physical medium and the topography gradient. (paper)
A model for calculating the optimal replacement interval of computer systems
International Nuclear Information System (INIS)
Fujii, Minoru; Asai, Kiyoshi
1981-08-01
A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical replacement interval when the computing demand, and the cost and performance of the computer, etc. are known. The computing demand is assumed to increase monotonically every year. Four model variants are described. In model 1, the computer system is represented only by a central processing unit (CPU), and all the computing demand must be processed on the present computer until the next replacement. In model 2, excess demand is admitted and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, but excess demand may be processed at another center. (1) Computing demand at JAERI, (2) conformity of Grosch's law for recent computers, and (3) replacement costs of computer systems, etc. are also described. (author)
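The trade-off these models formalize can be sketched with hypothetical cost functions (not the report's data): a fixed replacement cost is amortized over the interval, while annual operating cost grows as the monotonically increasing demand outstrips the machine.

```python
# Hypothetical sketch of the replacement-interval trade-off: all cost
# figures below are illustrative, not from the JAERI report.

REPLACEMENT_COST = 100.0           # one-time cost of a new system

def operating_cost(year):
    """Annual cost, growing as demand outstrips the aging machine."""
    return 10.0 + 5.0 * year

def average_annual_cost(interval):
    total = REPLACEMENT_COST + sum(operating_cost(y) for y in range(interval))
    return total / interval

# The optimal interval minimizes the average annual cost.
best = min(range(1, 21), key=average_annual_cost)
```

With these numbers the average annual cost is 100/n + 10 + 2.5(n − 1), minimized near n = √40 ≈ 6.3 years, so the discrete optimum is a 6-year interval; the report's models refine this with explicit demand, performance and transfer-cost terms.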
Dias, Edgard; Ebdon, James; Taylor, Huw
2015-01-01
This study proposes that calculating and interpreting removal coefficients (K20) for bacteriophages in activated sludge (AS) and trickling filter (TF) systems using stochastic modelling may provide important information that may be used to estimate the removal of phages in such systems using simplified models. In order to achieve this, 14 samples of settled wastewater and post-secondary sedimentation wastewater were collected every 2 weeks, over a 6-month period (May to November), from two AS and two TF systems situated in southern England. Initial results have demonstrated that the removal of somatic coliphages in both AS and TF systems is considerably higher than that of F-RNA coliphages, and that AS more effectively removes both phage groups than TF. The results have also demonstrated that K20 values for phages in AS are higher than in TF, which could be justified by the higher removal rates observed in AS and the models assumed for both systems. The research provides a suggested framework for calculating and predicting removal rates of pathogens and indicator organisms in wastewater treatment systems using simplified models in order to support integrated water and sanitation safety planning approaches to human health risk management.
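The study derives K20 values by stochastic model fitting; as a simpler deterministic illustration of what a removal coefficient normalized to 20 °C means, the sketch below assumes first-order decay C_out = C_in·exp(−k·t) and a typical Arrhenius-type temperature correction θ ≈ 1.07 (both assumptions of this sketch, not parameters reported in the study):

```python
import math

def k20_from_removal(c_in, c_out, hrt_days, temp_c, theta=1.07):
    """Back-calculate a first-order removal coefficient normalized to
    20 degC from influent/effluent phage counts, assuming simple
    first-order decay over the hydraulic residence time."""
    k_t = math.log(c_in / c_out) / hrt_days
    return k_t / theta ** (temp_c - 20.0)

def predicted_log_removal(k20, hrt_days, temp_c, theta=1.07):
    """Predicted log10 removal at another temperature/residence time."""
    k_t = k20 * theta ** (temp_c - 20.0)
    return k_t * hrt_days / math.log(10.0)

# 2-log phage removal observed over a 0.5 d residence time at 15 degC
k20 = k20_from_removal(c_in=1e6, c_out=1e4, hrt_days=0.5, temp_c=15.0)
```

The round trip (back-calculating K20 and then predicting removal under the same conditions) should reproduce the observed 2-log removal, which is the basic consistency check for such simplified models.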
International Nuclear Information System (INIS)
Madni, I.K.; Cazzoli, E.G.; Khatib-Rahbar, M.
1995-01-01
During certain hypothetical severe accidents in a nuclear power plant, radionuclides could be released to the environment as a plume. Prediction of the atmospheric dispersion and transport of these radionuclides is important for assessment of the risk to the public from such accidents. A simplified PC-based model was developed that predicts time-integrated air concentration of each radionuclide at any location from release as a function of time integrated source strength using the Gaussian plume model. The solution procedure involves direct analytic integration of air concentration equations over time and position, using simplified meteorology. The formulation allows for dry and wet deposition, radioactive decay and daughter buildup, reactor building wake effects, the inversion lid effect, plume rise due to buoyancy or momentum, release duration, and grass height. Based on air and ground concentrations of the radionuclides, the early dose to an individual is calculated via cloudshine, groundshine, and inhalation. The model also calculates early health effects based on the doses. This paper presents aspects of the model that would be of interest to the prediction of environmental flows and their public consequences
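The Gaussian plume relation at the heart of such models can be sketched as follows. The dispersion parameters σ_y and σ_z are supplied directly here; a real code evaluates them from downwind distance and atmospheric stability class, and adds the decay, deposition and wake terms listed above.

```python
import math

def plume_concentration(q, u, sigma_y, sigma_z, y, z, h):
    """Ground-reflected Gaussian plume concentration per unit release rate
    q, wind speed u, at crosswind offset y and height z, for an effective
    release height h. The second vertical term is the ground reflection."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline value for a ground-level release (y = z = h = 0):
chi = plume_concentration(q=1.0, u=5.0, sigma_y=100.0, sigma_z=50.0,
                          y=0.0, z=0.0, h=0.0)
```

Time-integrated air concentrations, as used in the abstract, follow by integrating this expression over the release duration; doses then come from multiplying the air and deposited-ground concentrations by cloudshine, groundshine and inhalation dose factors.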
International Nuclear Information System (INIS)
Berge, Leonie
2015-01-01
The prompt fission neutron spectrum (PFNS) is very important for various nuclear physics applications. Yet, except for the 252Cf spontaneous fission spectrum, which is an international standard used for metrology purposes, the PFNS is still poorly known for most fissioning nuclides. In particular, few measurements exist for the fast fission spectrum (induced by a neutron whose energy exceeds about 100 keV), and the international evaluations show strong discrepancies. There are also very few data on the covariances associated with the various PFNS evaluations. In this work we present three aspects of the PFNS evaluation. The first aspect concerns spectrum modeling with the FIFRELIN code, developed at CEA Cadarache, which simulates fission fragment de-excitation by successive emissions of prompt neutrons and gammas via the Monte Carlo method. This code aims at calculating all fission observables in a single consistent calculation, starting from the fission fragment distributions (mass, kinetic energy and spin). FIFRELIN is therefore more predictive than the analytical models used to describe the spectrum. A study of the model parameters that impact the spectrum, such as the fragment level density parameter, is presented in order to better reproduce the spectrum. The second aspect of this work is the evaluation of the PFNS and its covariance matrix. We present a methodology to produce this evaluation in a rigorous way with the CONRAD code, developed at CEA Cadarache. This involves modeling the spectrum with simple models, such as the Madland-Nix model, which is the most commonly used in evaluations, by adjusting the model parameters to reproduce experimental data. The covariance matrix arises from the rigorous propagation of the sources of uncertainty involved in the calculation. In particular, the systematic uncertainties arising from the experimental set-up are propagated via a marginalization technique. The marginalization allows propagating
An equivalent circuit model and power calculations for the APS SPX crab cavities.
Energy Technology Data Exchange (ETDEWEB)
Berenc, T. (Accelerator Systems Division (APS))
2012-03-21
An equivalent parallel resistor-inductor-capacitor (RLC) circuit with beam loading for a polarized TM110 dipole-mode cavity is developed, and minimum radio-frequency (rf) generator requirements are calculated for the Advanced Photon Source (APS) short-pulse x-ray (SPX) superconducting rf (SRF) crab cavities. A beam-loaded circuit model for polarized TM110-mode crab cavities was derived. The single-cavity minimum steady-state required generator power has been determined for the APS SPX crab cavities for a 200 mA DC storage ring current, as a function of external Q, for various vertical offsets, including beam tilt and uncontrollable detuning. Calculations to aid machine protection considerations are also given.
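The lumped-element stand-in for a single cavity mode can be sketched as follows. The component values are illustrative only, not the SPX cavity parameters, and beam loading and external coupling are omitted; the sketch shows only the resonance quantities that such an equivalent circuit provides.

```python
import math

def rlc_parallel(r, l, c):
    """Resonant frequency, quality factor and half-power bandwidth of a
    parallel RLC circuit modeling one cavity mode. For a parallel RLC,
    Q = R * sqrt(C / L) and the -3 dB bandwidth is f0 / Q."""
    w0 = 1.0 / math.sqrt(l * c)        # resonant angular frequency
    f0 = w0 / (2.0 * math.pi)
    q = r * math.sqrt(c / l)
    bandwidth = f0 / q
    return f0, q, bandwidth

# Illustrative values (NOT the SPX cavity): R = 1 Mohm, L = 1 nH, C = 1 pF
f0, q, bw = rlc_parallel(r=1.0e6, l=1.0e-9, c=1.0e-12)
```

In the full beam-loaded model, the beam appears as an additional current source across this circuit, and the generator power requirement follows from driving the loaded circuit to the required deflecting voltage at a given external Q and detuning.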
Models optimization for the pressure drop calculation in two-phase flow cooled bundle
International Nuclear Information System (INIS)
Ladeira, L.C.D.; Rezende, H.C.
1994-01-01
The analysis of two-phase tests, performed in a mock-up of a nuclear fuel element to verify the applicability of existing calculation models for determining the pressure drop, is presented. The tests were performed at Reynolds numbers in the range from 4 × 10⁴ to 1.6 × 10⁵, with heat fluxes up to 105 W/cm², and at three different pressure levels: 2.0, 6.0 and 10.0 bar. The test results were used to optimize the bubble detachment point in two correlations (Bowring and Lellouche-Zolotar) in order to obtain the subcooled void fraction. Comparison between measured and calculated results showed that the pressure drop, in 96% of the tests, was reproduced within ±16% when using the Bowring correlation, and within ±9% with the Lellouche-Zolotar correlation. (author)
Calculation of hydrogen outgassing rate of LHD by recombination limited model
International Nuclear Information System (INIS)
Akaishi, K.; Nakasuga, M.
2002-04-01
To simulate hydrogen outgassing in the plasma vacuum vessel of LHD, the recombination limited model is presented, where the time evolution of hydrogen concentration in the wall of the plasma vacuum vessel is described by a one-dimensional diffusion equation. The hydrogen outgassing rates when the plasma vacuum vessel is pumped down at room temperature and baked at 100 °C are calculated as a function of pumping time. The calculation shows that the hydrogen outgassing rate of the plasma vacuum vessel can be reduced at least by one order of magnitude due to pumping and baking. This prediction is consistent with the recent result of outgassing reduction observed in the pumping-down and baking of the plasma vacuum vessel in LHD. (author)
Raman Spectroscopy and Ab-Initio Model Calculations on Ionic Liquids
DEFF Research Database (Denmark)
Berg, Rolf W.
2007-01-01
A review of recent developments in the study and understanding of room-temperature ionic liquids is given. An intimate picture of how and why these liquids are not crystals at ambient conditions is attempted, based on evidence from crystallographic results combined with vibrational … that the structural resolving power of Raman spectroscopy will be appreciated by the reader. It is of remarkable use on crystals of known different conformations and on the corresponding liquids, especially in combination with modern quantum mechanics calculations. It is hoped that these interdisciplinary methods … spectroscopy and ab-initio molecular orbital calculations. A discussion is given, based mainly on some recent FT-Raman spectroscopic results on the model ionic liquid system of 1-butyl-3-methylimidazolium ([C4mim][X]) salts. The rotational isomerism of the [C4mim]+ cation is described: the presence of anti…
Theoretical modeling of zircon's crystal morphology according to data of atomistic calculations
Gromalova, Natalia; Nikishaeva, Nadezhda; Eremin, Nikolay
2017-04-01
Zircon is an essential mineral used in U-Pb dating. Moreover, zircon is highly resistant to radiation damage, and it is of great interest for solving both fundamental and applied problems associated with the isolation of high-level radioactive waste. There has been significant progress in forecasting the most energetically favorable crystal structures. Unfortunately, the theoretical forecasting of crystal morphology remains under-explored, even though the estimation of the equilibrium crystal habit is extremely important in studying the physical and chemical properties of new materials. The thesis relating the equilibrium shape of a crystal to its crystal structure was first put forward in the works of O. Bravais. According to it, the idealized habit is determined, in the simplest case, by the reticular densities R_hkl of the individual faces. This approach, along with all subsequent corrections, does not take into account the nature of the atoms and the specific features of the chemical bonding in crystals. Atomistic calculations of crystal surfaces are commonly performed using the energetic characteristics of the faces, namely the surface energy (Esurf), which is a measure of the thermodynamic stability of the crystal face. Stable crystal faces are characterized by small positive values of Esurf. As we know from our previous research (Gromalova et al., 2015), one of the constitutive factors affecting the value of the surface energy in such calculations is the choice of the potential model. In this regard, we studied several sets of parameters of interatomic potentials optimized previously. The first test model («Zircon 1») used sets of interatomic potentials for the Zr-O, Si-O and O-O interactions in the Buckingham form. To improve the reproduction of zircon's properties, a Morse potential was additionally used for the Zr-Si pair, as well as a three-body angular harmonic
Spatial Resolution Effect on Forest Road Gradient Calculation and Erosion Modelling
Cao, L.; Elliot, W.
2017-12-01
Road erosion is one of the main sediment sources in a forest watershed and should be properly evaluated. With the help of GIS technology, road topography can be determined and soil loss can be predicted at a watershed scale. As a vector geographical feature, the road gradient should be calculated following the road direction rather than the hillslope direction. This calculation can be difficult with a coarse (30-m) DEM, which only provides the underlying topographic information. This study was designed to explore the effect of road segmentation and DEM resolution on road gradient calculation and erosion prediction at a watershed scale. The Water Erosion Prediction Project (WEPP) model was run on road segments of 9 lengths ranging from 40 m to 200 m. Road gradient was calculated from three DEM data sets: 1-m LiDAR, and 10-m and 30-m USGS DEMs. The 1-m LiDAR DEM gradients were very close to the field-observed road gradients, so we assumed the 1-m LiDAR DEM predicted the true road gradient. The results revealed that longer road segments skipped detailed topographical undulations and resulted in lower road gradients. Coarser DEMs computed steeper road gradients because larger grid cells covered more adjacent area outside the road, resulting in larger elevation differences. Field survey results also revealed that a coarser DEM may produce more gradient deviation in a curved road segment when it passes through a convex or concave slope. As road segment length increased, the gradient difference between the three DEMs was reduced; there were no significant differences between road gradients of different segment lengths and DEM resolutions when segments were longer than 100 m. For long segments, the 10-m DEM road gradient was similar to the 1-m LiDAR gradient. When evaluating the effects of road segment length, the predicted erosion rate decreased with increasing length when the road gradient was less than 3%. In cases where road gradients exceed 3% and rill erosion dominates
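The along-road gradient calculation described above reduces to elevation change divided by along-road distance between segment endpoints. A small sketch with hypothetical profile values shows how a longer segment averages away an undulation that a shorter segment resolves:

```python
def segment_gradient(profile):
    """Average road gradient (%) over a segment, following the road
    direction. `profile` is a list of (distance_along_road_m, elevation_m)
    points, e.g. sampled from a DEM along the road centerline."""
    (d0, z0), (d1, z1) = profile[0], profile[-1]
    return 100.0 * (z1 - z0) / (d1 - d0)

# Hypothetical profile: the road climbs 3 m in the first 50 m, then
# drops 2 m in the next 50 m.
fine = [(0.0, 100.0), (50.0, 103.0), (100.0, 101.0)]

grad_long = segment_gradient(fine)        # 0 -> 100 m: 1.0 % average
grad_first_half = segment_gradient(fine[:2])  # 0 -> 50 m: 6.0 %
```

This is exactly the effect reported in the abstract: the 100 m segment yields a 1% gradient while the 50 m sub-segment yields 6%, so longer segments systematically report lower gradients on undulating roads.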
Calculating the renormalisation group equations of a SUSY model with Susyno
Fonseca, Renato M.
2012-10-01
Susyno is a Mathematica package dedicated to the computation of the 2-loop renormalisation group equations of a supersymmetric model based on any gauge group (the only exception being multiple U(1) groups) and for any field content. Program summary Program title: Susyno Catalogue identifier: AEMX_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 30829 No. of bytes in distributed program, including test data, etc.: 650170 Distribution format: tar.gz Programming language: Mathematica 7 or higher. Computer: All systems that Mathematica 7+ is available for (PC, Mac). Operating system: Any platform supporting Mathematica 7+ (Windows, Linux, Mac OS). Classification: 4.2, 5, 11.1. Nature of problem: Calculating the renormalisation group equations of a supersymmetric model involves using long and complicated general formulae [1, 2]. In addition, to apply them it is necessary to know the Lagrangian in its full form. Building the complete Lagrangian of models with small representations of SU(2) and SU(3) might be easy but in the general case of arbitrary representations of an arbitrary gauge group, this task can be hard, lengthy and error prone. Solution method: The Susyno package uses group theoretical functions to calculate the super-potential and the soft-SUSY-breaking Lagrangian of a supersymmetric model, and calculates the two-loop RGEs of the model using the general equations of [1, 2]. Susyno works for models based on any representation(s) of any gauge group (the only exception being multiple U(1) groups). Restrictions: As the program is based on the formalism of [1, 2], it shares its limitations. Running time can also be a significant restriction, in particular for models with many fields. Unusual features
Khan, Stuart J; Ongerth, Jerry E
2004-01-01
A conceptual model is presented for determining which currently prescribed pharmaceutical compounds are most likely to be found in sewage, and for estimating their concentrations, both in raw sewage and after successive stages of secondary sewage treatment. A ranking of the "top-50" pharmaceutical compounds (by total mass dispensed) in Australia over the 1998 calendar year was prepared. Information on the excretion ratios and some metabolites of the pharmaceuticals enabled prediction of the overall rates of excretion into Australian sewage. Mass-balance and fugacity modelling, applied to sewage generation and to a sewage treatment plant, allowed calculation of predicted concentrations of the compounds in raw, primary and secondary treated sewage effluents. Twenty-nine of the modelled pharmaceutical residuals were predicted to be present in raw sewage influent at concentrations of 1 µg/L or greater. Twenty of the compounds were predicted to remain in secondary effluent at concentrations of 1 µg/L or greater.
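The screening step described above (dispensed mass, times excreted fraction, over total sewage volume) can be sketched as a simple back-of-the-envelope calculation. All numerical inputs below are hypothetical illustrations, not values from the study:

```python
def predicted_influent_ugL(annual_mass_kg, excretion_fraction,
                           population, sewage_L_per_person_day):
    """Predicted raw-sewage concentration (µg/L) of a pharmaceutical,
    from annual dispensed mass and the fraction excreted to sewer."""
    excreted_ug = annual_mass_kg * excretion_fraction * 1e9   # kg -> µg
    annual_sewage_L = population * sewage_L_per_person_day * 365.0
    return excreted_ug / annual_sewage_L

# Hypothetical compound: 10,000 kg/yr dispensed, 50% excreted unchanged,
# population of 19 million producing 250 L of sewage per person per day.
c = predicted_influent_ugL(10_000, 0.5, 19_000_000, 250)
```

Compounds whose predicted influent concentration exceeds 1 µg/L would pass the screening threshold used in the abstract.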
International Nuclear Information System (INIS)
Moring, M.; Markkula, M.L.
1997-03-01
The external doses under various radioactive deposition conditions are assessed, and the efficiencies of some simple decontamination techniques (grass cutting, vacuum sweeping, hosing of paved surfaces and roofs, and felling trees) are compared in this study. The model has been constructed for Finnish conditions and housing areas, using 137Cs transfer data from Nordic and Central European studies and models. The compartment model describes the behaviour and decontamination of 137Cs in the urban environment under summer conditions. Doses to man have been calculated for wet (light rain) and dry deposition in four typical Finnish building areas: single-family wooden houses, brick terraced houses, blocks of flats and urban office buildings. (26 refs.)
Macroscopic calculational model of fission gas release from water reactor fuels
International Nuclear Information System (INIS)
Uchida, Masaki
1993-01-01
Existing models for estimating the fission gas release rate usually take fuel temperature as the independent variable. Using fuel temperature, however, often introduces excess ambiguity into the estimate, because it is not a rigorously definable quantity as a function of heat generation rate and burnup. To derive a mathematical model that gives the gas release rate explicitly as a function of design and operational parameters, the Booth-type diffusional model was modified by changing the character of the diffusion constant from a physically meaningful quantity into a purely mathematical parameter, and by changing its temperature dependency into a power dependency. With a proper choice of the arbitrary constants, the derived formula was found to satisfactorily predict release rates under a variety of irradiation histories up to a burnup of 60,000 MWd/t. For simple power histories, the equation can be solved analytically by defining several transcendental functions, which enables simple calculation of the release rate using graphs. (author)
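The classical Booth solution that this abstract modifies can be sketched as follows. The replacement of the temperature dependency by a power dependency is illustrated with an arbitrary power law; the constants `a` and `n` are hypothetical placeholders, not the fitted values from the paper:

```python
import math

def booth_release_fraction(Dp_t):
    """Booth diffusional model for fractional fission-gas release,
    where Dp_t = D' * t is the dimensionless effective diffusion
    parameter (D' absorbs the equivalent-sphere radius)."""
    if Dp_t < 0.1:
        # short-time approximation of the Booth solution
        return 6.0 * math.sqrt(Dp_t / math.pi) - 3.0 * Dp_t
    # long-time (single-exponential) form
    return 1.0 - (6.0 / math.pi**2) * math.exp(-math.pi**2 * Dp_t)

def D_eff(q_linear_power, a=1e-6, n=2.0):
    """Hypothetical power dependency D' = a * q**n replacing the usual
    Arrhenius temperature dependency, in the spirit of the abstract."""
    return a * q_linear_power**n

f = booth_release_fraction(D_eff(30.0) * 1.0)  # q = 30 kW/m, t = 1 (arb. units)
```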
Dang, Qianyu; Mazumdar, Sati; Houck, Patricia R
2008-08-01
The generalized linear mixed model (GLIMMIX) provides a powerful technique to model correlated outcomes with different types of distributions. The model can now be easily implemented with SAS PROC GLIMMIX in version 9.1. For binary outcomes, linearization methods of penalized quasi-likelihood (PQL) or marginal quasi-likelihood (MQL) provide relatively accurate variance estimates for fixed effects. Using GLIMMIX based on these linearization methods, we derived formulas for power and sample size calculations for longitudinal designs with attrition over time. We found that the power and sample size estimates depend on the within-subject correlation and the size of random effects. In this article, we present tables of minimum sample sizes commonly used to test hypotheses for longitudinal studies. A simulation study was used to compare the results. We also provide a Web link to the SAS macro that we developed to compute power and sample sizes for correlated binary outcomes.
Development of a transient calculation model for a closed sodium natural circulation loop
International Nuclear Information System (INIS)
Chang, Won Pyo; Ha, Kwi Seok; Jeong, Hae Yong; Heo, Sun; Lee, Yong Bum
2003-09-01
A natural circulation loop has usually been adopted for a Liquid Metal Reactor (LMR) because of its high reliability. Up-rating the current KALIMER capacity requires an additional PDRC, beyond the existing PVCS, to remove decay heat under accident conditions. As the system analysis code currently used for LMRs in Korea cannot simulate a closed natural circulation loop in stand-alone mode, it cannot be applied to the PDRC. To overcome this limitation, a steady-state calculation model was developed during the first phase, and development of the transient model has subsequently been carried out to complete the present study. The developed model will then be coupled with the system analysis code SSC-K to assess long-term cooling for the new conceptual design. The assumption that sodium is incompressible, which allows the circuit to be modeled as a single loop flow, greatly simplifies the model compared with LWR analysis. Some thermal-hydraulic models developed during this study can be effectively applied to other system analysis codes that require such component models, and the present development will also contribute to the establishment of a code system for LMR analysis
Set of molecular models based on quantum mechanical ab initio calculations and thermodynamic data.
Eckl, Bernhard; Vrabec, Jadran; Hasse, Hans
2008-10-09
A parametrization strategy for molecular models on the basis of force fields is proposed, which allows a rapid development of models for small molecules by using results from quantum mechanical (QM) ab initio calculations and thermodynamic data. The geometry of the molecular models is specified according to the atom positions determined by QM energy minimization. The electrostatic interactions are modeled by reducing the electron density distribution to point dipoles and point quadrupoles located in the center of mass of the molecules. Dispersive and repulsive interactions are described by Lennard-Jones sites, for which the parameters are iteratively optimized to experimental vapor-liquid equilibrium (VLE) data, i.e., vapor pressure, saturated liquid density, and enthalpy of vaporization of the considered substance. The proposed modeling strategy was applied to a sample set of ten molecules from different substance classes. New molecular models are presented for iso-butane, cyclohexane, formaldehyde, dimethyl ether, sulfur dioxide, dimethyl sulfide, thiophene, hydrogen cyanide, acetonitrile, and nitromethane. Most of the models are able to describe the experimental VLE data with deviations of a few percent.
Calculation of the Initial Magnetic Field for Mercury's Magnetosphere Hybrid Model
Alexeev, Igor; Parunakian, David; Dyadechkin, Sergey; Belenkaya, Elena; Khodachenko, Maxim; Kallio, Esa; Alho, Markku
2018-03-01
Several types of numerical models are used to analyze the interaction of the solar wind flow with Mercury's magnetosphere, including kinetic models that determine magnetic and electric fields based on the spatial distribution of charges and currents, magnetohydrodynamic models that describe plasma as a conductive liquid, and hybrid models that describe ions kinetically in collisionless mode and represent electrons as a massless neutralizing liquid. The structure of the resulting solutions is determined not only by the chosen set of equations that govern the behavior of the plasma, but also by the initial and boundary conditions; i.e., their effects are not limited to the amount of computational work required to achieve a quasi-stationary solution. In this work, we propose using the magnetic field computed by the paraboloid model of Mercury's magnetosphere as the initial condition for subsequent hybrid modeling. The results of the model have been compared to measurements performed by the MESSENGER spacecraft during a single crossing of the magnetosheath and the magnetosphere. The selected orbit lies in the terminator plane, which allows us to observe two crossings of the bow shock and the magnetopause. In our calculations, we defined the initial parameters of the global magnetospheric current systems so as to minimize the deviation of the paraboloid magnetic field from the experimental data along the MESSENGER trajectory. We show that the optimal initial field parameters include allowing partial penetration of the interplanetary magnetic field into the magnetosphere, with a penetration coefficient of 0.2.
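The parameter-fitting step described above (choosing the penetration coefficient that minimizes the model-data deviation along the trajectory) can be sketched generically. The "model" below is a deliberately simplified stand-in, not the actual paraboloid magnetosphere model, and all numbers are synthetic:

```python
import math

def rms(model, measured):
    """Root-mean-square deviation between two field samples."""
    return math.sqrt(sum((m - d) ** 2 for m, d in zip(model, measured))
                     / len(measured))

def model_field(radii, k_penetration, b_imf=10.0):
    """Toy field along a trajectory: dipole-like internal term plus a
    penetrated fraction of the interplanetary magnetic field (IMF)."""
    return [100.0 / r**3 + k_penetration * b_imf for r in radii]

radii = [1.2, 1.5, 2.0, 2.5]              # hypothetical radial distances
measured = model_field(radii, 0.2)        # synthetic "data", k = 0.2

# Grid search for the penetration coefficient minimizing the RMS deviation.
best_k = min((k / 100 for k in range(101)),
             key=lambda k: rms(model_field(radii, k), measured))
```

With real data one would replace the grid search by a proper least-squares fit, but the structure of the optimization is the same.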
Cloud-based calculators for fast and reliable access to NOAA's geomagnetic field models
Woods, A.; Nair, M. C.; Boneh, N.; Chulliat, A.
2017-12-01
While the Global Positioning System (GPS) provides accurate point locations, it does not provide pointing directions. Therefore, the absolute directional information provided by the Earth's magnetic field is of primary importance for navigation and for the pointing of technical devices such as aircraft, satellites and, lately, mobile phones. The major magnetic sources that affect compass-based navigation are the Earth's core, its magnetized crust, and the electric currents in the ionosphere and magnetosphere. The NOAA/CIRES Geomagnetism group (ngdc.noaa.gov/geomag/) develops and distributes models that describe all of these important sources to aid navigation. Our geomagnetic models are used on a variety of platforms, including airplanes, ships, submarines and smartphones. While the magnetic field from the Earth's core can be described with relatively few parameters and is suitable for offline computation, the magnetic sources in the Earth's crust, ionosphere and magnetosphere require either significant computational resources or real-time capabilities and are not suitable for offline calculation. This is especially important for small navigational devices or embedded systems, where computational resources are limited. Recognizing the need for fast and reliable access to our geomagnetic field models, we developed cloud-based application program interfaces (APIs) for NOAA's ionospheric and magnetospheric magnetic field models. In this paper we describe the need for reliable magnetic calculators, the challenges faced in running geomagnetic field models in the cloud in real time, and the feedback from our user community. We discuss lessons learned harvesting and validating the data which powers our cloud services, as well as our strategies for maintaining near-real-time service, including load balancing, real-time monitoring, and instance cloning. We will also briefly discuss the progress achieved on NOAA's Big Earth Data Initiative (BEDI)-funded project to develop API
Chamberlain, D. M.; Elliot, J. L.
1997-01-01
We present a method for speeding up numerical calculations of the light curve for a stellar occultation by a planetary atmosphere with an arbitrary, spherically symmetric atmospheric model. This improved speed makes least-squares fitting for model parameters practical. Our method takes as input several sets of values for the first two radial derivatives of the refractivity at different values of the model parameters, and interpolates to obtain the light curve at intermediate values of one or more model parameters. It was developed for small occulting bodies such as Pluto and Triton, but is applicable to planets of all sizes. We also present the results of a series of tests showing that our method calculates light curves that are correct to an accuracy of 10^-4 of the unocculted stellar flux. The test benchmarks are (i) an atmosphere with a 1/r dependence of temperature, which yields an analytic solution for the light curve, (ii) an atmosphere that produces an exponential refraction angle, and (iii) a small-planet isothermal model. With our method, least-squares fits to noiseless data also converge to parameter values with fractional errors of no more than 10^-4, with the largest errors occurring for small planets. These errors are well below the precision of the best stellar occultation data available. Fits to noisy data had formal errors consistent with the level of synthetic noise added to the light curve. We conclude: (i) one should interpolate refractivity derivatives and then form light curves from the interpolated values, rather than interpolating the light curves themselves; (ii) for the most accuracy, one must specify the atmospheric model for radii many scale heights above half light; and (iii) for atmospheres with smoothly varying refractivity with altitude, light curves can be sampled as coarsely as two points per scale height.
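The key idea in conclusion (i) — interpolate the precomputed refractivity-derivative profiles between model-parameter grid points, then build the light curve from the interpolated profile — can be sketched with a minimal linear interpolation over one parameter. The profiles and parameter values below are hypothetical toy numbers:

```python
def lerp_profile(p, p_lo, p_hi, prof_lo, prof_hi):
    """Linearly interpolate a radial profile (list of samples) between
    profiles precomputed at parameter values p_lo and p_hi."""
    w = (p - p_lo) / (p_hi - p_lo)
    return [(1.0 - w) * a + w * b for a, b in zip(prof_lo, prof_hi)]

# Hypothetical d(refractivity)/dr samples at three radii for two
# temperatures; interpolate to an intermediate temperature.
prof_300K = [1.0, 0.5, 0.25]
prof_400K = [2.0, 1.0, 0.50]
prof = lerp_profile(350.0, 300.0, 400.0, prof_300K, prof_400K)
```

The light curve would then be computed from `prof`, which is the step the paper argues preserves accuracy better than interpolating finished light curves.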
Target model of nucleosome particle for track structure calculations and DNA damage modeling
Czech Academy of Sciences Publication Activity Database
Michalik, Věslav; Běgusová, Marie
1994-01-01
Roč. 66, č. 3 (1994), s. 267-277 ISSN 0955-3002 R&D Projects: GA ČR(CZ) GA204/93/2451; GA AV ČR(CZ) IA135102; GA AV ČR(CZ) IA50405 Keywords : DNA nucleosome * ionizing radiation * theoretical modeling Subject RIV: AQ - Safety, Health Protection, Human - Machine Impact factor: 2.761, year: 1994
Yu, Lan; Zhan, Tingting; Zhan, Xiancheng; Wei, Guocui; Tan, Xiaoying; Wang, Xiaolan; Li, Chengrong
2014-11-01
The osmotic pressure of xylitol solution over a wide concentration range was calculated according to the UNIFAC model and experimentally determined by our newly reported air humidity osmometry. The air humidity osmometry measurements were compared with the UNIFAC model calculations from dilute to saturated solution. The results indicate that the air humidity osmometry measurements are comparable to the UNIFAC model calculations over a wide concentration range, as judged by two one-sided tests with multiple-testing corrections. Air humidity osmometry is thus applicable for measuring the osmotic pressure, and the osmotic pressure can be calculated from the concentration.
Faught, Austin M; Davidson, Scott E; Fontenot, Jonas; Kry, Stephen F; Etzel, Carol; Ibbott, Geoffrey S; Followill, David S
2017-09-01
The Imaging and Radiation Oncology Core Houston (IROC-H) (formerly the Radiological Physics Center) has reported varying levels of agreement in their anthropomorphic phantom audits. There is reason to believe one source of error in this observed disagreement is the accuracy of the dose calculation algorithms and heterogeneity corrections used. To audit this component of the radiotherapy treatment process, an independent dose calculation tool is needed. Monte Carlo multiple source models for Elekta 6 MV and 10 MV therapeutic x-ray beams were commissioned based on measurement of central axis depth dose data for a 10 × 10 cm² field size and dose profiles for a 40 × 40 cm² field size. The models were validated against open field measurements consisting of depth dose data and dose profiles for field sizes ranging from 3 × 3 cm² to 30 × 30 cm². The models were then benchmarked against measurements in IROC-H's anthropomorphic head and neck and lung phantoms. Validation results showed 97.9% and 96.8% of depth dose data passed a ±2% Van Dyk criterion for the 6 MV and 10 MV models, respectively. Dose profile comparisons showed an average agreement using a ±2%/2 mm criterion of 98.0% and 99.0% for the 6 MV and 10 MV models, respectively. Phantom plan comparisons were evaluated using a ±3%/2 mm gamma criterion, and averaged passing rates between Monte Carlo and measurements were 87.4% and 89.9% for the 6 MV and 10 MV models, respectively. Accurate multiple source models for Elekta 6 MV and 10 MV x-ray beams have been developed for inclusion in an independent dose calculation tool for use in clinical trial audits. © 2017 American Association of Physicists in Medicine.
Process modeling for the Integrated Nonthermal Treatment System (INTS) study
Energy Technology Data Exchange (ETDEWEB)
Brown, B.W.
1997-04-01
This report describes the process modeling done in support of the Integrated Nonthermal Treatment System (INTS) study. This study was performed to supplement the Integrated Thermal Treatment System (ITTS) study and comprises five conceptual treatment systems that treat DOE contact-handled mixed low-level wastes (MLLW) at temperatures of less than 350°F. ASPEN PLUS, a chemical process simulator, was used to model the systems. Nonthermal treatment systems were developed as part of the INTS study and include sufficient processing steps to treat the entire inventory of MLLW. The final result of the modeling is a process flowsheet with a detailed mass and energy balance. In contrast to the ITTS study, which modeled only the main treatment system, the INTS study modeled each of the various processing steps with ASPEN PLUS, release 9.1-1. Trace constituents, such as radionuclides and minor pollutant species, were not included in the calculations.
On the mixing model for calculating the temperature fields in nuclear reactor fuel assemblies
International Nuclear Information System (INIS)
Mikhin, V.I.; Zhukov, A.V.
1985-01-01
One variant of the mixing model for calculating temperature fields in nuclear reactor fuel assemblies, including assemblies with nonuniform energy release over the fuel element cross section, is described consistently. Equations are obtained for both constant and variable values of the coolant density and heat capacity. The mixing model is based on a set of mass, heat and longitudinal momentum balance equations. This set is closed by relations connecting the unknown values in the gaps between fuel elements with the averaged values for the neighbouring channels. Closure relations for the momentum and heat balance equations are suggested which account, in particular, for the nonequivalence of the heat-mass and momentum-mass transfer coefficients. The balance equations with variable coolant density and heat capacity are reduced to a form coinciding with that of the corresponding equations with constant values of these parameters. The applicability of one of the main relations of the mixing model, which relates the transverse coolant overflow in the gaps between fuel elements to the averaged coolant velocities (flow rates) in the neighbouring channels, is mainly limited to stabilized coolant flow in fuel assemblies with a regular, symmetrical arrangement of elements. Mass transfer coefficients for these assemblies are determined experimentally. The relation given in the paper is also applicable to the calculation of fuel assembly temperature fields with a small relative shift of the elements
International Nuclear Information System (INIS)
Estryk, G.; Higa, M.
1993-01-01
The RA-3 (5 MW, MTR) reactor is mainly used to produce radioisotopes (Mo-99, I-131, etc.). It started operating with Low Enrichment Uranium (LEU) in 1990 and consumes around 12 fuel assemblies per year. Although this consumption is small compared to that of a nuclear power station, good fuel management is still important. The present report describes: - A reactor model to perform the fuel shuffling. - Results of fuel management simulations for two and a half years of operation. Some features of the calculations can be summarized as follows: 1) A 3D calculation model is used with the code PUMA. It has no experimental adjustments, except for some approximations in the reflector representation, and predicts core power, flux distributions and reactivity in an acceptable way. 2) Comparisons have been made with the measurements taken during the commissioning with LEU fuels, and also with the empirical method which had been used in the earlier period of operation with LEU fuel. 3) The model has approximately 13500 points and can be run on an 80386 personal computer. The present method has been verified as a good tool for performing fuel management simulations for the RA-3 reactor. It is expected to produce economic advantages by: - Achieving better utilization of the fuels. - Leaving more time of operation for radioisotope production. The activation measurements through the whole core required by the previous method can be significantly reduced. (author)
Calculation methods for simulation and modelling of nuclear power plant accidents
International Nuclear Information System (INIS)
Zurita Centelles, A.
1985-01-01
The study deals with the development of calculation procedures for determining transient operating conditions in pressurized water reactors, with the following characteristics: application of largely analytic methods for the description of primary circuit components; a strictly modular program structure for easy exchange of component models; applicability of different component models according to the case at hand; wide valid ranges of the thermodynamic state variables in the transient models; the possibility of exchanging slip, pressure drop and heat transfer correlations as well as other functions when necessary; and application, in the dynamic component analyses, of the Anglo-Saxon lumped-parameter approach suited to the system instrumentation. With these calculation procedures it is possible to analyse the effect of a selected set of transients - up to the point of turbine trip and reactor emergency shutdown - on the individual primary circuit components. These transients may be broadly classified as modifications of heat rejection or heat input in the secondary circuit, in the coolant, or in the reactivity balance and power distribution. (orig.) [de
Calculating the Probability of Returning a Loan with Binary Probability Models
Directory of Open Access Journals (Sweden)
Julian Vasilev
2014-12-01
The purpose of this article is to give a new approach to calculating the probability of returning a loan. Many factors affect the value of this probability. In this article, several influencing factors are demonstrated using statistical and econometric models. The main approach is concerned with applying probit and logit models in loan management institutions. A new aspect of credit risk analysis is given. Calculating the probability of returning a loan is a difficult task. We assume that specific data fields concerning the contract (month of signing, year of signing, given sum) and data fields concerning the borrower of the loan (month of birth, year of birth (age), gender, region where he/she lives) may be independent variables in a binary logistic model with the dependent variable "the probability of returning a loan". It is shown that the month of signing a contract, the year of signing a contract, the gender and the age of the loan owner do not affect the probability of returning a loan. It is shown that the probability of returning a loan depends on the sum of the contract, the remoteness of the loan owner and the month of birth. The probability of returning a loan increases with the increase of the given sum, decreases with the proximity of the customer, increases for people born in the beginning of the year and decreases for people born at the end of the year.
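Evaluating a fitted binary logit model, as used above, reduces to the logistic transform of a linear predictor. The coefficients below are hypothetical, chosen only so their signs match the reported findings (probability rising with loan sum and remoteness, falling with later birth month):

```python
import math

def logit_probability(coeffs, intercept, x):
    """Probability from a binary logit model: p = 1 / (1 + exp(-z)),
    where z = intercept + sum of coefficient * predictor."""
    z = intercept + sum(b * xi for b, xi in zip(coeffs, x))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fit: predictors are (loan sum, remoteness in km, birth month).
coeffs, intercept = [0.0004, 0.01, -0.05], -1.0
p = logit_probability(coeffs, intercept, [5000, 120, 3])
```

The signs of the coefficients, not their magnitudes, carry the qualitative conclusions quoted in the abstract.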
International Nuclear Information System (INIS)
Davis, P.A.
1997-01-01
Models that simulate the transport and behaviour of radionuclides in the environment are used extensively in the nuclear industry for safety and licensing purposes. They are needed to calculate derived release limits for new and operating facilities, to estimate consequences following hypothetical accidents and to help manage a real emergency. But predictions generated for these purposes are essentially meaningless unless they are accompanied by a quantitative estimate of the confidence that can be placed in them. For example, in an emergency where there has been an accidental release of radioactivity to the atmosphere, decisions based on a validated model with small uncertainties would likely be very different from those based on an untested model, or on one with large uncertainties. This paper begins with a discussion of some general methods for establishing the credibility of model predictions. The focus is on environmental transport models, but the principles apply to models of all kinds. Establishing the credibility of a model is not a trivial task. It involves a number of steps, including face validation, verification, experimental validation, and sensitivity and uncertainty analyses. The remainder of the paper presents quantitative results relating to the credibility of environmental transport models. Model formulation, the choice of parameter values and the influence of the user are all discussed as sources of uncertainty in predictions. The magnitude of the uncertainties that must be expected in various applications of the models is presented. The examples used throughout the paper are drawn largely from recent work carried out in BIOMOVS and VAMP. (DM)
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Wang, Junmei; Cieplak, Piotr; Li, Jie; Hou, Tingjun; Luo, Ray; Duan, Yong
2011-03-31
In this work, four types of polarizable models have been developed for calculating interactions between atomic charges and induced point dipoles: the Applequist, Thole linear, Thole exponential, and Thole Tinker-like models. The polarizability models have been optimized to reproduce the experimental static molecular polarizabilities obtained from molecular refraction measurements on a set of 420 molecules reported by Bosque and Sales. We grouped the models into five sets depending on the interaction types, that is, whether the interactions of two atoms that form a bond, bond angle, or dihedral angle are turned off or scaled down. When 1-2 (bonded) and 1-3 (separated by two bonds) interactions are turned off, 1-4 (separated by three bonds) interactions are scaled down, or both, all models including the Applequist model achieved similar performance: the average percentage error (APE) ranges from 1.15 to 1.23%, and the average unsigned error (AUE) ranges from 0.143 to 0.158 Å³. When the short-range 1-2, 1-3, and full 1-4 terms are taken into account (set D models), the APE ranges from 1.30 to 1.58% for the three Thole models, whereas the Applequist model (DA) has a significantly larger APE (3.82%). The AUE ranges from 0.166 to 0.196 Å³ for the three Thole models, compared with 0.446 Å³ for the Applequist model. Further assessment using the 70-molecule van Duijnen and Swart data set clearly showed that the developed models are both accurate and highly transferable, and in fact have smaller errors than the models developed using this particular data set (set E models). The fact that the A, B, and C model sets are notably more accurate than both the D and E model sets strongly suggests that the inclusion of 1-2 and 1-3 interactions reduces transferability and accuracy.
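The two scoring metrics quoted above, APE and AUE, are straightforward to compute; a minimal sketch (with hypothetical polarizability values, in Å³) is:

```python
def ape_and_aue(calculated, experimental):
    """Average percentage error (%) and average unsigned error, as used
    to compare calculated and experimental molecular polarizabilities."""
    n = len(calculated)
    ape = 100.0 * sum(abs(c - e) / e
                      for c, e in zip(calculated, experimental)) / n
    aue = sum(abs(c - e) for c, e in zip(calculated, experimental)) / n
    return ape, aue

# Hypothetical example: three molecules, polarizabilities in cubic angstroms.
exp_vals  = [10.0, 4.0, 6.5]
calc_vals = [10.2, 3.9, 6.5]
ape, aue = ape_and_aue(calc_vals, exp_vals)
```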
Paul, F.; Maisch, M.; Rothenbühler, C.; Hoelzle, M.; Haeberli, W.
2007-02-01
The observed rapid glacier wastage in the European Alps during the past 20 years already has strong impacts on the natural environment (rock fall, lake formation) as well as on human activities (tourism, hydro-power production, etc.) and poses several new challenges for glacier monitoring. With a further increase of global mean temperature in the future, it is likely that Alpine glaciers and the high-mountain environment as an entire system will develop further into a state of imbalance. Hence, the assessment of future glacier geometries is a valuable prerequisite for various impact studies. In order to calculate and visualize in a consistent manner the future extent of a large number of individual glaciers (> 100) according to a given climate change scenario, we have developed an automated, simple but robust approach that is based on an empirical relationship between glacier size and the steady-state accumulation area ratio (AAR0) in the Alps. The model requires only digital glacier outlines and a digital elevation model (DEM), and calculates new glacier geometries from a given shift of the steady-state equilibrium line altitude (ELA0) by means of hypsographic modelling. We have calculated changes in number, area and volume for 3062 individual glacier units in Switzerland, applying six step changes in ELA0 (from +100 to +600 m) combined with four different values of the AAR0 (0.5, 0.6, 0.67, 0.75). For an AAR0 of 0.6 and an ELA0 rise of 200 m (400 m) we calculate a total area loss of -54% (-80%) and a corresponding volume loss of -50% (-78%) compared to the 1973 glacier extent. In combination with a geocoded satellite image, the future glacier outlines are also used for automated rendering of perspective visualisations. This is a very attractive tool for communicating research results to the general public. Our study is illustrated for a test site in the Upper Engadine (Switzerland), where landscape changes above timberline play an
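The hypsographic step described above can be sketched as follows: raise the equilibrium line altitude by the scenario shift, then retreat the glacier from its lowest elevation bands until the area above the new ELA is at least the fraction AAR0 of the remaining total. This is a schematic interpretation with hypothetical elevation-band areas, not the authors' implementation:

```python
def shrink_glacier(bands, new_ela, aar0=0.6):
    """bands: list of (band elevation, band area in km^2), sorted from
    lowest to highest. Drop the lowest bands (glacier retreat) until
    area-above-ELA / total-area >= aar0; return the remaining bands."""
    bands = list(bands)
    while bands:
        total = sum(area for _, area in bands)
        accum = sum(area for z, area in bands if z >= new_ela)
        if total > 0 and accum / total >= aar0:
            return bands
        bands.pop(0)  # remove the lowest elevation band
    return bands

# Hypothetical glacier with five equal-area 200 m elevation bands.
bands = [(2400, 1.0), (2600, 1.0), (2800, 1.0), (3000, 1.0), (3200, 1.0)]
remaining = shrink_glacier(bands, new_ela=2900, aar0=0.6)
```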
A model to calculate effectiveness of a submarine-launched nuclear ASW weapon
Energy Technology Data Exchange (ETDEWEB)
Magnoli, D.E.
1989-06-01
LLNL's Navy Tactical Applications Group (NTAG) has produced a computer model to calculate the probability of kill of a submarine-launched nuclear ASW standoff weapon. Because of the uncertainties associated with target position and motion and with weapon delivery, this is a problem appropriately treated statistically. The code is a Monte Carlo simulation which follows the engagement from localization through optional evasive maneuvers of the target to attack and damage assessment. For a given scenario (weapon characteristics, target characteristics, firing platform depth and hardness, etc.) the code produces a table and ultimately a plot of Pk as a function of range. 2 figs., 1 tab.
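The Monte Carlo treatment of combined localization and delivery uncertainty can be illustrated with a heavily simplified toy: sample a 2-D Gaussian target-location error and a 2-D Gaussian delivery error, and count a kill whenever the detonation point falls within a lethal radius of the target. This is not the NTAG model; all parameters are arbitrary illustrative values:

```python
import math
import random

def kill_probability(n_trials, target_sigma, delivery_sigma,
                     lethal_radius, seed=1):
    """Toy Monte Carlo estimate of Pk for one scenario."""
    rng = random.Random(seed)
    kills = 0
    for _ in range(n_trials):
        # Target-localization error and weapon-delivery error, each 2-D Gaussian.
        tx, ty = rng.gauss(0.0, target_sigma), rng.gauss(0.0, target_sigma)
        wx, wy = rng.gauss(0.0, delivery_sigma), rng.gauss(0.0, delivery_sigma)
        if math.hypot(tx - wx, ty - wy) <= lethal_radius:
            kills += 1
    return kills / n_trials

pk = kill_probability(20_000, target_sigma=1.0, delivery_sigma=0.5,
                      lethal_radius=2.0)
```

Repeating the estimate over a sweep of ranges (with range-dependent sigmas) would yield the Pk-versus-range curve the abstract mentions.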
Systematical calculation of α decay half-lives with a generalized liquid drop model
Energy Technology Data Exchange (ETDEWEB)
Bao, Xiaojun [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Hongfei, E-mail: zhanghongfei@lzu.edu.cn [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Zhang, Haifei [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Royer, G. [Laboratoire Subatech, UMR, IN2P3/CNRS, Université – Ecole des Mines, 44 Nantes (France); Li, Junqing [School of Nuclear Science and Technology, Lanzhou University, Lanzhou 730000 (China); Institute of Modern Physics, Chinese Academy of Science, Lanzhou 730000 (China)
2014-01-15
A systematic calculation of α decay half-lives is presented for even–even nuclei between Te and Z=118 isotopes. The potential energy governing α decay has been determined within a liquid drop model including proximity effects between the α particle and the daughter nucleus and taking into account the experimental Q value. The α decay half-lives have been deduced from the WKB barrier penetration probability. The α decay half-lives obtained agree reasonably well with the experimental data.
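The WKB barrier-penetration step can be sketched numerically. As a simplification, the barrier below is a bare Coulomb potential rather than the generalized liquid-drop potential with proximity effects used in the paper, so the inner turning point, reduced mass, and resulting numbers are only illustrative:

```python
import math

HBARC = 197.327   # MeV fm
AMU = 931.494     # MeV per atomic mass unit

def wkb_penetration(Z_d, Q, r_in, mu_amu=4.0):
    """WKB penetration probability exp(-G), with
    G = (2/hbar) * integral of sqrt(2*mu*(V(r) - Q)) dr between
    the turning points, for a bare Coulomb barrier V(r) = 2*Z_d*e^2/r."""
    e2 = 1.43996                        # e^2/(4*pi*eps0) in MeV fm
    V = lambda r: 2.0 * Z_d * e2 / r    # alpha particle charge = 2
    r_out = 2.0 * Z_d * e2 / Q          # outer turning point, V(r_out) = Q
    n = 10_000
    dr = (r_out - r_in) / n
    integral = sum(math.sqrt(max(V(r_in + (i + 0.5) * dr) - Q, 0.0))
                   for i in range(n)) * dr     # midpoint rule
    G = 2.0 * math.sqrt(2.0 * mu_amu * AMU) / HBARC * integral
    return math.exp(-G)

# Roughly 212Po -> 208Pb + alpha: Q about 8.95 MeV, touching radius ~9 fm.
P = wkb_penetration(Z_d=82, Q=8.95, r_in=9.0)
```

The half-life then follows from the assault frequency, T½ = ln 2 / (ν P), with ν supplied by the nuclear model.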
Programs and subroutines for calculating cadmium body burdens based on a one-compartment model
International Nuclear Information System (INIS)
Robinson, C.V.; Novak, K.M.
1980-08-01
A pair of FORTRAN programs for calculating the body burden of cadmium as a function of age is presented, together with a discussion of the assumptions which serve to specify the underlying, one-compartment model. Account is taken of the contributions to the body burden from food, from ambient air, from smoking, and from occupational inhalation. The output is a set of values for ages from birth to 90 years which is either longitudinal (for a given year of birth) or cross-sectional (for a given calendar year), depending on the choice of input parameters
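For a constant total intake rate, the one-compartment model underlying such programs has the closed form B(t) = aI/λ · (1 − e^(−λt)), where aI is the absorbed intake rate and λ the elimination constant. The sketch below uses illustrative values for the biological half-life and absorbed fraction, not the parameters of the cited programs:

```python
import math

def body_burden(age_years, daily_intake_ug, halflife_years=16.0,
                absorbed_fraction=0.05):
    """One-compartment model dB/dt = a*I - lambda*B, solved for constant
    intake from birth: B(t) = a*I/lambda * (1 - exp(-lambda*t)).
    Half-life and absorbed fraction are illustrative assumptions."""
    lam = math.log(2.0) / halflife_years
    absorbed_per_year = daily_intake_ug * 365.0 * absorbed_fraction
    return absorbed_per_year / lam * (1.0 - math.exp(-lam * age_years))

# Hypothetical longitudinal point: burden in micrograms at age 50
# for a constant total intake of 20 µg/day.
burden_50 = body_burden(50, daily_intake_ug=20.0)
```

Summing separate terms of this form for food, ambient air, smoking and occupational inhalation reproduces the structure described in the abstract.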
Flow aerodynamics modeling of an MHD swirl combustor - calculations and experimental verification
International Nuclear Information System (INIS)
Gupta, A.K.; Beer, J.M.; Louis, J.F.; Busnaina, A.A.; Lilley, D.G.
1981-01-01
This paper describes a computer code for calculating the flow dynamics of constant-density flow in the second-stage trumpet-shaped nozzle section of a two-stage MHD swirl combustor for application to a disk generator. The primitive-variable (pressure-velocity), finite difference computer code has been developed to allow the computation of inert, nonreacting, turbulent swirling flows in an axisymmetric MHD model swirl combustor. The method and program involve a staggered grid system for axial and radial velocities, and a line relaxation technique for efficient solution of the equations. The code produces as output the flow field map of the non-dimensional stream function and the axial and swirl velocities. 19 refs
Model-supported forward calculation of secondary helium observed by IBEX
Mueller, H. R.; Wood, B. E.
2017-12-01
Low-energy secondary neutral helium, created by charge exchange from interstellar helium ions, flows into the inner heliosphere and is part of the neutral helium signal observed by IBEX, the other contributor being primary neutral helium directly from interstellar space. With the help of an accurate, analytic heliospheric neutral test-particle code coupled to a global heliospheric model dominated by hydrogen and protons, the distribution functions and fluxes of secondary helium neutrals are calculated theoretically, from first principles. A general assessment of the characteristics and main sources of secondaries is given, as well as a discussion of their relevance to probe the outer heliosheath.
An approximate framework for quantum transport calculation with model order reduction
Energy Technology Data Exchange (ETDEWEB)
Chen, Quan, E-mail: quanchen@eee.hku.hk [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Li, Jun [Department of Chemistry, The University of Hong Kong (Hong Kong); Yam, Chiyung [Beijing Computational Science Research Center (China); Zhang, Yu [Department of Chemistry, The University of Hong Kong (Hong Kong); Wong, Ngai [Department of Electrical and Electronic Engineering, The University of Hong Kong (Hong Kong); Chen, Guanhua [Department of Chemistry, The University of Hong Kong (Hong Kong)
2015-04-01
A new approximate computational framework is proposed for computing the non-equilibrium charge density in the context of the non-equilibrium Green's function (NEGF) method for quantum mechanical transport problems. The framework consists of a new formulation, called the X-formulation, for single-energy density calculation based on the solution of sparse linear systems, and a projection-based nonlinear model order reduction (MOR) approach to address the large number of energy points required for large applied biases. The advantages of the new methods are confirmed by numerical experiments.
GoSam 2.0. Automated one loop calculations within and beyond the standard model
International Nuclear Information System (INIS)
Greiner, Nicolas; Deutsches Elektronen-Synchrotron
2014-10-01
We present GoSam 2.0, a fully automated framework for the generation and evaluation of one-loop amplitudes in multi-leg processes. The new version offers numerous improvements both on the generation side and on the reduction side. This leads to a faster and more stable code for calculations within and beyond the Standard Model. Furthermore, it contains the extended version of the standardized interface to Monte Carlo programs, which allows for an easy combination with other existing tools. We briefly describe the conceptual innovations and present some phenomenological results.
Shell-model calculations of beta-decay rates for s- and r-process nucleosyntheses
International Nuclear Information System (INIS)
Takahashi, K.; Mathews, G.J.; Bloom, S.D.
1985-01-01
Examples of large-basis shell-model calculations of Gamow-Teller β-decay properties of specific interest in the astrophysical s- and r-processes are presented. Numerical results are given for: (1) the GT matrix elements for the excited-state decays of the unstable s-process nucleus 99Tc; and (2) the GT strength function for the neutron-rich nucleus 130Cd, which lies on the r-process path. The results are discussed in conjunction with the astrophysics problems. 23 refs., 3 figs
DEFF Research Database (Denmark)
Skulason, Egill; Tripkovic, Vladimir; Björketun, Mårten
2010-01-01
Density functional theory calculations have been performed for the three elementary steps―Tafel, Heyrovsky, and Volmer―involved in the hydrogen oxidation reaction (HOR) and its reverse, the hydrogen evolution reaction (HER). For the Pt(111) surface a detailed model consisting of a negatively...... charged Pt(111) slab and solvated protons in up to three water bilayers is considered and reaction energies and activation barriers are determined by using a newly developed computational scheme where the potential can be kept constant during a charge transfer reaction. We determine the rate limiting...
Calculation of accurate small angle X-ray scattering curves from coarse-grained protein models
DEFF Research Database (Denmark)
Stovgaard, Kasper; Andreetta, Christian; Ferkinghoff-Borg, Jesper
2010-01-01
Background: Genome sequencing projects have expanded the gap between the amount of known protein sequences and structures. The limitations of current high resolution structure determination methods make it unlikely that this gap will disappear in the near future. Small angle X-ray scattering (SAXS......) is an established low resolution method for routinely determining the structure of proteins in solution. The purpose of this study is to develop a method for the efficient calculation of accurate SAXS curves from coarse-grained protein models. Such a method can for example be used to construct a likelihood function...
Sensitivity analysis using the FRAPCON-1/EM: development of a calculation model for licensing
International Nuclear Information System (INIS)
Chapot, J.L.C.
1985-01-01
The FRAPCON-1/EM is a version of the FRAPCON-1 code which analyzes fuel rod performance under normal operating conditions. This version yields conservative results and is used by the NRC in its licensing activities. A sensitivity analysis was made to determine the combination of models from the FRAPCON-1/EM which yields the most conservative results for a typical Angra-1 reactor fuel rod. The present analysis showed that this code can be used as a calculation tool for the licensing of the Angra-1 reload. (F.E.) [pt
International Nuclear Information System (INIS)
Caracappa, Peter F.; Xu, X. George; Gu, Jianwei
2011-01-01
The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally around the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body and for the principal exposure projections for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations from CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at the z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
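The longitudinal weighting scheme described in the abstract amounts to scaling each precomputed constant-current dose by the relative tube current at that z position. A minimal sketch, with hypothetical function and variable names:

```python
def longitudinal_tcm_dose(constant_current_dose, tube_current, reference_current):
    """Weight precomputed constant-current doses by relative tube current.

    constant_current_dose: dose per z position from a constant-mA Monte
    Carlo run; tube_current: modulated mA at each z position.  A sketch
    of the weighting described in the abstract, not the authors' code.
    """
    return [dose * (mA / reference_current)
            for dose, mA in zip(constant_current_dose, tube_current)]
```

Rotational TCM works analogously, weighting the per-projection organ doses by the current at each rotational angle before summing.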
Novel and Efficient Methods for Calculating Pressure in Polymer Lattice Models
Zhang, Pengfei; Wang, Qiang
2014-03-01
Pressure calculation in polymer lattice models is an important but nontrivial subject. The three existing methods - thermodynamic integration, repulsive wall, and sedimentation equilibrium - all have their limitations and cannot be used to accurately calculate the pressure at all polymer volume fractions φ. Here we propose two novel methods. In the first method, we combine Monte Carlo simulation in an expanded grand-canonical ensemble with Wang-Landau Optimized-Ensemble (WL-OE) simulation to calculate the pressure as a function of polymer volume fraction; this is very efficient at low to intermediate φ and exhibits negligible finite-size effects. In the second method, we introduce a repulsive plane with bridging bonds, which is similar to the repulsive wall method but eliminates its confinement effects, and estimate the two-dimensional density of states (in terms of the number of bridging bonds and the contact number) using the 1/t version of the Wang-Landau algorithm. This works well at all φ, especially at high φ where all methods involving chain-insertion trial moves fail.
Mixed layer depth calculation in deep convection regions in ocean numerical models
Courtois, Peggy; Hu, Xianmin; Pennelly, Clark; Spence, Paul; Myers, Paul G.
2017-12-01
Mixed Layer Depths (MLDs) diagnosed by conventional numerical models are generally based on a density difference with the surface (e.g., 0.01 kg m-3). However, temperature-salinity compensation and a lack of vertical resolution contribute to over-estimated MLDs, especially in regions of deep convection. In the present work, we examined the diagnostic MLD, associated with the deep convection of the Labrador Sea Water (LSW), calculated with a simple density difference criterion. The over-estimated MLD led us to develop a new tool, based on an observational approach, to recalculate MLD from model output. We used an eddy-permitting, 1/12° regional configuration of the Nucleus for European Modelling of the Ocean (NEMO) to test and discuss our newly defined MLD. We compared our new MLD with that from observations, and we showed a major improvement with our new algorithm. To show that the new MLD is not dependent on a single model and its horizontal resolution, we extended our analysis to include 1/4° eddy-permitting simulations and simulations using the Modular Ocean Model (MOM).
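The conventional criterion the authors start from (density difference with the surface, e.g. 0.01 kg m-3) can be sketched as follows; this is the simple diagnostic said to over-estimate MLD, not the paper's improved algorithm:

```python
def mixed_layer_depth(depths, densities, threshold=0.01):
    """Depth where density first exceeds the surface value by `threshold`
    (kg/m^3), linearly interpolated between model levels.

    depths: increasing level depths (m); densities: potential density at
    each level (kg/m^3).  Returns the bottom depth if never exceeded.
    """
    rho0 = densities[0]
    for k in range(1, len(depths)):
        if densities[k] - rho0 >= threshold:
            # linear interpolation between levels k-1 and k
            frac = ((threshold - (densities[k - 1] - rho0))
                    / (densities[k] - densities[k - 1]))
            return depths[k - 1] + frac * (depths[k] - depths[k - 1])
    return depths[-1]  # profile mixed to the bottom of the column
```

With coarse vertical resolution the interpolation step is large, which is one reason the simple criterion can badly over-estimate MLD in deep-convection regions.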
International Nuclear Information System (INIS)
Karlberg, O.
1995-02-01
Doses to critical groups from the activity released from Swedish reactors were modelled in 1983. In this report these calculations are compared to doses calculated (using the same assumptions as in the 1983 model) from the activity measured in the water recipient. The study shows that the model overestimates activity in biota and sediments, which was expected, since the model was constructed to be conservative. 13 refs, 5 figs, 6 tabs
Program realization of mathematical model of kinematic calculation of flat lever mechanisms
Directory of Open Access Journals (Sweden)
M. A. Vasechkin
2016-01-01
Full Text Available Kinematic calculation of mechanisms is very time-consuming work. Because it consists of a large number of similar operations, it can be automated using computers. For this purpose a software implementation of the mathematical model for the kinematic calculation of second-class mechanisms is needed. The article presents, in Turbo Pascal, the text of a module containing library procedures for all kinematic studies of planar lever mechanisms of the second class. The determination of the kinematic characteristics of a mechanism and the construction of its plans of positions, velocities and accelerations are carried out on the example of a six-link mechanism. The origin of the fixed coordinate system coincides with the axis of rotation of the crank AB. It is assumed that the lengths of all links, the positions of all additional points of the links and the coordinates of all kinematic pairs of the mechanism's frame are known, i.e. this stage of determining the kinematics of the mechanism must be preceded by a synthesis stage (determining the missing link dimensions). Specifying the coordinates of point C and noting that the velocity and acceleration analogues of this point are zero (a stationary point), one calls the procedure that computes the kinematics of an Assur group of the third class. Then the kinematic parameters of point D are specified; taking the origin of the slider guide E at point C, with the guide's angle, angular-velocity analogue and angular-acceleration analogue equal to zero, and knowing the length of connecting rod DE and the length of link 5, one calls the procedure for an Assur group of the second class. The use of the library routines of the kinematic-calculation module makes it relatively simple to simulate the motion of the mechanism, to calculate the projections of the velocity and acceleration analogues of all links, and to build plans of velocities and accelerations at each position of the mechanism.
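As a minimal illustration of the kind of kinematic procedure such a module provides (in Python rather than the paper's Turbo Pascal, with our own variable names), here is the position, velocity-analogue and acceleration-analogue calculation for a crank AB rotating about a fixed axis A:

```python
import math

def crank_kinematics(l_ab, phi, omega, alpha=0.0):
    """Kinematics of crank tip B for crank AB rotating about fixed point A
    at the origin.

    l_ab: crank length; phi: crank angle (rad); omega, alpha: angular
    velocity and acceleration.  Returns (position, velocity, acceleration)
    of point B as (x, y) tuples.  A sketch, not the paper's procedures.
    """
    x = l_ab * math.cos(phi)
    y = l_ab * math.sin(phi)
    vx = -l_ab * math.sin(phi) * omega              # d(x)/dt
    vy = l_ab * math.cos(phi) * omega               # d(y)/dt
    ax = -l_ab * math.cos(phi) * omega ** 2 - l_ab * math.sin(phi) * alpha
    ay = -l_ab * math.sin(phi) * omega ** 2 + l_ab * math.cos(phi) * alpha
    return (x, y), (vx, vy), (ax, ay)
```

Calling this at a sweep of crank angles yields the position, velocity and acceleration plans; the Assur-group procedures then propagate these quantities through the remaining links.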
Popov, I; Valašková, J; Štefaničková, J; Krásnik, V
2017-01-01
A substantial part of the population suffers from some kind of refractive error. It is envisaged that the prevalence of these errors may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data were obtained using the Lenstar LS900 optical biometer. Data which could not be obtained due to the limitations of the device were substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes are presented. The data were interpreted using descriptive statistics, Pearson correlation and the t-test. The statistical tests were conducted at a significance level of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in the future for comparing the prevalence of refractive errors determined using the same methods. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
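A reduced-eye sketch of the kind of calculation involved: the refractive error is the difference between the power needed to image on the retina and the eye's actual power, then converted to the spectacle plane at the 12 mm vertex distance the study uses. This is a simplification of the full Gullstrand schematic eye, with an assumed image-space index, not the study's exact procedure.

```python
def ocular_refraction_reduced_eye(axial_length_m, eye_power_D, n_image=1.336):
    """Refractive error at the corneal plane from a reduced-eye model:
    the power needed to focus on the retina minus the eye's actual power.
    Positive result = hyperopia, negative = myopia."""
    required_power = n_image / axial_length_m      # diopters
    return required_power - eye_power_D

def to_spectacle_plane(corneal_refraction_D, vertex_m=0.012):
    """Convert corneal-plane refraction to a spectacle correction at a
    12 mm vertex distance: F_spec = F_cornea / (1 + d * F_cornea)."""
    return corneal_refraction_D / (1 + vertex_m * corneal_refraction_D)
```

For example, a 22.3 mm eye with 58 D of power is left about +1.9 D hyperopic at the cornea in this simplified model; the vertex conversion then gives the slightly different spectacle prescription.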
Calculation of the P-T phase diagram of nitrogen using a mean field model
Enginer, Y.; Algul, G.; Yurtseven, H.
2017-12-01
The P-T phase diagram of nitrogen is calculated at low and moderate pressures by obtaining the phase line equations for the transitions considered, using the Landau phenomenological model. For some transitions, a quadratic coupling between the order parameters is taken into account in the expansion of the free energies in terms of the order parameters. A quadratic function in T and P is fitted to the experimental P-T data from the literature and the fitted parameters are determined. It is shown that the model studied here describes the observed data adequately and can also be used to predict the thermodynamic properties of the phases of molecular nitrogen within the temperature and pressure ranges of the P-T phase diagram of this system.
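Fitting a quadratic phase line T(P) to boundary points, as the abstract describes, is a small least-squares problem. The sketch below solves the 3x3 normal equations directly; the data points are synthetic (generated from an assumed quadratic), purely to illustrate the fit, and are not nitrogen's experimental boundary.

```python
def fit_quadratic(P, T):
    """Least-squares fit T ~ a*P^2 + b*P + c via the normal equations,
    solved with Cramer's rule.  Returns (a, b, c)."""
    s = [sum(p ** k for p in P) for k in range(5)]        # power sums S0..S4
    r = [sum(t * p ** k for p, t in zip(P, T)) for k in range(3)]
    # normal-equation matrix for the unknowns (a, b, c)
    M = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    rhs = [r[2], r[1], r[0]]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    D = det3(M)

    def replace_col(m, col, v):
        return [[v[i] if j == col else m[i][j] for j in range(3)]
                for i in range(3)]

    return tuple(det3(replace_col(M, j, rhs)) / D for j in range(3))

# synthetic boundary points generated from T = 3 P^2 + 18 P + 35.6
P_data = [0.0, 0.5, 1.0, 1.5, 2.0]
T_data = [35.6, 45.35, 56.6, 69.35, 83.6]
coeffs = fit_quadratic(P_data, T_data)
```

Since the synthetic points lie exactly on a quadratic, the fit recovers the generating coefficients; with real scattered data the same routine returns the least-squares estimates.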
Model calculations of the age of firn air across the Antarctic continent
Directory of Open Access Journals (Sweden)
K. A. Kaspers
2004-01-01
Full Text Available The age of firn air in Antarctica at pore close-off depth is known only for a few specific sites where firn air has been sampled for analysis. We present a model that calculates the age of firn air at pore close-off depth for the entire Antarctic continent. The model uses four meteorological parameters as input (surface temperature, pressure, accumulation rate and wind speed). Using parameterisations for surface snow density, pore close-off density and tortuosity, in combination with a density-depth model and data from a regional atmospheric climate model, the distribution of pore close-off depth for the entire Antarctic continent is determined. The deepest pore close-off depth was found for the East Antarctic Plateau near 72° E, 82° S, at 150±15 m (2σ). A firn air diffusion model was applied to calculate the age of CO2 at pore close-off depth. The results predict that the oldest firn gas (CO2 age) is located between Dome Fuji, Dome Argus and Vostok at 43° E, 78° S, being 148±23 (1σ; ±38 for 2σ) years old. At this location an atmospheric trace gas record should be obtained. As an example of the trace gas records obtainable at this location, we show that a methyl chloride record with a predicted length of 125 years could be recovered. The longest record currently available from firn air, derived at South Pole, is 80 years. Sensitivity tests reveal that locations with old firn air (East Antarctic Plateau) have an estimated uncertainty (2σ) in the modelled CO2 age at pore close-off depth of 30%, and about 40% for locations with younger firn air (CO2 age typically 40 years). Comparing the modelled age of CO2 at pore close-off depth with directly determined ages at seven sites yielded a correlation coefficient of 0.90 and a slope close to 1, suggesting a high level of confidence in the modelled results in spite of considerable remaining uncertainties.
Modelling of preheated regenerative chain in Cernavoda NPP using MMS calculation code
International Nuclear Information System (INIS)
Bigu, M.; Nita, I.; Prisecaru, I.; Dupleac, D.
2005-01-01
Full text: This work studied the operation of the preheated regenerative chain of the Cernavoda NPP. For this analysis, coupled analyses of the condensate system, the water supply system, and the drain cooler system were performed. The analysis boundaries are: upstream, the steam condensers and turbine bleed steam; downstream, the steam generators. The analysis was made in two steps: 1) Obtaining the hydraulic characteristic of the pipe network from the steam condensers to the steam generators at the nominal regime; this step was carried out with the hydraulic package PIPENET. 2) The actual thermal hydraulic analyses were done based on the hydraulic characteristic of the pipe network and the supplementary data required for heat transfer calculations in the equipment of the preheated regenerative chain. The thermal analyses were done using the MMS package and referred to normal operating regimes, namely the nominal operating regime required for calibration of the calculation model, the shutdown regime, and the start-up regime from zero power hot to nominal power, and to abnormal operating regimes, namely turbine trip, reactor trip and loss of two condensate pumps. The results were compared with already existing analyses and showed the largest differences at interface areas (i.e. 5%). This led us to the idea of extending the analysis to all secondary circuits in order to reduce the number of boundary conditions which can generate uncertainty in the analysis. In this analysis we obtained an advanced model of the preheated regenerative chain of the secondary circuit in the Cernavoda NPP which could be extended to cover the whole secondary circuit by including the analysis of the steam generators, turbine, and steam condenser. (authors)
Bus Operation Monitoring Oriented Public Transit Travel Index System and Calculation Models
Directory of Open Access Journals (Sweden)
Jiancheng Weng
2013-01-01
Full Text Available This study proposed a two-dimensional index system, based on travel modes and user satisfaction, that is concerned essentially with urban travel. First, public transit was taken as an example to describe the process of establishing the index system. Considering convenience, rapidity, reliability, comfort, and safety, a bus service evaluation index system was established. The indicators, which include the N-minute coverage of bus stops, average travel speed, fluctuation of travel time between stops, and bus load factor, intuitively describe the characteristics of public transport and were selected to calculate bus travel indexes. Then, combined with these basic indicators, calculation models for a Convenience Index (CI), Rapid Index (RI), Reliability Index (RBI), and Comfort Index (CTI) were established based on multisource public transit data, including real-time bus GPS data and passenger IC card data. Finally, a case study of Beijing bus operation evaluation and analysis was conducted using real bus operation data, including GPS data and passenger transaction recorder (IC card) data. The results showed that the operating condition of the public transit was well reflected and scientifically classified by the bus travel index models.
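Aggregating normalized sub-indices (CI, RI, RBI, CTI) into a single travel index is, at its simplest, a weighted sum. The weights and the 0-100 scale below are illustrative assumptions, not the paper's calibrated models:

```python
def composite_transit_index(indicators, weights):
    """Weighted aggregation of normalized sub-indices into one public
    transit travel index.

    indicators: dict of sub-index name -> score on a 0-100 scale;
    weights: dict of the same keys -> weights summing to 1.
    A sketch of the aggregation idea, not the paper's models.
    """
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(indicators[name] * w for name, w in weights.items())
```

In practice each sub-index would itself be computed from the underlying indicators (stop coverage, travel speed, travel-time fluctuation, load factor) before aggregation.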
Jacob, D; Palacios, J J
2011-01-28
We study the performance of two different electrode models in quantum transport calculations based on density functional theory: parametrized Bethe lattices and quasi-one-dimensional wires or nanowires. A detailed account of the implementation of both is given. From a systematic study of nanocontacts made of representative metallic elements, we conclude that parametrized electrode models represent an excellent compromise between computational cost and electronic structure definition as long as the aim is to compare with experiments where the precise atomic structure of the electrodes is not relevant or not defined with precision. The results obtained using parametrized Bethe lattices are essentially similar to those obtained with quasi-one-dimensional electrodes of large enough cross-section, adding a natural smearing to the transmission curves that mimics the true nature of polycrystalline electrodes. Quasi-one-dimensional electrodes are more demanding from the computational point of view, but present the advantage of extending the range of applicability of transport calculations to situations where the electrodes have a well-defined atomic structure, as is the case for carbon nanotubes, graphene nanoribbons, or semiconducting nanowires. All the analysis is done with the help of codes developed by the authors, which can be found in the quantum transport toolbox ALACANT and are publicly available.
Directory of Open Access Journals (Sweden)
V. Giannoglou
2016-06-01
Full Text Available Scoliosis is a 3D deformity of the human spinal column, caused by abnormal bending of the spine, which leads to pain and to aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for the diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important studies that have been carried out in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.
Lesperance, Marielle; Inglis-Whalen, M; Thomson, R M
2014-02-01
To investigate the effects of the composition and geometry of ocular media and tissues surrounding the eye on dose distributions for COMS eye plaque brachytherapy with(125)I, (103)Pd, or (131)Cs seeds, and to investigate doses to ocular structures. An anatomically and compositionally realistic voxelized eye model with a medial tumor is developed based on a literature review. Mass energy absorption and attenuation coefficients for ocular media are calculated. Radiation transport and dose deposition are simulated using the EGSnrc Monte Carlo user-code BrachyDose for a fully loaded COMS eye plaque within a water phantom and our full eye model for the three radionuclides. A TG-43 simulation with the same seed configuration in a water phantom neglecting the plaque and interseed effects is also performed. The impact on dose distributions of varying tumor position, as well as tumor and surrounding tissue media is investigated. Each simulation and radionuclide is compared using isodose contours, dose volume histograms for the lens and tumor, maximum, minimum, and average doses to structures of interest, and doses to voxels of interest within the eye. Mass energy absorption and attenuation coefficients of the ocular media differ from those of water by as much as 12% within the 20-30 keV photon energy range. For all radionuclides studied, average doses to the tumor and lens regions in the full eye model differ from those for the plaque in water by 8%-10% and 13%-14%, respectively; the average doses to the tumor and lens regions differ between the full eye model and the TG-43 simulation by 2%-17% and 29%-34%, respectively. Replacing the surrounding tissues in the eye model with water increases the maximum and average doses to the lens by 2% and 3%, respectively. Substituting the tumor medium in the eye model for water, soft tissue, or an alternate melanoma composition affects tumor dose compared to the default eye model simulation by up to 16%. In the full eye model
Energy Technology Data Exchange (ETDEWEB)
Arbeiter, F. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Gordeev, S. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)]. E-mail: gordeev@irs.fzk.de; Heinzel, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany); Slobodtchouk, V. [Forschungszentrum Karlsruhe GmbH, Postfach 3640, D-76021 Karlsruhe (Germany)
2006-02-15
The aim of the present work is to choose an optimal use of CFD codes for the thermohydraulic calculation of helium-cooled fusion reactor components, such as the divertor module, the test blanket module, and the International Fusion Materials Irradiation Facility (IFMIF) test modules. In spite of common features (intense heat flux, nuclear heating of the structure, helium cooling), these components have different boundary conditions, such as helium temperature, pressure, and heating rate, and different geometries. These differences give rise to flow effects that significantly influence the heat transfer. A number of turbulence models offered by the commercial STAR-CD code were tested against experiments carried out at the Forschungszentrum Karlsruhe (FZK) and against experimental data from the scientific literature. Results of the different turbulence models are compared and analysed. For geometrically simple channel flows with significant gas property variation, low-Re number k-{epsilon} models with damping functions give more accurate results and are more appropriate for the conditions of the IFMIF HFTM. The heat transfer in regions with flow impingement is well predicted by turbulence models that include limiters on the turbulence production. The most reliable turbulence models were chosen for the thermohydraulic analysis.
Non-LTE model calculations for SN 1987A and the extragalactic distance scale
Schmutz, W.; Abbott, D. C.; Russell, R. S.; Hamann, W.-R.; Wessolowski, U.
1990-01-01
This paper presents model atmospheres for the first week of SN 1987A, based on the luminosity and density/velocity structure from hydrodynamic models of Woosley (1988). The models account for line blanketing, expansion, sphericity, and departures from LTE in hydrogen and helium and differ from previously published efforts because they represent ab initio calculations, i.e., they contain essentially no free parameters. The formation of the UV spectrum is dominated by the effects of line blanketing. In the absorption troughs, the Balmer line profiles were fit well by these models, but the observed emissions are significantly stronger than predicted, perhaps due to clumping. The generally good agreement between the present synthetic spectra and observations provides independent support for the overall accuracy of the hydrodynamic models of Woosley. The question of the accuracy of the Baade-Wesselink method is addressed in a detailed discussion of its approximations. While the application of the standard method produces a distance within an uncertainty of 20 percent in the case of SN 1987A, systematic errors up to a factor of 2 are possible, particularly if the precursor was a red supergiant.
Directory of Open Access Journals (Sweden)
Giovanni Pitari
2017-12-01
Two independent chemistry-transport models with troposphere-stratosphere coupling are used to quantify the different components of the radiative forcing (RF) from aircraft emissions of NOx: the University of L'Aquila climate-chemistry model (ULAQ-CCM) and the University of Oslo chemistry-transport model (Oslo-CTM3). The tropospheric NOx enhancement due to aircraft emissions produces a short-term O3 increase with a positive RF (+17.3 mW/m2, as an average of the two models). This is partly compensated by the CH4 decrease due to the OH enhancement (−9.4 mW/m2). The latter is a long-term response calculated using a surface CH4 flux boundary condition (FBC), with at least 50 years needed for the atmospheric CH4 to reach steady state. The radiative balance is also affected by the decreasing amount of CO2 produced at the end of the CH4 oxidation chain: an average CO2 accumulation change of −2.2 ppbv/yr is calculated on a 50 year time horizon (−1.6 mW/m2). The aviation-perturbed amount of CH4 induces a long-term response of tropospheric O3, mostly due to less HO2 and CH3O2 being available for O3 production, compared with the reference case where a constant CH4 surface mixing ratio boundary condition (MBC) is used (−3.9 mW/m2). The CH4 decrease also induces a long-term response of stratospheric H2O (−1.4 mW/m2). The latter finally perturbs HOx and NOx in the stratosphere, with a more efficient NOx cycle for mid-stratospheric O3 depletion and a decreased O3 production from HO2+NO in the lower stratosphere. This produces a long-term stratospheric O3 loss with a negative RF (−1.2 mW/m2), compared with the CH4 MBC case. Other contributions to the net NOx RF are those due to NO2 absorption of UV-A and aerosol perturbations (the latter calculated only in the ULAQ-CCM). These comprise increasing sulfate due to more efficient oxidation of SO2, increasing inorganic and organic nitrates, and the net aerosol indirect effect on warm clouds
Energy Technology Data Exchange (ETDEWEB)
Freeman, L.B. (ed.)
1978-08-01
The calculational model used in the analysis of LWBR nuclear performance is described. The model was used to analyze the as-built core and predict core nuclear performance prior to core operation. The qualification of the nuclear model using experiments and calculational standards is described. Features of the model include: an automated system of processing manufacturing data; an extensively analyzed nuclear data library; an accurate resonance integral calculation; space-energy corrections to infinite medium cross sections; an explicit three-dimensional diffusion-depletion calculation; a transport calculation for high energy neutrons; explicit accounting for fuel and moderator temperature feedback, clad diameter shrinkage, and fuel pellet growth; and an extensive testing program against experiments and a highly developed analytical standard.
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
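The equivalent two-sample construction for logistic regression can be sketched numerically. The Python snippet below is a minimal illustration, not the authors' implementation: it maps a slope and covariate standard deviation to a two-sample log-odds difference of twice the slope times the standard deviation, centres the two groups symmetrically about the overall logit so the expected number of events is approximately unchanged, and evaluates power with a normal approximation. The function names and the centring choice are assumptions.

```python
import math

Z_975 = 1.959964  # two-sided 5% normal quantile

def logit(p):
    return math.log(p / (1.0 - p))

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def approx_power_logistic(beta, sd_x, p_event, n, z=Z_975):
    """Approximate power for testing slope `beta` in logistic regression
    via the equivalent two-sample problem: two groups of size n/2 whose
    log-odds differ by 2*beta*sd_x, centred so the overall event
    probability stays near p_event."""
    delta = 2.0 * beta * sd_x
    p1 = inv_logit(logit(p_event) - delta / 2.0)
    p2 = inv_logit(logit(p_event) + delta / 2.0)
    m = n / 2.0
    # Delta-method variance of the estimated log-odds difference
    var = 1.0 / (m * p1 * (1.0 - p1)) + 1.0 / (m * p2 * (1.0 - p2))
    # Normal-approximation power for a two-sided test
    return 0.5 * (1.0 + math.erf((abs(delta) / math.sqrt(var) - z) / math.sqrt(2.0)))
```

As expected for any power formula, the result grows with the sample size, the slope, and the spread of the covariate.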
INTERA Environmental Consultants, Inc.
1979-01-01
The model developed under this contract is a modified version of the deep well disposal model developed by INTERCOMP Resource Development and Engineering, Inc., for the U.S. Geological Survey (A model for calculating effects of liquid waste disposal in deep saline aquifers). The model is a finite-difference numerical solution of the partial differential equations describing
Sensitivity analysis on a dose-calculation model for the terrestrial food-chain pathway
International Nuclear Information System (INIS)
Abdel-Aal, M.M.
1994-01-01
Parameter uncertainty and sensitivity analyses were applied to the U.S. Nuclear Regulatory Commission's (NRC) Regulatory Guide 1.109 (1977) models for calculating the ingestion dose via the terrestrial food-chain pathway, in order to assess the transport of chronically released, low-level effluents from light-water reactors. In the analysis, we generated Latin hypercube samples (LHS) and employed a constrained sampling scheme. The generation of these samples is based on information supplied to the LHS program for the variables or parameters. The sampled values are used to form vectors of variables that serve as inputs to the computer models for sensitivity and uncertainty analysis. The regulatory models consider the concentrations of radionuclides that are deposited on plant tissues or taken up by roots after initial deposition on soil. We also consider concentrations in milk and beef resulting from grazing on contaminated pasture or ingestion of contaminated feed by dairy and beef cattle. The radionuclides Sr-90 and Cs-137 were selected for evaluation. The most sensitive input parameters of the model were the ground-dispersion parameter, the release rates of radionuclides, and the soil-to-plant transfer coefficients of radionuclides. (Author)
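The constrained sampling scheme described above can be sketched in a few lines. This is an illustrative Python implementation of Latin hypercube sampling, not the LHS program used in the study; the mapping of a column to a dose-model input (e.g. a lognormal soil-to-plant transfer factor via the inverse CDF) is hypothetical.

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, seed=None):
    """Constrained stratified sampling: each variable's [0,1) range is cut
    into n_samples equal-probability strata, one draw is taken per stratum,
    and the strata are randomly paired across variables."""
    rng = np.random.default_rng(seed)
    # one point inside each stratum, per variable
    u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_vars))) / n_samples
    for j in range(n_vars):
        # shuffle strata independently per variable to form random pairings
        u[:, j] = rng.permutation(u[:, j])
    return u

# Example: 10 sample vectors for 3 uncertain model inputs; each column can
# then be mapped to a parameter distribution through its inverse CDF.
samples = latin_hypercube(10, 3, seed=42)
```

Each column contains exactly one value per stratum, which is what distinguishes this constrained scheme from simple random sampling.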
Franesqui, Miguel A; Yepes, Jorge; García-González, Cándida
2017-08-01
This article outlines the ultrasound data employed to calibrate in the laboratory an analytical model that permits the calculation of the depth of partial-depth surface-initiated cracks on bituminous pavements using this non-destructive technique. This initial calibration is required so that the model provides sufficient precision during practical application. The ultrasonic pulse transit times were measured on beam samples of different asphalt mixtures (semi-dense asphalt concrete AC-S; asphalt concrete for very thin layers BBTM; and porous asphalt PA). The cracks on the laboratory samples were simulated by means of notches of variable depths. With the data of ultrasound transmission time ratios, curve-fittings were carried out on the analytical model, thus determining the regression parameters and their statistical dispersion. The calibrated models obtained from laboratory datasets were subsequently applied to auscultate the evolution of the crack depth after microwaves exposure in the research article entitled "Top-down cracking self-healing of asphalt pavements with steel filler from industrial waste applying microwaves" (Franesqui et al., 2017) [1].
Surface complexation modeling calculation of Pb(II) adsorption onto the calcined diatomite
Ma, Shu-Cui; Zhang, Ji-Lin; Sun, De-Hui; Liu, Gui-Xia
2015-12-01
Removal of noxious heavy metal ions (e.g. Pb(II)) by surface adsorption onto minerals (e.g. diatomite) is an important means of controlling aqueous environmental pollution. It is therefore essential to understand the surface adsorption behavior and mechanism. In this work, the apparent surface complexation reaction equilibrium constants of Pb(II) on calcined diatomite and the distributions of Pb(II) surface species were investigated through modeling calculations based on a diffuse double layer model (DLM) with three amphoteric sites. Batch experiments were used to study the adsorption of Pb(II) onto the calcined diatomite as a function of pH (3.0-7.0) and ionic strength (0.05 and 0.1 mol L-1 NaCl) under ambient atmosphere. Adsorption of Pb(II) is well described by the Freundlich isotherm model. The apparent surface complexation equilibrium constants (log K) were obtained by fitting the batch experimental data using the PEST 13.0 and PHREEQC 3.1.2 codes, with good agreement between measured and predicted data. The distribution of Pb(II) surface species on the diatomite, calculated with the PHREEQC 3.1.2 program, indicates that impurity cations (e.g. Al3+, Fe3+) in the diatomite play a leading role in Pb(II) adsorption, and that surface complex formation together with additional electrostatic interaction is the main adsorption mechanism of Pb(II) on diatomite under weakly acidic conditions.
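As a small illustration of the Freundlich description mentioned above, the isotherm qe = Kf·Ce^(1/n) is commonly fitted by linearizing in log-log space. The data below are synthetic, generated from assumed constants, not the paper's batch measurements.

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L) and adsorbed amount qe (mg/g),
# generated from assumed Freundlich constants for demonstration only
Kf_true, n_true = 2.5, 1.8
Ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
qe = Kf_true * Ce ** (1.0 / n_true)

# Freundlich: qe = Kf * Ce**(1/n)  ->  ln qe = ln Kf + (1/n) ln Ce
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf_hat, n_hat = np.exp(intercept), 1.0 / slope
```

With real batch data the linearized fit gives the constants plus residuals, from which the quality of the Freundlich description can be judged.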
DWPF SB6 INITIAL CPC FLOWSHEET TESTING SB6-1 TO SB6-4L TESTS OF SB6-A AND SB6-B SIMULANTS
Energy Technology Data Exchange (ETDEWEB)
Lambert, D.; Pickenheim, B.; Best, D.
2009-09-09
The Defense Waste Processing Facility (DWPF) will transition from Sludge Batch 5 (SB5) processing to Sludge Batch 6 (SB6) processing in late fiscal year 2010. Tests were conducted using non-radioactive simulants of the expected SB6 composition to determine the impact of varying the acid stoichiometry during the Sludge Receipt and Adjustment Tank (SRAT) and Slurry Mix Evaporator (SME) processes. The work was conducted to meet the Technical Task Request (TTR) HLW/DWPF/TTR-2008-0043, Rev. 0 and followed the guidelines of a Task Technical and Quality Assurance Plan (TT&QAP). The flowsheet studies were performed to evaluate potential chemical processing issues, hydrogen generation rates, and process slurry rheological properties as a function of acid stoichiometry. These studies were conducted with the estimated SB6 composition at the time of the study. This composition assumed a blend of 101,085 kg of Tank 4 insoluble solids and 179,000 kg of Tank 12 insoluble solids. The current plans are to subject Tank 12 sludge to aluminum dissolution. Liquid Waste Operations assumed that 75% of the aluminum would be dissolved during this process. After dissolution and blending of Tank 4 sludge slurry, plans included washing the contents of Tank 51 to {approx}1M Na. After the completion of washing, the plan assumes that 40 inches of slurry would remain in Tank 40 for blending with the qualified SB6 material. There are several parameters that are noteworthy concerning SB6 sludge: (1) This is the second batch DWPF will be processing that contains sludge that has had a significant fraction of aluminum removed through aluminum dissolution; (2) The sludge is high in mercury, but the projected concentration is lower than SB5; (3) The sludge is high in noble metals, but the projected concentrations are lower than SB5; and (4) The sludge is high in U and Pu - components that are not added in sludge simulants. Six DWPF process simulations were completed in 4-L laboratory-scale equipment using
International Nuclear Information System (INIS)
Smith, D.L.; Guenther, P.T.
1983-11-01
We suggest a procedure for estimating uncertainties in neutron cross sections calculated with a nuclear model descriptive of a specific mass region. It applies standard error propagation techniques, using a model-parameter covariance matrix. Generally, available codes do not generate covariance information in conjunction with their fitting algorithms. Therefore, we resort to estimating a relative covariance matrix a posteriori from a statistical examination of the scatter of elemental parameter values about the regional representation. We numerically demonstrate our method by considering an optical-statistical model analysis of a body of total and elastic scattering data for the light fission-fragment mass region. In this example, strong uncertainty correlations emerge and they conspire to reduce estimated errors to some 50% of those obtained from a naive uncorrelated summation in quadrature. 37 references
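The effect described above, where strong correlations between parameters shrink the propagated uncertainty well below a naive quadrature sum, can be demonstrated with the standard propagation formula sigma^2 = s^T C s. The sensitivity values and covariance matrix below are illustrative, not values from the analysis.

```python
import numpy as np

# Sensitivities of a calculated cross section to two model parameters
# (illustrative values, not taken from the report)
s = np.array([0.8, 0.6])

# Relative parameter covariance matrix with strong negative correlation (-0.9)
sig = np.array([0.2, 0.3])
rho = -0.9
C = np.array([[sig[0] ** 2,           rho * sig[0] * sig[1]],
              [rho * sig[0] * sig[1], sig[1] ** 2          ]])

var_corr = s @ C @ s                       # full propagation: sigma^2 = s^T C s
var_uncorr = s @ np.diag(np.diag(C)) @ s   # naive quadrature sum (no correlations)
```

Here the correlated estimate is roughly a third of the uncorrelated one, qualitatively reproducing the ~50% reduction reported in the abstract.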
International Nuclear Information System (INIS)
Sada, Koichi; Michioka, Takenobu; Ichikawa, Yoichi
2002-01-01
Because effluent gas is sometimes released from low positions, viz., near the ground surface and around buildings, the effects of buildings within the site area are not negligible for gas diffusion predictions. For these reasons, the effects of buildings on gas diffusion are considered under a terrain-following calculation coordinate system in this report. Computational meshes at the ground surface are treated as the building, with wall-function techniques for turbulent quantities adapted in the flow calculations using a turbulence closure model. Reflection of released particles at building surfaces is taken into account in the diffusion calculation using a Lagrangian particle model. The obtained flow and diffusion results are compared with those of wind tunnel experiments around the building. Features observed in the wind tunnel, viz., the formation of a cavity region behind the building and gas diffusion to the ground surface behind the building, are also reproduced by the numerical calculation. (author)
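The reflection boundary condition mentioned above can be sketched with a minimal Lagrangian random-walk step. This assumes homogeneous turbulence and a flat reflecting surface, far simpler than the report's flow-coupled model; the step size, particle count, and release height are arbitrary illustrative values.

```python
import numpy as np

def step_with_reflection(z, sigma_w, dt, rng, z_ground=0.0):
    """One random-walk step of a Lagrangian particle model; particles
    crossing the surface at z_ground are specularly reflected back."""
    z = z + sigma_w * np.sqrt(dt) * rng.standard_normal(z.shape)
    return np.where(z < z_ground, 2.0 * z_ground - z, z)

rng = np.random.default_rng(0)
z = np.full(5000, 2.0)  # near-surface release height [m]
for _ in range(100):
    z = step_with_reflection(z, sigma_w=0.5, dt=1.0, rng=rng)
```

After many steps all particles remain above the surface, and the reflected plume spreads upward, which is the qualitative behaviour the boundary condition is meant to enforce.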
Barton, Jonathan S.; Hall, Dorothy K.; Sigurosson, Oddur; Williams, Richard S., Jr.; Smith, Laurence C.; Garvin, James B.
1999-01-01
Two ascending European Space Agency (ESA) Earth Resources Satellites (ERS)-1/-2 tandem-mode, synthetic aperture radar (SAR) pairs are used to calculate the surface elevation of Hofsjokull, an ice cap in central Iceland. The motion component of the interferometric phase is calculated using the 30 arc-second resolution USGS GTOPO30 global digital elevation product and one of the ERS tandem pairs. The topography is then derived by subtracting the motion component from the other tandem pair. In order to assess the accuracy of the resultant digital elevation model (DEM), a geodetic airborne laser-altimetry swath is compared with the elevations derived from the interferometry. The DEM is also compared with elevations derived from a digitized topographic map of the ice cap from the University of Iceland Science Institute. Results show that low temporal correlation is a significant problem for the application of interferometry to small, low-elevation ice caps, even over a one-day repeat interval, and especially at the higher elevations. Results also show that an uncompensated error in the phase, ramping from northwest to southeast, present after tying the DEM to ground-control points, has resulted in a systematic error across the DEM.
Model calculation of positron states in tungsten containing hydrogen and helium
International Nuclear Information System (INIS)
Troev, T; Nankov, N; Yoshiie, T; Popov, E
2010-01-01
Tungsten is a candidate material for the plasma-facing first wall of a fusion power plant. Understanding the behaviour of defects, tritium, and helium in plasma-facing materials (PFM) is an important issue for fusion reactors from the viewpoint of mechanical properties under neutron irradiation. Experiments with high-Z materials show that erosion of these materials under normal operating conditions is considerably lower than the plasma-induced erosion of low-Z materials like carbon or beryllium. Quantitative understanding of the experimental results for defects in tungsten needs a comprehensive theory of electron-positron interaction. The properties of defects in tungsten containing hydrogen or helium atoms have been investigated by quantum-mechanical model calculations of positron lifetimes. The electron wave functions were obtained in the local density approximation (LDA) to density functional theory (DFT). On the basis of the calculated results, the behaviour of vacancies, empty nano-voids, and nano-voids containing hydrogen and helium is discussed. It was established that hydrogen and helium in larger three-dimensional vacancy clusters in W change the annihilation characteristics dramatically. The hydrogen and helium atoms are trapped by lattice vacancies. These results provide physical insight into positron interactions with defects in tungsten and can be used for the prediction of hydrogen, helium-4, and tritium generation in the design of fusion reactors.
Modeling aluminum-air battery systems
Savinell, R. F.; Willis, M. S.
The performance of a complete aluminum-air battery system was studied with a flowsheet model built from unit models of each battery system component. A plug flow model for heat transfer was used to estimate the amount of heat transferred from the electrolyte to the air stream. The effect of shunt currents on battery performance was found to be insignificant. Using the flowsheet simulator to analyze a 100-cell battery system now under development demonstrated that load current, aluminate concentration, and electrolyte temperature are the dominant variables controlling system performance. System efficiency was found to decrease as both load current and aluminate concentration increase. The flowsheet model illustrates the interdependence of the separate units on overall system performance.
International Nuclear Information System (INIS)
Tashevski, Done; Dimitrovski, Mile
1994-01-01
A model has been developed for calculating the emission and immission of air pollutants from vehicles with internal combustion engines at crossroads in urban environments, in which a great number of exhaust pipes is substituted by one chimney at the centre of the crossroad. The whole calculation for the pollution sources mentioned above is, in fact, a calculation of the emission and immission of pollutants from point sources. (author)
Generic models of deep formation water calculated with PHREEQC using the "gebo"-database
Bozau, E.; van Berk, W.
2012-04-01
To identify processes during the use of formation waters for geothermal energy production, an extended hydrogeochemical thermodynamic database (named the "gebo" database) for the well-known and commonly used software PHREEQC has been developed by collecting and inserting data from the literature. The following solution master species are added to the database "pitzer.dat", which is provided with the code PHREEQC: Fe(+2), Fe(+3), S(-2), C(-4), Si, Zn, Pb, and Al. According to the solution master species, the necessary solution species and phases (solid phases and gases) are implemented. Furthermore, temperature and pressure adaptations of the mass action law constants, Pitzer parameters for the calculation of activity coefficients in waters of high ionic strength, and solubility equilibria among gaseous and aqueous species of CO2, methane, and hydrogen sulphide are implemented in the "gebo" database. Combined with the "gebo" database, the code PHREEQC can be used to test the behaviour of highly concentrated solutions (e.g. formation waters, brines). Chemical changes caused by temperature and pressure gradients, as well as by the exposure of the water to the atmosphere and to technical equipment, can be modelled. To check the plausibility of the additional and adapted data and parameters, experimental solubility data from the literature (e.g. for sulfate and carbonate minerals) are compared to modelled mineral solubilities at elevated levels of Total Dissolved Solids (TDS), temperature, and pressure. First results show good matches between modelled and experimental mineral solubilities for barite, celestite, anhydrite, and calcite in high-TDS waters, indicating the plausibility of the additional and adapted data and parameters. Furthermore, chemical parameters of geothermal wells in the North German Basin are used to test the "gebo" database. The analysed water composition (starting with the main cations and anions) is calculated by thermodynamic equilibrium reactions of pure water with the minerals found in
Cooling load calculation by the radiant time series method - effect of solar radiation models
Energy Technology Data Exchange (ETDEWEB)
Costa, Alexandre M.S. [Universidade Estadual de Maringa (UEM), PR (Brazil)], E-mail: amscosta@uem.br
2010-07-01
In this work, the effect of three different solar radiation models on the cooling load calculated by the radiant time series (RTS) method was analyzed numerically. The solar radiation models implemented were clear sky, isotropic sky, and anisotropic sky. The RTS method was proposed by ASHRAE (2001) to replace classical cooling load calculation methods such as TETD/TA. The method is based on computing the effect of space thermal energy storage on the instantaneous cooling load. The computation is carried out by splitting the heat gain components into convective and radiant parts. The radiant part is then transformed using time series whose coefficients are a function of the construction type and of the heat gain type (solar or non-solar). The transformed result is added to the convective part, giving the instantaneous cooling load. The method was applied to an example room. The location used was 23 degrees S and 51 degrees W, and the day was 21 January, a typical summer day in the southern hemisphere. The room was composed of two vertical walls with windows exposed to the outdoors, with azimuth angles facing west and east. The output of the different solar radiation models for the two walls, in terms of direct and diffuse components as well as heat gains, was investigated. The clear-sky model was the least conservative (gave the highest values) for the direct component of solar radiation, with the opposite trend for the diffuse component. For the heat gain, the clear-sky model gave the highest values, three times higher in the peak hours than the other models. The isotropic and anisotropic models predicted heat gains of similar magnitude. The same behavior was also verified for the cooling load. Room thermal inertia decreased the cooling load during the peak hours; conversely, higher thermal inertia gave greater cooling loads during the non-peak hours. The effect
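The core of the RTS step described above, splitting each hourly heat gain into convective and radiant parts and delaying the radiant part over 24 hours with radiant time factors, can be sketched as follows. The radiant time factors used here are an illustrative normalized decaying series, not tabulated ASHRAE values, and the radiant fraction is an assumed number.

```python
import numpy as np

def rts_cooling_load(hourly_gain, radiant_fraction, rtf):
    """Radiant time series step: split each hourly heat gain into convective
    and radiant parts, delay the radiant part with the 24 radiant time
    factors rtf (which sum to 1), and add the convective part back."""
    hourly_gain = np.asarray(hourly_gain, dtype=float)
    q_conv = (1.0 - radiant_fraction) * hourly_gain
    q_rad = radiant_fraction * hourly_gain
    load_rad = np.array([sum(rtf[j] * q_rad[(h - j) % 24] for j in range(24))
                         for h in range(24)])
    return q_conv + load_rad

# Illustrative decaying radiant time factors (not tabulated ASHRAE values)
rtf = np.array([0.8 ** k for k in range(24)])
rtf /= rtf.sum()
```

Because the factors sum to one, total energy is conserved over the day, while a pulse of heat gain produces a reduced and delayed cooling-load peak, which is exactly the thermal-storage effect the abstract describes.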
Energy Technology Data Exchange (ETDEWEB)
Chang, Jong Hwa; Lee, Jeong Yeon; Lee, Young Ouk; Sukhovitski, Efrem Sh. [Korea Atomic Energy Research Institute, Taejeon (Korea)
2000-01-01
The programs SHEMMAN and OPTMAN (Version 6) have been developed for the determination of nuclear Hamiltonian parameters and for optical model calculations, respectively. The optical model calculations by OPTMAN, with coupling schemes built on wave functions of the non-axial soft rotator, are self-consistent, since the parameters of the nuclear Hamiltonian are determined by adjusting the energies of collective levels to experimental values with SHEMMAN prior to the optical model calculation. The programs have been installed at the Nuclear Data Evaluation Laboratory of KAERI. This report is intended as a brief manual for these codes. 43 refs., 9 figs., 1 tab. (Author)
International Nuclear Information System (INIS)
Johnson, J.O.; Miller, L.F.; Kam, F.B.K.
1981-05-01
A neutronics model for the Oak Ridge National Laboratory Bulk Shielding Reactor (ORNL-BSR) was developed and verified by experimental measurements. A cross-section library was generated from the 218-group Master Library using the AMPX code system. A series of one-, two-, and three-dimensional neutronics calculations was performed utilizing both transport and diffusion theory. A spectral comparison was made using the 58Ni(n,p) reaction. The comparison between the calculational model and other experimental measurements showed agreement within 10%, and the model was therefore determined to be adequate for calculating the neutron fluence for future irradiation experiments in the ORNL-BSR
An Exploration of Wind Stress Calculation Techniques in Hurricane Storm Surge Modeling
Directory of Open Access Journals (Sweden)
Kyra M. Bryant
2016-09-01
As hurricanes continue to threaten coastal communities, accurate storm surge forecasting remains a global priority. Achieving a reliable storm surge prediction necessitates accurate hurricane intensity and wind field information. The wind field must be converted to wind stress, which represents the air-sea momentum flux component required in storm surge and other oceanic models. This conversion requires a multiplicative drag coefficient for the air density and wind speed to represent the air-sea momentum exchange at a given location. Air density is a known parameter and wind speed is a forecasted variable, whereas the drag coefficient is calculated using an empirical correlation. The correlation’s accuracy has brewed a controversy of its own for more than half a century. This review paper examines the lineage of drag coefficient correlations and their acceptance among scientists.
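The conversion from wind speed to wind stress described above can be illustrated with one representative correlation. The piecewise form below follows the well-known Large and Pond (1981) style of neutral drag coefficient, chosen here only as one example of the many correlations the review traces; it is not the paper's recommended formula.

```python
def drag_coefficient(u10):
    """Piecewise neutral drag coefficient in the style of Large & Pond
    (1981), one of many empirical correlations in the literature."""
    if u10 < 11.0:
        return 1.2e-3
    return (0.49 + 0.065 * u10) * 1.0e-3

def wind_stress(u10, rho_air=1.225):
    """Air-sea momentum flux tau = rho_air * Cd * U10^2, in N/m^2."""
    return rho_air * drag_coefficient(u10) * u10 ** 2
```

Because the drag coefficient itself grows with wind speed above about 11 m/s, the stress rises faster than quadratically in strong winds, which is why the choice of correlation matters so much for surge forecasts.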
Optical model calculation for the unresolved/resolved resonance region of Fe-56
Energy Technology Data Exchange (ETDEWEB)
Kawano, Toshihiko [Kyushu Univ., Fukuoka (Japan); Froehner, F.H.
1997-03-01
We have studied optical model fits to total neutron cross sections of structural materials using the accurate data base for {sup 56}Fe existing in the resolved and unresolved resonance region. Averages over resolved resonances were calculated with Lorentzian weighting in Reich-Moore (reduced R matrix) approximation. Starting from the best available optical potentials we found that adjustment of the real and imaginary well depths does not work satisfactorily with the conventional weak linear energy dependence of the well depths. If, however, the linear dependences are modified towards low energies, the average total cross sections can be fitted quite well, from the resolved resonance region up to 20 MeV and higher. (author)
Monte Carlo based geometrical model for efficiency calculation of an n-type HPGe detector
Energy Technology Data Exchange (ETDEWEB)
Padilla Cabal, Fatima, E-mail: fpadilla@instec.c [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba); Lopez-Pino, Neivy; Luis Bernal-Castillo, Jose; Martinez-Palenzuela, Yisel; Aguilar-Mena, Jimmy; D' Alessandro, Katia; Arbelo, Yuniesky; Corrales, Yasser; Diaz, Oscar [Instituto Superior de Tecnologias y Ciencias Aplicadas, ' Quinta de los Molinos' Ave. Salvador Allende, esq. Luaces, Plaza de la Revolucion, Ciudad de la Habana, CP 10400 (Cuba)
2010-12-15
A procedure to optimize the geometrical model of an n-type detector is described. Sixteen lines from seven point sources ({sup 241}Am, {sup 133}Ba, {sup 22}Na, {sup 60}Co, {sup 57}Co, {sup 137}Cs and {sup 152}Eu) placed at three different source-to-detector distances (10, 20 and 30 cm) were used to calibrate a low-background gamma spectrometer between 26 and 1408 keV. Direct Monte Carlo techniques using the MCNPX 2.6 and GEANT4 9.2 codes, together with a semi-empirical procedure, were used to obtain theoretical efficiency curves. Since discrepancies were found between experimental and calculated data using the manufacturer's detector parameters, a detailed study of the crystal dimensions and the geometrical configuration was carried out. After the parameters were optimized, the mean relative deviation from the experimental data decreased from 18% to 4%.
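A common way to turn calibration lines such as those listed above into a continuous efficiency curve is a low-order polynomial fit of ln(efficiency) versus ln(energy). The calibration points below are hypothetical values with a typical n-type HPGe shape, not the paper's measured efficiencies.

```python
import numpy as np

# Hypothetical calibration points: energy [keV] vs. full-energy-peak efficiency
E = np.array([59.5, 121.8, 344.3, 661.7, 1173.2, 1408.0])
eff = np.array([0.060, 0.072, 0.040, 0.024, 0.015, 0.013])

# Common semi-empirical form: ln(eff) as a low-order polynomial in ln(E)
coeffs = np.polyfit(np.log(E), np.log(eff), deg=3)

def efficiency(energy_kev):
    """Interpolated full-energy-peak efficiency from the log-log fit."""
    return np.exp(np.polyval(coeffs, np.log(energy_kev)))
```

The fitted curve can then be compared point by point with Monte Carlo efficiencies, which is how discrepancies like the 18% mean deviation quoted above are quantified.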
Ben Mosbah, Abdallah
In order to improve the quality of wind tunnel tests and of the tools used to perform aerodynamic tests on aircraft wings in the wind tunnel, new methodologies were developed and tested on rigid and flexible wing models. The flexible wing concept consists of replacing a portion (lower and/or upper) of the skin with a flexible portion whose shape can be changed using an actuation system installed inside the wing. The main purpose of this concept is to improve the aerodynamic performance of the aircraft, and especially to reduce the fuel consumption of the airplane. Numerical and experimental analyses were conducted to develop and test the methodologies proposed in this thesis. To control the flow inside the test sections of the Price-Paidoussis wind tunnel of LARCASE, numerical and experimental analyses were performed. Computational fluid dynamics calculations were made in order to obtain a database used to develop a new hybrid methodology for wind tunnel calibration. This approach allows control of the flow in the test section of the Price-Paidoussis wind tunnel. For the fast determination of aerodynamic parameters, new hybrid methodologies were proposed. These methodologies were used to control flight parameters through calculation of the drag, lift, and pitching moment coefficients and of the pressure distribution around an airfoil. These aerodynamic coefficients were calculated from known airflow conditions such as the angle of attack and the Mach and Reynolds numbers. In order to modify the shape of the wing skin, electric actuators were installed inside the wing to obtain the desired shape. These deformations provide optimal profiles for different flight conditions in order to reduce the fuel consumption. A controller based on neural networks was implemented to obtain the desired actuator displacements. A metaheuristic algorithm was used in hybridization with neural networks, and support vector machine approaches and their
Electromagnetic field modeling and ion optics calculations for a continuous-flow AMS system
Han, B. X.; von Reden, K. F.; Roberts, M. L.; Schneider, R. J.; Hayes, J. M.; Jenkins, W. J.
2007-06-01
A continuous-flow 14C AMS (CFAMS) system is under construction at the NOSAMS facility. This system is based on a NEC Model 1.5SDH-1 0.5 MV Pelletron accelerator and will utilize a combination of a microwave ion source (MIS) and a charge exchange canal (CXC) to produce negative carbon ions from a continuously flowing stream of CO2 gas. For high-efficiency transmission of the large-emittance, large-energy-spread beam from the ion source unit, a large-acceptance, energy-achromatic injector consisting of a 45° electrostatic spherical analyzer (ESA) and a 90° double-focusing magnet has been designed. The 45° ESA is rotatable to accommodate a 134-sample MC-SNICS as a second ion source. The high-energy achromat (90° double-focusing magnet and 90° ESA) has also been customized for large acceptance. Electromagnetic field modeling and ion optics calculations of the beamline were done with Infolytica MagNet, ElecNet, and Trajectory Evaluator. PBGUNS and SIMION were used for the modeling of the ion source unit.
Mathematical model of whole-process calculation for bottom-blowing copper smelting
Li, Ming-zhou; Zhou, Jie-min; Tong, Chang-ren; Zhang, Wen-hai; Li, He-song
2017-11-01
The distribution law of materials in smelting products is key to cost accounting and contaminant control. However, the distribution law is difficult to determine quickly and accurately by sampling and analysis alone. Mathematical models for material and heat balance in bottom-blowing smelting, converting, anode furnace refining, and electrolytic refining were established based on the principles of material (element) conservation, energy conservation, and control index constraints in copper bottom-blowing smelting. A simulation of the entire bottom-blowing copper smelting process was implemented on the self-developed MetCal software platform, and a whole-process simulation for an enterprise in China was then conducted. The results indicated that the quantity and composition of unknown materials, as well as heat balance information, can be quickly calculated using the model. Comparison with production data revealed that the model can basically reflect the distribution law of the materials in bottom-blowing copper smelting. This finding provides theoretical guidance for mastering the performance of the entire process.
An Approach for Calculating Land Valuation by Using Inspire Data Models
Aydinoglu, A. C.; Bovkir, R.
2017-11-01
Land valuation is a highly important concept for societies, and governments have always placed emphasis on the process, especially for taxation, expropriation, market capitalization, and economic activity purposes. To achieve an interoperable and standardised land valuation, INSPIRE data models can be very practical and effective. If the data used in the land valuation process are produced in compliance with INSPIRE specifications, a reliable and effective land valuation process can be performed. In this study, the possibility of performing the land valuation process using the INSPIRE data models was analysed, and with the help of Geographic Information Systems (GIS) a case study in Pendik was implemented. For this purpose, data analysis and gathering were performed first. Afterwards, different data structures were transformed according to the INSPIRE data model requirements. For each data set, the necessary ETL (Extract-Transform-Load) tools were produced and all data were transformed according to the target data requirements. With the availability and practicability of the spatial analysis tools of GIS software, land valuation calculations were performed for the study area.
Comparison of Model Calculations of Biological Damage from Exposure to Heavy Ions with Measurements
Kim, Myung-Hee Y.; Hada, Megumi; Cucinotta, Francis A.; Wu, Honglu
2014-01-01
The space environment consists of a varying field of radiation particles including high-energy ions, with spacecraft shielding material providing the major protection to astronauts from harmful exposure. Unlike low-LET gamma or X rays, the presence of shielding does not always reduce the radiation risks for energetic charged-particle exposure. Dose delivered by the charged particle increases sharply at the Bragg peak. However, the Bragg curve does not necessarily represent the biological damage along the particle path, since biological effects are influenced by the track structures of both primary and secondary particles. Therefore, the "biological Bragg curve" is dependent on the energy and the type of the primary particle and may vary for different biological end points. Measurements of the induction of micronuclei (MN) have been made across the Bragg curve in human fibroblasts exposed to energetic silicon and iron ions in vitro at two different energies, 300 MeV/nucleon and 1 GeV/nucleon. Although the data did not reveal an increased yield of MN at the location of the Bragg peak, increased inhibition of cell progression, which is related to cell death, was found at the Bragg peak location. These results are compared to calculations of biological damage using a stochastic Monte-Carlo track structure model, the Galactic Cosmic Ray Event-based Risk Model (GERM) code (Cucinotta et al., 2011). The GERM code estimates the basic physical properties along the passage of heavy ions in tissue and shielding materials, by which the experimental set-up can be interpreted. The code can also be used to describe the biophysical events of interest in radiobiology, cancer therapy, and space exploration. The calculation has shown that the severely damaged cells at the Bragg peak are more likely to go through reproductive death, the so-called "overkill".
Energy Technology Data Exchange (ETDEWEB)
Frost, G. J.; Fried, Alan; Lee, Y.- N.; Wert, B.; Henry, B.; Drummond, J. R.; Evans, M. J.; Fehsenfeld, Fred C.; Goldan, P. D.; Holloway, J. S.; Hubler, Gerhard F.; Jakoubek, R.; Jobson, B Tom T.; Knapp, K.; Kuster, W. C.; Roberts, J.; Rudolph, Jochen; Ryerson, T. B.; Stohl, A.; Stroud, C.; Sueper, D. T.; Trainer, Michael; Williams, J.
2002-04-18
Formaldehyde (CH2O) measurements from two independent instruments are compared with photochemical box model calculations. The measurements were made on the National Oceanic and Atmospheric Administration P-3 aircraft as part of the 1997 North Atlantic Regional Experiment (NARE 97). The data set considered here consists of air masses sampled between 0 and 8 km over the North Atlantic Ocean which do not show recent influence from emissions or transport. These air masses therefore should be in photochemical steady state with respect to CH2O when constrained by the other P-3 measurements, and methane oxidation was expected to be the predominant source of CH2O in these air masses. For this data set both instruments measured identical CH2O concentrations to within 40 parts per trillion by volume (pptv) on average over the 0–800 pptv range, although differences larger than the combined 2σ total uncertainty estimates were observed between the two instruments in 11% of the data. Both instruments produced higher CH2O concentrations than the model in more than 90% of this data set, with a median measured-modeled [CH2O] difference of 0.13 or 0.18 ppbv (depending on the instrument), or about a factor of 2. Such large differences cannot be accounted for by varying model input parameters within their respective uncertainty ranges. After examining the possible reasons for the model-measurement discrepancy, we conclude that there are probably one or more additional unknown sources of CH2O in the North Atlantic troposphere.
Directory of Open Access Journals (Sweden)
Linhong Wang
2013-01-01
As an important component of the urban adaptive traffic control system, the subarea partition algorithm divides the road network into small subareas and then determines the optimal signal control mode for each signalized intersection. The correlation model is the core of the subarea partition algorithm, because it quantifies the correlation degree of adjacent signalized intersections and decides whether these intersections can be grouped into one subarea. In most cases, there are more than two intersections in one subarea; however, current research only focuses on the correlation model for two adjacent intersections. The objective of this study is to develop a model which can calculate the correlation degree of multiple intersections adaptively. The cycle lengths, link lengths, number of intersections, and path flow between upstream and downstream coordinated phases were selected as the contributing factors of the correlation model. Their joint impacts on the performance of the coordinated control mode relative to the isolated control mode were further studied using numerical experiments. The paper then proposes a correlation index (CI) as an alternative to relative performance. The relationship between CI and the four contributing factors was established in order to predict the correlation, which determines whether adjacent intersections can be partitioned into one subarea. A value of 0 was set as the threshold of CI: if CI is larger than 0, multiple intersections can be partitioned into one subarea; otherwise, they should be separated. Finally, case studies were conducted in a real-life signalized network to evaluate the performance of the model. The results show that the CI simulates the relative performance well and could be a reliable index for subarea partition.
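The CI threshold rule can be sketched as a greedy pairwise partition. This is a simplification of the paper's adaptive multi-intersection model; the intersection names and CI values below are hypothetical, and real CI values would come from the fitted relationship with the four contributing factors.

```python
# Sketch: applying the CI threshold of 0 to group consecutive signalized
# intersections into subareas. Greedy pairwise grouping is a simplification
# of the paper's adaptive model; names and CI values are hypothetical.

def partition_by_ci(intersections, ci_between):
    """ci_between[i] is the CI between intersections[i] and intersections[i+1]."""
    subareas = [[intersections[0]]]
    for node, ci in zip(intersections[1:], ci_between):
        if ci > 0:
            subareas[-1].append(node)   # coordinate with the previous group
        else:
            subareas.append([node])     # CI <= 0: start a new subarea
    return subareas

nodes = ["I1", "I2", "I3", "I4", "I5"]
ci = [0.8, 0.3, -0.2, 0.5]
print(partition_by_ci(nodes, ci))  # → [['I1', 'I2', 'I3'], ['I4', 'I5']]
```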
Energy Technology Data Exchange (ETDEWEB)
Drover, Damion Ryan
2011-12-01
One of the largest exports in the Southeast U.S. is forest products. Interest in biofuels using forest biomass has increased recently, leading to more research into better forest management BMPs. The USDA Forest Service, along with the Oak Ridge National Laboratory, University of Georgia and Oregon State University are researching the impacts of intensive forest management for biofuels on water quality and quantity at the Savannah River Site in South Carolina. Surface runoff of saturated areas, transporting excess nutrients and contaminants, is a potential water quality issue under investigation. Detailed maps of variable source areas and soil characteristics would therefore be helpful prior to treatment. The availability of remotely sensed and computed digital elevation models (DEMs) and spatial analysis tools make it easy to calculate terrain attributes. These terrain attributes can be used in models to predict saturated areas or other attributes in the landscape. With laser altimetry, an area can be flown to produce very high resolution data, and the resulting data can be resampled into any resolution of DEM desired. Additionally, there exist many maps that are in various resolutions of DEM, such as those acquired from the U.S. Geological Survey. Problems arise when using maps derived from different resolution DEMs. For example, saturated areas can be under or overestimated depending on the resolution used. The purpose of this study was to examine the effects of DEM resolution on the calculation of topographic wetness indices used to predict variable source areas of saturation, and to find the best resolutions to produce prediction maps of soil attributes like nitrogen, carbon, bulk density and soil texture for low-relief, humid-temperate forested hillslopes. Topographic wetness indices were calculated based on the derived terrain attributes, slope and specific catchment area, from five different DEM resolutions. The DEMs were resampled from LiDAR, which is a
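The topographic wetness index mentioned above is commonly computed as TWI = ln(a / tan β), where a is the specific catchment area and β the local slope, both derived from the DEM. A minimal sketch under that standard definition follows; the cell values are hypothetical.

```python
import math

def topographic_wetness_index(specific_catchment_area, slope_rad):
    """TWI = ln(a / tan(beta)), with a the specific catchment area (m^2/m)
    and beta the local slope angle. A small floor on tan(beta) keeps
    flat cells from dividing by zero."""
    tan_b = max(math.tan(slope_rad), 1e-6)
    return math.log(specific_catchment_area / tan_b)

# Hypothetical grid cells (specific catchment area, slope in degrees),
# ordered from a steep ridge cell to a flat valley-bottom cell:
cells = [(5.0, 15.0), (50.0, 5.0), (500.0, 0.5)]
twi_values = [topographic_wetness_index(a, math.radians(s)) for a, s in cells]
print([round(t, 2) for t in twi_values])  # higher TWI = more likely saturated
```

Because a and tan β are both resolution-dependent, recomputing this index on DEMs resampled to different cell sizes shifts the predicted saturated areas, which is exactly the sensitivity the study examines.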
Energy Technology Data Exchange (ETDEWEB)
Pareizs, J. M.; Crawford, C. L.
2013-04-26
The current Waste Solidification Engineering (WSE) practice is to prepare sludge batches in Tank 51 by transferring sludge from other tanks to Tank 51. Tank 51 sludge is washed and transferred to Tank 40, the current Defense Waste Processing Facility (DWPF) feed tank. Prior to transfer of Tank 51 to Tank 40, the Savannah River National Laboratory (SRNL) typically simulates the Tank Farm and DWPF processes using a Tank 51 sample (referred to as the qualification sample). WSE requested the SRNL to perform characterization on a Sludge Batch 8 (SB8) sample and demonstrate the DWPF flowsheet in the SRNL shielded cells for SB8 as the final qualification process required prior to SB8 transfer from Tank 51 to Tank 40. A 3-L sample from Tank 51 (the SB8 qualification sample; Tank Farm sample HTF-51-12-80) was received by SRNL on September 20, 2012. The as-received sample was characterized prior to being washed. The washed material was further characterized and used as the material for the DWPF process simulation including a Sludge Receipt and Adjustment Tank (SRAT) cycle, a Slurry Mix Evaporator (SME) cycle, and glass fabrication and chemical durability measurements.
Calculation of mass discharge of the Greenland ice sheet in the Earth System Model
Directory of Open Access Journals (Sweden)
O. O. Rybak
2016-01-01
Mass discharge calculation is a challenging task for ice sheet modeling aimed at evaluating the ice sheets' contribution to global sea level rise during past interglacials, as well as one of the consequences of future climate change. In Greenland, ablation is the major source of fresh water runoff; it is approximately equal to the dynamical discharge (iceberg calving). Its share may have been still larger during past interglacials, when the margins of the GrIS retreated inland. Refreezing of the melted water and its retention are two poorly known processes that act as counterparts to melting and thus influence the runoff. Interaction of ice sheets and climate is driven by energy and mass exchange processes and is complicated by numerous feedbacks. To study this complex of processes, coupling of an ice sheet model and a climate model (i.e., models of the atmosphere and the ocean) in one model is required; such a model is often called an Earth System Model (ESM). Formalization of the processes of interaction between the ice sheets and climate within the ESM requires special techniques to deal with the dramatic differences in spatial and temporal variability scales among the three ESM blocks. In this paper, we focus on the method of coupling a Greenland ice sheet model (GrISM) with the climate model INMCM developed at the Institute of Numerical Mathematics of the Russian Academy of Sciences. Our coupling approach consists of applying a special buffer model, which serves as an interface between GrISM and INMCM. A simple energy and water exchange model (EWBM-G) allows a realistic description of surface air temperature and precipitation fields adjusted to the relief of the GrIS surface. In a series of diagnostic numerical experiments with the present-day GrIS geometry and the modeled climate, we studied the sensitivity of the modeled surface mass balance and runoff to the key EWBM-G parameters and compared
Rüstemoğlu, Sevinç; Barutçu, Burak; Menteş, Ş. Sibel
2010-05-01
The continuous use of fossil fuels as the primary energy source leads to CO emissions and weakens the economy of a country affected by large fluctuations in the unit prices of energy sources. In recent years, developments in the wind energy sector and the supporting renewable energy policies of many countries have encouraged wind farm owners, and firms expecting to become owners, to consider and invest in renewable sources. In this study, the annual production of the 1.8 kW and 30 kW turbines available to the Energy Institute of Istanbul Technical University is calculated with the WAsP and WindPro flow models, and the wind characteristics of the area are analysed. The meteorological data used in the calculation cover the period between 2 March 2000 and 31 May 2004 and were taken from the meteorological mast ( ) in Istanbul Technical University's campus area. The measurement data were taken at heights of 2 m and 10 m with hourly means. The topography, roughness classes, and shelter effects are defined in the models to allow accurate extrapolation to the turbine sites. The region is only about 3.5 km from the Istanbul Bosphorus, but as can be seen from the WAsP and WindPro model results, the Bosphorus effect is interrupted by new buildings and tall forestry. The shelter effect of these high buildings has a great influence on the wind flow and decreases the high wind energy potential produced by the Bosphorus effect. This study, which determines the wind characteristics and expected annual production, is important for this project site and therefore gains importance before the construction of the wind energy system. However, once the system is operating, developing energy management skills and forecasting the wind speed and direction will become important. At this point, three statistical models, the Kalman Filter, the AR model, and neural networks, are used to determine the success of each method for correct
Analytical calculation of the vibrator-rotor transition in the sdg interacting boson model
International Nuclear Information System (INIS)
Wang Baolin
1992-01-01
Analytical calculation of the vibrator-rotor transition is performed by utilizing the 1/N expansion technique in the sdg IBM. The phase transition of the low-lying energy spectrum and the E2 transitions for the Sm isotopes are calculated.
Sefzik, Travis H; Turco, Domenic; Iuliucci, Robbie J; Facelli, Julio C
2005-02-17
The NMR chemical shift, a six-parameter tensor property, is highly sensitive to the position of the atoms in a molecule. To extract structural parameters from chemical shifts, one must rely on theoretical models. Therefore, a high quality group of shift tensors that serve as benchmarks to test the validity of these models is warranted and necessary to highlight existing computational limitations. Here, a set of 102 13C chemical-shift tensors measured in single crystals, from a series of aromatic and saccharide molecules for which neutron diffraction data are available, is used to survey models based on the density functional (DFT) and Hartree-Fock (HF) theories. The quality of the models is assessed by their least-squares linear regression parameters. It is observed that in general DFT outperforms restricted HF theory. For instance, Becke's three-parameter exchange method and mpw1pw91 generally provide the best predicted shieldings for this group of tensors. However, this performance is not universal, as none of the DFT functionals can predict the saccharide tensors better than HF theory. Both the orientations of the principal axis system and the magnitude of the shielding were compared using the chemical-shift distance to evaluate the quality of the calculated individual tensor components in units of ppm. Systematic shortcomings in the prediction of the principal components were observed, but the theory predicts the corresponding isotropic value more accurately. This is because these systematic errors cancel, thereby indicating that the theoretical assessment of shielding predictions based on the isotropic shift should be avoided.
What Can we learn from Raman spectroscopy and model calculations on room temperature ionic liquids?
DEFF Research Database (Denmark)
Berg, Rolf W.
2006-01-01
calculation software. This software allows everyone - on an ordinary PC after some trial and error - to calculate the most stable equilibrium geometries of guessed chemical structures and to depict their normal vibrations and the fundamental Raman and IR spectra. These new calculations form the basis...
Energy Technology Data Exchange (ETDEWEB)
Daniel, W. E.; Hansen, E. K.; Shehee, T. C.
2012-10-30
This report includes the literature review, hydrogen off-gas calculations, and hydrogen generation tests to determine that H-Canyon can safely dissolve the Sodium Reactor Experiment (SRE; thorium fuel), Ford Nuclear Reactor (FNR; aluminum alloy fuel), and Denmark Reactor (DR-3; silicide fuel, aluminum alloy fuel, and aluminum oxide fuel) assemblies in the L-Bundles with respect to the hydrogen levels in the projected peak off-gas rates. This is provided that the number of L-Bundles charged to the dissolver is controlled. Examination of SRE dissolution for potential issues has aided in predicting the optimal batching scenario. The calculations detailed in this report demonstrate that the FNR, SRE, and DR-3 used nuclear fuel (UNF) are bounded by MURR UNF and may be charged using the controls outlined for MURR dissolution in a prior report.
International Nuclear Information System (INIS)
Efremenko, Dmitry S.; Loyola, Diego G.; Spurr, Robert J.D.; Doicu, Adrian
2014-01-01
In the independent pixel approximation (IPA), radiative transfer computations involving cloudy scenes require two separate calls to the radiative transfer model (RTM), one call for a clear sky scenario, the other for an atmosphere containing clouds. In this paper, clouds are considered as an optically homogeneous layer. We present two novel methods for RTM performance enhancement with particular application to trace gas retrievals under cloudy conditions. Both methods are based on reusing results from clear-sky RTM calculations to speed up corresponding calculations for the cloud-filled scenario. The first approach is numerically exact, and has been applied to the discrete-ordinate with matrix exponential (DOME) RTM. Results from the original clear sky computation can be saved in the memory and reused for the non-cloudy layers in the second computation. In addition, for the whole-atmosphere boundary-value approach to the determination of the intensity field, we can exploit a ’telescoping technique’ to reduce the dimensionality (and hence the computational effort for the solution) of the boundary value problem in the absence of Rayleigh scattering contributions for higher azimuthal components of the radiation field. The second approach is (for the cloudy scenario) to generate a spectral correction applied to the radiation field from a fast two-stream RTM. This correction is based on the use of principal-component analysis (PCA) applied to a given window of spectral optical property data, in order to exploit redundancy in the data and confine the number of full-stream multiple scatter computations to the first few EOFs (Empirical Orthogonal Functions) arising from the PCA. This method has been applied to the LIDORT RTM; although the method involves some approximation, it provides accuracy better than 0.2%, and a speed-up factor of approximately 2 compared with two calls of RTM. -- Highlights: • Reusing results from clear-sky computations for a model with a
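The PCA-based approach can be sketched with synthetic data. Nothing below comes from the LIDORT implementation; it is a minimal numpy illustration, under the assumption of nearly rank-deficient spectral optical-property data, of why a handful of EOFs suffice to represent a whole spectral window.

```python
import numpy as np

# Sketch of the PCA idea behind the second speed-up method: spectral optical
# properties across a window are highly redundant, so full multi-stream RT
# runs are needed only for the first few EOFs. The synthetic rank-3 "optical
# property" data below stand in for real layer optical depths and albedos.
rng = np.random.default_rng(0)
n_wl, n_prop = 200, 10                          # wavelengths x properties
modes = rng.normal(size=(3, n_prop))            # 3 underlying spectral modes
weights = rng.normal(size=(n_wl, 3))
data = weights @ modes + 1e-3 * rng.normal(size=(n_wl, n_prop))

# PCA via SVD of the mean-centered data matrix
mean = data.mean(axis=0)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)

n_eof = 3                                       # keep only the leading EOFs
approx = mean + (u[:, :n_eof] * s[:n_eof]) @ vt[:n_eof]

# A tiny reconstruction error means the expensive multiple-scatter
# computations can be confined to these few principal modes.
rel_err = np.abs(approx - data).max() / np.abs(data).max()
print(rel_err < 0.01)
```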
Water vapor measurements at ALOMAR over a solar cycle compared with model calculations by LIMA
Hartogh, P.; Sonnemann, G. R.; Grygalashvyly, M.; Song, Li; Berger, U.; Lübken, F.-J.
2010-01-01
Microwave water vapor measurements between 40 and 80 km altitude over a solar cycle (1996-2006) were carried out in high latitudes at the Arctic Lidar Observatory for Middle Atmosphere Research (ALOMAR) (69.29°N, 16.03°E), Norway. Some smaller gaps and three interruptions of monitoring in the winters 1996/1997 and 2005/2006 and from spring 2001 to spring 2002 occurred during this period. The observations show a distinct year-to-year variability not directly related to solar Lyman-α radiation. In winter the water vapor mixing ratios in the upper domain were anticorrelated to the solar activity, whereas in summer, minima occurred in the years after the solar maximum in 2000/2001. In winter, sudden stratospheric warmings (SSWs) modulated the water vapor mixing ratios. Within the stratopause region a middle atmospheric water vapor maximum was observed, which results from the methane oxidation and is a regular feature there. The altitude of the maximum increased by approximately 5 km as summer approached. The largest mixing ratios were monitored in autumn. During the summer season a secondary water vapor maximum also occurred above 65 km, most pronounced in late summer. The solar Lyman-α radiation impacts the water vapor mixing ratio particularly in winter above 65 km. In summer the correlation is positive below 70 km. The correlation is also positive in the lower mesosphere/stratopause region in winter due to the action of sudden stratospheric warmings, which occur more frequently under conditions of high solar activity and enhance the humidity. A strong day-to-day variability connected with planetary wave activity was found throughout the entire year. Model calculations by means of the Leibniz-Institute Middle Atmosphere model (LIMA) reflect the essential patterns of the water vapor variation, but the results also show differences from the observations, indicating that exchange processes between the troposphere and stratosphere not modeled by LIMA could have
Zhu, Aibin; Yang, Yulei; Chen, Wei; Yuan, Xiaoyang
2014-05-01
The traditional model for calculating the performance parameters of a fixed-pad journal bearing leads to a heavy workload and to complicated, changeable formulae, as it requires deriving different geometric formulae for different bearing types such as circular journal bearings, dislocated bearings, and elliptic bearings. When different pad preload ratios for non-standard bearings are considered, the traditional model not only becomes more complicated but also loses the scalability and reusability of the calculation programs. Because the traditional model becomes complex when dealing with various fixed-pad journal bearings, a unified coordinate system model for performance calculation of fixed-pad journal bearings is presented in this paper. A unified coordinate system with the bearing center at the origin is established, and the eccentricity ratio and attitude angle of the axis relative to each pad are calculated from the coordinates of the journal center and each pad center. The geometric description of a fixed-pad journal bearing is unified in this model, which can be used both for various standard bearings and for non-standard bearings with different pad preload ratios. The validity of the model is verified with an elliptical bearing, and the performance of a non-standard four-leaf bearing with different pad preload ratios is calculated based on it. The calculation results show that increasing the preload ratio of pad 1 while keeping that of the other three pads constant improves the bearing capacity, stiffness, and damping coefficients. This research presents a unified coordinate system model that unifies the performance calculation of fixed-pad journal bearings and studies a non-standard four-leaf bearing with different pad preload ratios; the conclusions provide a new method for the performance calculation of fixed-pad journal bearings.
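The core geometric step of such a unified coordinate system can be sketched as follows. The function name, sign conventions, and the two-lobe bearing numbers are illustrative assumptions, not the paper's code; the sketch only shows how a pad-relative eccentricity ratio and attitude angle fall out of bearing-centered coordinates of the journal and pad centers.

```python
import math

# Sketch of the unified bearing-centered coordinate idea: the journal's
# position relative to each pad follows from plain coordinate differences.
# Function name, sign conventions, and all numbers are illustrative.

def pad_relative_position(journal_center, pad_center, pad_clearance):
    """Eccentricity ratio and attitude angle of the journal relative to a pad,
    all coordinates given in a frame with the bearing center at the origin."""
    ex = journal_center[0] - pad_center[0]
    ey = journal_center[1] - pad_center[1]
    e = math.hypot(ex, ey)               # eccentricity relative to this pad
    return e / pad_clearance, math.atan2(ey, ex)

# Hypothetical elliptic (two-lobe) bearing: pad centers displaced vertically
# by the preload; radial clearance c = 100 um.
c = 100e-6
journal = (20e-6, -10e-6)
pads = {"lower": (0.0, -50e-6), "upper": (0.0, 50e-6)}
results = {name: pad_relative_position(journal, ctr, c) for name, ctr in pads.items()}
for name, (eps, att) in results.items():
    print(name, round(eps, 3), round(math.degrees(att), 1))
```

Because the pad centers are plain inputs, the same function covers circular, dislocated, elliptic, or arbitrary non-standard preload geometries without re-deriving per-type formulae, which is the scalability argument made above.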
Mixed modeling and sample size calculations for identifying housekeeping genes.
Dai, Hongying; Charnigo, Richard; Vyhlidal, Carrie A; Jones, Bridgette L; Bhandary, Madhusudan
2013-08-15
Normalization of gene expression data using internal control genes that have biologically stable expression levels is an important process for analyzing reverse transcription polymerase chain reaction data. We propose a three-way linear mixed-effects model to select optimal housekeeping genes. The mixed-effects model can accommodate multiple continuous and/or categorical variables with sample random effects, gene fixed effects, systematic effects, and gene by systematic effect interactions. We propose using the intraclass correlation coefficient among gene expression levels as the stability measure to select housekeeping genes that have low within-sample variation. Global hypothesis testing is proposed to ensure that selected housekeeping genes are free of systematic effects or gene by systematic effect interactions. A gene combination with the highest lower bound of 95% confidence interval for intraclass correlation coefficient and no significant systematic effects is selected for normalization. Sample size calculation based on the estimation accuracy of the stability measure is offered to help practitioners design experiments to identify housekeeping genes. We compare our methods with geNorm and NormFinder by using three case studies. A free software package written in SAS (Cary, NC, U.S.A.) is available at http://d.web.umkc.edu/daih under software tab. Copyright © 2013 John Wiley & Sons, Ltd.
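The ICC stability measure can be illustrated with a one-way ANOVA estimator. The paper's model is a three-way mixed-effects model with systematic effects and global hypothesis tests; the simplified estimator and the expression values below are illustrative assumptions only.

```python
# Sketch: intraclass correlation coefficient (ICC) from one-way ANOVA
# variance components, as a stability proxy for a candidate housekeeping-gene
# combination. The paper uses a richer three-way mixed-effects model; this
# one-way version and the values below are illustrative only.

def icc_oneway(groups):
    """groups: per-sample lists of log-expression values for the gene set."""
    k = len(groups)                              # number of samples
    n = len(groups[0])                           # genes per sample (balanced)
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_between = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((x - sum(g) / n) ** 2 for g in groups for x in g) / (k * (n - 1))
    var_between = max((ms_between - ms_within) / n, 0.0)
    return var_between / (var_between + ms_within)

# A stable combination: genes agree within each sample (low within-sample
# variation), so ICC is high; an unstable one spreads widely within samples.
stable_combo   = [[5.0, 5.1, 4.9], [7.0, 7.1, 6.9], [6.0, 6.1, 5.9]]
unstable_combo = [[5.0, 7.0, 6.0], [5.1, 6.9, 6.1], [4.9, 7.1, 5.9]]
print(round(icc_oneway(stable_combo), 3), round(icc_oneway(unstable_combo), 3))
```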
Directory of Open Access Journals (Sweden)
O. Funk
2003-03-01
Full Text Available This paper addresses the statistics underlying cloudy sky radiative transfer (RT) by inspection of the distribution of the path lengths of solar photons. Recent studies indicate that this approach is promising, since it might reveal characteristics about the diffusion process underlying atmospheric radiative transfer (Pfeilsticker, 1999). Moreover, it uses an observable that is directly related to the atmospheric absorption and is, therefore, of climatic relevance. However, these studies rely largely on the accuracy of the measurement of the photon path length distribution (PPD). This paper presents a refined analysis method based on high resolution spectroscopy of the oxygen A-band. The method is validated by Monte Carlo simulations of atmospheric spectra. Additionally, a new method to measure the effective optical thickness of cloud layers, based on fitting the measured differential transmissions with a 1-dimensional (discrete ordinate) RT model, is presented. These methods are applied to measurements conducted during the cloud radar inter-comparison campaign CLARE'98, which supplied the detailed cloud structure information required for the further analysis. For some exemplary cases, measured path length distributions and optical thicknesses are presented and backed by detailed RT model calculations. For all cases, reasonable PPDs can be retrieved and the effects of the vertical cloud structure are found. The inferred cloud optical thicknesses are in agreement with liquid water path measurements. Key words. Meteorology and atmospheric dynamics (radiative processes; instruments and techniques)
Sink efficiency calculation of dislocations in irradiated materials by phase-field modelling
International Nuclear Information System (INIS)
Rouchette, Adrien
2015-01-01
The aim of this work is to develop a modelling technique for the diffusion of migrating crystallographic defects in irradiated metals and their absorption by sinks, to better predict the microstructural evolution of those materials. The phase field technique is well suited for this problem, since it naturally takes into account the elastic effects of dislocations on point defect diffusion in the most complex cases. The phase field model presented in this work has been adapted to simulate the generation of defects by irradiation and their absorption by the dislocation cores by means of a new order parameter associated with the sink morphology. The method has first been validated in different reference cases by comparing the sink strengths obtained numerically with analytical solutions available in the literature. Then, the method has been applied to dislocations with different orientations in zirconium, taking into account the anisotropic properties of the crystal and point defects, obtained by state-of-the-art atomic calculations. The results show that the shape anisotropy of the point defects promotes vacancy absorption by basal loops, which is consistent with the experimentally observed zirconium growth under irradiation. Finally, the rigorous investigation of the dislocation loop case proves that phase field simulations give more accurate results than analytical solutions in realistic loop density ranges. (author)
Sous, John; Chakraborty, Monodeep; Krems, Roman; Berciu, Mona
2017-04-01
We develop a method to compute the Green's function for two particles in an infinite chain and coupled to phonons by interactions that modulate their hopping, as described by the Peierls/Su-Schrieffer-Heeger (SSH) model. The method is based on a variational approximation to the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy and is shown to agree with exact diagonalization calculations. We show that the properties of bipolarons arising in such models are qualitatively different from those of the well-studied Holstein bipolarons. In particular, we show that depending on the particle statistics, strongly bound bipolarons may or may not form. In the case of hard-core bosons, we demonstrate novel effects for dimers such as sharp transitions and self-trapping. In the case of soft-core particles/spinful fermions, we show that the mediated interactions lead to overscreening of the bare Hubbard U repulsion, resulting in the formation of strongly bound bipolarons. This work was supported by NSERC of Canada and the Stewart Blusson Quantum Matter Institute.
De Lucas, Javier
2015-03-01
A simple geometrical model for calculating the effective emissivity in blackbody cylindrical cavities has been developed. The back ray tracing technique and the Monte Carlo method have been employed, making use of a suitable set of coordinates and auxiliary planes. In these planes, the trajectories of individual photons in the successive reflections between the cavity points are followed in detail. The theoretical model is implemented by using simple numerical tools, programmed in Microsoft Visual Basic for Applications and Excel. The algorithm is applied to isothermal and non-isothermal diffuse cylindrical cavities with a lid; however, the basic geometrical structure can be generalized to a cylindro-conical shape and specular reflection. Additionally, the numerical algorithm and the program source code can be used, with minor changes, for determining the distribution of the cavity points where photon absorption takes place. This distribution could be applied to the study of the influence of thermal gradients on the effective emissivity profiles, for example. Validation is performed by analyzing the convergence of the Monte Carlo method as a function of the number of trials and by comparison with published results of different authors.
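A stripped-down version of such a Monte Carlo estimate, written in Python rather than Visual Basic, might look like the sketch below. It handles only an isothermal, diffusely reflecting cylinder that is fully open at one end (no lid, unlike the paper's model), and traces photons forward from the aperture; the absorbed fraction estimates the effective emissivity:

```python
import math, random

def effective_emissivity(radius, depth, eps_wall, n_photons=20000, seed=1):
    """Monte Carlo effective emissivity of an open, diffuse, isothermal
    cylindrical cavity: fraction of photons entering the aperture that are
    absorbed inside (equal to eps_eff for an isothermal cavity)."""
    rng = random.Random(seed)

    def cosine_dir(n):
        # cosine-weighted hemisphere sample: uniform sphere point + normal
        while True:
            x, y, z = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
            r = math.sqrt(x * x + y * y + z * z)
            if r > 1e-12:
                d = (x / r + n[0], y / r + n[1], z / r + n[2])
                L = math.sqrt(d[0] ** 2 + d[1] ** 2 + d[2] ** 2)
                if L > 1e-9:
                    return (d[0] / L, d[1] / L, d[2] / L)

    absorbed = 0
    for _ in range(n_photons):
        rr, th = radius * math.sqrt(rng.random()), 2 * math.pi * rng.random()
        p = [rr * math.cos(th), rr * math.sin(th), depth]  # point on aperture
        d = cosine_dir((0.0, 0.0, -1.0))                   # heading inward
        while True:
            cand = []                 # nearest surface along the ray
            if d[2] < 0.0:
                t = -p[2] / d[2]
                if t > 1e-9:
                    cand.append((t, 'bottom'))
            elif d[2] > 0.0:
                t = (depth - p[2]) / d[2]
                if t > 1e-9:
                    cand.append((t, 'top'))
            a = d[0] * d[0] + d[1] * d[1]
            if a > 1e-14:
                b = p[0] * d[0] + p[1] * d[1]
                disc = b * b - a * (p[0] ** 2 + p[1] ** 2 - radius ** 2)
                if disc > 0.0:
                    t = (-b + math.sqrt(disc)) / a
                    if t > 1e-9:
                        cand.append((t, 'side'))
            if not cand:
                break
            t, surf = min(cand)
            p = [p[i] + t * d[i] for i in range(3)]
            if surf == 'top':
                break                                  # escapes the cavity
            if rng.random() < eps_wall:
                absorbed += 1                          # absorbed at the wall
                break
            if surf == 'bottom':
                d = cosine_dir((0.0, 0.0, 1.0))        # diffuse re-emission
            else:
                rl = math.hypot(p[0], p[1])
                d = cosine_dir((-p[0] / rl, -p[1] / rl, 0.0))
    return absorbed / n_photons

print(f"eps_eff = {effective_emissivity(1.0, 4.0, 0.7):.3f}")
```

As the abstract notes for the full model, convergence with the number of trials is the natural validation check; a deep cavity should yield an effective emissivity well above the wall emissivity, and a very shallow one should approach it.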
Energy Technology Data Exchange (ETDEWEB)
Gerasimenko, B.F. [V.G. Khlopin Radium Inst., Saint Petersburg (Russian Federation)
1997-03-01
The calculations of integral spectra of prompt neutrons from the spontaneous fission of {sup 244}Cm and {sup 246}Cm were carried out. The calculations were done with the Statistical Computer Code Complex SCOFIN, applying the Hauser-Feshbach method to the description of the de-excitation of excited fission fragments by means of neutron emission. The emission of dipole gamma-quanta from these fragments was considered as a competing process. The average excitation energy of a fragment was calculated with a two-spheroidal model of tangent fragments. The density of levels in an excited fragment was calculated with the Fermi-gas model. Quite satisfactory agreement was reached between the theoretical results and the experimental results obtained in the framework of the Project measurements. The calculated values of the average neutron multiplicities were 2.746 for {sup 244}Cm and 2.927 for {sup 246}Cm, in good accordance with published experimental figures. (author)
Model calculated global, regional and megacity premature mortality due to air pollution
Directory of Open Access Journals (Sweden)
J. Lelieveld
2013-07-01
Full Text Available Air pollution by fine particulate matter (PM2.5) and ozone (O3) has increased strongly with industrialization and urbanization. We estimate the premature mortality rates and the years of human life lost (YLL) caused by anthropogenic PM2.5 and O3 in 2005 for epidemiological regions defined by the World Health Organization (WHO). This is based upon high-resolution global model calculations that resolve urban and industrial regions in greater detail compared to previous work. Results indicate that 69% of the global population is exposed to an annual mean anthropogenic PM2.5 concentration of >10 μg m−3 (the WHO guideline) and 33% to >25 μg m−3 (the EU directive). We applied an epidemiological health impact function and find that especially in large countries with extensive suburban and rural populations, air pollution-induced mortality rates have been underestimated, given that previous studies largely focused on the urban environment. We calculate a global respiratory mortality of about 773 thousand/year (YLL ≈ 5.2 million/year), 186 thousand/year by lung cancer (YLL ≈ 1.7 million/year) and 2.0 million/year by cardiovascular disease (YLL ≈ 14.3 million/year). The global mean per capita mortality caused by air pollution is about 0.1% yr−1. The highest premature mortality rates are found in the Southeast Asia and Western Pacific regions (about 25% and 46% of the global rate, respectively), where more than a dozen of the most highly polluted megacities are located.
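The kind of epidemiological health impact function referred to can be sketched as follows. The log-linear concentration-response form and all parameter values here are illustrative stand-ins, not the study's published coefficients:

```python
import math

def attributable_mortality(pop, base_rate, conc, threshold, beta):
    """Premature deaths per year from a log-linear concentration-response
    function: relative risk RR = exp(beta * excess concentration), and the
    attributable fraction AF = (RR - 1) / RR of baseline deaths."""
    rr = math.exp(beta * max(conc - threshold, 0.0))  # relative risk
    af = (rr - 1.0) / rr                              # attributable fraction
    return pop * base_rate * af

# Illustrative numbers, not from the paper: 10 million people, baseline
# cardiopulmonary death rate 0.8%/yr, annual mean PM2.5 of 35 ug/m3
# against a 10 ug/m3 threshold, beta = 0.006 per ug/m3.
deaths = attributable_mortality(pop=1.0e7, base_rate=0.008,
                                conc=35.0, threshold=10.0, beta=0.006)
print(f"estimated premature deaths/yr: {deaths:.0f}")
```

Summing such per-region estimates over gridded model concentrations and populations is what turns the global model output into the regional mortality totals the abstract reports.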
FTE MODIFICATION OF NURSING STAFF CALCULATING FORMULA WITH TEAM PROFESSIONAL NURSING CARE MODEL
Directory of Open Access Journals (Sweden)
Erlin Kurnia
2017-04-01
Full Text Available Introduction: A variety of formulas can be used to calculate nursing staff needs in inpatient rooms, including the Ministry of Health Republic of Indonesia method, Gillies, the Nina Formulation, Douglas, and Full Time Equivalent (FTE). The purpose of this study was to recommend a formula for calculating nursing staff needs in the implementation of a team nursing model of care delivery. Method: The design used in this study was a time and motion study. Data were collected through observations and questionnaires. The population was the nurses who work in the Kediri Baptist Hospital inpatient wards. Two inpatient wards were taken as simulation sites: Ward A was simulated based on the FTE method and Ward B based on the Ministry of Health Republic of Indonesia method. According to the Ministry of Health method, the required number of nursing staff was 17 in Ward A and 23 in Ward B; according to the FTE calculation, it was 20 in Ward A and 33 in Ward B. Result: The simulation results showed an increase in task performance and in the job satisfaction of nurses in the inpatient ward simulated using the FTE method. Discussion: The inpatient ward simulated using the Ministry of Health method showed an increase in task performance but a decrease in job satisfaction. It can be concluded that the FTE method is more appropriate to use than the Ministry of Health Republic of Indonesia method.
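A generic FTE staffing calculation divides the annual required care hours by the productive hours one FTE supplies per year. The patient census, care hours and the 1650-hour productivity default below are assumptions for illustration, not the hospital's actual figures:

```python
def fte_needed(patients, care_hours_per_patient_day, days_per_year=365,
               productive_hours_per_fte=1650):
    """Full Time Equivalent staffing estimate: total required care hours
    per year divided by the productive hours one FTE supplies per year.
    The 1650 h default (after leave, training and sickness) is an assumed
    value, not a figure from the study."""
    total_hours = patients * care_hours_per_patient_day * days_per_year
    return total_hours / productive_hours_per_fte

# A 25-bed ward averaging 3.5 direct-care hours per patient-day (assumed):
print(round(fte_needed(patients=25, care_hours_per_patient_day=3.5), 1))
```

The Ministry of Health formula differs mainly in how it converts care hours into headcount and in its correction factors, which is why the two methods yield different staff numbers for the same ward.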
Atmospheric Dispersion Models for the Calculation of Environmental Impact: A Comparative Study
International Nuclear Information System (INIS)
Caputo, Marcelo; Gimenez, Marcelo; Felicelli, Sergio; Schlamp, Miguel
2000-01-01
In this paper some new comparisons are presented between the codes AERMOD, HPDM and HYSPLIT. The first two are Gaussian stationary plume codes, developed to calculate the environmental impact produced by chemical contaminants. HYSPLIT is a hybrid code: it uses a Lagrangian reference system to describe the transport of a puff's center of mass and an Eulerian system to describe the dispersion within the puff. The meteorological and topographic data used in the present work were obtained from runs of the prognostic code RAMS, provided by NOAA. The emission was fixed at 0.3 g/s, 284 K and 0 m/s. The surface roughness was fixed at 0.1 m and flat terrain was considered. In order to analyze separate effects and to go deeper into the comparison, the meteorological data were split in two depending on the atmospheric stability class (F to B), and the wind direction was fixed to neglect its contribution to the contaminant dispersion. The main contribution of this work is to provide recommendations about the validity range of each code depending on the model used. In the case of Gaussian models, the validity range is set by the distance over which the atmospheric conditions can be considered homogeneous. On the other hand, the validity range of HYSPLIT's model is determined by the spatial extension of the meteorological data. The results obtained with the three codes are comparable if the emission is in equilibrium with the environment, meaning that the gases are emitted at the same temperature as the medium with zero velocity. There was an important difference between the dispersion parameters used by the Gaussian codes
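The Gaussian stationary plume model underlying codes like AERMOD and HPDM can be sketched as follows. Only the 0.3 g/s emission rate comes from the text; the wind speed, stack height and dispersion parameters are illustrative assumptions:

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration (g/m^3) with ground
    reflection.  q: emission rate (g/s), u: wind speed (m/s), h: effective
    release height (m); sigma_y, sigma_z: dispersion parameters (m)
    evaluated at the downwind distance of interest."""
    lateral = math.exp(-y * y / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))  # image source
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Centerline ground-level concentration for the 0.3 g/s source of the study,
# with assumed dispersion parameters for a roughly neutral atmosphere:
c = gaussian_plume(q=0.3, u=3.0, y=0.0, z=0.0, h=20.0,
                   sigma_y=80.0, sigma_z=35.0)
print(f"{c * 1e6:.2f} ug/m^3")
```

The homogeneity caveat in the abstract enters through sigma_y and sigma_z: the closed form is only valid while a single stability class, and hence a single dispersion-parameter curve, describes the whole travel distance.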
Statistical equilibrium calculations for silicon in early-type model stellar atmospheres
International Nuclear Information System (INIS)
Kamp, L.W.
1976-02-01
Line profiles of 36 multiplets of silicon (Si) II, III, and IV were computed for a grid of model atmospheres covering the range from 15,000 to 35,000 K in effective temperature and 2.5 to 4.5 in log (gravity). The computations involved simultaneous solution of the steady-state statistical equilibrium equations for the populations and of the equation of radiative transfer in the lines. The variables were linearized, and successive corrections were computed until a minimal accuracy of 1/1000 in the line intensities was reached. The common assumption of local thermodynamic equilibrium (LTE) was dropped. The model atmospheres used also were computed by non-LTE methods. Some effects that were incorporated into the calculations were the depression of the continuum by free electrons, hydrogen and ionized helium line blocking, and auto-ionization and dielectronic recombination, which later were found to be insignificant. Use of radiation damping and detailed electron (quadratic Stark) damping constants had small but significant effects on the strong resonance lines of Si III and IV. For weak and intermediate-strength lines, large differences with respect to LTE computations, the results of which are also presented, were found in line shapes and strengths. For the strong lines the differences are generally small, except for the models at the hot, low-gravity extreme of the range. These computations should be useful in the interpretation of the spectra of stars in the spectral range B0--B5, luminosity classes III, IV, and V
Spectra for the A = 6 reactions calculated from a three-body resonance model
Directory of Open Access Journals (Sweden)
Paris Mark W.
2016-01-01
Full Text Available We develop a resonance model of the transition matrix for three-body breakup reactions of the A = 6 system and present calculations for the observed nucleon spectra, which are important for inertial confinement fusion and Big Bang nucleosynthesis (BBN). The model is motivated by the Faddeev approach, where the form of the T matrix is written as a sum over the distinct Jacobi coordinate systems corresponding to the particle configurations (α, n-n) and (n, n-α) to describe the final state. The structure in the spectra comes from the resonances of the two-body subsystems of the three-body final state, namely the singlet (T = 1) nucleon-nucleon (NN) anti-bound resonance, and the Nα resonances designated the ground state (Jπ = 3/2−) and first excited state (Jπ = 1/2−) of the A = 5 systems 5He and 5Li. These resonances are described in terms of single-level, single-channel R-matrix parameters that are taken from analyses of NN and Nα scattering data. While the resonance parameters are approximately charge symmetric, external charge-dependent effects are included in the penetrabilities, shifts, and hard-sphere phases, and in the level energies to account for internal Coulomb differences. The shapes of the resonance contributions to the spectrum are fixed by other, two-body data, and the only adjustable parameters in the model are the combinatorial amplitudes for the compound system. These are adjusted to reproduce the observed nucleon spectra from measurements at the Omega and NIF facilities. We perform a simultaneous, least-squares fit of the tt neutron spectra and the 3He3He proton spectra. Using these amplitudes we make a prediction of the α spectra for both reactions at low energies. Significant differences in the tt and 3He3He spectra are due to Coulomb effects.
Modelling of large sodium fires: A coupled experimental and calculational approach
International Nuclear Information System (INIS)
Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.
1996-01-01
The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment)-FEUMIX approach under development in order to strengthen the extrapolation made in the Super-Phenix secondary circuit calculations for large leakage flows. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air. Mass and heat transfer through this global area are assumed to be similar, so the global interfacial transfer coefficient Sih is an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For the studies of a hypothetical large sodium leak in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered and extrapolation was made from the existing results (maximum flow rate 225 kg/s). In order to strengthen the extrapolation, a water test has been devised on the basis of a thermal hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, then to transpose the Sih to sodium without combustion, and to use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gastight tank of 106 m3 (5.7 x 3.7 x 5 m), internally insulated. The water jet is injected from a heated external auxiliary tank into the cell using a pressurized air tank and a specific valve. The main measurements performed during each test are the injected flow rate, air pressure, water temperature and gas temperature. A first series of tests was performed in order to qualify the methodology: typical FCA and IGNA sodium fire tests were reproduced in AIRBUS, and a comparison with the FEUMIX calculation using the Sih value deduced from the water experiments shows satisfactory agreement. A second series of tests for large flow rates, corresponding to a large sodium leak in the secondary circuit of Super-Phenix
Matrix product state calculations for one-dimensional quantum chains and quantum impurity models
Energy Technology Data Exchange (ETDEWEB)
Muender, Wolfgang
2011-09-28
involving a Kondo exciton and population switching in quantum dots. It turns out that both phenomena rely on the various manifestations of Anderson orthogonality (AO), which describes the fact that the response of the Fermi sea to a quantum quench (i.e. an abrupt change of some property of the impurity or quantum dot) is a change of the scattering phase shifts of all the single-particle wave functions, therefore drastically changing the system. In this context, we demonstrate that NRG, a highly accurate method for quantum impurity models, allows for the calculation of all static and dynamic quantities related to AO and present an extensive NRG study for population switching in quantum dots. (orig.)
Zheng, Na; Xu, Hai-Bo
2015-10-01
An empirical numerical model that includes nuclear absorption, multiple Coulomb scattering and energy loss is presented for the calculation of transmission through thick objects in high energy proton radiography. In this numerical model the angular distributions are treated as Gaussians in the laboratory frame. A Monte Carlo program based on the Geant4 toolkit was developed and used for high energy proton radiography experiment simulations and verification of the empirical numerical model. The two models are used to calculate the transmission fraction of carbon and lead step-wedges in proton radiography at 24 GeV/c, and to calculate radial transmission of the French Test Object in proton radiography at 24 GeV/c with different angular cuts. It is shown that the results of the two models agree with each other, and an analysis of the slight differences is given. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)
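A minimal sketch of such an empirical transmission model, combining nuclear attenuation with a Gaussian multiple-Coulomb-scattering acceptance via Highland's formula, is given below. The material data (lead radiation and interaction lengths) and the 5 mrad angular cut are assumed values for illustration, not the paper's inputs:

```python
import math

def transmission(thickness_cm, rho, x0_gcm2, lam_gcm2, p_mev, beta, theta_cut):
    """Transmission of a slab in high-energy proton radiography:
    nuclear attenuation times the fraction of a Gaussian multiple-
    Coulomb-scattering (MCS) distribution inside the angular cut.
    Highland's formula gives the MCS width theta0 for a unit-charge beam."""
    x = thickness_cm * rho                       # areal density, g/cm^2
    t = x / x0_gcm2                              # thickness in radiation lengths
    theta0 = (13.6 / (beta * p_mev)) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))
    mcs_accept = 1.0 - math.exp(-theta_cut ** 2 / (2.0 * theta0 ** 2))
    nuclear = math.exp(-x / lam_gcm2)            # nuclear attenuation
    return nuclear * mcs_accept

# 4 cm of lead at 24 GeV/c with a 5 mrad cut (illustrative material data:
# X0 ~ 6.37 g/cm^2, nuclear interaction length ~ 199.6 g/cm^2 for Pb):
T = transmission(thickness_cm=4.0, rho=11.35, x0_gcm2=6.37,
                 lam_gcm2=199.6, p_mev=24000.0, beta=1.0, theta_cut=5e-3)
print(f"T = {T:.3f}")
```

Varying theta_cut reproduces the qualitative behaviour the paper exploits: tighter angular cuts increase the MCS sensitivity to areal density, on top of the pure nuclear attenuation.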
Modeling tool for calculating dietary iron bioavailability in iron-sufficient adults.
Fairweather-Tait, Susan J; Jennings, Amy; Harvey, Linda J; Berry, Rachel; Walton, Janette; Dainty, Jack R
2017-06-01
Background: Values for dietary iron bioavailability are required for setting dietary reference values. These are estimated from predictive algorithms, nonheme iron absorption from meals, and models of iron intake, serum ferritin concentration, and iron requirements. Objective: We developed a new interactive tool to predict dietary iron bioavailability. Design: Iron intake and serum ferritin, a quantitative marker of body iron stores, from 2 nationally representative studies of adults in the United Kingdom and Ireland and a trial in elderly people in Norfolk, United Kingdom, were used to develop a model to predict dietary iron absorption at different serum ferritin concentrations. Individuals who had raised inflammatory markers or were taking iron-containing supplements were excluded. Results: Mean iron intakes were 13.6, 10.3, and 10.9 mg/d and mean serum ferritin concentrations were 140.7, 49.4, and 96.7 μg/L in men, premenopausal women, and postmenopausal women, respectively. The model predicted that at serum ferritin concentrations of 15, 30, and 60 μg/L, mean dietary iron absorption would be 22.3%, 16.3%, and 11.6%, respectively, in men; 27.2%, 17.2%, and 10.6%, respectively, in premenopausal women; and 18.4%, 12.7%, and 10.5%, respectively, in postmenopausal women. Conclusions: An interactive program for calculating dietary iron absorption at any concentration of serum ferritin is presented. Differences in iron status are partly explained by age but also by diet, with meat being a key determinant. The effect of the diet is more marked at lower serum ferritin concentrations. The model can be applied to any adult population in whom representative, good-quality data on iron intake and iron status have been collected. Values for dietary iron bioavailability can be derived for any target concentration of serum ferritin, thereby giving risk managers and public health professionals a flexible and transparent basis on which to base their dietary recommendations.
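The table of predicted absorptions in the abstract can be turned into a simple interpolator. The piecewise log-log form is our simplification for illustration, not the authors' published regression; only the three (ferritin, absorption) pairs for men are taken from the text:

```python
import math

# Model-predicted mean absorption for men at three serum ferritin levels
# (values from the abstract):
SF = [15.0, 30.0, 60.0]      # serum ferritin, ug/L
ABS = [22.3, 16.3, 11.6]     # dietary iron absorption, %

def absorption_men(ferritin):
    """Piecewise log-log interpolation of predicted iron absorption (%),
    clamped to the tabulated ferritin range."""
    lx = [math.log(v) for v in SF]
    ly = [math.log(v) for v in ABS]
    f = math.log(min(max(ferritin, SF[0]), SF[-1]))
    for i in range(len(lx) - 1):
        if f <= lx[i + 1]:
            w = (f - lx[i]) / (lx[i + 1] - lx[i])
            return math.exp(ly[i] + w * (ly[i + 1] - ly[i]))

print(f"{absorption_men(40):.1f}%")
```

The declining curve captures the regulatory behaviour the abstract describes: absorption falls as iron stores (serum ferritin) rise, so the diet's effect is largest at low ferritin.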
From Risk Models to Loan Contracts: Austerity as the Continuation of Calculation by Other Means
Directory of Open Access Journals (Sweden)
Pierre Pénet
2014-06-01
Full Text Available This article analyses how financial actors sought to minimise financial uncertainties during the European sovereign debt crisis by employing simulations as legal instruments of market regulation. We first contrast two roles that simulations can play in sovereign debt markets: ‘simulation-hypotheses’, which work as bundles of constantly updated hypotheses with the goal of better predicting financial risks; and ‘simulation-fictions’, which provide fixed narratives about the present with the purpose of postponing the revision of market risks. Using ratings reports published by Moody’s on Greece and European Central Bank (ECB) regulations, we show that Moody’s stuck to a simulation-fiction and displayed rating inertia on Greece’s trustworthiness to prevent the destabilising effects that further downgrades would have on Greek borrowing costs. We also show that the multi-notch downgrade issued by Moody’s in June 2010 followed the ECB’s decision to remove ratings from its collateral eligibility requirements. Then, as regulators moved from ‘regulation through model’ to ‘regulation through contract’, ratings stopped functioning as simulation-fictions. Indeed, the conditions of the Greek bailout implemented in May 2010 replaced the credit rating agencies’ models as the main simulation-fiction, which market actors employed to postpone the prospect of a Greek default. We conclude by presenting austerity measures as instruments of calculative governance rather than ideological compacts.
A model for reliability analysis and calculation applied in an example from chemical industry
Directory of Open Access Journals (Sweden)
Pejović Branko B.
2010-01-01
Full Text Available The subject of the paper is reliability design for the polymerization processes that occur in reactors of the chemical industry. The designed model is used to determine reliability characteristics and indicators, which enables identification of the basic factors that result in poor process performance. This would reduce the anticipated losses through the ability to control them, as well as enable improvement of the quality of production, which is the major goal of the paper. The reliability analysis and calculation use a deductive method based on the design of a fault tree scheme for the system, built on inductive conclusions. It involves the use of standard logical symbols and the rules of Boolean algebra and mathematical logic. The paper finally gives the results of the work in the form of a quantitative and qualitative reliability analysis of the observed process, which served to obtain complete information on the probability of the top event in the process, as well as objective decision making and alternative solutions.
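The Boolean gate probabilities used in quantitative fault tree analysis reduce to a few lines for independent basic events. The tree structure and event probabilities below are invented for illustration, not taken from the paper:

```python
def or_gate(*p):
    """Probability that at least one of several independent events occurs."""
    q = 1.0
    for pi in p:
        q *= (1.0 - pi)
    return 1.0 - q

def and_gate(*p):
    """Probability that all of several independent events occur."""
    q = 1.0
    for pi in p:
        q *= pi
    return q

# Hypothetical polymerization-reactor fault tree (structure and numbers
# invented):  top event = cooling failure AND (agitator OR sensor failure)
p_top = and_gate(or_gate(0.02, 0.01), 0.005)
print(f"P(top event) = {p_top:.6f}")
```

Walking the tree from basic events up through its gates in this way yields the top-event probability; ranking the basic events by their contribution to it gives the qualitative analysis the abstract mentions.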
Extrapolated renormalization group calculation of the surface tension in square-lattice Ising model
International Nuclear Information System (INIS)
Curado, E.M.F.; Tsallis, C.; Levy, S.V.F.; Oliveira, M.J. de
1980-06-01
By using self-dual clusters (whose sizes are characterized by the numbers b=2, 3, 4, 5) within a real space renormalization group framework, the longitudinal surface tension of the square-lattice first-neighbour spin-1/2 ferromagnetic Ising model is calculated. The exact critical temperature T sub(c) is recovered for any value of b; the exact asymptotic behaviour of the surface tension in the limit of low temperatures is analytically recovered; the approximate correlation length critical exponents monotonically tend towards the exact value ν=1 (which, in two dimensions, coincides with the surface tension critical exponent μ) for increasingly large cells; the same behaviour is observed for the approximate values of the surface tension amplitude in the limit T→T sub(c). Four different numerical procedures are developed for extrapolating the renormalization group results for the surface tension to b→infinity, and quite satisfactory agreement is obtained with Onsager's exact expression (with errors varying from zero to a few percent over the whole temperature domain). Furthermore, the set of RG surface tensions is compared with a set of biased surface tensions (associated with appropriate misfit seams), and only fortuitous coincidences are found among them. (Author) [pt
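Onsager's exact expression, the benchmark against which the extrapolated RG surface tensions are checked, is short enough to verify directly (units with k_B = 1 and nearest-neighbour coupling J):

```python
import math

def onsager_surface_tension(T, J=1.0):
    """Onsager's exact longitudinal surface tension of the square-lattice
    Ising ferromagnet: sigma = 2J + T*ln(tanh(J/T)), valid for T < Tc."""
    return 2.0 * J + T * math.log(math.tanh(J / T))

# Exact critical temperature from sinh(2J/Tc) = 1:
Tc = 2.0 / math.log(1.0 + math.sqrt(2.0))
print(onsager_surface_tension(0.5))   # low T: sigma approaches 2J
print(onsager_surface_tension(Tc))    # sigma vanishes at criticality
```

The vanishing of sigma at T sub(c) with exponent μ = 1 is exactly the behaviour the extrapolated cluster results are shown to reproduce.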
Using 3D BIM Model for the Value-Based Land Share Calculations
Çelik Şimşek, N.; Uzun, B.
2017-11-01
According to the Turkish condominium ownership system, 3D physical buildings and their condominium units are registered in the condominium ownership books via 2D survey plans. Currently, the 2D representation of 3D physical objects causes inaccurate and deficient implementations in the determination of land shares. Condominium ownership and easement rights are established with a clear indication of land shares (condominium ownership law, article no. 3), so the land share of each condominium unit has to be determined in a way that includes the value differences among the condominium units. The main problem, however, is that the land share has often been determined on an area basis over the project, before construction of the building. The objective of this study is to propose a new approach to the value-based calculation of the land shares of condominium units subject to condominium ownership. The current approaches to determining land shares, and their failures, are examined, and the factors that affect the values of the condominium units are determined according to legal decisions. This study shows that 3D BIM models can provide important approaches for the valuation problems in the determination of land shares.
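A value-based allocation of land shares, as opposed to an area-based one, can be sketched as follows. The unit values and the share denominator are illustrative assumptions, not figures from the study:

```python
def value_based_land_shares(unit_values, denominator=2400):
    """Distribute land shares among condominium units in proportion to
    their appraised values rather than their floor areas.  The common
    denominator (2400 here) is an assumed convention for illustration."""
    total = sum(unit_values)
    shares = [round(v / total * denominator) for v in unit_values]
    # absorb any rounding residue in the largest unit so shares sum exactly
    shares[shares.index(max(shares))] += denominator - sum(shares)
    return shares

# Three units whose BIM-derived appraisals differ despite similar areas:
print(value_based_land_shares([300000, 450000, 250000]))
```

The point of the 3D BIM model in this scheme is to supply the per-unit value inputs (orientation, floor level, view and similar factors) that a flat 2D survey plan cannot capture.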
Speeding up particle-tracking calculations by nested dissection of a geometric model
International Nuclear Information System (INIS)
Altiparmakov, D.V.
1991-01-01
Driven by the needs of fields such as computer graphics, computer-aided design and manufacturing, and Monte Carlo simulation, a variety of solid modeling and ray-tracing methods were developed in the 1980s. Many of them, however, suffer from long computation times. In Monte Carlo codes, designed to work primarily on high-capability computers, the basic acceleration strategy relied on vectorization. By introducing the event-based approach, impressive gains in performance (factors of 5 to 20) were achieved on vector supercomputers. Time consumption is a much more severe problem in computer graphics, because graphics packages are usually intended to run on mini or personal computers. Two types of approaches, bounding volumes and space subdivision, have been used to speed up image rendering since the early days of the ray-tracing method. It is the goal of this paper to present the benefit of using such an approach in neutronics calculations
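The bounding-volume idea, paying a cheap rejection test first and the exact intersection only on a hit, can be sketched as follows (geometry and values are illustrative):

```python
import math

def sphere_hit(center, radius, origin, direction):
    """Exact ray-sphere intersection test (the 'expensive' primitive test)."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4.0 * a * c >= 0.0

def aabb_hit(lo, hi, origin, direction):
    """Cheap axis-aligned bounding-box 'slab' test used to skip objects the
    ray's line cannot intersect (forward-only check omitted for brevity)."""
    tmin, tmax = -math.inf, math.inf
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if o < l or o > h:
                return False
            continue
        t1, t2 = (l - o) / d, (h - o) / d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmax >= tmin

# Test the ray against the box first; only on a hit pay for the exact test:
ray_o, ray_d = (0.0, 0.0, -5.0), (0.0, 0.0, 1.0)
box_lo, box_hi = (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0)
if aabb_hit(box_lo, box_hi, ray_o, ray_d):
    print(sphere_hit((0.0, 0.0, 0.0), 1.0, ray_o, ray_d))
```

Space subdivision (nested dissection included) generalizes the same trade-off: the cheap structural query prunes the set of candidate surfaces before any exact particle-surface intersection is computed.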
Directory of Open Access Journals (Sweden)
M. de Reus
2005-01-01
Full Text Available An intensive field measurement campaign was performed in July/August 2002 at the Global Atmospheric Watch station Izaña on Tenerife to study the interaction of mineral dust aerosol and tropospheric chemistry (MINATROC). A dense Saharan dust plume, with aerosol masses exceeding 500 µg m-3, persisted for three days. During this dust event strongly reduced mixing ratios of ROx (HO2, CH3O2 and higher organic peroxy radicals), H2O2, NOx (NO and NO2) and O3 were observed. A chemistry box model, constrained by the measurements, has been used to study gas phase and heterogeneous chemistry. It appeared to be difficult to reproduce the observed HCHO mixing ratios with the model, possibly related to the representation of precursor gas concentrations or the absence of dry deposition. The model calculations indicate that the reduced H2O2 mixing ratios in the dust plume can be explained by including the heterogeneous removal reaction of HO2 with an uptake coefficient of 0.2, or by assuming heterogeneous removal of H2O2 with an accommodation coefficient of 5x10-4. However, these heterogeneous reactions cannot explain the low ROx mixing ratios observed during the dust event. Whereas a mean daytime net ozone production rate (NOP) of 1.06 ppbv/hr occurred throughout the campaign, the reduced ROx and NOx mixing ratios in the Saharan dust plume contributed to a reduced NOP of 0.14-0.33 ppbv/hr, which likely explains the relatively low ozone mixing ratios observed during this event.
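The heterogeneous removal reactions are first-order losses whose rate constants follow from the uptake coefficient. A sketch in the kinetic-regime limit (neglecting gas-phase diffusion limitation) is given below; the 0.2 uptake coefficient for HO2 comes from the text, while the aerosol surface area density is an assumed value:

```python
import math

def het_loss_rate(gamma, molar_mass_kg, surface_area_m2_per_m3, T=298.0):
    """First-order heterogeneous loss rate k = gamma * v_mean * A / 4
    (free-molecular limit), with v_mean the Maxwell mean molecular speed."""
    R = 8.314                                                     # J/(mol K)
    v_mean = math.sqrt(8.0 * R * T / (math.pi * molar_mass_kg))   # m/s
    return 0.25 * gamma * v_mean * surface_area_m2_per_m3         # 1/s

# HO2 (0.033 kg/mol) on dust with gamma = 0.2, as inferred in the study;
# the surface area density of 1e-3 m^2/m^3 is an assumption for a dense plume:
k = het_loss_rate(gamma=0.2, molar_mass_kg=0.033,
                  surface_area_m2_per_m3=1.0e-3)
print(f"k = {k:.2e} s^-1, lifetime ~ {1.0 / k:.0f} s")
```

A chemical lifetime of tens of seconds against dust uptake is short compared with HO2 production timescales, which is how such a loss term depresses H2O2 in the box model.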
International Nuclear Information System (INIS)
Smith, H.D.; Merz, M.D.; Wiemers, K.D.; Smith, G.L.
1996-02-01
High-level radioactive waste stored in tanks at the U.S. Department of Energy's (DOE's) Hanford Site will be pretreated to concentrate radioactive constituents and fed to the vitrification plant. A flowsheet for feed preparation within the vitrification plant (based on the Hanford Waste Vitrification Plant (HWVP) design) called for HCOOH addition during the feed preparation step to adjust rheology and glass redox conditions. However, the potential for generating H2 and NH3 during treatment of high-level waste (HLW) with HCOOH was identified at Pacific Northwest Laboratory (PNL). Studies at the University of Georgia, under contract with the Savannah River Technology Center (SRTC) and PNL, have verified the catalytic role of the noble metals (Pd, Rh, Ru) present in the waste in the generation of H2 and NH3. Both laboratory-scale and pilot-scale studies at SRTC have documented the H2 and NH3 generation phenomena. Because H2 and NH3 may create hazardous conditions in the vessel vapor space and offgas system of a vitrification plant, reducing the H2 and NH3 generation rates to the lowest possible levels consistent with the desired melter feed characteristics is important. The Fiscal Year 1993 and 1994 studies were conducted with simulated (non-radioactive), pretreated neutralized current acid waste (NCAW). Neutralized current acid waste is a high-level waste originating from the plutonium/uranium extraction (PUREX) plant that has been partially denitrated with sugar, neutralized with NaOH, and is presently stored in double-shell tanks. The non-radioactive simulant used for the present study includes all of the trace components found in the waste, or substitutes a chemically similar element for radioactive or very toxic species. The composition and simulant preparation steps were chosen to best simulate the chemical processing characteristics of the actual waste.
Energy Technology Data Exchange (ETDEWEB)
Karnland, O. [Clay Technology, Lund (Sweden)
1997-12-01
A number of quite different quantitative models of swelling pressure in bentonite clay have been proposed by different researchers over the years. The present report examines some of the models that may also be applicable to saline conditions. A discrepancy between calculated and measured values was noticed for all models under brine conditions; in general, the models predicted swelling pressures lower than those found experimentally. An osmotic component in the clay/water system is proposed in order to improve on the previous conservative use of the thermodynamic model. Calculation of this osmotic component is proposed to be made using the clay cation exchange capacity and Donnan equilibrium. Calculations made by this approach showed considerably better correlation with literature laboratory data than calculations made by the previous conservative use of the thermodynamic model. A few verifying laboratory tests were made and are briefly described in the report. The improved thermodynamic model predicts substantial bentonite swelling pressures even in saturated sodium chloride solution if the density of the system is high enough. In practice, the model predicts a substantial swelling pressure for the buffer in a KBS-3 repository if the system is exposed to brines, but the positive effect of mixing bentonite into a backfill material will be lost, since the available compaction technique does not give a sufficiently high bentonite density. 37 refs, 15 figs.
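The Donnan-equilibrium osmotic component described above can be sketched for an ideal 1:1 electrolyte: electroneutrality inside the clay (c+ = c- + X, with X the fixed-charge concentration from the CEC) combined with the Donnan condition c+·c- = c_ext² gives the internal ion concentrations, and the ideal osmotic pressure follows from the concentration difference. The numerical values below (fixed-charge and salt concentrations) are illustrative assumptions, not data from the report:

```python
import math

R = 8.314  # gas constant, J mol-1 K-1

def donnan_swelling_pressure(fixed_charge, c_ext, temp_k=298.0):
    """Ideal Donnan osmotic swelling pressure (Pa) for a 1:1 electrolyte.

    fixed_charge : mol of exchangeable charge per m3 of pore water,
                   derived in principle from the CEC and clay density.
    c_ext        : external salt concentration, mol per m3.
    """
    # Solve c_plus * c_minus = c_ext**2 with c_plus = c_minus + fixed_charge.
    c_minus = 0.5 * (-fixed_charge
                     + math.sqrt(fixed_charge**2 + 4.0 * c_ext**2))
    c_plus = c_minus + fixed_charge
    # Ideal (van 't Hoff) osmotic pressure from the excess internal ions.
    return R * temp_k * (c_plus + c_minus - 2.0 * c_ext)

# Assumed example: dense bentonite (X ~ 3 M of pore water) against a
# near-saturated NaCl brine (~5.4 M); pressure comes out around 1 MPa.
p = donnan_swelling_pressure(3000.0, 5400.0)
```

Even against a near-saturated brine the predicted pressure stays positive and grows with the fixed-charge concentration (i.e. with density), which is the qualitative behavior the improved model predicts for a high-density KBS-3 buffer.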