WorldWideScience

Sample records for response calculations based

  1. Final disposal room structural response calculations

    International Nuclear Information System (INIS)

    Stone, C.M.

    1997-08-01

    Finite element calculations have been performed to determine the structural response of waste-filled disposal rooms at the WIPP for a period of 10,000 years after emplacement of the waste. The calculations were performed to generate the porosity surface data for the final set of compliance calculations. The most recent reference data for the stratigraphy, waste characterization, gas generation potential, and nonlinear material response have been brought together for this final set of calculations

  2. Dose-Response Calculator for ArcGIS

    Science.gov (United States)

    Hanser, Steven E.; Aldridge, Cameron L.; Leu, Matthias; Nielsen, Scott E.

    2011-01-01

    The Dose-Response Calculator for ArcGIS is a tool that extends the Environmental Systems Research Institute (ESRI) ArcGIS 10 Desktop application to aid with the visualization of relationships between two raster GIS datasets. A dose-response curve is a line graph commonly used in medical research to examine the effects of different dosage rates of a drug or chemical (for example, carcinogen) on an outcome of interest (for example, cell mutations) (Russell and others, 1982). Dose-response curves have recently been used in ecological studies to examine the influence of an explanatory dose variable (for example, percentage of habitat cover, distance to disturbance) on a predicted response (for example, survival, probability of occurrence, abundance) (Aldridge and others, 2008). These dose curves have been created by calculating the predicted response value from a statistical model at different levels of the explanatory dose variable while holding values of other explanatory variables constant. Curves (plots) developed using the Dose-Response Calculator overcome the need to hold variables constant by using values extracted from the predicted response surface of a spatially explicit statistical model fit in a GIS, which include the variation of all explanatory variables, to visualize the univariate response to the dose variable. Application of the Dose-Response Calculator can be extended beyond the assessment of statistical model predictions and may be used to visualize the relationship between any two raster GIS datasets (see example in tool instructions). This tool generates tabular data for use in further exploration of dose-response relationships and a graph of the dose-response curve.
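The core operation the tool performs, summarizing one raster (the response) as a function of another (the dose), can be sketched in a few lines of Python. The arrays, bin count, and habitat-cover example below are illustrative assumptions, not the tool's actual implementation:

```python
import numpy as np

def dose_response_curve(dose, response, n_bins=10):
    """Bin one raster (the dose) and average the co-located response values.

    dose, response: arrays of equal shape standing in for two raster layers.
    Returns the dose-bin centers and the mean response in each bin.
    """
    d, r = dose.ravel(), response.ravel()
    edges = np.linspace(d.min(), d.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    # assign each pixel to a dose bin (clip so d.max() lands in the last bin)
    idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
    means = np.array([r[idx == i].mean() if np.any(idx == i) else np.nan
                      for i in range(n_bins)])
    return centers, means

# toy rasters: predicted occupancy rises with percent habitat cover
rng = np.random.default_rng(0)
cover = rng.uniform(0.0, 100.0, size=(50, 50))
occupancy = 0.01 * cover + rng.normal(0.0, 0.05, cover.shape)
centers, curve = dose_response_curve(cover, occupancy)
```

Because the curve is computed from the full response surface rather than from a model with covariates held constant, the spread within each bin reflects the variation of all other explanatory variables, which is the point made in the abstract.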

  3. Calculating the Responses of Self-Powered Radiation Detectors.

    Science.gov (United States)

    Thornton, D. A.

Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self-Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement, explanations have been sought and presented. Two major limitations of analytic models have been identified: neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are addressed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field, as well as the internal charge deposition effects of the transport of photons and electrons, have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of an SPD is evaluated as the sum of contributions from individual

  4. The calculated neutron response of a sphere with the multi-counters

    International Nuclear Information System (INIS)

    Li Taosheng; Yang Lianzhen; Li Dongyu

    2004-01-01

Based on the differences in the neutron distribution within the moderator, three position-sensitive proportional counters, mutually perpendicular, are inserted into the moderator. The energy responses for six spherical moderators and six incidence directions have been calculated with the MCNP4A code. The calculated results for two methods of dividing the spherical moderator into radial regions have been analyzed and compared. (authors)

  5. Site response calculations for nuclear power plants

    International Nuclear Information System (INIS)

    Wight, L.H.

    1975-01-01

    Six typical sites consisting of three soil profiles with average shear wave velocities of 800, 1800, and 5000 ft/sec as well as two soil depths of 200 and 400 ft were considered. Seismic input to these sites was a synthetic accelerogram applied at the surface and corresponding to a statistically representative response spectrum. The response of each of these six sites to this input was calculated with the SHAKE program. The results of these calculations are presented

  6. Calculation of ex-core detector responses

    Energy Technology Data Exchange (ETDEWEB)

Wouters, R. de; Haedens, M. [Tractebel Engineering, Brussels (Belgium)]; Baenst, H. de [Electrabel, Brussels (Belgium)]

    2005-07-01

The purpose of this work, carried out by Tractebel Engineering, is to develop and validate a method for predicting the ex-core detector responses in the NPPs operated by Electrabel. Practical applications are: prediction of ex-core calibration coefficients for startup power ascension, replacement of xenon transients by theoretical predictions, and analysis of a Rod Drop Accident. The neutron diffusion program PANTHER calculates node-integrated fission sources which are combined with nodal importances representing the contribution of a neutron born in that node to the ex-core response. These importances are computed with the Monte Carlo program MCBEND in adjoint mode, with a model of the whole core at full power. Other core conditions are treated using sensitivities of the ex-core responses to water densities, computed with forward Monte Carlo. The Scaling Factors (SF), or ratios of the measured currents to the calculated response, have been established on a total of 550 in-core flux maps taken in four NPPs. The method has been applied to 15 startup transients, using the average SF obtained from previous cycles, and to 28 xenon transients, using the SF obtained from the in-core map immediately preceding the transient. The values of power (P) and axial offset (AOi) reconstructed with the theoretical calibration agree well with the measured values. The ex-core responses calculated during a rod drop transient have been successfully compared with available measurements, and with theoretical data obtained by alternative methods. In conclusion, the method is adequate for the practical applications previously listed. (authors)
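The combination step described above, folding node-integrated fission sources into adjoint-computed nodal importances and comparing with a measured current, reduces to a weighted sum. A minimal sketch with made-up values (the node data, magnitudes, and measured current are assumptions for illustration, not Tractebel data):

```python
import numpy as np

# hypothetical node-integrated fission sources from a nodal diffusion solve
fission_source = np.array([1.0e16, 2.5e16, 1.8e16, 0.9e16])  # fissions/s per node

# hypothetical adjoint-computed importances: contribution of one neutron
# born in each node to the ex-core detector current (illustrative magnitudes)
importance = np.array([3.0e-26, 1.2e-26, 0.8e-26, 2.5e-26])  # A per (fission/s)

# calculated ex-core response: sum over nodes of source times importance
calculated_response = float(fission_source @ importance)

# Scaling Factor: ratio of a measured current to the calculated response,
# as established in the paper from in-core flux maps (value invented here)
measured_current = 8.0e-10  # A
scaling_factor = measured_current / calculated_response
```

Peripheral nodes typically dominate the ex-core signal, which the larger importances on the first and last nodes are meant to suggest.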

  7. MONTE CARLO CALCULATION OF THE ENERGY RESPONSE OF THE NARF HURST-TYPE FAST- NEUTRON DOSIMETER

    Energy Technology Data Exchange (ETDEWEB)

    De Vries, T. W.

    1963-06-15

The response function for the fast-neutron dosimeter was calculated by the Monte Carlo technique (Code K-52) and compared with a calculation based on the Bragg-Gray principle. The energy deposition spectra so obtained show that the response spectra become softer with increased incident neutron energy above 3 MeV. The K-52 calculated total response is more nearly constant with energy than the Bragg-Gray response. The former increases 70 percent from 1 MeV to 14 MeV while the latter increases 135 percent over this energy range. (auth)

  8. Calculation of reactivity using a finite impulse response filter

    Energy Technology Data Exchange (ETDEWEB)

    Suescun Diaz, Daniel [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil); Senra Martinez, Aquilino [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)], E-mail: aquilino@lmp.ufrj.br; Carvalho Da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)

    2008-03-15

A new formulation is presented in this paper to solve the inverse kinetics equation. This method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. Reactivity can be written as a summation of convolutions of the power history with impulse responses, characteristic of a linear system. For its digital form the Z-transform is used, which is the discrete version of the Laplace transform. This new method of reactivity calculation has very special features, among which it can be pointed out that the linear part is characterized by a finite impulse response (FIR) filter. The FIR filter will always be stable and time-invariant and, apart from this, it can be implemented in non-recursive form. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way.
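The non-recursive implementation mentioned in the abstract can be illustrated directly: each output sample is a weighted sum of the current and past inputs only, so no feedback appears and stability is automatic. The coefficients and power history below are toy values; in the paper the coefficients would follow from the Z-transform of the point kinetics relations, not from this choice:

```python
import numpy as np

def fir_filter(signal, coeffs):
    """Non-recursive FIR filtering: y[n] = sum_k h[k] * x[n-k].

    There is no feedback term, so the filter is unconditionally stable
    and time-invariant, as noted in the abstract.
    """
    y = np.zeros(len(signal))
    for n in range(len(signal)):
        for k in range(len(coeffs)):
            if n >= k:
                y[n] += coeffs[k] * signal[n - k]
    return y

# toy filter coefficients and power history (illustrative only)
h = np.array([0.5, 0.3, 0.2])
power_history = np.array([1.0, 1.0, 1.0, 2.0, 2.0])
y = fir_filter(power_history, h)
```

The same result follows from `np.convolve(power_history, h)` truncated to the signal length, which is the standard vectorized form of the convolution sum.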

  9. Proportional counter response calculations for gallium solar neutrino detectors

    International Nuclear Information System (INIS)

    Kouzes, R.T.; Reynolds, D.

    1989-01-01

Gallium-based solar neutrino detectors are sensitive to the primary pp reaction in the sun. Two experiments using gallium, SAGE in the Soviet Union and GALLEX in Europe, are under construction and will produce data by 1989. The radioactive {sup 71}Ge produced by neutrinos interacting with the gallium detector material is chemically extracted and counted in miniature proportional counters. A number of calculations have been carried out to simulate the response of these counters to the decay of {sup 71}Ge and to background events

  10. Numerical calculation models of the elastoplastic response of a structure under seismic action

    International Nuclear Information System (INIS)

    Edjtemai, Nima.

    1982-06-01

Two numerical calculation models developed in this work have made it possible to analyze the exact dynamic behaviour of ductile structures with one or several degrees of freedom during earthquakes. With the first model, response spectra were built in the linear and non-linear domains for different damping and ductility values and two types of seismic accelerograms. The comparative study of these spectra made it possible to check the validity of certain hypotheses suggested for the construction of elastoplastic spectra from the corresponding linear spectra. A simplified method of non-linear seismic calculation based on modal analysis and elastoplastic response spectra was then applied to structures with a varying number of degrees of freedom. The results obtained in this manner were compared with those provided by an exact calculation using the second numerical model developed by us [fr

  11. A new approach to calculating spatial impulse responses

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt

    1997-01-01

    Using linear acoustics the emitted and scattered ultrasound field can be found by using spatial impulse responses as developed by Tupholme (1969) and Stepanishen (1971). The impulse response is calculated by the Rayleigh integral by summing the spherical waves emitted from all of the aperture...

  12. Base response arising from free-field motions

    International Nuclear Information System (INIS)

    Whitley, J.R.; Morgan, J.R.; Hall, W.J.; Newmark, N.M.

    1977-01-01

    A procedure is illustrated in this paper for deriving (estimating) from a free-field record the horizontal base motions of a building, including horizontal rotation and translation. More specifically the goal was to compare results of response calculations based on derived accelerations with the results of calculations based on recorded accelerations. The motions are determined by assuming that an actual recorded ground wave transits a rigid base of a given dimension. Calculations given in the paper were made employing the earthquake acceleration time histories of the Hollywood storage building and the adjacent P.E. lot for the Kern County (1952) and San Fernando (1971) earthquakes. (Auth.)

  13. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

The response of a structure subjected to nonstationary random vibration, such as earthquake excitation, is itself nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate random response, and its time integral corresponds to the total energy of the response. In this paper, a simplified calculation method to obtain the integral of the mean square value of the response is proposed. As input excitation, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various values of the parameters. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
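As a rough numerical counterpart to the quantity studied here, the integral of the mean square response can be estimated by brute-force Monte Carlo for a single-degree-of-freedom oscillator under envelope-modulated white noise. All parameters below are assumed for illustration; this is not the paper's simplified method, which is analytical:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T, n_samples = 0.01, 10.0, 200
t = np.arange(0.0, T, dt)                          # 1000 time points
envelope = np.exp(-0.3 * t) - np.exp(-0.6 * t)     # earthquake-type envelope
omega, zeta = 2.0 * np.pi, 0.05                    # 1 Hz oscillator, 5% damping

ms = np.zeros_like(t)                              # ensemble mean square
for _ in range(n_samples):
    w = rng.normal(0.0, 1.0, len(t)) / np.sqrt(dt)  # discrete white noise
    x = v = 0.0
    resp = np.empty_like(t)
    for i in range(len(t)):
        # semi-implicit Euler step of x'' + 2*zeta*omega*x' + omega^2*x = f(t)
        a = envelope[i] * w[i] - 2.0 * zeta * omega * v - omega**2 * x
        v += a * dt
        x += v * dt
        resp[i] = x
    ms += resp**2
ms /= n_samples

# integral of the mean square response: the total-energy-like measure
integral_ms = float(np.sum(ms) * dt)
```

A closed-form or simplified method such as the one proposed replaces this ensemble averaging, which converges only as the square root of the sample count.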

  14. Exact-exchange-based quasiparticle calculations

    International Nuclear Information System (INIS)

    Aulbur, Wilfried G.; Staedele, Martin; Goerling, Andreas

    2000-01-01

    One-particle wave functions and energies from Kohn-Sham calculations with the exact local Kohn-Sham exchange and the local density approximation (LDA) correlation potential [EXX(c)] are used as input for quasiparticle calculations in the GW approximation (GWA) for eight semiconductors. Quasiparticle corrections to EXX(c) band gaps are small when EXX(c) band gaps are close to experiment. In the case of diamond, quasiparticle calculations are essential to remedy a 0.7 eV underestimate of the experimental band gap within EXX(c). The accuracy of EXX(c)-based GWA calculations for the determination of band gaps is as good as the accuracy of LDA-based GWA calculations. For the lowest valence band width a qualitatively different behavior is observed for medium- and wide-gap materials. The valence band width of medium- (wide-) gap materials is reduced (increased) in EXX(c) compared to the LDA. Quasiparticle corrections lead to a further reduction (increase). As a consequence, EXX(c)-based quasiparticle calculations give valence band widths that are generally 1-2 eV smaller (larger) than experiment for medium- (wide-) gap materials. (c) 2000 The American Physical Society

  15. Calculations of dosimetric parameters and REM meter response for Be(d, n) source

    International Nuclear Information System (INIS)

    Chen Changmao

    1988-01-01

Based on recent data on neutron spectra, the average energy, effective energy and fluence-to-dose-equivalent conversion coefficient are calculated for some Be (α, n) neutron sources of different types and structures. The responses of the 2202D and 0075 REM meters to these neutron spectra are also estimated. The results indicate that the relationship between average energy and the conversion coefficient or REM meter response can be described by simple functions

  16. Global calculation of PWR reactor core using the two group energy solution by the response matrix method

    International Nuclear Information System (INIS)

    Conti, C.F.S.; Watson, F.V.

    1991-01-01

A computational code to solve the two-energy-group neutron diffusion problem has been developed based on the Response Matrix Method. That method solves the global problem of a PWR core without using the cross-section homogenization process; it is thus equivalent to a pointwise core calculation. The present version of the code calculates the response matrices by the first-order perturbative method and considers expansions in arbitrary-order Fourier series for the boundary and interior fluxes. (author)

  17. Semi-classical calculation of the spin-isospin response functions

    International Nuclear Information System (INIS)

    Chanfray, G.

    1987-03-01

    We present a semi-classical calculation of the nuclear response functions beyond the Thomas-Fermi approximation. We apply our formalism to the spin-isospin responses and show that the surface peaked h/2π corrections considerably decrease the ratio longitudinal/transverse as obtained through hadronic probes

  18. Development of NRESP98 Monte Carlo codes for the calculation of neutron response functions of neutron detectors. Calculation of the response function of spherical BF{sub 3} proportional counter

    Energy Technology Data Exchange (ETDEWEB)

    Hashimoto, M.; Saito, K.; Ando, H. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center

    1998-05-01

A method to calculate the response function of the spherical BF{sub 3} proportional counter, which is commonly used as a neutron dose rate meter and as a neutron spectrometer with a multi-moderator system, is developed. As the calculation code for evaluating the response function, the existing code series NRESP, a Monte Carlo code for the calculation of response functions of neutron detectors, is selected. However, since the application scope of the existing NRESP is restricted, NRESP98 has been tuned as a generally applicable code, with expansion of the geometrical conditions, the applicable elements, etc. NRESP98 is tested against the response function of the spherical BF{sub 3} proportional counter. Including the effect of the distribution of the amplification factor, the detailed evaluation of charged-particle transport, and the effect of the statistical distribution, the NRESP98 results fit the experiment within {+-}10%. (author)

  19. Online detector response calculations for high-resolution PET image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Pratx, Guillem [Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Levin, Craig, E-mail: cslevin@stanford.edu [Departments of Radiology, Physics and Electrical Engineering, and Molecular Imaging Program at Stanford, Stanford University, Stanford, CA 94305 (United States)

    2011-07-07

Positron emission tomography systems are best described by a linear shift-varying model. However, image reconstruction often assumes simplified shift-invariant models to the detriment of image quality and quantitative accuracy. We investigated a shift-varying model of the geometrical system response based on an analytical formulation. The model was incorporated within a list-mode, fully 3D iterative reconstruction process in which the system response coefficients are calculated online on a graphics processing unit (GPU). The implementation requires less than 512 MB of GPU memory and can process two million events per minute (forward and backprojection). For small detector volume elements, the analytical model compared well to reference calculations. Images reconstructed with the shift-varying model achieved higher quality and quantitative accuracy than those that used a simpler shift-invariant model. For an 8 mm sphere in a warm background, the contrast recovery was 95.8% for the shift-varying model versus 85.9% for the shift-invariant model. In addition, the spatial resolution was more uniform across the field-of-view: for an array of 1.75 mm hot spheres in air, the variation in reconstructed sphere size was 0.5 mm RMS for the shift-invariant model, compared to 0.07 mm RMS for the shift-varying model.

  20. Comparison of calculated and measured spectral response and intrinsic efficiency for a boron-loaded plastic neutron detector

    Energy Technology Data Exchange (ETDEWEB)

    Kamykowski, E.A. (Grumman Corporate Research Center, Bethpage, NY (United States))

    1992-07-15

    Boron-loaded scintillators offer the potential for neutron spectrometers with a simplified, peak-shaped response. The Monte Carlo code, MCNP, has been used to calculate the detector characteristics of a scintillator made of a boron-loaded plastic, BC454, for neutrons between 1 and 7 MeV. Comparisons with measurements are made of spectral response for neutron energies between 4 and 6 MeV and of intrinsic efficiencies for neutrons up to 7 MeV. In order to compare the calculated spectra with measured data, enhancements to MCNP were introduced to generate tallies of light output spectra for recoil events terminating in a final capture by {sup 10}B. The comparison of measured and calculated spectra shows agreement in response shape, full width at half maximum, and recoil energy deposition. Intrinsic efficiencies measured to 7 MeV are also in agreement with the MCNP calculations. These results validate the code predictions and affirm the value of MCNP as a useful tool for development of sensor concepts based on boron-loaded plastics. (orig.).

  1. Calculation of the spin-isospin response functions in an extended semi-classical theory

    International Nuclear Information System (INIS)

    Chanfray, G.

    1987-01-01

We present a semi-classical calculation of the spin-isospin response functions beyond Thomas-Fermi theory. We show that surface-peaked ℏ² corrections reduce the collective effects predicted by Thomas-Fermi calculations. These effects, small for a volume response, become important for surface responses probed by hadrons. This yields a considerable improvement in the agreement with the (p, p') Los Alamos data

  2. Goal based mesh adaptivity for fixed source radiation transport calculations

    International Nuclear Information System (INIS)

    Baker, C.M.J.; Buchan, A.G.; Pain, C.C.; Tollit, B.S.; Goffin, M.A.; Merton, S.R.; Warner, P.

    2013-01-01

Highlights: ► Derives an anisotropic goal based error measure for shielding problems. ► Reduces the error in the detector response by optimizing the finite element mesh. ► Anisotropic adaptivity captures material interfaces using fewer elements than AMR. ► A new residual based on the numerical scheme chosen forms the error measure. ► The error measure also combines the forward and adjoint metrics in a novel way. - Abstract: In this paper, the application of goal based error measures for anisotropic adaptivity applied to shielding problems in which a detector is present is explored. Goal based adaptivity is important when the response of a detector is required to ensure that dose limits are adhered to. To achieve this, a dual (adjoint) problem is solved, which recasts the neutron transport equation in terms of the response variable, in this case the detector response. The methods presented can be applied to general finite element solvers; however, the derivation of the residuals is dependent on the underlying finite element scheme, which is also discussed in this paper. Once error metrics for the forward and adjoint solutions have been formed, they are combined using a novel approach. The two metrics are combined by forming the minimum ellipsoid that covers both of the error metrics rather than taking the maximum ellipsoid that is contained within the metrics. Another novel approach used within this paper is the construction of the residual. The residual, used to form the goal based error metrics, is calculated from the subgrid scale correction which is inherent in the underlying spatial discretisation employed.

  3. Base response arising from free-field motions

    International Nuclear Information System (INIS)

    Whitley, J.R.; Morgan, J.R.; Hall, W.J.; Newmark, N.M.

    1977-01-01

    A procedure is illustrated in this paper for deriving (estimating) from a free-field record the horizontal base motions of a building, including horizontal rotation and translation. More specifically the goal was to compare results of response calculations based on derived accelerations with the results of calculations based on recorded accelerations. The motions are determined by assuming that an actual recorded ground wave transits a rigid base of a given dimension. Calculations given in the paper were made employing the earthquake acceleration time histories of the Hollywood storage building and the adjacent P.E. lot for the Kern County (1952) and San Fernando (1971) earthquakes. For the Kern County earthquake the derived base corner accelerations, including the effect of rotation show generally fair agreement with the spectra computed from the Hollywood storage corner record. For the San Fernando earthquake the agreement between the spectra computed from derived base corner accelerations and that computed from the actual basement corner record is not as good as that for the Kern County earthquake. These limited studies admittedly are hardly a sufficient basis on which to form a judgment, but these differences noted probably can be attributed in part to foundation distortion, building feedback, distance between measurement points, and soil structure interaction; it was not possible to take any of these factors into account in these particular calculations
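The rigid-base assumption in this procedure can be sketched numerically: if a recorded ground wave transits a rigid base of length L at apparent speed c, the two ends of the base see the same record offset by tau = L/c, and the torsional rotation and mean translation follow from the difference and average of the two end motions. The geometry, wave speed, and toy displacement record below are assumptions for illustration, not the Hollywood storage building data:

```python
import numpy as np

dt = 0.01                   # record time step, s
L, c = 66.0, 1000.0         # assumed base length (m) and apparent wave speed (m/s)
tau = L / c                 # transit time across the base
lag = int(round(tau / dt))  # delay in samples

t = np.arange(0.0, 20.0, dt)
# toy displacement record standing in for a recorded free-field motion
u = 0.05 * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.1 * t)

u_far = np.concatenate([np.zeros(lag), u[:-lag]])  # far end sees the wave later
rotation = (u - u_far) / L          # horizontal (torsional) base rotation, rad
translation = 0.5 * (u + u_far)     # mean base translation
```

Response spectra computed from `translation` plus the rotation-induced corner motion could then be compared against spectra from a recorded corner accelerogram, which is the comparison the paper carries out.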

  4. Calculated energy response of lithium fluoride finger-tip dosimeters

    International Nuclear Information System (INIS)

    Johns, T.F.

    1965-07-01

    Calculations have been made of the energy response of the lithium fluoride thermoluminescent dosimeters being used at A.E.E. Winfrith for the measurement of radiation doses to the finger-tips of people handling radio-active materials. It is shown that the energy response is likely to be materially affected if the sachet in which the powder is held contains elements with atomic numbers much higher than 9 (e.g. if the sachet is made from polyvinyl chloride). (author)

  5. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    DEFF Research Database (Denmark)

    Rinker, Jennifer M.

    2016-01-01

This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project...

  6. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    International Nuclear Information System (INIS)

    Rinker, Jennifer M.

    2016-01-01

This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of the convergence. The Sobol SIs are calculated using the calibrated response surface, and the convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and nonstationarity parameter are negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity. (paper)
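The workflow in this abstract, calibrating a polynomial response surface to input/output data and then estimating Sobol indices cheaply from the surface, can be sketched on a synthetic additive load model. The model, inputs, and noise level below are invented for illustration; freezing the other input at its mean to estimate Var(E[y|x_i]) is exact only for additive surfaces like this one:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic training data standing in for the "previous project" runs:
# a load that is linear in two standardized turbulence inputs on [0, 1]
n = 2000
x1 = rng.uniform(0.0, 1.0, n)   # e.g. scaled mean wind speed (assumed)
x2 = rng.uniform(0.0, 1.0, n)   # e.g. scaled turbulence intensity (assumed)
y = 2.0 * x1 + 1.0 * x2 + rng.normal(0.0, 0.01, n)

# calibrate a (here linear) polynomial response surface by least squares
A = np.column_stack([np.ones(n), x1, x2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def surface(a, b):
    return coef[0] + coef[1] * a + coef[2] * b

# first-order Sobol indices S_i = Var(E[y|x_i]) / Var(y), estimated by
# Monte Carlo on the cheap surface rather than by rerunning the simulator
m = 20000
s1 = rng.uniform(0.0, 1.0, m)
s2 = rng.uniform(0.0, 1.0, m)
var_y = surface(s1, s2).var()
S1 = surface(s1, 0.5).var() / var_y
S2 = surface(0.5, s2).var() / var_y
```

For this model the analytic values are S1 = 0.8 and S2 = 0.2, since the input variances are equal and the coefficients are 2 and 1; the surface makes the Monte Carlo estimate essentially free compared with aeroelastic simulation.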

  7. Calculation and applications of the frequency dependent neutron detector response functions

    International Nuclear Information System (INIS)

Van Dam, H.; Van der Hagen, T.H.J.J.; Hoogenboom, J.E.; Keijzer, J.

    1994-01-01

The theoretical basis is presented for the evaluation of the frequency dependent function that enables calculation of the response of a neutron detector to parametric fluctuations ('noise') or oscillations in a reactor core. This function describes the 'field of view' of a detector and can be calculated with a static transport code under certain conditions, which are discussed. Two applications are presented: the response of an ex-core detector to void fraction fluctuations in a BWR, and the response of both in-core and ex-core detectors to a rotating neutron absorber near or inside a research reactor core. (authors). 7 refs., 4 figs

  8. NFAP calculation of pressure response of 1/6th scale model containment structure

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1988-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  9. Programmable calculator: alternative to minicomputer-based analyzer

    International Nuclear Information System (INIS)

    Hochel, R.C.

    1979-01-01

    Described are a number of typical field and laboratory counting systems that use standard stand-alone multichannel analyzers (MCA) interfaced to a Hewlett-Packard Company (HP 9830) programmable calculator. Such systems can offer significant advantages in cost and flexibility over a minicomputer-based system. Because most laboratories tend to accumulate MCAs over the years, the programmable calculator also offers an easy way to upgrade the laboratory while making optimum use of existing systems. Software programs are easily tailored to fit a variety of general or specific applications. The only disadvantage of the calculator versus a computer-based system is the speed of analysis; however, for most applications this handicap is minimal. The applications discussed give a brief overview of the power and flexibility of the MCA-calculator approach to automated counting and data reduction

  10. Dielectric response of periodic systems from quantum Monte Carlo calculations.

    Science.gov (United States)

    Umari, P; Williamson, A J; Galli, Giulia; Marzari, Nicola

    2005-11-11

    We present a novel approach that allows us to calculate the dielectric response of periodic systems in the quantum Monte Carlo formalism. We employ a many-body generalization of the electric-enthalpy functional, where the coupling with the field is expressed via the Berry-phase formulation of the macroscopic polarization. A self-consistent local Hamiltonian then determines the ground-state wave function, allowing for accurate diffusion quantum Monte Carlo calculations in which the polarization's fixed point is estimated from the average over an iterative sequence, sampled via forward walking. This approach has been validated for the case of an isolated hydrogen atom and then applied to a periodic system, to calculate the dielectric susceptibility of molecular-hydrogen chains. The results found are in excellent agreement with the best estimates obtained from the extrapolation of quantum-chemistry calculations.

  11. Calculation of seismic response of a flexible rotor by complex modal method, 1

    International Nuclear Information System (INIS)

    Azuma, Takao; Saito, Shinobu

    1984-01-01

    In rotary machines during earthquakes, the key concerns are whether the rotating and stationary parts come into contact and whether the bearings and seals are damaged. To examine these problems, it is necessary to analyze the seismic response of a rotary shaft, or sometimes of a casing system, but conventional analysis methods are unsatisfactory. Accordingly, for a general shaft system supported on slide bearings and subject to gyroscopic effects, the complex modal method must be used. This calculation method is explained in detail in Lancaster's book; however, when it is applied to the seismic response of rotary shafts, the calculation time varies considerably with the method of final integration. In this study, good results were obtained with a method that does not depend on numerical integration. The equation of motion and its solution, the displacement vector of a foundation, the verification of the calculation program, and an example of calculating the seismic response of two coupled rotor shafts are reported. (Kako, I.)

  12. Response-Based Estimation of Sea State Parameters

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam

    2007-01-01

    of measured ship responses. It is therefore interesting to investigate how the filtering aspect, introduced by FRF, affects the final outcome of the estimation procedures. The paper contains a study based on numerical generated time series, and the study shows that filtering has an influence...... calculated by a 3-D time domain code and by closed-form (analytical) expressions, respectively. Based on comparisons with wave radar measurements and satellite measurements it is seen that the wave estimations based on closedform expressions exhibit a reasonable energy content, but the distribution of energy...

  13. Sampling of Stochastic Input Parameters for Rockfall Calculations and for Structural Response Calculations Under Vibratory Ground Motion

    International Nuclear Information System (INIS)

    M. Gross

    2004-01-01

    The purpose of this scientific analysis is to define the sampled values of stochastic (random) input parameters for (1) rockfall calculations in the lithophysal and nonlithophysal zones under vibratory ground motions, and (2) structural response calculations for the drip shield and waste package under vibratory ground motions. This analysis supplies: (1) sampled values of ground motion time history and synthetic fracture pattern for analysis of rockfall in emplacement drifts in nonlithophysal rock (Section 6.3 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (2) sampled values of ground motion time history and rock mechanical properties category for analysis of rockfall in emplacement drifts in lithophysal rock (Section 6.4 of ''Drift Degradation Analysis'', BSC 2004 [DIRS 166107]); (3) sampled values of ground motion time history and metal-to-metal and metal-to-rock friction coefficients for analysis of waste package and drip shield damage due to vibratory ground motion in ''Structural Calculations of Waste Package Exposed to Vibratory Ground Motion'' (BSC 2004 [DIRS 167083]) and in ''Structural Calculations of Drip Shield Exposed to Vibratory Ground Motion'' (BSC 2003 [DIRS 163425]). The sampled values are indices representing the number of ground motion time histories, number of fracture patterns and rock mass properties categories. These indices are translated into actual values within the respective analysis and model reports or calculations. This report identifies the uncertain parameters and documents the sampled values for these parameters. The sampled values are determined by GoldSim V6.04.007 [DIRS 151202] calculations using appropriate distribution types and parameter ranges. No software development or model development was required for these calculations. The calculation of the sampled values allows parameter uncertainty to be incorporated into the rockfall and structural response calculations that support development of the seismic scenario for the

  14. Double sliding-window technique: a new method to calculate the neuronal response onset latency.

    Science.gov (United States)

    Berényi, Antal; Benedek, György; Nagy, Attila

    2007-10-31

    Neuronal response onset latency provides important data on the information processing within the central nervous system. In order to enhance the quality of onset latency estimation, we have developed a 'double sliding-window' technique, which combines the advantages of mathematical methods with the reliability of standard statistical processes. This method is based on repetitive series of statistical probes between two virtual time windows. The layout of the significance curve reveals the starting points of changes in neuronal activity in the form of break-points between linear segments. A second-order difference function is applied to determine the position of maximum slope change, which corresponds to the onset of the response. In comparison with Poisson spike-train analysis, the cumulative sum technique and the method of Falzett et al., this 'double sliding-window' technique seems to be a more accurate automated procedure for calculating the response onset latency of a broad range of neuronal response characteristics.
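
    A hypothetical sketch of the double sliding-window idea (a simple mean-difference statistic stands in for the statistical probe; this is not the authors' implementation):

```python
def double_sliding_window_onset(counts, width=5):
    """Estimate the response-onset bin from binned spike counts.

    Two adjacent windows of `width` bins slide along the sequence; a simple
    mean-difference statistic between them stands in for the statistical
    probe, and the onset is taken where the second-order difference of the
    resulting curve is largest (the point of maximum slope change).
    """
    stats = []
    for i in range(width, len(counts) - width):
        pre = counts[i - width:i]          # window before the candidate point
        post = counts[i:i + width]         # window after the candidate point
        stats.append(sum(post) / width - sum(pre) / width)
    # second-order difference of the "significance" curve
    d2 = [stats[k + 1] - 2 * stats[k] + stats[k - 1]
          for k in range(1, len(stats) - 1)]
    # onset = position of maximum slope change, mapped back to a bin index
    return width + 1 + max(range(len(d2)), key=lambda k: abs(d2[k]))

# Baseline of ~1 spike/bin stepping to ~5 spikes/bin at bin 20:
counts = [1] * 20 + [5] * 20
print(double_sliding_window_onset(counts))   # → 20
```

    A real implementation would replace the mean difference with a proper two-sample statistical test, as the record describes.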

  15. Calculation of integrated biological response in brachytherapy

    International Nuclear Information System (INIS)

    Dale, Roger G.; Coles, Ian P.; Deehan, Charles; O'Donoghue, Joseph A.

    1997-01-01

    Purpose: To present analytical methods for calculating or estimating the integrated biological response in brachytherapy applications, which allow for the presence of dose gradients. Methods and Materials: The approach uses linear-quadratic (LQ) formulations to identify an equivalent biologically effective dose (BED_eq) which, if applied to a specified tissue volume, would produce the same biological effect as that achieved by a given brachytherapy application. For simple geometrical cases, BED multiplying factors have been derived which allow the equivalent BED for tumors to be estimated from a single BED value calculated at a dose reference point. For more complex brachytherapy applications a voxel-by-voxel determination of the equivalent BED will be more accurate. Equations are derived which, when incorporated into brachytherapy software, would facilitate such a process. Results: At both high and low dose rates, the BEDs calculated at the dose reference point are shown to be lower than the true values by an amount which depends primarily on the magnitude of the prescribed dose; the BED multiplying factors are higher for smaller prescribed doses. The multiplying factors are less dependent on the assumed radiobiological parameters. In most clinical applications involving multiple sources, particularly those in multiplanar arrays, the multiplying factors are likely to be smaller than those derived here for single sources. The overall suggestion is that the radiobiological consequences of dose gradients in well-designed brachytherapy treatments, although important, may be less significant than is sometimes supposed. The modeling exercise also demonstrates that the integrated biological effect associated with fractionated high-dose-rate (FHDR) brachytherapy will usually be different from that for an 'equivalent' continuous low-dose-rate (CLDR) regime. For practical FHDR regimes involving relatively small numbers of fractions, the integrated biological effect to
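
    The LQ biologically effective dose that these multiplying factors scale has a simple closed form for fractionated treatments, BED = nd(1 + d/(α/β)). A minimal sketch with generic illustrative values (not the paper's parameters):

```python
def bed_fractionated(n, d, alpha_beta):
    """Biologically effective dose (Gy) for n fractions of size d (Gy),
    using the standard LQ expression BED = n*d*(1 + d/(alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

# Illustrative values only: 6 HDR fractions of 7 Gy, tumour alpha/beta = 10 Gy.
print(round(bed_fractionated(6, 7.0, 10.0), 2))   # → 71.4
```

    In a dose gradient, evaluating this expression voxel by voxel (with d the local fraction dose) and averaging the effect is what the record's equivalent-BED approach formalizes.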

  16. Data base to compare calculations and observations

    International Nuclear Information System (INIS)

    Tichler, J.L.

    1985-01-01

    Meteorological and climatological data bases were compared with known tritium release points and diffusion calculations to determine if calculated concentrations could replace measured concentrations at the monitoring stations. Daily tritium concentrations were monitored at 8 stations and 16 possible receptors. Automated data retrieval strategies are listed

  17. Heavy Ion SEU Cross Section Calculation Based on Proton Experimental Data, and Vice Versa

    CERN Document Server

    Wrobel, F; Pouget, V; Dilillo, L; Ecoffet, R; Lorfèvre, E; Bezerra, F; Brugger, M; Saigné, F

    2014-01-01

    The aim of this work is to provide a method to calculate single event upset (SEU) cross sections by using experimental data. Valuable tools such as PROFIT and SIMPA already focus on the calculation of the proton cross section by using heavy-ion cross-section experiments. However, there is no available tool that calculates heavy-ion cross sections based on measured proton cross sections with no knowledge of the technology. We based our approach on the diffusion-collection model, with the aim of analyzing the characteristics of the transient currents that trigger SEUs. We show that experimental cross sections can be used to characterize the pulses that trigger an SEU. Experimental results nevertheless allow an empirical rule to be defined that identifies the transient currents responsible for an SEU. The SEU cross section can then be calculated for any kind of particle and any energy with no need to know the Spice model of the cell. We applied our method to several technologies (250 nm, 90 nm and 65 nm bulk SRAMs) and we sho...

  18. NFAP calculation of the response of a 1/6 scale reinforced concrete containment model

    International Nuclear Information System (INIS)

    Costantino, C.J.; Pepper, S.; Reich, M.

    1989-01-01

    The details associated with the NFAP calculation of the pressure response of the 1/6th scale model containment structure are discussed in this paper. Comparisons are presented of some of the primary items of interest with those determined from the experiment. It was found from this comparison that the hoop response of the containment wall was adequately predicted by the NFAP finite element calculation, including the response in the high pressure, high strain range at which cracking of the concrete and yielding of the hoop reinforcement occurred. In the vertical or meridional direction, it was found that the model was significantly softer than predicted by the finite element calculation; that is, the vertical strains in the test were three to four times larger than computed in the NFAP calculation. These differences were noted even at low strain levels at which the concrete would not be expected to be cracked under tensile loadings. Simplified calculations for the containment indicate that the vertical stiffness of the wall is similar to that which would be determined by assuming the concrete fully cracked. Thus, the experiment indicates an anomalous behavior in the vertical direction

  19. CLEAR (Calculates Logical Evacuation And Response): A generic transportation network model for the calculation of evacuation time estimates

    International Nuclear Information System (INIS)

    Moeller, M.P.; Desrosiers, A.E.; Urbanik, T. II

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies. (author)
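
    The speed-density relation described above (velocity on a road segment as a function of its vehicle density) can be sketched with a simple linear, Greenshields-type model. The functional form and parameter values below are illustrative assumptions, not those used in CLEAR:

```python
def link_speed(density, v_free=88.0, k_jam=200.0):
    """Speed (km/h) on a segment as a linear function of vehicle density
    (veh/km): free-flow speed at zero density, zero speed at jam density."""
    if density >= k_jam:
        return 0.0
    return v_free * (1.0 - density / k_jam)

def traversal_time(length_km, density):
    """Hours to traverse a segment, or None if the segment is jammed."""
    v = link_speed(density)
    return None if v == 0.0 else length_km / v

print(link_speed(50.0))           # → 66.0
print(traversal_time(2.0, 50.0))  # hours for a 2 km segment at 50 veh/km
```

    An evacuation simulation then advances vehicles segment by segment with such traversal times, queueing them when downstream segments reach jam density.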

  20. CLEAR (Calculates Logical Evacuation And Response): A Generic Transportation Network Model for the Calculation of Evacuation Time Estimates

    Energy Technology Data Exchange (ETDEWEB)

    Moeller, M. P.; Urbanik, II, T.; Desrosiers, A. E.

    1982-03-01

    This paper describes the methodology and application of the computer model CLEAR (Calculates Logical Evacuation And Response) which estimates the time required for a specific population density and distribution to evacuate an area using a specific transportation network. The CLEAR model simulates vehicle departure and movement on a transportation network according to the conditions and consequences of traffic flow. These include handling vehicles at intersecting road segments, calculating the velocity of travel on a road segment as a function of its vehicle density, and accounting for the delay of vehicles in traffic queues. The program also models the distribution of times required by individuals to prepare for an evacuation. In order to test its accuracy, the CLEAR model was used to estimate evacuation times for the emergency planning zone surrounding the Beaver Valley Nuclear Power Plant. The Beaver Valley site was selected because evacuation time estimates had previously been prepared by the licensee, Duquesne Light, as well as by the Federal Emergency Management Agency and the Pennsylvania Emergency Management Agency. A lack of documentation prevented a detailed comparison of the estimates based on the CLEAR model and those obtained by Duquesne Light. However, the CLEAR model results compared favorably with the estimates prepared by the other two agencies.

  1. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material of electromagnetic protection. The wave-absorbing characteristics depend on the electromagnetic parameter of mixed media. In order to accurately predict the electromagnetic parameter of mixed media and facilitate the design of wave-absorbing material, based on the electromagnetic parameters of spherical and flaky carbonyl iron mixture of paraffin base, this paper studied two different interpolation methods: Lagrange interpolation and Hermite interpolation of electromagnetic parameters. The results showed that Hermite interpolation is more accurate than the Lagrange interpolation, and the reflectance calculated with the electromagnetic parameter obtained by interpolation is consistent with that obtained through experiment on the whole. - Highlights: • We use interpolation algorithm on calculation of EM-parameter with limited samples. • Interpolation method can predict EM-parameter well with different particles added. • Hermite interpolation is more accurate than Lagrange interpolation. • Calculating RL based on interpolation is consistent with calculating RL from experiment
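
    The comparison the record describes can be illustrated with a toy example: on the same two sample points, Hermite interpolation also uses derivative information, so it can be markedly more accurate than Lagrange interpolation. This sketch uses a generic cubic test function, not measured electromagnetic parameters:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs[i], ys[i]) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

def cubic_hermite(x0, x1, y0, y1, m0, m1, x):
    """Cubic Hermite interpolation on [x0, x1] from values y and slopes m."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * y0 + h10 * h * m0 + h01 * y1 + h11 * h * m1

# Same two sample points of f(x) = x**3; Hermite also uses f'(x) = 3*x**2.
f = lambda x: x ** 3
print(lagrange([0.0, 2.0], [f(0.0), f(2.0)], 1.0))              # → 4.0
print(cubic_hermite(0.0, 2.0, f(0.0), f(2.0), 0.0, 12.0, 1.0))  # → 1.0 (exact)
```

    With only two nodes, Lagrange interpolation is linear and misses the true value f(1) = 1 badly, while the cubic Hermite form reproduces any cubic exactly.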

  2. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)

  3. Response surfaces and sensitivity analyses for an environmental model of dose calculations

    Energy Technology Data Exchange (ETDEWEB)

    Iooss, Bertrand [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)]. E-mail: bertrand.iooss@cea.fr; Van Dorpe, Francois [CEA Cadarache, DEN/DTN/SMTM/LMTE, 13108 Saint Paul lez Durance, Cedex (France); Devictor, Nicolas [CEA Cadarache, DEN/DER/SESI/LCFR, 13108 Saint Paul lez Durance, Cedex (France)

    2006-10-15

    A parametric sensitivity analysis is carried out on GASCON, a radiological impact software package describing the transfer of radionuclides to man following a chronic gaseous release from a nuclear facility. An effective dose received by age group can thus be calculated for a specific radionuclide and release duration. In this study, we are concerned with 18 output variables, each depending on approximately 50 uncertain input parameters. First, the generation of 1000 Monte-Carlo simulations allows us to calculate correlation coefficients between input parameters and output variables, which give a first overview of important factors. Response surfaces are then constructed in polynomial form and used to predict system responses at reduced computation cost; this response surface will be very useful for global sensitivity analysis, where thousands of runs are required. Using the response surfaces, we calculate the total sensitivity indices of Sobol by the Monte-Carlo method. We demonstrate the application of this method to one site of study and to one reference group near the nuclear research Center of Cadarache (France), for two radionuclides: iodine 129 and uranium 238. It is thus shown that the most influential parameters are all related to the food chain of the goat's milk, in decreasing order of importance: the 'effective ingestion' dose coefficient, the goat's-milk ration of the individuals of the reference group, the grass ration of the goat, the dry deposition velocity and the transfer factor to the goat's milk.
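
    The first screening step (ranking inputs by their correlation with the output) can be sketched as follows; the three-parameter dose model is a toy stand-in for GASCON, with hypothetical input names:

```python
import math
import random

random.seed(42)

def toy_dose(ingestion_coef, milk_ration, deposition):
    """Toy dose model: strongly driven by the first two inputs, weakly by the third."""
    return 5.0 * ingestion_coef + 2.0 * milk_ration + 0.1 * deposition

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1000 Monte-Carlo draws of the three uncertain inputs (uniform on [0, 1]).
inputs = [[random.random() for _ in range(3)] for _ in range(1000)]
doses = [toy_dose(*row) for row in inputs]

corrs = {}
for col, name in enumerate(("ingestion", "milk ration", "deposition")):
    corrs[name] = pearson([row[col] for row in inputs], doses)
    print(f"{name:12s} r = {corrs[name]:+.2f}")
```

    The ranking recovers the model's construction: the ingestion coefficient dominates, the milk ration is secondary, and the deposition term is lost in the Monte-Carlo noise, which is exactly the kind of first overview the record describes before fitting response surfaces.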

  4. Determining dose rate with a semiconductor detector - Monte Carlo calculations of the detector response

    Energy Technology Data Exchange (ETDEWEB)

    Nordenfors, C

    1999-02-01

    To determine the dose rate in a gamma radiation field from measurements with a semiconductor detector, it is necessary to know how the detector affects the field. This work aims to describe this effect with Monte Carlo simulations and calculations, that is, to identify the detector response function. This is done for a germanium gamma detector. The detector is normally used in the in-situ measurements that are carried out regularly at the department. After the response function is determined, it is used to reconstruct a spectrum from an in-situ measurement, a so-called unfolding. This makes it possible to calculate the fluence rate and dose rate directly from a measured (and unfolded) spectrum. The Monte Carlo code used in this work is EGS4, developed mainly at the Stanford Linear Accelerator Center. It is a widely used code package for simulating particle transport. The results of this work indicate that the method could be used as-is, since its accuracy is comparable to that of other methods already in use for measuring dose rate. Bearing in mind that this method provides the nuclide-specific dose, it is useful in radiation protection, since knowing the relations between different nuclides and how they change is very important when estimating the risks
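
    The unfolding step can be sketched as a linear inverse problem. The 2x2 response matrix below is an illustrative assumption; a real germanium response matrix would come from EGS4-style simulations as described above:

```python
def unfold_2x2(R, m):
    """Invert a 2x2 response matrix to recover the true spectrum t from m = R t."""
    (a, b), (c, d) = R
    det = a * d - b * c
    return [(d * m[0] - b * m[1]) / det,
            (-c * m[0] + a * m[1]) / det]

# Assumed response matrix: column j is the detector response to line j
# (diagonal = full-energy-peak efficiency, off-diagonal = cross-talk).
R = [[0.8, 0.1],
     [0.2, 0.7]]
true = [100.0, 50.0]                       # "true" two-line spectrum
measured = [R[0][0] * true[0] + R[0][1] * true[1],
            R[1][0] * true[0] + R[1][1] * true[1]]

print(unfold_2x2(R, measured))             # recovers ≈ [100.0, 50.0]
```

    Real unfolding works with hundreds of channels and noisy counts, so regularized least squares or iterative schemes replace the direct inversion, but the model m = R t is the same.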

  5. Linear response calculation using the canonical-basis TDHFB with a schematic pairing functional

    International Nuclear Information System (INIS)

    Ebata, Shuichiro; Nakatsukasa, Takashi; Yabana, Kazuhiro

    2011-01-01

    A canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory is obtained with an approximation that the pair potential is assumed to be diagonal in the time-dependent canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even nuclei. E1 strength distributions for proton-rich Mg isotopes are systematically calculated. The calculation suggests strong Landau damping of giant dipole resonance for drip-line nuclei.

  6. Applying the universal neutron transport codes to the calculation of well-logging probe response at different rock porosities

    International Nuclear Information System (INIS)

    Bogacz, J.; Loskiewicz, J.; Zazula, J.M.

    1991-01-01

    The use of universal neutron transport codes to calculate the parameters of well-logging probes represents a new approach, first tried in the U.S.A. and UK in the eighties. This paper deals with the first such attempt in Poland. The work is based on the MORSE code developed at Oak Ridge National Laboratory in the U.S.A. Using the CG MORSE code we calculated the neutron detector response when surrounded by sandstone of porosities 19% and 38%. In the course of the work it turned out to be necessary to investigate different methods of estimating the neutron flux. The stochastic estimation method used in the original MORSE code (next-collision approximation) cannot be used because of the slow convergence of its variance. Using an analog type of estimation (calculating the sum of track lengths inside the detector) we obtained results of acceptable variance (∼ 20%) for source-detector spacings smaller than 40 cm. The influence of porosity on detector response is correctly described for a detector positioned at 27 cm from the source. At the moment the variances are quite large. (author). 33 refs, 8 figs, 8 tabs

  7. Calculations of accelerator-based neutron sources characteristics

    International Nuclear Information System (INIS)

    Tertytchnyi, R.G.; Shorin, V.S.

    2000-01-01

    Accelerator-based quasi-monoenergetic neutron sources (the T(p,n), D(d,n), T(d,n) and 7Li(p,n) reactions) are widely used in experiments measuring the interaction cross-sections of fast neutrons with nuclei. The present work describes a code for calculating the yields and spectra of neutrons generated in (p,n) and (d,n) reactions on targets of light nuclei (D, T, 7Li). The peculiarities of the stopping processes of charged particles (with incident energy up to 15 MeV) in multilayer and multicomponent targets are taken into account. The code is implemented as 'SOURCE', a subroutine for the well-known MCNP code. Some calculation results for the most popular accelerator-based neutron sources are given. (authors)
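
    The yield calculation such a code performs can be sketched with the standard thick-target integral: as the projectile slows from its incident energy to the reaction threshold, the yield per particle is Y = n_t ∫ σ(E)/S(E) dE. The cross section, stopping power and target density below are toy assumptions, not evaluated nuclear data:

```python
def sigma(E):
    """Toy (p,n) cross section in cm^2: zero below a 2 MeV threshold."""
    return 0.0 if E < 2.0 else 1e-25 * (E - 2.0)

def stopping_power(E):
    """Toy stopping power dE/dx in MeV/cm, roughly ~ 1/E (assumed)."""
    return 600.0 / E

def thick_target_yield(E0, n_t=5e22, steps=1000):
    """Neutrons per incident particle: Y = n_t * integral of sigma(E)/S(E) dE,
    evaluated with the trapezoidal rule from the threshold up to E0."""
    E_min = 2.0
    dE = (E0 - E_min) / steps
    total = 0.0
    for i in range(steps):
        E_lo = E_min + i * dE
        E_hi = E_lo + dE
        total += 0.5 * (sigma(E_lo) / stopping_power(E_lo)
                        + sigma(E_hi) / stopping_power(E_hi)) * dE
    return n_t * total

print(f"{thick_target_yield(5.0):.3e}")    # → 1.500e-04 neutrons per proton
```

    A multilayer target, as handled by the record's code, would simply switch σ, S and n_t at each layer boundary while continuing the same integral.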

  8. Calculation of Multisphere Neutron Spectrometer Response Functions in Energy Range up to 20 MeV

    CERN Document Server

    Martinkovic, J

    2005-01-01

    The multisphere neutron spectrometer is a basic instrument for neutron measurements in the scattered radiation fields at charged-particle accelerators for radiation protection and dosimetry purposes. Precise calculation of the spectrometer response functions is a necessary condition for proper unfolding of neutron spectra. The results of the response-function calculations for the JINR spectrometer with a LiI(Eu) detector (a set of 6 homogeneous and 1 heterogeneous moderators, and the "bare" detector with and without a cadmium cover) are given for two irradiation geometries - a uniform monodirectional and a uniform isotropic neutron field. The calculation was carried out with the code MCNP in the neutron energy range from 10^-8 MeV to 20 MeV.

  9. Particle-hole calculation of the longitudinal response function of 12C

    International Nuclear Information System (INIS)

    Dellafiore, A.; Lenz, F.; Brieva, F.A.

    1985-01-01

    The longitudinal response function of 12C in the range of momentum transfers 200 MeV/c ≤ q ≤ 550 MeV/c is calculated in the Tamm-Dancoff approximation. The particle-hole Green's function is evaluated by means of a doorway-state expansion. This method allows us to take into account finite-range residual interactions in the continuum, including exchange processes. At low momentum transfers, calculations agree qualitatively with the data. The data cannot be reproduced at momentum transfers around 450 MeV/c. This discrepancy can be accounted for neither by uncertainties in the residual interaction, nor by more complicated processes in the nuclear final states

  10. THE ACCOUNTING POSTEMPLOYMENT BENEFITS BASED ON ACTUARIAL CALCULATIONS

    Directory of Open Access Journals (Sweden)

    Anna CEBOTARI

    2017-11-01

    Accounting for post-employment benefits on the basis of actuarial calculations at present remains a subject studied in Moldova only theoretically. Applying actuarial calculations in accounting in fact reflects its evolving character. Because national accounting standards have been adapted to international ones, which in turn require the valuation of assets and liabilities at fair value, there is a need to draw up exact calculations grounded in probability theory and mathematical statistics. One of the main objectives of accounting information is to be reflected in financial statements and provided to internal and external users of the entity. Hence arises the need to report highly reliable information, which can be provided by applying actuarial calculations.

  11. Calculation of Lightning Transient Responses on Wind Turbine Towers

    Directory of Open Access Journals (Sweden)

    Xiaoqing Zhang

    2013-01-01

    An efficient method is proposed in this paper for calculating lightning transient responses on wind turbine towers. In the proposed method, the actual tower body is simplified as a multiconductor grid in the shape of a cylinder. A set of formulas is given for evaluating the circuit parameters of the branches in the multiconductor grid. On the basis of these circuit parameters, the multiconductor grid is further converted into an equivalent circuit. The circuit equation is built in the frequency domain to take into account the effect of the frequency-dependent characteristic of the resistances and inductances on lightning transients. The lightning transient responses can be obtained by using the discrete Fourier transform with exponential sampling to take the inverse transform of the frequency-domain solution of the circuit equation. A numerical example is given to examine the applicability of the proposed method.
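
    The frequency-domain solution flow can be sketched on a single R-L branch (an illustrative reduction with assumed parameters, not the multiconductor tower model); note the paper uses a discrete Fourier transform with exponential sampling, while this sketch uses a plain DFT:

```python
import cmath
import math

def dft(x):
    """Plain discrete Fourier transform (O(N^2), fine for a sketch)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each time sample."""
    N = len(X)
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N).real
            for n in range(N)]

# Assumed branch parameters: resistance (ohm), inductance (H), time step (s).
R, L, dt, N = 10.0, 1e-6, 1e-7, 64
i_t = [math.sin(2 * math.pi * n / N) for n in range(N)]   # injected current

# Branch impedance Z(w) = R + jwL, with negative frequencies for k > N/2
# so that the reconstructed voltage is real.
I_f = dft(i_t)
Z = []
for k in range(N):
    kk = k if k <= N // 2 else k - N
    Z.append(R + 1j * (2 * math.pi * kk / (N * dt)) * L)

# Voltage response: multiply in the frequency domain, transform back.
v_t = idft([Zk * Ik for Zk, Ik in zip(Z, I_f)])
# For this single harmonic, v(t) matches R*i(t) + L*di/dt analytically.
```

    The full method applies the same transform-multiply-invert pattern to the whole equivalent-circuit equation, with frequency-dependent R and L in each branch.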

  12. A Proposal of Estimation Methodology to Improve Calculation Efficiency of Sampling-based Method in Nuclear Data Sensitivity and Uncertainty Analysis

    International Nuclear Information System (INIS)

    Song, Myung Sub; Kim, Song Hyun; Kim, Jong Kyung; Noh, Jae Man

    2014-01-01

    The uncertainty with the sampling-based method is evaluated by repeating transport calculations with a number of cross-section data sets sampled from the covariance uncertainty data. In the transport calculation with the sampling-based method, the transport equation is not modified; therefore, all uncertainties of the responses such as k-eff, reaction rates, flux and power distribution can be obtained directly, all at one time, without code modification. However, a major drawback of the sampling-based method is that it requires an expensive computational load to obtain statistically reliable results (within a 0.95 confidence level) in the uncertainty analysis. The purpose of this study is to develop a method for improving the computational efficiency and obtaining highly reliable uncertainty results when using the sampling-based method with Monte Carlo simulation. The proposed method reduces the convergence time of the response uncertainty by using multiple sets of sampled group cross sections in a single Monte Carlo simulation. The proposed method was verified by estimating the GODIVA benchmark problem, and the results were compared with those of the conventional sampling-based method. In this study, a sampling-based method based on the central limit theorem is proposed to improve calculation efficiency by reducing the number of repetitive Monte Carlo transport calculations required to obtain reliable uncertainty analysis results. Each set of sampled group cross sections is assigned to one group of active cycles in a single Monte Carlo simulation. The criticality uncertainty for the GODIVA problem is evaluated by the proposed and the previous method. The results show that the proposed sampling-based method can efficiently decrease the number of Monte Carlo simulations required to evaluate the uncertainty of k-eff. It is expected that the proposed method will improve the computational efficiency of uncertainty analysis with the sampling-based method
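
    The overall sampling-based procedure can be sketched with a toy response. The two-parameter k-eff model and its sampling distributions below are assumptions for illustration, and each sample here is an independent evaluation rather than one cycle group of a single Monte Carlo run as in the paper's refinement:

```python
import random
import statistics

random.seed(1)

def toy_keff(sigma_f, sigma_a):
    """Stand-in response: k = nu * sigma_f / sigma_a with nu = 2.5 (assumed)."""
    return 2.5 * sigma_f / sigma_a

# Each iteration plays the role of one sampled cross-section set drawn from
# the covariance data (toy normal distributions here).
samples = []
for _ in range(1000):
    sigma_f = random.gauss(1.0, 0.02)
    sigma_a = random.gauss(2.6, 0.03)
    samples.append(toy_keff(sigma_f, sigma_a))

mean_k = statistics.fmean(samples)
std_k = statistics.stdev(samples)
print(f"k-eff = {mean_k:.4f} +/- {std_k:.4f}")
```

    In the real method each "evaluation" is a full transport calculation, which is why reusing one simulation for many sampled sets pays off so strongly.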

  13. Wavelet-based linear-response time-dependent density-functional theory

    Science.gov (United States)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.; Philouze, Christian; Balakirev, Maxim Y.

    2012-06-01

Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However, the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

  14. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

As water depth increases, the structural safety and reliability of ocean engineering systems become increasingly important and challenging, so structural reliability methods must be applied in designs such as offshore platforms. If the performance function is known explicitly, the first-order second-moment method is often used; if it cannot be expressed explicitly, the response surface method is commonly used because of its clear logic and simple programming. However, the traditional response surface method fits a quadratic polynomial, which limits accuracy: the true limit state surface is fitted well only in the area near the checking point. This paper proposes an intelligent computing method based on the whole response surface for cases where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface is first constructed for the whole area, and the structural reliability is then calculated by a genetic algorithm. Because all sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Worked examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis is largely reduced, the calculation accuracy is improved, and the true limit state surface is fitted well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
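The idea of fitting a response surface to a performance function and then computing a failure probability from the fitted surface can be sketched as follows; the limit-state function `true_g`, the quadratic fit, and the plain Monte Carlo step are illustrative stand-ins for the paper's fuzzy-neural-network and genetic-algorithm machinery.

```python
import numpy as np

def true_g(x):
    """Hypothetical limit-state (performance) function; failure when g < 0."""
    return 3.0 - x[..., 0]**2 - 0.5 * x[..., 1]

def quadratic_design_matrix(X):
    """Full quadratic polynomial basis in two variables."""
    return np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                            X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])

def fit_quadratic_surface(X, y):
    """Least-squares fit of the response surface coefficients."""
    coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
    return coef

def surface_g(coef, X):
    """Evaluate the fitted response surface."""
    return quadratic_design_matrix(X) @ coef

rng = np.random.default_rng(1)
X_train = rng.normal(0.0, 1.0, size=(50, 2))          # sample points
coef = fit_quadratic_surface(X_train, true_g(X_train))

# Failure probability estimated on the cheap surrogate surface.
X_mc = rng.normal(0.0, 1.0, size=(100000, 2))
pf = np.mean(surface_g(coef, X_mc) < 0.0)
```

Here `true_g` happens to be quadratic, so the fit is exact; for a real limit state the fit is only local, which is exactly the weakness of the quadratic-polynomial approach that the paper addresses.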

  15. Improved response function calculations for scintillation detectors using an extended version of the MCNP code

    CERN Document Server

    Schweda, K

    2002-01-01

The analysis of (e,e'n) experiments at the Darmstadt superconducting electron linear accelerator S-DALINAC required the calculation of neutron response functions for the NE213 liquid scintillation detectors used. In an open geometry, these response functions can be obtained using the Monte Carlo codes NRESP7 and NEFF7. However, for more complex geometries, an extended version of the Monte Carlo code MCNP exists. This extended version of the MCNP code was improved upon by adding individual light-output functions for charged particles. In addition, more than one volume can be defined as a scintillator, thus allowing the simultaneous calculation of the response for multiple detector setups. With the implementation of ¹²C(n,n'3α) reactions, all relevant reactions for neutron energies E_n < 20 MeV are now taken into consideration. The results of these calculations were compared to experimental data using monoenergetic neutrons in an open geometry and a ²⁵²Cf neutron source in th...

  16. The giant resonances in hot nuclei. Linear response calculations

    International Nuclear Information System (INIS)

    Braghin, F.L.; Vautherin, D.; Abada, A.

    1995-01-01

The isovector response function of hot nuclear matter is calculated using various effective Skyrme interactions. For Skyrme forces with a small effective mass, the strength distribution is found to be nearly independent of temperature and shows little collective effect. In contrast, effective forces with an effective mass close to unity produce sizeable collective effects at zero temperature, which disappear at temperatures of a few MeV. The relevance of these results to the saturation, beyond T = 3 MeV, of the multiplicity of photons emitted by the giant dipole resonance in hot nuclei observed in recent experiments is discussed. (authors). 12 refs., 3 figs

  17. Towards SSVEP-based, portable, responsive Brain-Computer Interface.

    Science.gov (United States)

    Kaczmarek, Piotr; Salomon, Pawel

    2015-08-01

A Brain-Computer Interface for motion control applications requires high system responsiveness and accuracy. This paper presents an SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier. The observed stimulus is recognized from a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The results suggest that the T-H classifier significantly increases classifier performance, yielding an accuracy of 76% while keeping the average false-positive detection rate for stimuli other than the observed one between 2-13%, depending on stimulus frequency. It was shown that the T-H classifier parameters that maximize the true-positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results on a test group (N=4) suggest that for the T-H classifier there exists a set of parameters for which the system accuracy is similar to that obtained with a user-trained classifier.
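A minimal sketch of the recognition chain described above, assuming synthetic EEG: canonical correlation between a 1-second window and sine/cosine references for each stimulus frequency, followed by a threshold-with-hysteresis decision. All signals, frequencies, and thresholds here are invented for illustration.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_correlations(eeg, freqs, fs):
    """Correlate a 1-s multi-channel EEG window with sin/cos references."""
    t = np.arange(eeg.shape[0]) / fs
    refs = [np.column_stack([np.sin(2*np.pi*f*t), np.cos(2*np.pi*f*t)])
            for f in freqs]
    return np.array([max_canonical_corr(r, eeg) for r in refs])

def th_classify(corr_trace, hi=0.6, lo=0.4):
    """Threshold classifier with hysteresis: on above `hi`, off below `lo`."""
    state, out = False, []
    for c in corr_trace:
        if not state and c > hi:
            state = True
        elif state and c < lo:
            state = False
        out.append(state)
    return out

# Synthetic 2-channel EEG carrying an 8 Hz SSVEP component plus noise.
fs, freqs = 256, [8.0, 10.0, 12.0]
rng = np.random.default_rng(0)
t = np.arange(fs) / fs
signal = np.sin(2*np.pi*8.0*t)
eeg = np.column_stack([signal, 0.5*signal]) + 0.3*rng.normal(size=(fs, 2))
corrs = ssvep_correlations(eeg, freqs, fs)
best = int(np.argmax(corrs))   # index of the recognized stimulus
```

The hysteresis keeps the interface from flickering between detections when the correlation hovers near a single threshold.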

  18. Wavelet-based linear-response time-dependent density-functional theory

    International Nuclear Information System (INIS)

    Natarajan, Bhaarathi; Genovese, Luigi; Casida, Mark E.; Deutsch, Thierry; Burchak, Olga N.

    2012-01-01

Highlights: ► LR-TD-DFT has been implemented in the pseudopotential wavelet-based program BIGDFT. ► The results are compared against an all-electron Gaussian-type orbital program. ► Orbital energies converge significantly faster for BIGDFT than for DEMON2K. ► We report the X-ray crystal structure of the small organic molecule flugi6. ► The measured and calculated absorption spectra of flugi6 are also reported. - Abstract: Linear-response time-dependent (TD) density-functional theory (DFT) has been implemented in the pseudopotential wavelet-based electronic structure program BIGDFT and results are compared against those obtained with the all-electron Gaussian-type orbital program DEMON2K for the calculation of electronic absorption spectra of N2 using the TD local density approximation (LDA). The two programs give comparable excitation energies and absorption spectra once suitably extensive basis sets are used. Convergence of LDA density orbitals and orbital energies to the basis-set limit is significantly faster for BIGDFT than for DEMON2K. However, the number of virtual orbitals used in TD-DFT calculations is a parameter in BIGDFT, while all virtual orbitals are included in TD-DFT calculations in DEMON2K. As a reality check, we report the X-ray crystal structure and the measured and calculated absorption spectrum (excitation energies and oscillator strengths) of the small organic molecule N-cyclohexyl-2-(4-methoxyphenyl)imidazo[1,2-a]pyridin-3-amine.

  19. Calculation of foundation response to spatially varying ground motion by finite element method

    International Nuclear Information System (INIS)

    Wang, F.; Gantenbein, F.

    1995-01-01

    This paper presents a general method to compute the response of a rigid foundation of arbitrary shape resting on a homogeneous or multilayered elastic soil when subjected to a spatially varying ground motion. The foundation response is calculated from the free-field ground motion and the contact tractions between the foundation and the soil. The spatial variation of ground motion in this study is introduced by a coherence function and the contact tractions are obtained numerically using the Finite Element Method in the process of calculating the dynamic compliance of the foundation. Applications of this method to a massless rigid disc supported on an elastic half space and to that founded on an elastic medium consisting of a layer of constant thickness supported on an elastic half space are described. The numerical results obtained are in very good agreement with analytical solutions published in the literature. (authors). 5 refs., 8 figs

  20. Child-Level Predictors of Responsiveness to Evidence-Based Mathematics Intervention.

    Science.gov (United States)

    Powell, Sarah R; Cirino, Paul T; Malone, Amelia S

    2017-07-01

We identified child-level predictors of responsiveness to 2 types of mathematics (calculation and word-problem) intervention among 2nd-grade children with mathematics difficulty. Participants were 250 children in 107 classrooms in 23 schools, pretested on mathematics and general cognitive measures and posttested on mathematics measures. Classrooms were randomly assigned to calculation intervention, word-problem intervention, or business-as-usual control. Intervention lasted 17 weeks. Path analyses indicated that scores on working memory and language comprehension assessments moderated responsiveness to calculation intervention. No moderators were identified for responsiveness to word-problem intervention. Across both intervention groups and the control group, attentive behavior predicted both outcomes. Initial calculation skill predicted the calculation outcome, and initial language comprehension predicted word-problem outcomes. These results indicate that screening for calculation intervention should include a focus on working memory, language comprehension, attentive behavior, and calculations. Screening for word-problem intervention should focus on attentive behavior and word problems.

  1. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    Directory of Open Access Journals (Sweden)

    Wu Wan'e

    2012-01-01

Full Text Available A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulations was put forward; a calculation model for the primary combustion characteristics of boron-based fuel-rich propellant, based on a backpropagation neural network, was established, validated, and then used to predict the primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of the burning rate is less than ±7.3%; in the formulation range (hydroxyl-terminated polybutadiene 28%–32%, ammonium perchlorate 30%–35%, magnalium alloy 4%–8%, catocene 0%–5%, and boron 30%), the variation of the calculated data is consistent with the experimental results.
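A minimal backpropagation network of the kind the abstract describes can be sketched as below; the "formulation fractions → burning rate" data are synthetic stand-ins, not the propellant data of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "formulation fractions -> burning rate" data (invented stand-in).
X = rng.uniform(0.0, 1.0, size=(200, 4))
y = (0.5 + 0.8*X[:, 0] - 0.3*X[:, 1] + 0.2*X[:, 2]*X[:, 3]).reshape(-1, 1)

# One-hidden-layer network trained by plain backpropagation (BP).
W1 = rng.normal(0.0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(20000):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # linear output: predicted burning rate
    err = pred - y
    # gradients via backpropagation of the mean-squared error
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)    # error propagated through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr*gW2; b2 -= lr*gb2; W1 -= lr*gW1; b1 -= lr*gb1

rel_err = np.abs(pred - y) / np.abs(y)  # relative prediction error per sample
```

On this smooth synthetic mapping the mean relative error drops well below the ±7.3% figure reported for the real burning-rate model.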

  2. Analytic models of spectral responses of fiber-grating-based interferometers on FMC theory.

    Science.gov (United States)

    Zeng, Xiangkai; Wei, Lai; Pan, Yingjun; Liu, Shengping; Shi, Xiaohui

    2012-02-13

    In this paper the analytic models (AMs) of the spectral responses of fiber-grating-based interferometers are derived from the Fourier mode coupling (FMC) theory proposed recently. The interferometers include Fabry-Perot cavity, Mach-Zehnder and Michelson interferometers, which are constructed by uniform fiber Bragg gratings and long-period fiber gratings, and also by Gaussian-apodized ones. The calculated spectra based on the analytic models are achieved, and compared with the measured cases and those on the transfer matrix (TM) method. The calculations and comparisons have confirmed that the AM-based spectrum is in excellent agreement with the TM-based one and the measured case, of which the efficiency is improved up to ~2990 times that of the TM method for non-uniform-grating-based in-fiber interferometers.

  3. Software-Based Visual Loan Calculator For Banking Industry

    Science.gov (United States)

    Isizoh, A. N.; Anazia, A. E.; Okide, S. O. 3; Onyeyili, T. I.; Okwaraoka, C. A. P.

    2012-03-01

A loan calculator for the banking industry is very necessary in a modern banking system and must use sound design techniques for security reasons. This paper presents the software-based design and implementation of a visual loan calculator for the banking industry using Visual Basic .NET (VB.NET). The fundamental approach is to develop a Graphical User Interface (GUI) using VB.NET tools and then develop a working program that calculates the interest on any loan obtained. The VB.NET program was written and implemented, and the software proved satisfactory.
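The interest calculation such a loan calculator performs can be sketched with the standard amortization formula (shown in Python here rather than VB.NET, and independent of the paper's GUI):

```python
def monthly_payment(principal, annual_rate, months):
    """Fixed-rate loan payment from the standard amortization formula."""
    r = annual_rate / 12.0          # periodic (monthly) interest rate
    if r == 0.0:
        return principal / months   # interest-free loan
    return principal * r * (1.0 + r)**months / ((1.0 + r)**months - 1.0)

def total_interest(principal, annual_rate, months):
    """Total interest paid over the life of the loan."""
    return monthly_payment(principal, annual_rate, months) * months - principal
```

For example, a 200,000 loan at 6% annual interest over 360 months comes to a monthly payment of about 1,199.10.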

  4. MACK-IV, a new version of MACK: a program to calculate nuclear response functions from data in ENDF/B format

    International Nuclear Information System (INIS)

    Abdou, M.A.; Gohar, Y.; Wright, R.Q.

    1978-07-01

MACK-IV calculates nuclear response functions important to the neutronics analysis of nuclear and fusion systems. A central part of the code deals with the calculation of the nuclear response function for nuclear heating, more commonly known as the kerma factor. Pointwise and multigroup neutron kerma factors, individual reactions, and helium, hydrogen, and tritium production response functions are calculated from any basic nuclear data library in ENDF/B format. The program processes all reactions in the energy range of 0 to 20 MeV for fissionable and nonfissionable materials. The program also calculates the gamma production cross sections and the gamma production energy matrix. A built-in computational capability permits the code to calculate the cross sections in the resolved and unresolved resonance regions from resonance parameters in ENDF/B, with an option for Doppler broadening. All energy pointwise and multigroup data calculated by the code can be punched, printed and/or written on tape files. Multigroup response functions (e.g., kerma factors, reaction cross sections, gas production, atomic displacements, etc.) can be output in the format of MACK-ACTIVITY-Table, suitable for direct use with current neutron (and photon) transport codes

  5. Calculation for Primary Combustion Characteristics of Boron-Based Fuel-Rich Propellant Based on BP Neural Network

    OpenAIRE

    Wan'e, Wu; Zuoming, Zhu

    2012-01-01

A practical scheme for selecting characterization parameters of boron-based fuel-rich propellant formulation was put forward; a calculation model for primary combustion characteristics of boron-based fuel-rich propellant based on backpropagation neural network was established, validated, and then was used to predict primary combustion characteristics of boron-based fuel-rich propellant. The results show that the calculation error of burning rate is less than ±7.3%; in the formulation rang...

  6. Calculation of parameter failure probability of thermodynamic system by response surface and importance sampling method

    International Nuclear Information System (INIS)

    Shang Yanlong; Cai Qi; Chen Lisheng; Zhang Yangwei

    2012-01-01

In this paper, a combined response surface and importance sampling method was applied to calculate the parameter failure probability of a thermodynamic system. A mathematical model was presented for parameter failure of the physical process in the thermodynamic system, on which the combined response surface and importance sampling algorithm was established; the performance degradation model of the components and the simulation process of parameter failure in the physical process were also presented. The parameter failure probability of the purification water system in a nuclear reactor was obtained by the combined method. The results show that the combined method is effective for calculating the parameter failure probability of a thermodynamic system with high dimensionality and nonlinear characteristics: it achieves satisfactory precision with less computing time than the direct sampling method, while avoiding the drawbacks of the response surface method. (authors)
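The importance-sampling step can be illustrated on a one-dimensional stand-in: direct Monte Carlo wastes samples on a rare failure event, while sampling from a density shifted toward the failure region and reweighting by the density ratio recovers the same probability with far fewer samples. The performance function and distributions below are hypothetical.

```python
import numpy as np

def g(x):
    """Hypothetical performance function; failure when g < 0 (i.e. x > 4)."""
    return 4.0 - x

rng = np.random.default_rng(0)
n = 20000

# Direct Monte Carlo: x ~ N(0,1); failure is a rare event (~3.2e-5).
x = rng.normal(0.0, 1.0, n)
pf_direct = np.mean(g(x) < 0.0)

# Importance sampling: draw from N(4,1) centered at the design point,
# then reweight each sample by the density ratio phi(x) / phi(x - 4).
xs = rng.normal(4.0, 1.0, n)
w = np.exp(-0.5 * xs**2) / np.exp(-0.5 * (xs - 4.0)**2)
pf_is = np.mean((g(xs) < 0.0) * w)
```

With the same sample budget, `pf_is` lands within a few percent of the true tail probability, while `pf_direct` sees at most a handful of failures.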

  7. Analysis of the computational methods on the equipment shock response based on ANSYS environments

    International Nuclear Information System (INIS)

    Wang Yu; Li Zhaojun

    2005-01-01

With developments in equipment shock vibration theory, mathematical calculation methods, simulation techniques and other areas, equipment shock calculation methods are gradually developing from static to dynamic and from linear to nonlinear. The equipment shock calculation methods now applied worldwide in engineering practice mainly include the equivalent static force method, the Dynamic Design Analysis Method (DDAM) and the real-time simulation method. DDAM is based on modal analysis theory: it takes the shock design spectrum as the shock load and obtains the shock response of the integrated system by cross-modal integration in the frequency domain. The real-time simulation method carries out the computational analysis of the equipment shock response in the time domain: it uses time-history curves obtained from real-time measurement or spectrum transformation as the equipment shock load and iteratively solves the differential equation of system motion. Conclusions: Using DDAM and the real-time simulation method separately, this paper carries out the shock analysis of a three-dimensional frame floating raft in the ANSYS environment and analyzes the results, drawing the following conclusions. Because DDAM does not account for damping, nonlinear effects or phase differences between modal responses, its results are much larger than those of the real-time simulation method. The coupling response is complex when the modal result of a three-dimensional structure is calculated, and the coupling response in the non-shock direction is also much larger with DDAM than with the real-time simulation method. Both DDAM and the real-time simulation method have their merits and scope of application, and designers should select the design method that is economical and appropriate according to the features and anti

  8. Response matrix calculation of a Bonner Sphere Spectrometer using ENDF/B-VII libraries

    Energy Technology Data Exchange (ETDEWEB)

    Morató, Sergio; Juste, Belén; Miró, Rafael; Verdú, Gumersindo [Instituto de Seguridad Industrial, Radiofísica y Medioambiental (ISIRYM), Universitat Politècnica de València (Spain); Guardia, Vicent, E-mail: bejusvi@iqn.upv.es [GD Energy Services, Valencia (Spain). Grupo dominguis

    2017-07-01

The present work is focused on the reconstruction of neutron spectra using a multisphere spectrometer, also called a Bonner Sphere System (BSS). This requires determining the detector response curves, so we have obtained the response matrix of a neutron detector by Monte Carlo (MC) simulation with MCNP6, introducing the use of unstructured mesh geometries as a novelty. The aim of these curves is to study the theoretical response of a widespread neutron spectrometer exposed to neutron radiation. The neutron detector used in this work is a multisphere spectrometer (BSS) consisting of 6 high-density (0.95 g/cm³) polyethylene spheres of different diameters. The detector itself is a 4 mm x 4 mm cylindrical lithium iodide (6LiI) scintillator crystal, LUDLUM Model 42, coupled to a photomultiplier tube. Thermal scattering tables are required to include the polyethylene cross sections in the simulation; these data are essential for correct and accurate results in problems involving neutron thermalization. The available literature presents response matrices calculated with ENDF/B-V cross-section libraries (V. Mares et al., 1993) or with ENDF/B-VI (R. Vega Carrillo et al., 2007). This work introduces two novelties in calculating the response matrix: the use of unstructured meshes to simulate the geometry of the detector and the Bonner spheres, and the use of the updated ENDF/B-VII cross-section libraries. A set of simulations was performed to obtain the detector response matrix: 29 monoenergetic neutron beams between 10 keV and 20 MeV were used as sources for each moderator sphere, for a total of 174 simulations. Each monoenergetic source was defined with the same diameter as the moderating sphere used in its simulation, and the spheres were uniformly irradiated from the top of the photomultiplier tube. Some

  9. Calculations of the response functions of Bonner spheres with a spherical 3He proportional counter using a realistic detector model

    International Nuclear Information System (INIS)

    Wiegel, B.; Alevra, A.V.; Siebert, B.R.L.

    1994-11-01

A realistic geometry model of a Bonner sphere system with a spherical ³He-filled proportional counter and 12 polyethylene moderating spheres with diameters ranging from 7.62 cm (3'') to 45.72 cm (18'') is introduced. The MCNP Monte Carlo computer code is used to calculate the responses of this Bonner sphere system to monoenergetic neutrons in the energy range between 1 meV and 20 MeV. The relative uncertainties of the responses due to the Monte Carlo calculations are less than 1% for spheres up to 30.48 cm (12'') in diameter and less than 2% for the 15'' and 18'' spheres. Resonances in the carbon cross section are seen as significant structures in the response functions. Additional calculations were made to study the influence of the ³He number density and the polyethylene mass density on the response, as well as the angular dependence of the Bonner sphere system. The calculated responses can be adjusted to a large set of calibration measurements with only a single fit factor common to all sphere diameters and energies. (orig.) [de

  10. Analysis of Bi-directional Effects on the Response of a Seismic Base Isolation System

    International Nuclear Information System (INIS)

    Park, Hyung-Kui; Kim, Jung-Han; Kim, Min Kyu; Choi, In-Kil

    2014-01-01

The floor response spectrum (FRS) depends on the height of the floor in the structure and on the characteristics of the seismic base isolation system, such as its natural frequency and damping ratio. In a previous study, the floor response spectrum of the base-isolated structure was calculated for each axis without considering bi-directional effects. However, the shear behaviors of the seismic base isolation system in the two horizontal directions are correlated with each other by bi-directional effects, and a change in the shear behavior of the isolation system can influence the floor response spectrum and the displacement response of the isolators. In this study, the response of the seismic base isolation system including bi-directional effects was analyzed and their influence on the floor response spectrum was evaluated. Analysis of the time-history results confirms that, while the maximum shear force of the seismic base isolation system is unchanged, the shear force is generally smaller in the two-directional analysis than in the one-directional one over most of the record. Due to the overall decreased shear force, the floor response spectrum is more reduced in the two-directional analysis than in the one-directional one

  11. APPLICATION OF THE SPECTROMETRIC METHOD FOR CALCULATING THE DOSE RATE FOR CREATING CALIBRATION HIGHLY SENSITIVE INSTRUMENTS BASED ON SCINTILLATION DETECTION UNITS

    Directory of Open Access Journals (Sweden)

    R. V. Lukashevich

    2017-01-01

Full Text Available Devices based on scintillation detectors are highly sensitive to photon radiation and are widely used to measure the environmental dose rate. Modernization of the measuring path to minimize the error in measuring the detector response to gamma radiation has already reached its technological ceiling and no longer gives the desired effect; new methods of processing the acquired spectrometric information are more promising. The purpose of this work is the development of highly sensitive instruments based on scintillation detection units using a spectrometric method for calculating dose rate. The paper considers a spectrometric method of gamma-radiation dosimetry based on a transformation of the measured instrumental spectrum. Using predetermined or measured detector response functions to gamma radiation of given energy and flux density, a function of energy G(E) is determined. Using this function as the kernel of an integral transformation from the field quantity to the dose quantity, the dose value can be obtained directly from the current instrumental spectrum. Applying the function G(E) to the energy distribution of the photon fluence in the environment, the total dose rate can be determined without information on the distribution of radioisotopes in the environment. To determine G(E), the instrumental response functions of the scintillation detector to monoenergetic photon sources, as well as other characteristics, are calculated by the Monte Carlo method. The full energy range is then divided into sub-ranges, for which G(E) is calculated using linear interpolation. The article considers this spectrometric method for dose calculation using the function G(E), which allows the use of scintillation detection units for a wide range of dosimetry applications, and describes the method of calculating this function using Monte Carlo methods
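Once G(E) is known, obtaining the dose rate from the current instrumental spectrum amounts to a weighted sum over channels; a minimal sketch (the units and the linear stand-in for G are invented for illustration):

```python
import numpy as np

def dose_rate_from_spectrum(counts, energies, G, live_time):
    """Fold the instrumental spectrum with the dose-conversion kernel G(E):
    each channel's counts are weighted by G at that channel's energy,
    and the sum is normalized by the measurement live time."""
    return float(np.sum(counts * G(energies)) / live_time)

# Invented example: 3 channels, a linear stand-in for G(E), 2 s live time.
counts = np.array([1.0, 1.0, 1.0])      # counts per channel
energies = np.array([1.0, 2.0, 3.0])    # channel energies (MeV)
dose = dose_rate_from_spectrum(counts, energies, lambda E: 2.0 * E, 2.0)
```

In practice G(E) would be tabulated per energy sub-range from the Monte Carlo response functions and linearly interpolated, as the abstract describes.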

  12. The ripple electromagnetic calculation: accuracy demand and possible responses

    International Nuclear Information System (INIS)

    Cocilovo, V.; Ramogida, G.; Formisano, A.; Martone, R.; Portone, A.; Roccella, M.; Roccella, R.

    2006-01-01

Due to a number of causes (the finite number of toroidal field coils, or the presence of concentrated blocks of magnetic materials such as the neutral beam shielding), the actual magnetic configuration in a tokamak differs from the desired one. For example, a ripple is added to the ideal axisymmetric toroidal field, impacting the equilibrium and stability of the plasma column; as a further example, the magnetic field outside the plasma affects the operation of a number of critical components, including the diagnostic system and the neutral beam. The actual magnetic field therefore has to be suitably calculated and its shape controlled within the required limits. Due to the complexity of the design, the problem is quite critical for the ITER project. In this paper the problem is discussed from both the mathematical and the numerical point of view. In particular, a complete formulation is proposed, taking into account both the presence of nonlinear magnetic materials and the fully 3D geometry. The quality requirements are then discussed, including the accuracy of the calculations and the spatial resolution. Consequently, numerical tools able to fulfil these quality needs with a reasonable computational burden are considered: tools based on numerical FEM schemes and, in spite of the presence of nonlinear materials, the practical possibility of using Biot-Savart-based approaches as cross-check tools. The paper also analyses the geometrical simplifications able to make the calculation practical while guaranteeing the required accuracy. Finally, the characteristics required for a correction system able to effectively counteract the magnetic field degradation are presented. A number of examples are also reported and commented upon. (author)

  13. SCINFUL-QMD: Monte Carlo based computer code to calculate response function and detection efficiency of a liquid organic scintillator for neutron energies up to 3 GeV

    International Nuclear Information System (INIS)

    Satoh, Daiki; Sato, Tatsuhiko; Shigyo, Nobuhiro; Ishibashi, Kenji

    2006-11-01

The Monte Carlo based computer code SCINFUL-QMD has been developed to evaluate the response function and detection efficiency of a liquid organic scintillator for neutrons from 0.1 MeV to 3 GeV. This code is a modified version of SCINFUL, developed at Oak Ridge National Laboratory in 1988 to provide a calculated full response anticipated for neutron interactions in a scintillator. The upper limit of the applicable energy was extended from 80 MeV to 3 GeV by introducing quantum molecular dynamics incorporated with the statistical decay model (QMD+SDM) in the high-energy nuclear reaction part. The particles generated in QMD+SDM are neutrons, protons, deuterons, tritons, ³He nuclei, alpha particles, and charged pions. Secondary reactions by neutrons, protons, and pions inside the scintillator are also taken into account. With the extension of the applicable energy, the database of total cross sections for hydrogen and carbon nuclei was upgraded. This report describes the physical model, the computational flow, and how to use the code. (author)

  14. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    Energy Technology Data Exchange (ETDEWEB)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

KAERI is performing research to calculate coefficients of decommissioning work-unit productivity, used to estimate the time and cost of decommissioning work, based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the decommissioning information management system (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the decommissioning work-unit productivity calculation system (DEWOCS). In particular, the base data for calculating decommissioning costs take the form of a coded work breakdown structure (WBS) built on the decommissioning activity experience data for KRR-2, and the defined WBS codes are used by each system to calculate decommissioning costs. In this paper, we developed a program that can calculate decommissioning costs using the decommissioning experience of KRR-2, UCP, and other countries, through a mapping of similar target facilities between NPPs and KRR-2. The paper is organized as follows: Chapter 2 discusses the decommissioning work productivity calculation method and the method of mapping decommissioning target facilities in the calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. Determining the cost of decommissioning is difficult because facilities such as NPPs involve a large number of variables, such as the material, size, and radiological conditions of the target facility.
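The work-unit productivity coefficient described above reduces to man-hours per unit quantity, which can then be scaled to a mapped, similar target facility; a minimal sketch with invented numbers (not KAERI's actual data or program):

```python
def unit_productivity(man_hours, quantity):
    """Work-unit productivity coefficient: man-hours per unit quantity
    dismantled, derived from recorded decommissioning experience."""
    return man_hours / quantity

def estimate_duration(quantity, productivity, crew_size):
    """Estimated duration (hours) for a similar target facility, given its
    quantity, the experience-based productivity, and the crew size."""
    return quantity * productivity / crew_size

# Invented example: 1200 man-hours recorded for 300 m^2 of dismantling,
# scaled to a 500 m^2 target facility with a crew of 4.
p = unit_productivity(1200.0, 300.0)    # man-hours per m^2
hours = estimate_duration(500.0, p, 4)  # estimated working hours
```

A cost estimate would then multiply the man-hours by labor rates per WBS code, which is the role the experience database plays in the actual program.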

  15. Calculating Program for Decommissioning Work Productivity based on Decommissioning Activity Experience Data

    International Nuclear Information System (INIS)

    Song, Chan-Ho; Park, Seung-Kook; Park, Hee-Seong; Moon, Jei-kwon

    2014-01-01

    KAERI is performing research to calculate a coefficient for decommissioning work-unit productivity, in order to estimate the time and cost of decommissioning work based on decommissioning activity experience data for KRR-2. KAERI calculates decommissioning costs and manages decommissioning activity experience data through systems such as the Decommissioning Information Management System (DECOMMIS), the Decommissioning Facility Characterization DB System (DEFACS), and the Decommissioning Work-unit Productivity Calculation System (DEWOCS). In particular, KAERI bases its decommissioning cost calculations on a coded work breakdown structure (WBS) built from the KRR-2 decommissioning activity experience data; the defined WBS codes are used by each system to calculate decommissioning costs. In this paper, we developed a program that can calculate the decommissioning cost using the decommissioning experience of KRR-2, UCP, and other countries through the mapping of similar target facilities between an NPP and KRR-2. This paper is organized as follows. Chapter 2 discusses the decommissioning work productivity calculation method and the mapping method for decommissioning target facilities used in the productivity calculation program. At KAERI, research on various decommissioning methodologies for domestic NPPs will be conducted in the near future. In particular, it is difficult to determine the cost of decommissioning an NPP because of the number of variables involved, such as the material, size, and radiological conditions of the target facility.

  16. Time delays between core power production and external detector response from Monte Carlo calculations

    International Nuclear Information System (INIS)

    Valentine, T.E.; Mihalczo, J.T.

    1996-01-01

    One primary concern for design of safety systems for reactors is the time response of external detectors to changes in the core. This paper describes a way to estimate the time delay between the core power production and the external detector response using Monte Carlo calculations and suggests a technique to measure the time delay. The Monte Carlo code KENO-NR was used to determine the time delay between the core power production and the external detector response for a conceptual design of the Advanced Neutron Source (ANS) reactor. The Monte Carlo estimated time delay was determined to be about 10 ms for this conceptual design of the ANS reactor

  17. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1987-11-01

    A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining effective dose equivalent for external radiation sources. Critical organ dose equivalents are calculated and effective dose equivalents are determined using ICRP-26 [1] methods. Quality factors based on both present definitions and ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed. The effective dose equivalent determined using ICRP-26 methods is significantly smaller than the dose equivalent determined by traditional methods. No existing personnel dosimeter or health physics instrument can determine effective dose equivalent. At the present time, the conversion of dosimeter response to dose equivalent is based on calculations for maximal or "cap" values using homogeneous spherical or cylindrical phantoms. The evaluated dose equivalent is, therefore, a poor approximation of the effective dose equivalent as defined by ICRP Publication 26. 3 refs., 2 figs., 1 tab
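    The ICRP-26 effective dose equivalent referred to above is a weighted sum of organ dose equivalents. The sketch below uses the published ICRP-26 tissue weighting factors, but the per-organ dose equivalents are made-up numbers standing in for the phantom fluence calculations.

```python
# Illustrative sketch of the ICRP-26 weighted sum; the weights w_T are the
# ICRP Publication 26 tissue weighting factors, while the per-organ dose
# equivalents below are hypothetical inputs, not phantom calculation results.

ICRP26_WEIGHTS = {          # tissue weighting factors w_T (ICRP Publication 26)
    "gonads": 0.25, "breast": 0.15, "red_marrow": 0.12, "lung": 0.12,
    "thyroid": 0.03, "bone_surface": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_dose_eq):
    """H_E = sum over tissues T of w_T * H_T (mSv)."""
    return sum(ICRP26_WEIGHTS[t] * h for t, h in organ_dose_eq.items())

# Hypothetical organ dose equivalents (mSv) from a phantom calculation
H_T = {"gonads": 1.0, "breast": 0.8, "red_marrow": 0.9, "lung": 0.9,
       "thyroid": 1.2, "bone_surface": 0.9, "remainder": 0.85}
print(round(effective_dose_equivalent(H_T), 3))  # prints 0.904
```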

  18. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institute, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was assumed to be an important sampling function. A Kriging metamodel was constructed in more detail in the vicinity of a limit state. The failure probability was calculated based on importance sampling, which was performed for the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for a kernel density in the vicinity of a limit state. A stable numerical method was proposed to find a parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possibility of changes in the calculated failure probability due to the uncertainty of the Kriging metamodel was calculated.
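    The importance-sampling step can be illustrated without the Kriging metamodel or the kernel density: the sketch below estimates a small failure probability for a simple analytic limit state, using a sampling density shifted toward the limit state. The limit state g(x) = 3 − x and the sampling density N(3, 1) are assumptions chosen so the exact answer, 1 − Φ(3), is known.

```python
# Minimal importance-sampling sketch for P_f = P(g(X) < 0); the limit state
# g(x) = 3 - x and the sampling density N(3, 1) stand in for the paper's
# Kriging metamodel and kernel density, which a short example cannot reproduce.
import numpy as np

def g(x):                     # simple limit state: failure when x > 3
    return 3.0 - x

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(3.0, 1.0, n)             # sample near the limit state
w = normal_pdf(x) / normal_pdf(x, 3.0)  # likelihood ratio f(x) / h(x)
p_f = np.mean((g(x) < 0) * w)
print(p_f)  # close to the exact value 1 - Phi(3), about 1.35e-3
```

    Sampling from the standard normal directly would need millions of samples to see this rare event; shifting the density onto the limit state and reweighting keeps the estimator unbiased while drastically reducing its variance.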

  19. Simulation and analysis of main steam control system based on heat transfer calculation

    Science.gov (United States)

    Huang, Zhenqun; Li, Ruyan; Feng, Zhongbao; Wang, Songhan; Li, Wenbo; Cheng, Jiwei; Jin, Yingai

    2018-05-01

    In this paper, a 300 MW thermal power plant boiler was studied. MATLAB was used to write a calculation program for the heat transfer process between the main steam and the boiler flue gas, and the amount of water needed to keep the main steam at the target temperature was calculated. The heat transfer calculation program was then introduced into the Simulink simulation platform to build a control system based on multiple-model switching and heat transfer calculation. The results show that the multiple-model switching control system based on heat transfer calculation not only overcomes the large inertia and large hysteresis characteristics of the main steam temperature, but also adapts to boiler load changes.

  20. Reactivity calculation with reduction of the nuclear power fluctuations

    International Nuclear Information System (INIS)

    Suescun Diaz, Daniel; Senra Martinez, Aquilino

    2009-01-01

    A new formulation is presented in this paper for the calculation of reactivity, which is simpler than the formulation that uses the Laplace and Z transforms. A treatment is also made to reduce the intensity of the noise found in the nuclear power signal used in the calculation of reactivity. Two different classes of filters are used for that. This treatment is based on the fact that the reactivity can be written using the composite Simpson's rule, resulting in a sum of two convolution terms with the impulse response that is characteristic of a linear system. The linear part is calculated using a finite impulse response (FIR) filter. The non-linear part is calculated using a filter exponentially adjusted by the least squares method, which does not cause attenuation in the reactivity calculation.
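    The convolution term at the heart of this formulation can be illustrated with the composite Simpson's rule: the sketch below evaluates a one-group delayed-neutron convolution for a constant power history, for which the analytic value is known. The one-group constants are illustrative, not the six-group data an actual inverse-kinetics calculation would use.

```python
# Sketch of a delayed-neutron convolution term evaluated with the composite
# Simpson's rule; one effective precursor group with illustrative constants
# (beta, lam), not the six-group data used in real reactivity calculations.
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + h * k) for k in range(1, n, 2))
    s += 2 * sum(f(a + h * k) for k in range(2, n, 2))
    return s * h / 3

beta, lam = 0.0065, 0.0774          # one-group delayed-neutron constants
p = lambda t: 1.0                   # constant power history (illustrative)
T = 10.0

# Convolution of the power signal with the precursor impulse response
conv = simpson(lambda t: lam * beta * math.exp(-lam * (T - t)) * p(t), 0.0, T, 100)
exact = beta * (1.0 - math.exp(-lam * T))
print(conv, exact)   # the Simpson result matches the analytic value
```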

  1. Reactivity calculation with reduction of the nuclear power fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Suescun Diaz, Daniel [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914 RJ (Brazil)], E-mail: dsuescun@hotmail.com; Senra Martinez, Aquilino [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914 RJ (Brazil)

    2009-05-15

    A new formulation is presented in this paper for the calculation of reactivity, which is simpler than the formulation that uses the Laplace and Z transforms. A treatment is also made to reduce the intensity of the noise found in the nuclear power signal used in the calculation of reactivity. Two different classes of filters are used for that. This treatment is based on the fact that the reactivity can be written using the composite Simpson's rule, resulting in a sum of two convolution terms with the impulse response that is characteristic of a linear system. The linear part is calculated using a finite impulse response (FIR) filter. The non-linear part is calculated using a filter exponentially adjusted by the least squares method, which does not cause attenuation in the reactivity calculation.

  2. A clustering approach to segmenting users of internet-based risk calculators.

    Science.gov (United States)

    Harle, C A; Downs, J S; Padman, R

    2011-01-01

    Risk calculators are widely available Internet applications that deliver quantitative health risk estimates to consumers. Although these tools are known to have varying effects on risk perceptions, little is known about who will be more likely to accept objective risk estimates. The objective was to identify clusters of online health consumers that help explain variation in individual improvement in risk perceptions from web-based quantitative disease risk information. A secondary analysis was performed on data collected in a field experiment that measured people's pre-diabetes risk perceptions before and after visiting a realistic health promotion website that provided quantitative risk information. K-means clustering was performed on numerous candidate variable sets, and the different segmentations were evaluated based on between-cluster variation in risk perception improvement. Variation in responses to risk information was best explained by clustering on pre-intervention absolute pre-diabetes risk perceptions and an objective estimate of personal risk. Members of a high-risk overestimator cluster showed large improvements in their risk perceptions, but clusters of both moderate-risk and high-risk underestimators were much more muted in improving their optimistically biased perceptions. Cluster analysis provided a unique approach for segmenting health consumers and predicting their acceptance of quantitative disease risk information. These clusters suggest that health consumers were very responsive to good news, but tended not to incorporate bad news into their self-perceptions. These findings help to quantify variation among online health consumers and may inform the targeted marketing of and improvements to risk communication tools on the Internet.
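    A minimal version of the clustering step might look like the sketch below: plain k-means on two features standing in for perceived and objective risk. The synthetic data, feature meanings, and deterministic initialization are assumptions for illustration; the study clustered real survey responses.

```python
# Toy k-means sketch of the segmentation idea: cluster users on two features
# (perceived risk vs. an objective risk estimate). Data are synthetic and the
# farthest-point initialization is an illustrative choice, not the study's.
import numpy as np

def kmeans(X, k, iters=50):
    """Plain k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):                      # seed with mutually distant points
        d = np.min(((X[:, None] - np.array(centers)[None]) ** 2).sum(-1), axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):                      # assign, then update cluster means
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

# Two synthetic segments: risk "overestimators" vs. "underestimators"
rng = np.random.default_rng(1)
over = rng.normal([0.8, 0.3], 0.05, (50, 2))    # perceived risk >> objective risk
under = rng.normal([0.2, 0.7], 0.05, (50, 2))   # perceived risk << objective risk
X = np.vstack([over, under])
labels, centers = kmeans(X, k=2)
print(centers)  # one center near each synthetic segment
```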

  3. Frequency response function (FRF) based updating of a laser spot welded structure

    Science.gov (United States)

    Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.

    2018-04-01

    The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure with a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRF. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the exceptional numerical capability of NASTRAN Sol 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. A vast improvement in the finite element FRF was achieved using frequency response function (FRF) based updating with the two different objective functions proposed.

  4. Independent Monte-Carlo dose calculation for MLC based CyberKnife radiotherapy

    Science.gov (United States)

    Mackeprang, P.-H.; Vuong, D.; Volken, W.; Henzen, D.; Schmidhalter, D.; Malthaner, M.; Mueller, S.; Frei, D.; Stampanoni, M. F. M.; Dal Pra, A.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2018-01-01

    This work aims to develop, implement and validate a Monte Carlo (MC)-based independent dose calculation (IDC) framework to perform patient-specific quality assurance (QA) for multi-leaf collimator (MLC)-based CyberKnife® (Accuray Inc., Sunnyvale, CA) treatment plans. The IDC framework uses an XML-format treatment plan as exported from the treatment planning system (TPS) and DICOM format patient CT data, an MC beam model using phase spaces, CyberKnife MLC beam modifier transport using the EGS++ class library, a beam sampling and coordinate transformation engine and dose scoring using DOSXYZnrc. The framework is validated against dose profiles and depth dose curves of single beams with varying field sizes in a water tank in units of cGy/Monitor Unit and against a 2D dose distribution of a full prostate treatment plan measured with Gafchromic EBT3 (Ashland Advanced Materials, Bridgewater, NJ) film in a homogeneous water-equivalent slab phantom. The film measurement is compared to IDC results by gamma analysis using 2% (global)/2 mm criteria. Further, the dose distribution of the clinical treatment plan in the patient CT is compared to TPS calculation by gamma analysis using the same criteria. Dose profiles from IDC calculation in a homogeneous water phantom agree within 2.3% of the global max dose or 1 mm distance to agreement to measurements for all except the smallest field size. Comparing the film measurement to calculated dose, 99.9% of all voxels pass gamma analysis, comparing dose calculated by the IDC framework to TPS calculated dose for the clinical prostate plan shows 99.0% passing rate. IDC calculated dose is found to be up to 5.6% lower than dose calculated by the TPS in this case near metal fiducial markers. An MC-based modular IDC framework was successfully developed, implemented and validated against measurements and is now available to perform patient-specific QA by IDC.
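    The gamma analysis used above for the film comparison can be sketched in one dimension: for each evaluated point, take the minimum over reference points of the combined dose-difference and distance-to-agreement metric. The profiles below are synthetic assumptions; real QA tools operate on 2D or 3D dose grids.

```python
# Brute-force 1D gamma analysis sketch (global dose-difference / DTA criteria).
# Synthetic Gaussian profiles stand in for the measured and calculated doses.
import numpy as np

def gamma_pass_rate(x, ref, x_eval, eval_dose, dd=0.02, dta=2.0):
    """Fraction of evaluated points with gamma <= 1 (dd: fraction of global max, dta in mm)."""
    dmax = ref.max()
    passed = []
    for xe, de in zip(x_eval, eval_dose):
        # gamma^2 over all reference points; keep the minimum
        g2 = ((ref - de) / (dd * dmax)) ** 2 + ((x - xe) / dta) ** 2
        passed.append(np.sqrt(g2.min()) <= 1.0)
    return np.mean(passed)

x = np.linspace(0, 100, 201)                  # positions in mm
ref = np.exp(-((x - 50) / 20) ** 2)           # reference dose profile
meas = 1.01 * ref                             # 1% global offset: passes 2%/2 mm
print(gamma_pass_rate(x, ref, x, meas))  # prints 1.0
```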

  5. Summary of calculations of dynamic response characteristics and design stress of the 1/5 scale PSE torus

    International Nuclear Information System (INIS)

    Arthur, D.

    1977-01-01

    The Lawrence Livermore Laboratory is currently involved in a 1/5 scale testing program on the Mark I BWR pressure suppression system. A key element of the test setup is a pressure vessel that is a 90° sector of a torus. Proper performance of the 90° torus depends on its structural integrity and structural dynamic characteristics. It must sustain the internal pressurization of the planned tests, and its dynamic response to the transient test loads should be minimal. If the structural vibrations are too great, interpretation of important load cell and pressure transducer data will be difficult. The purpose of the report is to bring together under one cover calculations pertaining to the structural dynamic characteristics and structural integrity of the 90° torus. The report is divided into the following sections: (1) system description in which the torus and associated hardware are briefly described; (2) structural dynamics in which calculations of natural frequency and dynamic response are presented; and (3) structural integrity in which stress calculations for design purposes are presented; and an appendix which contains an LLL internal report comparing the expected load cell response for a three and four-point supported torus

  6. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between physical properties of the flow and numerical errors from reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
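    The vorticity-calculation half of the problem can be sketched directly: differentiate a gridded (PIV-like) velocity field with central differences. A solid-body rotation field is assumed so the exact vorticity, ω_z = 2, is known.

```python
# Sketch of computing vorticity from a gridded (PIV-like) velocity field with
# finite differences; a solid-body rotation test field is assumed so the exact
# answer (omega_z = dv/dx - du/dy = 2) is known.
import numpy as np

x = np.linspace(-1, 1, 101)
y = np.linspace(-1, 1, 101)
X, Y = np.meshgrid(x, y, indexing="xy")
u, v = -Y, X                       # solid-body rotation: omega_z = 2

dvdx = np.gradient(v, x, axis=1)   # columns vary with x under "xy" indexing
dudy = np.gradient(u, y, axis=0)   # rows vary with y
omega = dvdx - dudy
print(omega.mean())  # ~2.0 everywhere for this linear field
```

    For a noisy measured field the same differencing amplifies high-wavenumber error, which is exactly the propagation effect the study analyzes.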

  7. Feasibility of CBCT-based dose calculation: Comparative analysis of HU adjustment techniques

    International Nuclear Information System (INIS)

    Fotina, Irina; Hopfgartner, Johannes; Stock, Markus; Steininger, Thomas; Lütgendorf-Caucig, Carola; Georg, Dietmar

    2012-01-01

    Background and purpose: The aim of this work was to compare the accuracy of different HU adjustments for CBCT-based dose calculation. Methods and materials: Dose calculation was performed on CBCT images of 30 patients. In the first two approaches phantom-based (Pha-CC) and population-based (Pop-CC) conversion curves were used. The third method (WAB) represents override of the structures with standard densities for water, air and bone. In ROI mapping approach all structures were overridden with average HUs from planning CT. All techniques were benchmarked to the Pop-CC and CT-based plans by DVH comparison and γ-index analysis. Results: For prostate plans, WAB and ROI mapping compared to Pop-CC showed differences in PTV D_median below 2%. The WAB and Pha-CC methods underestimated the bladder dose in IMRT plans. In lung cases PTV coverage was underestimated by Pha-CC method by 2.3% and slightly overestimated by the WAB and ROI techniques. The use of the Pha-CC method for head–neck IMRT plans resulted in difference in PTV coverage up to 5%. Dose calculation with WAB and ROI techniques showed better agreement with pCT than conversion curve-based approaches. Conclusions: Density override techniques provide an accurate alternative to the conversion curve-based methods for dose calculation on CBCT images.
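    The conversion-curve idea (Pha-CC/Pop-CC) amounts to a piecewise-linear map from HU to density. The calibration points below are generic textbook-like values assumed for illustration, not clinical calibration data.

```python
# Illustrative HU-to-relative-electron-density conversion via piecewise-linear
# interpolation, the "conversion curve" idea; the calibration points below are
# assumed generic values, not a clinic's phantom- or population-based curve.
import numpy as np

hu_points  = [-1000.0, 0.0, 1000.0, 3000.0]   # air, water, dense bone, metal-like
red_points = [0.0, 1.0, 1.6, 2.8]             # relative electron density

def hu_to_density(hu):
    """Look up relative electron density for a given HU value."""
    return np.interp(hu, hu_points, red_points)

# air maps to 0.0, water to 1.0, 500 HU interpolates between water and bone
print(hu_to_density(-1000), hu_to_density(0), hu_to_density(500))
```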

  8. Evaluation of RSG-GAS Core Management Based on Burnup Calculation

    International Nuclear Information System (INIS)

    Lily Suparlina; Jati Susilo

    2009-01-01

    Evaluation of RSG-GAS Core Management Based on Burnup Calculation. Presently, U3Si2-Al dispersion fuel is used in the RSG-GAS core, which has passed its 60th cycle. At the beginning of each cycle the 5/1 fuel reshuffling pattern is used. Since the 52nd core, operators have not used the core fuel management computer code provided by the vendor for this activity; instead, they perform the calculation manually using Excel software. To check the accuracy of this calculation, core calculations were carried out using two 2-dimensional diffusion codes, Batan-2DIFF and SRAC. The beginning-of-cycle burnup fraction data were calculated from the 51st to the 60th core using Batan-EQUIL and SRAC COREBN. The analysis results showed a disparity in the reactivity values of the two calculation methods. For the 60th core critical position, the Batan-2DIFF calculation gives a reduction of positive reactivity of 1.84% Δk/k, while the manual calculation gives an increase of positive reactivity of 2.19% Δk/k. The minimum shutdown margins for the stuck-rod condition from the manual and Batan-3DIFF calculations are -3.35% Δk/k and -1.13% Δk/k, respectively, which means that both values meet the safety criterion of < -0.5% Δk/k. The Excel program can be used for burnup calculation, but a core management code is needed to reach higher accuracy. (author)

  9. Calculating the Fee-Based Services of Library Institutions: Theoretical Foundations and Practical Challenges

    Directory of Open Access Journals (Sweden)

    Sysіuk Svitlana V.

    2017-05-01

    Full Text Available The article is aimed at highlighting features of the provision of fee-based services by library institutions, identifying problems related to the legal and regulatory framework for their calculation, and the methods to implement this. The objective of the study is to develop recommendations to improve the calculation of fee-based library services. The theoretical foundations have been systematized, and the need to develop a Provision for the procedure of providing fee-based services by library institutions has been substantiated. Such a Provision would protect a library institution from errors in fixing the fee for a paid service and would serve as an informational source explaining it. The appropriateness of applying the market pricing law based on demand and supply has been substantiated. The development and improvement of accounting and calculation, taking into consideration both industry-specific and market-based conditions, would optimize the costs and revenues generated by the provision of fee-based services. In addition, the combination of calculation leverages with the development of a system of internal accounting, together with the use of its methodology, provides another equally effective way of improving the efficiency of library institutions' activity.

  10. Data base for terrestrial food pathways dose commitment calculations

    International Nuclear Information System (INIS)

    Bailey, C.E.

    1979-01-01

    A computer program is under development to allow calculation of the dose-to-man in Georgia and South Carolina from ingestion of radionuclides in terrestrial foods resulting from deposition of airborne radionuclides. This program is based on models described in Regulatory Guide 1.109 (USNRC, 1977). The data base describes the movement of radionuclides through the terrestrial food chain, growth and consumption factors for a variety of radionuclides

  11. THEXSYST - a knowledge based system for the control and analysis of technical simulation calculations

    International Nuclear Information System (INIS)

    Burger, B.

    1991-07-01

    This system (THEXSYST) will be used for the control, analysis and presentation of thermal hydraulic simulation calculations of light water reactors. THEXSYST is a modular system consisting of an expert shell with user interface, a data base, and a simulation program, and uses techniques available in RSYST. A knowledge base, created to control the simulation calculation of pressurized water reactors, includes both the steady state calculation and the transient calculation in the domain of depressurization as a result of a small-break loss of coolant accident. The methods developed are tested using a simulation calculation with RELAP5/Mod2. It will be seen that the application of knowledge base techniques may be a helpful tool to support existing solutions, especially in graphical analysis. (orig./HP) [de]

  12. Ab initio Calculations of Electronic Fingerprints of DNA bases on Graphene

    Science.gov (United States)

    Ahmed, Towfiq; Rehr, John J.; Kilina, Svetlana; Das, Tanmoy; Haraldsen, Jason T.; Balatsky, Alexander V.

    2012-02-01

    We have carried out first principles DFT calculations of the electronic local density of states (LDOS) of DNA nucleotide bases (A,C,G,T) adsorbed on graphene using LDA with ultra-soft pseudo-potentials. We have also calculated the longitudinal transmission currents T(E) through graphene nano-pores as an individual DNA base passes through it, using a non-equilibrium Green's function (NEGF) formalism. We observe several dominant base-dependent features in the LDOS and T(E) in an energy range within a few eV of the Fermi level. These features can serve as electronic fingerprints for the identification of individual bases from dI/dV measurements in scanning tunneling spectroscopy (STS) and nano-pore experiments. Thus these electronic signatures can provide an alternative approach to DNA sequencing.

  13. Fragment-based quantum mechanical calculation of protein-protein binding affinities.

    Science.gov (United States)

    Wang, Yaqian; Liu, Jinfeng; Li, Jinjin; He, Xiao

    2018-04-29

    The electrostatically embedded generalized molecular fractionation with conjugate caps (EE-GMFCC) method has been successfully utilized for efficient linear-scaling quantum mechanical (QM) calculation of protein energies. In this work, we applied the EE-GMFCC method for calculation of binding affinity of Endonuclease colicin-immunity protein complex. The binding free energy changes between the wild-type and mutants of the complex calculated by EE-GMFCC are in good agreement with experimental results. The correlation coefficient (R) between the predicted binding energy changes and experimental values is 0.906 at the B3LYP/6-31G*-D level, based on the snapshot whose binding affinity is closest to the average result from the molecular mechanics/Poisson-Boltzmann surface area (MM/PBSA) calculation. The inclusion of the QM effects is important for accurate prediction of protein-protein binding affinities. Moreover, the self-consistent calculation of PB solvation energy is required for accurate calculations of protein-protein binding free energies. This study demonstrates that the EE-GMFCC method is capable of providing reliable prediction of relative binding affinities for protein-protein complexes. © 2018 Wiley Periodicals, Inc.

  14. Nonperturbative non-Markovian quantum master equation: Validity and limitation to calculate nonlinear response functions

    Science.gov (United States)

    Ishizaki, Akihito; Tanimura, Yoshitaka

    2008-05-01

    Based on the influence functional formalism, we have derived a nonperturbative equation of motion for a reduced system coupled to a harmonic bath with colored noise in which the system-bath coupling operator does not necessarily commute with the system Hamiltonian. The resultant expression coincides with the time-convolutionless quantum master equation derived from the second-order perturbative approximation, which is also equivalent to a generalized Redfield equation. This agreement occurs because, in the nonperturbative case, the relaxation operators arise from the higher-order system-bath interaction that can be incorporated into the reduced density matrix as the influence operator; while the second-order interaction remains as a relaxation operator in the equation of motion. While the equation describes the exact dynamics of the density matrix beyond weak system-bath interactions, it does not have the capability to calculate nonlinear response functions appropriately. This is because the equation cannot describe memory effects which straddle the external system interactions due to the reduced description of the bath. To illustrate this point, we have calculated the third-order two-dimensional (2D) spectra for a two-level system from the present approach and the hierarchically coupled equations approach that can handle quantal system-bath coherence thanks to its hierarchical formalism. The numerical demonstration clearly indicates the lack of the system-bath correlation in the present formalism as fast dephasing profiles of the 2D spectra.

  15. Applications of thermodynamic calculations to Mg alloy design: Mg-Sn based alloy development

    International Nuclear Information System (INIS)

    Jung, In-Ho; Park, Woo-Jin; Ahn, Sang Ho; Kang, Dae Hoon; Kim, Nack J.

    2007-01-01

    Recently an Mg-Sn based alloy system has been investigated actively in order to develop new magnesium alloys which have a stable structure and good mechanical properties at high temperatures. Thermodynamic modeling of the Mg-Al-Mn-Sb-Si-Sn-Zn system was performed based on available thermodynamic, phase equilibria and phase diagram data. Using the optimized database, the phase relationships of the Mg-Sn-Al-Zn alloys with additions of Si and Sb were calculated and compared with their experimental microstructures. It is shown that the calculated results are in good agreement with experimental microstructures, which proves the applicability of thermodynamic calculations for new Mg alloy design. All calculations were performed using FactSage thermochemical software. (orig.)

  16. SCALE Sensitivity Calculations Using Contributon Theory

    International Nuclear Information System (INIS)

    Rearden, Bradley T.; Perfetti, Chris; Williams, Mark L.; Petrie, Lester M. Jr.

    2010-01-01

    The SCALE TSUNAMI-3D sensitivity and uncertainty analysis sequence computes the sensitivity of k-eff to each constituent multigroup cross section using adjoint techniques with the KENO Monte Carlo codes. A new technique to simultaneously obtain the product of the forward and adjoint angular flux moments within a single Monte Carlo calculation has been developed and implemented in the SCALE TSUNAMI-3D analysis sequence. A new concept in Monte Carlo theory has been developed for this work, an eigenvalue contributon estimator, which is an extension of previously developed fixed-source contributon estimators. A contributon is a particle for which the forward solution is accumulated, and its importance to the response, which is equivalent to the adjoint solution, is simultaneously accumulated. Thus, the contributon is a particle coupled with its contribution to the response, in this case k-eff. As implemented in SCALE, the contributon provides the importance of a particle exiting at any energy or direction for each location, energy and direction at which the forward flux solution is sampled. Although currently implemented for eigenvalue calculations in multigroup mode in KENO, this technique is directly applicable to continuous-energy calculations for many other responses such as fixed-source sensitivity analysis and quantification of reactor kinetics parameters. This paper provides the physical bases of eigenvalue contributon theory, provides details of implementation into TSUNAMI-3D, and provides results of sample calculations.

  17. Many-body calculations with deuteron based single-particle bases and their associated natural orbits

    Science.gov (United States)

    Puddu, G.

    2018-06-01

    We use the recently introduced single-particle states obtained from localized deuteron wave-functions as a basis for nuclear many-body calculations. We show that energies can be substantially lowered if the natural orbits (NOs) obtained from this basis are used. We use this modified basis for ¹⁰B, ¹⁶O and ²⁴Mg employing the bare NNLOopt nucleon–nucleon interaction. The lowering of the energies increases with the mass. Although in principle NOs require a full-scale preliminary many-body calculation, we found that an approximate preliminary many-body calculation, with a marginal increase in the computational cost, is sufficient. The use of natural orbits based on a harmonic oscillator basis leads to a much smaller lowering of the energies for a comparable computational cost.

  18. A program to calculate pulse transmission responses through transversely isotropic media

    Science.gov (United States)

    Li, Wei; Schmitt, Douglas R.; Zou, Changchun; Chen, Xiwei

    2018-05-01

    We provide a program (AOTI2D) to model responses of ultrasonic pulse transmission measurements through arbitrarily oriented transversely isotropic rocks. The program is built with the distributed point source method that treats the transducers as a series of point sources. The response of each point source is calculated according to the ray-tracing theory of elastic plane waves. The program could offer basic wave parameters including phase and group velocities, polarization, anisotropic reflection coefficients and directivity patterns, and model the wave fields, static wave beam, and the observed signals for pulse transmission measurements considering the material's elastic stiffnesses and orientations, sample dimensions, and the size and positions of the transmitters and the receivers. The program could be applied to exhibit the ultrasonic beam behaviors in anisotropic media, such as the skew and diffraction of ultrasonic beams, and analyze its effect on pulse transmission measurements. The program would be a useful tool to help design the experimental configuration and interpret the results of ultrasonic pulse transmission measurements through either isotropic or transversely isotropic rock samples.
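    A small piece of the plane-wave machinery such a program needs can be illustrated with the Thomsen weak-anisotropy approximation for the P-wave phase velocity in a transversely isotropic medium. The velocity and anisotropy parameters below are illustrative shale-like values, and the weak-anisotropy formula is an assumption; AOTI2D's exact plane-wave treatment is not reproduced here.

```python
# Thomsen weak-anisotropy P-wave phase velocity for a transversely isotropic
# medium; vp0, epsilon, delta below are illustrative values, not AOTI2D inputs.
import math

def vp_phase(theta_deg, vp0, epsilon, delta):
    """P-wave phase velocity vs. angle from the symmetry axis (weak anisotropy)."""
    t = math.radians(theta_deg)
    s, c = math.sin(t), math.cos(t)
    return vp0 * (1.0 + delta * s**2 * c**2 + epsilon * s**4)

vp0, eps, dlt = 3000.0, 0.2, 0.1              # m/s; illustrative anisotropy
print(vp_phase(0, vp0, eps, dlt))   # along the symmetry axis: vp0
print(vp_phase(90, vp0, eps, dlt))  # perpendicular: vp0 * (1 + epsilon)
```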

  19. Development of SCINFUL-CG code to calculate response functions of scintillators in various shapes used for neutron measurement

    Energy Technology Data Exchange (ETDEWEB)

    Endo, Akira; Kim, Eunjoo; Yamaguchi, Yasuhiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-10-01

    The Monte Carlo code SCINFUL has been utilized for calculating response functions of organic scintillators for high-energy neutron spectroscopy. However, the applicability of SCINFUL is limited to calculations for cylindrical NE213 and NE110 scintillators. In the present study, SCINFUL-CG was developed by introducing a geometry-specifying function and high-energy neutron cross section data into SCINFUL. The geometry package MARS-CG, an extended version of CG (Combinatorial Geometry), was programmed into SCINFUL-CG to express various detector geometries. Neutron spectra in the regions specified by the CG can be evaluated by the track length estimator. The cross section data of silicon, oxygen and aluminum for the neutron transport calculation were incorporated up to 100 MeV using the LA150 library. The validity of SCINFUL-CG was examined by comparing calculated results with those of SCINFUL and MCNP and with experimental data measured in high-energy neutron fields. SCINFUL-CG can be used to calculate the response functions and neutron spectra of organic scintillators of various shapes. The code will be applicable to the design of high-energy neutron spectrometers and neutron monitors using organic scintillators. The present report describes the new features of SCINFUL-CG and explains how to use the code. (author)

  20. Dose calculations for severe LWR accident scenarios

    International Nuclear Information System (INIS)

    Margulies, T.S.; Martin, J.A. Jr.

    1984-05-01

    This report presents a set of precalculated doses based on a set of postulated accident releases, intended for use in emergency planning and emergency response. Doses were calculated for the PWR (Pressurized Water Reactor) accident categories of the Reactor Safety Study (WASH-1400) using the CRAC (Calculations of Reactor Accident Consequences) code. Whole-body and thyroid doses are presented for a selected set of weather cases. For each weather case these calculations were performed for various times and distances, including three different dose pathways: cloud (plume) shine, ground shine, and inhalation. During an emergency this information can be useful since it is immediately available for projecting offsite radiological doses based on reactor accident sequence information in the absence of plant measurements of emission rates (source terms). It can be used for emergency drill scenario development as well.

  1. 19 CFR 351.405 - Calculation of normal value based on constructed value.

    Science.gov (United States)

    2010-04-01

    19 Customs Duties, Vol. 3 (2010-04-01). Calculation of normal value based on constructed value. Section 351.405, Customs Duties, INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE, ANTIDUMPING AND COUNTERVAILING DUTIES, Calculation of Export Price, Constructed Export Price, Fair Value, and...

  2. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    Directory of Open Access Journals (Sweden)

    Shan Yang

    2016-01-01

    Full Text Available Power flow calculation and short circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent models of inverter-based distributed generation under normal and fault conditions of the distribution network, as well as the differences between power flow and short circuit calculation, are analyzed in this paper. An integrated power flow and short circuit calculation method for distribution networks with inverter-based distributed generation is then proposed. The proposed method represents the inverter-based distributed generation as an Iθ bus, which makes it suitable for calculating the power flow of a distribution network with current-limited inverter-based distributed generation. The low voltage ride-through capability of the inverter-based distributed generation is considered as well. Finally, power flow and short circuit current calculations are performed on a 33-bus distribution network. The results from the proposed method are compared with those of the traditional method and of simulation, which verifies the effectiveness of the integrated method.
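    The paper's own algorithm is not reproduced here, but the idea of treating a current-controlled inverter as a fixed-magnitude current injection inside a radial power flow can be sketched with a minimal backward/forward sweep; the three-bus feeder, impedances, load and DG current below are illustrative assumptions, not the paper's 33-bus test case:

```python
# Minimal backward/forward sweep on a 3-bus radial feeder (buses 0-1-2),
# with a constant-power load at bus 1 and a current-controlled (I-theta)
# DG at bus 2 modeled as a fixed-magnitude current injection.
import cmath

Z = [0.02 + 0.04j, 0.03 + 0.05j]   # branch impedances 0-1 and 1-2 (p.u.)
S_load = 0.8 + 0.3j                # constant-power load at bus 1 (p.u.)
I_dg, phi = 0.5, 0.0               # DG current magnitude, angle vs bus voltage

V = [1.0 + 0j, 1.0 + 0j, 1.0 + 0j]  # flat start; bus 0 is the slack bus
for _ in range(30):
    # backward sweep: branch currents from the bus injections
    i_dg = I_dg * cmath.exp(1j * (cmath.phase(V[2]) + phi))
    i_load = (S_load / V[1]).conjugate()
    I12 = -i_dg                    # only the DG injects beyond bus 1
    I01 = i_load + I12
    # forward sweep: update voltages moving away from the slack bus
    V[1] = V[0] - Z[0] * I01
    V[2] = V[1] - Z[1] * I12

print(abs(V[1]), abs(V[2]))
```

    Because the DG injects current in phase with its terminal voltage, the sweep converges to a voltage rise at bus 2 relative to bus 1, the qualitative effect such a model is meant to capture.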

  3. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    Science.gov (United States)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function based on the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated from the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting parameter minimises the estimation bias caused by the non-stationary characteristics of the MT data.
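    The instantaneous spectra at the heart of such an HHT-based estimate come from the analytic signal of each empirical mode. A minimal numpy sketch of the Hilbert-spectral step follows, on a synthetic mono-component signal standing in for an IMF; the empirical mode decomposition and the response-function estimation themselves are omitted:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT construction of the discrete Hilbert
    transform: zero the negative frequencies, double the positive ones."""
    N = len(x)                 # assumed even here
    H = np.zeros(N)
    H[0] = H[N // 2] = 1.0
    H[1:N // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * H)

fs = 1000.0
t = np.arange(1000) / fs
x = np.cos(2 * np.pi * 50.0 * t)          # synthetic mono-component "IMF"

z = analytic_signal(x)
amp = np.abs(z)                            # instantaneous amplitude
phase = np.unwrap(np.angle(z))
freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
print(round(amp.mean(), 3), round(freq.mean(), 3))  # ≈ 1.0, ≈ 50.0
```

    For a real MT record the same attributes would be computed per IMF and per time sample, giving the time-frequency image described in the abstract.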

  4. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki [Hanyang Univ., Seoul (Korea, Republic of); Lee, Jong-Il; Kim, Jang-Lyul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole-body counter, lung counter, and urine measurements. Although the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, they must generally be calculated, because the published values cover only a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, entirely replaces the existing internal dosimetry models and relevant data, including the intake retention and excretion functions; a calculation tool for the functions based on OIR is therefore needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR using the C++ programming language with the Intel Math Kernel Library.
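    The role of an intake retention function R(t) can be illustrated with a toy example: the intake is estimated by dividing a bioassay measurement by R(t) at the time of measurement. The two-exponential coefficients below are purely illustrative and are not taken from any ICRP publication:

```python
import math

# Hypothetical retention function R(t) = sum_i a_i * exp(-lambda_i * t):
# (fraction, rate constant in 1/day); illustrative values only.
terms = [(0.6, 0.231), (0.4, 0.0231)]

def retention(t_days):
    """Fraction of the intake retained t_days after a single intake."""
    return sum(a * math.exp(-lam * t_days) for a, lam in terms)

# A whole-body measurement of 500 Bq made 10 days after intake implies:
measured = 500.0
intake = measured / retention(10.0)
print(round(intake, 1))
```

    An OIR-based module performs the same division, but with retention functions computed from the full compartmental biokinetic models rather than a fixed two-exponential fit.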

  5. Development of Calculation Module for Intake Retention Functions based on Occupational Intakes of Radionuclides

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki; Lee, Jong-Il; Kim, Jang-Lyul

    2014-01-01

    In internal dosimetry, intake retention and excretion functions are essential for estimating intake activity from bioassay samples such as whole-body counter, lung counter, and urine measurements. Although the ICRP (International Commission on Radiological Protection) provides these functions in some of its publications, they must generally be calculated, because the published values cover only a limited set of times. Thus, computer programs are generally used to calculate intake retention and excretion functions and to estimate intake activity. The OIR (Occupational Intakes of Radionuclides) series, to be published soon by the ICRP, entirely replaces the existing internal dosimetry models and relevant data, including the intake retention and excretion functions; a calculation tool for the functions based on OIR is therefore needed. In this study, we developed a calculation module for intake retention and excretion functions based on OIR using the C++ programming language with the Intel Math Kernel Library

  6. Prediction of fission mass-yield distributions based on cross section calculations

    International Nuclear Information System (INIS)

    Hambsch, F.-J.; G.Vladuca; Tudora, Anabella; Oberstedt, S.; Ruskov, I.

    2005-01-01

    For the first time, fission mass-yield distributions have been predicted based on an extended statistical model for fission cross section calculations. In this model, the concept of the multi-modality of the fission process has been incorporated. The three most dominant fission modes, the two asymmetric standard I (S1) and standard II (S2) modes and the symmetric superlong (SL) mode, are taken into account. De-convoluted fission cross sections for the S1, S2 and SL modes for 235,238U(n,f) and 237Np(n,f), based on experimental branching ratios, were calculated for the first time in the incident neutron energy range from 0.01 to 5.5 MeV, providing good agreement with the experimental fission cross section data. The branching ratios obtained from the modal fission cross section calculations have been used to deduce the corresponding fission yield distributions, including mean values, also for incident neutron energies hitherto not accessible to experiment

  7. Response matrix Monte Carlo based on a general geometry local calculation for electron transport

    International Nuclear Information System (INIS)

    Ballinger, C.T.; Rathkopf, J.A.; Martin, W.R.

    1991-01-01

    A Response Matrix Monte Carlo (RMMC) method has been developed for solving electron transport problems. This method was born of the need to have a reliable, computationally efficient transport method for low-energy electrons (below a few hundred keV) in all materials. Today, condensed history methods are used which reduce the computation time by modeling the combined effect of many collisions but fail at low energy because of the assumptions required to characterize the electron scattering. Analog Monte Carlo simulations are prohibitively expensive since electrons undergo Coulomb scattering with little state change after a collision. The RMMC method attempts to combine the accuracy of an analog Monte Carlo simulation with the speed of the condensed history methods. Like condensed history methods, the RMMC method uses probability distribution functions (PDFs) to describe the energy and direction of the electron after several collisions. However, unlike the condensed history method, the PDFs are based on an analog Monte Carlo simulation over a small region. Condensed history theories require assumptions about the electron scattering to derive the PDFs for direction and energy. Thus the RMMC method samples from PDFs which more accurately represent the electron random walk. Results show good agreement between the RMMC method and analog Monte Carlo. 13 refs., 8 figs
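    The core idea, replacing many individual Coulomb collisions by one draw from a pre-tabulated local response, can be sketched as follows; the bins and probabilities are invented for illustration and are not taken from the paper:

```python
import random

random.seed(1)

# Toy tabulated response for one spatial cell: probability that the
# electron leaves the cell in a given (energy-loss, exit-side) bin, as a
# response matrix method would pre-compute from a local analog simulation.
bins = [(0.5, +1), (1.0, +1), (1.0, -1), (2.0, +1)]   # (keV lost, exit side)
probs = [0.4, 0.3, 0.1, 0.2]

def sample_response():
    """Sample the multi-collision outcome in one draw instead of
    simulating every Coulomb collision individually."""
    r, acc = random.random(), 0.0
    for b, p in zip(bins, probs):
        acc += p
        if r < acc:
            return b
    return bins[-1]

losses = [sample_response()[0] for _ in range(100000)]
mean_loss = sum(losses) / len(losses)
print(mean_loss)   # ~ 0.4*0.5 + 0.3*1.0 + 0.1*1.0 + 0.2*2.0 = 1.0
```

    In the actual method such tables would be indexed by incident energy and material, and the sampled exit state becomes the entry state of the neighbouring cell.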

  8. Application of γ field theory based calculation method to the monitoring of mine nuclear radiation environment

    International Nuclear Information System (INIS)

    Du Yanjun; Liu Qingcheng; Liu Hongzhang; Qin Guoxiu

    2009-01-01

    In order to assess the feasibility of calculating mine radiation dose based on γ field theory, this paper calculates the γ radiation dose of a mine by means of the γ field theory based calculation method. The results show that the calculated radiation dose has a small error and can be used to monitor the mine's nuclear radiation environment. (authors)

  9. Medication calculation: the potential role of digital game-based learning in nurse education.

    Science.gov (United States)

    Foss, Brynjar; Mordt Ba, Petter; Oftedal, Bjørg F; Løkken, Atle

    2013-12-01

    Medication dose calculation is one of several medication-related activities that are conducted by nurses daily. However, medication calculation skills appear to be an area of global concern, possibly because of low numeracy skills, test anxiety, low self-confidence, and low self-efficacy among student nurses. Various didactic strategies have been developed for student nurses who still lack basic mathematical competence. However, we suggest that the critical nature of these skills demands the investigation of alternative and/or supplementary didactic approaches to improve medication calculation skills and to reduce failure rates. Digital game-based learning is a possible solution for the following reasons. First, mathematical drills may improve medication calculation skills. Second, games are known to be useful during nursing education. Finally, mathematical drill games appear to improve students' attitudes toward mathematics. The aim of this article was to discuss common challenges of medication calculation skills in nurse education and to highlight the potential role of digital game-based learning in this area.

  10. The MiAge Calculator: a DNA methylation-based mitotic age calculator of human tissue types.

    Science.gov (United States)

    Youn, Ahrim; Wang, Shuang

    2018-01-01

    Cell division is important in human aging and cancer. The estimation of the number of cell divisions (mitotic age) of a given tissue type in individuals is of great interest as it allows not only the study of biological aging (using a new molecular aging target) but also the stratification of prospective cancer risk. Here, we introduce the MiAge Calculator, a mitotic age calculator based on a novel statistical framework, the MiAge model. MiAge is designed to quantitatively estimate mitotic age (total number of lifetime cell divisions) of a tissue using the stochastic replication errors accumulated in the epigenetic inheritance process during cell divisions. With the MiAge model, the MiAge Calculator was built using the training data of DNA methylation measures of 4,020 tumor and adjacent normal tissue samples from eight TCGA cancer types and was tested using the testing data of DNA methylation measures of 2,221 tumor and adjacent normal tissue samples of five other TCGA cancer types. We showed that within each of the thirteen cancer types studied, the estimated mitotic age is universally accelerated in tumor tissues compared to adjacent normal tissues. Across the thirteen cancer types, we showed that worse cancer survivals are associated with more accelerated mitotic age in tumor tissues. Importantly, we demonstrated the utility of mitotic age by showing that the integration of mitotic age and clinical information leads to improved survival prediction in six out of the thirteen cancer types studied. The MiAge Calculator is available at http://www.columbia.edu/~sw2206/softwares.htm.

  11. CO2 impulse response curves for GWP calculations

    International Nuclear Information System (INIS)

    Jain, A.K.; Wuebbles, D.J.

    1993-01-01

    The primary purpose of the Global Warming Potential (GWP) is to compare the effectiveness of emission strategies for various greenhouse gases to those for CO2, so GWPs are quite sensitive to the amount of CO2. Unlike all other gases emitted into the atmosphere, CO2 does not have a chemical or photochemical sink within the atmosphere. Removal of CO2 is therefore dependent on exchanges with other carbon reservoirs, namely the ocean and the terrestrial biosphere. Climate-induced changes in ocean circulation or marine biological productivity could significantly alter the atmospheric CO2 lifetime. Moreover, continuing forest destruction, nutrient limitations, or temperature-induced increases of respiration could also dramatically change the lifetime of CO2 in the atmosphere. Determination of the current CO2 sinks, and of how these sinks are likely to change with increasing CO2 emissions, is crucial to the calculation of GWPs. It is interesting to note that the impulse response function is sensitive to the initial state of the ocean-atmosphere system into which CO2 is emitted. This is due to the fact that in our model the CO2 flux from the atmosphere to the mixed layer is a nonlinear function of ocean surface total carbon
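    How an impulse response function enters a GWP-style calculation can be sketched with a multi-exponential airborne fraction integrated over a time horizon; the coefficients below are placeholders for illustration, not a fitted carbon-cycle model:

```python
import math

# Illustrative CO2 impulse response f(t) = a0 + sum_i a_i * exp(-t / tau_i);
# a0 is the long-lived fraction, taus in years. Placeholder values only.
a = [0.22, 0.30, 0.28, 0.20]
tau = [None, 173.0, 18.5, 1.2]

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse still airborne after t years."""
    return a[0] + sum(ai * math.exp(-t / ti) for ai, ti in zip(a[1:], tau[1:]))

# AGWP-style integral of the remaining fraction over a 100-year horizon,
# approximated with a midpoint rule on 1-year steps.
H = 100
agwp = sum(airborne_fraction(t + 0.5) for t in range(H))
print(round(airborne_fraction(0.0), 3), round(agwp, 1))
```

    At t = 0 the whole pulse is airborne (f = 1); the integral, multiplied by the radiative efficiency, would form the CO2 denominator of a GWP. The sensitivity discussed in the abstract corresponds to how the a_i and tau_i shift with the state of the ocean-atmosphere system.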

  12. Development of Subspace-based Hybrid Monte Carlo-Deterministric Algorithms for Reactor Physics Calculations

    International Nuclear Information System (INIS)

    Abdel-Khalik, Hany S.; Zhang, Qiong

    2014-01-01

    The development of hybrid Monte-Carlo-Deterministic (MC-DT) approaches over the past few decades has primarily focused on shielding and detection applications, where the analysis requires a small number of responses, i.e. at the detector location(s). This work further develops a recently introduced global variance reduction approach, denoted the SUBSPACE approach, which is designed to allow the use of MC simulation, currently limited to benchmarking calculations, for routine engineering calculations. By way of demonstration, the SUBSPACE approach is applied to assembly-level calculations used to generate the few-group homogenized cross-sections. These models are typically expensive and need to be executed on the order of 10^3 to 10^5 times to properly characterize the few-group cross-sections for downstream core-wide calculations. Applicability to k-eigenvalue core-wide models is also demonstrated in this work. Given the favorable results obtained, we believe the applicability of the MC method to reactor analysis calculations could be realized in the near future.

  13. Plasma density calculation based on the HCN waveform data

    International Nuclear Information System (INIS)

    Chen Liaoyuan; Pan Li; Luo Cuiwen; Zhou Yan; Deng Zhongchao

    2004-01-01

    A method to improve the plasma density calculation is introduced, using the base voltage and the phase zero points obtained from the HCN interference waveform data. The method improves the signal quality by placing the signal control device and the analog-to-digital converters in the same location and powering them from the same supply, excludes the effect of noise according to the possible changing rate of the signal's phase, and makes the base voltage more accurate by dynamical data processing. (authors)

  14. Conductance calculations with a wavelet basis set

    DEFF Research Database (Denmark)

    Thygesen, Kristian Sommer; Bollinger, Mikkel; Jacobsen, Karsten Wedel

    2003-01-01

    We present a method based on density functional theory (DFT) for calculating the conductance of a phase-coherent system. The metallic contacts and the central region where the electron scattering occurs are treated on the same footing, taking their full atomic and electronic structure into account. The linear-response conductance is calculated from the Green's function, which is represented in terms of a system-independent basis set containing wavelets with compact support. This allows us to rigorously separate the central region from the contacts and to test for convergence in a systematic way...

  15. Calculation of Excore Detector Responses upon Control Rods Movement in PGSFR

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Pham Nhu Viet; Lee, Min Jae; Kang, Chang Moo; Kim, Sang Ji [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    The Prototype Generation-IV Sodium-cooled Fast Reactor (PGSFR) safety design concept, which aims at achieving IAEA's safety objectives and GIF's safety goals for Generation-IV reactor systems, is mainly focused on the defense in depth for accident detection, prevention, control, mitigation and termination. In practice, excore neutron detectors are widely used to determine the spatial power distribution and power level in a nuclear reactor core. Based on the excore detector signals, the reactor control and protection systems infer the corresponding core power and then provide appropriate actions for safe and reliable reactor operation. To this end, robust reactor power monitoring, control and core protection systems are indispensable to prevent accidents and to reduce their detrimental effects should one occur. To design such power monitoring and control systems, numerical investigation of excore neutron detector responses upon various changes in the core power level/distribution and reactor conditions is required in advance. In this study, numerical analysis of excore neutron detector responses (DRs) upon control rods (CRs) movement in PGSFR was carried out. The objective is to examine the sensitivity of excore neutron detectors to the core power change induced by moving CRs and thereby recommend appropriate locations for excore neutron detectors in the design of the PGSFR power monitoring systems. Section 2 describes the PGSFR core model and calculation method as well as the numerical results for the excore detector spatial weighting functions, core power changes and detector responses upon various scenarios of moving CRs in PGSFR. The top detector is conservatively safe because it overestimated the core power level. However, the lower and bottom detectors still functioned well in this case because they exhibited a minor underestimation of core power of less than ∼0.5%. As a secondary CR was dropped into the core, the lower detector was

  16. Modeling and Calculation of Dent Based on Pipeline Bending Strain

    Directory of Open Access Journals (Sweden)

    Qingshan Feng

    2016-01-01

    Full Text Available The bending strain of long-distance oil and gas pipelines can be calculated from data gathered by an in-line inspection tool that uses an inertial measurement unit (IMU). The bending strain is used to evaluate the strain and displacement of the pipeline. During bending strain inspection, a dent in the pipeline can affect the bending strain data as well. This paper presents a novel method to model and calculate pipeline dents based on the bending strain. The technique takes inertial mapping data from in-line inspection and calculates the depth of a dent in the pipeline using Bayesian statistical theory and a neural network. To verify the accuracy of the proposed method, an in-line inspection tool was used to inspect a pipeline and gather data. The calculated dents show that the method is accurate, with a mean relative error of 2.44%. The new method provides not only the strain of the pipeline dent but also the depth of the dent, which benefits pipeline integrity management and safety.
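    The underlying kinematics can be illustrated with the standard extreme-fiber relation ε = κD/2, with the curvature κ estimated from the change of the IMU pitch angle over the travelled distance; this is a generic sketch, not the paper's Bayesian/neural-network dent model, and the numbers are hypothetical:

```python
import math

def bending_strain(d_pitch_rad, d_distance_m, diameter_m):
    """Extreme-fiber bending strain of a pipe of outer diameter D bent to
    curvature kappa: eps = kappa * D / 2, with kappa taken as the change
    of the inertial tool's pitch angle per unit travelled distance."""
    kappa = d_pitch_rad / d_distance_m    # curvature, 1/m
    return kappa * diameter_m / 2.0

# 0.5 degree of pitch change over 2 m of travel in a 0.5 m diameter pipe:
eps = bending_strain(math.radians(0.5), 2.0, 0.5)
print(f"{eps:.6f}")   # about 0.001091 (0.11 % strain)
```

    A dent superimposes a local curvature anomaly on this baseline signal, which is what the paper's method isolates and converts to a depth.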

  17. CT-based dose calculations and in vivo dosimetry for lung cancer treatment

    International Nuclear Information System (INIS)

    Essers, M.; Lanson, J.H.; Leunens, G.; Schnabel, T.; Mijnheer, B.J.

    1995-01-01

    Reliable CT-based dose calculations and dosimetric quality control are essential for the introduction of new conformal techniques for the treatment of lung cancer. The first aim of this study was therefore to check the accuracy of dose calculations based on CT-densities, using a simple inhomogeneity correction model, for lung cancer patients irradiated with an AP-PA treatment technique. Second, the use of diodes for absolute exit dose measurements and an Electronic Portal Imaging Device (EPID) for relative transmission dose verification was investigated for 22 and 12 patients, respectively. The measured dose values were compared with calculations performed using our 3-dimensional treatment planning system, using CT-densities or assuming the patient to be water-equivalent. Using water-equivalent calculations, the actual exit dose value under lung was, on average, underestimated by 30%, with an overall spread of 10% (1 SD). Using inhomogeneity corrections, the exit dose was, on average, overestimated by 4%, with an overall spread of 6% (1 SD). Only 2% of the average deviation was due to the inhomogeneity correction model. An uncertainty in exit dose calculation of 2.5% (1 SD) could be explained by organ motion, resulting from the ventilatory or cardiac cycle. The most important reason for the large overall spread was, however, the uncertainty involved in performing point measurements: about 4% (1 SD). This difference resulted from the systematic and random deviation in patient set-up and therefore in diode position with respect to patient anatomy. Transmission and exit dose values agreed with an average difference of 1.1%. Transmission dose profiles also showed good agreement with calculated exit dose profiles. Our study shows that, for this treatment technique, the dose in the thorax region is quite accurately predicted using CT-based dose calculations, even if a simple inhomogeneity correction model is used. Point detectors such as diodes are not suitable for exit

  18. Calculating the knowledge-based similarity of functional groups using crystallographic data

    Science.gov (United States)

    Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.

    2001-09-01

    A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the 'central group') and functional group B (the 'contact group' or 'probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated, showing that the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.

  19. Code accuracy evaluation of ISP 35 calculations based on NUPEC M-7-1 test

    International Nuclear Information System (INIS)

    Auria, F.D.; Oriolo, F.; Leonardi, M.; Paci, S.

    1995-01-01

    Quantitative evaluation of code uncertainties is a necessary step in the code assessment process, above all if best-estimate codes are utilised for licensing purposes. Aiming at quantifying code accuracy, an integral methodology based on the Fast Fourier Transform (FFT) has been developed at the University of Pisa (DCMN) and has already been applied to several calculations related to primary system test analyses. This paper deals with the first application of the FFT-based methodology to containment code calculations, based on a hydrogen mixing and distribution test performed in the NUPEC (Nuclear Power Engineering Corporation) facility. It refers to pre-test and post-test calculations submitted for International Standard Problem (ISP) No. 35, a blind exercise simulating the effects of steam injection and spray behaviour on gas distribution and mixing. The results of applying this methodology to nineteen selected variables calculated by ten participants are summarized here, and the accuracy evaluated for the pre-test and the post-test calculations of the same user is also compared where possible. (author)
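    The figure of merit of such an FFT-based methodology compares the spectrum of the calculation error against the spectrum of the experimental signal. A minimal numpy sketch of an average-amplitude measure of this kind (a simplification of the full FFT-based method, with synthetic signals):

```python
import numpy as np

def fft_average_amplitude(calc, exp):
    """Average-amplitude accuracy figure: spectrum of the error normalised
    by the spectrum of the experimental signal (0 means perfect agreement;
    larger values mean larger discrepancy)."""
    err = np.asarray(calc) - np.asarray(exp)
    return np.sum(np.abs(np.fft.rfft(err))) / np.sum(np.abs(np.fft.rfft(exp)))

t = np.linspace(0.0, 10.0, 512)
ref = 2.0 + np.sin(t)                      # synthetic "experimental" trace
print(fft_average_amplitude(ref, ref),             # identical signals -> 0.0
      round(fft_average_amplitude(1.05 * ref, ref), 3))  # 5% offset -> 0.05
```

    In the full methodology this figure is computed per variable and combined with weights into an overall accuracy judgement for each participant's calculation.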

  20. Integrated Power Flow and Short Circuit Calculation Method for Distribution Network with Inverter Based Distributed Generation

    OpenAIRE

    Yang, Shan; Tong, Xiangqian

    2016-01-01

    Power flow calculation and short circuit calculation are the basis of theoretical research on distribution networks with inverter-based distributed generation. The similarity of the equivalent models of inverter-based distributed generation under normal and fault conditions of the distribution network and the differences between power flow and short circuit calculation are analyzed in this paper. Then an integrated power flow and short circuit calculation method for distribution networks with inverte...

  1. Friction stir welding: multi-response optimisation using Taguchi-based GRA

    Directory of Open Access Journals (Sweden)

    Jitender Kundu

    2016-01-01

    Full Text Available In the present experimental work, friction stir welding of aluminium alloy 5083-H321 is performed to optimise the process parameters for maximum tensile strength. Taguchi's L9 orthogonal array has been used for three parameters, each at three levels: tool rotational speed (TRS), traverse speed (TS), and tool tilt angle (TTA). Multi-response optimisation has been carried out through Taguchi-based grey relational analysis. The grey relational grade has been calculated from all three responses: ultimate tensile strength, percentage elongation, and micro-hardness. Analysis of variance was applied to the grey relational grade to identify the significant process parameters. TRS and TS are the two most significant parameters influencing most of the quality characteristics of the friction stir welded joint. Validation of the predicted values through confirmation experiments at the optimum setting shows good agreement with the experimental values.
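    The grey relational analysis step can be sketched in a few lines: each response is normalised (larger-the-better here), deviations from the ideal sequence are converted to grey relational coefficients, and their mean gives the grade. The response matrix below is illustrative, not the paper's measured data:

```python
import numpy as np

# Rows are experimental runs, columns are responses:
# ultimate tensile strength (MPa), elongation (%), micro-hardness (HV).
# Illustrative values only.
y = np.array([[290.0, 8.0, 85.0],
              [305.0, 9.5, 88.0],
              [278.0, 7.2, 83.0]])

norm = (y - y.min(axis=0)) / (y.max(axis=0) - y.min(axis=0))  # larger-better
delta = 1.0 - norm                      # deviation from the ideal sequence
zeta = 0.5                              # distinguishing coefficient
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                # grey relational grade per run
print(grade.argmax())                   # run 1 dominates every response here
```

    ANOVA on these grades (per the Taguchi array) is then used, as in the abstract, to rank the process parameters.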

  2. Study on the Seismic Response of a Portal Frame Structure Based on the Transfer Matrix Method of Multibody System

    Directory of Open Access Journals (Sweden)

    Jianguo Ding

    2014-11-01

    Full Text Available Portal frame structures are widely used in industrial building design but unfortunately are often damaged during an earthquake. As a result, a study of the seismic response of this type of structure is important both to human safety and to future building designs. Traditionally, finite element packages such as ANSYS and MIDAS have been used as the primary means of computing the response of such a structure during an earthquake; however, these methods yield low calculation efficiencies. In this paper, the mechanical model of a single-story portal frame structure with two spans is constructed based on the transfer matrix method of multibody system (MS-TMM); both the transfer matrices of the components in the model and the total transfer matrix equation of the structure are derived, and the corresponding MATLAB program is compiled to determine the natural period and seismic response of the structure. The results based on the MS-TMM are similar to those obtained by ANSYS, but the calculation time of the MS-TMM method is only 1/20 of that of the ANSYS method. Additionally, it is shown that the MS-TMM method greatly increases the calculation efficiency while maintaining accuracy.
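    While MS-TMM targets multibody models of the portal frame itself, the flavor of a transfer matrix calculation can be shown on a minimal chain of springs and point masses: element matrices are multiplied along the chain, and a natural frequency is a root of the resulting boundary condition. All numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def tip_force_coeff(omega, springs, masses):
    """Multiply element transfer matrices along a fixed-base chain of
    springs and point masses acting on the state [displacement, force];
    returns the coefficient relating tip force to the unknown base force
    (zero at a natural frequency, since the free tip carries no force)."""
    state = np.array([0.0, 1.0])     # fixed base: x = 0, unit base force
    for k, m in zip(springs, masses):
        state = np.array([[1.0, 1.0 / k], [0.0, 1.0]]) @ state        # spring field matrix
        state = np.array([[1.0, 0.0], [-m * omega**2, 1.0]]) @ state  # mass point matrix
    return state[1]

k, m = 1000.0, 1.0                   # N/m, kg
springs, masses = [k, k], [m, m]

# Bisect the lowest root of the frequency equation on a bracketing interval.
lo, hi = 1.0, 25.0                   # rad/s; brackets only the first root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tip_force_coeff(lo, springs, masses) * tip_force_coeff(mid, springs, masses) <= 0:
        hi = mid
    else:
        lo = mid
print(round(hi, 3))   # ~ sqrt(0.382 * k / m) = 19.544 rad/s for this 2-DOF chain
```

    The MS-TMM proceeds in the same spirit but with the richer element matrices of beams, columns, and joints, which is why it reaches ANSYS-like accuracy at a fraction of the cost.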

  3. Reactor calculation in coarse mesh by finite element method applied to matrix response method

    International Nuclear Information System (INIS)

    Nakata, H.

    1982-01-01

    The finite element method is applied to the solution of the modified formulation of the matrix-response method aiming to do reactor calculations in coarse mesh. Good results are obtained with a short running time. The method is applicable to problems where the heterogeneity is predominant and to problems of evolution in coarse meshes where the burnup is variable in one same coarse mesh, making the cross section vary spatially with the evolution. (E.G.) [pt

  4. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    Science.gov (United States)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calclating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.

  5. New Products and Technologies, Based on Calculations Developed Areas

    Directory of Open Access Journals (Sweden)

    Gheorghe Vertan

    2013-09-01

    Full Text Available Statistics show that the currently prosperous countries with high GDP per capita are those that intensively exploit large natural resources and/or mass-produce and export products based on patented inventions. Without great natural wealth and with the lowest GDP per capita in the EU, Romania will prosper only through such products. Starting from top national experience, some of it patented, new and competitive technologies and patentable, exportable products can be developed based on exact calculations of developed areas, such as double-shell welded assemblies and the plating of ships' propellers and of pump and hydraulic turbine blades.

  6. SU-E-T-538: Evaluation of IMRT Dose Calculation Based on Pencil-Beam and AAA Algorithms.

    Science.gov (United States)

    Yuan, Y; Duan, J; Popple, R; Brezovich, I

    2012-06-01

    To evaluate the accuracy of dose calculation for intensity modulated radiation therapy (IMRT) based on Pencil Beam (PB) and Analytical Anisotropic Algorithm (AAA) computation algorithms. IMRT plans of twelve patients with different treatment sites, including head/neck, lung and pelvis, were investigated. For each patient, dose calculations with the PB and AAA algorithms using dose grid sizes of 0.5 cm, 0.25 cm, and 0.125 cm were compared with composite-beam ion chamber and film measurements in patient-specific QA. Discrepancies between calculation and measurement were evaluated by the percentage error for ion chamber dose and by the γ > 1 failure rate in gamma analysis (3%/3 mm) for film dosimetry. For 9 patients, the ion chamber dose calculated with the AAA algorithm was closer to the ion chamber measurement than that calculated with the PB algorithm at a grid size of 2.5 mm, though all calculated ion chamber doses were within 3% of the measurements. For head/neck patients and other patients with large treatment volumes, the γ > 1 failure rate was significantly reduced (to within 5%) with AAA-based treatment planning, compared to generally more than 10% with PB-based treatment planning (grid size = 2.5 mm). For lung and brain cancer patients with medium and small treatment volumes, γ > 1 failure rates were typically within 5% for both AAA- and PB-based treatment planning (grid size = 2.5 mm). For both PB- and AAA-based treatment planning, improvements in dose calculation accuracy with finer dose grids were observed in the film dosimetry of 11 patients and in the ion chamber measurements of 3 patients. AAA-based treatment planning provides more accurate dose calculation for head/neck patients and other patients with large treatment volumes; compared with film dosimetry, a γ > 1 failure rate within 5% can be achieved with AAA-based treatment planning. © 2012 American Association of Physicists in Medicine.
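The γ > 1 failure rate used above comes from gamma analysis, which combines a dose-difference tolerance (3%) with a distance-to-agreement tolerance (3 mm). A simplified one-dimensional global gamma sketch (clinical film analysis is 2D, and the profiles below are invented for illustration) might look like:

```python
import numpy as np

def gamma_1d(ref_dose, eval_dose, x, dose_tol=0.03, dist_tol=3.0):
    """Simplified 1D global gamma (3%/3 mm by default): for each reference
    point, minimise the combined dose/distance metric over the evaluated curve."""
    dmax = ref_dose.max()
    gam = np.empty_like(ref_dose)
    for i, (xr, dr) in enumerate(zip(x, ref_dose)):
        dd = (eval_dose - dr) / (dose_tol * dmax)   # dose-difference term
        dx = (x - xr) / dist_tol                    # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dx**2).min()
    return gam

# Hypothetical profile pair: evaluated curve shifted by 1 mm and scaled by 1%,
# both well inside the 3%/3 mm tolerances.
x = np.linspace(0.0, 100.0, 201)                    # position in mm
ref = np.exp(-((x - 50.0) / 15.0) ** 2)             # reference profile
ev = 1.01 * np.exp(-((x - 51.0) / 15.0) ** 2)       # evaluated profile
gam = gamma_1d(ref, ev, x)
fail_rate = 100.0 * np.mean(gam > 1.0)              # percentage of points with γ > 1
print(fail_rate)
```

A point fails (γ > 1) only when no nearby evaluated point agrees within the combined dose and distance tolerances, which is why small spatial shifts in high-gradient regions do not automatically count as errors.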

  7. Accuracy of radiotherapy dose calculations based on cone-beam CT: comparison of deformable registration and image correction based methods

    Science.gov (United States)

    Marchant, T. E.; Joshi, K. D.; Moore, C. J.

    2018-03-01

    Radiotherapy dose calculations based on cone-beam CT (CBCT) images can be inaccurate due to unreliable Hounsfield units (HU) in the CBCT. Deformable image registration of planning CT images to CBCT, and direct correction of CBCT image values are two methods proposed to allow heterogeneity corrected dose calculations based on CBCT. In this paper we compare the accuracy and robustness of these two approaches. CBCT images for 44 patients were used including pelvis, lung and head & neck sites. CBCT HU were corrected using a ‘shading correction’ algorithm and via deformable registration of planning CT to CBCT using either Elastix or Niftyreg. Radiotherapy dose distributions were re-calculated with heterogeneity correction based on the corrected CBCT and several relevant dose metrics for target and OAR volumes were calculated. Accuracy of CBCT based dose metrics was determined using an ‘override ratio’ method where the ratio of the dose metric to that calculated on a bulk-density assigned version of the same image is assumed to be constant for each patient, allowing comparison to the patient’s planning CT as a gold standard. Similar performance is achieved by shading corrected CBCT and both deformable registration algorithms, with mean and standard deviation of dose metric error less than 1% for all sites studied. For lung images, use of deformed CT leads to slightly larger standard deviation of dose metric error than shading corrected CBCT with more dose metric errors greater than 2% observed (7% versus 1%).

  8. A parallel orbital-updating based plane-wave basis method for electronic structure calculations

    International Nuclear Information System (INIS)

    Pan, Yan; Dai, Xiaoying; Gironcoli, Stefano de; Gong, Xin-Gao; Rignanese, Gian-Marco; Zhou, Aihui

    2017-01-01

    Highlights: • Three parallel orbital-updating based plane-wave basis methods are proposed for electronic structure calculations. • These new methods avoid generating large-scale eigenvalue problems and thereby reduce the computational cost. • They allow for two-level parallelization, which is particularly interesting for large scale parallelization. • Numerical experiments show that they are reliable and efficient for large scale calculations on modern supercomputers. - Abstract: Motivated by the recently proposed parallel orbital-updating approach in real space methods, we propose a parallel orbital-updating based plane-wave basis method for electronic structure calculations, for solving the corresponding eigenvalue problems. In addition, we propose two new modified parallel orbital-updating methods. Compared to traditional plane-wave methods, our methods allow for two-level parallelization, which is particularly interesting for large scale parallelization. Numerical experiments show that these new methods are more reliable and efficient for large scale calculations on modern supercomputers.

  9. Technical Work Plan For: Calculation of Waste Package and Drip Shield Response to Vibratory Ground Motion and Revision of the Seismic Consequence Abstraction

    International Nuclear Information System (INIS)

    M. Gross

    2006-01-01

    The overall objective of the work scope covered by this technical work plan (TWP) is to develop new damage abstractions for the seismic scenario class in total system performance assessment (TSPA). The new abstractions will be based on a new set of waste package and drip shield damage calculations in response to vibratory ground motion and fault displacement. The new damage calculations, which are collectively referred to as damage models in this TWP, are required to represent recent changes in waste form packaging and in the regulatory time frame. The new damage models also respond to comments from the Independent Validation Review Team (IVRT) postvalidation review of the draft TSPA model regarding performance of the drip shield and to an Additional Information Need (AIN) from the U.S. Nuclear Regulatory Commission (NRC)

  10. Technical Work Plan For: Calculation of Waste Package and Drip Shield Response to Vibratory Ground Motion and Revision of the Seismic Consequence Abstraction

    Energy Technology Data Exchange (ETDEWEB)

    M. Gross

    2006-12-08

    The overall objective of the work scope covered by this technical work plan (TWP) is to develop new damage abstractions for the seismic scenario class in total system performance assessment (TSPA). The new abstractions will be based on a new set of waste package and drip shield damage calculations in response to vibratory ground motion and fault displacement. The new damage calculations, which are collectively referred to as damage models in this TWP, are required to represent recent changes in waste form packaging and in the regulatory time frame. The new damage models also respond to comments from the Independent Validation Review Team (IVRT) postvalidation review of the draft TSPA model regarding performance of the drip shield and to an Additional Information Need (AIN) from the U.S. Nuclear Regulatory Commission (NRC).

  11. Validation of GPU based TomoTherapy dose calculation engine.

    Science.gov (United States)

    Chen, Quan; Lu, Weiguo; Chen, Yu; Chen, Mingli; Henderson, Douglas; Sterpin, Edmond

    2012-04-01

    The graphic processing unit (GPU) based TomoTherapy convolution/superposition (C/S) dose engine (GPU dose engine) achieves a dramatic performance improvement over the traditional CPU-cluster based TomoTherapy dose engine (CPU dose engine). Besides the architecture difference between the GPU and CPU, there are several algorithm changes from the CPU dose engine to the GPU dose engine. These changes make the GPU dose slightly different from the CPU-cluster dose. For the commercial release of the GPU dose engine, its accuracy had to be validated. Thirty-eight TomoTherapy phantom plans and 19 patient plans were calculated with both dose engines to evaluate the equivalency between the two. Gamma indices (Γ) were used for the equivalency evaluation. The GPU dose was further verified with absolute point dose measurement with ion chamber and with film measurements for phantom plans. Monte Carlo calculation was used as a reference for both dose engines in the accuracy evaluation in heterogeneous phantom and actual patients. The GPU dose engine showed excellent agreement with the current CPU dose engine: the majority of cases had over 99.99% of voxels with Γ(1%, 1 mm) < 1. The GPU dose engine also showed a similar degree of accuracy in heterogeneous media as the current TomoTherapy dose engine. It is verified and validated that the ultrafast TomoTherapy GPU dose engine can safely replace the existing TomoTherapy cluster based dose engine without degradation in dose accuracy.

  12. Dose calculation algorithm for the Department of Energy Laboratory Accreditation Program

    International Nuclear Information System (INIS)

    Moscovitch, M.; Tawil, R.A.; Thompson, D.; Rhea, T.A.

    1991-01-01

    The dose calculation algorithm for a symmetric four-element LiF:Mg,Ti based thermoluminescent dosimeter is presented. The algorithm is based on the parameterization of the response of the dosimeter when exposed to both pure and mixed fields of various types and compositions. The experimental results were then used to develop the algorithm as a series of empirical response functions. Experiments to determine the response of the dosimeter and to test the dose calculation algorithm were performed according to the standard established by the Department of Energy Laboratory Accreditation Program (DOELAP). The test radiation fields include: 137Cs gamma rays, 90Sr/90Y and 204Tl beta particles, low energy photons of 20-120 keV and moderated 252Cf neutron fields. The accuracy of the system has been demonstrated in an official DOELAP blind test conducted at Sandia National Laboratory. The test results were well within DOELAP tolerance limits. The results of this test are presented and discussed

  13. Determination of water pH using absorption-based optical sensors: evaluation of different calculation methods

    Science.gov (United States)

    Wang, Hongliang; Liu, Baohua; Ding, Zhongjun; Wang, Xiangxin

    2017-02-01

    Absorption-based optical sensors have been developed for the determination of water pH. In this paper, based on the preparation of a transparent sol-gel thin film with a phenol red (PR) indicator, several calculation methods, including simple linear regression analysis, quadratic regression analysis and dual-wavelength absorbance ratio analysis, were used to calculate water pH. Results in terms of the MSSRR show that dual-wavelength absorbance ratio analysis can improve the calculation accuracy of water pH in long-term measurement.
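The dual-wavelength absorbance ratio method can be illustrated with a small sketch. The wavelengths, calibration absorbances, and the log-linear fit below are illustrative assumptions, not the paper's data:

```python
import numpy as np

# Hypothetical calibration: absorbance of a phenol red film at two wavelengths
# (558 nm, rising with pH; 430 nm, falling with pH) for buffers of known pH.
ph_cal = np.array([6.0, 6.5, 7.0, 7.5, 8.0])
a558 = np.array([0.12, 0.21, 0.38, 0.61, 0.80])
a430 = np.array([0.80, 0.72, 0.58, 0.41, 0.30])

# The two-wavelength ratio suppresses source drift and film-thickness variation,
# which single-wavelength regression cannot do.
ratio = a558 / a430

# Fit pH as a linear function of log10(ratio); the indicator sigmoid is
# approximately log-linear in its mid-range.
coeffs = np.polyfit(np.log10(ratio), ph_cal, 1)

def ph_from_absorbance(abs558, abs430):
    """Estimate pH from a new pair of absorbance readings."""
    return np.polyval(coeffs, np.log10(abs558 / abs430))

est = ph_from_absorbance(0.38, 0.58)   # a calibration point, as a sanity check
print(round(est, 2))
```

In a deployed sensor the same fit would simply be re-evaluated on each new absorbance pair, with periodic recalibration against reference buffers.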

  14. Consolidating duodenal and small bowel toxicity data via isoeffective dose calculations based on compiled clinical data.

    Science.gov (United States)

    Prior, Phillip; Tai, An; Erickson, Beth; Li, X Allen

    2014-01-01

    To consolidate duodenum and small bowel toxicity data from clinical studies with different dose fractionation schedules using the modified linear quadratic (MLQ) model. A methodology of adjusting the dose-volume (D,v) parameters to different levels of normal tissue complication probability (NTCP) was presented. A set of NTCP model parameters for duodenum toxicity were estimated by the χ² fitting method using literature-based tolerance dose and generalized equivalent uniform dose (gEUD) data. These model parameters were then used to convert (D,v) data into the isoeffective dose in 2 Gy per fraction, (D(MLQED2),v), and to convert these parameters to an isoeffective dose at another NTCP level, (D(MLQED2'),v). The literature search yielded 5 reports useful in making estimates of duodenum and small bowel toxicity. The NTCP model parameters were found to be TD50(model) = 60.9 ± 7.9 Gy, m = 0.21 ± 0.05, and δ = 0.09 ± 0.03 Gy⁻¹. Isoeffective dose calculations and toxicity rates associated with hypofractionated radiation therapy reports were found to be consistent with clinical data having different fractionation schedules. Values of (D(MLQED2'),v) between different NTCP levels remain consistent over a range of 5%-20%. MLQ-based isoeffective calculations of dose-response data corresponding to grade ≥2 duodenum toxicity were found to be consistent with one another within the calculation uncertainty. The (D(MLQED2),v) data could be used to determine duodenum and small bowel dose-volume constraints for new dose escalation strategies. Copyright © 2014 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
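The isoeffective-dose conversion behind the (D(MLQED2),v) parameters reduces, in the plain linear-quadratic limit (i.e., dropping the paper's MLQ δ correction for large doses per fraction), to the standard EQD2 formula, sketched here with an assumed α/β:

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta=3.0):
    """Standard LQ isoeffective dose in 2 Gy fractions (EQD2).
    alpha_beta is the tissue alpha/beta ratio in Gy (3 Gy assumed here as a
    typical late-toxicity value; the paper's MLQ model additionally applies
    a delta term at high doses per fraction, omitted in this sketch)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Example: a hypofractionated 25 Gy in 5 fractions (5 Gy/fraction) schedule.
print(eqd2(25.0, 5.0))   # 40.0 Gy
```

This is what allows tolerance data from, say, 5-fraction and 28-fraction studies to be pooled on a single dose axis before fitting NTCP parameters.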

  15. Radial electromagnetic force calculation of induction motor based on multi-loop theory

    Directory of Open Access Journals (Sweden)

    HE Haibo

    2017-12-01

    Full Text Available [Objectives] In order to study the vibration and noise of induction motors, a method of radial electromagnetic force calculation is established on the basis of the multi-loop model. [Methods] Based on the method of calculating the air-gap magnetomotive force from the stator and rotor fundamental wave currents, analytic formulas are deduced for the air-gap magnetomotive force and the radial electromagnetic force generated by any stator winding and rotor conducting bar current. The multi-loop theory and the calculation method for the electromagnetic parameters of a motor are introduced, and a dynamic simulation model of an induction motor is built to obtain the currents of the stator winding and rotor conducting bars and the calculation formula of the radial electromagnetic force. The radial electromagnetic force and vibration are then estimated. [Results] The calculated vibration acceleration frequency and amplitude of the motor are consistent with the experimental results. [Conclusions] The results and calculation method can support the low noise design of converters.

  16. Calculation of marine propeller static strength based on coupled BEM/FEM

    Directory of Open Access Journals (Sweden)

    YE Liyu

    2017-10-01

    Full Text Available [Objectives] The reliability of propeller stress has a great influence on the safe navigation of a ship. To predict propeller stress quickly and accurately, [Methods] a new numerical prediction model is developed by coupling the Boundary Element Method (BEM) with the Finite Element Method (FEM). The low order BEM is used to calculate the hydrodynamic load on the blades, and the Prandtl-Schlichting plate friction resistance formula is used to calculate the viscous load. Next, the calculated hydrodynamic load and viscous correction load are transmitted to the finite element calculation as surface loads. Considering the particularity of propeller geometry, a continuous contact detection algorithm is developed; an automatic method for generating the finite element mesh is developed for the propeller blade; a code based on the FEM is compiled for predicting blade stress and deformation; the DTRC 4119 propeller model is applied to validate the reliability of the method; and mesh independence is confirmed by comparing the calculated results for different sizes and types of mesh. [Results] The results show that the calculated blade stress and displacement distributions are reliable. This method avoids the process of manual modeling and finite element mesh generation, and has the advantages of simple program implementation and high calculation efficiency. [Conclusions] The code can be embedded into the code of theoretical and optimized propeller designs, thereby helping to ensure the strength of designed propellers and improve the efficiency of propeller design.

  17. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. The maximum relative error …

  18. Sequential Objective Structured Clinical Examination based on item response theory in Iran

    Directory of Open Access Journals (Sweden)

    Sara Mortaz Hejri

    2017-09-01

    Full Text Available Purpose: In a sequential objective structured clinical examination (OSCE), all students initially take a short screening OSCE. Examinees who pass are excused from further testing, but an additional OSCE is administered to the remaining examinees. Previous investigations of sequential OSCE were based on classical test theory. We aimed to design and evaluate screening OSCEs based on item response theory (IRT). Methods: We carried out a retrospective observational study. At each station of a 10-station OSCE, the students’ performance was graded on a Likert-type scale. Since the data were polytomous, the difficulty parameters, discrimination parameters, and students’ ability were calculated using a graded response model. To design several screening OSCEs, we identified the 5 most difficult stations and the 5 most discriminative ones. For each test, 5, 4, or 3 stations were selected. Normal and stringent cut-scores were defined for each test. We compared the results of each of the 12 screening OSCEs to the main OSCE and calculated the positive and negative predictive values (PPV and NPV), as well as the exam cost. Results: A total of 253 students (95.1%) passed the main OSCE, while 72.6% to 94.4% of examinees passed the screening tests. The PPV values ranged from 0.98 to 1.00, and the NPV values ranged from 0.18 to 0.59. Two tests effectively predicted the results of the main exam, resulting in financial savings of 34% to 40%. Conclusion: If stations with the highest IRT-based discrimination values and stringent cut-scores are utilized in the screening test, sequential OSCE can be an efficient and convenient way to conduct an OSCE.

  19. Sequential Objective Structured Clinical Examination based on item response theory in Iran.

    Science.gov (United States)

    Hejri, Sara Mortaz; Jalili, Mohammad

    2017-01-01

    In a sequential objective structured clinical examination (OSCE), all students initially take a short screening OSCE. Examinees who pass are excused from further testing, but an additional OSCE is administered to the remaining examinees. Previous investigations of sequential OSCE were based on classical test theory. We aimed to design and evaluate screening OSCEs based on item response theory (IRT). We carried out a retrospective observational study. At each station of a 10-station OSCE, the students' performance was graded on a Likert-type scale. Since the data were polytomous, the difficulty parameters, discrimination parameters, and students' ability were calculated using a graded response model. To design several screening OSCEs, we identified the 5 most difficult stations and the 5 most discriminative ones. For each test, 5, 4, or 3 stations were selected. Normal and stringent cut-scores were defined for each test. We compared the results of each of the 12 screening OSCEs to the main OSCE and calculated the positive and negative predictive values (PPV and NPV), as well as the exam cost. A total of 253 students (95.1%) passed the main OSCE, while 72.6% to 94.4% of examinees passed the screening tests. The PPV values ranged from 0.98 to 1.00, and the NPV values ranged from 0.18 to 0.59. Two tests effectively predicted the results of the main exam, resulting in financial savings of 34% to 40%. If stations with the highest IRT-based discrimination values and stringent cut-scores are utilized in the screening test, sequential OSCE can be an efficient and convenient way to conduct an OSCE.
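The PPV and NPV reported above follow directly from the cross-tabulation of screening and main-exam outcomes. A small sketch with an invented cohort (the counts below are illustrative, not the study's data):

```python
def screening_stats(results):
    """results: list of (passed_screening, passed_main) booleans per examinee.
    PPV = P(pass main | pass screening); NPV = P(fail main | fail screening)."""
    tp = sum(s and m for s, m in results)          # screen-pass, main-pass
    fp = sum(s and not m for s, m in results)      # screen-pass, main-fail
    tn = sum(not s and not m for s, m in results)  # screen-fail, main-fail
    fn = sum(not s and m for s, m in results)      # screen-fail, main-pass
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical cohort: 240 pass both, 2 pass the screen but fail the main exam,
# 8 fail both, 16 fail the screen yet would have passed the main exam.
results = [(True, True)] * 240 + [(True, False)] * 2 \
        + [(False, False)] * 8 + [(False, True)] * 16
ppv, npv = screening_stats(results)
print(round(ppv, 2), round(npv, 2))   # → 0.99 0.33
```

A high PPV means examinees excused after the screening OSCE would almost all have passed the full exam anyway, which is what makes the sequential design defensible; the low NPV simply reflects that many screen failures still pass the full exam and so must be retested rather than failed outright.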

  20. An independent dose calculation algorithm for MLC-based stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Lorenz, Friedlieb; Killoran, Joseph H.; Wenz, Frederik; Zygmanski, Piotr

    2007-01-01

    We have developed an algorithm to calculate dose in a homogeneous phantom for radiotherapy fields defined by a multi-leaf collimator (MLC), for both static and dynamic MLC delivery. The algorithm was developed to supplement the dose algorithms of commercial treatment planning systems (TPS). The motivation for this work is to provide an independent dose calculation, primarily for quality assurance (QA) and secondarily for the development of static MLC field based inverse planning. The dose calculation utilizes a pencil-beam kernel; an explicit analytical integration results in a closed form for rectangular-shaped beamlets, defined by single leaf pairs. This approach reduces spatial integration to summation and leads to a simple method of determining the model parameters. The total dose for any static or dynamic MLC field is obtained by summing over all individual rectangles from each segment, which makes it fast to calculate two-dimensional dose distributions at any depth in the phantom. Standard beam data used in the commissioning of the TPS served as input data for the algorithm. The calculated results were compared with the TPS and with measurements for static and dynamic MLC. The agreement was very good (<2.5%) for all tested cases except for very small static MLC sizes of 0.6 cm × 0.6 cm (<6%) and some ion chamber measurements in a high gradient region (<4.4%). This finding enables us to use the algorithm for routine QA as well as for research developments
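The closed form for rectangular beamlets can be illustrated with a single-Gaussian pencil kernel, for which the 2D convolution over a rectangle separates into error-function factors. (Commercial kernels are typically sums of several such terms; the single Gaussian and its width here are illustrative assumptions, not the authors' parameterization.)

```python
from math import erf, sqrt

def rect_beamlet_dose(x, y, x0, x1, y0, y1, sigma=5.0):
    """Normalised dose at (x, y) from a uniform rectangular beamlet
    [x0, x1] x [y0, y1] (mm) under a single-Gaussian pencil kernel of
    width sigma (mm). The 2D integral of the Gaussian over the rectangle
    factorises into a product of 1D error-function terms, so no numerical
    spatial integration is needed."""
    s = sigma * sqrt(2.0)
    fx = 0.5 * (erf((x1 - x) / s) - erf((x0 - x) / s))
    fy = 0.5 * (erf((y1 - y) / s) - erf((y0 - y) / s))
    return fx * fy

# Centre of a large field: both factors approach 1, dose -> 1 (normalised);
# on the field edge one factor drops to 0.5.
d_centre = rect_beamlet_dose(0.0, 0.0, -50, 50, -50, 50)
d_edge = rect_beamlet_dose(50.0, 0.0, -50, 50, -50, 50)
print(round(d_centre, 3), round(d_edge, 3))   # → 1.0 0.5
```

Summing `rect_beamlet_dose` over every leaf-pair rectangle of every segment reproduces the "integration reduced to summation" structure described in the abstract.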

  1. A New Optimization Method for Centrifugal Compressors Based on 1D Calculations and Analyses

    Directory of Open Access Journals (Sweden)

    Pei-Yuan Li

    2015-05-01

    Full Text Available This paper presents an optimization design method for centrifugal compressors based on one-dimensional calculations and analyses. It consists of two parts: (1) centrifugal compressor geometry optimization based on one-dimensional calculations and (2) matching optimization of the vaned diffuser with an impeller based on the required throat area. A low pressure stage centrifugal compressor in a MW level gas turbine is optimized by this method. One-dimensional calculation results show that D3/D2 is too large in the original design, resulting in the low efficiency of the entire stage. Based on the one-dimensional optimization results, the geometry of the diffuser has been redesigned: the outlet diameter of the vaneless diffuser has been reduced, and the original single stage diffuser has been replaced by a tandem vaned diffuser. After optimization, the entire stage pressure ratio is increased by approximately 4%, and the efficiency is increased by approximately 2%.

  2. Real-Time Continuous Response Spectra Exceedance Calculation Displayed in a Web-Browser Enables Rapid and Robust Damage Evaluation by First Responders

    Science.gov (United States)

    Franke, M.; Skolnik, D. A.; Harvey, D.; Lindquist, K.

    2014-12-01

    A novel and robust approach is presented that provides near real-time earthquake alarms for critical structures at distributed locations and large facilities using real-time estimation of response spectra obtained from near free-field motions. Influential studies dating back to the 1980s identified spectral response acceleration as a key ground motion characteristic that correlates well with observed damage in structures. Thus, monitoring and reporting on exceedance of spectra-based thresholds are useful tools for assessing the potential for damage to facilities or multi-structure campuses based on input ground motions only. With as little as one strong-motion station per site, this scalable approach can provide rapid alarms on the damage status of remote towns, critical infrastructure (e.g., hospitals, schools) and points of interests (e.g., bridges) for a very large number of locations enabling better rapid decision making during critical and difficult immediate post-earthquake response actions. Details on the novel approach are presented along with an example implementation for a large energy company. Real-time calculation of PSA exceedance and alarm dissemination are enabled with Bighorn, an extension module based on the Antelope software package that combines real-time spectral monitoring and alarm capabilities with a robust built-in web display server. Antelope is an environmental data collection software package from Boulder Real Time Technologies (BRTT) typically used for very large seismic networks and real-time seismic data analyses. The primary processing engine produces continuous time-dependent response spectra for incoming acceleration streams. It utilizes expanded floating-point data representations within object ring-buffer packets and waveform files in a relational database. This leads to a very fast method for computing response spectra for a large number of channels. 
A Python script evaluates these response spectra for exceedance of one or more …
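Per oscillator period, continuous response-spectrum monitoring reduces to integrating a damped single-degree-of-freedom equation driven by the incoming acceleration stream and comparing the peak response to a threshold. A minimal sketch (a simple semi-implicit Euler integrator on an invented signal, not Bighorn's engine):

```python
import numpy as np

def pseudo_spectral_accel(ground_acc, dt, period, damping=0.05):
    """Pseudo-spectral acceleration for one oscillator period: integrate the
    damped SDOF equation u'' + 2*z*w*u' + w^2*u = -a_g(t) with a simple
    semi-implicit Euler scheme, then return w^2 * max|u|."""
    w = 2.0 * np.pi / period
    u, v, umax = 0.0, 0.0, 0.0
    for ag in ground_acc:
        a = -ag - 2.0 * damping * w * v - w * w * u
        v += a * dt
        u += v * dt
        umax = max(umax, abs(u))
    return w * w * umax

def exceeds_threshold(ground_acc, dt, periods, threshold):
    """Alarm check: does PSA exceed the damage threshold at any period?"""
    return any(pseudo_spectral_accel(ground_acc, dt, T) > threshold
               for T in periods)

# Sanity check: unit-amplitude resonant shaking of a 1 s oscillator should
# amplify to roughly 1/(2*damping) = 10 once the transient has died out.
dt = 0.005
t = np.arange(0.0, 20.0, dt)
ag = np.sin(2.0 * np.pi * t)             # resonant with the 1 s period
psa_1s = pseudo_spectral_accel(ag, dt, 1.0)
alarm = exceeds_threshold(ag, dt, [0.3, 1.0, 3.0], threshold=5.0)
print(round(psa_1s, 1), alarm)
```

In a real-time system the same recursion runs incrementally on each incoming packet, carrying (u, v) forward per monitored period rather than reprocessing the whole record.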

  3. Calculating evidence-based renal replacement therapy - Introducing an excel-based calculator to improve prescribing and delivery in renal replacement therapy - A before and after study.

    Science.gov (United States)

    Cottle, Daniel; Mousdale, Stephen; Waqar-Uddin, Haroon; Tully, Redmond; Taylor, Benjamin

    2016-02-01

    Transferring the theoretical aspect of continuous renal replacement therapy to the bedside and delivering a given "dose" can be difficult. In research, the "dose" of renal replacement therapy is given as effluent flow rate in ml kg⁻¹ h⁻¹. Unfortunately, most machines require other information when they are initiating therapy, including blood flow rate, pre-blood pump flow rate, dialysate flow rate, etc. This can lead to confusion, resulting in patients receiving inappropriate doses of renal replacement therapy. Our aim was to design an Excel calculator that would personalise patients' treatment, deliver an effective, evidence-based dose of renal replacement therapy without large variations in practice, and prolong filter life. Our calculator prescribes a haemodiafiltration dose of 25 ml kg⁻¹ h⁻¹ whilst limiting the filtration fraction to 15%. We compared the episodes of renal replacement therapy received by a historical group of patients, by retrieving their data stored on the haemofiltration machines, to a group where the calculator was used. In the second group, the data were gathered prospectively. The median delivered dose was reduced from 41.0 ml kg⁻¹ h⁻¹ to 26.8 ml kg⁻¹ h⁻¹, with reduced variability that was significantly closer to the aim of 25 ml kg⁻¹ h⁻¹ (p < 0.0001). The median treatment time increased from 8.5 h to 22.2 h (p = 0.00001). Our calculator significantly reduces variation in prescriptions of continuous veno-venous haemodiafiltration and provides an evidence-based dose. It is easy to use and provides personal care for patients whilst optimizing continuous veno-venous haemodiafiltration delivery and treatment times.
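The calculator's two constraints (a 25 ml kg⁻¹ h⁻¹ effluent dose and a filtration fraction capped at 15%) can be sketched as below. The even dialysate/replacement split and the post-dilution filtration-fraction formula are illustrative assumptions, not the authors' exact spreadsheet logic:

```python
def crrt_prescription(weight_kg, haematocrit, dose_ml_kg_h=25.0, max_ff=0.15):
    """Sketch of an evidence-based CVVHDF prescription: target effluent dose
    split evenly between dialysate and (post-dilution) replacement fluid,
    with the minimum blood flow chosen so that the filtration fraction
    FF = ultrafiltration / plasma flow stays at or below max_ff."""
    effluent = dose_ml_kg_h * weight_kg          # total effluent, ml/h
    dialysate = replacement = effluent / 2.0     # assumed even CVVHDF split
    # FF = UF / plasma flow; plasma flow = blood flow * (1 - haematocrit)
    uf_ml_min = replacement / 60.0
    min_blood_flow = uf_ml_min / (max_ff * (1.0 - haematocrit))   # ml/min
    return {
        "effluent_ml_h": effluent,
        "dialysate_ml_h": dialysate,
        "replacement_ml_h": replacement,
        "min_blood_flow_ml_min": round(min_blood_flow, 1),
    }

# Example: 80 kg patient with a haematocrit of 0.30.
rx = crrt_prescription(weight_kg=80.0, haematocrit=0.30)
print(rx)
```

The point of automating this arithmetic is exactly the one the abstract makes: the bedside machine asks for blood, dialysate, and replacement flows, not for the research-style effluent dose, so the conversion is where prescription variability creeps in.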

  4. Fast CPU-based Monte Carlo simulation for radiotherapy dose calculation

    Science.gov (United States)

    Ziegenhein, Peter; Pirner, Sven; Kamerling, Cornelis Ph; Oelfke, Uwe

    2015-08-01

    Monte Carlo (MC) simulations are considered to be the most accurate method for calculating dose distributions in radiotherapy. Their clinical application, however, is still limited by the long runtimes that conventional implementations of MC algorithms require to deliver sufficiently accurate results on high resolution imaging data. In order to overcome this obstacle we developed the software package PhiMC, which is capable of computing precise dose distributions in a sub-minute time-frame by leveraging the potential of modern many- and multi-core CPU-based computers. PhiMC is based on the well verified dose planning method (DPM). We could demonstrate that PhiMC delivers dose distributions which are in excellent agreement with DPM. The multi-core implementation of PhiMC scales well between different computer architectures and achieves a speed-up of up to 37× compared to the original DPM code executed on a modern system. Furthermore, we could show that our CPU-based implementation on a modern workstation is between 1.25× and 1.95× faster than a well-known GPU implementation of the same simulation method on an NVIDIA Tesla C2050. Since CPUs can work on several hundred GB of RAM, the typical GPU memory limitation does not apply to our implementation, and high resolution clinical plans can be calculated.

  5. Calculation of the RPA response function of nuclei to quasi-elastic electron scattering with a density-dependent NN interaction

    International Nuclear Information System (INIS)

    Caillon, J-C.; Labarsouque, J.

    1997-01-01

    So far, non-relativistic calculations of the longitudinal and transverse response functions in quasi-elastic electron scattering on nuclei have failed to reproduce the existing experimental data satisfactorily. Calculations including relativistic RPA correlations have until now used the relativistic Hartree approximation to describe nuclear matter. However, this yields an incompressibility modulus twice the experimental value, which is an important drawback for the calculation of realistic relativistic RPA correlations. We have therefore determined the RPA response functions of nuclei using a description of relativistic nuclear matter that leads to an incompressibility modulus in agreement with the empirical value. To do so, we used an interaction in the relativistic Hartree approximation in which the σ-N and ω-N coupling constants were determined as functions of the density so as to reproduce the saturation curve obtained by a Dirac-Brueckner calculation. Our results show that the longitudinal response function and the Coulomb sum, generally overestimated when the pure relativistic Hartree approximation is used, are here in good agreement with the experimental data for several nuclei.

  6. Ab initio theory and calculations of X-ray spectra

    International Nuclear Information System (INIS)

    Rehr, J.J.; Kas, J.J.; Prange, M.P.; Sorini, A.P.; Takimoto, Y.; Vila, F.

    2009-01-01

    There has been dramatic progress in recent years both in the calculation and interpretation of various x-ray spectroscopies. However, current theoretical calculations often use a number of simplified models to account for many-body effects, in lieu of first principles calculations. In an effort to overcome these limitations we describe in this article a number of recent advances in theory and in theoretical codes which offer the prospect of parameter free calculations that include the dominant many-body effects. These advances are based on ab initio calculations of the dielectric and vibrational response of a system. Calculations of the dielectric function over a broad spectrum yield system dependent self-energies and mean-free paths, as well as intrinsic losses due to multielectron excitations. Calculations of the dynamical matrix yield vibrational damping in terms of multiple-scattering Debye-Waller factors. Our ab initio methods for determining these many-body effects have led to new, improved, and broadly applicable x-ray and electron spectroscopy codes. (authors)

  7. Method to Calculate the Electricity Generated by a Photovoltaic Cell, Based on Its Mathematical Model Simulations in MATLAB

    Directory of Open Access Journals (Sweden)

    Carlos Morcillo-Herrera

    2015-01-01

    Full Text Available This paper presents a practical method for calculating the electrical energy (kWh) generated by a PV panel through MATLAB simulations based on the mathematical model of the cell, which obtains the "Mean Maximum Power Point" (MMPP) of the characteristic V-P curve in response to historical climate data at a specific location. This five-step method calculates, through the MMPP per day, month, or year, the power yield per unit area, the electrical energy generated by the PV panel, and its real conversion efficiency. To validate the method, it was applied to the Sewage Treatment Plant of the Drinking Water and Sewerage Group of Yucatán (JAPAY), México, testing 250 Wp photovoltaic panels from five different manufacturers. As a result, the performance, real conversion efficiency, and electricity generated by the five PV panels under evaluation were obtained, showing the best techno-economic option for developing the PV generation project.
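The five-step procedure itself is not spelled out in the abstract; the sketch below only illustrates the final arithmetic of turning an MMPP power figure into energy yield and a "real" conversion efficiency. All function names, figures, and the standard-irradiance assumption (1000 W/m²) are illustrative.

```python
def pv_energy_and_efficiency(mmpp_w, area_m2, sun_hours_day, days,
                             irradiance_w_m2=1000.0):
    """Turn a Mean Maximum Power Point figure (W) into energy generated
    (kWh) over a period, yield per unit area, and a conversion
    efficiency relative to standard irradiance."""
    energy_kwh = mmpp_w * sun_hours_day * days / 1000.0
    yield_kwh_m2 = energy_kwh / area_m2
    efficiency = mmpp_w / (irradiance_w_m2 * area_m2)
    return energy_kwh, yield_kwh_m2, efficiency

# Hypothetical 250 Wp panel of 1.6 m^2 delivering an MMPP of 200 W for 5 h/day
e_kwh, y_kwh_m2, eff = pv_energy_and_efficiency(200.0, 1.6, 5.0, 30)
```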

  8. Weldon Spring dose calculations

    International Nuclear Information System (INIS)

    Dickson, H.W.; Hill, G.S.; Perdue, P.T.

    1978-09-01

    In response to a request by the Oak Ridge Operations (ORO) Office of the Department of Energy (DOE) for assistance to the Department of the Army (DA) on the decommissioning of the Weldon Spring Chemical Plant, the Health and Safety Research Division of the Oak Ridge National Laboratory (ORNL) performed limited dose assessment calculations for that site. Based upon radiological measurements from a number of soil samples analyzed by ORNL and from previously acquired radiological data for the Weldon Spring site, source terms were derived to calculate radiation doses for three specific site scenarios. These three hypothetical scenarios are: a wildlife refuge for hunting, fishing, and general outdoor recreation; a school with 40 hr per week occupancy by students and a custodian; and a truck farm producing fruits, vegetables, meat, and dairy products which may be consumed on site. Radiation doses are reported for each of these scenarios both for measured uranium daughter equilibrium ratios and for assumed secular equilibrium. Doses are lower for the nonequilibrium case

  9. Applying Activity Based Costing (ABC) Method to Calculate Cost Price in Hospital and Remedy Services.

    Science.gov (United States)

    Rajabi, A; Dabiri, A

    2012-01-01

    Activity Based Costing (ABC) is one of the new costing methodologies that began appearing in the 1990s. It calculates cost price by determining the usage of resources. In this study, the ABC method was used to calculate the cost price of remedial services in hospitals. To apply the ABC method, Shahid Faghihi Hospital was selected. First, hospital units were divided into three main departments: administrative, diagnostic, and hospitalization. Second, activity centers were defined by the activity analysis method. Third, the costs of the administrative activity centers were allocated to the diagnostic and operational departments based on cost drivers. Finally, with regard to the usage of cost objectives from the services of the activity centers, the cost price of medical services was calculated. The cost price from the ABC method differs significantly from the tariff method. In addition, the high proportion of indirect costs in the hospital indicates that resource capacities are not used properly. The cost price of remedial services is not properly calculated with the tariff method when compared with the ABC method: ABC calculates cost price by applying suitable mechanisms, whereas the tariff method is based on fixed prices.
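The allocation step described above can be sketched in a toy form: indirect (administrative) costs are spread over activity centres in proportion to a cost driver, then total centre cost is divided by service volume. All centre names, drivers, and figures below are hypothetical, not the hospital's data.

```python
def abc_unit_costs(direct, admin_total, driver, volume):
    """Allocate the indirect cost pool to each activity centre in
    proportion to its cost driver, then divide total centre cost by
    service volume to get a unit cost price."""
    total_driver = sum(driver.values())
    return {c: (direct[c] + admin_total * driver[c] / total_driver) / volume[c]
            for c in direct}

prices = abc_unit_costs(
    direct={"lab": 100_000, "imaging": 200_000},  # hypothetical direct costs
    admin_total=60_000,                           # indirect pool to allocate
    driver={"lab": 1.0, "imaging": 2.0},          # e.g. floor-area shares
    volume={"lab": 1000, "imaging": 500})         # services delivered
```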

  10. Calculation of passive earth pressure of cohesive soil based on Culmann's method

    Directory of Open Access Journals (Sweden)

    Hai-feng Lu

    2011-03-01

    Full Text Available Based on the sliding plane hypothesis of Coulomb earth pressure theory, a new method for calculating the passive earth pressure of cohesive soil was constructed with Culmann's graphical construction. The influences of the cohesive force, adhesive force, and fill surface form were considered in this method. In order to obtain the passive earth pressure and the sliding plane angle, a program based on the sliding surface assumption was developed in the VB.NET programming language. The calculated results from this method were essentially the same as those from the Rankine theory and Coulomb theory formulas. This method is conceptually clear, and the corresponding formulas given in this paper are simple and convenient for application when the fill surface form is complex.
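The Rankine theory that the authors benchmark against has a closed form for cohesive backfill, which makes a compact reference check; the soil parameters below are illustrative.

```python
import math

def rankine_passive_pressure(gamma_kn_m3, depth_m, phi_deg, c_kpa):
    """Rankine passive earth pressure (kPa) at a given depth for cohesive
    soil: sigma_p = Kp*gamma*z + 2*c*sqrt(Kp), with Kp = tan^2(45 + phi/2)."""
    kp = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
    return kp * gamma_kn_m3 * depth_m + 2.0 * c_kpa * math.sqrt(kp)

# gamma = 18 kN/m^3, z = 5 m, phi = 30 deg (Kp = 3), c = 10 kPa
sigma_p = rankine_passive_pressure(18.0, 5.0, 30.0, 10.0)
```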

  11. A New Power Calculation Method for Single-Phase Grid-Connected Systems

    DEFF Research Database (Denmark)

    Yang, Yongheng; Blaabjerg, Frede

    2013-01-01

    A new method to calculate average active power and reactive power for single-phase systems is proposed in this paper. It can be used in different applications where the output active power and reactive power need to be calculated accurately and fast. For example, a grid-connected photovoltaic system in low voltage ride through operation mode requires a power feedback for the power control loop. Commonly, a Discrete Fourier Transform (DFT) based power calculation method can be adopted in such systems. However, the DFT method introduces at least a one-cycle time delay. The new power calculation method, which is based on the adaptive filtering technique, can achieve a faster response. The performance of the proposed method is verified by experiments and demonstrated in a 1 kW single-phase grid-connected system operating under different conditions. Experimental results show the effectiveness…
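The adaptive filter itself is not specified in the abstract, but the conventional one-cycle DFT baseline it improves upon can be sketched: extract the fundamental phasors of voltage and current from one cycle of samples and form the complex power. Sampling rate and signal values below are illustrative.

```python
import numpy as np

def pq_one_cycle_dft(v, i):
    """Average active and reactive power from exactly one fundamental
    cycle of voltage and current samples, via the DFT fundamental bin.
    This baseline inherently lags by one cycle."""
    n = len(v)
    basis = np.exp(-2j * np.pi * np.arange(n) / n)  # fundamental DFT basis
    v1 = 2.0 / n * np.sum(v * basis)                # complex voltage phasor
    i1 = 2.0 / n * np.sum(i * basis)                # complex current phasor
    s = 0.5 * v1 * np.conj(i1)                      # complex power S = P + jQ
    return s.real, s.imag

fs, f = 5000, 50
t = np.arange(fs // f) / fs
v = np.sqrt(2) * 230 * np.cos(2 * np.pi * f * t)
i = np.sqrt(2) * 10 * np.cos(2 * np.pi * f * t - np.pi / 6)  # lagging load
p, q = pq_one_cycle_dft(v, i)
```

For this 230 V / 10 A example with a 30° lag, the expected values are P = 2300·cos 30° ≈ 1991.9 W and Q = 2300·sin 30° = 1150 var.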

  12. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, M.J., E-mail: mark.j.jackson@awe.co.uk; Britton, R.; Davies, A.V.; McLarty, J.L.; Goodwin, M.

    2016-10-21

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single-detector counting geometries. The algorithm, which accounts for γ–γ, γ–X, γ–511 and γ–e⁻ coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry-standard gamma-spectrometry software package. Additional benefits, including the calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted. Highlights: • Versatile method to calculate coincidence summing factors for gamma-spectrometry analysis. • Based solely on ENSDF format nuclear data and detector efficiency characterisations. • Enables generation of a CSF library for any detector, geometry and radionuclide. • Improves measurement accuracy and reduces acquisition times required to meet MDA.

  13. Estimation of earthquake accelerations for the calculation of structures with different degrees of responsibility

    International Nuclear Information System (INIS)

    Dolgaya, A.A.; Uzdin, A.M.; Indeykin, A.V.

    1993-01-01

    The object of the investigation is the design amplitude of accelerograms used in evaluating the seismic stability of safety-critical structures, first and foremost NPS. The amplitude level is established depending on the degree of responsibility of the structure and on the prevailing period of earthquake action at the construction site. The investigation procedure is based on a statistical analysis of 310 earthquakes. At the first stage of the statistical data processing, the correlation dependence of both the mathematical expectation and the root-mean-square deviation of the peak acceleration of an earthquake on its prevailing period was established. At the second stage, the most suitable law for the distribution of acceleration about the mean was chosen. To determine the parameters of this distribution, the maximum conceivable acceleration, which must not be exceeded, was specified; the other parameters of the distribution were determined from the statistical data. At the third stage, the dependence of the design amplitude on the prevailing period of the seismic effect was established for different structures and equipment. The data obtained make it possible to recommend levels for the safe-shutdown earthquake (SSE) and operating basis earthquake (OBE) for objects of various responsibility categories when designing NPS. (author)

  14. Online plasma calculator

    Science.gov (United States)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a web server where a FastCGI program computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins; it also speeds up calculations over PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts. SI units were chosen over cgs units because of the gradual shift to SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the equations used to calculate the plasma parameters. The system is intended to be used by undergraduates taking plasma courses as well as by graduate students and researchers who need a quick reference calculation.
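Two of the standard parameters such a calculator returns, in SI units with temperature in electron-volts as described, are the electron plasma frequency and the Debye length; a sketch (not APOLLO's code):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity (F/m)
QE = 1.602176634e-19     # elementary charge (C)
ME = 9.1093837015e-31    # electron mass (kg)

def electron_plasma_frequency(n_e_m3):
    """Electron plasma frequency omega_pe = sqrt(n e^2 / (eps0 m_e)), rad/s."""
    return math.sqrt(n_e_m3 * QE**2 / (EPS0 * ME))

def debye_length(n_e_m3, t_e_ev):
    """Electron Debye length in metres; with T in eV, kT = T_eV * e, so
    lambda_D = sqrt(eps0 * T_eV / (n e))."""
    return math.sqrt(EPS0 * t_e_ev / (n_e_m3 * QE))

w_pe = electron_plasma_frequency(1e18)  # ~5.64e10 rad/s
l_d = debye_length(1e18, 10.0)          # ~23.5 micrometres
```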

  15. The internal radiation dose calculations based on Chinese mathematical phantom

    International Nuclear Information System (INIS)

    Wang Haiyan; Li Junli; Cheng Jianping; Fan Jiajin

    2006-01-01

    Internal radiation dose calculations based on Chinese data are becoming increasingly important with the development of nuclear medicine. The MIRD method, developed and refined by the Society of Nuclear Medicine (USA), is based on European and American mathematical phantoms and does not represent the Chinese population well. The transport of γ-rays in the Chinese mathematical phantom was simulated with the Monte Carlo method using codes such as MCNP4C. The specific absorbed fractions (Φ) for the Chinese phantom were calculated and a Chinese Φ database was created. The results were compared with the values recommended by ORNL. The method was validated by the agreement obtained when the target organ was the same as the source organ; the remaining differences are due to the different phantoms and the choice of physical model. (authors)

  16. Two-dimensional sensitivity calculation code: SENSETWO

    International Nuclear Information System (INIS)

    Yamauchi, Michinori; Nakayama, Mitsuo; Minami, Kazuyoshi; Seki, Yasushi; Iida, Hiromasa.

    1979-05-01

    A SENSETWO code for the calculation of cross-section sensitivities with a two-dimensional model has been developed on the basis of first-order perturbation theory. It uses forward neutron and/or gamma-ray fluxes and adjoint fluxes obtained by the two-dimensional discrete ordinates code TWOTRAN-II. The cross-section data, geometry, nuclide densities, response functions, etc. are transmitted to SENSETWO via the dump magnetic tape written during the TWOTRAN calculations, so the input required for SENSETWO is very simple. SENSETWO yields as printed output the cross-section sensitivities for each coarse-mesh zone and each energy group, as well as plotted output of the sensitivity profiles specified by the input. A special feature of the code is that it also calculates the reaction rate from the response function used as the adjoint source in the TWOTRAN adjoint calculation and the forward flux from the TWOTRAN forward calculation. (author)

  17. Phase-only stereoscopic hologram calculation based on Gerchberg–Saxton iterative algorithm

    International Nuclear Information System (INIS)

    Xia Xinyi; Xia Jun

    2016-01-01

    A phase-only computer-generated holography (CGH) calculation method for stereoscopic holography is proposed in this paper. The two-dimensional (2D) perspective projection views of the three-dimensional (3D) object are generated by computer graphics rendering techniques. Based on these views, a phase-only hologram is calculated using the Gerchberg–Saxton (GS) iterative algorithm. Compared with the non-iterative algorithm in conventional stereoscopic holography, the proposed method improves the holographic image quality, especially for phase-only holograms encoded from a complex distribution. Both simulation and optical experiment results demonstrate that the proposed method gives higher-quality reconstructions than the traditional method. (special topic)
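A minimal numpy sketch of the GS iteration for a Fourier-plane phase-only hologram (the paper's perspective-view pipeline is not reproduced; target pattern, array size, and iteration count are illustrative):

```python
import numpy as np

def gs_phase_hologram(target_amp, iterations=50, seed=0):
    """Gerchberg-Saxton iteration: alternately enforce unit amplitude in
    the hologram plane (phase-only constraint) and the target amplitude
    in the image plane."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(iterations):
        holo = np.exp(1j * phase)                      # phase-only hologram
        img = np.fft.fft2(holo)                        # propagate to image plane
        img = target_amp * np.exp(1j * np.angle(img))  # impose target amplitude
        phase = np.angle(np.fft.ifft2(img))            # back to hologram plane
    return phase

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0  # a bright square as the target image
recon = np.abs(np.fft.fft2(np.exp(1j * gs_phase_hologram(target))))
```

After a few tens of iterations the reconstructed amplitude pattern correlates strongly with the target even though the hologram carries phase only.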

  18. Calculations of radiation fields and monkey mid-head and mid-thorax responses in AFRRI-TRIGA reactor facility experiments

    International Nuclear Information System (INIS)

    Johnson, J.O.; Emmett, M.B.; Pace, J.V. III.

    1983-07-01

    A computational study was performed to characterize the radiation exposure fields and the mid-head and mid-thorax response functions for monkeys irradiated in the Armed Forces Radiobiological Research Institute (AFRRI) reactor exposure facilities. Discrete ordinates radiation transport calculations were performed in one-dimensional spherical geometry to obtain the energy spectra of the neutrons and gamma rays entering the room through various spectrum modifiers and reaching the irradiation position. Adjoint calculations performed in two-dimensional cylindrical geometry yielded the mid-head and mid-thorax response functions, which were then folded with flux spectra to obtain the monkey mid-head and mid-thorax doses (kerma rates) received at the irradiation position. The results of the study are presented both as graphs and as tables. The resulting spectral shapes compared favorably with previous work; however, the magnitudes of the fluxes did not. The differences in the magnitudes may be due to the normalization factor used

  19. Jet identification based on probability calculations using Bayes' theorem

    International Nuclear Information System (INIS)

    Jacobsson, C.; Joensson, L.; Lindgren, G.; Nyberg-Werther, M.

    1994-11-01

    The problem of identifying jets at LEP and HERA has been studied. Identification using jet energies and fragmentation properties was treated separately in order to investigate the degree of quark-gluon separation that can be achieved by either of these approaches. In the case of the fragmentation-based identification, a neural network was used, and a test of the dependence on the jet production process and the fragmentation model was done. Instead of working with the separation variables directly, these have been used to calculate probabilities of having a specific type of jet, according to Bayes' theorem. This offers a direct interpretation of the performance of the jet identification and provides a simple means of combining the results of the energy- and fragmentation-based identifications. (orig.)
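The probability-combination step is plain Bayes' theorem over the jet classes; a sketch with made-up likelihood and prior values (posteriors from independent variable sets, e.g. energy-based and fragmentation-based, can be combined by multiplying likelihoods before normalising):

```python
def jet_posteriors(likelihoods, priors):
    """P(type|x) = p(x|type) P(type) / sum_k p(x|k) P(k) for each jet type."""
    evidence = sum(likelihoods[t] * priors[t] for t in priors)
    return {t: likelihoods[t] * priors[t] / evidence for t in priors}

post = jet_posteriors({"quark": 0.8, "gluon": 0.3},  # p(x|type), made up
                      {"quark": 0.6, "gluon": 0.4})  # production priors
```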

  20. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate substantial audible noise, which can exceed 100 dB. Existing noise-level prediction methods are not sufficiently accurate. In this paper, a new noise-level prediction method is proposed, based on a frequency response function that considers both the electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations was also measured in the laboratory, and the results were compared with the prediction. The comparison proves that the noise prediction method is effective.

  1. Peak-Broadening of Floor Response Spectra for Base Isolated Nuclear Structures

    International Nuclear Information System (INIS)

    Ju, Heekun; Choun, Young-Sun; Kim, Min-Kyu

    2015-01-01

    In this paper, uncertainties in developing FRS are explained first. Then the FDRS of a fixed-base structure is computed using a conventional method as an example. Lastly, the FRS of a base-isolated structure is computed and the suitability of the current peak-broadening method is examined. Uncertainties in the material properties of a structure influence the FRS of fixed-base structures significantly, but their effect on the FRS of base-isolated structures is negligible. Nuclear structures should be designed to ensure the safety of the equipment and components mounted on their floors. However, coupled analysis of a structure and its components is complex, so equipment is analyzed separately using floor response spectra (FRS). FRS calculated from dynamic analysis of a structural model should be modified to create floor design response spectra (FDRS), the input for the seismic design of equipment. For nuclear structures, smoothing and broadening the peaks of the FRS is required to account for uncertainties in the material properties of structures and soil, modeling techniques, and other factors. The peak-broadening method proposed for fixed-base structures may not be appropriate for base-isolated structures because of additional uncertainties in the properties of the isolation bearings. For base-isolated structures, the mechanical properties of the isolator play a dominant role in the change of the FRS. As base-isolated nuclear plants should meet the ASCE provisions, the uncertainty in the isolation system would be around 10%. For a base-isolated three-story beam model with a 2.5-s isolation period, a broadening ratio of 6.9% was enough for development of the FDRS at the required variation condition, and for models with various isolation periods a broadening ratio of less than 10% was sufficient.

  2. Design of software for shielding calculation in radiodiagnostics based on various standards

    International Nuclear Information System (INIS)

    Falero, B.; Bueno, P.; Chaves, M. A.; Ordiales, J. M.; Villafana, O.; Gonzalez, M. J.

    2013-01-01

    The aim of this study was to develop a software application that performs shielding calculations for radiology rooms depending on the type of equipment. The calculation is done by the user selecting among the method proposed in Guide 5.11, those of Reports 144 and 147, and the methodology given by the Portuguese Health Ministry. (Author)

  3. A simple method for calculating power based on a prior trial.

    NARCIS (Netherlands)

    Borm, G.F.; Bloem, B.R.; Munneke, M.; Teerenstra, S.

    2010-01-01

    OBJECTIVE: When an investigator wants to base the power of a planned clinical trial on the outcome of another trial, the latter study may not have been reported in sufficient detail to allow this. For example, when the outcome is a change from baseline, the power calculation requires the standard
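Although the abstract is truncated, the standard normal-approximation sample-size formula it alludes to (two-sample comparison of means, e.g. of a change from baseline) can be sketched; all input values are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Sample size per group to detect a mean difference `delta` given
    standard deviation `sd`: n = 2 * ((z_{1-a/2} + z_{power}) * sd / delta)^2."""
    z = NormalDist().inv_cdf
    n = 2.0 * ((z(1.0 - alpha / 2.0) + z(power)) * sd / delta) ** 2
    return math.ceil(n)

# Classic textbook case: effect of half a standard deviation, 80% power
n = n_per_group(delta=0.5, sd=1.0)
```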

  4. Oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor: Sensing ability, TD-DFT calculations and its application as an efficient solid state sensor

    Science.gov (United States)

    Lan, Linxin; Li, Tianduo; Wei, Tao; Pang, He; Sun, Tao; Wang, Enhua; Liu, Haixia; Niu, Qingfen

    2018-03-01

    An oligothiophene-based colorimetric and ratiometric fluorescence dual-channel cyanide chemosensor, 3T-2CN, is reported. Sensor 3T-2CN showed both naked-eye recognition and a ratiometric fluorescence response for CN- with excellent selectivity and high sensitivity. The sensing mechanism, based on the nucleophilic attack of CN- on the vinyl C=C bond, has been confirmed by optical measurements, 1H NMR titration, and FT-IR spectra as well as DFT/TD-DFT calculations. Moreover, the detection limit was calculated to be 0.19 μM, which is much lower than the maximum permitted concentration in drinking water (1.9 μM). Importantly, test strips (filter paper and TLC plates) containing 3T-2CN were fabricated, which can act as a practical and efficient solid-state optical sensor for CN- in field measurements.

  5. Waste Package Lifting Calculation

    International Nuclear Information System (INIS)

    H. Marr

    2000-01-01

    The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, naval waste package, 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel)--short waste package, and 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, calculations, is used to develop and document this calculation

  6. GPU based acceleration of first principles calculation

    International Nuclear Information System (INIS)

    Tomono, H; Tsumuraya, K; Aoki, M; Iitaka, T

    2010-01-01

    We present Graphics Processing Unit (GPU) accelerated simulations of first-principles electronic structure calculations. The FFT, which is the most time-consuming part, is accelerated by about a factor of 10. As a result, the total computation time of a first-principles calculation is reduced to 15 percent of that of the CPU.

  7. Effects of B site doping on electronic structures of InNbO4 based on hybrid density functional calculations

    Science.gov (United States)

    Lu, M. F.; Zhou, C. P.; Li, Q. Q.; Zhang, C. L.; Shi, H. F.

    2018-01-01

    In order to improve the photocatalytic activity under visible-light irradiation, we performed first-principles calculations based on density functional theory (DFT) on the electronic structures of B-site transition-metal-doped InNbO4. The results indicated that complete hybridization of the Nb 4d states and some Ti 3d states forms the new conduction band of Ti-doped InNbO4, barely changing the position of the band edge. For Cr doping, some localized Cr 3d states are introduced into the band gap; nonetheless, the potential of these localized levels is too positive to produce a visible-light response. For Cu doping, the band gap is almost the same as that of InNbO4, while some localized Cu 3d states appear above the top of the valence band (VB). The introduction of localized energy levels helps electrons migrate from the VB to the conduction band (CB) by absorbing lower-energy photons, realizing a visible-light response.

  8. A density gradient theory based method for surface tension calculations

    DEFF Research Database (Denmark)

    Liang, Xiaodong; Michelsen, Michael Locht; Kontogeorgis, Georgios

    2016-01-01

    The density gradient theory has become a widely used framework for calculating surface tension, within which the same equation of state is used for the interface and the bulk phases, because it is a theoretically sound, consistent and computationally affordable approach. Based on the observation that the optimal density path from the geometric-mean density gradient theory passes the saddle point of the tangent plane distance to the bulk phases, we propose to estimate surface tension with an approximate density path profile that goes through this saddle point. The linear density gradient theory, which assumes linearly distributed densities between the two bulk phases, has also been investigated. Numerical problems do not occur with these density path profiles. These two approximation methods, together with the full density gradient theory, have been used to calculate the surface tension of various…

  9. Methods for U.S. shielding calculations: applications to FFTF and CRBR designs

    International Nuclear Information System (INIS)

    Engle, W.W. Jr.; Mynatt, F.R.; Disney, R.K.

    1978-01-01

    The primary components of the U.S. reactor shielding methodology consist of: (1) computer code systems based on discrete ordinates or Monte Carlo radiation transport calculational methods; (2) a data base of neutron and gamma-ray interaction and gamma-ray-production cross sections used as input to the codes; (3) a capability for processing the cross sections into multigroup or point-energy formats as required by the codes; (4) large-scale integral shielding experiments designed to test cross-section data or techniques utilized in the calculations; and (5) a "sensitivity" analysis capability that can identify the most important interactions in a transport calculation and assign uncertainties to the calculated result based on uncertainties in all of the input data. The required accuracy of the methodology ranges from within 5 to 10% for responses at locations near the core to within a factor of 2 for responses at distant locations. Under these criteria, the methodology has proved adequate for in-vessel LMFBR calculations of neutron transport through deep sodium and thick iron and stainless steel shields, of neutron streaming through lower axial coolant channels and primary pipe chaseways, and of the effects of fuel stored within the reactor vessel. For ex-vessel LMFBR problems, the methodology requires considerable improvement; the areas of concern include neutron streaming through heating and ventilation ducts, through the cavity surrounding the reactor vessel, and through gaps around rotating plugs in the reactor head, as well as gamma-ray streaming through plant shield penetrations.

  10. Calculation of the Energy Dependence of Dosimeter Response to Ionizing Photons

    DEFF Research Database (Denmark)

    Miller, Arne; McLaughlin, W. L.

    1982-01-01

    Using a program in BASIC applied to a desk-top calculator, simplified calculations provide approximate energy dependence correction factors of dosimeter readings of absorbed dose according to Bragg-Gray cavity theories. Burlin's general cavity theory is applied in the present calculations, and ce...

  11. FADDEEV: A fortran code for the calculation of the frequency response matrix of multiple-input, multiple-output dynamic systems

    International Nuclear Information System (INIS)

    Owens, D.H.

    1972-06-01

    The KDF9/EGDON programme FADDEEV has been written to investigate a technique for calculating the matrix of frequency responses G(jω) describing the response of the output vector y of the multivariable differential/algebraic system S to the drive of the system input vector u. S: Eẋ = Ax + Bu, y = Cx, with G(jω) = C(jωE − A)⁻¹B. The programme uses an algorithm due to Faddeev and has been written with emphasis upon: (a) simplicity of programme structure and computational technique, which should enable a user to find his way through the programme fairly easily and hence facilitate its manipulation as a subroutine in a larger code; (b) rapid computation, particularly for systems with a fairly large number of inputs and outputs that require evaluation of the frequency responses at a large number of frequencies. Transport or time delays must be converted by the user to Padé or Bode approximations prior to input. Conditions under which the algorithm fails to give accurate results are identified, and methods for increasing the accuracy of the calculations are discussed. The conditions for accurate results using FADDEEV indicate that its application is specialized. (author)
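The quantity FADDEEV computes can be sketched directly in numpy, using a linear solve per frequency rather than Faddeev's algorithm:

```python
import numpy as np

def frequency_response(A, B, C, E, omegas):
    """Evaluate G(jw) = C (jw E - A)^{-1} B at each frequency in omegas."""
    return [C @ np.linalg.solve(1j * w * E - A, B) for w in omegas]

# SISO check: x' = -x + u, y = x  gives  G(jw) = 1 / (1 + jw)
A = np.array([[-1.0]])
B = np.array([[1.0]])
C = np.array([[1.0]])
E = np.eye(1)
G0, G1 = frequency_response(A, B, C, E, [0.0, 1.0])  # at w = 0 and w = 1
```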

  12. Continuous energy Monte Carlo method based homogenization multi-group constants calculation

    International Nuclear Information System (INIS)

    Li Mancang; Wang Kan; Yao Dong

    2012-01-01

    The efficiency of the standard two-step reactor physics calculation relies on the accuracy of the multi-group constants from the assembly-level homogenization process. In contrast to traditional deterministic methods, generating the homogenization cross sections via the Monte Carlo method overcomes the difficulties in geometry and treats energy in continuum, thus providing more accurate parameters. Besides, the same code and data bank can be used for a wide range of applications, demonstrating the versatility of using Monte Carlo codes for homogenization. As the first stage in realizing Monte Carlo based lattice homogenization, the track length scheme is used as the foundation of cross-section generation, which is straightforward. The scattering matrix and Legendre components, however, require special techniques, and the Scattering Event method was proposed to solve this problem. There are no continuous-energy counterparts in the Monte Carlo calculation for neutron diffusion coefficients, so P1 cross sections were used to calculate the diffusion coefficients for diffusion reactor simulator codes. BN theory is applied to take the leakage effect into account when an infinite lattice of identical symmetric motifs is assumed. The MCMC code was developed and applied to four assembly configurations to assess its accuracy and applicability. At the core level, a PWR prototype core is examined. The results show that the Monte Carlo based multi-group constants behave well on average. The method could be applied to nuclear reactor cores with complicated configurations to gain higher accuracy. (authors)

  13. Calculations of helium separation via uniform pores of stanene-based membranes

    Directory of Open Access Journals (Sweden)

    Guoping Gao

    2015-12-01

    Full Text Available The development of low energy cost membranes to separate He from noble gas mixtures is highly desired. In this work, we studied He purification using recently experimentally realized two-dimensional stanene (2D Sn) and decorated 2D Sn (SnH and SnF) honeycomb lattices by density functional theory calculations. To increase the permeability of noble gases through pristine 2D Sn at room temperature (298 K), two practical strategies, i.e., the application of strain and functionalization, are proposed. With their high concentration of large pores, 2D Sn-based membrane materials demonstrate excellent helium purification and can serve as superior membranes compared with traditionally used porous materials. In addition, the separation performance of these 2D Sn-based membrane materials can be significantly tuned by the application of strain to optimize the He purification properties, taking both diffusion and selectivity into account. Our results are the first calculations of He separation in a defect-free honeycomb lattice, highlighting interesting new materials for helium separation for future experimental validation.

  14. Calculation of Spectra of Solids:

    DEFF Research Database (Denmark)

    Lindgård, Per-Anker

    1975-01-01

    The Gilat-Raubenheimer method, simplified to tetrahedron division, is used to calculate the real and imaginary parts of the dynamical response function for electrons. A frequency expansion for the real part is discussed. The Lindhard function is calculated as a test of numerical accuracy...
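    The Lindhard function used above as an accuracy benchmark has a closed form in the static limit, which makes it a convenient reference for any numerical response-function scheme. A sketch of that textbook expression (not the Gilat-Raubenheimer implementation itself):

```python
import math

def lindhard_static(x):
    """Static Lindhard function F(x) with x = q / (2 kF):
    F(0) = 1, F(1) = 1/2, and F -> 0 as x -> infinity."""
    if x == 0.0:
        return 1.0
    if x == 1.0:
        return 0.5  # the log term vanishes against its prefactor's zero
    return 0.5 + (1.0 - x * x) / (4.0 * x) * math.log(abs((1.0 + x) / (1.0 - x)))
```

A tetrahedron-integration code can be checked by comparing its static output against this curve point by point.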

  15. Evaluation of students' knowledge about paediatric dosage calculations.

    Science.gov (United States)

    Özyazıcıoğlu, Nurcan; Aydın, Ayla İrem; Sürenler, Semra; Çinar, Hava Gökdere; Yılmaz, Dilek; Arkan, Burcu; Tunç, Gülseren Çıtak

    2018-01-01

    Medication errors are common and may jeopardize patient safety. As paediatric dosages are calculated from the child's age and weight, the risk of error in dosage calculations increases. In paediatric patients, an overdose prescribed without regard to the child's weight, age and clinical picture may lead to excessive toxicity and mortality, while low doses may delay treatment. This study was carried out to evaluate the knowledge of nursing students about paediatric dosage calculations. This retrospective study covers a population consisting of all 148 third-year students on the bachelor's degree programme in May 2015. Drug dose calculation questions from exam papers, comprising 3 open-ended dosage calculation problems addressing 5 variables, were distributed to the students and their responses were evaluated by the researchers. In the evaluation of the data, frequencies and percentage distributions were calculated and Spearman correlation analysis was applied. The exam question on dosage calculation based on the child's age, which is the most common method in paediatrics and which ensures correct dosages and drug dilution, was answered correctly by 87.1% of the students, while 9.5% answered it incorrectly and 3.4% left it blank. 69.6% of the students succeeded in finding the safe dose range, and 79.1% in finding the right ratio/proportion. 65.5% of the answers regarding ml/dzy calculation were correct. Moreover, the students' four-operation skills were assessed, and 68.2% of the students were determined to have found the correct answer. When the relations among the medication questions were examined, a significant correlation was determined between them. It is seen that in dosage calculations the students failed mostly in calculating ml/dzy (decimal). This result means that, as dosage calculations are based on decimal values, calculations may be erroneous by a factor of ten when the decimal point is placed wrongly.
Moreover, it
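    The calculations the students were tested on are simple in form, which is why decimal-point slips dominate the errors. A hedged sketch of the basic operations (the numbers and limits are illustrative placeholders, not clinical guidance):

```python
def weight_based_dose_mg(weight_kg, mg_per_kg):
    """Single dose from body weight; the mg/kg figure would come from a
    formulary (illustrative only)."""
    return weight_kg * mg_per_kg

def within_safe_range(dose_mg, low_mg, high_mg):
    """Check a computed dose against the drug's safe dose range."""
    return low_mg <= dose_mg <= high_mg

def infusion_rate_ml_per_h(volume_ml, duration_h):
    """Rate for an infusion delivered over the given time."""
    return volume_ml / duration_h
```

Note how the safe-range check catches exactly the tenfold error the study warns about: a misplaced decimal point multiplies the computed dose by ten and pushes it out of range.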

  16. Calculation Scheme Based on a Weighted Primitive: Application to Image Processing Transforms

    Directory of Open Access Journals (Sweden)

    Gregorio de Miguel Casado

    2007-01-01

    Full Text Available This paper presents a method to improve the calculation of functions which demand a great amount of computing resources. The method is based on the choice of a weighted primitive which enables the calculation of function values under the scope of a recursive operation. At the design level, the method proves suitable for developing a processor which achieves a satisfying trade-off between time delay, area costs, and stability. The method is particularly suitable for the mathematical transforms used in signal processing applications. A generic calculation scheme is developed for the discrete fast Fourier transform (DFT) and then applied to other integral transforms such as the discrete Hartley transform (DHT), the discrete cosine transform (DCT), and the discrete sine transform (DST). Some comparisons with other well-known proposals are also provided.

  17. Data base structure and Management for Automatic Calculation of 210Pb Dating Methods Applying Different Models

    International Nuclear Information System (INIS)

    Gasco, C.; Anton, M. P.; Ampudia, J.

    2003-01-01

    The introduction of macros in the calculation sheets allows the automatic application of various dating models using unsupported 210Pb data from a data base. The calculation books that contain the models have been modified to permit the implementation of these macros. The Marine and Aquatic Radioecology Group of CIEMAT (MARG) will be involved in new European projects, thus new models have been developed. This report contains a detailed description of: a) the newly implemented macros, b) the design of a dating menu in the calculation sheet and c) the organization and structure of the data base. (Author) 4 refs

  18. A RTS-based method for direct and consistent calculating intermittent peak cooling loads

    International Nuclear Information System (INIS)

    Chen Tingyao; Cui, Mingxian

    2010-01-01

    The RTS method currently recommended by the ASHRAE Handbook is based on continuous operation. However, most air-conditioning systems in commercial buildings, if not all, are operated intermittently in practice. The application of the current RTS method to intermittent air-conditioning in nonresidential buildings could result in largely underestimated design cooling loads and inconsistently sized air-conditioning systems. Improperly sized systems could seriously deteriorate the performance of system operation and management. Therefore, a new method based on both the current RTS method and the principles of heat transfer has been developed. The first part of the new method is the same as the current RTS method in principle, but its calculation procedure is simplified by the derived equations in closed form. The technical data available in the current RTS method can be used to compute zone responses to a change in space air temperature, so no effort is needed to regenerate new technical data. Both the overall RTS coefficients and the hourly cooling loads computed in the first part are used to estimate the additional peak cooling load due to a change from continuous to intermittent operation. Only one more step after the current RTS method is needed to determine the intermittent peak cooling load. The new RTS-based method has been validated against EnergyPlus simulations. The root mean square deviation (RMSD) between the relative additional peak cooling loads (RAPCLs) computed by the two methods is 1.8%. The deviation of the RAPCL varies from -3.0% to 5.0%, and the mean deviation is 1.35%.

  19. Calculation of wind turbine aeroelastic behaviour. The Garrad Hassan approach

    Energy Technology Data Exchange (ETDEWEB)

    Quarton, D C [Garrad Hassan and Partners Ltd., Bristol (United Kingdom)

    1996-09-01

    The Garrad Hassan approach to the prediction of wind turbine loading and response has been developed over the last decade. The goal of this development has been to produce calculation methods that contain realistic representation of the wind, include sensible aerodynamic and dynamic models of the turbine and can be used to predict fatigue and extreme loads for design purposes. The Garrad Hassan calculation method is based on a suite of four key computer programs: WIND3D for generation of the turbulent wind field; EIGEN for modal analysis of the rotor and support structure; BLADED for time domain calculation of the structural loads; and SIGNAL for post-processing of the BLADED predictions. The interaction of these computer programs is illustrated. A description of the main elements of the calculation method will be presented. (au)

  20. An Analysis on the Calculation Efficiency of the Responses Caused by the Biased Adjoint Fluxes in Hybrid Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Khuat, Quang Huy; Kim, Song Hyun; Kim, Do Hyun; Shin, Chang Ho

    2015-01-01

    This technique is known as the Consistent Adjoint Driven Importance Sampling (CADIS) method and is implemented in the SCALE code system. In the CADIS method, the adjoint transport equation has to be solved to determine deterministic importance functions. With the CADIS method, it was noted that a biased adjoint flux estimated by deterministic methods can affect the calculation efficiency and error. The biases of the adjoint function are caused by the methodology, the calculation strategy, the tolerance of the deterministic solution and inaccurate multi-group cross section libraries. In this paper, the influence of biased adjoint functions on Monte Carlo computational efficiency is analyzed. A method to estimate the calculation efficiency was proposed for applying biased adjoint fluxes in the CADIS approach. For a benchmark problem, the responses and FOMs were evaluated with the SCALE code system as the adjoint fluxes were applied. The results show that biased adjoint fluxes significantly affect the calculation efficiency
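    The FOM referred to above is the standard Monte Carlo figure of merit, FOM = 1/(R²T), where R is the relative error of the tally and T the computing time; a biased adjoint flux shows up as a lower FOM for the same target error. A minimal sketch:

```python
def figure_of_merit(rel_error, cpu_time_s):
    """Standard Monte Carlo figure of merit, FOM = 1 / (R^2 * T).
    Since R^2 ~ 1/N and T ~ N, the FOM is roughly constant for a given
    variance-reduction setup, so it compares setups fairly."""
    return 1.0 / (rel_error ** 2 * cpu_time_s)
```

For example, if a run with well-converged importance functions reaches R = 1% in 100 s while a hypothetical run with a biased adjoint needs 400 s for the same error, the biased setup has a fourfold lower FOM.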

  1. Three-phase short circuit calculation method based on pre-computed surface for doubly fed induction generator

    Science.gov (United States)

    Ma, J.; Liu, Q.

    2018-02-01

    This paper presents an improved short circuit calculation method, based on a pre-computed surface, to determine the short circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short circuit current injected into the power grid by a DFIG is determined by its low voltage ride through (LVRT) control and protection under a grid fault. However, due to this complexity, existing methods struggle to calculate the short circuit current of a DFIG in engineering practice. A short circuit calculation method based on a pre-computed surface was therefore proposed by developing the surface of short circuit current as a function of the calculating impedance and the open circuit voltage. The short circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, the pre-computed surfaces of short circuit current at different times were established, and the procedure for DFIG short circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
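    Once the surfaces are pre-computed, an on-line short-circuit evaluation reduces to a table lookup with interpolation in the two coordinates (calculating impedance and open-circuit voltage). A generic bilinear-interpolation sketch (the grid and values are placeholders, not the paper's data):

```python
import bisect

def surface_lookup(xs, ys, table, x, y):
    """Bilinear interpolation on a pre-computed surface with
    table[i][j] = I_sc(xs[i], ys[j]); xs and ys must be sorted ascending."""
    # Clamp to the last interior cell so edge queries stay in range.
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j]
            + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1]
            + tx * ty * table[i + 1][j + 1])
```

One such table would be stored per evaluation time (e.g. before and after crowbar activation), matching the paper's time-indexed surfaces.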

  2. Calculation of the Instream Ecological Flow of the Wei River Based on Hydrological Variation

    Directory of Open Access Journals (Sweden)

    Shengzhi Huang

    2014-01-01

    Full Text Available It is of great significance for watershed management departments to reasonably allocate water resources and ensure the sustainable development of river ecosystems; a central issue is to accurately calculate the instream ecological flow. In order to do so precisely, flow variation is taken into account in this study. The heuristic segmentation algorithm, which is suitable for detecting the mutation points of flow series, is employed to identify the change points. In addition, based on the law of tolerance and ecological adaptation theory, the maximum instream ecological flow is calculated as the highest-frequency monthly flow based on the GEV distribution, which is well suited to the healthy development of river ecosystems. Furthermore, in order to guarantee the sustainable development of river ecosystems under adverse circumstances, the minimum instream ecological flow is calculated by a modified Tennant method, improved by replacing the average flow with the highest-frequency flow. Since the modified Tennant method better reflects the law of the flow, it has physical significance, and the calculation results are more reasonable.
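    The modified Tennant method described above takes the usual Tennant percentage of the highest-frequency (modal) flow rather than of the mean flow. A rough sketch, where the histogram bin width and the 10% level are assumptions for illustration:

```python
from collections import Counter

def modified_tennant_min_flow(monthly_flows, pct=0.10, bin_width=5.0):
    """Sketch of the modified Tennant idea: minimum ecological flow as a
    percentage of the modal (highest-frequency) flow instead of the mean.
    Bin width and the 10% level are illustrative assumptions."""
    bins = Counter(int(q // bin_width) for q in monthly_flows)
    modal_bin = max(bins, key=lambda b: (bins[b], -b))  # most frequent bin
    modal_flow = (modal_bin + 0.5) * bin_width          # bin centre
    return pct * modal_flow
```

For a skewed flow series the modal flow sits well below the mean, so the resulting minimum ecological flow is not inflated by rare flood months.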

  3. New flux based dose–response relationships for ozone for European forest tree species

    International Nuclear Information System (INIS)

    Büker, P.; Feng, Z.; Uddling, J.; Briolat, A.; Alonso, R.; Braun, S.; Elvira, S.; Gerosa, G.; Karlsson, P.E.; Le Thiec, D.

    2015-01-01

    To derive O3 dose–response relationships (DRR) for five European forest tree species and for broadleaf deciduous and needleleaf tree plant functional types (PFTs), phytotoxic O3 doses (PODy) were related to biomass reductions. PODy was calculated using a stomatal flux model with a range of cut-off thresholds (y) indicative of varying detoxification capacities. Linear regression analysis showed that the DRR for PFTs and individual tree species differed in their robustness. A simplified parameterisation of the flux model was tested and showed that, for most non-Mediterranean tree species, this simplified model led to similarly robust DRR as a species- and climate-region-specific parameterisation. Experimentally induced soil water stress was not found to substantially reduce PODy, mainly due to the short duration of the soil water stress periods. This study validates the stomatal O3 flux concept and represents a step forward in predicting O3 damage to forests in a spatially and temporally varying climate. - Highlights: • We present new ozone flux based dose–response relationships for European trees. • The model-based study accounted for the soil water effect on stomatal flux. • Different statistically derived ozone flux thresholds were applied. • Climate-region-specific parameterisation often outperformed the simplified parameterisation. • The findings could help redefine critical levels for ozone effects on trees. - New stomatal flux based ozone dose–response relationships for tree species are derived for the regional risk assessment of ozone effects on European forest ecosystems.

  4. Damage identification in beams by a response surface based technique

    Directory of Open Access Journals (Sweden)

    Teidj S.

    2014-01-01

    Full Text Available In this work, identification of damage in uniform homogeneous metallic beams was considered through the propagation of non-dispersive elastic torsional waves. The proposed damage detection procedure consisted of the following sequence. Given a localized torque excitation in the form of a short half-sine pulse, the first step was calculating the transient solution of the resulting torsional wave. This torque could be generated in practice by means of asymmetric laser irradiation of the beam surface. Then, a localized defect, assumed to be characterized by an abrupt reduction of the beam section area with a given height and extent, was placed at a known location of the beam. Next, the response in terms of transverse section rotation rate was obtained for a point situated after the defect, where the sensor was positioned. The latter could in practice use the concept of laser vibrometry. A parametric study was then conducted using a full factorial design-of-experiments table and numerical simulations based on a finite difference characteristic scheme. This enabled the derivation of a response surface model that was shown to represent adequately the response of the system in terms of the following factors: defect extent and severity. The final step was solving the inverse problem in order to identify the defect characteristics from the measurements.

  5. Calculator: A Hardware Design, Math and Software Programming Project Base Learning

    Directory of Open Access Journals (Sweden)

    F. Criado

    2015-03-01

    Full Text Available This paper presents the implementation by students of a complex calculator in hardware. This project meets hardware design goals and also highly motivates them to use competences learned in other subjects. The learning process associated with system design is hard enough because the students have to deal with parallel execution, signal delay, synchronization, and so on. Therefore, to strengthen the knowledge of hardware design, a methodology such as project based learning (PBL) is proposed. Moreover, it is also used to reinforce cross subjects like math and software programming. This methodology creates a course dynamic that is closer to a professional environment, where they will work with software and mathematics to solve the hardware design problems. The students design the functionality of the calculator from scratch. They make the decisions about the math operations it is able to solve, the operand format, and how to enter a complex equation into the calculator. This increases the students' intrinsic motivation. In addition, since the choices may have consequences for the reliability of the calculator, students are encouraged to program in software the decisions about how to implement the selected mathematical algorithm. Although math and hardware design are two tough subjects for students, the perception that they get at the end of the course is quite positive.

  6. General Method for Calculating the Response and Noise Spectra of Active Fabry-Perot Semiconductor Waveguides With External Optical Injection

    DEFF Research Database (Denmark)

    Blaaberg, Søren; Mørk, Jesper

    2009-01-01

    We present a theoretical method for calculating small-signal modulation responses and noise spectra of active Fabry-Perot semiconductor waveguides with external light injection. Small-signal responses due to either a modulation of the pump current or due to an optical amplitude or phase modulation... amplifiers and an injection-locked laser. We also demonstrate the applicability of the method to analyze slow and fast light effects in semiconductor waveguides. Finite reflectivities of the facets are found to influence the phase changes of the injected microwave-modulated light.

  7. Development of a power-period calculation unit for nuclear reactor Control

    International Nuclear Information System (INIS)

    Martin, J.

    1966-10-01

    The apparatus studied is a digital calculating assembly which computes and displays numerically the period and power of a nuclear reactor during operation, from start-up to nominal power. The pulses from a fission chamber are analyzed continuously, in real time. A small number of elements is required because of the systematic use of a calculation technique based on determining a base-2 logarithm by linear approximation. The accuracy obtained for the period is of the order of 14%; the response time is of the order of the calculated period value. An approximate value of the power (30%) is given at each calculation cycle, together with the power thresholds required for the control. (author) [fr
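    The base-2 logarithm by linear approximation works because, after normalising the argument into [1, 2), log2 is nearly linear there; the residual error of the linear segment is bounded by about 0.086, consistent with an accuracy of the order of a few tens of percent for the period. A software sketch of the idea:

```python
def log2_linear(x):
    """Base-2 logarithm via normalisation plus a linear segment:
    write x = m * 2^e with m in [1, 2), then log2(x) ~= e + (m - 1).
    The worst-case error of the linear segment is about 0.086."""
    assert x > 0.0
    e = 0
    while x >= 2.0:   # shift down: in hardware, a right shift per step
        x /= 2.0
        e += 1
    while x < 1.0:    # shift up: a left shift per step
        x *= 2.0
        e -= 1
    return e + (x - 1.0)
```

In the actual unit the normalisation is a binary shift count, so the whole computation needs only shifts, adds and a comparator, which is why so few elements suffice.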

  8. A drainage data-based calculation method for coalbed permeability

    International Nuclear Information System (INIS)

    Lai, Feng-peng; Li, Zhi-ping; Fu, Ying-kun; Yang, Zhi-hao

    2013-01-01

    This paper establishes a drainage data-based calculation method for coalbed permeability. The method combines material balance and production equations. We use a material balance equation to derive the average pressure of the coalbed during the production process. The dimensionless water production index is introduced into the production equation for the water production stage. In the subsequent stage, which produces both gas and water, the gas-water production ratio is introduced to eliminate the effect of flush-flow radius, skin factor, and other uncertain factors in the calculation of coalbed methane permeability. The relationship between permeability and surface cumulative liquid production can be described by derivation as a single-variable cubic equation. The trend for ten wells in the southern Qinshui coalbed methane field shows that the permeability initially declines and then increases. The results show an exponential relationship between permeability and cumulative water production. The relationship between permeability and cumulative gas production is represented by a linear curve, and that between permeability and surface cumulative liquid production by a cubic polynomial curve. The regression result for the permeability and surface cumulative liquid production agrees with the theoretical mathematical relationship. (paper)

  9. Evaluation of signal energy calculation methods for a light-sharing SiPM-based PET detector

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qingyang [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China); Beijing Engineering Research Center of Industrial Spectrum Imaging, University of Science and Technology Beijing, Beijing 100083 (China); Ma, Tianyu; Xu, Tianpeng; Liu, Yaqiang; Wang, Shi [Department of Engineering Physics, Tsinghua University, Beijing 100084 (China); Gu, Yu, E-mail: guyu@ustb.edu.cn [School of Automation and Electrical Engineering, University of Science & Technology Beijing, Beijing 100083 (China)

    2017-03-11

    Signals of a light-sharing positron emission tomography (PET) detector are commonly multiplexed into three analog pulses (E, X, and Y) and then digitally sampled. From this procedure, the signal energy, which is critical to detector performance, is obtained. In this paper, different signal energy calculation strategies for a self-developed SiPM-based PET detector, including pulse height and different integration methods, are evaluated in terms of energy resolution and the spread of the crystal response in the flood histogram using a root-mean-squared (RMS) index. Results show that the integration methods outperform the pulse height. Integration using the maximum-derivative value of pulse E as the landmark point and 28 integrated points (448 ns) has the best performance among the evaluated methods for our detector. Detector performance in terms of energy and position is improved with this integration method. The proposed methodology is expected to be applicable to other light-sharing PET detectors.
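    The best-performing strategy in the study integrates a fixed number of samples starting from the pulse's maximum-derivative point. A simplified sketch of that landmark-plus-integration idea (the sample values are synthetic; 28 samples over 448 ns implies a 16 ns sampling period):

```python
def pulse_energy(samples, n_int=28):
    """Signal energy by integrating n_int samples starting at the pulse's
    maximum-derivative point, the landmark found best in the study."""
    derivs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    start = derivs.index(max(derivs))  # index of the steepest rising edge
    return sum(samples[start:start + n_int])
```

By contrast, the pulse-height estimate would be simply max(samples), which uses a single noisy sample and is why it performed worse in the evaluation.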

  10. Microcontroller-based network for meteorological sensing and weather forecast calculations

    Directory of Open Access Journals (Sweden)

    A. Vas

    2012-06-01

    Full Text Available Weather forecasting needs a lot of computing power. It is generally accomplished by using supercomputers, which are expensive to rent and to maintain. In addition, weather services also have to maintain radars and balloons and pay for worldwide weather data measured by stations and satellites. Weather forecasting computations usually consist of solving differential equations based on the measured parameters. To do that, the computer uses the data of close and distant neighbor points. Accordingly, if small weather stations, which are capable of making measurements, calculations and communication, are connected through the Internet, then they can be used to run weather forecasting calculations as a supercomputer does. No central server is needed to achieve this, because the network operates as a distributed system. We chose Microchip's PIC18 microcontroller (μC) platform for the hardware implementation, and the embedded software uses the TCP/IP Stack v5.41 provided by Microchip.

  11. Optimization of the beam shaping assembly in the D-D neutron generators-based BNCT using the response matrix method.

    Science.gov (United States)

    Kasesaz, Y; Khalafi, H; Rahmani, F

    2013-12-01

    Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm Fluental as a moderator, 30 cm Pb as a reflector, 2 mm 6Li as a thermal neutron filter and 2 mm Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method is suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom to build the Response Matrix. Results show good agreement between the direct calculation and the RM method. Copyright © 2013 Elsevier Ltd. All rights reserved.
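    The RM idea replaces a full in-phantom transport run per candidate BSA with a single pre-computed matrix that maps the exit-beam spectrum to in-phantom dose components, so each candidate is scored with one matrix-vector product. A toy sketch with an invented 3-group matrix (the numbers are placeholders, not the paper's matrix):

```python
import numpy as np

# Invented 3-group response matrix: rows are in-phantom dose components
# (e.g. thermal, fast, gamma dose), columns are exit-beam energy groups.
R = np.array([[2.0, 0.5, 0.1],
              [0.1, 1.5, 0.8],
              [0.3, 0.2, 0.4]])

def in_phantom_doses(exit_spectrum):
    """One matrix-vector product per candidate BSA configuration,
    instead of a full in-phantom Monte Carlo transport run."""
    return R @ np.asarray(exit_spectrum, dtype=float)
```

The matrix itself is built once, by transporting one unit source per energy group through the phantom and tallying each dose component.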

  12. Model-based flaw localization from perturbations in the dynamic response of complex mechanical structures

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H

    2009-02-24

    A new method of locating structural damage using measured differences in vibrational response and a numerical model of the undamaged structure has been presented. This method is particularly suited to complex structures with little or no symmetry. In a prior study the method successfully located simulated damage from measurements of the vibrational response of two simple structures. Here we demonstrate that it can locate simulated damage in a complex structure. A numerical model of a complex structure was used to calculate the structural response before and after the introduction of a void. The method can now be considered for application to structures of programmatic interest. It could be used to monitor the structural integrity of complex mechanical structures and assemblies over their lifetimes. This would allow early detection of damage, when repair is relatively easy and inexpensive. It would also allow maintenance to be scheduled based on actual damage instead of a fixed time schedule.

  13. Medication calculation skills of graduating nursing students in Finland.

    Science.gov (United States)

    Grandell-Niemi, H; Hupli, M; Leino-Kilpi, H

    2001-01-01

    The aim of this study was to describe the basic mathematical proficiency and the medication calculation skills of graduating nursing students in Finland. A further concern was how students experienced the teaching of medication calculation. We wanted to find out whether these experiences were associated with various background factors and with the students' medication calculation skills. In spring 1997 the population of graduating nursing students in Finland numbered around 1280; the figure for the whole year was 2640. A convenience sample of 204 students completed a questionnaire specially developed for this study. The instrument included structured questions, statements and a medication calculation test. The response rate was 88%. Data analysis was based on descriptive statistics. The students found it hard to learn mathematics and medication calculation skills. Those who evaluated their mathematical and medication calculation skills as sufficient successfully solved the problems included in the questionnaire. The introductory course on medication calculation was felt to be uninteresting and poorly organised. Overall, the students' mathematical skills were inadequate: one-fifth of the students failed the medication calculation test. A positive correlation was shown between a student's grade in mathematics (Sixth Form College) and her skills in medication calculation.

  14. Using risk based tools in emergency response

    International Nuclear Information System (INIS)

    Dixon, B.W.; Ferns, K.G.

    1987-01-01

    Probabilistic Risk Assessment (PRA) techniques are used by the nuclear industry to model the potential response of a reactor subjected to unusual conditions. The knowledge contained in these models can aid emergency response decision making. This paper presents the requirements identified to date for a PRA-based emergency response support system. A brief discussion of published work provides background for a detailed description of recent developments. A rapid deep-assessment capability for specific portions of full plant models is presented. The program uses a screening rule base to control search-space expansion in a combinatorial algorithm

  15. Calculation of generalized Lorenz-Mie theory based on the localized beam models

    International Nuclear Information System (INIS)

    Jia, Xiaowei; Shen, Jianqi; Yu, Haitao

    2017-01-01

    It has been proved that the localized approximation (LA) is the most efficient way to evaluate the beam shape coefficients (BSCs) in generalized Lorenz-Mie theory (GLMT). The numerical calculation of the relevant physical quantities is a challenge for practical applications due to the limits of computer resources. This study presents an improved algorithm for the GLMT calculation based on the localized beam models. The BSCs and the angular functions are calculated by multiplying them by pre-factors so as to keep their values in a reasonable range. The algorithm is primarily developed for the original localized approximation (OLA) and is further extended to the modified localized approximation (MLA). Numerical results show that the algorithm is efficient, reliable and robust. - Highlights: • In this work, we introduce proper pre-factors to the Bessel functions, the BSCs and the angular functions. With this improvement, all the quantities involved in the numerical calculation are scaled into a reasonable range of values so that the algorithm can be used for computing the physical quantities of the GLMT. • The algorithm is not only an improvement in numerical technique; it also implies that the set of basis functions involved in electromagnetic scattering (and sonic scattering) can be reasonably chosen. • The algorithms for the GLMT computations introduced in previous references suggested that the order of the n and m sums be interchanged. In this work, the sum over azimuth modes is performed for each partial wave. This offers the possibility of speeding up the computation, since the sum over partial waves can be optimized according to the illumination conditions and the sum over azimuth modes can be truncated by selecting a criterion discussed in . • Numerical results show that the algorithm is efficient, reliable and robust, even in very exotic cases. The algorithm presented in this paper is based on the original localized approximation and it can also be used for the

  16. Calculation of acoustic field based on laser-measured vibration velocities on ultrasonic transducer surface

    Science.gov (United States)

    Hu, Liang; Zhao, Nannan; Gao, Zhijian; Mao, Kai; Chen, Wenyu; Fu, Xin

    2018-05-01

    Determination of the distribution of a generated acoustic field is valuable for studying ultrasonic transducers, providing guidance for transducer design, a basis for analyzing their performance, etc. A method for calculating the acoustic field based on laser-measured vibration velocities on the ultrasonic transducer surface is proposed in this paper. Without knowing the inner structure of the transducer, the acoustic field outside it can be calculated by solving the governing partial differential equation (PDE) of the field subject to the specified boundary conditions (BCs). In our study, the BC on the transducer surface, i.e. the distribution of the vibration velocity on the surface, is accurately determined by laser scanning measurement of discrete points followed by a data-fitting computation. In addition, to ensure the calculation accuracy over the whole field, even in an inhomogeneous medium, a finite element method is used to solve the governing PDE based on the mixed BCs, including the discretely measured velocity data and other specified BCs. The method is first validated on numerical piezoelectric transducer models. The acoustic pressure distributions generated by a transducer operating in a homogeneous and in an inhomogeneous medium, respectively, are both calculated by the proposed method and compared with the results from other existing methods. Then, the method is further validated experimentally with two actual ultrasonic transducers used for flow measurement in our lab. The amplitude change of the output voltage signal from the receiver transducer due to changing the relative position of the two transducers is calculated by the proposed method and compared with the experimental data. This method can also provide the basis for complex multi-physics coupling computations where the effect of the acoustic field should be taken into account.

  17. Sensor response time calculation with no stationary signals from a Nuclear Power Plant

    International Nuclear Information System (INIS)

    Vela, O.; Vallejo, I.

    1998-01-01

    Protection systems in a Nuclear Power Plant have to respond within a specific time fixed by design requirements. This time includes the event detection (sensor delay) and the actuation time of the system. Traditionally it is obtained during refuelling by simulating, with an electric signal, the physical event that triggers the protection system and measuring the actuation time. Nowadays the sensor delay is calculated with noise analysis techniques: the signals are measured in the Control Room during normal operation of the plant, decreasing both the time cost and the personnel radiation exposure. Noise analysis techniques require stationary signals, but the collected data are normally mixed with non-stationary process signals. This work shows the signal processing used to remove the non-stationary components, using conventional filters and the newer wavelet analysis. (Author) 2 refs
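A minimal numerical sketch of the noise-analysis idea, under simplifying assumptions not taken from the paper: a first-order sensor driven by white noise is superimposed on a slow non-stationary trend; a polynomial detrend stands in for the conventional filtering / wavelet step, and the sensor time constant is then recovered from the exponential decay of the signal's autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, n, tau_true = 0.01, 100_000, 0.5      # sample step [s], samples, time constant [s]

# First-order lag (AR(1)) sensor model driven by white noise
a = np.exp(-dt / tau_true)
w = rng.normal(size=n) * np.sqrt(1.0 - a**2)
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):
    x[i + 1] = a * x[i] + w[i]

# Superimpose a slow non-stationary drift, as seen in plant data
t = np.arange(n) * dt
x_meas = x + 0.5 * np.sin(2.0 * np.pi * t / t[-1])

# Step 1: remove the non-stationary component (low-order polynomial detrend
# standing in for the paper's filters / wavelet analysis)
trend = np.polyval(np.polyfit(t, x_meas, 3), t)
x_stat = x_meas - trend

# Step 2: the AR(1) autocorrelation is exp(-lag*dt/tau), so the slope of
# log(acf) versus lag yields the sensor time constant
lags = np.arange(1, 20)
acf = np.array([np.corrcoef(x_stat[:-l], x_stat[l:])[0, 1] for l in lags])
tau_est = -dt / np.polyfit(lags.astype(float), np.log(acf), 1)[0]
```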

  18. BaTiO3-based nanolayers and nanotubes: first-principles calculations.

    Science.gov (United States)

    Evarestov, Robert A; Bandura, Andrei V; Kuruch, Dmitrii D

    2013-01-30

    First-principles calculations using a hybrid exchange-correlation functional and a localized atomic basis set are performed for BaTiO(3) (BTO) nanolayers and nanotubes (NTs), with structure optimization. Both the cubic and the ferroelectric BTO phases are used for modeling the nanolayers and NTs. It follows from the calculations that nanolayers of the different ferroelectric BTO phases have practically identical surface energies and are more stable than nanolayers of the cubic phase. Thin nanosheets composed of three or more dense layers of (0 1 0) and (0 1 1[overline]) faces preserve the ferroelectric displacements inherent to the initial bulk phase. The structure and stability of BTO single-wall NTs depend on the original bulk crystal phase and the wall thickness. The majority of the considered NTs with low formation and strain energies have a mirror plane perpendicular to the tube axis and therefore cannot exhibit ferroelectricity. The NTs folded from (0 1 1[overline]) layers may show an antiferroelectric arrangement of Ti-O bonds. Comparison of the stability of the BTO-based and SrTiO(3)-based NTs shows that the former are more stable than the latter. Copyright © 2012 Wiley Periodicals, Inc.

  19. Development of a micro-depletion model to use WIMS properties in history-based local-parameter calculations in RFSP

    International Nuclear Information System (INIS)

    Shen, W.

    2004-01-01

    A micro-depletion model has been developed and implemented in the *SIMULATE module of RFSP to use WIMS-calculated lattice properties in history-based local-parameter calculations. A comparison between the micro-depletion and WIMS results for each type of lattice cross-section and for the infinite-lattice multiplication factor was also performed for a fuel similar to that which may be used in the ACR. The comparison shows that the micro-depletion calculation agrees well with the WIMS-IST calculation: the relative differences in k-infinity are within ±0.5 mk and ±0.9 mk for perturbation and depletion calculations, respectively. The micro-depletion model gives the *SIMULATE module of RFSP the capability to use WIMS-calculated lattice properties in history-based local-parameter calculations without resorting to the Simple-Cell-Methodology (SCM) surrogate for CANDU core-tracking simulations. (author)

  20. The finite element response Matrix method

    International Nuclear Information System (INIS)

    Nakata, H.; Martin, W.R.

    1983-01-01

    A new method for global reactor core calculations is described. This method is based on a unique formulation of the response matrix method, implemented with a higher-order finite element method. The unique aspects of this approach are twofold. First, there are two levels to the overall calculational scheme: the local or assembly level and the global or core level. Second, the response matrix scheme, which is formulated at both levels, consists of two separate response matrices rather than one response matrix as is generally the case. These separate response matrices are quite beneficial for the criticality eigenvalue calculation, because they are independent of k-eff. The response matrices are generated from a Galerkin finite element solution to the weak form of the diffusion equation, subject to an arbitrary incoming current and an arbitrary distributed source. Calculational results are reported for two test problems, the two-dimensional International Atomic Energy Agency benchmark problem and a two-dimensional pressurized water reactor test problem (Biblis reactor), and they compare well with standard coarse-mesh methods with respect to accuracy and efficiency. Moreover, the accuracy (and capability) is comparable to fine-mesh methods at a fraction of the computational cost. Extension of the method to treat heterogeneous assemblies and spatial depletion effects is discussed.
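At the global level, schemes of this kind reduce to the criticality eigenvalue problem M·φ = (1/k)·F·φ. The sketch below runs a generic power iteration on a small, made-up loss matrix M and fission matrix F (not the finite-element response matrices of the paper) to find k-eff and the flux shape.

```python
import numpy as np

# Illustrative loss (leakage + absorption) and fission-production operators;
# the numbers are invented for a 3-node toy problem.
M = np.array([[ 0.12, -0.02,  0.00],
              [-0.02,  0.10, -0.02],
              [ 0.00, -0.02,  0.12]])
F = np.diag([0.011, 0.012, 0.011])

phi = np.ones(3)
k = 1.0
for _ in range(200):
    # outer (power) iteration: solve M*phi_new = (1/k)*F*phi
    phi_new = np.linalg.solve(M, F @ phi / k)
    # update the eigenvalue from the fission-source ratio
    k *= (F @ phi_new).sum() / (F @ phi).sum()
    phi = phi_new / np.linalg.norm(phi_new)
```

At convergence, k is the dominant eigenvalue of M⁻¹F and φ the corresponding positive flux mode.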

  1. A New Displacement-based Approach to Calculate Stress Intensity Factors With the Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Marco Gonzalez

    Full Text Available Abstract The analysis of cracked brittle mechanical components considering linear elastic fracture mechanics is usually reduced to the evaluation of stress intensity factors (SIFs). The SIF calculation can be carried out experimentally, theoretically or numerically. Each methodology has its own advantages but the use of numerical methods has become very popular. Several schemes for numerical SIF calculations have been developed, the J-integral method being one of the most widely used because of its energy-like formulation. Additionally, some variations of the J-integral method, such as displacement-based methods, are also becoming popular due to their simplicity. In this work, a simple displacement-based scheme is proposed to calculate SIFs, and its performance is compared with contour integrals. These schemes are all implemented with the Boundary Element Method (BEM) in order to exploit its advantages in crack growth modelling. Some simple examples are solved with the BEM and the calculated SIF values are compared against available solutions, showing good agreement between the different schemes.
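A displacement-based SIF evaluation can be sketched in a few lines: on the crack flank under plane strain, one common form of the near-tip field gives u_y(r) = (κ+1)/(2μ)·K_I·√(r/2π), so a least-squares fit of u_y against √r recovers K_I. The displacements below are synthetic, generated from a chosen K_I rather than from a BEM solution, and the material values are hypothetical.

```python
import numpy as np

E, nu = 200e9, 0.3                      # hypothetical steel-like material
mu = E / (2.0 * (1.0 + nu))             # shear modulus
kappa = 3.0 - 4.0 * nu                  # Kolosov constant, plane strain

K_true = 5e6                            # Pa*sqrt(m), imposed to generate data
r = np.linspace(1e-4, 2e-3, 20)         # sampling points behind the crack tip
u_y = (kappa + 1.0) / (2.0 * mu) * K_true * np.sqrt(r / (2.0 * np.pi))

# fit u_y = C*sqrt(r), then invert:  K_I = C * 2*mu*sqrt(2*pi) / (kappa + 1)
C = np.linalg.lstsq(np.sqrt(r)[:, None], u_y, rcond=None)[0][0]
K_est = C * 2.0 * mu * np.sqrt(2.0 * np.pi) / (kappa + 1.0)
```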

  2. Application of the perturbation theory for sensitivity calculations in thermalhydraulics reactor calculations

    International Nuclear Information System (INIS)

    Andrade Lima, F.R. de

    1986-01-01

    The sensitivity of nonlinear responses associated with physical quantities governed by nonlinear differential systems can be studied using perturbation theory. The equivalence and the formal differences between the differential and GPT formalisms are shown, and both are used for sensitivity calculations of transient problems in a typical PWR coolant channel. The results obtained are encouraging with respect to the potential of the method for the thermalhydraulic calculations normally performed for reactor design and safety analysis. (Author) [pt

  3. Two-dimensional core calculation research for fuel management optimization based on CPACT code

    International Nuclear Information System (INIS)

    Chen Xiaosong; Peng Lianghui; Gang Zhi

    2013-01-01

    The fuel management optimization process requires rapid assessment of candidate core loading patterns; commonly used methods include the two-dimensional diffusion nodal method, the perturbation method, the neural network method, etc. A two-dimensional loading-pattern evaluation code was developed based on the three-dimensional LWR diffusion calculation program CPACT. An axial buckling, introduced to simulate the axial leakage, was searched within burnup sub-sections to correct the two-dimensional core diffusion calculation results. Meanwhile, to obtain better accuracy, the weight-equivalent volume method for the control rod assembly cross-section was improved. (authors)

  4. Mobile application-based Seoul National University Prostate Cancer Risk Calculator: development, validation, and comparative analysis with two Western risk calculators in Korean men.

    Directory of Open Access Journals (Sweden)

    Chang Wook Jeong

    Full Text Available OBJECTIVES: We developed a mobile application-based Seoul National University Prostate Cancer Risk Calculator (SNUPC-RC) that predicts the probability of prostate cancer (PC) at the initial prostate biopsy in a Korean cohort. Additionally, the application was validated and subjected to head-to-head comparisons with internet-based Western risk calculators in a validation cohort. Here, we describe its development and validation. PATIENTS AND METHODS: As a retrospective study, consecutive men who underwent initial prostate biopsy with more than 12 cores at a tertiary center were included. In the development stage, 3,482 cases from May 2003 through November 2010 were analyzed. Clinical variables were evaluated, and the final prediction model was developed using the logistic regression model. In the validation stage, 1,112 cases from December 2010 through June 2012 were used. SNUPC-RC was compared with the European Randomized Study of Screening for PC Risk Calculator (ERSPC-RC) and the Prostate Cancer Prevention Trial Risk Calculator (PCPT-RC). The predictive accuracy was assessed using the area under the receiver operating characteristic curve (AUC). The clinical value was evaluated using decision curve analysis. RESULTS: PC was diagnosed in 1,240 (35.6%) and 417 (37.5%) men in the development and validation cohorts, respectively. Age, prostate-specific antigen level, prostate size, and abnormality on digital rectal examination or transrectal ultrasonography were significant factors for PC and were included in the final model. The predictive accuracy in the development cohort was 0.786. In the validation cohort, the AUC was significantly higher for the SNUPC-RC (0.811) than for the ERSPC-RC (0.768, p<0.001) and the PCPT-RC (0.704, p<0.001). Decision curve analysis also showed higher net benefits with SNUPC-RC than with the other calculators. CONCLUSIONS: SNUPC-RC has a higher predictive accuracy and clinical benefit than the Western risk calculators. Furthermore, it is easy
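The AUC comparison central to these results can be computed without any statistics package via the rank-based (Mann-Whitney) formulation: the AUC equals the probability that a randomly chosen case outranks a randomly chosen control. A small sketch with made-up scores, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Mann-Whitney AUC: P(score of a random positive > score of a random
    negative), with average ranks assigned over ties."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    order = np.argsort(scores, kind="mergesort")
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):        # average ranks over tied scores
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = labels.sum()
    n_neg = (~labels).sum()
    return (ranks[labels].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```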

  5. KBERG: KnowledgeBase for Estrogen Responsive Genes

    DEFF Research Database (Denmark)

    Tang, Suisheng; Zhang, Zhuo; Tan, Sin Lam

    2007-01-01

    Estrogen has a profound impact on human physiology affecting transcription of numerous genes. To decipher functional characteristics of estrogen responsive genes, we developed KnowledgeBase for Estrogen Responsive Genes (KBERG). Genes in KBERG were derived from Estrogen Responsive Gene Database...... (ERGDB) and were analyzed from multiple aspects. We explored the possible transcription regulation mechanism by capturing highly conserved promoter motifs across orthologous genes, using promoter regions that cover the range of [-1200, +500] relative to the transcription start sites. The motif detection...... is based on ab initio discovery of common cis-elements from the orthologous gene cluster from human, mouse and rat, thus reflecting a degree of promoter sequence preservation during evolution. The identified motifs are linked to transcription factor binding sites based on the TRANSFAC database. In addition...

  6. Wall attenuation and scatter corrections for ion chambers: measurements versus calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rogers, D W.O.; Bielajew, A F [National Research Council of Canada, Ottawa, ON (Canada). Div. of Physics

    1990-08-01

    In precision ion chamber dosimetry in air, wall attenuation and scatter are corrected for by A{sub wall} (K{sub att} in IAEA terminology, K{sub w}{sup -1} in standards laboratory terminology). Using the EGS4 system, the authors show that Monte Carlo calculated A{sub wall} factors predict relative variations in detector response with wall thickness which agree with all available experimental data within a statistical uncertainty of less than 0.1%. The calculated correction factors for use in exposure and air kerma standards differ by up to 1% from those obtained by extrapolating these same measurements. Using the calculated correction factors would imply increases of 0.7-1.0% in the exposure and air kerma standards based on spherical and large-diameter, large-length cylindrical chambers, and decreases of 0.3-0.5% for standards based on large-diameter pancake chambers. (author).

  7. Response surface methodology to simplify calculation of wood energy potency from tropical short rotation coppice species

    Science.gov (United States)

    Haqiqi, M. T.; Yuliansyah; Suwinarti, W.; Amirta, R.

    2018-04-01

    The Short Rotation Coppice (SRC) system is an option for providing renewable and sustainable feedstock for generating electricity in rural areas. In this study, we focussed on the application of Response Surface Methodology (RSM) to simplify the calculation protocols for estimating wood chip production and energy potency from some tropical SRC species, identified as Bauhinia purpurea, Bridelia tomentosa, Calliandra calothyrsus, Fagraea racemosa, Gliricidia sepium, Melastoma malabathricum, Piper aduncum, Vernonia amygdalina, Vernonia arborea and Vitex pinnata. The results showed that the highest calorific value was obtained from V. pinnata wood (19.97 MJ kg-1), due to its high lignin content (29.84 %, w/w). Our findings also indicated that the RSM estimate of the energy-electricity of SRC wood was significant for the quadratic model (R2 = 0.953), whereas the solid-chip ratio prediction was accurate (R2 = 1.000). In the near future, this simple formula promises to ease the calculation of energy production from woody biomass, especially from SRC species.
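The RSM fit behind a quoted R² of this kind is an ordinary least-squares quadratic response surface. A sketch with synthetic data; the predictor names and coefficients below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(20.0, 35.0, 40)     # hypothetical lignin content [% w/w]
x2 = rng.uniform(5.0, 15.0, 40)      # hypothetical moisture [% w/w]
y = 12.0 + 0.25 * x1 - 0.10 * x2 + 0.002 * x1**2   # assumed response [MJ/kg]

# full quadratic response surface:
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta
r2 = 1.0 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Because the synthetic response lies exactly in the model space, the fit recovers the coefficients and R² approaches 1; real data would leave a residual, as in the quoted R² = 0.953.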

  8. Gender Responsive Community Based Planning and Budgeting ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    ... Responsive Community Based Planning and Budgeting Tool for Local Governance ... in data collection, and another module that facilitates gender responsive and ... In partnership with UNESCO's Organization for Women in Science for the ...

  9. Development and validation of a criticality calculation scheme based on French deterministic transport codes

    International Nuclear Information System (INIS)

    Santamarina, A.

    1991-01-01

    A criticality-safety calculational scheme using the automated deterministic code system APOLLO-BISTRO has been developed. The cell/assembly code APOLLO is used mainly in LWR and HCR design calculations, and its validation spans a wide range of moderation ratios, including voided configurations. Its recent 99-group library and self-shielded cross-sections have been extensively qualified through critical experiments and PWR spent fuel analysis. The PIC self-shielding formalism enables a rigorous treatment of the fuel double heterogeneity in dissolver medium calculations. BISTRO is an optimized multidimensional SN code, part of the modular CCRR package used mainly in FBR calculations. The APOLLO-BISTRO scheme was applied to the 18 experimental benchmarks selected by the OECD/NEACRP Criticality Calculation Working Group. The calculation-experiment discrepancy was within ±1% in ΔK/K and always consistent with the experimental uncertainty margin. In the critical experiments corresponding to a dissolver-type benchmark, our tools computed a satisfactory Keff. In the VALDUC fuel storage experiments, with hafnium plates, the computed Keff ranged between 0.994 and 1.003 for the various water gaps spacing the fuel clusters from the absorber plates. The APOLLO-KENOEUR statistical calculational scheme, based on the same self-shielded multigroup library, supplied consistent results within 0.3% in ΔK/K. (Author)

  10. Evaluation bases for calculation methods in radioecology

    International Nuclear Information System (INIS)

    Bleck-Neuhaus, J.; Boikat, U.; Franke, B.; Hinrichsen, K.; Hoepfner, U.; Ratka, R.; Steinhilber-Schwab, B.; Teufel, D.; Urbach, M.

    1982-03-01

    The seven contributions in this book deal with the state and problems of radioecology. In particular, they analyse: the propagation of radioactive materials in the atmosphere; the transfer of radioactive substances from the soil into plants, and from animal feed into meat; the exposure pathways for, and high-risk groups of, the population; the uncertainties and the band width of the ingestion factor; and the treatment of questions of radioecology in practice. The calculation model is assessed, and the difficulty of laying down data in the general calculation basis is evaluated. (DG) [de

  11. CALCULATION OF LASER CUTTING COSTS

    OpenAIRE

    Bogdan Nedic; Milan Eric; Marijana Aleksijevic

    2016-01-01

    The paper presents a description of methods of metal cutting and the calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables resolving the problem of calculating the cost of laser cutting, compar...

  12. Poker-camp: a program for calculating detector responses and phantom organ doses in environmental gamma fields

    International Nuclear Information System (INIS)

    Koblinger, L.

    1981-09-01

    A general description, a user's manual and a sample problem are given in this report for the POKER-CAMP adjoint Monte Carlo photon transport program. The code simulates the gamma fields of different environmental sources: uniformly or exponentially distributed sources, or plane sources, in the air, in the soil, or in an intermediate layer placed between them. Calculations can be made of the flux, kerma and spectra of photons at any point; of the responses of point-like, cylindrical, or spherical detectors; and of the doses absorbed in anthropomorphic phantoms. (author)
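For a quick point of comparison with such transport codes, the uncollided flux from an isotropic point source has the closed-form point-kernel expression φ = S·e^(−μr)/(4πr²). The helper below is a generic illustration of that formula, not part of POKER-CAMP:

```python
import numpy as np

def uncollided_flux(S, mu, r):
    """Uncollided flux [1/m^2/s] at distance r [m] from an isotropic point
    source of strength S [photons/s] in a medium with linear attenuation
    coefficient mu [1/m].  Scattered (build-up) contributions are ignored."""
    return S * np.exp(-mu * r) / (4.0 * np.pi * r**2)
```

A Monte Carlo result for the same geometry should approach this value as scatter is switched off.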

  13. Tunable photoelectric response in NiO-based heterostructures by various orientations

    Science.gov (United States)

    Luo, Yidong; Qiao, Lina; Zhang, Qinghua; Xu, Haomin; Shen, Yang; Lin, Yuanhua; Nan, Cewen

    2018-02-01

    We engineered various orientations of NiO layers in NiO-based heterostructures (NiO/Au/STO) to investigate their effects on the generation of hot electrons and holes. Our calculations and experimental results suggest that bandgap engineering and the orientation of the hole transport layer (NiO) are crucial elements for the optimization of the photoelectric response. The (100)-orientated NiO/Au/STO achieved the highest photo-current density (˜30 μA/cm2) compared with the (111)- and (110)-orientated NiO films, which is attributed to the (100) film's lowest effective mass of photogenerated holes (˜1.82 m0) and to the highest electron-hole separation and transfer efficiency of the (100)-orientated sample. Our results open a direction for designing high-efficiency photoelectric solar cells.

  14. Calculation of parameters of radial-piston reducer based on the use of functional semantic networks

    Directory of Open Access Journals (Sweden)

    Pashkevich V.M.

    2016-12-01

    Full Text Available The calculation of the parameters of a radial-piston reducer is considered in this article, using an approach based on the technology of functional semantic networks. The applicability of functional semantic networks to the calculation of the reducer parameters is examined, and semantic networks for calculating the mass of the radial-piston reducer are given.

  15. An automated Monte-Carlo based method for the calculation of cascade summing factors

    Science.gov (United States)

    Jackson, M. J.; Britton, R.; Davies, A. V.; McLarty, J. L.; Goodwin, M.

    2016-10-01

    A versatile method has been developed to calculate cascade summing factors for use in quantitative gamma-spectrometry analysis procedures. The proposed method is based solely on Evaluated Nuclear Structure Data File (ENSDF) nuclear data, an X-ray energy library, and accurate efficiency characterisations for single detector counting geometries. The algorithm, which accounts for γ-γ, γ-X, γ-511 and γ-e- coincidences, can be applied to any design of gamma spectrometer and can be expanded to incorporate any number of nuclides. Efficiency characterisations can be derived from measured or mathematically modelled functions, and can accommodate both point and volumetric source types. The calculated results are shown to be consistent with an industry standard gamma-spectrometry software package. Additional benefits including calculation of cascade summing factors for all gamma and X-ray emissions, not just the major emission lines, are also highlighted.
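The simplest instance of the effect being corrected is a two-gamma cascade: counts leave the full-energy peak of one gamma whenever its coincident partner deposits any energy in the detector, scaling the peak by (1 − ε_total of the partner). A first-order sketch with a hypothetical total efficiency; the paper's algorithm generalizes this to arbitrary γ-γ, γ-X, γ-511 and γ-e⁻ coincidences:

```python
# First-order summing-out correction for a simple two-gamma cascade.
# eps_total_other is the detector's TOTAL efficiency for the coincident
# partner (hypothetical value below); the measured peak area is multiplied
# by this factor to restore the true area.
def summing_out_correction(eps_total_other):
    return 1.0 / (1.0 - eps_total_other)

c1 = summing_out_correction(0.08)   # correct gamma-1 peak, partner eps_t = 8%
```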

  16. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    International Nuclear Information System (INIS)

    Puigdomenech, I.; Bruno, J.

    1995-04-01

    Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C at a pressure of 1 bar, using estimated ΔrC°p,m values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the data base have also been estimated, and therefore the temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs
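The kind of temperature extrapolation described above follows from the Gibbs-Helmholtz relations with a constant reaction heat capacity: ΔH(T) = ΔH° + ΔCp·(T − T₀), ΔS(T) = ΔS° + ΔCp·ln(T/T₀), and log₁₀K = −ΔG/(RT·ln 10). A generic sketch; the reaction values used in the example are placeholders, not the selected Tc data:

```python
import numpy as np

R = 8.314462618          # gas constant [J/(mol*K)]
T0 = 298.15              # reference temperature [K]

def log10_K(T, dH0, dS0, dCp=0.0):
    """Extrapolate log10 K from 25 C to T [K] assuming constant reaction
    heat capacity dCp.  dH0 [J/mol], dS0 and dCp [J/(mol*K)]."""
    dH = dH0 + dCp * (T - T0)
    dS = dS0 + dCp * np.log(T / T0)
    dG = dH - T * dS
    return -dG / (R * T * np.log(10.0))
```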

  17. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    International Nuclear Information System (INIS)

    Park, Peter C.; Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian; Fox, Tim; Zhu, X. Ronald; Dong, Lei; Dhabaan, Anees

    2015-01-01

    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts

  18. The PHREEQE Geochemical equilibrium code data base and calculations

    International Nuclear Information System (INIS)

    Andersoon, K.

    1987-01-01

    Compilation of a thermodynamic data base for actinides and fission products for use with PHREEQE has begun, and a preliminary set of actinide data has been tested for the PHREEQE code in a version run on an IBM XT computer. The work until now has shown that the PHREEQE code mostly gives satisfying results for the speciation of actinides in a natural water environment. For U and Np under oxidizing conditions, however, the code has difficulty converging, with pH and Eh conserved, when a solubility limit is applied. For further calculations of actinide and fission product speciation and solubility in a waste repository and in the surrounding geosphere, more data are needed. It is necessary to evaluate the influence of the large uncertainties of some data. Quality assurance and a check on the consistency of the data base are also needed. Further work with the data base should include: an extension to fission products; an extension to engineering materials; an extension to ligands other than hydroxide and carbonate; inclusion of more mineral phases; inclusion of enthalpy data; a control of primary references, in order to decide whether values from different compilations are taken from the same primary reference; and contacts and discussions with other groups working with actinide data bases, e.g. at the OECD/NEA and at the IAEA. (author)

  19. SU-F-J-109: Generate Synthetic CT From Cone Beam CT for CBCT-Based Dose Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, H; Barbee, D; Wang, W; Pennell, R; Hu, K; Osterman, K [Department of Radiation Oncology, NYU Langone Medical Center, New York, NY (United States)

    2016-06-15

    Purpose: The use of CBCT for dose calculation is limited by its HU inaccuracy from increased scatter. This study presents a method to generate synthetic CT images from CBCT data by a probabilistic classification that may be robust to CBCT noise. The feasibility of using the synthetic CT for dose calculation is evaluated in IMRT for unilateral H&N cancer. Methods: In the training phase, a fuzzy c-means classification was performed on HU vectors (CBCT, CT) of planning CT and registered day-1 CBCT image pair. Using the resulting centroid CBCT and CT values for five classified “tissue” types, a synthetic CT for a daily CBCT was created by classifying each CBCT voxel to obtain its probability belonging to each tissue class, then assigning a CT HU with a probability-weighted summation of the classes’ CT centroids. Two synthetic CTs from a CBCT were generated: s-CT using the centroids from classification of individual patient CBCT/CT data; s2-CT using the same centroids for all patients to investigate the applicability of group-based centroids. IMRT dose calculations for five patients were performed on the synthetic CTs and compared with CT-planning doses by dose-volume statistics. Results: DVH curves of PTVs and critical organs calculated on s-CT and s2-CT agree with those from planning-CT within 3%, while doses calculated with heterogeneity off or on raw CBCT show DVH differences up to 15%. The differences in PTV D95% and spinal cord max are 0.6±0.6% and 0.6±0.3% for s-CT, and 1.6±1.7% and 1.9±1.7% for s2-CT. Gamma analysis (2%/2mm) shows 97.5±1.6% and 97.6±1.6% pass rates for using s-CTs and s2-CTs compared with CT-based doses, respectively. Conclusion: CBCT-synthesized CTs using individual or group-based centroids resulted in dose calculations that are comparable to CT-planning dose for unilateral H&N cancer. The method may provide a tool for accurate dose calculation based on daily CBCT.
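The probability-weighted HU assignment can be sketched with the standard fuzzy c-means membership formula u_i ∝ d_i^(−2/(m−1)): each CBCT voxel receives memberships to the tissue-class centroids, and its synthetic-CT value is the membership-weighted sum of the classes' CT centroids. The centroid pairs and fuzzifier below are invented for illustration, not the trained values from the study:

```python
import numpy as np

# hypothetical paired centroids for five tissue classes (CBCT HU -> CT HU)
cbct_centroids = np.array([-950.0, -120.0, 20.0, 200.0, 900.0])
ct_centroids   = np.array([-1000.0, -100.0, 40.0, 300.0, 1200.0])
m = 2.0                                   # fuzzy c-means fuzzifier

def synth_ct(cbct_hu):
    """Map CBCT HU value(s) to synthetic-CT HU via fuzzy memberships."""
    d = np.abs(np.asarray(cbct_hu, dtype=float)[..., None] - cbct_centroids)
    d = np.maximum(d, 1e-9)               # avoid division by zero at a centroid
    u = d ** (-2.0 / (m - 1.0))           # unnormalized memberships
    u = u / u.sum(axis=-1, keepdims=True) # memberships sum to 1 per voxel
    return u @ ct_centroids
```

A voxel lying exactly on a CBCT centroid maps to that class's CT centroid; intermediate voxels get a smooth blend, which is what makes the scheme tolerant of CBCT noise.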


  1. Band structure calculation of GaSe-based nanostructures using empirical pseudopotential method

    International Nuclear Information System (INIS)

    Osadchy, A V; Obraztsova, E D; Volotovskiy, S G; Golovashkin, D L; Savin, V V

    2016-01-01

    In this paper we present the results of band structure computer simulations of GaSe-based nanostructures using the empirical pseudopotential method. Calculations were performed using specially developed software that supports cluster computing. This method significantly reduces the demands on computing resources compared to traditional approaches based on ab-initio techniques, while providing adequate, comparable results. The use of cluster computing makes it possible to obtain information for structures that require an explicit account of a significant number of atoms, such as quantum dots and quantum pillars. (paper)

  2. a New Method for Calculating Fractal Dimensions of Porous Media Based on Pore Size Distribution

    Science.gov (United States)

    Xia, Yuxuan; Cai, Jianchao; Wei, Wei; Hu, Xiangyun; Wang, Xin; Ge, Xinmin

    Fractal theory has been widely applied to the petrophysical properties of porous rocks over several decades, and the determination of fractal dimensions has been a persistent focus of research and applications of fractal-based methods. In this work, a new method for calculating the pore-space fractal dimension and the tortuosity fractal dimension of porous media is derived based on a fractal capillary model assumption. The work establishes a relationship between the fractal dimensions and the pore size distribution, which can be used directly to calculate the fractal dimensions. Published pore-size-distribution data for eight sandstone samples are used to calculate the fractal dimensions and are compared with the predictions of the analytical expression. In addition, the proposed fractal dimension method is tested on Micro-CT images of three sandstone cores and compared with fractal dimensions obtained by the box-counting algorithm. The test results confirm a self-similar fractal range in sandstone when the smaller pores are excluded.
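The pore-size-distribution route to a fractal dimension rests on the scaling law N(> r) ∝ r^(−D) for the cumulative pore count. The sketch below samples synthetic pore radii from a truncated power law with a chosen D and recovers it from a log-log fit over the self-similar range; the numbers are illustrative, whereas the paper relates D to measured distributions analytically:

```python
import numpy as np

rng = np.random.default_rng(2)
D_true, r_min, r_max, n = 1.6, 1e-3, 1.0, 50_000

# inverse-CDF sampling of a truncated power-law (Pareto-type) radius law
u = rng.uniform(size=n)
r = r_min * (1.0 - u * (1.0 - (r_min / r_max) ** D_true)) ** (-1.0 / D_true)

r_sorted = np.sort(r)[::-1]               # descending radii
N_cum = np.arange(1, n + 1)               # N(>= r_sorted[i]) = i + 1
# fit only the self-similar range, away from the truncation at both ends
mask = (r_sorted > 10 * r_min) & (r_sorted < 0.1 * r_max)
slope = np.polyfit(np.log(r_sorted[mask]), np.log(N_cum[mask]), 1)[0]
D_est = -slope                            # N(>r) ~ r**(-D)  =>  slope = -D
```

Restricting the fit window mirrors the paper's observation that self-similarity holds only once the smallest pores are excluded.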

  3. Fundamental principles of earthquake resistance calculation to be reflected in the next generation regulations

    Directory of Open Access Journals (Sweden)

    Mkrtychev Oleg

    2016-01-01

Full Text Available The article scrutinizes pressing issues of regulation in the domain of seismic construction. The existing code of rules SNiP II-7-81* “Construction in seismic areas” provides that earthquake resistance calculations be performed at two levels of impact: the basic safety earthquake (BSE) and the maximum considered earthquake (MCE). However, the underlying nature of such calculations cannot be deemed well founded and contradicts the fundamental standards of other countries. The authors identify the main problems of the conceptual foundation underlying the current regulation. The first step toward overcoming this discrepancy is the renunciation of the K1 damage tolerance factor when calculating the BSE. The second is implementation of the response spectrum method, in which the β spectral curve of the dynamic response factor is replaced by a spectrum of worst-case accelerograms for the particular structure or a spectrum of simulated accelerograms obtained for the specific construction site. Applying the response spectrum method at the MCE impact level makes it possible to work in the frequency domain and to obtain spectra of the accelerograms. As a result, the response of the building becomes known to some extent, i.e. forces and the required reinforcement, and it can be checked whether the ultimate limit state conditions apply. The most heavily loaded elements are then excluded from the design model, as is done in progressive collapse calculations, on the assumption that these elements are destroyed locally by the seismic load. This procedure builds on existing design practice for progressive collapse calculation.
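
    For reference, one ordinate of an acceleration response spectrum is simply the peak response of a damped single-degree-of-freedom oscillator driven by the accelerogram. The sketch below uses textbook central-difference integration and an illustrative step-like ground motion; it is not the normative β curve of the code:

    ```python
    import numpy as np

    def spectral_acceleration(ground_acc, dt, period, damping=0.05):
        """Peak pseudo-acceleration of a damped SDOF oscillator driven by a
        ground accelerogram: one point of a response spectrum.
        Explicit central differences; stable for dt < period/pi."""
        w = 2.0 * np.pi / period
        c1 = 1.0 / dt**2 + damping * w / dt   # multiplies u[n+1]
        c2 = 2.0 / dt**2 - w * w              # multiplies u[n]
        c3 = 1.0 / dt**2 - damping * w / dt   # multiplies u[n-1]
        u_prev = u = peak = 0.0
        for ag in ground_acc:
            # u'' + 2*z*w*u' + w^2*u = -ag, discretized centrally
            u_next = (-ag + c2 * u - c3 * u_prev) / c1
            peak = max(peak, abs(u_next))
            u_prev, u = u, u_next
        return w * w * peak   # pseudo-acceleration Sa = w^2 * max|u|

    # A sudden, sustained ground acceleration of 1 m/s^2: the 5%-damped
    # oscillator overshoots the static response, so Sa lies between 1 and 2
    sa = spectral_acceleration(np.ones(2000), dt=0.005, period=1.0)
    ```

    Sweeping `period` over a range of values and plotting `sa` against it produces the spectrum itself.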

  4. Dissociating action-effect activation and effect-based response selection.

    Science.gov (United States)

    Schwarz, Katharina A; Pfister, Roland; Wirth, Robert; Kunde, Wilfried

    2018-05-25

Anticipated action effects have been shown to govern action selection and initiation, as described in ideomotor theory, and they have also been demonstrated to determine crosstalk between different tasks in multitasking studies. Such effect-based crosstalk was observed not only in a forward manner (with a first task influencing performance in a following second task) but also in a backward manner (the second task influencing the preceding first task), suggesting that action effect codes can become activated prior to a capacity-limited processing stage often denoted as response selection. The process of effect-based response production, by contrast, has been proposed to be capacity-limited. These observations jointly suggest that effect code activation can occur independently of effect-based response production, though this theoretical implication has not been tested directly at present. We tested this hypothesis by employing a dual-task set-up in which we manipulated the ease of effect-based response production (via response-effect compatibility) in an experimental design that allows for observing forward and backward crosstalk. We observed robust crosstalk effects and response-effect compatibility effects alike, but no interaction between both effects. These results indicate that effect activation can occur in parallel for several tasks, independently of effect-based response production, which is confined to one task at a time.

  5. Characterization of Ferrofluid-based Stimuli-responsive Elastomers

    OpenAIRE

    Sandra dePedro; Xavier Munoz-Berbel; Rosalia Rodríguez-Rodríguez; Jordi Sort; Jose Antonio Plaza; Juergen Brugger; Andreu Llobera; Victor J Cadarso

    2016-01-01

Stimuli-responsive materials undergo physicochemical and/or structural changes when a specific actuation is applied. They are heterogeneous composites, consisting of a non-responsive matrix in which functionality is provided by the filler. Surprisingly, the synthesis of polydimethylsiloxane (PDMS)-based stimuli-responsive elastomers (SREs) has seldom been presented. Here, we present the structural, biological, optical, magnetic, and mechanical properties of several magnetic SREs (M-SREs) obtained...

  6. Calculating acid-base and oxygenation status during COPD exacerbation using mathematically arterialised venous blood

    DEFF Research Database (Denmark)

    Rees, Stephen Edward; Rychwicka-Kielek, Beate A; Andersen, Bjarne F

    2012-01-01

Abstract Background: Repeated arterial puncture is painful. A mathematical method exists for transforming peripheral venous pH, PCO2 and PO2 to arterial values, eliminating the need for arterial sampling. This study evaluates this method for monitoring acid-base and oxygenation status during admission...... for exacerbation of chronic obstructive pulmonary disease (COPD). Methods: Simultaneous arterial and peripheral venous blood samples were analysed. Venous values were used to calculate arterial pH, PCO2 and PO2, and these were compared to measured values using Bland-Altman analysis and scatter plots. Calculated values of PO2......H, PCO2 and PO2 were 7.432±0.047, 6.8±1.7 kPa and 9.2±1.5 kPa, respectively. Calculated and measured arterial pH and PCO2 agreed well, with differences having small bias and SD (0.000±0.022 pH, -0.06±0.50 kPa PCO2), significantly better than venous blood alone. Calculated PO2 obeyed the clinical rules...

  7. Semiclassical Path Integral Calculation of Nonlinear Optical Spectroscopy.

    Science.gov (United States)

    Provazza, Justin; Segatta, Francesco; Garavelli, Marco; Coker, David F

    2018-02-13

Computation of nonlinear optical response functions allows for an in-depth connection between theory and experiment. Experimentally recorded spectra provide a high density of information, but to objectively disentangle overlapping signals and to reach a detailed and reliable understanding of the system dynamics, measurements must be integrated with theoretical approaches. Here, we present a new, highly accurate and efficient trajectory-based semiclassical path integral method for computing higher order nonlinear optical response functions for non-Markovian open quantum systems. The approach is, in principle, applicable to general Hamiltonians and does not require any restrictions on the form of the intrasystem or system-bath couplings. This method is systematically improvable and is shown to be valid in parameter regimes where perturbation theory-based methods qualitatively break down. As a test of the methodology presented here, we study a system-bath model for a coupled dimer for which we compare against numerically exact results and standard approximate perturbation theory-based calculations. Additionally, we study a monomer with discrete vibronic states that serves as the starting point for future investigation of vibronic signatures in nonlinear electronic spectroscopy.

  8. Improvement in MFTF data base system response times

    International Nuclear Information System (INIS)

    Lang, N.C.; Nelson, B.C.

    1983-01-01

The Supervisory Control and Diagnostic System for the Mirror Fusion Test Facility (MFTF) has been designed as an event-driven system. To this end we have designed a data base notification facility in which a task can request that it be loaded and started whenever an element in the data base is changed beyond some user-defined range. Our initial implementation of the notify facility exhibited marginal response times whenever a data base table with a large number of outstanding notifies was written into. In this paper we discuss the sources of the slow response and describe in detail a new structure for the list of notifies which minimizes search time, resulting in significantly faster response times
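
    The abstract does not give the improved list structure itself. One plausible shape, sketched here as an assumption, is to bucket notify requests per data-base element, so that a write inspects only the requests registered against the element being written instead of scanning every outstanding notify:

    ```python
    from collections import defaultdict

    class NotifyTable:
        """Notify requests bucketed by data-base element, so a write checks
        only the requests registered for that element (a hypothetical sketch;
        the paper's actual structure is not given in the abstract)."""

        def __init__(self):
            self._by_element = defaultdict(list)  # element -> [(low, high, task)]

        def request_notify(self, element, low, high, task):
            """Ask that `task` be started when `element` leaves [low, high]."""
            self._by_element[element].append((low, high, task))

        def write(self, element, value):
            """Store a new value; return the tasks whose ranges were exceeded."""
            return [task for (low, high, task) in self._by_element[element]
                    if not (low <= value <= high)]

    table = NotifyTable()
    table.request_notify("coil_temp", 0.0, 100.0, "alarm_task")
    fired = table.write("coil_temp", 120.0)   # value left its range
    ```

    The per-element index turns each write from a scan over all notifies into a lookup plus a scan over only the relevant few, which is the kind of search-time reduction the paper reports.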

  9. DEPDOSE: An interactive, microcomputer based program to calculate doses from exposure to radionuclides deposited on the ground

    International Nuclear Information System (INIS)

    Beres, D.A.; Hull, A.P.

    1991-12-01

DEPDOSE is an interactive, menu-driven, microcomputer-based program designed to rapidly calculate the committed dose from radionuclides deposited on the ground. The program is designed to require little or no computer expertise on the part of the user. The program consists of a dose calculation section and a library maintenance section, both available to the user from the main menu. The dose calculation section provides the user with the ability to calculate committed doses, determine the decay time needed to reach a particular dose, cross-compare deposition data from separate locations, and approximate a committed dose based on a measured exposure rate. The library maintenance section allows the user to review and update dose modifier data as well as to build and maintain libraries of radionuclide data, dose conversion factors, and default deposition data. The program is structured to give the user easy access for reviewing data prior to running the calculation. Deposition data can either be entered by the user or imported from other databases. Results can either be displayed on the screen or sent to the printer
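
    The core of such a calculation is simple: deposition multiplied by a dose conversion factor, with radioactive decay applied for the delay before exposure. A minimal sketch follows; the nuclide data and factor are illustrative, not DEPDOSE's actual library values:

    ```python
    import math

    def committed_dose(deposition_bq_m2, dcf_msv_per_bq_m2, half_life_d, delay_d):
        """Committed dose from ground deposition for a single radionuclide:
        dose = deposition * decay factor * dose conversion factor."""
        decay = math.exp(-math.log(2.0) * delay_d / half_life_d)
        return deposition_bq_m2 * decay * dcf_msv_per_bq_m2

    # Hypothetical nuclide with a 30-year half-life, one year after deposition
    dose = committed_dose(1.0e4, 5.0e-5, half_life_d=30 * 365.25, delay_d=365.25)
    ```

    Summing this expression over all deposited nuclides in a library gives the total committed dose; solving it for `delay_d` gives the decay time needed to reach a target dose, the program's second function.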

  10. Wave resistance calculation method combining Green functions based on Rankine and Kelvin source

    Directory of Open Access Journals (Sweden)

    LI Jingyu

    2017-12-01

Full Text Available [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However, this model of wave-making resistance is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance, using the Rankine source Green function to solve the hull surface's source density and combining it with the Lagally theorem for source point force calculation based on the Kelvin source Green function so as to solve the wave resistance. A case study of the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to a method that uses the Kelvin source Green function throughout, it has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.

  11. Absorbed doses behind bones with MR image-based dose calculations for radiotherapy treatment planning.

    Science.gov (United States)

    Korhonen, Juha; Kapanen, Mika; Keyrilainen, Jani; Seppala, Tiina; Tuomikoski, Laura; Tenhunen, Mikko

    2013-01-01

Magnetic resonance (MR) images are used increasingly in external radiotherapy target delineation because of their superior soft tissue contrast compared to computed tomography (CT) images. Nevertheless, radiotherapy treatment planning has traditionally been based on CT images, due to restrictive features of MR images such as the lack of electron density information. This research aimed to measure absorbed radiation doses in material behind different bone parts, and to evaluate dose calculation errors in two pseudo-CT images: first, by assuming a single electron density value for the bones, and second, by converting the electron density values inside bones from T1/T2*-weighted MR image intensity values. A dedicated phantom was constructed using fresh deer bones and gelatine. The effect of different bone parts on the absorbed dose behind them was investigated with a single open field at 6 and 15 MV, measuring clinically detectable dose deviations with an ionization chamber matrix. Dose calculation deviations in a conversion-based pseudo-CT image and in a bulk-density pseudo-CT image, where the relative electron density to water for the bones was set to 1.3, were quantified by comparing the calculation results with those obtained in a standard CT image by superposition and Monte Carlo algorithms. The calculations revealed that the applied bulk-density pseudo-CT image causes deviations of up to 2.7% (6 MV) and 2.0% (15 MV) in the dose behind the examined bones. The corresponding values in the conversion-based pseudo-CT image were 1.3% (6 MV) and 1.0% (15 MV). The examinations illustrated that representing the heterogeneous femoral bone (cortex denser than core) by a bulk density for the whole bone causes dose deviations of up to 2% both behind the bone edge and behind the middle part of the bone (diameter bones). This study indicates that the decrease in absorbed dose is not dependent on the bone diameter with all types of bones. Thus

  12. Calculation of surface acoustic waves in a multilayered piezoelectric structure

    International Nuclear Information System (INIS)

    Zhang Zuwei; Wen Zhiyu; Hu Jing

    2013-01-01

The propagation properties of surface acoustic waves (SAWs) in a ZnO-SiO2-Si multilayered piezoelectric structure are calculated using the recursive asymptotic method. The phase velocities and electromechanical coupling coefficients of the Rayleigh wave and the Love wave in different ZnO-SiO2-Si structures are calculated and analyzed. The Love mode wave is found to be predominantly generated, since the c-axis of the ZnO film is generally perpendicular to the substrate. To verify the calculated results, a Love mode SAW device based on the ZnO-SiO2-Si multilayered structure was fabricated by micromachining, and its frequency responses were measured. The experimental results are mainly consistent with the calculated ones, except for slightly larger velocities induced by residual stresses produced during fabrication of the films. The deviation of the experimental results from the calculated ones is reduced by thermal annealing. (semiconductor physics)

  13. Calculation of nuclear spin-spin coupling constants using frozen density embedding

    Energy Technology Data Exchange (ETDEWEB)

    Götz, Andreas W., E-mail: agoetz@sdsc.edu [San Diego Supercomputer Center, University of California San Diego, 9500 Gilman Dr MC 0505, La Jolla, California 92093-0505 (United States); Autschbach, Jochen [Department of Chemistry, University at Buffalo, State University of New York, Buffalo, New York 14260-3000 (United States); Visscher, Lucas, E-mail: visscher@chem.vu.nl [Amsterdam Center for Multiscale Modeling (ACMM), VU University Amsterdam, Theoretical Chemistry, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands)

    2014-03-14

We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.

  14. CALCULATION OF LASER CUTTING COSTS

    Directory of Open Access Journals (Sweden)

    Bogdan Nedic

    2016-09-01

Full Text Available The paper describes methods of laser metal cutting and the calculation of treatment costs based on a model developed at the Faculty of Mechanical Engineering in Kragujevac. Based on the systematization and analysis of a large number of cost calculation models for cutting with unconventional methods, a mathematical model is derived and used to create software for calculating the costs of metal cutting. The software solution enables calculating the cost of laser cutting, comparing it with the costs of other unconventional methods, and producing documentation consisting of reports on estimated costs.
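
    The abstract does not reproduce the model itself, but the general shape of such a calculation, with time-proportional machine, assist-gas and energy costs, can be sketched as follows (the structure and all rates are hypothetical, not the Kragujevac model):

    ```python
    def laser_cutting_cost(cut_length_m, cutting_speed_m_min,
                           machine_rate_eur_h, gas_rate_eur_h, energy_rate_eur_h):
        """Toy laser-cutting cost model: cutting time multiplied by the sum
        of the hourly machine, assist-gas and energy rates."""
        cutting_time_h = cut_length_m / cutting_speed_m_min / 60.0
        return cutting_time_h * (machine_rate_eur_h + gas_rate_eur_h
                                 + energy_rate_eur_h)

    # 120 m of cut at 2 m/min is one hour of cutting
    cost = laser_cutting_cost(120.0, 2.0, machine_rate_eur_h=40.0,
                              gas_rate_eur_h=5.0, energy_rate_eur_h=3.0)
    ```

    A real model of this kind would add speed's dependence on material and thickness, plus piercing, setup and depreciation terms; comparing methods then means evaluating the same part geometry under each method's rate set.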

  15. Comparison of Conductor-Temperature Calculations Based on Different Radial-Position-Temperature Detections for High-Voltage Power Cable

    Directory of Open Access Journals (Sweden)

    Lin Yang

    2018-01-01

Full Text Available In this paper, the calculation of the conductor temperature is related to the temperature sensor position in high-voltage power cables, and four thermal circuits, based on the temperatures of the insulation shield, the center of the waterproof compound, the aluminum sheath, and the jacket surface, are established to calculate the conductor temperature. To examine the effectiveness of the conductor temperature calculations, simulation models based on the flow characteristics of the air gap between the waterproof compound and the aluminum sheath were built, and thermocouples were placed at the four radial positions in a 110 kV cross-linked polyethylene (XLPE) insulated power cable to measure the temperatures at these positions. In the measurements, six cases of current heating tests under three laying environments (duct, water, and backfilled soil) were carried out. The errors of both the conductor temperature calculation and the simulation based on the temperature of the insulation shield were significantly smaller than the others under all laying environments. It is the uncertainty of the thermal resistivity, together with differences in the initial temperature of each radial position caused by solar radiation, that led to these results. The thermal capacitance of the air has little impact on the errors; the thermal resistance of the air gap is the largest error source. Compromising between temperature-estimation accuracy and insulation-damage risk, the waterproof compound is the recommended sensor position for improving the accuracy of the conductor-temperature calculation. When the thermal resistances are calculated correctly, the aluminum sheath is also a recommended sensor position besides the waterproof compound.
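
    In steady state, each of the four thermal circuits reduces to a ladder: the conductor temperature equals the sensor temperature plus the conductor losses times the total thermal resistance of the layers between conductor and sensor. A minimal sketch with made-up values:

    ```python
    def conductor_temperature(sensor_temp_c, losses_w_per_m, resistances_km_per_w):
        """Steady-state thermal-ladder estimate of conductor temperature
        from a radially placed sensor: each layer between conductor and
        sensor adds losses * R to the temperature rise."""
        return sensor_temp_c + losses_w_per_m * sum(resistances_km_per_w)

    # Sensor on the insulation shield at 55 C, 30 W/m of conductor losses,
    # insulation thermal resistance 0.5 K.m/W (illustrative numbers)
    t_cond = conductor_temperature(55.0, 30.0, [0.5])
    ```

    The closer the sensor sits to the conductor, the fewer resistance terms enter the sum, which is why a sensor at the insulation shield or waterproof compound accumulates less error from uncertain resistivities than one at the jacket surface.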

  16. Prospects in deterministic three dimensional whole-core transport calculations

    International Nuclear Information System (INIS)

    Sanchez, Richard

    2012-01-01

The point made in this paper is that, although detailed and precise three-dimensional (3D) whole-core transport calculations may become feasible in the future with massively parallel computers, they would apply to only some of the problems of the nuclear industry, more precisely those regarding multiphysics, methodology validation or nuclear safety calculations. On the other hand, typical reactor design cycle calculations comprising many one-point core calculations have very strict constraints on computing time and will not directly benefit from advances in large-scale computing. Consequently, in this paper we review some of the deterministic 3D transport methods which in the very near future may have potential for industrial applications and which, even with low-order approximations such as a low resolution in energy, might represent an advantage over present industrial methodology, for which one of the main approximations is due to power reconstruction. These methods comprise the response-matrix method and methods based on the two-dimensional (2D) method of characteristics, such as the fusion method.

  17. A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    International Nuclear Information System (INIS)

    Gu Xuejun; Jia Xun; Jiang, Steve B; Jelen, Urszula; Li Jinsheng

    2011-01-01

    Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (∼5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.

  18. Calculations of the resonant response of carbon nanotubes to binding of DNA

    International Nuclear Information System (INIS)

    Zheng Meng; Ke Changhong; Eom, Kilho

    2009-01-01

    We theoretically study the dynamical response of carbon nanotubes (CNTs) to the binding of DNA in an aqueous environment by considering two major interactions in DNA helical binding to the CNT side surface: adhesion between DNA nucleobases and CNT surfaces and electrostatic interactions between negative charges on DNA backbones. The equilibrium DNA helical wrapping angle is obtained using the minimum potential energy method. Our results show that the preferred DNA wrapping angle in the equilibrium binding to CNT is dependent on both DNA length and DNA base. The equilibrium wrapping angle for a poly(dT) chain is larger than a comparable poly(dA) chain as a result of dT in a homopolymer chain having a higher effective binding energy to CNT than dA. Our results also interestingly reveal a sharp transition in the wrapping angle-DNA length profile for both homopolymers, implying that the equilibrium helical wrapping configuration does not exist for a certain range of wrapping angles. Furthermore, the resonant response of the DNA-CNT complex is analysed based on the variational method with a Hamiltonian which takes into account the CNT bending energy as well as DNA-CNT interactions. The closed-form analytical solution for predicting the resonant frequency of the DNA-CNT complex is presented. Our results show that the hydrodynamic loading on the oscillating CNT in aqueous environments has profound impacts on the resonance behaviour of DNA-CNT complexes. Our results suggest that detection of DNA molecules using CNT resonators based on DNA-CNT interactions through frequency measurements should be conducted in media with low hydrodynamic loading on CNTs. Our theoretical framework provides a fundamental principle for label-free detection using CNT resonators based on DNA-CNT interactions.

  19. Understanding the biological activity of high rate algae ponds through the calculation of oxygen balances.

    Science.gov (United States)

    Arbib, Zouhayr; de Godos Crespo, Ignacio; Corona, Enrique Lara; Rogalla, Frank

    2017-06-01

Microalgae culture in high rate algae ponds (HRAP) is an environmentally friendly technology for wastewater treatment. However, for the implementation of these systems, a better understanding of the oxygenation potential and the influence of climate conditions is required. In this work, the rates of oxygen production, consumption, and exchange with the atmosphere were calculated under varying conditions of solar irradiance and dilution rate during six months of operation of a real-scale unit. This analysis made it possible to determine the biological response of these dynamic systems. The measured rates of oxygen consumption were considerably higher than the values calculated from the organic loading rate. The response to light intensity, in terms of oxygen production in the bioreactor, was described with one of the models proposed for dense microalgae cultures. This model is based on the availability of light inside the culture and the specific response of the microalgae to this parameter. The specific response to solar radiation intensity showed reasonable stability in spite of fluctuations due to meteorological conditions. The methodology developed is a useful tool for optimization and prediction of the performance of these systems.
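
    The oxygen balance described above can be written as a single rate equation; the sketch below uses hypothetical values, with `kla` standing for the gas-liquid mass transfer coefficient:

    ```python
    def oxygen_balance_rate(production, consumption, kla, do_sat, do):
        """Net rate of change of dissolved oxygen in the pond (e.g. mg/L/h):
        photosynthetic production - biological consumption
        + atmospheric exchange driven by the saturation deficit."""
        return production - consumption + kla * (do_sat - do)

    # Illustrative daytime values: production outpaces consumption and the
    # water is below saturation, so the pond gains oxygen
    rate = oxygen_balance_rate(production=2.0, consumption=1.5,
                               kla=0.1, do_sat=9.0, do=7.0)
    ```

    Closing this balance against measured dissolved-oxygen profiles is what lets the individual production and consumption rates be backed out, as the study does across irradiance and dilution conditions.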

  20. In-plant considerations for optimal offsite response to reactor accidents

    International Nuclear Information System (INIS)

    Burke, R.P.; Heising, C.D.; Aldrich, D.C.

    1982-11-01

    Offsite response decision-making methods based on in-plant conditions are developed for use during severe reactor-accident situations. Dose projections are used to eliminate all LWR plant systems except the reactor core and the spent-fuel storage pool from consideration for immediate offsite emergency response during accident situations. A simple plant information-management scheme is developed for use in offsite response decision-making. Detailed consequence calculations performed with the CRAC2 model are used to determine the appropriate timing of offsite-response implementation for a range of PWR accidents involving the reactor core. In-plant decision criteria for offsite-response implementation are defined. The definition of decision criteria is based on consideration of core-accident physical processes, in-plant accident monitoring information, and results of consequence calculations performed to determine the effectiveness of various public-protective measures. The benefits and negative aspects of the proposed response-implementation criteria are detailed

  1. Uncertainty analysis of neutron transport calculation

    International Nuclear Information System (INIS)

    Oka, Y.; Furuta, K.; Kondo, S.

    1987-01-01

A cross-section sensitivity-uncertainty analysis code, SUSD, was developed. The code calculates sensitivity coefficients for one- and two-dimensional transport problems based on first-order perturbation theory. The variance and standard deviation of detector responses or design parameters can be obtained using the cross-section covariance matrix. The code is able to perform sensitivity-uncertainty analysis for the secondary neutron angular distribution (SAD) and the secondary neutron energy distribution (SED). Covariances of the ⁶Li and ⁷Li neutron cross sections in JENDL-3PR1 were evaluated, including SAD and SED. Covariances of Fe and Be were also evaluated. The uncertainty of the tritium breeding ratio, fast neutron leakage flux and neutron heating was analysed for four blanket concepts for a commercial tokamak fusion reactor. The uncertainty of the tritium breeding ratio was less than 6 percent. Contributions from SAD/SED uncertainties are significant for some parameters. Formulas to estimate the errors of the numerical solution of the transport equation were derived based on perturbation theory. This method enables us to deterministically estimate the numerical errors due to iterative solution, spatial discretization and Legendre polynomial expansion of the transfer cross sections. The calculational errors of the tritium breeding ratio and the fast neutron leakage flux of the fusion blankets were analysed. (author)
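
    The variance propagation step described here is the standard "sandwich rule": with a relative sensitivity vector s and a cross-section covariance matrix C, the relative variance of the response is s·C·s. A minimal sketch with illustrative numbers:

    ```python
    import numpy as np

    def response_variance(sensitivities, covariance):
        """Sandwich rule: relative variance of a detector response or
        design parameter from first-order sensitivity coefficients and a
        cross-section covariance matrix."""
        s = np.asarray(sensitivities, dtype=float)
        c = np.asarray(covariance, dtype=float)
        return float(s @ c @ s)

    # Two cross sections with 20% and 30% relative uncertainty and a small
    # positive correlation (all numbers illustrative)
    var = response_variance([0.5, -0.2], [[0.04, 0.01], [0.01, 0.09]])
    std = var ** 0.5   # relative standard deviation of the response
    ```

    Note how the opposite signs of the two sensitivities make the correlation term subtract from the total, the mechanism behind partial cancellation of uncertainties in quantities like the tritium breeding ratio.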

  2. Three-Phase Short-Circuit Current Calculation of Power Systems with High Penetration of VSC-Based Renewable Energy

    Directory of Open Access Journals (Sweden)

    Niancheng Zhou

    2018-03-01

Full Text Available The short-circuit current level of a power grid increases with high penetration of VSC-based renewable energy, and the strong coupling between the transient fault process and the control strategy changes the fault features. The full current expression of VSC-based renewable energy was obtained according to the transient characteristics of the short-circuit current. Further, by analyzing the closed-loop transfer function model of the controller and the current-source characteristics presented in the steady state during a fault, equivalent circuits of VSC-based renewable energy for the transient and steady fault states were proposed. The correctness of the theory was then verified by experimental tests. In addition, for a power grid with VSC-based renewable energy, the superposition theorem was used to calculate the AC and DC components of the short-circuit current, and the peak value of the short-circuit current was then evaluated effectively. The calculated results can be used for grid planning and design, short-circuit current management and the adjustment of relay protection. By comparing calculation and simulation results for the 6-node 500 kV Huainan power grid and the 35-node 220 kV Huaisu power grid, the effectiveness of the proposed method was verified.

  3. Python-based framework for coupled MC-TH reactor calculations

    International Nuclear Information System (INIS)

    Travleev, A.A.; Molitor, R.; Sanchez, V.

    2013-01-01

We have developed a set of Python packages that provide a modern programming interface to codes used for the analysis of nuclear reactors. The Python classes can be grouped by functionality into three categories: low-level interfaces, general model classes and high-level interfaces. A low-level interface describes the interface between Python and a particular code. General model classes can be used to describe the calculation geometry and the meshes that represent system variables. High-level interface classes are used to convert geometry described with general model classes into instances of low-level interface classes and to put the results of code calculations (read by low-level interface classes) back into the general model. The implementation of Python interfaces to the Monte Carlo neutronics code MCNP and the thermal-hydraulics code SCF allows efficient description of calculation models and provides a framework for coupled calculations. In this paper we illustrate how these interfaces can be used to describe a pin model, and report results of coupled MCNP-SCF calculations performed for a PWR fuel assembly, organized by means of the interfaces
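
    A coupled calculation through such interfaces typically reduces to a fixed-point (Picard) iteration between the two codes. The stub classes below are purely illustrative toy models, not the actual MCNP/SCF interface; they show only the shape of the coupling loop:

    ```python
    class NeutronicsStub:
        """Toy power model with Doppler-like feedback: power falls as the
        fuel heats up. Stands in for a neutronics code interface."""
        def power(self, fuel_temp_k):
            return 100.0 / (1.0 + 0.001 * (fuel_temp_k - 600.0))

    class ThermoHydraulicsStub:
        """Toy thermal model: fuel temperature rises linearly with power.
        Stands in for a thermal-hydraulics code interface."""
        def fuel_temperature(self, power_w):
            return 300.0 + 4.0 * power_w

    def couple(neutronics, th, t0=600.0, tol=1e-6, max_iter=100):
        """Picard iteration: alternate the two codes until the exchanged
        field (here the fuel temperature) stops changing."""
        temp = t0
        for _ in range(max_iter):
            new_temp = th.fuel_temperature(neutronics.power(temp))
            if abs(new_temp - temp) < tol:
                break
            temp = new_temp
        return temp

    temp = couple(NeutronicsStub(), ThermoHydraulicsStub())
    ```

    In the real framework the exchanged quantities are whole meshes of power and temperature fields held in the general model classes, but the control flow of the coupling is the same.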

  4. A Cultural Study of a Science Classroom and Graphing Calculator-based Technology

    OpenAIRE

    Casey, Dennis Alan

    2001-01-01

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology...

  5. Initialization bias suppression in iterative Monte Carlo calculations: benchmarks on criticality calculation

    International Nuclear Information System (INIS)

    Richet, Y.; Jacquet, O.; Bay, X.

    2005-01-01

The accuracy of an iterative Monte Carlo calculation requires the convergence of the simulation output process. The present paper deals with a post-processing algorithm, applied to criticality calculations, that suppresses the transient due to initialization. It should be noted that this initial transient suppression aims only at obtaining a stationary output series; the convergence of the calculation must then be guaranteed independently. The transient suppression algorithm consists of a repeated truncation of the first observations of the output process. The truncation of the first observations is performed as long as a steadiness test based on Brownian bridge theory is negative. This transient suppression method was previously tuned on a simplified model of criticality calculations, whereas this paper focuses on its efficiency in real criticality calculations. The efficiency test is based on four benchmarks with strong source convergence problems: 1) a checkerboard storage of fuel assemblies, 2) a pin cell array with irradiated fuel, 3) three one-dimensional thick slabs, and 4) an array of interacting fuel spheres. It appears that the transient suppression method needs to be more widely validated on real criticality calculations before any blind use as a post-processing step in criticality codes.
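The truncation loop can be sketched as below. Note that the actual algorithm uses a steadiness test built on Brownian bridge theory; this sketch substitutes a simple two-half mean-comparison test purely for illustration:

```python
import numpy as np

def looks_stationary(x, z_threshold=2.0):
    """Simplified stand-in for the Brownian-bridge steadiness test:
    compares the means of the two halves of the series. The real
    algorithm uses a Brownian-bridge-based statistic instead."""
    half = len(x) // 2
    a, b = x[:half], x[half:]
    pooled = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return abs(a.mean() - b.mean()) < z_threshold * pooled

def suppress_transient(series, chunk=50, min_keep=100):
    """Repeatedly truncate the first `chunk` observations until the
    steadiness test passes (or too few observations remain)."""
    x = np.asarray(series, float)
    while len(x) > min_keep and not looks_stationary(x):
        x = x[chunk:]
    return x

rng = np.random.default_rng(0)
# k-effective-like output: exponential initial transient plus noise
n = 2000
keff = 1.0 + 0.05 * np.exp(-np.arange(n) / 100) + 0.002 * rng.standard_normal(n)
kept = suppress_transient(keff)
print(len(keff) - len(kept), "observations truncated")
```

As the abstract stresses, obtaining a stationary series this way does not by itself guarantee source convergence; that must still be checked independently.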

  6. Immunotoxicity of perfluorinated alkylates: calculation of benchmark doses based on serum concentrations in children

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Joergensen, Esben

    2013-01-01

BACKGROUND: Immune suppression may be a critical effect associated with exposure to perfluorinated compounds (PFCs), as indicated by recent data on vaccine antibody responses in children. Therefore, this information may be crucial when deciding on exposure limits. METHODS: Results obtained from...... follow-up of a Faroese birth cohort were used. Serum-PFC concentrations were measured at age 5 years, and serum antibody concentrations against tetanus and diphtheria toxoids were obtained at age 7 years. Benchmark dose results were calculated in terms of serum concentrations for 431 children...
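For a simple linear dose-response model, a benchmark dose is the exposure that produces a predefined benchmark response (BMR). The following sketch illustrates only that mechanic; the data, the BMR and the model choice are made up for the example and are not taken from the Faroese cohort:

```python
import numpy as np

# Illustrative benchmark-dose (BMD) calculation for a linear
# dose-response model fitted by least squares (hypothetical data).
serum_pfc = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])      # ng/mL (made up)
log2_antibody = np.array([4.9, 4.8, 4.55, 4.1, 3.2, 1.6])  # made up

slope, intercept = np.polyfit(serum_pfc, log2_antibody, 1)
bmr = 0.5            # benchmark response: a 0.5 drop on the log2 antibody scale
bmd = bmr / abs(slope)   # serum concentration producing the BMR under the model
print(f"slope={slope:.3f}, BMD={bmd:.2f} ng/mL")
```

In regulatory practice it is usually the lower confidence limit of the benchmark dose (the BMDL), not the central estimate, that is carried forward when deciding on exposure limits.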

  7. [Calculation and analysis of arc temperature field of pulsed TIG welding based on Fowler-Milne method].

    Science.gov (United States)

    Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang

    2012-09-01

Pulsed TIG welding is widely used in industry due to its superior properties, and measurement of the arc temperature is important for analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, and from it the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was obtained. Arc images at the 794.8 nm line were captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of the pulsed TIG welding arc.
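The Fowler-Milne method exploits the fact that the line emission coefficient peaks at a "normal temperature" T_n, so each Abel-inverted emission value maps to a temperature on either the hot branch (inside the off-axis emission maximum) or the cold branch (outside it). A schematic sketch, with a made-up stand-in for the tabulated emission-coefficient curve:

```python
import numpy as np

# Schematic Fowler-Milne temperature assignment. The eps(T) curve
# below is an invented stand-in for the curve computed from the Ar
# plasma composition; only its single-peak shape matters here.
T = np.linspace(5000.0, 30000.0, 501)            # K
T_n = 15000.0                                    # assumed normal temperature
eps = np.exp(-((T - T_n) / 4000.0) ** 2)         # normalized emission coeff.

def temperature_from_emission(e_meas, hot_branch):
    """Invert eps(T) on the branch above (hot) or below (cold) T_n."""
    mask = T >= T_n if hot_branch else T <= T_n
    Tb, eb = T[mask], eps[mask]
    order = np.argsort(eb)                       # np.interp needs ascending x
    return float(np.interp(e_meas, eb[order], Tb[order]))

# With the arc axis hotter than T_n, the same Abel-inverted emission
# value occurs twice along the radius, once on each branch:
print(temperature_from_emission(0.8, hot_branch=True))   # near-axis point
print(temperature_from_emission(0.8, hot_branch=False))  # outer point
```

In the real procedure the measured radial profile is normalized to its off-axis maximum (where T = T_n), which removes the need for an absolute intensity calibration.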

  8. The calculation of surface free energy based on embedded atom method for solid nickel

    International Nuclear Information System (INIS)

    Luo Wenhua; Hu Wangyu; Su Kalin; Liu Fusheng

    2013-01-01

Highlights: ► A new solution for accurate prediction of surface free energy based on the embedded atom method was proposed. ► The temperature-dependent anisotropic surface energy of solid nickel was obtained. ► In an isotropic environment, the approach does not change most predictions of bulk material properties. - Abstract: Accurate prediction of the surface free energy of crystalline metals is a challenging task. Theoretical calculations based on embedded atom method potentials often underestimate the surface free energy of metals. With an analytical charge density correction to the argument of the embedding energy of the embedded atom method, an approach to improve the prediction of surface free energy is presented. This approach is applied to calculate the temperature-dependent anisotropic surface energy of bulk nickel and the surface energies of nickel nanoparticles, and the obtained results are in good agreement with available experimental data.

  9. High surface adsorption properties of carbon-based nanomaterials are responsible for mortality, swimming inhibition, and biochemical responses in Artemia salina larvae

    International Nuclear Information System (INIS)

    Mesarič, Tina; Gambardella, Chiara; Milivojević, Tamara; Faimali, Marco; Drobne, Damjana; Falugi, Carla; Makovec, Darko; Jemec, Anita; Sepčić, Kristina

    2015-01-01

    Highlights: • Carbon-based nanomaterials adsorb onto the body surface of A. salina larvae. • Surface adsorption results in concentration–dependent inhibition of larval swimming. • Carbon-based nanomaterials induce no significant mortality of A. salina larvae. - Abstract: We investigated the effects of three different carbon-based nanomaterials on brine shrimp (Artemia salina) larvae. The larvae were exposed to different concentrations of carbon black, graphene oxide, and multiwall carbon nanotubes for 48 h, and observed using phase contrast and scanning electron microscopy. Acute (mortality) and behavioural (swimming speed alteration) responses and cholinesterase, glutathione-S-transferase and catalase enzyme activities were evaluated. These nanomaterials were ingested and concentrated in the gut, and attached onto the body surface of the A. salina larvae. This attachment was responsible for concentration–dependent inhibition of larval swimming, and partly for alterations in the enzyme activities, that differed according to the type of tested nanomaterials. No lethal effects were observed up to 0.5 mg/mL carbon black and 0.1 mg/mL multiwall carbon nanotubes, while graphene oxide showed a threshold whereby it had no effects at 0.6 mg/mL, and more than 90% mortality at 0.7 mg/mL. Risk quotients calculated on the basis of predicted environmental concentrations indicate that carbon black and multiwall carbon nanotubes currently do not pose a serious risk to the marine environment, however if uncontrolled release of nanomaterials continues, this scenario can rapidly change

  10. High surface adsorption properties of carbon-based nanomaterials are responsible for mortality, swimming inhibition, and biochemical responses in Artemia salina larvae

    Energy Technology Data Exchange (ETDEWEB)

    Mesarič, Tina, E-mail: tina.mesaric84@gmail.com [Department of Biology, Biotechnical Faculty, University of Ljubljana (Slovenia); Gambardella, Chiara, E-mail: chiara.gambardella@ge.ismar.cnr.it [Institute of Marine Sciences, National Research Council, Genova (Italy); Milivojević, Tamara, E-mail: milivojevictamara@gmail.com [Department of Biology, Biotechnical Faculty, University of Ljubljana (Slovenia); Faimali, Marco, E-mail: marco.faimali@ismar.cnr.it [Institute of Marine Sciences, National Research Council, Genova (Italy); Drobne, Damjana, E-mail: damjana.drobne@bf.uni-lj.si [Department of Biology, Biotechnical Faculty, University of Ljubljana (Slovenia); Centre of Excellence in Nanoscience and Nanotechnology (CO Nanocentre), Ljubljana (Slovenia); Centre of Excellence in Advanced Materials and Technologies for the Future (CO NAMASTE), Ljubljana (Slovenia); Falugi, Carla, E-mail: carlafalugi@hotmail.it [Department of Earth, Environment and Life Sciences, University of Genova, Genova (Italy); Makovec, Darko, E-mail: darko.makovec@ijs.si [Jožef Stefan Institute, Jamova 39, 1000 Ljubljana (Slovenia); Jemec, Anita, E-mail: anita.jemec@bf.uni-lj.si [Department of Biology, Biotechnical Faculty, University of Ljubljana (Slovenia); Sepčić, Kristina, E-mail: kristina.sepcic@bf.uni-lj.si [Department of Biology, Biotechnical Faculty, University of Ljubljana (Slovenia)

    2015-06-15

    Highlights: • Carbon-based nanomaterials adsorb onto the body surface of A. salina larvae. • Surface adsorption results in concentration–dependent inhibition of larval swimming. • Carbon-based nanomaterials induce no significant mortality of A. salina larvae. - Abstract: We investigated the effects of three different carbon-based nanomaterials on brine shrimp (Artemia salina) larvae. The larvae were exposed to different concentrations of carbon black, graphene oxide, and multiwall carbon nanotubes for 48 h, and observed using phase contrast and scanning electron microscopy. Acute (mortality) and behavioural (swimming speed alteration) responses and cholinesterase, glutathione-S-transferase and catalase enzyme activities were evaluated. These nanomaterials were ingested and concentrated in the gut, and attached onto the body surface of the A. salina larvae. This attachment was responsible for concentration–dependent inhibition of larval swimming, and partly for alterations in the enzyme activities, that differed according to the type of tested nanomaterials. No lethal effects were observed up to 0.5 mg/mL carbon black and 0.1 mg/mL multiwall carbon nanotubes, while graphene oxide showed a threshold whereby it had no effects at 0.6 mg/mL, and more than 90% mortality at 0.7 mg/mL. Risk quotients calculated on the basis of predicted environmental concentrations indicate that carbon black and multiwall carbon nanotubes currently do not pose a serious risk to the marine environment, however if uncontrolled release of nanomaterials continues, this scenario can rapidly change.

  11. A flow-based methodology for the calculation of TSO to TSO compensations for cross-border flows

    International Nuclear Information System (INIS)

    Glavitsch, H.; Andersson, G.; Lekane, Th.; Marien, A.; Mees, E.; Naef, U.

    2004-01-01

    In the context of the development of the European internal electricity market, several methods for the tarification of cross-border flows have been proposed. This paper presents a flow-based method for the calculation of TSO to TSO compensations for cross-border flows. The basic principle of this approach is the allocation of the costs of cross-border flows to the TSOs who are responsible for these flows. This method is cost reflective, non-transaction based and compatible with domestic tariffs. It can be applied when limited data are available. Each internal transmission network is then modelled as an aggregated node, called 'supernode', and the European network is synthesized by a graph of supernodes and arcs, each arc representing all cross-border lines between two adjacent countries. When detailed data are available, the proposed methodology is also applicable to all the nodes and lines of the transmission network. Costs associated with flows transiting through supernodes or network elements are forwarded through the network in a way reflecting how the flows make use of the network. The costs can be charged either towards loads and exports or towards generations and imports. Combination of the two charging directions can also be considered. (author)

  12. Sharing responsibility for carbon dioxide emissions: A perspective on border tax adjustments

    International Nuclear Information System (INIS)

    Chang, Ning

    2013-01-01

Concerns about the equity and efficiency of current allocation principles related to responsibility for carbon dioxide (CO2) emissions have been presented in the recent literature. The objective of this paper is to design a calculation framework for shared responsibility from the perspective of border tax adjustments. The advantage of this framework is that it makes the shared responsibility principle and border carbon taxation complementary to each other; these are important policies for reducing global CO2 emissions, but they are individually supported by developing and developed countries. As an illustration, the proposed framework is applied to data from China in 2007. The empirical results show that for the Chinese economy as a whole, changing from the production-based criterion to the shared responsibility approach would lead to an 11% decrease in its responsibility for CO2 emissions. Moreover, the differences observed between the production-based criterion and the shared responsibility approach are considerable in several sectors; for example, changing from the production-based criterion to the shared principle would lead to a 60% decrease in the responsibility of the textile sector. - Highlights: • This paper designs a shared responsibility calculation framework for CO2 emissions. • This paper suggests that the carbon tariff rate serve as a basis for calculating shared responsibility. • The proposed framework is applied to data from China in 2007. • Shared responsibility principle will significantly decrease China's responsibility for CO2 emissions.

  13. Uncertainty modelling and analysis of volume calculations based on a regular grid digital elevation model (DEM)

    Science.gov (United States)

    Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi

    2018-05-01

The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), caused by the algorithm adopted for the VC model; 2) discrete error (DE), usually caused by DEM resolution and terrain complexity; and 3) propagation error (PE), caused by errors in the input variables. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated from a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich the theories of uncertainty modelling and analysis in geographic information science.
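The trapezoidal double rule mentioned above amounts to a corner-weighted sum over the regular grid (weights 1 at corners, 2 on edges, 4 in the interior). A minimal sketch; since the rule is exact for linear surfaces, the unit-square integral of x + y (which equals 1) serves as a check:

```python
import numpy as np

# Trapezoidal double rule (TDR) volume from a regular-grid DEM.
def volume_tdr(z, dx, dy):
    w = np.full_like(z, 4.0)                       # interior nodes
    w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = 2.0  # edge nodes
    w[0, 0] = w[0, -1] = w[-1, 0] = w[-1, -1] = 1.0  # corner nodes
    return dx * dy / 4.0 * np.sum(w * z)

# Check on a linear surface, where the TDR has no truncation error:
n = 11
x = np.linspace(0.0, 1.0, n)
y = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, y, indexing="ij")
z = X + Y                      # integral of (x + y) over the unit square is 1
print(volume_tdr(z, dx=x[1] - x[0], dy=y[1] - y[0]))
```

Simpson's double rule replaces the 1-2-...-2-1 trapezoidal weights along each axis with 1-4-2-...-4-1 weights and requires an odd number of nodes per axis; for curved terrain its truncation error is smaller, which is what the TE-based confidence interval in the paper quantifies.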

  14. First-principles calculations of bulk and interfacial thermodynamic properties for fcc-based Al-Sc alloys

    International Nuclear Information System (INIS)

    Asta, M.; Foiles, S.M.; Quong, A.A.

    1998-01-01

The configurational thermodynamic properties of fcc-based Al-Sc alloys and coherent Al/Al3Sc interphase-boundary interfaces have been calculated from first principles. The computational approach used in this study combines the results of pseudopotential total-energy calculations with a cluster-expansion description of the alloy energetics. Bulk and interface configurational-thermodynamic properties are computed using a low-temperature-expansion technique. Calculated values of the {100} and {111} Al/Al3Sc interfacial energies at zero temperature are, respectively, 192 and 226 mJ/m2. The temperature dependence of the calculated interfacial free energies is found to be very weak for {100} and more appreciable for {111} orientations; the primary effect of configurational disordering at finite temperature is to reduce the degree of crystallographic anisotropy associated with calculated interfacial free energies. The first-principles-computed solid-solubility limits for Sc in bulk fcc Al are found to be underestimated significantly in comparison with experimental measurements. It is argued that this discrepancy can be largely attributed to nonconfigurational contributions to the entropy which have been neglected in the present thermodynamic calculations. copyright 1998 The American Physical Society

  15. Real-time simulation of response to load variation for a ship reactor based on point-reactor double regions and lumped parameter model

    Energy Technology Data Exchange (ETDEWEB)

    Wang Qiao; Zhang De [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Wenzhen, E-mail: Cwz2@21cn.com [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China); Chen Zhiyun [Department of Nuclear Energy Science and Engineering, Naval University of Engineering, Wuhan 430033 (China)

    2011-05-15

Research highlights: ► We calculate the variation of the main parameters of the reactor core with Simulink. ► The Simulink calculation software (SCS) deals well with the stiff problem. ► High calculation precision is reached in less time, and the results can be easily displayed. ► Quick calculation of ship reactor transients can be achieved by this method. - Abstract: Based on the point-reactor double-region and lumped parameter model, the Simulink calculation software (SCS) is adopted to calculate the variation of the main physical and thermal-hydraulic parameters of the reactor core while the second-loop load of the nuclear power plant is quickly increased or decreased. The calculation results are compared with those of a three-dimensional simulation program. It is indicated that the SCS deals well with the stiffness of the point-reactor kinetics equations and the coupling of neutronics and thermal-hydraulics. High calculation precision is reached in less time, and quick calculation of the response of ship reactor parameters to load disturbances can be achieved. The calculation results can also be displayed quickly and clearly by the SCS, which is important for guaranteeing safe reactor operation.
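The stiffness mentioned above comes from the small neutron generation time compared with the delayed-neutron precursor time constants. A minimal one-group point-kinetics sketch (not the paper's two-region model; all parameter values are illustrative) shows how an implicit step handles it:

```python
import numpy as np

# One-group point kinetics:
#   dn/dt = ((rho - beta)/Lambda) n + lam*C
#   dC/dt = (beta/Lambda) n - lam*C
# Backward (implicit) Euler handles the stiffness; parameters are
# illustrative, not the paper's ship-reactor data.
beta, Lambda, lam = 0.0065, 1e-4, 0.08

def simulate(rho, t_end, dt=1e-3):
    A = np.array([[(rho - beta) / Lambda, lam],
                  [beta / Lambda,        -lam]])
    identity = np.eye(2)
    x = np.array([1.0, beta / (Lambda * lam)])   # equilibrium at rho = 0
    for _ in range(round(t_end / dt)):
        x = np.linalg.solve(identity - dt * A, x)  # backward Euler step
    return x[0]                                    # relative power n(t_end)

print(simulate(rho=0.0, t_end=1.0))    # stays at equilibrium
print(simulate(rho=0.001, t_end=1.0))  # power rises after a +100 pcm step
```

An explicit solver would need a time step on the order of Lambda to stay stable, which is why stiff-capable integrators (as in Simulink) matter for transient load-following calculations.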

  16. High-performance whole core Pin-by-Pin calculation based on EFEN-SP_3 method

    International Nuclear Information System (INIS)

    Yang Wen; Zheng Youqi; Wu Hongchun; Cao Liangzhi; Li Yunzhao

    2014-01-01

High-performance PWR whole-core pin-by-pin calculation based on the EFEN-SP_3 method is achieved in the EFEN code by employing spatial parallelization based on MPI. To take advantage of the advanced computing and storage power, the entire problem spatial domain is decomposed into sub-domains and then assigned to parallel CPUs so as to balance the computing load and minimize communication cost. Meanwhile, a Red-Black Gauss-Seidel nodal sweeping scheme is employed to avoid the within-group iteration deterioration due to spatial parallelization. Numerical results based on whole-core pin-by-pin problems designed according to commercial PWRs demonstrate the following conclusions: the EFEN code provides results with acceptable accuracy; the communication period impacts neither the accuracy nor the parallel efficiency; domain decomposition methods with a smaller surface-to-volume ratio lead to greater parallel efficiency; and a PWR whole-core pin-by-pin calculation with a 289 × 289 × 218 spatial mesh and 4 energy groups can be completed in about 900 s using 125 CPUs, with parallel efficiency maintained at about 90%. (authors)
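The Red-Black Gauss-Seidel scheme mentioned above updates two interleaved half-grids alternately; since no cell in one colour neighbours a cell of the same colour, each half-sweep is fully parallel and the iteration behaves the same however the domain is split across MPI ranks. A serial NumPy sketch on a 2-D Laplace model problem (not the SP_3 equations):

```python
import numpy as np

# Red-black Gauss-Seidel sweep for the 5-point Laplace stencil.
# Cells with (i + j) even and odd form the two independent colours.
def red_black_sweep(u):
    for parity in (0, 1):                       # red cells, then black cells
        for i in range(1, u.shape[0] - 1):
            j0 = 1 + (i + parity) % 2           # first cell of this colour in row i
            u[i, j0:-1:2] = 0.25 * (u[i - 1, j0:-1:2] + u[i + 1, j0:-1:2]
                                    + u[i, j0 - 1:-2:2] + u[i, j0 + 1::2])
    return u

u = np.zeros((17, 17))
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 1.0   # Dirichlet boundary u = 1
for _ in range(300):
    red_black_sweep(u)
print(float(np.abs(u - 1.0).max()))              # interior relaxes toward 1
```

In the parallel code, each colour's half-sweep needs only one halo exchange of the opposite colour along sub-domain faces, which is why the ordering avoids the within-group iteration deterioration noted in the abstract.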

  17. Dose calculation for electrons

    International Nuclear Information System (INIS)

    Hirayama, Hideo

    1995-01-01

The joint ICRP/ICRU working group is advancing the revision of ICRP Publication 51 by investigating the data related to radiation protection. In order to introduce the 1990 recommendations, calculations have been requested for neutrons, photons and electrons. As for electrons, EURADOS WG4 (Numerical Dosimetry) arranged the data to be calculated at the meeting held at PTB Braunschweig in June 1992, and the questions and requests were presented by Dr. J.L. Chartier, the responsible person, to researchers likely to undertake electron-transport Monte Carlo calculations. The author also carried out the requested calculation, as it was a good opportunity for mutual comparison among the various computation codes for electron transport. The quantity the WG requested was the absorbed dose at depth d mm when a parallel electron beam enters at angle α into flat-plate phantoms of PMMA, water and ICRU 4-element tissue placed in vacuum. The calculation was carried out with the versatile electron-photon shower Monte Carlo code EGS4. As the results, depth-dose curves and the dependence of absorbed dose on electron energy, incident angle and material are reported, and the subjects to be investigated are pointed out. (K.I.)

  18. Reference voltage calculation method based on zero-sequence component optimisation for a regional compensation DVR

    Science.gov (United States)

    Jian, Le; Cao, Wang; Jintao, Yang; Yinge, Wang

    2018-04-01

    This paper describes the design of a dynamic voltage restorer (DVR) that can simultaneously protect several sensitive loads from voltage sags in a region of an MV distribution network. A novel reference voltage calculation method based on zero-sequence voltage optimisation is proposed for this DVR to optimise cost-effectiveness in compensation of voltage sags with different characteristics in an ungrounded neutral system. Based on a detailed analysis of the characteristics of voltage sags caused by different types of faults and the effect of the wiring mode of the transformer on these characteristics, the optimisation target of the reference voltage calculation is presented with several constraints. The reference voltages under all types of voltage sags are calculated by optimising the zero-sequence component, which can reduce the degree of swell in the phase-to-ground voltage after compensation to the maximum extent and can improve the symmetry degree of the output voltages of the DVR, thereby effectively increasing the compensation ability. The validity and effectiveness of the proposed method are verified by simulation and experimental results.
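The zero-sequence component that the proposed reference-voltage calculation optimises is obtained from the Fortescue (symmetrical-component) transform of the three phase voltages. A minimal sketch with illustrative per-unit phasors for a sag on one phase:

```python
import numpy as np

# Fortescue transform: [V0, V1, V2] = F @ [Va, Vb, Vc],
# where a = exp(j*2*pi/3) is the 120-degree rotation operator.
a = np.exp(2j * np.pi / 3)
F = np.array([[1, 1,    1],
              [1, a,    a**2],
              [1, a**2, a]]) / 3.0

def sequence_components(v_abc):
    """Returns (zero, positive, negative) sequence phasors."""
    return F @ v_abc

# Voltage sag on phase A of an otherwise balanced positive-sequence
# set (illustrative values, e.g. a single-phase fault upstream):
v = np.array([0.6, a**2, a])          # per-unit phasors V_a, V_b, V_c
v0, v1, v2 = sequence_components(v)
print(abs(v0), abs(v1), abs(v2))
```

In an ungrounded neutral system the zero-sequence voltage does not drive load current, which is the freedom the paper's method exploits: choosing it to limit phase-to-ground voltage swell after compensation.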

  19. Uncertain hybrid model for the response calculation of an alternator

    International Nuclear Information System (INIS)

    Kuczkowiak, Antoine

    2014-01-01

The complex structural dynamic behaviour of alternators must be well understood in order to ensure their reliable and safe operation. The numerical model is, however, difficult to construct, mainly due to the presence of a high level of uncertainty. The objective of this work is to provide decision-support tools for assessing the vibratory levels in operation before restarting the alternator. Based on info-gap theory, a first decision-support tool is proposed: the objective here is to assess the robustness of the dynamical response to the uncertain modal model. Based on real data, the calibration of an info-gap model of uncertainty is also proposed in order to enhance its fidelity to reality. Then, the extended constitutive relation error is used to expand identified mode shapes, which are used to assess the vibratory levels. A robust expansion process is proposed in order to obtain expanded mode shapes that are robust to parametric uncertainties. In the presence of lack of knowledge, the trade-off between fidelity to data and robustness to uncertainties, which expresses that robustness improves as fidelity deteriorates, is illustrated on an industrial structure using both reduced-order model and surrogate model techniques. (author)

  20. Comparison of CT number calibration techniques for CBCT-based dose calculation

    International Nuclear Information System (INIS)

    Dunlop, Alex; McQuaid, Dualta; Nill, Simeon; Hansen, Vibeke N.; Oelfke, Uwe; Murray, Julia; Bhide, Shreerang; Harrington, Kevin; Poludniowski, Gavin; Nutting, Christopher; Newbold, Kate

    2015-01-01

The aim of this work was to compare and validate various computed tomography (CT) number calibration techniques with respect to cone beam CT (CBCT) dose calculation accuracy. CBCT dose calculation accuracy was assessed for pelvic, lung, and head and neck (H and N) treatment sites for two approaches: (1) physics-based scatter correction methods (CBCTr); (2) density override approaches including assigning water density to the entire CBCT (W), assignment of either water or bone density (WB), and assignment of either water or lung density (WL). Methods for CBCT density assignment within a commercially available treatment planning system (RSauto), where CBCT voxels are binned into six density levels, were assessed and validated. Dose-difference maps and dose-volume statistics were used to compare the CBCT dose distributions with the ground truth of a planning CT acquired the same day as the CBCT. For pelvic cases, all CTN calibration methods resulted in average dose-volume deviations below 1.5%. RSauto provided larger than average errors for pelvic treatments for patients with large amounts of adipose tissue. For H and N cases, all CTN calibration methods resulted in average dose-volume differences below 1.0%, with CBCTr (0.5%) and RSauto (0.6%) performing best. For lung cases, the WL and RSauto methods generated dose distributions most similar to the ground truth. The RSauto density override approach is an attractive option for CTN adjustments for a variety of anatomical sites. RSauto methods were validated, resulting in dose calculations that were consistent with those calculated on diagnostic-quality CT images, for CBCT images acquired of the lung, for patients receiving pelvic RT in cases without excess adipose tissue, and for H and N cases. (orig.)

  1. Short-Term Wind Power Forecasting Based on Clustering Pre-Calculated CFD Method

    Directory of Open Access Journals (Sweden)

    Yimei Wang

    2018-04-01

To meet the increasing wind power forecasting (WPF) demands of newly built wind farms without historical data, physical WPF methods are widely used. WPF based on computational fluid dynamics (CFD) pre-calculated flow fields (CPFF) is a promising physical approach, which balances well the competing demands of computational efficiency and accuracy. To enhance its adaptability to wind farms in complex terrain, a WPF method combining wind turbine clustering with CPFF is first proposed, in which the wind turbines in the wind farm are clustered and a forecast is produced for each cluster. K-means, hierarchical agglomerative and spectral clustering methods are used to establish the wind turbine clustering models. The Silhouette Coefficient, the Calinski-Harabasz index and the within-between index are proposed as criteria to evaluate the effectiveness of the established clustering models. Based on the different clustering methods and schemes, various clustering databases are built for clustering pre-calculated CFD (CPCC)-based short-term WPF. For the wind farm case studied, the clustering evaluation criteria show that hierarchical agglomerative clustering gives reasonable results, spectral clustering is better, and K-means performs best. The WPF results produced by the different clustering databases in turn prove the effectiveness of the three evaluation criteria. The newly developed CPCC model has a much higher WPF accuracy than the CPFF model without clustering techniques, on both temporal and spatial scales. The research supports both the development and the improvement of short-term physical WPF systems.
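The clustering step and the Silhouette Coefficient criterion can be sketched with a NumPy-only k-means on synthetic turbine coordinates; the layout, cluster count and iteration settings below are made up for the example:

```python
import numpy as np

# Toy wind-farm layout: two groups of 20 turbines (synthetic data).
rng = np.random.default_rng(1)
turbines = np.vstack([rng.normal((0, 0), 0.3, (20, 2)),
                      rng.normal((5, 5), 0.3, (20, 2))])

def kmeans(x, k, iters=50):
    """Plain k-means with a simple deterministic initialisation."""
    c = x[:k].copy()
    for _ in range(iters):
        d = np.linalg.norm(x[:, None] - c[None], axis=2)
        labels = d.argmin(axis=1)
        c = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                      else c[j] for j in range(k)])
    return labels

def silhouette(x, labels):
    """Mean Silhouette Coefficient: (b - a) / max(a, b) per point."""
    d = np.linalg.norm(x[:, None] - x[None], axis=2)
    s = []
    for i in range(len(x)):
        same = labels == labels[i]
        a = d[i, same & (np.arange(len(x)) != i)].mean()   # cohesion
        b = min(d[i, labels == j].mean()
                for j in set(labels.tolist()) if j != labels[i])  # separation
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

labels = kmeans(turbines, k=2)
score = silhouette(turbines, labels)
print(score)   # close to 1 for a well-separated layout
```

The Calinski-Harabasz and within-between indices used in the paper play the same role: scoring how compact and well separated the turbine clusters are before a CPCC forecast is built per cluster.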

  2. Metric for Calculation of System Complexity based on its Connections

    Directory of Open Access Journals (Sweden)

    João Ricardo Braga de Paiva

    2017-02-01

This paper proposes a methodology based on a system's connections to calculate its complexity. Two case studies are presented: the dining Chinese philosophers' problem and a distribution center. Both are modeled using the theory of Discrete Event Systems, and simulations in different contexts were performed in order to measure their complexities. The obtained results present (i) the static complexity as a limiting factor for the dynamic complexity, (ii) the lowest cost in terms of complexity for each unit of measure of the system performance, and (iii) the sensitivity of the output to the input parameters. The associated complexity and performance measures aggregate knowledge about the system.

  3. Patient-specific IMRT verification using independent fluence-based dose calculation software: experimental benchmarking and initial clinical experience

    International Nuclear Information System (INIS)

    Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Joergen; Nyholm, Tufve; Ahnesjoe, Anders; Karlsson, Mikael

    2007-01-01

Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted 'MUV', for monitor unit verification) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which experimental 1D and 2D verification (0.3 cm3 ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, the tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation uses a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS depended slightly on the distance from the isocentre position. For individual intensity-modulated beams (367 in total), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable. The time needed for an independent calculation compares very favourably with the net time for an experimental approach.

  4. A theoretical study of blue phosphorene nanoribbons based on first-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Xie, Jiafeng; Si, M. S., E-mail: sims@lzu.edu.cn; Yang, D. Z.; Zhang, Z. Y.; Xue, D. S. [Key Laboratory for Magnetism and Magnetic Materials of the Ministry of Education, Lanzhou University, Lanzhou 730000 (China)

    2014-08-21

Based on first-principles calculations, we present a quantum confinement mechanism for the band gaps of blue phosphorene nanoribbons (BPNRs) as a function of their widths. The BPNRs considered have either armchair or zigzag shaped edges on both sides with hydrogen saturation. Both types of nanoribbons are shown to be indirect semiconductors. An enhanced energy gap of around 1 eV can be realized when the ribbon's width decreases to ∼10 Å. The underlying physics is ascribed to the quantum confinement effect. More importantly, the parameters describing the quantum confinement are obtained by fitting the calculated band gaps with respect to the ribbon widths. The results show that the quantum confinement in armchair nanoribbons is stronger than that in zigzag ones. This study provides an efficient approach to tune the band gap in BPNRs.

  5. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy.

    Science.gov (United States)

    Martinez-Rovira, I; Sempau, J; Prezado, Y

    2012-05-01

    Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-μm-wide microbeams spaced by 200-400 μm) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Good agreement between MC simulations and experimental results was achieved, even at the interfaces between two

  6. Monte Carlo-based treatment planning system calculation engine for microbeam radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Rovira, I.; Sempau, J.; Prezado, Y. [Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain) and ID17 Biomedical Beamline, European Synchrotron Radiation Facility (ESRF), 6 rue Jules Horowitz B.P. 220, F-38043 Grenoble Cedex (France); Institut de Tecniques Energetiques, Universitat Politecnica de Catalunya, Diagonal 647, Barcelona E-08028 (Spain); Laboratoire Imagerie et modelisation en neurobiologie et cancerologie, UMR8165, Centre National de la Recherche Scientifique (CNRS), Universites Paris 7 et Paris 11, Bat 440., 15 rue Georges Clemenceau, F-91406 Orsay Cedex (France)

    2012-05-15

    Purpose: Microbeam radiation therapy (MRT) is a synchrotron radiotherapy technique that explores the limits of the dose-volume effect. Preclinical studies have shown that MRT irradiations (arrays of 25-75-{mu}m-wide microbeams spaced by 200-400 {mu}m) are able to eradicate highly aggressive animal tumor models while healthy tissue is preserved. These promising results have provided the basis for the forthcoming clinical trials at the ID17 Biomedical Beamline of the European Synchrotron Radiation Facility (ESRF). The first step includes irradiation of pets (cats and dogs) as a milestone before treatment of human patients. Within this context, accurate dose calculations are required. The distinct features of both beam generation and irradiation geometry in MRT with respect to conventional techniques require the development of a specific MRT treatment planning system (TPS). In particular, a Monte Carlo (MC)-based calculation engine for the MRT TPS has been developed in this work. Experimental verification in heterogeneous phantoms and optimization of the computation time have also been performed. Methods: The penelope/penEasy MC code was used to compute dose distributions from a realistic beam source model. Experimental verification was carried out by means of radiochromic films placed within heterogeneous slab-phantoms. Once validation was completed, dose computations in a virtual model of a patient, reconstructed from computed tomography (CT) images, were performed. To this end, decoupling of the CT image voxel grid (a few cubic millimeter volume) to the dose bin grid, which has micrometer dimensions in the transversal direction of the microbeams, was performed. Optimization of the simulation parameters, the use of variance-reduction (VR) techniques, and other methods, such as the parallelization of the simulations, were applied in order to speed up the dose computation. Results: Good agreement between MC simulations and experimental results was achieved, even at

  7. Nurse Staffing Calculation in the Emergency Department - Performance-Oriented Calculation Based on the Manchester Triage System at the University Hospital Bonn.

    Directory of Open Access Journals (Sweden)

    Ingo Gräff

    To date, there are no valid statistics regarding the number of full-time staff necessary for nursing care in emergency departments in Europe. Staff requirement calculations were performed using state-of-the-art procedures which take both fluctuating patient volume and individual staff shortfall rates into consideration. In a longitudinal observational study, the average nursing staff engagement time per patient was assessed for 503 patients. For this purpose, a full-time staffing calculation was estimated based on the five priority levels of the Manchester Triage System (MTS), taking into account specific workload fluctuations (50th-95th percentiles). Patients classified in the MTS category red (n = 35) required the most engagement time, with an average of 97.93 min per patient. On weighted average, orange MTS category patients (n = 118) required nursing staff for 85.07 min and patients in the yellow MTS category (n = 181) for 40.95 min, while the two least acute MTS categories, green (n = 129) and blue (n = 40), required 23.18 min and 14.99 min of engagement time per patient, respectively. Individual staff shortfall due to sick days and vacation time was 20.87% of the total working hours. Extrapolating this to 21,899 emergency patients (2010), 67-123 emergency patients (50th-95th percentile) per month can be seen by one nurse. The calculated full-time staffing requirement, depending on the percentile, was 14.8 to 27.1. Performance-oriented staff planning offers an objective instrument for calculating the full-time nursing staff required in emergency departments.
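    The staffing arithmetic can be sketched as follows. The per-category engagement times, patient counts, annual volume, and shortfall rate come from the abstract; the contracted hours per full-time nurse (1,650 h/year) are a hypothetical assumption, and the percentile-dependent workload fluctuation is not modeled, so the resulting figure is indicative only:

```python
# Engagement times (min/patient) and observed counts per MTS category,
# taken from the abstract.
mts = {
    "red":    (97.93, 35),
    "orange": (85.07, 118),
    "yellow": (40.95, 181),
    "green":  (23.18, 129),
    "blue":   (14.99, 40),
}
annual_patients = 21_899
shortfall = 0.2087          # sick days + vacation, from the abstract

# Case-mix-weighted average engagement time per patient (min).
total_min = sum(t * n for t, n in mts.values())
total_pts = sum(n for _, n in mts.values())
avg_min = total_min / total_pts

# Gross annual nursing hours, inflated to cover individual shortfall.
care_hours = annual_patients * avg_min / 60
gross_hours = care_hours / (1 - shortfall)

# Hypothetical 1,650 contracted hours per full-time nurse per year.
fte = gross_hours / 1650
print(f"avg {avg_min:.1f} min/patient, {fte:.1f} FTE")
```

    Without the percentile-based workload peaks, this simple average lands somewhat below the paper's 14.8-27.1 range, which is expected.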

  8. Application of CFD dispersion calculation in risk based inspection for release of H2S

    International Nuclear Information System (INIS)

    Sharma, Pavan K.; Vinod, Gopika; Singh, R.K.; Rao, V.V.S.S.; Vaze, K.K.

    2011-01-01

    In atmospheric dispersion, both deterministic and probabilistic approaches have been used to address design and regulatory concerns. For deterministic calculations, the dispersion of pollutants in the atmosphere is an important area in which different approaches are followed in the development of good analytical models. Analyses based on Computational Fluid Dynamics (CFD) codes offer the opportunity to develop models from first principles of physics, and hence such models have an edge over existing models. On the probabilistic side, risk based inspection methods (wherein the consequence of failure of each component needs to be assessed) are becoming popular. Consequence evaluation in a process plant is a crucial task: the number of components considered for life management is often very large, and consequence evaluation for all of them proves laborious. The present paper is the result of joint collaborative work between deterministic and probabilistic modelling groups working in the field of atmospheric dispersion. Even though API 581 provides a simplified qualitative approach, regulators find some of its factors, in particular the quantity factor, unsuitable for process plants. Dispersion calculations for heavy gases are often done with very simple models that cannot account for density-driven atmospheric dispersion. This necessitates a new approach, for which a CFD based technical basis is proposed, so that the range of quantities considered and the factors used can be justified. The paper brings out some of the distinct merits and demerits of CFD based models, gives a brief account of applications of such CFD codes reported in the literature, and describes the approach devised and demonstrated for the said issue, with emphasis on CFD calculations. (author)
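    As a point of contrast with CFD, the kind of simple analytical model the paper critiques can be sketched as a neutral-density Gaussian plume, which by construction cannot represent the dense-gas slumping of a heavy release such as H2S. All parameter values below are illustrative:

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflected Gaussian plume concentration (kg/m^3) for a
    continuous point source: emission rate q (kg/s), wind speed u (m/s),
    crosswind/vertical receptor coordinates (y, z), release height h (m).
    sigma_y and sigma_z are the dispersion coefficients at the receptor's
    downwind distance; in practice they come from stability-class
    correlations (assumed given here)."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# centreline, ground-level receptor downwind of a 10 m stack
c = plume_concentration(q=1.0, u=3.0, y=0.0, z=0.0,
                        h=10.0, sigma_y=36.0, sigma_z=18.0)
print(f"{c:.2e} kg/m^3")
```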

  9. Calculation and Simulation Study on Transient Stability of Power System Based on Matlab/Simulink

    Directory of Open Access Journals (Sweden)

    Shi Xiu Feng

    2016-01-01

    If the stability of a power system is lost, a large number of users will suffer power outages, and the whole system may even collapse, with extremely serious consequences. Taking a single-machine infinite-bus system as an example, a two-phase ground fault is assumed to occur at point f, with the circuit breakers on both sides of the faulted line tripping simultaneously to clear the fault. The transient stability of the system is then analysed by two methods, calculation and simulation; the conclusions of the two are consistent, and the simulation analysis proves superior to the calculation analysis.
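    The calculation side of such a transient stability study can be sketched with the classical swing-equation model for a single machine on an infinite bus, integrating the rotor angle through the fault and after clearing. The per-unit parameters below are illustrative, not taken from the paper:

```python
import math

def swing_response(pm, pmax_pre, pmax_fault, pmax_post, t_clear,
                   h=5.0, f0=50.0, dt=1e-4, t_end=3.0):
    """Classical-model swing-equation integration (per unit) for a single
    machine on an infinite bus.  Pe = Pmax*sin(delta), with a different
    Pmax before, during, and after the fault cleared at t_clear (s).
    Returns the maximum rotor angle (rad); a value below pi suggests
    first-swing stability.  All parameters are illustrative."""
    m = 2 * h / (2 * math.pi * f0)       # inertia constant M = 2H/omega_s
    delta = math.asin(pm / pmax_pre)     # pre-fault equilibrium angle
    omega = 0.0                          # speed deviation (rad/s)
    t, delta_max = 0.0, delta
    while t < t_end:
        pmax = pmax_fault if t < t_clear else pmax_post
        accel = (pm - pmax * math.sin(delta)) / m
        omega += accel * dt
        delta += omega * dt
        delta_max = max(delta_max, delta)
        t += dt
    return delta_max

# fault cleared after 0.1 s: the machine stays first-swing stable
print(swing_response(pm=0.8, pmax_pre=2.0, pmax_fault=0.6,
                     pmax_post=1.5, t_clear=0.1) < math.pi)
```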

  10. Core physics design calculation of mini-type fast reactor based on Monte Carlo method

    International Nuclear Information System (INIS)

    He Keyu; Han Weishi

    2007-01-01

    An accurate physics calculation model has been set up for the mini-type sodium-cooled fast reactor (MFR) based on the MCNP-4C code, and a detailed calculation of its critical physics characteristics, neutron flux distribution, power distribution and reactivity control has been carried out. The results indicate that the basic physics characteristics of the MFR satisfy the requirements and objectives of the core design. The power density and neutron flux distributions are symmetrical and reasonable. The control system is able to maintain a reliable reactivity balance efficiently and meets the requirements for long-term operation. (authors)

  11. Analytical modeling and feasibility study of a multi-GPU cloud-based server (MGCS) framework for non-voxel-based dose calculations.

    Science.gov (United States)

    Neylon, J; Min, Y; Kupelian, P; Low, D A; Santhanam, A

    2017-04-01

    In this paper, a multi-GPU cloud-based server (MGCS) framework is presented for dose calculations, exploring the feasibility of remote computing power for parallelization and acceleration of computationally and time-intensive radiotherapy tasks in moving toward online adaptive therapies. An analytical model was developed to estimate theoretical MGCS performance acceleration and intelligently determine workload distribution. Numerical studies were performed with a computing setup of 14 GPUs distributed over 4 servers interconnected by a 1 Gigabit per second (Gbps) network. Inter-process communication methods were optimized to facilitate resource distribution and minimize data transfers over the server interconnect. The analytically predicted computation times matched experimental observations within 1-5%. MGCS performance approached a theoretical limit of acceleration proportional to the number of GPUs utilized when computational tasks far outweighed memory operations. The MGCS implementation reproduced ground-truth dose computations with negligible differences, by distributing the work among several processes and implementing optimization strategies. The results showed that a cloud-based computation engine was a feasible solution for enabling clinics to make use of fast dose calculations for advanced treatment planning and adaptive radiotherapy. The cloud-based system was able to exceed the performance of a local machine even for optimized calculations, and provided significant acceleration for computationally intensive tasks. Such a framework can provide access to advanced technology and computational methods to many clinics, providing an avenue for standardization across institutions without the requirements of purchasing, maintaining, and continually updating hardware.
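    A toy version of such an analytical performance model can be sketched as a compute term that divides across GPUs plus serial memory and communication terms; the model form and the numbers are illustrative assumptions, not the paper's actual model:

```python
def predicted_time(t_compute, t_memory, n_gpus, t_comm_per_gpu=0.0):
    """Toy analytical model: compute work divides across n_gpus, while
    memory operations and per-GPU interconnect communication do not
    parallelize.  All terms are illustrative assumptions."""
    return t_compute / n_gpus + t_memory + t_comm_per_gpu * n_gpus

def speedup(t_compute, t_memory, n_gpus, t_comm_per_gpu=0.0):
    """Acceleration relative to running everything serially."""
    serial = t_compute + t_memory
    return serial / predicted_time(t_compute, t_memory, n_gpus, t_comm_per_gpu)

# when compute dominates memory operations, speedup approaches the GPU count
print(round(speedup(t_compute=100.0, t_memory=0.5, n_gpus=14), 1))  # 13.1
```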

  12. Quasiparticle properties of DNA bases from GW calculations in a Wannier basis

    Science.gov (United States)

    Qian, Xiaofeng; Marzari, Nicola; Umari, Paolo

    2009-03-01

    The quasiparticle GW-Wannier (GWW) approach [1] has been recently developed to overcome the size limitations of conventional planewave GW calculations. By taking advantage of the localization properties of the maximally-localized Wannier functions and choosing a small polarization basis set, we reduce the number of Bloch wavefunction products required for the evaluation of dynamical polarizabilities, which greatly reduces memory requirements and improves computational efficiency. We apply GWW to study the quasiparticle properties of different DNA bases and base pairs, and solvation effects on the energy gap, demonstrating in the process the key advantages of this approach. [1] P. Umari, G. Stenuit, and S. Baroni, cond-mat/0811.1453

  13. Dose rate calculations for a reconnaissance vehicle

    International Nuclear Information System (INIS)

    Grindrod, L.; Mackey, J.; Salmon, M.; Smith, C.; Wall, S.

    2005-01-01

    A Chemical Nuclear Reconnaissance System (CNRS) has been developed by the British Ministry of Defence to make chemical and radiation measurements on contaminated terrain using appropriate sensors and recording equipment installed in a Land Rover. A research programme is under way to develop and validate a predictive capability to calculate the build-up of contamination on the vehicle, radiation detector performance and dose rates to the occupants of the vehicle. This paper describes the geometric model of the vehicle and the methodology used for calculations of detector response. Calculated dose rates obtained using the MCBEND Monte Carlo radiation transport computer code in adjoint mode are presented. These address the transient response of the detectors as the vehicle passes through a contaminated area. The calculated dose rates were found to agree with the measured data within the experimental uncertainties, thus giving confidence in the shielding model of the vehicle and its application to other scenarios. (authors)

  14. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Schuemann, J; Grassberger, C; Paganetti, H [Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Dowdell, S [Illawarra Shoalhaven Local Health District, Wollongong (Australia)

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend
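    The range metrics used in this comparison can be illustrated with a small sketch: R90 extracted from a depth-dose curve by interpolating from the distal side, and the RMSD between two matched distal surfaces. The toy depth-dose data are invented for illustration:

```python
def distal_r90(depths, doses):
    """Distal depth (same units as `depths`) at which the dose falls to
    90% of maximum, found by scanning from the deep end and interpolating
    linearly -- illustrative of the R90 metric used in the comparison."""
    d90 = 0.9 * max(doses)
    for i in range(len(doses) - 1, 0, -1):
        lo, hi = doses[i], doses[i - 1]
        if lo < d90 <= hi:
            frac = (hi - d90) / (hi - lo)
            return depths[i - 1] + frac * (depths[i] - depths[i - 1])
    raise ValueError("90% level not crossed")

def rmsd(a, b):
    """Root mean square difference between two matched samples of a distal
    surface (e.g. analytical vs Monte Carlo R90 maps)."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

depths = [0, 2, 4, 6, 8, 10]
doses = [60, 80, 100, 95, 50, 5]     # toy depth-dose, falling off distally
print(round(distal_r90(depths, doses), 2))  # 6.22
```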

  15. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    International Nuclear Information System (INIS)

    Schuemann, J; Grassberger, C; Paganetti, H; Dowdell, S

    2014-01-01

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend

  16. Numerical Study on the Seismic Response of Structure with Consideration of the Behavior of Base Mat Uplift

    Directory of Open Access Journals (Sweden)

    Guo-Bo Wang

    2017-01-01

    The foundation might separate from the supporting soil if an earthquake is big enough, which is known as base mat uplift. This paper proposes a simplified calculation model in which spring elements are adopted to simulate the interaction between soil and structure. The load-deformation curve (F-D curve) of the spring element can be designated to represent base mat uplift, in which compression can be transmitted while tensile forces are not allowed. Key factors, such as seismic wave type, seismic wave excitation direction, seismic wave amplitude, soil shear velocity, structure stiffness, and the ratio of structure height to width (H/B), were considered in the analysis. It is shown that (1) the seismic wave type has a significant influence on the structural response due to the different frequency components it contains; (2) the vertical input of the seismic wave greatly affects the structural response in the vertical direction, while it has little impact in the horizontal direction; (3) base mat uplift occurs more readily in soil with higher shear velocity; and (4) the structure's H/B value has a complicated influence on base mat uplift. The outcome of this research is expected to provide a reference for the seismic design of structures subject to base mat uplift.
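    The tension-exclusive spring described above can be sketched as a one-line constitutive function; the sign convention (negative deformation = compression into the soil) and the stiffness value are illustrative assumptions:

```python
def uplift_spring_force(k, d):
    """Tension-exclusive soil spring for base-mat uplift: compression
    (d < 0, mat pressing into soil) develops force k*d, while separation
    (d > 0) transmits no tension.  Sign convention is an assumption."""
    return k * d if d < 0 else 0.0

# a 1e8 N/m spring: 2 mm of penetration vs 2 mm of uplift
print(uplift_spring_force(1e8, -0.002), uplift_spring_force(1e8, 0.002))
```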

  17. A thermodynamic data base for Tc to calculate equilibrium solubilities at temperatures up to 300 deg C

    Energy Technology Data Exchange (ETDEWEB)

    Puigdomenech, I [Studsvik AB, Nykoeping (Sweden); Bruno, J [Intera Information Technologies SL, Cerdanyola (Spain)

    1995-04-01

    Thermodynamic data have been selected for solids and aqueous species of technetium. Equilibrium constants have been calculated in the temperature range 0 to 300 deg C, at a pressure of 1 bar for T<100 deg C and at the steam saturation pressure at higher temperatures. For aqueous species, the revised Helgeson-Kirkham-Flowers model is used for temperature extrapolations. The data base contains a large amount of estimated data, and the methods used for these estimations are described in detail. A new equation is presented that allows the estimation of Δ_rC°_p,m values for mononuclear hydrolysis reactions. The formation constants for chloro complexes of Tc(V) and Tc(IV), whose existence is well established, have been estimated. The majority of entropy and heat capacity values in the data base have also been estimated, and therefore temperature extrapolations are largely based on estimations. The uncertainties derived from these calculations are described. Using the data base developed in this work, technetium solubilities have been calculated as a function of temperature for different chemical conditions. The implications for the mobility of Tc under nuclear repository conditions are discussed. 70 refs.
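    A simplified stand-in for such temperature extrapolations is the textbook constant-heat-capacity form, in which log K(T) follows from Δ_rH°, Δ_rS° and a constant Δ_rC°_p referenced to 25 deg C. The reaction values below are invented for illustration and are not Tc data from the report:

```python
import math

R = 8.314462  # J/(mol K)

def log10_k(t_kelvin, dh0, ds0, dcp=0.0, t0=298.15):
    """Equilibrium constant at t_kelvin from the standard reaction enthalpy
    dh0 (J/mol), entropy ds0 (J/mol/K) and a constant heat capacity dcp
    (J/mol/K), all referenced to t0.  This is the textbook constant-Cp
    extrapolation, a simpler stand-in for the revised
    Helgeson-Kirkham-Flowers model used in the report."""
    dh = dh0 + dcp * (t_kelvin - t0)
    ds = ds0 + dcp * math.log(t_kelvin / t0)
    dg = dh - t_kelvin * ds
    return -dg / (R * t_kelvin * math.log(10))

# illustrative exothermic reaction: K decreases as temperature rises
for t in (298.15, 373.15, 573.15):
    print(f"{t - 273.15:5.0f} C  log K = {log10_k(t, -40e3, -50.0, -100.0):6.2f}")
```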

  18. TU-F-CAMPUS-T-05: A Cloud-Based Monte Carlo Dose Calculation for Electron Cutout Factors

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, T; Bush, K [Stanford School of Medicine, Stanford, CA (United States)

    2015-06-15

    Purpose: For electron cutouts of smaller sizes, it is necessary to verify electron cutout factors due to perturbations in electron scattering. Often, this requires a physical measurement using a small ion chamber, diode, or film. The purpose of this study is to develop a fast Monte Carlo based dose calculation framework that requires only a smart phone photograph of the cutout and specification of the SSD and energy to determine the electron cutout factor, with the ultimate goal of making this cloud-based calculation widely available to the medical physics community. Methods: The algorithm uses a pattern recognition technique to identify the corners of the cutout in the photograph as shown in Figure 1. It then corrects for variations in perspective, scaling, and translation of the photograph introduced by the user’s positioning of the camera. Blob detection is used to identify the portions of the cutout which comprise the aperture and the portions which are cutout material. This information is then used to define the physical densities of the voxels used in the Monte Carlo dose calculation algorithm, as shown in Figure 2, and to select a particle source from a pre-computed library of phase-spaces scored above the cutout. The electron cutout factor is obtained by taking a ratio of the maximum dose delivered with the cutout in place to the dose delivered under calibration/reference conditions. Results: The algorithm has been shown to successfully identify all necessary features of the electron cutout to perform the calculation. Subsequent testing will be performed to compare the Monte Carlo results with a physical measurement. Conclusion: A simple, cloud-based method of calculating electron cutout factors could eliminate the need for physical measurements and substantially reduce the time required to properly assure accurate dose delivery.
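    Two steps of the pipeline described above lend themselves to a short sketch: mapping the binary aperture mask to voxel densities for the MC geometry, and forming the cutout factor as a ratio of maximum doses. The mask and the density values are illustrative assumptions:

```python
def densities_from_mask(mask, rho_air=0.0012, rho_cerrobend=9.4):
    """Map a binary aperture mask (True = open aperture, False = cutout
    material) to voxel mass densities (g/cm^3) for the MC geometry, as the
    abstract describes.  The density values are illustrative assumptions."""
    return [[rho_air if open_ else rho_cerrobend for open_ in row]
            for row in mask]

def cutout_factor(dmax_cutout, dmax_reference):
    """Ratio of the maximum dose with the cutout in place to the maximum
    dose under reference conditions, per the abstract's definition."""
    return dmax_cutout / dmax_reference

mask = [[False, True, False],
        [True,  True, True ]]
print(densities_from_mask(mask)[0], round(cutout_factor(96.4, 100.0), 3))
```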

  19. Calculations in support of a potential definition of large release

    International Nuclear Information System (INIS)

    Hanson, A.L.; Davis, R.E.; Mubayi, V.

    1994-05-01

    The Nuclear Regulatory Commission has stated a hierarchy of safety goals with the qualitative safety goals as Level I of the hierarchy, backed up by the quantitative health objectives as Level II and the large release guideline as Level III. The large release guideline has been stated in qualitative terms as a magnitude of release of the core inventory whose frequency should not exceed 10⁻⁶ per reactor year. However, the Commission did not provide a quantitative specification of a large release. This report describes various specifications of a large release and focuses, in particular, on an examination of releases which have a potential to lead to one prompt fatality in the mean. The basic information required to set up the calculations was derived from the simplified source terms which were obtained from approximations of the NUREG-1150 source terms. Since the calculation of consequences is affected by a large number of assumptions, a generic site with a (conservatively determined) population density and meteorology was specified. At this site, various emergency responses (including no response) were assumed based on information derived from earlier studies. For each of the emergency response assumptions, a set of calculations were performed with the simplified source terms; these included adjustments to the source terms, such as the timing of the release, the core inventory, and the release fractions of different radionuclides, to arrive at a result of one mean prompt fatality in each case. Each of the source terms, so defined, has the potential to be a candidate for a large release. The calculations show that there are many possible candidate source terms for a large release depending on the characteristics which are felt to be important.

  20. A comparison study for dose calculation in radiation therapy: pencil beam Kernel based vs. Monte Carlo simulation vs. measurements

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Kwang-Ho; Suh, Tae-Suk; Lee, Hyoung-Koo; Choe, Bo-Young [The Catholic Univ. of Korea, Seoul (Korea, Republic of); Kim, Hoi-Nam; Yoon, Sei-Chul [Kangnam St. Mary' s Hospital, Seoul (Korea, Republic of)

    2002-07-01

    Accurate dose calculation in radiation treatment planning is most important for successful treatment. Since the human body is composed of various materials and is not an ideal shape, it is not easy to calculate the accurate effective dose in patients. Many methods have been proposed to solve inhomogeneity and surface contour problems. Monte Carlo simulations are regarded as the most accurate method, but they are not appropriate for routine planning because they take so much time. Pencil beam kernel based convolution/superposition methods were also proposed to correct for those effects. Nowadays, many commercial treatment planning systems have adopted this algorithm as a dose calculation engine. The purpose of this study is to verify the accuracy of the dose calculated by a pencil beam kernel based treatment planning system against Monte Carlo simulations and measurements, especially in inhomogeneous regions. A home-made inhomogeneous phantom, Helax-TMS ver. 6.0 and the Monte Carlo codes BEAMnrc and DOSXYZnrc were used in this study. In homogeneous media the accuracy was acceptable, but in inhomogeneous media the errors were more significant. However, in general clinical situations the pencil beam kernel based convolution algorithm is thought to be a valuable tool for calculating dose.

  1. Research on trust calculation of wireless sensor networks based on time segmentation

    Science.gov (United States)

    Su, Yaoxin; Gao, Xiufeng; Qiao, Wenxin

    2017-05-01

    Because wireless sensor networks differ in their characteristics from traditional networks, they are prone to intrusion from compromised nodes. A trust mechanism is the most effective way to defend against such internal attacks. To address the shortcomings of existing trust mechanisms, a method of calculating trust in wireless sensor networks based on time segmentation is proposed. It improves the security of the network and extends the network's lifetime.
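    One plausible shape for such a time-segmented trust computation is a decay-weighted success rate over successive time windows, so that recent behaviour dominates; the weighting scheme below is an illustrative assumption, not the paper's exact formula:

```python
def segmented_trust(windows, decay=0.8):
    """Trust value from per-time-segment interaction records, with newer
    segments weighted more heavily (exponential decay on older windows).
    `windows` is oldest-first: (successes, failures) per segment.  The
    weighting scheme is an illustrative assumption."""
    score, weight_sum = 0.0, 0.0
    n = len(windows)
    for i, (ok, fail) in enumerate(windows):
        w = decay ** (n - 1 - i)             # newest window gets weight 1
        total = ok + fail
        rate = ok / total if total else 0.5  # no interactions: neutral trust
        score += w * rate
        weight_sum += w
    return score / weight_sum

# a node that behaved well historically but misbehaves in recent windows
print(round(segmented_trust([(10, 0), (9, 1), (2, 8)]), 3))  # 0.639
```

    The decayed weighting is what lets the mechanism react to a compromised node quickly instead of averaging its misbehaviour away against a long good history.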

  2. An Analysis on the Characteristic of Multi-response CADIS Method for the Monte Carlo Radiation Shielding Calculation

    International Nuclear Information System (INIS)

    Kim, Do Hyun; Shin, Chang Ho; Kim, Song Hyun

    2014-01-01

    The CADIS method uses a deterministic calculation of adjoint fluxes to decide the parameters used in the variance reduction; this is called the hybrid Monte Carlo method. The CADIS method, however, has a limitation in reducing the stochastic errors of all responses. Forward-Weighted CADIS (FW-CADIS) was introduced to solve this problem: to reduce the overall stochastic errors of the responses, the forward flux is used. In a previous study, the Multi-Response CADIS (MR-CADIS) method was derived to minimize the sum of the squared relative errors. In this study, the characteristics of the MR-CADIS method were evaluated and compared with the FW-CADIS method, analyzing how the CADIS, FW-CADIS, and MR-CADIS methods are applied to optimize and decide the parameters used in the variance reduction techniques. The MR-CADIS method uses a technique in which the sum of squared relative errors over the tally regions is minimized to achieve uniform uncertainty. To compare the simulation efficiency of the methods, a simple shielding problem was evaluated. With the FW-CADIS method, the average of the relative errors was minimized; the MR-CADIS method, however, gives the lowest variance of the relative errors. The analysis shows that the MR-CADIS method can reduce the relative errors of multi-response problems more efficiently and uniformly than the FW-CADIS method.
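    The difference between the two objectives can be seen with a small numeric sketch: two hypothetical sets of tally relative errors with the same mean, where the uniform set yields both the smaller sum of squares (the MR-CADIS objective) and the smaller variance:

```python
def mean_rel_err(errs):
    """Average relative error across tallies -- the quantity that FW-CADIS
    weighting drives down."""
    return sum(errs) / len(errs)

def sum_sq_rel_err(errs):
    """Sum of squared relative errors -- the objective MR-CADIS minimizes,
    which penalizes outlier tallies and so flattens the error distribution."""
    return sum(e * e for e in errs)

def variance(errs):
    """Variance of the relative errors, the paper's uniformity measure."""
    m = mean_rel_err(errs)
    return sum((e - m) ** 2 for e in errs) / len(errs)

# two hypothetical runs with the same mean relative error
uneven = [0.01, 0.01, 0.08]
uniform = [0.0333, 0.0333, 0.0334]
print(sum_sq_rel_err(uneven) > sum_sq_rel_err(uniform),
      variance(uniform) < variance(uneven))  # True True
```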

  3. 3-D seismic response of a base-isolated fast reactor

    International Nuclear Information System (INIS)

    Kitamura, S.; Morishita, M.; Iwata, K.

    1992-01-01

    This paper describes the development of a 3-D response analysis methodology and its application to a base-isolated fast breeder reactor (FBR) plant. First, studies on the application of a base-isolation system to an FBR plant were performed to identify a range of appropriate characteristics of the system. A response analysis method was developed based on mathematical models for the restoring-force characteristics of several types of systems. A series of shaking-table tests using a small-scale model was carried out to verify the analysis method, and good agreement was seen between the test and analysis results in terms of the horizontal and vertical responses. Parametric studies were then made to assess the effects of various factors which might influence the seismic response of the system. Moreover, the method was applied to evaluate the three-dimensional response of the base-isolated FBR. (author)

  4. SU-F-T-78: Minimum Data Set of Measurements for TG 71 Based Electron Monitor-Unit Calculations

    International Nuclear Information System (INIS)

    Xu, H; Guerrero, M; Prado, K; Yi, B

    2016-01-01

    Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-Series linear accelerators, the complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20 and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15 and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors and effective SSDs that were then converted to air-gap factors for SSD 99–110cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The “missing” data excluded in the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy, and more PDDs and fewer point measurements are generally needed as energy increases. Using only <50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
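    The MU calculation that these commissioning data feed is a product of the measured factors. The sketch below follows a TG-71-style factorization using the factor names from the abstract, with simplified factor handling and invented numbers:

```python
def electron_mu(dose_cgy, ref_output=1.0, pdd_pct=100.0,
                applicator_factor=1.0, cutout_factor=1.0, airgap_factor=1.0):
    """TG-71-style electron MU calculation: prescribed dose divided by the
    product of the reference output (cGy/MU at dmax under reference
    conditions) and the measured correction factors from the commissioning
    data set.  Factor names follow the abstract; the exact TG-71 notation
    and any SSD handling beyond the air-gap factor are simplified."""
    per_mu = (ref_output * pdd_pct / 100.0 * applicator_factor
              * cutout_factor * airgap_factor)
    return dose_cgy / per_mu

# 200 cGy at a depth where PDD = 95%, with invented small-cutout and
# extended-SSD (air-gap) factors
mu = electron_mu(200.0, ref_output=1.0, pdd_pct=95.0,
                 applicator_factor=0.98, cutout_factor=0.96,
                 airgap_factor=0.97)
print(round(mu))  # 231
```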

  6. Volume-based geometric modeling for radiation transport calculations

    International Nuclear Information System (INIS)

    Li, Z.; Williamson, J.F.

    1992-01-01

    Accurate theoretical characterization of radiation fields is a valuable tool in the design of complex systems, such as linac heads and intracavitary applicators, and for generation of basic dose calculation data that are inaccessible to experimental measurement. Both Monte Carlo and deterministic solutions to such problems require a system for accurately modeling complex 3-D geometries that supports ray tracing, point and segment classification, and 2-D graphical representation. Previous combinatorial approaches to solid modeling, which involve describing complex structures as set-theoretic combinations of simple objects, are limited in their ease of use and place unrealistic constraints on the geometric relations between objects, such as excluding common boundaries. A new approach to volume-based solid modeling has been developed which is based upon topologically consistent definitions of boundary, interior, and exterior of a region. From these definitions, FORTRAN union, intersection, and difference routines have been developed that allow involuted and deeply nested structures to be described as set-theoretic combinations of ellipsoids, elliptic cylinders, prisms, cones, and planes that accommodate shared boundaries. Line segments between adjacent intersections on a trajectory are assigned to the appropriate region by a novel sorting algorithm that generalizes Siddon's approach. Two 2-D graphic display tools are developed to help the debugging of a given geometric model. In this paper, the mathematical basis of our system is described, contrasted with other approaches, and illustrated with examples
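
The segment-classification idea can be sketched as follows: merge and sort a ray's boundary intersections, then assign each segment to a region by classifying its midpoint against set-theoretic region definitions. This is a simplified stand-in (two spheres, boolean predicates) for the paper's FORTRAN routines, not the authors' code.

```python
import math

A = ((0.0, 0.0, 0.0), 2.0)   # sphere A: center, radius
B = ((1.0, 0.0, 0.0), 2.0)   # sphere B, sharing a boundary region with A

def inside(p, center, r):
    return math.dist(p, center) <= r

def classify(p):
    """Set-theoretic region membership of a point."""
    a, b = inside(p, *A), inside(p, *B)
    if a and b:  return "A*B"   # intersection
    if a:        return "A-B"   # difference
    if b:        return "B-A"
    return "outside"

# Ray x(t) = (-3 + t, 0, 0): it crosses A's boundary at t = 1, 5
# and B's boundary at t = 2, 6. Merge, sort, classify midpoints:
ts = sorted([1.0, 5.0, 2.0, 6.0])
segments = []
for t0, t1 in zip(ts, ts[1:]):
    mid = (-3 + 0.5 * (t0 + t1), 0.0, 0.0)
    segments.append(((t0, t1), classify(mid)))

print(segments)
# → [((1.0, 2.0), 'A-B'), ((2.0, 5.0), 'A*B'), ((5.0, 6.0), 'B-A')]
```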

  7. Hypervelocity impact cratering calculations

    Science.gov (United States)

    Maxwell, D. E.; Moises, H.

    1971-01-01

    A summary is presented of prediction calculations on the mechanisms involved in hypervelocity impact cratering and the response of earth media. Considered are: (1) a one-gram lithium-magnesium alloy impacting basalt normally at 6.4 km/sec, and (2) a large terrestrial impact corresponding to that of Sierra Madera.

  8. World Wide Web-based system for the calculation of substituent parameters and substituent similarity searches.

    Science.gov (United States)

    Ertl, P

    1998-02-01

    Easy to use, interactive, and platform-independent WWW-based tools are ideal for development of chemical applications. By using the newly emerging Web technologies such as Java applets and sophisticated scripting, it is possible to deliver powerful molecular processing capabilities directly to the desk of synthetic organic chemists. At Novartis Crop Protection in Basel, a Web-based molecular modelling system has been in use since 1995. In this article two new modules of this system are presented: a program for interactive calculation of important hydrophobic, electronic, and steric properties of organic substituents, and a module for substituent similarity searches enabling the identification of bioisosteric functional groups. Various possible applications of calculated substituent parameters are also discussed, including automatic design of molecules with the desired properties and creation of targeted virtual combinatorial libraries.

  9. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    Science.gov (United States)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
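
The low-rank downscaling step the abstract describes can be illustrated with a minimal pivoted Cholesky decomposition on a toy rank-deficient matrix; the matrix and the tolerance are invented for the example and are not the paper's data.

```python
import math

def pivoted_cholesky(M, tol=1e-10):
    """Low-rank pivoted Cholesky: returns rows L such that M ~ L @ L^T,
    stopping when the largest residual diagonal drops below tol."""
    n = len(M)
    d = [M[i][i] for i in range(n)]   # residual diagonal
    L = [[] for _ in range(n)]
    while True:
        i = max(range(n), key=lambda j: d[j])   # pivot: largest residual
        if d[i] < tol:
            return L
        piv = math.sqrt(d[i])
        col = [(M[j][i] - sum(a * b for a, b in zip(L[j], L[i]))) / piv
               for j in range(n)]
        for j in range(n):
            L[j].append(col[j])
            d[j] -= col[j] ** 2

# Rank-2 Gram matrix (a hypothetical stand-in for an integral matrix):
M = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [1.0, 1.0, 2.0]]
L = pivoted_cholesky(M)
print(len(L[0]))   # → 2 : the numerical rank revealed by the decomposition
```

Multiplying the retained Cholesky vectors reconstructs `M` to within the tolerance, which is the property the Coulomb/exchange builds exploit.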

  10. Calculation-experimental technique for forecasting the bipolar digital integrated circuit response; Raschetno-ehksperimental'nyj metod prognozirovaniya reaktsii bipolyarnykh TsIS

    Energy Technology Data Exchange (ETDEWEB)

    Butin, V I; Trofimov, Eh N

    1994-12-31

    Typical responses of bipolar digital integrated circuits (DICs) of the combination type under the action of pulsed gamma radiation are presented. An analysis of the DIC transients is carried out. A calculation-experimental method for forecasting the transient loss of serviceability of bipolar DICs is proposed. The reliability of the method is confirmed experimentally. 1 fig.

  11. Smart Demand Response Based on Smart Homes

    Directory of Open Access Journals (Sweden)

    Jingang Lai

    2015-01-01

    Smart homes (SHs) are crucial parts of demand response management (DRM) in the smart grid (SG). The aim of SH-based demand response (DR) is to provide flexible two-way energy feedback while (or shortly after) the consumption occurs. It can potentially persuade end-users to achieve energy savings and to cooperate with the electricity producer or supplier to maintain the balance between electricity supply and demand through peak shaving and valley filling. However, existing solutions are challenged by the lack of consideration of the wide application of fiber power cable to the home (FPCTTH) and the related users' behaviors. Based on this new network infrastructure, the design and development of smart DR systems based on SHs concern not only functionalities such as security, convenience, and comfort, but also energy savings. A new multirouting protocol based on Kruskal's algorithm is designed for the reliability and safety of the SH distribution network. The benefits of FPCTTH-based SHs are summarized at the end of the paper.
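
Kruskal's algorithm, which the abstract names as the basis of the multirouting protocol, can be sketched with a standard union-find implementation; the node IDs and cable costs below are made up for illustration.

```python
def kruskal(n, edges):
    """edges: (weight, u, v) triples; returns a minimum spanning tree
    as (u, v, weight) triples."""
    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(edges):     # greedily take the cheapest safe edge
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v, w))
    return tree

# 4 homes, weighted candidate cable runs (hypothetical costs):
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
mst = kruskal(4, edges)
print(mst, sum(w for _, _, w in mst))
# → [(0, 1, 1), (1, 3, 2), (1, 2, 3)] 6
```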

  12. Nonlinear seismic response analysis of an embedded reactor building based on the substructure approach

    International Nuclear Information System (INIS)

    Hasegawa, M.; Ichikawa, T.; Nakai, S.; Watanabe, T.

    1987-01-01

    A practical method to calculate the elasto-plastic seismic response of structures considering the dynamic soil-structure interaction is presented. The substructure technique in the time domain is utilized in the proposed method. A simple soil spring system with the coupling effects which are usually evaluated by the impedance matrix is introduced to consider the soil-structure interaction for embedded structures. As a numerical example, the response of a BWR-MARK II type reactor building embedded in layered soil is calculated. The accuracy of the present method is verified by comparing its numerical results with exact solutions. The nonlinear behavior and the soil-structure interaction effects on the response of the reactor building are also discussed in detail. It is concluded that the present method is effective for aseismic design considering both the material nonlinearity of the nuclear reactor building and the dynamic soil-structure interaction. (orig.)

  13. Calculation laboratory: game based learning in exact discipline

    Directory of Open Access Journals (Sweden)

    André Felipe de Almeida Xavier

    2017-12-01

    The Calculation Laboratory arose from the need to give meaning to the learning of students entering Engineering courses in the discipline of Differential Calculus in semester 1/2016. After good results were obtained, the activity was extended to the Analytical Geometry and Linear Algebra (GAAL) and Integral Calculus classes, so that these incoming students could continue the process. Historically, students have some difficulty with these contents, and it is necessary to give meaning to their learning. Against this background, the Calculation Laboratory aims to give meaning to the contents covered, giving students autonomy, with the teacher acting as a tutor, an intermediary between the student and the knowledge, and creating various practical, playful and innovative activities to assist in this process. This article reports on the activities created to facilitate the running of the Calculation Laboratory, and presents the results obtained and measured after its application. Through these proposed activities, it is noticeable that students gradually gain autonomy in the search for knowledge.

  14. Dynamic response function and large-amplitude dissipative collective motion

    International Nuclear Information System (INIS)

    Wu Xizhen; Zhuo Yizhong; Li Zhuxia; Sakata, Fumihiko.

    1993-05-01

    Aiming at exploring the microscopic dynamics responsible for dissipative large-amplitude collective motion, the dynamic response and correlation functions are introduced within the general theory of nuclear coupled-master equations. The theory is based on the microscopic theory of nuclear collective dynamics, which has been developed within the time-dependent Hartree-Fock (TDHF) theory for disclosing the complex structure of the TDHF manifold. A systematic numerical method for calculating the dynamic response and correlation functions is proposed. By performing numerical calculations for a simple model Hamiltonian, it is pointed out that the dynamic response function gives important information for understanding the large-amplitude dissipative collective motion, which is described by an ensemble of trajectories within the TDHF manifold. (author)

  15. Sensitivity of drainage morphometry based hydrological response (GIUH) of a river basin to the spatial resolution of DEM data

    Science.gov (United States)

    Sahoo, Ramendra; Jain, Vikrant

    2018-02-01

    Drainage network pattern and its associated morphometric ratios are some of the important planform attributes of a drainage basin. Extraction of these attributes for any basin is usually done by spatial analysis of the elevation data of that basin. These planform attributes are further used as input data for studying numerous process-response interactions inside the physical premise of the basin. One of the important uses of the morphometric ratios is in the derivation of the hydrologic response of a basin using the GIUH concept. Hence, the accuracy of the basin hydrological response to any storm event depends upon the accuracy with which the morphometric ratios can be estimated. This, in turn, is affected by the spatial resolution of the source data, i.e. the digital elevation model (DEM). We have estimated the sensitivity of the morphometric ratios and the GIUH-derived hydrograph parameters to the resolution of the source data using a 30 meter and a 90 meter DEM. The analysis has been carried out for 50 drainage basins in a mountainous catchment. A simple and comprehensive algorithm has been developed for estimation of the morphometric indices from a stream network. We have calculated all the morphometric parameters and the hydrograph parameters for each of these basins extracted from the two DEMs with different spatial resolutions. The paired t-test and the sign test were used for the comparison. Our results did not show any statistically significant difference in any of the parameters calculated from the two source data sets. Along with the comparative study, a first-hand empirical analysis of the frequency distributions of the morphometric and hydrologic response parameters is also presented. Further, a comparison with other hydrological models suggests that the planform-morphometry-based GIUH model is more consistent under resolution variability than a topography-based hydrological model.
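
The sign test used in the basin comparison can be sketched as follows; the paired parameter values (nominally a morphometric ratio from 30 m and 90 m DEMs) are hypothetical, chosen only to show the mechanics.

```python
import math

def sign_test(x, y):
    """Two-sided paired sign test (ties dropped); returns the p-value
    from the exact binomial distribution."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    k = min(sum(d > 0 for d in diffs), sum(d < 0 for d in diffs))
    p = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** (n - 1)
    return min(p, 1.0)

# Hypothetical bifurcation ratios for 8 basins, 30 m vs 90 m DEM:
rb_30m = [2.1, 2.0, 2.3, 2.2, 2.1, 2.4, 2.2, 2.0]
rb_90m = [2.0, 2.1, 2.2, 2.3, 2.0, 2.3, 2.1, 2.1]
p = sign_test(rb_30m, rb_90m)
print(round(p, 3))   # → 0.727 : no significant resolution effect here
```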

  16. Monte Carlo calculation of the energy deposited in the KASCADE GRANDE detectors

    International Nuclear Information System (INIS)

    Mihai, Constantin

    2004-01-01

    The energy deposited by protons, electrons and positrons in the KASCADE GRANDE detectors is calculated with a simple and fast Monte Carlo method. The KASCADE GRANDE experiment (Forschungszentrum Karlsruhe, Germany), based on an array of plastic scintillation detectors, aims to study the energy spectrum of the primary cosmic rays around and above the 'knee' region of the spectrum. The reconstruction of the primary spectrum is achieved by comparing the data collected by the detectors with simulations of the development of the extensive air shower initiated by the primary particle, combined with detailed simulations of the detector response. The simulation of the air shower development is carried out with the CORSIKA Monte Carlo code. The output file produced by CORSIKA is further processed with a program that estimates the energy deposited in the detectors by the particles of the shower. The standard method to calculate the energy deposit in the detectors is based on the Geant package from the CERN library. A new method that calculates the energy deposit by fitting the Geant-based distributions with simpler functions is proposed in this work. Compared with the method based on the Geant package, this method is substantially faster. The time saving is important because the number of particles involved is large. (author)

  17. Multi-scale calculation of the electric properties of organic-based devices from the molecular structure

    KAUST Repository

    Li, Haoyuan; Qiu, Yong; Duan, Lian

    2016-01-01

    A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using

  18. A demand response modeling for residential consumers in smart grid environment using game theory based energy scheduling algorithm

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-06-01

    In this paper, a demand response modeling scheme is proposed for residential consumers using a game-theoretic algorithm, the Generalized Tit for Tat (GTFT) Dominant Game based Energy Scheduler. The methodology is established as a workflow model between the utility and the user within the smart grid framework. It exhibits an algorithm which schedules load usage by creating several possible tariffs for consumers such that demand is never raised; this can be done both individually and among multiple users of a community. The uniqueness of the proposed demand response is that the tariff is calculated for all hours, and the load during the peak hours that can be rescheduled is shifted based on the Peak Average Ratio. To demonstrate the viability of the approach, a general case of three domestic consumers is simulated, and a comparative performance evaluation against other algorithms is presented and analyzed.
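
The Peak Average Ratio (PAR) bookkeeping behind the peak-shaving step can be sketched as below; the hourly demand profile and the shifted amount are invented for illustration and are not from the paper.

```python
def par(load):
    """Peak Average Ratio: peak demand divided by mean demand."""
    return max(load) / (sum(load) / len(load))

def shift_peak(load, amount):
    """Move `amount` kW of deferrable demand from the peak hour to the
    minimum-demand hour -- the peak-shaving/valley-filling step."""
    load = list(load)
    hi, lo = load.index(max(load)), load.index(min(load))
    load[hi] -= amount
    load[lo] += amount
    return load

hourly = [2.0, 1.0, 1.0, 6.0, 2.0, 2.0]        # hypothetical kW profile
print(round(par(hourly), 3))                   # → 2.571
print(round(par(shift_peak(hourly, 2.0)), 3))  # → 1.714 (flatter profile)
```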

  19. Calculation of equivalent static loads and its application

    International Nuclear Information System (INIS)

    Choi, Woo-Seok; Park, K.B.; Park, G.J.

    2005-01-01

    All the forces in the real world act dynamically on structures. Since dynamic loads are extremely difficult to handle in analysis and design, static loads are usually utilized with dynamic factors. Generally, the dynamic factors are determined from design codes or experience. Therefore, static loads may not give accurate solutions in analysis and design, and structural engineers often come up with unreliable solutions. Two different methods are proposed for the transformation of dynamic loads into equivalent static loads (ESLs). One is an analytical method for exact ESLs and the other is an approximation method. The exact ESLs are calculated to generate response fields, such as displacement and stress, identical to those from the dynamic loads at a certain time. Some approximation methods are proposed for engineering applications, which generate response fields similar to those from the dynamic loads. They are divided into the displacement-based approach and the stress-based approach. The process is derived and evaluated mathematically. Standard examples are selected and solved by the proposed method, and error analyses are conducted. Applications of the method to structural optimization are discussed
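
The exact-ESL idea can be illustrated on a 2-DOF system: the equivalent static load at the critical time t* is f_eq = K·u(t*), so a static analysis under f_eq reproduces the dynamic displacement field exactly. The stiffness matrix and displacements below are arbitrary numbers, not from the paper.

```python
K = [[3.0, -1.0],
     [-1.0, 1.0]]      # stiffness matrix (hypothetical units)
u_t = [0.25, 0.5]      # dynamic displacements sampled at the critical time t*

# Equivalent static load: f_eq = K @ u(t*)
f_eq = [sum(k * u for k, u in zip(row, u_t)) for row in K]

# Check: solving K u = f_eq statically recovers u(t*) (2x2 inverse):
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
u_static = [
    ( K[1][1] * f_eq[0] - K[0][1] * f_eq[1]) / det,
    (-K[1][0] * f_eq[0] + K[0][0] * f_eq[1]) / det,
]
print(f_eq, u_static)   # → [0.25, 0.25] [0.25, 0.5]
```

By construction `u_static` equals `u_t`, which is exactly the "identical response fields" property of the analytical ESL method.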

  20. Specification of materials Data for Fire Safety Calculations based on ENV 1992-1-2

    DEFF Research Database (Denmark)

    Hertz, Kristian Dahl

    1997-01-01

    The part 1-2 of the Eurocode on Concrete deals with Structural Fire Design. In chapter 3, which is partly written by the author of this paper, some data are given for the development of a few material parameters at high temperatures. These data are intended to represent the worst possible concrete according to experience from tests on structural specimens based on German siliceous concrete subjected to Standard fire exposure until the time of maximum gas temperature. Chapter 4.3, which is written by the author of this paper, provides a simplified calculation method by means of which the load bearing capacity of constructions of any concrete exposed to any time of any fire exposure can be calculated. Chapter 4.4 provides information on what should be observed if more general calculation methods are used. Annex A provides some additional information on materials data; this annex is not a part of the code.

  1. A massively-parallel electronic-structure calculation based on real-space density functional theory

    International Nuclear Information System (INIS)

    Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro

    2010-01-01

    Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N³) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.
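
The Gram-Schmidt procedure cited as one of the O(N³) bottlenecks can be sketched (serial and unoptimized, purely for reference) as:

```python
import math

def gram_schmidt(vectors):
    """Modified Gram-Schmidt orthonormalization of a list of vectors.
    For N vectors of length N this costs O(N^3) -- the scaling the
    paper's parallel implementation targets."""
    basis = []
    for v in vectors:
        w = list(v)
        for q in basis:   # remove components along earlier basis vectors
            c = sum(a * b for a, b in zip(w, q))
            w = [a - c * b for a, b in zip(w, q)]
        norm = math.sqrt(sum(a * a for a in w))
        basis.append([a / norm for a in w])
    return basis

q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(round(sum(a * a for a in q[0]), 6))   # → 1.0 (unit norm)
```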

  2. [Development and effectiveness of a drug dosage calculation training program using cognitive loading theory based on smartphone application].

    Science.gov (United States)

    Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon

    2012-10-01

    This study was done to develop and evaluate a drug dosage calculation training program using cognitive loading theory based on a smartphone application. Calculation ability, dosage-calculation-related self-efficacy and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed greater 'self-efficacy for drug dosage calculation' than the control group (t=3.82). The results indicate that the drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.

  3. Application of shielding calculation of high-energy linear accelerators based on the NCRP-151 protocol

    International Nuclear Information System (INIS)

    Torres Pozas, S.; Monja Rey, P. de la; Sanchez Carrasca, M.; Yanez Lopez, D.; Macias Verde, D.; Martin Oliva, R.

    2011-01-01

    In recent years, progress in cancer treatment with ionizing radiation has made it possible to deliver higher doses to smaller and better-shaped volumes, making it necessary to take new aspects into account in the calculation of structural barriers. Furthermore, given that forecasts suggest that a large number of accelerators will be installed, or existing ones modified, in the near future, we consider it useful to have a tool to estimate the thickness of the structural barriers of treatment rooms. The shielding calculation methods are based on the standard DIN 6847-2 and the recommendations given in NCRP Report 151. In our experience we found only estimates originating from the DIN standard. Therefore, we considered it interesting to develop an application that incorporates the formulation suggested by the NCRP, which, together with previous work based on the DIN rules, allows us to establish a comparison between the results of both methods. (Author)
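
A hedged sketch of the NCRP Report No. 151 primary-barrier formulation such an application implements: required transmission B = P·d²/(W·U·T), number of tenth-value layers n = −log₁₀(B), and thickness = TVL₁ + (n − 1)·TVLₑ. All input values below (design goal, distance, workload, TVLs) are illustrative placeholders, not site data.

```python
import math

P = 0.02        # shielding design goal, mSv/week (controlled area)
d = 6.0         # distance from target to the point of interest, m
W = 1000.0      # workload, Gy/week at 1 m
U = 0.25        # use factor for this barrier
T = 1.0         # occupancy factor

# Required transmission (taking 1 Gy ~ 1 Sv for photons):
B = (P * 1e-3) * d**2 / (W * U * T)
n = -math.log10(B)                    # number of tenth-value layers
TVL1, TVLe = 37.0, 33.0               # first/equilibrium TVL, concrete at 6 MV (cm)
thickness = TVL1 + (n - 1) * TVLe     # two-TVL barrier model
print(round(n, 2), round(thickness, 1))   # → 5.54 186.8
```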

  4. Development of a power-period calculation unit for nuclear reactor Control; Etude et realisation d'un ensemble de calcul puissance periode pour le controle d'un reacteur nucleaire

    Energy Technology Data Exchange (ETDEWEB)

    Martin, J [Commissariat a l' Energie Atomique, Centre d' Etudes Nucleaires de Saclay, 91 - Gif-sur-Yvette (France)

    1966-10-01

    The apparatus studied is a digital calculating assembly which makes it possible to compute and to display numerically the period and power of a nuclear reactor during operation, from start-up to nominal power. The pulses from a fission chamber are analyzed continuously in real time. A small number of elements is required because of the systematic use of a calculation technique comprising the determination of a base-2 logarithm by a linear approximation. The accuracy obtained for the period is of the order of 14%; the response time is of the order of the calculated period value. An approximate value of the power (to within 30%) is given at each calculation cycle, together with the power thresholds required for the control. (author)
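
The base-2-logarithm-by-linear-approximation technique can be sketched as follows: write x = 2ᵉ·m with 1 ≤ m < 2 and take log₂(x) ≈ e + (m − 1). This is a reconstruction of the general technique, not the unit's actual circuitry; the worst-case error of the single linear segment is about 0.086.

```python
import math

def log2_linear(x):
    """Base-2 logarithm by linear approximation: reduce x to
    x = 2**e * m with 1 <= m < 2, then log2(x) ~ e + (m - 1)."""
    e = 0
    while x >= 2.0:      # halve down into [1, 2)
        x /= 2.0
        e += 1
    while x < 1.0:       # or double up into [1, 2)
        x *= 2.0
        e -= 1
    return e + (x - 1.0)

# Exact at powers of two; worst-case error of the linear segment:
worst = max(abs(log2_linear(i / 100) - math.log2(i / 100))
            for i in range(1, 1000))
print(log2_linear(8.0), round(worst, 3))   # → 3.0 0.086
```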

  5. Generation of input parameters for OSPM calculations. Sensitivity analysis of a method based on a questionnaire

    Energy Technology Data Exchange (ETDEWEB)

    Vignati, E.; Hertel, O.; Berkowicz, R. [National Environmental Research Inst., Dept. of Atmospheric Enviroment (Denmark); Raaschou-Nielsen, O. [Danish Cancer Society, Division of Cancer Epidemiology (Denmark)

    1997-05-01

    The method for generation of the input data for the calculations with OSPM is presented in this report. The described method, which is based on information provided by a questionnaire, will be used for model calculations of long term exposure for a large number of children in connection with an epidemiological study. A test of the calculation method has been performed at a few locations for which detailed measurements of air pollution, meteorological data and traffic were available. Comparisons between measured and calculated concentrations were made for hourly, monthly and yearly values. Besides the measured concentrations, the test results were compared to results obtained with the optimal street configuration data and measured traffic. The main conclusions drawn from this investigation are: (1) The calculation method works satisfactorily for long term averages, whereas the uncertainties are high when short term averages are considered. (2) The street width is one of the most crucial input parameters for the calculation of street pollution levels for both short and long term averages. Using H.C. Andersens Boulevard as an example, it was shown that estimation of street width based on traffic amount can lead to large overestimation of the concentration levels (in this case 50% for NO{sub x} and 30% for NO{sub 2}). (3) The street orientation and geometry are important for prediction of short term concentrations, but this importance diminishes for longer term averages. (4) The uncertainties in diurnal traffic profiles can influence the accuracy of short term averages, but are less important for long term averages. The correlation is good between modelled and measured concentrations when the actual background concentrations are replaced with the generated values. Even though extreme situations are difficult to reproduce with this method, the comparison between the yearly averaged modelled and measured concentrations is very good. (LN) 20 refs.

  6. Global nuclear-structure calculations

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.

    1990-01-01

    The revival of interest in nuclear ground-state octupole deformations that occurred in the 1980s was stimulated by observations in 1980 of particularly large deviations between calculated and experimental masses in the Ra region, in a global calculation of nuclear ground-state masses. By minimizing the total potential energy with respect to octupole shape degrees of freedom in addition to the ε₂ and ε₄ used originally, a vastly improved agreement between calculated and experimental masses was obtained. To study the global behavior of and interrelationships between other nuclear properties, we calculate nuclear ground-state masses, spins, pairing gaps and β-decay half-lives and compare the results to experimental quantities. The calculations are based on the macroscopic-microscopic approach, with the microscopic contributions calculated in a folded-Yukawa single-particle potential

  7. Dielectric Response at THz Frequencies of Fe Water Complexes and Their Interaction with O3 Calculated by Density Functional Theory

    Science.gov (United States)

    2012-10-24

    The transition state is the geometric arrangement of the atoms in a chemical system at the maximal peak of the energy surface separating reactants from products. The calculation of the ground-state resonance structure, performed using DFT, serves to construct parameterized dielectric response functions for excitation.

  8. Verification of EPA's " Preliminary remediation goals for radionuclides" (PRG) electronic calculator

    Energy Technology Data Exchange (ETDEWEB)

    Stagich, B. H. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-03-29

    The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides information on establishing PRGs for radionuclides at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) sites with radioactive contamination (Verification Study Charge, Background). These risk-based PRGs set concentration limits using carcinogenic toxicity values under specific exposure conditions (PRG User’s Guide, Section 1). The purpose of this verification study is to ascertain that the computer code has no inherent numerical problems in obtaining solutions and to ensure that the equations are programmed correctly.

  9. SDT: a virus classification tool based on pairwise sequence alignment and identity calculation.

    Directory of Open Access Journals (Sweden)

    Brejnev Muhizi Muhire

    The perpetually increasing rate at which viral full-genome sequences are being determined is creating a pressing demand for computational tools that will aid the objective classification of these genome sequences. Taxonomic classification approaches that are based on pairwise genetic identity measures are potentially highly automatable and are progressively gaining favour with the International Committee on Taxonomy of Viruses (ICTV). There are, however, various issues with the calculation of such measures that could potentially undermine the accuracy and consistency with which they can be applied to virus classification. Firstly, pairwise sequence identities computed based on multiple sequence alignments rather than on multiple independent pairwise alignments can lead to the deflation of identity scores with increasing dataset sizes. Also, when gap-characters need to be introduced during sequence alignments to account for insertions and deletions, methodological variations in the way that these characters are introduced and handled during pairwise genetic identity calculations can cause high degrees of inconsistency in the way that different methods classify the same sets of sequences. Here we present Sequence Demarcation Tool (SDT), a free, user-friendly computer program that aims to provide a robust and highly reproducible means of objectively using pairwise genetic identity calculations to classify any set of nucleotide or amino acid sequences. SDT can produce publication-quality pairwise identity plots and colour-coded distance matrices to further aid the classification of sequences according to ICTV approved taxonomic demarcation criteria. Besides a graphical interface version of the program for Windows computers, command-line versions of the program are available for a variety of different operating systems (including a parallel version for cluster computing platforms).
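
The core pairwise-identity computation can be sketched as follows; the gap-handling convention shown (shared gaps skipped, a gap against a residue counted as a mismatch) is one of several the article discusses, not necessarily SDT's default.

```python
def pairwise_identity(a, b):
    """Fraction of identical positions in one pairwise alignment
    (sequences given with '-' as the gap character)."""
    same = total = 0
    for x, y in zip(a, b):
        if x == '-' and y == '-':
            continue          # positions gapped in both are skipped
        total += 1
        if x == y:            # identical residues
            same += 1
    return same / total

print(round(pairwise_identity("ATG-CA", "ATGTCA"), 3))   # → 0.833
```

A full classifier would compute this for every sequence pair (each aligned independently, as the article advocates) and cluster the resulting identity matrix against taxonomic demarcation thresholds.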

  10. Criticality criteria for submissions based on calculations

    International Nuclear Information System (INIS)

    Burgess, M.H.

    1975-06-01

    Calculations used in criticality clearances are subject to errors from various sources, and allowance must be made for these errors in assessing the safety of a system. A simple set of guidelines is defined, drawing attention to each source of error, and recommendations as to its application are made. (author)

  11. Model-based calculations of off-axis ratio of conic beams for a dedicated 6 MV radiosurgery unit

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J. N.; Ding, X.; Du, W.; Pino, R. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiation Oncology, Methodist Hospital, Houston, Texas 77030 (United States)

    2010-10-15

    Purpose: Because the small-radius photon beams shaped by cones in stereotactic radiosurgery (SRS) lack lateral electronic equilibrium, and because a detector has a finite cross section, direct experimental measurement of dosimetric data for these beams can be subject to large uncertainties. As the dose calculation accuracy of a treatment planning system largely depends on how well the dosimetric data are measured during the machine's commissioning, there is a critical need for an independent method to validate measured results. Therefore, the authors studied the model-based calculation as an approach to validate measured off-axis ratios (OARs). Methods: The authors previously used a two-component analytical model to calculate central axis dose and associated dosimetric data (e.g., scatter factors and tissue-maximum ratio) in a water phantom and found excellent agreement between the calculated and the measured central axis doses for small 6 MV SRS conic beams. The model was based on that of Nizin and Mooij [''An approximation of central-axis absorbed dose in narrow photon beams,'' Med. Phys. 24, 1775-1780 (1997)] but was extended to account for apparent attenuation, spectral differences between broad and narrow beams, and the need for stricter scatter dose calculations for clinical beams. In this study, the authors applied Clarkson integration to this model to calculate OARs for conic beams. OARs were calculated for selected cones with radii from 0.2 to 1.0 cm. To allow comparisons, the authors also directly measured OARs using stereotactic diode (SFD), microchamber, and film dosimetry techniques. The calculated results were machine-specific and independent of direct measurement data for these beams. Results: For these conic beams, the calculated OARs were in excellent agreement with the data measured using an SFD. The discrepancies in radii and in 80%-20% penumbra were both within 0.01 cm. Using SFD-measured OARs as the reference data, the

  12. Analytic energy derivatives for the calculation of the first-order molecular properties using the domain-based local pair-natural orbital coupled-cluster theory

    Science.gov (United States)

    Datta, Dipayan; Kossmann, Simone; Neese, Frank

    2016-09-01

    The domain-based local pair-natural orbital coupled-cluster (DLPNO-CC) theory has recently emerged as an efficient and powerful quantum-chemical method for the calculation of energies of molecules comprised of several hundred atoms. It has been demonstrated that the DLPNO-CC approach attains the accuracy of a standard canonical coupled-cluster calculation to about 99.9% of the basis set correlation energy while realizing linear scaling of the computational cost with respect to system size. This is achieved by combining (a) localized occupied orbitals, (b) large virtual orbital correlation domains spanned by the projected atomic orbitals (PAOs), and (c) compaction of the virtual space through a truncated pair natural orbital (PNO) basis. In this paper, we report on the implementation of an analytic scheme for the calculation of the first derivatives of the DLPNO-CC energy for basis set independent perturbations within the singles and doubles approximation (DLPNO-CCSD) for closed-shell molecules. Perturbation-independent one-particle density matrices have been implemented in order to account for the response of the CC wave function to the external perturbation. Orbital-relaxation effects due to external perturbation are not taken into account in the current implementation. We investigate in detail the dependence of the computed first-order electrical properties (e.g., dipole moment) on the three major truncation parameters used in a DLPNO-CC calculation, namely, the natural orbital occupation number cutoff used for the construction of the PNOs, the weak electron-pair cutoff, and the domain size cutoff. No additional truncation parameter has been introduced for property calculation. We present benchmark calculations on dipole moments for a set of 10 molecules consisting of 20-40 atoms. We demonstrate that 98%-99% accuracy relative to the canonical CCSD results can be consistently achieved in these calculations. However, this comes with the price of tightening the

  13. Dispersion calculation method based on S-transform and coordinate rotation for Love channel waves with two components

    Science.gov (United States)

    Feng, Lei; Zhang, Yugui

    2017-08-01

    Dispersion analysis is an important part of in-seam seismic data processing, and the calculation accuracy of the dispersion curve directly influences pickup errors of channel wave travel time. To extract an accurate channel wave dispersion curve from in-seam seismic two-component signals, we proposed a time-frequency analysis method based on single-trace signal processing; in addition, we formulated a dispersion calculation equation, based on S-transform, with a freely adjusted filter window width. To unify the azimuth of seismic wave propagation received by a two-component geophone, the original in-seam seismic data undergoes coordinate rotation. The rotation angle can be calculated based on P-wave characteristics, with high energy in the wave propagation direction and weak energy in the vertical direction. With this angle acquisition, a two-component signal can be converted to horizontal and vertical directions. Because Love channel waves have a particle vibration track perpendicular to the wave propagation direction, the signal in the horizontal and vertical directions is mainly Love channel waves. More accurate dispersion characters of Love channel waves can be extracted after the coordinate rotation of two-component signals.
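
    The coordinate-rotation step described above can be sketched as follows. The principal-axis (maximum-energy) angle estimate below is a common stand-in for the P-wave energy criterion in the abstract, and all names are illustrative:

    ```python
    import math

    def rotation_angle(x, y):
        """Azimuth of dominant particle motion from two-component traces,
        via the principal axis of the hodogram (maximum-energy direction) --
        an assumed stand-in for the P-wave-based estimate in the abstract."""
        sxx = sum(u * u for u in x)
        syy = sum(v * v for v in y)
        sxy = sum(u * v for u, v in zip(x, y))
        return 0.5 * math.atan2(2.0 * sxy, sxx - syy)

    def rotate(x, y, theta):
        """Rotate traces into the propagation direction and the direction
        perpendicular to it (where Love channel waves dominate)."""
        c, s = math.cos(theta), math.sin(theta)
        along = [c * u + s * v for u, v in zip(x, y)]
        perp = [-s * u + c * v for u, v in zip(x, y)]
        return along, perp

    # A linearly polarised arrival at 30 degrees lands entirely on the
    # along-propagation component after rotation.
    sig = [0.0, 1.0, -0.5, 0.8, -1.0]
    theta_true = math.radians(30.0)
    x = [math.cos(theta_true) * s for s in sig]
    y = [math.sin(theta_true) * s for s in sig]
    theta = rotation_angle(x, y)
    along, perp = rotate(x, y, theta)
    print(round(math.degrees(theta), 1), max(abs(v) for v in perp) < 1e-12)
    ```

    On real two-component data the angle would be estimated from the P-wave window only, and the dispersion analysis would then be run on the rotated components.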

  14. Pile Load Capacity – Calculation Methods

    Directory of Open Access Journals (Sweden)

    Wrana Bogumił

    2015-12-01

    Full Text Available The article is a review of current problems in foundation pile capacity calculations. It considers the main principles of pile capacity calculations presented in Eurocode 7 and other methods, with adequate explanations. Two main methods are presented: the α-method, used to calculate the short-term load capacity of piles in cohesive soils, and the β-method, used to calculate the long-term load capacity of piles in both cohesive and cohesionless soils. Moreover, methods based on CPTu cone penetration test results are presented, as well as the pile capacity problem based on static tests.
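
    As a rough illustration of the α-method mentioned above, the sketch below sums adhesion-weighted shaft resistance over cohesive layers and adds an end-bearing term with the textbook bearing factor Nc = 9; the coefficients and geometry are invented, not taken from the article:

    ```python
    import math

    def pile_capacity_alpha(layers, base_su, perimeter, base_area, Nc=9.0):
        """Illustrative total (alpha-method) capacity of a pile in clay.

        layers: (adhesion factor alpha, undrained strength su [kPa], thickness [m])
        Returns (shaft, base, total) resistance in kN.  Nc = 9 and the layer
        data below are textbook-style values, not taken from the article.
        """
        shaft = sum(alpha * su * perimeter * t for alpha, su, t in layers)
        base = Nc * base_su * base_area
        return shaft, base, shaft + base

    d = 0.6                        # pile diameter [m]
    layers = [(0.9, 40.0, 5.0),    # soft clay: high adhesion, low strength
              (0.5, 120.0, 8.0)]   # stiff clay: lower adhesion factor
    shaft, base, total = pile_capacity_alpha(
        layers, base_su=120.0,
        perimeter=math.pi * d, base_area=math.pi * d ** 2 / 4.0)
    print(round(shaft, 1), round(base, 1), round(total, 1))
    ```

    A design calculation would additionally apply the partial factors prescribed by Eurocode 7; they are omitted here for brevity.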

  15. Spatially dependent burnup implementation into the nodal program based on the finite element response matrix method

    International Nuclear Information System (INIS)

    Yoriyaz, H.

    1986-01-01

    In this work a spatial burnup scheme and feedback effects have been implemented into the FERM ('Finite Element Response Matrix') program. The spatially dependent neutronic parameters have been considered at three levels: zonewise calculation, assemblywise calculation and pointwise calculation. Flux and power distributions and the multiplication factor were calculated and compared with the results obtained by the CITATION program. These comparisons showed that processing time in the FERM code was hundreds of times shorter and no significant difference was observed in the assembly average power distribution. (Author) [pt

  16. A simplified calculation procedure for mass isotopomer distribution analysis (MIDA) based on multiple linear regression.

    Science.gov (United States)

    Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio

    2016-10-01

    We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
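
    The regression idea can be reduced to a one-parameter sketch: if the observed isotopomer pattern is a mixture of an unlabelled pattern and a fully labelled pattern, the newly synthesized fraction is the least-squares mixing coefficient. The spectra below are invented for illustration (the article's full method also estimates the precursor pool enrichment simultaneously):

    ```python
    import numpy as np

    # Hypothetical isotopomer abundances (M+0 .. M+6) for a tryptic peptide
    # with three glycine subunits: fully unlabelled, and fully synthesised at
    # a fixed 13C2-glycine precursor enrichment.  Values are illustrative,
    # not the article's data.
    natural = np.array([0.88, 0.09, 0.02, 0.01, 0.00, 0.00, 0.00])
    labelled = np.array([0.40, 0.05, 0.35, 0.04, 0.12, 0.01, 0.03])

    def fractional_synthesis(observed, natural, labelled):
        """Least-squares estimate of f in observed = (1-f)*natural + f*labelled.

        A one-parameter reduction of the multiple-linear-regression idea in
        the abstract; the projection below is the closed-form solution."""
        x = labelled - natural
        return float(np.dot(x, observed - natural) / np.dot(x, x))

    # A protein pool in which 30% of the peptide is newly synthesised
    observed = 0.7 * natural + 0.3 * labelled
    print(round(fractional_synthesis(observed, natural, labelled), 3))  # prints 0.3
    ```

    With several candidate precursor enrichments, the same least-squares machinery extends to a design matrix with one basis spectrum per enrichment, which is what makes the calculation feasible in a spreadsheet.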

  17. Shape coexistence, Lanczos techniques, and large-basis shell-model calculations

    Energy Technology Data Exchange (ETDEWEB)

    Haxton, W C [Washington Univ., Seattle, WA (United States). Dept. of Physics

    1992-08-01

    I discuss numerical many-body techniques based on the Lanczos algorithm and their applications to nuclear structure problems. Examples include shape coexistence, inclusive response functions, and weak interaction rates in {sup 16}O; weak-coupling descriptions of the 0{sup +} bands in isotopes of Ge and Se; and the evaluation of the nuclear Green's functions that arise in two-neutrino {beta}{beta} decay and in nuclear anapole and electric dipole moment calculations. (author). 11 refs., 2 tabs., 4 figs.

  18. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Christoforou, Stavros, E-mail: stavros.christoforou@gmail.com [Kirinthou 17, 34100, Chalkida (Greece); Hoogenboom, J. Eduard, E-mail: j.e.hoogenboom@tudelft.nl [Department of Applied Sciences, Delft University of Technology (Netherlands)

    2011-07-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k{sub eff} estimate is halved compared to a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate an appropriate increase in the efficiency of the calculation, because of the higher CPU time cost. (author)
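
    The principle behind such a scheme can be shown with a one-dimensional toy problem: sampling from a density proportional to the integrand plays the role that the deterministic adjoint plays in biasing the transport kernels. This is an illustration of the zero-variance idea only, not of the MCNP5 implementation:

    ```python
    import math
    import random

    # Estimate I = E[exp(-x)] for x ~ Uniform(0, 1), once by analogue
    # sampling and once by importance sampling from a density proportional
    # to the integrand itself.  When the sampling density is exactly
    # proportional to the integrand, every sample carries the same value
    # and the estimator's variance is zero.
    random.seed(1)
    exact = 1.0 - math.exp(-1.0)        # integral of exp(-x) on [0, 1]

    def analogue(n):
        return [math.exp(-random.random()) for _ in range(n)]

    def zero_variance(n):
        samples = []
        for _ in range(n):
            # Draw x from p(x) = exp(-x) / exact by inverse-CDF sampling
            x = -math.log(1.0 - random.random() * exact)
            w = 1.0 / (math.exp(-x) / exact)   # weight p0(x)/p(x), p0 uniform
            samples.append(math.exp(-x) * w)   # f(x) * w == exact, always
        return samples

    def variance(samples):
        return sum((v - exact) ** 2 for v in samples) / len(samples)

    print(variance(analogue(10_000)) > variance(zero_variance(10_000)))  # True
    ```

    In practice the adjoint is only known approximately, so the variance is reduced rather than eliminated, and (as the abstract notes) the extra work per history can offset the gain.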

  19. Continuous energy Monte Carlo calculations for randomly distributed spherical fuels based on statistical geometry model

    Energy Technology Data Exchange (ETDEWEB)

    Murata, Isao [Osaka Univ., Suita (Japan); Mori, Takamasa; Nakagawa, Masayuki; Itakura, Hirofumi

    1996-03-01

    The method to calculate neutronics parameters of a core composed of randomly distributed spherical fuels has been developed based on a statistical geometry model with a continuous energy Monte Carlo method. This method was implemented in the general purpose Monte Carlo code MCNP, and a new code, MCNP-CFP, has been developed. This paper describes the model and method, how to use it, and the validation results. In the Monte Carlo calculation, the location of a spherical fuel is sampled probabilistically along the particle flight path from the spatial probability distribution of spherical fuels, called the nearest neighbor distribution (NND). This sampling method was validated through the following two comparisons: (1) calculations of the inventory of coated fuel particles (CFPs) in a fuel compact by both the track length estimator and the direct evaluation method, and (2) criticality calculations for ordered packed geometries. The method was also confirmed by applying it to an analysis of the critical assembly experiment at VHTRC. The method established in the present study is unique in that it uses a probabilistic model of a geometry with a great number of randomly distributed spherical fuels. With future speed-up by vector or parallel computation, it is expected to be widely used in calculations of nuclear reactor cores, especially HTGR cores. (author).

  20. Promoting Culturally Responsive Standards-Based Teaching

    Science.gov (United States)

    Saifer, Steffen; Barton, Rhonda

    2007-01-01

    Culturally responsive standards-based (CRSB) teaching can help bring diverse school communities together and make learning meaningful. Unlike multicultural education--which is an important way to incorporate the world's cultural and ethnic diversity into lessons--CRSB teaching draws on the experiences, understanding, views, concepts, and ways of…

  1. Second reference calculation for the WIPP

    International Nuclear Information System (INIS)

    Branstetter, L.J.

    1985-03-01

    Results of the second reference calculation for the Waste Isolation Pilot Plant (WIPP) project using the dynamic relaxation finite element code SANCHO are presented. This reference calculation is intended to predict the response of a typical panel of excavated rooms designed for storage of nonheat-producing nuclear waste. Results are presented that include relevant deformations, relative clay seam displacements, and stress and strain profiles. This calculation is a particular solution obtained by a computer code, which has proven analytic capabilities when compared with other structural finite element codes. It is hoped that the results presented here will be useful in providing scoping values for defining experiments and for developing instrumentation. It is also hoped that the calculation will be useful as part of an exercise in developing a methodology for performing important design calculations by more than one analyst using more than one computer code, and for defining internal Quality Assurance (QA) procedures for such calculations. 27 refs., 15 figs

  2. Seismic response of base isolated auxiliary building with age related degradation

    International Nuclear Information System (INIS)

    Park, Jun Hee; Choun, Young Sun; Choi, In Kil

    2012-01-01

    The aging of an isolator affects not only the mechanical properties of the isolator but also the dynamic properties of the upper structure, such as the change in stiffness, deformation capacity, load bearing capacity, creep, and damping. Therefore, the seismic response of base isolated structures will change with time. The floor response in base isolated nuclear power plants (NPPs) can change particularly because of the change in stiffness and damping of the isolator. The increased seismic response due to the aging of the isolator can cause mechanical problems for equipment located in the NPPs. Therefore, it is necessary to evaluate the seismic response of base isolated NPPs with age related degradation. In this study, the seismic responses for a base isolated auxiliary building of SHIN KORI 3 and 4 with age related degradation were investigated using a nonlinear time history analysis. Floor response spectra (FRS) were presented over time to identify the change in seismic demand under the aging of the isolator

  3. Effect of XCOM photoelectric cross-sections on dosimetric quantities calculated with EGSnrc

    International Nuclear Information System (INIS)

    Hobeila, F.; Seuntjens, J.P.

    2002-01-01

    The EGSnrc Monte-Carlo code system incorporates improved low energy photon physics such as atomic relaxations and the implementation of bound Compton cross-sections using the impulse approximation. The total cross-section for photoelectric absorption, however, still relies on the data by Storm and Israel (S and I). Yet, low energy applications such as brachytherapy (e.g. 125I) require up-to-date low-energy photoelectric cross-section data. In this paper, we study the dosimetric effects of a simple implementation of NIST XCOM-based photoelectric cross-sections in EGSnrc. This is done by calculating mass energy-absorption coefficients, absorbed dose from point sources and kilovoltage x-ray beams, and ion chamber response. In the EGS code system, the PEGS4 routine reads the photoelectric and pair cross-sections for elements from a file (pgspepr.dat) and provides numerical fits for compounds which will be used by EGSnrc. We updated the photoelectric cross-sections of the pgspepr.dat file with the XCOM total photoelectric absorption cross-sections from NIST. After validation of this new implementation, we studied its effects on a number of dosimetrically relevant quantities. Firstly, we calculated mass energy-absorption coefficients by scoring energy transferred in a thin slab of water and air using the DOSRZnrc user code. Secondly, we calculated inverse-square corrected absorbed dose distributions from point sources in water by using an internally developed user code, KERNELph. Thirdly, we studied the differences in free-air ion chamber response calculations. Ion chamber response is defined as the dose to the cavity of an ionization chamber, D gas , positioned with its effective point of measurement at a reference point, divided by the air-kerma measured free-in-air at the same point. The ion chamber response was calculated using monoenergetic photon beams of energy 10 keV to 200 keV. The comparison of the Storm and Israel photoelectric cross-sections with the XCOM cross

  4. Calculating Traffic based on Road Sensor Data

    NARCIS (Netherlands)

    Bisseling, Rob; Gao, Fengnan; Hafkenscheid, Patrick; Idema, Reijer; Jetka, Tomasz; Guerra Ones, Valia; Rata, Debanshu; Sikora, Monika

    2014-01-01

    Road sensors gather a lot of statistical data about traffic. In this paper, we discuss how a measure for the amount of traffic on the roads can be derived from this data, such that the measure is independent of the number and placement of sensors, and the calculations can be performed quickly for

  5. SU-E-T-161: Evaluation of Dose Calculation Based On Cone-Beam CT

    International Nuclear Information System (INIS)

    Abe, T; Nakazawa, T; Saitou, Y; Nakata, A; Yano, M; Tateoka, K; Fujimoto, K; Sakata, K

    2014-01-01

    Purpose: The purpose of this study is to convert pixel values in cone-beam CT (CBCT) using histograms of pixel values in the simulation CT (sim-CT) and the CBCT images, and to evaluate the accuracy of dose calculation based on the CBCT. Methods: The sim-CT and CBCT images immediately before treatment of 10 prostate cancer patients were acquired. Because of insufficient calibration of the pixel values in the CBCT, it is difficult to use them directly for dose calculation. The pixel values in the CBCT images were therefore converted using an in-house program. Seven-field treatment plans (original plans) created on the sim-CT images were applied to the CBCT images and the dose distributions were recalculated with the same monitor units (MUs). These prescription doses were compared with those of the original plans. Results: For the pixel value conversion in the CBCT images, the mean differences of pixel values for the prostate, subcutaneous adipose tissue, muscle and right femur were −10.78±34.60, 11.78±41.06, 29.49±36.99 and 0.14±31.15, respectively. For the calculated doses, the mean differences of prescription doses for the 7 fields were 4.13±0.95%, 0.34±0.86%, −0.05±0.55%, 1.35±0.98%, 1.77±0.56%, 0.89±0.69% and 1.69±0.71%, respectively, and as a whole, the difference of prescription dose was 1.54±0.4%. Conclusion: Dose calculation on the CBCT images achieves an accuracy of <2% by using this pixel value conversion program. This may enable implementation of efficient adaptive radiotherapy
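
    One plausible reading of such a histogram-based conversion is cumulative-histogram matching; the sketch below is an assumption about how the in-house program might work, demonstrated on synthetic data rather than CT images:

    ```python
    import numpy as np

    def histogram_match(cbct, simct):
        """Map CBCT pixel values onto the sim-CT intensity scale by matching
        cumulative histograms -- an assumed stand-in for the in-house
        conversion program described in the abstract."""
        src, counts = np.unique(cbct.ravel(), return_counts=True)
        src_cdf = np.cumsum(counts) / cbct.size
        ref = np.sort(simct.ravel())
        ref_quantiles = np.linspace(0.0, 1.0, ref.size)
        mapped = np.interp(src_cdf, ref_quantiles, ref)  # per-value target level
        return np.interp(cbct, src, mapped)

    rng = np.random.default_rng(0)
    simct = rng.normal(0.0, 50.0, 10_000)               # sim-CT values (HU-like)
    cbct = 0.5 * rng.normal(0.0, 50.0, 10_000) + 200.0  # mis-calibrated CBCT
    converted = histogram_match(cbct, simct)
    # After conversion the CBCT histogram sits on the sim-CT scale
    print(round(float(converted.mean()), 1), round(float(converted.std()), 1))
    ```

    A per-tissue mapping (prostate, fat, muscle, bone), as the reported per-structure differences suggest, would apply the same idea within segmented regions.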

  6. On the validity of microscopic calculations of double-quantum-dot spin qubits based on Fock-Darwin states

    Science.gov (United States)

    Chan, GuoXuan; Wang, Xin

    2018-04-01

    We consider two typical approximations that are used in the microscopic calculations of double-quantum-dot spin qubits, namely, the Heitler-London (HL) and the Hund-Mulliken (HM) approximations, which use linear combinations of Fock-Darwin states to approximate the two-electron states under the double-well confinement potential. We compared these results to a case in which the solution to a one-dimensional Schrödinger equation was exactly known and found that typical microscopic calculations based on Fock-Darwin states substantially underestimate the value of the exchange interaction, which is the key parameter that controls quantum dot spin qubits. This underestimation originates from the lack of tunneling of Fock-Darwin states, which are accurate only in the case of a single potential well. Our results suggest that the accuracy of current two-dimensional molecular-orbital-theoretical calculations based on Fock-Darwin states should be revisited, since the underestimation could only worsen in dimensions higher than one.

  7. PHEBUS-FPTO Benchmark calculations

    International Nuclear Information System (INIS)

    Shepherd, I.; Ball, A.; Trambauer, K.; Barbero, F.; Olivar Dominguez, F.; Herranz, L.; Biasi, L.; Fermandjian, J.; Hocke, K.

    1991-01-01

    This report summarizes a set of pre-test predictions made for the first Phebus-FP test, FPT-O. There were many different calculations, performed by various organizations, and they represent the first attempt to calculate the whole experimental sequence, from bundle to containment. Quantitative agreement between the various calculations was not good, but the particular models in the code responsible for disagreements were mostly identified. A consensus view was formed as to how the test would proceed. It was found that successful execution of the test will require a different operating procedure than had been assumed here. Critical areas which require close attention are the need to devise a strategy for the power and flow in the bundle that takes account of uncertainties in the modelling and the shroud conductivity, and the necessity to develop a reliable method to achieve the desired thermalhydraulic conditions in the containment

  8. Simulation of dynamic response of nuclear power plant based on user-defined model in PSASP

    International Nuclear Information System (INIS)

    Zhao Jie; Liu Dichen; Xiong Li; Chen Qi; Du Zhi; Lei Qingsheng

    2010-01-01

    Based on the energy transformation regularity in the physical process of pressurized water reactors (PWRs), PWR NPP models are established in PSASP (Power System Analysis Software Package), which are applicable for calculating the dynamic process of a PWR NPP and power system transient stabilization. The power dynamic characteristics of the PWR NPP are simulated and analyzed, including the PWR self-stability, self-regulation and power step responses under the power regulation system. The results indicate that the PWR NPP can withstand certain external disturbances and a 10% Pn step under negative temperature feedback. The regulation speed of PWR power can reach 5% Pn/min under the power regulation system, which meets the requirement of peak regulation in the power grid. (authors)

  9. Radiation dose response simulation for biomechanical-based deformable image registration of head and neck cancer treatment

    International Nuclear Information System (INIS)

    Al-Mayah, Adil; Moseley, Joanne; Hunter, Shannon; Brock, Kristy

    2015-01-01

    Biomechanical-based deformable image registration is conducted on the head and neck region. Patient specific 3D finite element models consisting of parotid glands (PG), submandibular glands (SG), tumor, vertebrae (VB), mandible, and external body are used to register pre-treatment MRI to post-treatment MR images to model the dose response using image data of five patients. The images are registered using combinations of vertebrae and mandible alignments, and surface projection of the external body as boundary conditions. In addition, the dose response is simulated by applying a new loading technique in the form of a dose-induced shrinkage using the dose-volume relationship. The dose-induced load is applied as dose-induced shrinkage of the tumor and four salivary glands. The Dice Similarity Coefficient (DSC) is calculated for the four salivary glands, and tumor to calculate the volume overlap of the structures after deformable registration. A substantial improvement in the registration is found by including the dose-induced shrinkage. The greatest registration improvement is found in the four glands where the average DSC increases from 0.53, 0.55, 0.32, and 0.37 to 0.68, 0.68, 0.51, and 0.49 in the left PG, right PG, left SG, and right SG, respectively by using bony alignment of vertebrae and mandible (M), body (B) surface projection and dose (D) (VB+M+B+D). (paper)
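
    The DSC reported above is simple to compute; a minimal sketch on toy voxel masks (the numbers are invented, not the study's contours):

    ```python
    def dice(a, b):
        """Dice Similarity Coefficient of two voxel sets:
        DSC = 2|A & B| / (|A| + |B|)."""
        a, b = set(a), set(b)
        return 2.0 * len(a & b) / (len(a) + len(b))

    # Toy 'contours' on a voxel grid: 10 and 8 voxels sharing 6
    before = {(i, 0, 0) for i in range(10)}
    after = {(i, 0, 0) for i in range(4, 12)}
    print(dice(before, after))  # 2*6 / (10+8) = 0.666...
    ```

    A DSC of 1 means the deformed structure exactly overlaps its target, which is why the increase from about 0.5 to about 0.7 for the glands indicates a substantially better registration.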

  10. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Abstract Background In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of the median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods A log-normal distribution for outcome data is assumed and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (the Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results The method attained the nominal power value in simulation studies and compared favourably with the Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
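
    A sketch of this style of calculation (not necessarily the authors' exact formula): for a log-normal variable with median m and untransformed variance V, the log-scale variance s² solves exp(s²)(exp(s²) − 1) = V/m², after which a standard normal-approximation two-sample formula on the log scale applies:

    ```python
    import math
    from statistics import NormalDist

    def n_per_group(median1, median2, variance, alpha=0.05, power=0.9):
        """Per-group n for a two-sample t-test on log-transformed data, with
        the effect specified as a difference in medians and the variance
        given on the untransformed scale (log-normal outcomes assumed)."""
        def log_var(m):
            # Solve exp(s2)*(exp(s2)-1) = V/m^2 for the log-scale variance s2
            r = variance / m ** 2
            return math.log((1.0 + math.sqrt(1.0 + 4.0 * r)) / 2.0)

        s2 = 0.5 * (log_var(median1) + log_var(median2))  # pooled log-scale var
        delta = math.log(median2) - math.log(median1)     # median ratio, logged
        z = NormalDist().inv_cdf
        n = 2.0 * s2 * (z(1.0 - alpha / 2.0) + z(power)) ** 2 / delta ** 2
        return math.ceil(n)

    # Detect a shift in median from 10 to 15 with untransformed variance 60
    print(n_per_group(10.0, 15.0, 60.0))  # 36 per group
    ```

    Note that on the log scale the difference in medians becomes a ratio, which is why specifying medians on the untransformed scale is enough to fix the effect size.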

  11. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    Energy Technology Data Exchange (ETDEWEB)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay gamma-quanta by the residuals in the activated structures and scoring the prompt doses of these gamma-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and showed a good agreement. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  12. Calculating Stress: From Entropy to a Thermodynamic Concept of Health and Disease

    Science.gov (United States)

    Nečesánek, Ivo; Konečný, David; Vasku, Anna

    2016-01-01

    To date, contemporary science has lacked a satisfactory tool for the objective expression of stress. This text thus introduces a new, thermodynamically derived approach to stress measurement, based on entropy production in time and independent of the quality or modality of a given stressor or a combination thereof. To this end, we propose a novel model of stress response based on thermodynamic modelling of entropy production, both in the tissues/organs and in regulatory feedbacks. Stress response is expressed in our model on the basis of stress entropic load (SEL), a variable we introduced previously; the mathematical expression of SEL, provided here for the first time, now allows us to describe the various states of a living system, including differentiating between states of health and disease. The resulting calculation of stress response regardless of the type of stressor(s) in question is thus poised to become an entirely new tool for predicting the development of a living system. PMID:26771542

  13. Aligning faith-based and national HIV/AIDS prevention responses? Factors influencing the HIV/AIDS prevention policy process and response of faith-based NGOs in Tanzania.

    Science.gov (United States)

    Morgan, Rosemary; Green, Andrew; Boesten, Jelke

    2014-05-01

    Faith-based organizations (FBOs) have a long tradition of providing HIV/AIDS prevention and mitigation services in Africa. The overall response of FBOs, however, has been controversial, particularly in regard to HIV/AIDS prevention and FBOs' rejection of condom use and promotion, which can conflict with and negatively influence national HIV/AIDS prevention response efforts. This article reports the findings from a study that explored the factors influencing the HIV/AIDS prevention policy process within faith-based non-governmental organizations (NGOs) of different faiths. These factors were examined within three faith-based NGOs in Dar es Salaam, Tanzania: a Catholic, an Anglican and a Muslim organization. The research used an exploratory, qualitative case-study approach, and employed a health policy analysis framework, examining the context, actor and process factors and how they interact to form content in terms of policy and its implementation within each organization. Three key factors were found to influence faith-based NGOs' HIV/AIDS prevention response in terms of both policy and its implementation: (1) the faith structure of which the organizations are a part, (2) the presence or absence of organizational policy and (3) the professional nature of the organizations and their actors. The interaction between these factors, and how actors negotiate between them, was found to shape the organizations' HIV/AIDS prevention response. This article reports on these factors and analyses the different HIV/AIDS prevention responses found within each organization. By understanding the factors that influence faith-based NGOs' HIV/AIDS prevention policy process, the overall faith-based response to HIV/AIDS, and how it corresponds to national response efforts, is better understood.
It is hoped that by doing so the government will be better able to identify how to best work with FBOs to meet national HIV/AIDS prevention targets, improving the overall role of FBOs in the fight against

  14. Knowledge-based dynamic network safety calculations. Wissensbasierte dynamische Netzsicherheitsberechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Kulicke, B [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany)]; Schlegel, S [Inst. fuer Hochspannungstechnik und Starkstromanlagen, Berlin (Germany)]

    1993-06-28

    An important part of network operation management is the assessment and maintenance of the security of supply. So far, control personnel have been supported only by static network analyses and safety calculations. The authors describe an expert system for dynamic network safety calculations that is coupled to a transputer-based real-time simulation program. They also introduce the system concept and the most important functions of the expert system. (orig.)

  15. Photon path distribution and optical responses of turbid media: theoretical analysis based on the microscopic Beer-Lambert law.

    Science.gov (United States)

    Tsuchiya, Y

    2001-08-01

    A concise theoretical treatment has been developed to describe the optical responses of a highly scattering inhomogeneous medium using functions of the photon path distribution (PPD). The treatment is based on the microscopic Beer-Lambert law and has been found to yield a complete set of optical responses for time- and frequency-domain measurements. The PPD is defined for possible photons having a total zigzag pathlength l between the points of light input and detection. Such a distribution is independent of the absorption properties of the medium and can be uniquely determined for the medium under quantification. Therefore, the PPD can be calculated with an imaginary reference medium having the same optical properties as the medium under quantification except for the absence of absorption. One of the advantages of this method is that the optical responses, such as the total attenuation and the mean pathlength, are expressed as functions of the PPD and the absorption distribution.
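The relations above can be sketched numerically: for a discretized PPD, weighting each pathlength bin by its Beer-Lambert survival factor yields the total attenuation and the mean pathlength. A minimal illustration (the function name, the two-bin PPD and the homogeneous-absorption assumption are mine, not the paper's):

```python
import math

def optical_responses(path_lengths, ppd, mu_a):
    """Total attenuation and mean pathlength from a discretized photon
    path distribution (PPD), following the microscopic Beer-Lambert law.
    path_lengths: zigzag pathlengths l (mm); ppd: their probabilities in
    the absorption-free reference medium (sum to 1); mu_a: a homogeneous
    absorption coefficient (1/mm) -- homogeneity is a simplification here."""
    weights = [p * math.exp(-mu_a * l) for l, p in zip(path_lengths, ppd)]
    survival = sum(weights)                  # detected fraction of photons
    attenuation = -math.log(survival)        # total attenuation
    mean_path = sum(l * w for l, w in zip(path_lengths, weights)) / survival
    return attenuation, mean_path

# toy two-bin PPD: equal chances of a 10 mm and a 20 mm zigzag path
att, mean_l = optical_responses([10.0, 20.0], [0.5, 0.5], 0.01)
```

With zero absorption the attenuation vanishes and the mean pathlength reduces to the PPD mean, as the paper's reference-medium construction requires.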

  16. Transient anisotropic magnetic field calculation

    International Nuclear Information System (INIS)

    Jesenik, Marko; Gorican, Viktor; Trlep, Mladen; Hamler, Anton; Stumberger, Bojan

    2006-01-01

    For anisotropic magnetic material, the nonlinear magnetic characteristics of the material are described by magnetization curves for different magnetization directions. The paper presents a transient finite element calculation of the magnetic field in anisotropic magnetic material based on the measured magnetization curves for different magnetization directions. For verification of the calculation method, some calculated results are compared with measurements.

  17. CVD diamond based soft X-ray detector with fast response

    International Nuclear Information System (INIS)

    Li Fang; Hou Lifei; Su Chunxiao; Yang Guohong; Liu Shenye

    2010-01-01

    A soft X-ray detector has been made from high-quality chemical vapor deposited (CVD) diamond with a micro-strip electrode structure. In a response-time measurement using a laser with a pulse width of 10 ps, the full width at half maximum of the waveform recorded on the oscilloscope was 115 ps. The rise time of the CVD diamond detector was calculated to be 49 ps. In an experiment on the laser prototype facility, the signal obtained by the CVD diamond detector was compared with that obtained by a soft X-ray spectrometer; the two signals coincided well. The detector is thus shown to be a reliable soft X-ray detector with fast response and high signal-to-noise ratio. (authors)

  18. Reactor core performance calculating device

    International Nuclear Information System (INIS)

    Tominaga, Kenji; Bando, Masaru; Sano, Hiroki; Maruyama, Hiromi.

    1995-01-01

    The device of the present invention can calculate a power distribution efficiently and at high speed using a plurality of calculation means, while taking the reactor state quantities into consideration. Namely, an input device takes data from measuring devices for reactor core state quantities, such as the large number of neutron detectors disposed in the reactor core for monitoring the reactor state during operation. An input data distribution device comprises a state recognition section and a data distribution section. The state recognition section recognizes the kind and amount of the inputted data and information about the calculation means. The data distribution section analyzes the characteristics of the inputted data, divides them into several groups and allocates them to each of the calculation means, based on the information from the state recognition section, for the purpose of calculating the reactor core performance efficiently at high speed. The plurality of calculation means calculate the power distribution of each region based on the allocated input data, to determine the power distribution of the entire reactor core. As a result, the reactor core can be evaluated with high accuracy and at high speed, whether for the whole reactor core or a partial region. (I.S.)

  19. Tight-binding approximations to time-dependent density functional theory — A fast approach for the calculation of electronically excited states

    Energy Technology Data Exchange (ETDEWEB)

    Rüger, Robert, E-mail: rueger@scm.com [Scientific Computing & Modelling NV, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig (Germany); Lenthe, Erik van [Scientific Computing & Modelling NV, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Heine, Thomas [Wilhelm-Ostwald-Institut für Physikalische und Theoretische Chemie, Linnéstr. 2, 04103 Leipzig (Germany); Visscher, Lucas [Department of Theoretical Chemistry, Vrije Universiteit Amsterdam, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands)

    2016-05-14

    We propose a new method of calculating electronically excited states that combines a density functional theory based ground state calculation with a linear response treatment that employs approximations used in the time-dependent density functional based tight binding (TD-DFTB) approach. The new method, termed TD-DFT+TB, does not rely on the DFTB parametrization and is therefore applicable to systems involving all combinations of elements. We show that the new method yields UV/Vis absorption spectra that are in excellent agreement with computationally much more expensive TD-DFT calculations. Errors in vertical excitation energies are reduced by a factor of two compared to TD-DFTB.

  20. Mixtures of toxic agents and attributable risk calculations

    International Nuclear Information System (INIS)

    Seiler, F.A.; Scott, B.R.

    1987-01-01

    Calculations of attributable risks have attracted increasing interest recently. However, these efforts have been limited mostly to one agent, radiation, and interactions with the effects of other toxic agents have not been taken into account. This paper outlines a generic approach to the calculation of attributable risks for an exposure to several toxic agents and the interaction effects associated with them. In this calculation, the partition of interaction terms between the responsible agents is of particular importance. At present, there are no rules on how to assign equitable shares, so one methodology is proposed and others are discussed briefly. For one example of an assignment, the standard errors of the attributable risks are determined in terms of the uncertainties of the input parameters, thus setting the stage for a comparison of the different shares of responsibility
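As a toy illustration of the partition problem, one candidate equity rule is to split a two-agent interaction term in proportion to the agents' individual attributable risks. This is only one of the possible rules the abstract alludes to; the function and all numbers below are invented for illustration:

```python
def attributable_shares(individual_risks, interaction):
    """Partition a joint interaction term among agents in proportion to
    their individual attributable risks -- one possible equity rule;
    the paper proposes one rule and briefly discusses alternatives."""
    total = sum(individual_risks)
    return [r + interaction * r / total for r in individual_risks]

# radiation plus one chemical agent with a synergistic interaction term
# (all risk numbers invented for illustration)
shares = attributable_shares([0.02, 0.01], 0.006)
```

By construction the shares sum to the total risk, individual terms plus interaction, so responsibility is fully allocated.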

  1. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    Energy Technology Data Exchange (ETDEWEB)

    Pribadi, Sugeng, E-mail: sugengpribadimsc@gmail.com [Badan Meteorologi Klimatologi Geofisika, Jl Angkasa I No. 2 Jakarta (Indonesia); Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan [Institut Teknologi Bandung, Jl. Ganesha 10, Bandung 40132 (Indonesia)

    2014-03-24

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M{sub o}), the moment magnitude (M{sub W}), the rupture duration (T{sub o}) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities from teleseismic wave signal processing of the initial P-wave phase with a bandpass filter of 0.001 Hz to 5 Hz. A total of 84 broadband seismometers at distances of 30° to 90° were used. The 2 June 1994 Banyuwangi earthquake with M{sub W}=7.8 and the 17 July 2006 Pangandaran earthquake with M{sub W}=7.7 meet the criteria for a tsunami earthquake, with ratio Θ=−6.1, long rupture duration T{sub o}>100 s and high tsunami H>7 m. The 2 September 2009 Tasikmalaya earthquake with M{sub W}=7.2, Θ=−5.1 and T{sub o}=27 s is characterized as a small tsunamigenic earthquake.

  2. Characterization of tsunamigenic earthquake in Java region based on seismic wave calculation

    International Nuclear Information System (INIS)

    Pribadi, Sugeng; Afnimar,; Puspito, Nanang T.; Ibrahim, Gunawan

    2014-01-01

    This study characterizes the source mechanism of tsunamigenic earthquakes based on seismic wave calculations. The source parameters used are the ratio (Θ) between the radiated seismic energy (E) and the seismic moment (M o ), the moment magnitude (M W ), the rupture duration (T o ) and the focal mechanism. These determine the types of tsunamigenic earthquake and tsunami earthquake. We calculate these quantities from teleseismic wave signal processing of the initial P-wave phase with a bandpass filter of 0.001 Hz to 5 Hz. A total of 84 broadband seismometers at distances of 30° to 90° were used. The 2 June 1994 Banyuwangi earthquake with M W =7.8 and the 17 July 2006 Pangandaran earthquake with M W =7.7 meet the criteria for a tsunami earthquake, with ratio Θ=−6.1, long rupture duration T o >100 s and high tsunami H>7 m. The 2 September 2009 Tasikmalaya earthquake with M W =7.2, Θ=−5.1 and T o =27 s is characterized as a small tsunamigenic earthquake
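The ratio criterion in the records above can be sketched as follows. The energy-to-moment parameter is Θ = log10(E/M0), and slow tsunami earthquakes combine a deficient Θ with a long rupture. The cut values used here (Θ ≤ −5.5, To > 100 s) are illustrative round numbers, not the authors' calibration:

```python
import math

def slowness_theta(radiated_energy, seismic_moment):
    """Energy-to-moment ratio parameter Theta = log10(E / Mo)."""
    return math.log10(radiated_energy / seismic_moment)

def classify(theta, rupture_duration_s):
    """Two-criterion screen mirroring the abstract: a deficient Theta
    together with a long rupture flags a tsunami earthquake.  The
    thresholds are illustrative, not the paper's calibration."""
    if theta <= -5.5 and rupture_duration_s > 100.0:
        return "tsunami earthquake"
    return "tsunamigenic earthquake"

# a 1994 Banyuwangi-like case from the abstract: Theta = -6.1, To > 100 s
label = classify(-6.1, 130.0)
```

With the abstract's Tasikmalaya values (Θ = −5.1, To = 27 s) the same screen returns the ordinary tsunamigenic class.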

  3. Lift calculations based on accepted wake models for animal flight are inconsistent and sensitive to vortex dynamics.

    Science.gov (United States)

    Gutierrez, Eric; Quinn, Daniel B; Chin, Diana D; Lentink, David

    2016-12-06

    There are three common methods for calculating the lift generated by a flying animal based on the measured airflow in the wake. However, these methods might not be accurate according to computational and robot-based studies of flapping wings. Here we test this hypothesis for the first time for a slowly flying Pacific parrotlet in still air using stereo particle image velocimetry recorded at 1000 Hz. The bird was trained to fly between two perches through a laser sheet wearing laser safety goggles. We found that the wingtip vortices generated during mid-downstroke advected down and broke up quickly, contradicting the frozen turbulence hypothesis typically assumed in animal flight experiments. The quasi-steady lift at mid-downstroke was estimated from the velocity field by applying the widely used Kutta-Joukowski theorem, vortex ring model, and actuator disk model. The calculated lift was found to be sensitive to the applied model and its parameters, including vortex span and the distance between the bird and the laser sheet, rendering these three accepted ways of calculating weight support inconsistent. The three models predict different aerodynamic force values at mid-downstroke compared with independent direct measurements made with an aerodynamic force platform that we had available for the same species flying over a similar distance. Whereas the lift predictions of the Kutta-Joukowski theorem and the vortex ring model stayed relatively constant despite vortex breakdown, their values were too low. In contrast, the actuator disk model predicted lift reasonably accurately before vortex breakdown, but predicted almost no lift during and after vortex breakdown. Some of these limitations might be better understood, and partially reconciled, if future animal flight studies report lift calculations based on all three quasi-steady lift models instead. This would also enable much-needed meta-studies of animal flight to derive bioinspired design principles for quasi-steady lift
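Of the three wake models, the Kutta-Joukowski theorem is the simplest to sketch: lift is estimated as L = ρUΓb from the measured wake circulation Γ and an assumed vortex span b. The numbers below are invented small-bird values, not the parrotlet data, and illustrate exactly the parameter sensitivity the study reports:

```python
def kutta_joukowski_lift(rho, speed, circulation, vortex_span):
    """Quasi-steady lift from wake circulation, L = rho * U * Gamma * b.
    The estimate is only as good as the chosen Gamma and vortex span b;
    the paper's point is that it is sensitive to exactly these inputs."""
    return rho * speed * circulation * vortex_span

# invented small-bird numbers: air density (kg/m^3), flight speed (m/s),
# circulation (m^2/s), vortex span (m)
lift_n = kutta_joukowski_lift(1.225, 2.0, 0.05, 0.2)
weight_n = 0.030 * 9.81            # a hypothetical 30 g bird
support_fraction = lift_n / weight_n
```

Varying the assumed vortex span by a few centimetres changes the computed weight support proportionally, which is why the three models disagree when their parameters are uncertain.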

  4. CONDOR: neutronic code for fuel elements calculation with rods

    International Nuclear Information System (INIS)

    Villarino, E.A.

    1990-01-01

    The CONDOR neutronic code is used for the calculation of fuel elements formed by fuel rods. The method employed to obtain the neutron flux is that of collision probabilities in a multigroup scheme in two-dimensional geometry. This code utilizes new calculation algorithms and normalization of the collision probabilities. Burn-up calculations can be performed, with the alternative of applying variational methods for response flux calculations or those corresponding to collision normalization. (Author)

  5. Accelerating atomic orbital-based electronic structure calculation via pole expansion and selected inversion

    International Nuclear Information System (INIS)

    Lin, Lin; Yang, Chao; Chen, Mohan; He, Lixin

    2013-01-01

    We describe how to apply the recently developed pole expansion and selected inversion (PEXSI) technique to Kohn–Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces (including both the Hellmann–Feynman force and the Pulay force) without using the eigenvalues and eigenvectors of the Kohn–Sham Hamiltonian. We also show how to update the chemical potential without using Kohn–Sham eigenvalues. The advantage of using PEXSI is that it has a computational complexity much lower than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEXSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEXSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEXSI are modest. This even makes it possible to perform Kohn–Sham DFT calculations for 10 000-atom nanotubes with a sequential implementation of the selected inversion algorithm. We also perform an accurate geometry optimization calculation on a truncated (8, 0) boron nitride nanotube system containing 1024 atoms. Numerical results indicate that the use of PEXSI does not lead to loss of the accuracy required in a practical DFT calculation. (paper)

  6. Computing Moment-Based Probability Tables for Self-Shielding Calculations in Lattice Codes

    International Nuclear Information System (INIS)

    Hebert, Alain; Coste, Mireille

    2002-01-01

    As part of the self-shielding model used in the APOLLO2 lattice code, probability tables are required to compute self-shielded cross sections for coarse energy groups (typically with 99 or 172 groups). This paper describes the replacement of the multiband tables (typically with 51 subgroups) with moment-based tables in release 2.5 of APOLLO2. An improved Ribon method is proposed to compute moment-based probability tables, allowing important savings in CPU resources while maintaining the accuracy of the self-shielding algorithm. Finally, a validation is presented in which the absorption rates obtained with each of these techniques are compared with exact values obtained using a fine-group elastic slowing-down calculation in the resolved energy domain. Other results, relative to the Rowlands' benchmark and to three assembly production cases, are also presented.
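The idea of a moment-based table can be illustrated in the minimal two-band case: given the first four cross-section moments, the subgroup cross sections are the roots of the degree-2 orthogonal polynomial of the moment sequence, and the weights then follow from the first two moments. This sketches the principle only; the improved Ribon method of the paper handles more bands and guards against ill-conditioning:

```python
import math

def two_band_table(m0, m1, m2, m3):
    """Build a 2-point (two-band) probability table {(w_i, s_i)} whose
    moments M_k = sum_i w_i * s_i**k reproduce m0..m3.  The s_i are the
    roots of the orthogonal polynomial s^2 + c1*s + c0 determined by
    requiring orthogonality to 1 and s under the moment functional."""
    # solve  m2 + c1*m1 + c0*m0 = 0  and  m3 + c1*m2 + c0*m1 = 0
    det = m1 * m1 - m0 * m2
    c1 = (m3 * m0 - m2 * m1) / det
    c0 = (m2 * m2 - m1 * m3) / det
    root = math.sqrt(c1 * c1 - 4.0 * c0)
    s1, s2 = (-c1 + root) / 2.0, (-c1 - root) / 2.0  # subgroup cross sections
    w1 = (m1 - m0 * s2) / (s1 - s2)                  # weights from M0, M1
    return [(w1, s1), (m0 - w1, s2)]

# moments of a known table {(0.7, 10 b), (0.3, 100 b)}; it is recovered
table = two_band_table(1.0, 37.0, 3070.0, 300700.0)
```

The recovered table reproduces all four input moments exactly, which is the defining property of moment-based (Gauss-quadrature-like) probability tables.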

  7. Calculation methods of reactivity using derivatives of nuclear power and FIR filter

    International Nuclear Information System (INIS)

    Diaz, Daniel Suescun

    2007-01-01

    This work presents two new methods for the solution of the inverse point kinetics equation. The first method is based on integration by parts of the integral in the inverse point kinetics equation, which results in a power series in terms of the time-dependent nuclear power. Applying certain conditions to the nuclear power, the reactivity is represented by the first and second derivatives of this nuclear power. This new calculation method for reactivity has special characteristics, among them the possibility of using different sampling periods and the possibility of restarting the calculation after an interruption (associated, for instance, with an equipment malfunction), allowing the calculation of reactivity in a non-continuous way. In addition, reactivity can be obtained with or without dependence on the nuclear power history. The second method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. The reactivity can be written as a summation of convolutions with an impulse response, characteristic of a linear system. For its digital form the Z-transform, the discrete version of the Laplace transform, is used. In this method the linear part is equivalent to a filter of the Finite Impulse Response (FIR) type. The FIR filter is always stable and time-invariant and, moreover, can be implemented in a non-recursive way. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way. The proposed methods were validated using signals with random noise, showing the relationship between the reactivity difference and the degree of the random noise. (author)
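The second method can be sketched with a single delayed-neutron group: the delayed term of the inverse point kinetics equation is a convolution of the power history with a decaying-exponential impulse response, which truncates naturally to a non-recursive FIR filter. The kinetics constants and filter length below are illustrative, not the paper's:

```python
import math

# one-group delayed-neutron data (illustrative, not a specific reactor)
BETA, LAM, GEN_TIME = 0.0065, 0.1, 1e-4   # beta, lambda (1/s), Lambda (s)
DT, TAPS = 0.1, 2000                      # sampling period (s), FIR length

# FIR kernel: lambda * beta * exp(-lambda * tau), midpoint-integrated per tap
KERNEL = [LAM * BETA * math.exp(-LAM * (k + 0.5) * DT) * DT
          for k in range(TAPS)]

def reactivity(power_history):
    """Inverse point kinetics via an FIR convolution over the power
    history (newest sample last); needs at least TAPS + 1 samples.
    rho = beta + Lambda * (dP/dt)/P - (1/P) * sum_k h[k] * P[n-k]."""
    p = power_history[-1]
    dpdt = (power_history[-1] - power_history[-2]) / DT
    delayed = sum(KERNEL[k] * power_history[-1 - k] for k in range(TAPS))
    return BETA + GEN_TIME * dpdt / p - delayed / p

# a long constant-power history should give (near-)zero reactivity
rho = reactivity([1.0] * (TAPS + 1))
```

Because the filter is non-recursive, each reactivity sample depends only on a finite window of past power samples, which is what permits restarting the calculation after an interruption.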

  8. Demonstration of a zero-variance based scheme for variance reduction to a mini-core Monte Carlo calculation

    International Nuclear Information System (INIS)

    Christoforou, Stavros; Hoogenboom, J. Eduard

    2011-01-01

    A zero-variance based scheme is implemented and tested in the MCNP5 Monte Carlo code. The scheme is applied to a mini-core reactor using the adjoint function obtained from a deterministic calculation for biasing the transport kernels. It is demonstrated that the variance of the k_eff estimate is halved compared with a standard criticality calculation. In addition, the biasing does not affect source distribution convergence of the system. However, since the code lacked optimisations for speed, we were not able to demonstrate a corresponding increase in the efficiency of the calculation, because of the higher CPU time cost. (author)

  9. Coupled vertical-rocking response of base-isolated structures

    International Nuclear Information System (INIS)

    Pan, T.C.; Kelly, J.M.

    1984-01-01

    A base-isolated building can have a small horizontal eccentricity between the center of mass of the superstructure and the center of rigidity of the supporting bearings. The structure can be modeled as a rigid block with tributary masses supported on massless rubber bearings placed at a constant elevation below the center of mass. Perturbation methods are implemented to find the dynamic characteristics for both the detuned and the perfectly tuned cases. The Green's functions for the displacement response of the system are derived for the undamped and the damped conditions. The response spectrum modal superposition method is used in estimating the maximum acceleration. A simple method, accounting for the effect of closely spaced modes, is proposed for combining modal maxima and results in an approximate single-degree-of-freedom solution. This approximate solution may be used for the preliminary design of a base-isolated structure. Numerical calculations for a base-isolated building subjected to the vertical component of the El Centro earthquake of 1940 were carried out for comparison with analytical results. It is shown that the effect of rocking coupling on the vertical seismic response of base-isolated structures can generally be neglected because of the combined effects of the time lag between the maximum translational and rotational responses and the influence of damping in the isolation system

  10. Vertical Footbridge Vibrations: The Response Spectrum Methodology

    DEFF Research Database (Denmark)

    Georgakis, Christos; Ingólfsson, Einar Thór

    2008-01-01

    In this paper, a novel, accurate and readily codifiable methodology for the prediction of vertical footbridge response is presented. The methodology is based on the well-established response spectrum approach used in the majority of the world’s current seismic design codes of practice. The concept...... of a universally applicable reference response spectrum is introduced, from which the pedestrian-induced vertical response of any footbridge may be determined, based on a defined “event” and the probability of occurrence of that event. A series of Monte Carlo simulations are undertaken for the development...... period is introduced and its implication on the calculation of footbridge response is discussed. Finally, a brief comparison is made between the theoretically predicted pedestrian-induced vertical response of an 80m long RC footbridge (as an example) and actual field measurements. The comparison shows...

  11. Method of characteristics-based sensitivity calculations for international PWR benchmark

    International Nuclear Information System (INIS)

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-01-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations of the fission intensity for the international PWR benchmark are performed. (authors)

  12. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

    To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable and to determine the significant variables in connection with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. It is more efficient than first-order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
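The supporting-point idea can be sketched with the classical 3-point Gauss-Hermite rule for standard-normal variables. A full tensor grid is shown for simplicity, whereas the paper's sampling scheme is more economical and also identifies per-variable polynomial orders:

```python
import math
from itertools import product

# 3-point Gauss-Hermite rule for a standard normal variable
# (probabilists' convention): exact for polynomials up to degree 5
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def supporting_points(n_vars):
    """Tensor-product supporting points and weights for n_vars
    independent standard-normal variables, usable to fit a response
    surface without cross terms.  The full grid grows as 3**n_vars,
    which is exactly why economical sampling schemes are needed."""
    pts, wts = [], []
    for combo in product(range(3), repeat=n_vars):
        pts.append([NODES[i] for i in combo])
        wts.append(math.prod(WEIGHTS[i] for i in combo))
    return pts, wts

pts, wts = supporting_points(2)
# quadrature estimate of E[x1^2 * x2^2] for independent N(0,1) variables
moment = sum(w * p[0] ** 2 * p[1] ** 2 for p, w in zip(pts, wts))
```

The rule integrates the product second moment exactly (the true value is 1), which is what makes such points suitable for moment-matching response surface fits.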

  13. Ruthenia-based electrochemical supercapacitors: insights from first-principles calculations.

    Science.gov (United States)

    Ozoliņš, Vidvuds; Zhou, Fei; Asta, Mark

    2013-05-21

    Electrochemical supercapacitors (ECs) have important applications in areas where the need for fast charging rates and high energy density intersect, including hybrid and electric vehicles, consumer electronics, solar cell based devices, and other technologies. In contrast to carbon-based supercapacitors, where energy is stored in the electrochemical double-layer at the electrode/electrolyte interface, ECs involve reversible faradaic ion intercalation into the electrode material. However, this intercalation does not lead to phase change. As a result, ECs can be charged and discharged for thousands of cycles without loss of capacity. ECs based on hydrous ruthenia, RuO2·xH2O, exhibit some of the highest specific capacitances attained in real devices. Although RuO2 is too expensive for widespread practical use, chemists have long used it as a model material for investigating the fundamental mechanisms of electrochemical supercapacitance and heterogeneous catalysis. In this Account, we discuss progress in first-principles density-functional theory (DFT) based studies of the electronic structure, thermodynamics, and kinetics of hydrous and anhydrous RuO2. We find that DFT correctly reproduces the metallic character of the RuO2 band structure. In addition, electron-proton double-insertion into bulk RuO2 leads to the formation of a polar covalent O-H bond with a fractional increase of the Ru charge in delocalized d-band states by only 0.3 electrons. This is in slight conflict with the common assumption of a Ru valence change from Ru(4+) to Ru(3+). Using the prototype electrostatic ground state (PEGS) search method, we predict a crystalline RuOOH compound with a formation energy of only 0.15 eV per proton. The calculated voltage for the onset of bulk proton insertion in the dilute limit is only 0.1 V with respect to the reversible hydrogen electrode (RHE), in reasonable agreement with the 0.4 V threshold for a large diffusion-limited contribution measured experimentally.

  14. A Review of Solid-Solution Models of High-Entropy Alloys Based on Ab Initio Calculations

    Directory of Open Access Journals (Sweden)

    Fuyang Tian

    2017-11-01

    Similar to the importance of XRD in experiments, ab initio calculations have been applied as a powerful tool to predict potential new materials and to investigate the intrinsic properties of materials in theory. As a typical solid-solution material, the large degree of uncertainty in high-entropy alloys (HEAs) makes the application of ab initio calculations to HEAs difficult. The present review focuses on the available ab initio based solid-solution models (virtual lattice approximation, coherent potential approximation, special quasirandom structure, similar local atomic environment, maximum-entropy method, and hybrid Monte Carlo/molecular dynamics) and their applications and limits in single-phase HEAs.

  15. ON THE SOLUTION OF PROBLEMS OF RAILWAY STRENGTH CALCULATION TAKING INTO ACCOUNT UNEQUAL ELASTICITY OF THE SUBRAIL BASE

    Directory of Open Access Journals (Sweden)

    D. M. Kurhan

    2014-11-01

    Purpose. The modulus of elasticity of the subrail base is one of the main characteristics for assessing the stress-strain state of a track. The need to account for unequal elasticity of the subrail base in various cases has been considered repeatedly; however, the published results involve rather complex mathematical approaches, and the solutions obtained do not fit within the framework of the standard engineering calculation of railway strength. The purpose of this work is therefore to obtain a solution within that framework. Methodology. It is proposed to model the rail as a beam carrying a distributed load whose outline corresponds to the value of the modulus of elasticity, giving an equivalent deflection for free seating on supports. Findings. A method for accounting for gradual change of the modulus of elasticity of the subrail base, by means of a correction coefficient in the engineering calculation of track strength, was obtained. An extension of the existing railway strength calculation was developed to account for abrupt change of the modulus of elasticity of the subrail base (for example, at the transition from ballasted track onto a bridge). The characteristic change of the forces transmitted from the rail to the base, depending on the distance to the bridge along the approach from the ballasted track, was obtained. The redistribution of forces after a sudden change in the elastic modulus of the base under the rail explains the formation of vertical irregularities before the bridge. Originality. The technique of engineering calculation of railway strength was improved to perform calculations taking into account unequal elasticity of the subrail base. Practical value. The results obtained allow engineering calculations to assess the strength of a railway in places of unequal elasticity caused by the track condition or by design features. The solution of the return task on

  16. Environmentally responsible behavior of nature-based tourists: A review

    Directory of Open Access Journals (Sweden)

    Lee, T.H.

    2013-03-01

    This study assesses the conceptualization of environmentally responsible behavior and methods for measuring such behavior, based on a review of previous studies. Four major scales measuring the extent to which an individual's behavior is environmentally responsible are discussed. Various theoretical backgrounds and cultures provide diverse conceptualizations of environmentally responsible behavior. Both general and site-specific environmentally responsible behaviors have been identified in past studies. This study also discusses the antecedents of environmentally responsible behavior and, with a general overview, provides insight into improving future research on this subject.

  17. Helical tomotherapy shielding calculation for an existing LINAC treatment room: sample calculation and cautions

    International Nuclear Information System (INIS)

    Wu Chuan; Guo Fanqing; Purdy, James A

    2006-01-01

    This paper reports a step-by-step shielding calculation recipe for a helical tomotherapy unit (TomoTherapy Inc., Madison, WI, USA) recently installed in an existing Varian 600C treatment room. Both primary and secondary radiation (leakage and scatter) are explicitly considered. A typical patient load is assumed. The use factor is calculated from an analytical formula derived from the tomotherapy rotational beam delivery geometry. Leakage and scatter are included in the calculation based on corresponding measurement data documented by TomoTherapy Inc. Our calculation shows that, except for a small area by the therapists' console, most of the existing Varian 600C shielding is sufficient for the new tomotherapy unit. This work cautions other institutions facing a similar situation, in which an HT unit is considered for an existing LINAC treatment room: additional secondary shielding may need to be considered at some locations, owing to the significantly increased secondary shielding requirement of HT. (note)
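A primary-barrier step of the kind walked through in such a recipe can be sketched as follows (NCRP-151-style broad-beam transmission; the single-TVL simplification and all numbers are illustrative, not the paper's site data):

```python
import math

def required_barrier(P, d, W, U, T, tvl):
    """Primary-barrier thickness from the required broad-beam
    transmission B = P * d^2 / (W * U * T), converted to a number of
    tenth-value layers.  NCRP 151 style, simplified: a real design
    distinguishes TVL1 from TVLe and adds leakage/scatter barriers.
    Units: P dose limit (Sv/wk), d distance (m), W workload
    (Gy*m^2/wk), U use factor, T occupancy factor, tvl (cm)."""
    B = P * d * d / (W * U * T)
    n_tvls = math.log10(1.0 / B)         # attenuation needed, in TVLs
    return max(0.0, n_tvls) * tvl        # no barrier needed if B >= 1

# illustrative numbers only: controlled area, 5 m, concrete TVL 37 cm
t_cm = required_barrier(P=1e-4, d=5.0, W=500.0, U=0.25, T=1.0, tvl=37.0)
```

The rotational delivery geometry of helical tomotherapy enters through the use factor U, which the paper computes from an analytical formula rather than the conventional 0.25-per-wall assumption used in this sketch.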

  18. SU-F-T-428: An Optimization-Based Commissioning Tool for Finite Size Pencil Beam Dose Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Y; Tian, Z; Song, T; Jia, X; Gu, X; Jiang, S [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

Purpose: Finite size pencil beam (FSPB) algorithms are commonly used to pre-calculate the beamlet dose distribution for IMRT treatment planning. FSPB commissioning, which usually requires fine tuning of the FSPB kernel parameters, is crucial to the dose calculation accuracy and hence the plan quality. Yet due to the large number of beamlets, FSPB commissioning can be very tedious. This abstract reports an optimization-based FSPB commissioning tool we have developed in MATLAB to facilitate the commissioning. Methods: A FSPB dose kernel generally contains two types of parameters: the profile parameters determining the dose kernel shape, and 2D scaling factors accounting for the longitudinal and off-axis corrections. The former were fitted to the penumbra of a reference broad beam’s dose profile with the Levenberg-Marquardt algorithm. Since the dose distribution of a broad beam is simply a linear superposition of the dose kernels of the individual beamlets, calculated with the fitted profile parameters and scaled using the scaling factors, these factors can be determined by solving an optimization problem that minimizes the discrepancies between the calculated dose of broad beams and the reference dose. Results: We have commissioned a FSPB algorithm for three linac photon beams (6 MV, 15 MV and 6 MV FFF). Doses for four field sizes (6×6, 10×10, 15×15 and 20×20 cm²) were calculated and compared with the reference dose exported from the Eclipse TPS. For depth dose curves, the differences are less than 1% of the maximum dose beyond the depth of maximum dose for most cases. For lateral dose profiles, the differences are less than 2% of the central dose in inner-beam regions. The differences in the output factors are within 1% for all three beams. Conclusion: We have developed an optimization-based commissioning tool for FSPB algorithms to facilitate the commissioning, providing sufficient beamlet dose calculation accuracy for IMRT optimization.
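
The two-stage idea can be sketched in one dimension: fit a kernel shape parameter to a broad-beam penumbra with Levenberg-Marquardt, then solve a linear least-squares problem for the scaling factor. The Gaussian kernel, the synthetic "reference" profile and all numbers are our assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erf

x = np.linspace(-40.0, 40.0, 161)           # off-axis position (mm)
field_half = 20.0                           # half of a 40 mm broad beam

def broad_beam(x, sigma):
    """Lateral profile of a broad beam built from Gaussian pencil kernels:
    the superposition integral gives a pair of error functions."""
    return 0.5 * (erf((field_half - x) / (np.sqrt(2) * sigma))
                  + erf((field_half + x) / (np.sqrt(2) * sigma)))

# "Reference" penumbra generated with sigma = 3 mm plus a little noise.
rng = np.random.default_rng(0)
ref = broad_beam(x, 3.0) + rng.normal(0.0, 1e-3, x.size)

# Stage 1: Levenberg-Marquardt fit of the profile parameter.
(sigma_fit,), _ = curve_fit(broad_beam, x, ref, p0=[1.0], method="lm")

# Stage 2: with the shape fixed, the broad-beam dose is linear in the
# scaling factor, so it follows from ordinary least squares.
basis = broad_beam(x, sigma_fit)[:, None]
scale, *_ = np.linalg.lstsq(basis, ref, rcond=None)
```

In the real tool the second stage is a 2D problem (longitudinal and off-axis factors), but it stays linear for the same superposition reason.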

  19. A new method for calculating gas saturation of low-resistivity shale gas reservoirs

    Directory of Open Access Journals (Sweden)

    Jinyan Zhang

    2017-09-01

Full Text Available The Jiaoshiba shale gas field is located in the Fuling area of the Sichuan Basin, with the Upper Ordovician Wufeng–Lower Silurian Longmaxi Fm as the pay zone. At the bottom of the pay zone, a high-quality shale gas reservoir about 20 m thick is generally developed, with high organic content and gas abundance, but its resistivity is relatively low. Accordingly, the gas saturation calculated from electric logging data by formulas such as Archie's is often much lower than the experiment-derived value. In this paper, a new method is presented for calculating gas saturation more accurately based on non-electric logging data. Firstly, the causes of the low resistivity of shale gas reservoirs in this area were analyzed. Then, the limitation of traditional methods for calculating gas saturation based on electric logging data was diagnosed, and the feasibility of the neutron–density porosity overlay method was illustrated. According to the response characteristics of neutron, density and other porosity logs in shale gas reservoirs, a model for calculating shale gas saturation was established by core experimental calibration, based on the density logging value, the density porosity and the difference between density porosity and neutron porosity, by means of multiple methods (e.g. the dual-porosity overlay method with optimization of the best overlay coefficient). This new method avoids the effect of low resistivity, and thus yields normal calculated gas saturations for high-quality shale gas reservoirs. It works well in practical application and provides technical support for the calculation of shale gas reserves in this area. Keywords: Shale gas, Gas saturation, Low resistivity, Non-electric logging, Volume density, Compensated neutron, Overlay method, Reserves calculation, Sichuan Basin, Jiaoshiba shale gas field
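
A minimal sketch of the dual-porosity overlay idea: gas makes the density porosity read high and the neutron porosity read low, so their separation correlates with gas saturation. The linear model and its coefficients below are hypothetical placeholders for the core-calibrated model in the paper.

```python
def gas_saturation(phi_density, phi_neutron, a=2.5, b=0.15):
    """Estimate gas saturation from the density-neutron porosity
    separation (both as fractions, not percent). a and b stand in for
    core-calibration constants and are assumed values."""
    separation = phi_density - phi_neutron
    sg = a * separation + b
    return min(max(sg, 0.0), 1.0)   # clamp to the physical range

# Hypothetical log reading: density porosity 0.12, neutron porosity 0.04.
sg = gas_saturation(0.12, 0.04)     # 2.5 * 0.08 + 0.15 = 0.35
```

Because no resistivity term appears, a conductive matrix (pyrite, bound water, organic matter) cannot bias the estimate the way it biases Archie-type formulas.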

  20. Axial power distribution calculation using a neural network in the nuclear reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y H; Cha, K H; Lee, S H [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

This paper is concerned with an algorithm based on neural networks to calculate the axial power distribution from excore detector signals in the nuclear reactor core. The fundamental premise of the algorithm is that the detector response can be estimated fairly accurately using computational codes. In other words, the training set for the neural network, which represents the relationship between detector signals and axial power distributions, can be obtained through calculations instead of measurements. Application of the new method to the Yonggwang nuclear power plant unit 3 (YGN-3) shows that it is superior to the algorithm currently in place. 7 refs., 4 figs. (Author)
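
The train-on-calculated-data idea can be illustrated with a toy surrogate. The axial shape family, the two-detector model and the small network below are all our assumptions for demonstration, not the paper's plant model.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.linspace(0.0, 1.0, 9)                    # axial nodes

def code_pair(tilt):
    """Surrogate 'computational code': a tilted half-sine axial power
    shape and the two excore detector signals it would produce."""
    shape = np.sin(np.pi * z) * (1.0 + tilt * (z - 0.5))
    shape /= shape.mean()                       # normalize to mean power 1
    top = shape[z > 0.5].mean()                 # upper excore detector
    bottom = shape[z <= 0.5].mean()             # lower excore detector
    return np.array([top, bottom]), shape

# Training set generated entirely by calculation, as in the abstract.
tilts = rng.uniform(-0.8, 0.8, 200)
X = np.array([code_pair(t)[0] for t in tilts])
Y = np.array([code_pair(t)[1] for t in tilts])

# One-hidden-layer network trained by plain full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, z.size)); b2 = np.zeros(z.size)
lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    E = P - Y                                   # prediction error
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    gH = (E @ W2.T) * (1 - H**2)
    gW1 = X.T @ gH / len(X); gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Reconstruct the axial shape for an unseen tilt.
sig, truth = code_pair(0.3)
pred = np.tanh(sig @ W1 + b1) @ W2 + b2
```

The point of the construction is that no measured power maps are needed: the surrogate "code" supplies both the inputs and the targets.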

  1. Axial power distribution calculation using a neural network in the nuclear reactor core

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Y. H.; Cha, K. H.; Lee, S. H. [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1997-12-31

This paper is concerned with an algorithm based on neural networks to calculate the axial power distribution from excore detector signals in the nuclear reactor core. The fundamental premise of the algorithm is that the detector response can be estimated fairly accurately using computational codes. In other words, the training set for the neural network, which represents the relationship between detector signals and axial power distributions, can be obtained through calculations instead of measurements. Application of the new method to the Yonggwang nuclear power plant unit 3 (YGN-3) shows that it is superior to the algorithm currently in place. 7 refs., 4 figs. (Author)

  2. The Effect of Indium Concentration on the Structure and Properties of Zirconium Based Intermetallics: First-Principles Calculations

    Directory of Open Access Journals (Sweden)

    Fuda Guo

    2016-01-01

Full Text Available The phase stability and the mechanical, electronic, and thermodynamic properties of In-Zr compounds have been explored using first-principles calculations based on density functional theory (DFT). The calculated formation enthalpies show that these compounds are all thermodynamically stable. The electronic structure indicates that they possess metallic character, with a common hybridization between In-p and Zr-d states near the Fermi level. Elastic properties were also examined. The calculated ratio of the bulk to shear modulus (B/G) indicates that InZr3 has the strongest deformation resistance. With increasing indium content, the linear decrease of the bulk modulus and Young’s modulus breaks down. The calculated theoretical hardness of α-In3Zr is higher than that of the other In-Zr compounds.
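
As an aside on the B/G (Pugh) criterion used above, here is how the ratio is typically obtained from elastic constants, using Voigt averages for a cubic crystal. The elastic constants below are made-up placeholder values, not the paper's DFT results.

```python
def pugh_ratio_cubic(c11, c12, c44):
    """Voigt bulk and shear moduli of a cubic crystal and their ratio."""
    B = (c11 + 2.0 * c12) / 3.0          # Voigt bulk modulus
    G = (c11 - c12 + 3.0 * c44) / 5.0    # Voigt shear modulus
    return B, G, B / G

B, G, ratio = pugh_ratio_cubic(c11=140.0, c12=90.0, c44=50.0)  # GPa
ductile = ratio > 1.75    # common rule of thumb: B/G > 1.75 => ductile
```

A higher B/G indicates more resistance to shear-driven failure relative to volume change, which is why the abstract uses it as a deformation-resistance indicator.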

  3. Three-dimensional electron-beam dose calculations

    International Nuclear Information System (INIS)

    Shiu, A.S.

    1988-01-01

The MDAH pencil-beam algorithm developed by Hogstrom et al (1981) has been widely used in clinics for electron-beam dose calculations for radiotherapy treatment planning. The primary objective of this research was to address several deficiencies of that algorithm and to develop an enhanced version. Two enhancements were incorporated into the pencil-beam algorithm: one models fluence rather than planar fluence, and the other models the bremsstrahlung dose using measured beam data. Comparisons of the resulting calculated dose distributions with measured dose distributions for several test phantoms have been made. From these results it is concluded (1) that the fluence-based algorithm is more accurate for the dose calculation in an inhomogeneous slab phantom, and (2) that the fluence-based calculation provides only a limited improvement in the accuracy of the calculated dose in the region just downstream of the lateral edge of an inhomogeneity. A pencil-beam redefinition model was developed for the calculation of electron-beam dose distributions in three dimensions.

  4. SU-D-207B-07: Development of a CT-Radiomics Based Early Response Prediction Model During Delivery of Chemoradiation Therapy for Pancreatic Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Klawikowski, S; Christian, J; Schott, D; Zhang, M; Li, X [Medical College of Wisconsin, Milwaukee, WI (United States)

    2016-06-15

Purpose: Pilot study developing a CT-texture based model for early assessment of treatment response during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Methods: Daily CT data acquired for 24 pancreatic head cancer patients using CT-on-rails during routine CT-guided CRT delivery, with a radiation dose of 50.4 Gy in 28 fractions, were analyzed. The pancreas head was contoured on each daily CT. Texture analysis was performed within the pancreas head contour using a research tool (IBEX). Over 1300 texture metrics, including grey level co-occurrence, run-length, histogram, neighborhood intensity difference, and geometrical shape features, were calculated for each daily CT. Metric-trend information was established by finding the best fit of a linear, quadratic, or exponential function for each metric value versus accumulated dose. Thus all the daily CT texture information was consolidated into a best-fit trend type for a given patient and texture metric. Linear correlation was performed between the patient histological response vector (good, medium, poor) and all combinations of 23-patient subgroups (statistical jackknife), determining which metrics were most correlated with response and repeatedly reliable across most patients. Control correlations against CT scanner, reconstruction kernel, and gated/non-gated CT images were also calculated. A Euclidean distance measure was used to group/sort patient vectors based on these trend-response metrics. Results: We found four specific trend-metrics (GrayLevelCoocurenceMatrix311-1InverseDiffMomentNorm, GrayLevelCoocurenceMatrix311-1InverseDiffNorm, GrayLevelCoocurenceMatrix311-1Homogeneity2, and IntensityDirectLocalStdMean) that were highly correlated with patient response and repeatedly reliable. Our four trend-metric model successfully ordered our pilot response dataset (p=0.00070). We found no significant correlation with our control parameters: gating (p=0.7717), scanner (p
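
The metric-trend step described above can be sketched as follows: for one texture metric, fit linear, quadratic and exponential models of metric value versus accumulated dose and keep the best-fitting trend type. Selecting by sum of squared errors is our assumption; the abstract does not state the exact criterion, and the dose series below is synthetic.

```python
import numpy as np

def best_trend(dose, value):
    """Return 'linear', 'quadratic' or 'exponential' for the model that
    best fits value vs. accumulated dose (smallest sum of squared errors)."""
    fits = {}
    # linear and quadratic via polynomial least squares
    for name, deg in (("linear", 1), ("quadratic", 2)):
        coef = np.polyfit(dose, value, deg)
        resid = value - np.polyval(coef, dose)
        fits[name] = float(resid @ resid)
    # exponential a*exp(b*dose) via a log-linear fit (positive values only)
    if np.all(value > 0):
        b, log_a = np.polyfit(dose, np.log(value), 1)
        resid = value - np.exp(log_a) * np.exp(b * dose)
        fits["exponential"] = float(resid @ resid)
    return min(fits, key=fits.get)

dose = np.linspace(0.0, 50.4, 28)              # Gy over 28 fractions
quad_metric = 1.0 + 0.02 * dose + 0.004 * dose**2
trend = best_trend(dose, quad_metric)
```

Collapsing each daily series into a single trend label is what lets the study correlate thousands of metrics against a three-level response vector.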

  5. Rose-like I-doped Bi_2O_2CO_3 microspheres with enhanced visible light response: DFT calculation, synthesis and photocatalytic performance

    International Nuclear Information System (INIS)

    Zai, Jiantao; Cao, Fenglei; Liang, Na; Yu, Ke; Tian, Yuan; Sun, Huai; Qian, Xuefeng

    2017-01-01

Highlights: • DFT reveals that I⁻ can partially substitute for CO₃²⁻ to narrow the bandgap of Bi₂O₂CO₃. • Sodium citrate plays a key role in the formation of rose-like I-doped Bi₂O₂CO₃. • Rose-like I-doped Bi₂O₂CO₃ shows an enhanced visible light response. • The catalyst has enhanced photocatalytic activity toward organic and Cr(VI) pollutants. - Abstract: Based on the crystal structure and DFT calculations of Bi₂O₂CO₃, I⁻ can partly replace the CO₃²⁻ in Bi₂O₂CO₃ to narrow its bandgap and enhance its visible light absorption. With this in mind, rose-like I-doped Bi₂O₂CO₃ microspheres were prepared via a hydrothermal process. This method can also be extended to synthesize rose-like Cl- or Br-doped Bi₂O₂CO₃ microspheres. A photoelectrochemical test supports the DFT calculation result that I⁻ doping narrows the bandgap of Bi₂O₂CO₃ by forming two intermediate levels in its forbidden band. Further study reveals that I-doped Bi₂O₂CO₃ microspheres with optimized composition exhibit the best photocatalytic activity: Rhodamine B can be completely degraded within 6 min and about 90% of Cr(VI) can be reduced after 25 min under irradiation with visible light (λ > 400 nm).

  6. Feasibility study on embedded transport core calculations

    International Nuclear Information System (INIS)

    Ivanov, B.; Zikatanov, L.; Ivanov, K.

    2007-01-01

The main objective of this study is to develop an advanced core calculation methodology based on embedded diffusion and transport calculations. The scheme proposed in this work is based on an embedded diffusion or SP3 pin-by-pin local fuel assembly calculation within the framework of the Nodal Expansion Method (NEM) diffusion core calculation. The SP3 method has gained popularity in the last 10 years as an advanced method for neutronics calculations. NEM is a multi-group nodal diffusion code developed, maintained and continuously improved at the Pennsylvania State University. The developed calculation scheme is a non-linear iteration process, which involves cross-section homogenization, on-line discontinuity factor generation, and evaluation of boundary conditions passed from the global solution to the local calculation. To accomplish the local calculation, a new code has been developed based on the Finite Element Method (FEM), capable of performing both diffusion and SP3 calculations. The new code is used in the framework of the NEM code to perform embedded pin-by-pin diffusion and SP3 calculations on a fuel assembly basis. The development of the diffusion and SP3 FEM code is presented first, followed by its application to several problems. A description of the proposed embedded scheme is provided next, as well as the preliminary results obtained for the C3 MOX benchmark. The results from the embedded calculations are compared with direct pin-by-pin whole-core calculations in terms of accuracy and efficiency, followed by conclusions about the feasibility of the proposed embedded approach. (authors)

  7. Calculation of Monte Carlo importance functions for use in nuclear-well logging calculations

    International Nuclear Information System (INIS)

    Soran, P.D.; McKeon, D.C.; Booth, T.E.

    1989-07-01

Importance sampling is essential to the timely solution of Monte Carlo nuclear-logging computer simulations. Achieving minimum variance (maximum precision) of a response in minimum computation time is one criterion for the choice of an importance function. Various methods for calculating importance functions are presented, new methods are investigated, and comparisons with porosity and density tools are shown. 5 refs., 1 tab

  8. Dose-volume histograms based on serial intravascular ultrasound: a calculation model for radioactive stents

    International Nuclear Information System (INIS)

    Kirisits, Christian; Wexberg, Paul; Gottsauner-Wolf, Michael; Pokrajac, Boris; Ortmann, Elisabeth; Aiginger, Hannes; Glogar, Dietmar; Poetter, Richard

    2001-01-01

Background and purpose: Radioactive stents are under investigation for the reduction of coronary restenosis. However, the actual dose delivered to specific parts of the coronary artery wall based on the individual vessel anatomy has not been determined so far. Dose-volume histograms (DVHs) permit an estimation of the actual dose absorbed by the target volume. We present a method to calculate DVHs based on intravascular ultrasound (IVUS) measurements to determine the dose distribution within the vessel wall. Materials and methods: Ten patients were studied by intravascular ultrasound after radioactive stenting (BX Stent, P-32, 15-mm length) to obtain tomographic cross-sections of the treated segments. We developed a computer algorithm using the actual dose distribution of the stent to calculate differential and cumulative DVHs. The minimal target dose, the mean target dose, the minimal doses delivered to 10 and 90% of the adventitia (DV10, DV90), and the percentage of volume receiving a reference dose at 0.5 mm from the stent surface, cumulated over 28 days, were derived from the DVH plots. Results were expressed as mean±SD. Results: The mean activity of the stents was 438±140 kBq at implantation. The mean reference dose was 111±35 Gy, whereas the calculated mean target dose within the adventitia along the stent was 68±20 Gy. On average, DV90 and DV10 were 33±9 Gy and 117±41 Gy, respectively. Expanding the target volume to include 2.5-mm-long segments at the proximal and distal ends of the stent, the calculated mean target dose decreased to 55±17 Gy, and DV90 and DV10 were 6.4±2.4 Gy and 107±36 Gy, respectively. Conclusions: The assessment of DVHs seems in principle to be a valuable tool for both prospective and retrospective analysis of the dose distribution of radioactive stents. It may provide the basis to adapt treatment planning in coronary brachytherapy to the common standards of radiotherapy.
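
The DVx quantities reported above reduce to percentiles of the voxel doses: DVx, the minimum dose received by the "hottest" x% of the volume, is the (100 − x)th percentile. A small sketch with synthetic doses (the distribution is invented, not patient data):

```python
import numpy as np

def dvh_metrics(doses):
    """Summary DVH metrics from an array of per-voxel doses (Gy)."""
    return {
        "mean": float(np.mean(doses)),
        "min": float(np.min(doses)),
        "DV10": float(np.percentile(doses, 90.0)),  # hottest 10% threshold
        "DV90": float(np.percentile(doses, 10.0)),  # coldest 10% threshold
    }

# Synthetic adventitia voxel doses, falling off away from the stent.
rng = np.random.default_rng(42)
doses = 120.0 * np.exp(-rng.uniform(0.0, 2.0, 5000))
m = dvh_metrics(doses)
```

A cumulative DVH curve is just the survival function of the same array, so these metrics can be read directly off the DVH plot as in the study.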

  9. Multi-scale calculation of the electric properties of organic-based devices from the molecular structure

    KAUST Repository

    Li, Haoyuan

    2016-03-24

    A method is proposed to calculate the electric properties of organic-based devices from the molecular structure. The charge transfer rate is obtained using non-adiabatic molecular dynamics. The organic film in the device is modeled using the snapshots from the dynamic trajectory of the simulated molecular system. Kinetic Monte Carlo simulations are carried out to calculate the current characteristics. A widely used hole-transporting material, N,N′-diphenyl-N,N′-bis(1-naphthyl)-1,1′-biphenyl-4,4′-diamine (NPB) is studied as an application of this method, and the properties of its hole-only device are investigated. The calculated current densities and dependence on the applied voltage without an injection barrier are close to those obtained by the Mott-Gurney equation. The results with injection barriers are also in good agreement with experiment. This method can be used to aid the design of molecules and guide the optimization of devices. © 2016 Elsevier B.V. All rights reserved.
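
The sanity check against the Mott-Gurney equation mentioned above is straightforward: for a trap-free, barrier-free single-carrier device, J = (9/8) ε₀ εᵣ μ V² / L³. The material numbers below are rough assumed values for an NPB-like film, not the paper's parameters.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def mott_gurney_j(eps_r, mu, V, L):
    """Trap-free space-charge-limited current density (A/m^2) for a film
    of thickness L (m), relative permittivity eps_r, carrier mobility mu
    (m^2/Vs) and applied bias V (V)."""
    return 9.0 / 8.0 * EPS0 * eps_r * mu * V**2 / L**3

# Assumed: eps_r = 3, mu = 1e-8 m^2/Vs (1e-4 cm^2/Vs), 100 nm film, 5 V.
J = mott_gurney_j(eps_r=3.0, mu=1e-8, V=5.0, L=100e-9)
```

Agreement of a kinetic Monte Carlo current with this closed form at zero injection barrier is a standard consistency test for such multi-scale transport models.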

  10. A New Thermodynamic Calculation Method for Binary Alloys: Part I: Statistical Calculation of Excess Functions

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

An improved form of the calculation formula for the activities of the components in binary liquid and solid alloys has been derived, based on free volume theory (taking excess entropy into account) and on Miedema's model for calculating the formation heat of binary alloys. A calculation method for the excess thermodynamic functions of binary alloys has been developed, with formulas for the integral and partial molar excess properties of ordered or disordered solid binary alloys. The calculated results are in good agreement with the experimental values.

  11. Calculation of Critical Temperatures by Empirical Formulae

    Directory of Open Access Journals (Sweden)

    Trzaska J.

    2016-06-01

Full Text Available The paper presents formulas used to calculate critical temperatures of structural steels. Equations for calculating the temperatures Ac1, Ac3, Ms and Bs were elaborated based on the chemical composition of the steel, using the multiple regression method. Particular attention was paid to the collection of the experimental data required to calculate the regression coefficients, including the preparation of the data for calculation. The empirical data set included more than 500 chemical compositions of structural steel and was prepared based on information available in the literature on the subject.
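
The regression step can be sketched as ordinary least squares on composition features. The dataset below is synthetic and the coefficients are invented for illustration; the paper's formulas were fitted to 500+ measured steel compositions.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Synthetic compositions (wt.%) for three alloying elements.
C = rng.uniform(0.1, 0.6, n)
Mn = rng.uniform(0.3, 1.8, n)
Cr = rng.uniform(0.0, 2.0, n)

# Synthetic "measured" Ac1 built from assumed coefficients plus noise.
ac1 = 723.0 - 20.0 * C - 14.0 * Mn + 12.0 * Cr + rng.normal(0.0, 2.0, n)

# Multiple linear regression on the design matrix [1, C, Mn, Cr].
X = np.column_stack([np.ones(n), C, Mn, Cr])
coef, *_ = np.linalg.lstsq(X, ac1, rcond=None)

def predict_ac1(c, mn, cr):
    """Predicted Ac1 (deg C) from the fitted regression coefficients."""
    return float(coef @ np.array([1.0, c, mn, cr]))
```

The same machinery, with more elements and possibly interaction terms, yields the Ac3, Ms and Bs formulas described in the abstract.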

  12. Improvements for Monte Carlo burnup calculation

    Energy Technology Data Exchange (ETDEWEB)

    Shenglong, Q.; Dong, Y.; Danrong, S.; Wei, L., E-mail: qiangshenglong@tsinghua.org.cn, E-mail: d.yao@npic.ac.cn, E-mail: songdr@npic.ac.cn, E-mail: luwei@npic.ac.cn [Nuclear Power Inst. of China, Cheng Du, Si Chuan (China)

    2015-07-01

Monte Carlo burnup calculation is a development trend in reactor physics, and much work remains to be done for engineering applications. Based on the Monte Carlo burnup code MOI, non-fuel burnup calculation methods and critical search suggestions are presented in this paper. For non-fuel burnup, a mixed burnup mode improves the accuracy and efficiency of the burnup calculation. For the critical search of control rod position, a new method called ABN, based on the ABA method used by MC21, is proposed for the first time in this paper. (author)

  13. AEROS: a real-time emergency response system for atmospheric releases of toxic material

    International Nuclear Information System (INIS)

    Nasstrom, J.S.; Greenly, G.D.

    1986-01-01

The Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory has developed a sophisticated computer-based real-time emergency response system for radiotoxic releases into the atmosphere. The ARAC Emergency Response Operating System (AEROS) has a centralized computer facility linked to remote site computers, meteorological towers, and meteorological data sources. The system supports certain fixed sites, but can also respond to accidents at arbitrary locations. Product quality and response time are optimized by using complex three-dimensional dispersion models; extensive on-line databases; automated data processing; and an efficient user interface employing graphical computer displays and computer-displayed forms. Upon notification, the system automatically initiates a response to an emergency and proceeds through preliminary calculations, automatically processing accident information, meteorological data, and model parameters. The model calculations incorporate mass-consistent three-dimensional wind fields, terrain effects, and particle-in-cell diffusion. Model products are color images of dose or deposition contours overlaid on a base map

  14. Monte Carlo Calculation of Sensitivities to Secondaries' Angular Distributions

    International Nuclear Information System (INIS)

    Perel, R.L.

    2003-01-01

    An algorithm for Monte Carlo calculation of sensitivities of responses to secondaries' angular distributions (SAD) is developed, based on the differential operator approach. The algorithm was formulated for the sensitivity to Legendre coefficients of the SAD and is valid even in cases where the actual representation of SAD is not in the form of a Legendre series. The algorithm was implemented, for point- or ring-detectors, in a local version of the code MCNP. Numerical tests were performed to validate the algorithm and its implementation. In addition, an algorithm specific for the Kalbach-Mann representation of SAD is presented

  15. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    International Nuclear Information System (INIS)

    Morrison, Hali; Menon, Geetha; Sloboda, Ron

    2016-01-01

Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  16. Poster - 08: Preliminary Investigation into Collapsed-Cone based Dose Calculations for COMS Eye Plaques

    Energy Technology Data Exchange (ETDEWEB)

Morrison, Hali; Menon, Geetha; Sloboda, Ron [Cross Cancer Institute, Edmonton, AB, and University of Alberta, Edmonton, AB (Canada)]

    2016-08-15

Purpose: To investigate the accuracy of model-based dose calculations using a collapsed-cone algorithm for COMS eye plaques loaded with I-125 seeds. Methods: The Nucletron SelectSeed 130.002 I-125 seed and the 12 mm COMS eye plaque were incorporated into a research version of the Oncentra® Brachy v4.5 treatment planning system which uses the Advanced Collapsed-cone Engine (ACE) algorithm. Comparisons of TG-43 and high-accuracy ACE doses were performed for a single seed in a 30×30×30 cm³ water box, as well as with one seed in the central slot of the 12 mm COMS eye plaque. The doses along the plaque central axis (CAX) were used to calculate the carrier correction factor, T(r), and were compared to tabulated and MCNP6 simulated doses for both the SelectSeed and IsoAid IAI-125A seeds. Results: The ACE calculated dose for the single seed in water was on average within 0.62 ± 2.2% of the TG-43 dose, with the largest differences occurring near the end-welds. The ratio of ACE to TG-43 calculated doses along the CAX (T(r)) of the 12 mm COMS plaque for the SelectSeed was on average within 3.0% of previously tabulated data, and within 2.9% of the MCNP6 simulated values. The IsoAid and SelectSeed T(r) values agreed within 0.3%. Conclusions: Initial comparisons show good agreement between ACE and MC doses for a single seed in a 12 mm COMS eye plaque; more complicated scenarios are being investigated to determine the accuracy of this calculation method.

  17. The MARS15-based FermiCORD code system for calculation of the accelerator-induced residual dose

    Science.gov (United States)

    Grebe, A.; Leveling, A.; Lu, T.; Mokhov, N.; Pronskikh, V.

    2018-01-01

    The FermiCORD code system, a set of codes based on MARS15 that calculates the accelerator-induced residual doses at experimental facilities of arbitrary configurations, has been developed. FermiCORD is written in C++ as an add-on to Fortran-based MARS15. The FermiCORD algorithm consists of two stages: 1) simulation of residual doses on contact with the surfaces surrounding the studied location and of radionuclide inventories in the structures surrounding those locations using MARS15, and 2) simulation of the emission of the nuclear decay γ-quanta by the residuals in the activated structures and scoring the prompt doses of these γ-quanta at arbitrary distances from those structures. The FermiCORD code system has been benchmarked against similar algorithms based on other code systems and against experimental data from the CERF facility at CERN, and FermiCORD showed reasonable agreement with these. The code system has been applied for calculation of the residual dose of the target station for the Mu2e experiment and the results have been compared to approximate dosimetric approaches.

  18. Dynamic response analysis of DFB fibre lasers

    DEFF Research Database (Denmark)

    Yujun, Qian; Varming, Poul; Povlsen, Jørn Hedegaard

    1998-01-01

We present a model for relative intensity noise (RIN) in DFB fibre lasers which predicts measured characteristics accurately. Calculated results imply that the RIN decreases rapidly with stronger Bragg gratings and higher pump power. We propose a simplified model based on three spatially independent rate equations to describe the dynamic response of erbium-doped DFB fibre lasers to pump power fluctuations, using coupled-mode theory to calculate the steady-state hole-burning of the erbium ion inversion.

  19. Metadyn View: Fast web-based viewer of free energy surfaces calculated by metadynamics

    Science.gov (United States)

    Hošek, Petr; Spiwok, Vojtěch

    2016-01-01

Metadynamics is a highly successful enhanced sampling technique for the simulation of molecular processes and the prediction of their free energy surfaces. An in-depth analysis of data obtained by this method is as important as the simulation itself. Although there are several tools to compute free energy surfaces from metadynamics data, they usually lack user friendliness and a built-in visualization component. Here we introduce Metadyn View as a fast and user-friendly viewer of bias potential/free energy surfaces calculated by metadynamics in the Plumed package. It is based on modern web technologies including HTML5, JavaScript and Cascading Style Sheets (CSS). It can be used by visiting the web site and uploading a HILLS file. It calculates the bias potential/free energy surface on the client side, so it can run online or offline without the need to install additional web engines. Moreover, it includes tools for the measurement of free energies and free energy differences and for data/image export.
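
What such a viewer computes under the hood: the metadynamics bias is a sum of the deposited Gaussian hills, and for standard (non-well-tempered) metadynamics the free energy estimate is its negative. The hill format below is simplified to (center, width, height) for one collective variable, and the hills are toy data, not a real HILLS file.

```python
import numpy as np

def bias_on_grid(hills, grid):
    """Sum Gaussian hills (center s0, width sigma, height h) on a 1D grid."""
    bias = np.zeros_like(grid)
    for s0, sigma, h in hills:
        bias += h * np.exp(-0.5 * ((grid - s0) / sigma) ** 2)
    return bias

grid = np.linspace(-3.0, 3.0, 601)
hills = [(-1.0, 0.3, 1.2), (-0.9, 0.3, 1.0), (1.1, 0.3, 0.8)]  # toy data
free_energy = -bias_on_grid(hills, grid)
free_energy -= free_energy.min()        # shift so the global minimum is 0

# Free energy difference between the two basins (around s=-1 and s=+1),
# the kind of measurement the viewer's tools expose.
left = free_energy[grid < 0].min()
right = free_energy[grid >= 0].min()
delta_f = right - left
```

Doing this summation in the browser is cheap for typical HILLS files, which is what lets the tool run entirely client-side.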

  20. Response of base-isolated nuclear structures to extreme earthquake shaking

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Manish, E-mail: mkumar2@buffalo.edu; Whittaker, Andrew S.; Constantinou, Michael C.

    2015-12-15

    Highlights: • Response-history analysis of nuclear structures base-isolated using lead–rubber bearings is performed. • Advanced numerical model of lead–rubber bearing is used to capture behavior under extreme earthquake shaking. • Results of response-history analysis obtained using simplified and advanced model of lead–rubber bearings are compared. • Heating of the lead core and variation in buckling load and axial stiffness affect the response. - Abstract: Seismic isolation using low damping rubber and lead–rubber bearings is a viable strategy for mitigating the effects of extreme earthquake shaking on safety-related nuclear structures. The mechanical properties of these bearings are not expected to change substantially in design basis shaking. However, under shaking more intense than design basis, the properties of the lead cores in lead–rubber bearings may degrade due to heating associated with energy dissipation, some bearings in an isolation system may experience net tension, and the compression and tension stiffness may be affected by the lateral displacement of the isolation system. The effects of intra-earthquake changes in mechanical properties on the response of base-isolated nuclear power plants (NPPs) are investigated using an advanced numerical model of a lead–rubber bearing that has been verified and validated, and implemented in OpenSees. A macro-model is used for response-history analysis of base-isolated NPPs. Ground motions are selected and scaled to be consistent with response spectra for design basis and beyond design basis earthquake shaking at the site of the Diablo Canyon Nuclear Generating Station. Ten isolation systems of two periods and five characteristic strengths are analyzed. The responses obtained using simplified and advanced isolator models are compared. Strength degradation due to heating of lead cores and changes in buckling load most significantly affect the response of the base-isolated NPP.

  1. Response of base-isolated nuclear structures to extreme earthquake shaking

    International Nuclear Information System (INIS)

    Kumar, Manish; Whittaker, Andrew S.; Constantinou, Michael C.

    2015-01-01

    Highlights: • Response-history analysis of nuclear structures base-isolated using lead–rubber bearings is performed. • Advanced numerical model of lead–rubber bearing is used to capture behavior under extreme earthquake shaking. • Results of response-history analysis obtained using simplified and advanced model of lead–rubber bearings are compared. • Heating of the lead core and variation in buckling load and axial stiffness affect the response. - Abstract: Seismic isolation using low damping rubber and lead–rubber bearings is a viable strategy for mitigating the effects of extreme earthquake shaking on safety-related nuclear structures. The mechanical properties of these bearings are not expected to change substantially in design basis shaking. However, under shaking more intense than design basis, the properties of the lead cores in lead–rubber bearings may degrade due to heating associated with energy dissipation, some bearings in an isolation system may experience net tension, and the compression and tension stiffness may be affected by the lateral displacement of the isolation system. The effects of intra-earthquake changes in mechanical properties on the response of base-isolated nuclear power plants (NPPs) are investigated using an advanced numerical model of a lead–rubber bearing that has been verified and validated, and implemented in OpenSees. A macro-model is used for response-history analysis of base-isolated NPPs. Ground motions are selected and scaled to be consistent with response spectra for design basis and beyond design basis earthquake shaking at the site of the Diablo Canyon Nuclear Generating Station. Ten isolation systems of two periods and five characteristic strengths are analyzed. The responses obtained using simplified and advanced isolator models are compared. Strength degradation due to heating of lead cores and changes in buckling load most significantly affect the response of the base-isolated NPP.

  2. Economic calculation in socialist countries

    NARCIS (Netherlands)

    Ellman, M.; Durlauf, S.N.; Blume, L.E.

    2008-01-01

    In the 1930s, when the classical socialist system emerged, economic decisions were based not on detailed and precise economic methods of calculation but on rough and ready political methods. An important method of economic calculation - particularly in the post-Stalin period - was that of

  3. Application of CFD based wave loads in aeroelastic calculations

    DEFF Research Database (Denmark)

    Schløer, Signe; Paulsen, Bo Terp; Bredmose, Henrik

    2014-01-01

Two fully nonlinear irregular wave realizations with different significant wave heights are considered. The wave realizations are both calculated in the potential flow solver Ocean-Wave3D and in a coupled domain decomposed potential-flow CFD solver. The surface elevations of the calculated wave...... domain decomposed potential-flow CFD solver result in different dynamic forces in the tower and monopile, despite the static forces on a fixed monopile being similar. The changes are due to differences in the force profiles and wave steepness in the two solvers. The results indicate that an accurate...

  4. Calculating CR-39 Response to Radon in Water Using Monte Carlo Simulation

    International Nuclear Information System (INIS)

    Razaie Rayeni Nejad, M. R.

    2012-01-01

CR-39 detectors are widely used for Radon and progeny measurement in air. In this paper, using Monte Carlo simulation, the possibility of using CR-39 for direct measurement of Radon and progeny in water is investigated. Assuming random positions and angles of the alpha particles emitted by Radon and progeny, the alpha energy and angular spectra arriving at the CR-39, the calibration factor, and the suitable depth of chemical etching of CR-39 in air and water were calculated. In this simulation, a range of data were obtained from the SRIM2008 software. The calibration factor of CR-39 in water is calculated as 6.6 (kBq.d/m3)/(track/cm2), which corresponds to the EPA standard level of Radon concentration in water (10-11 kBq/m3). Replacing the CR-39 with skin, the volume affected by Radon and progeny was determined to be 2.51 mm3 per m2 of skin area. The annual dose conversion factor for Radon and progeny was calculated to be between 8.8-58.8 nSv/(Bq.h/m3). Using CR-39 for Radon measurement in water can be beneficial.
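The sampling idea described in the abstract, random emission positions and angles followed by a geometric reach test, can be sketched in a few lines. This toy model (the flat-detector geometry and all names are illustrative assumptions, not the author's code) estimates the fraction of alphas emitted isotropically within one alpha range of a planar detector that actually reach its surface:

```python
import random

def detection_fraction(alpha_range_mm=0.05, n_samples=200_000, seed=1):
    """Toy Monte Carlo: sample an emission depth within one alpha range of a
    flat detector and an isotropic direction cosine; count the particle as a
    hit if its slant path to the surface does not exceed the alpha range."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        depth = rng.uniform(0.0, alpha_range_mm)   # emission depth in water
        mu = rng.uniform(-1.0, 1.0)                # direction cosine toward detector
        if mu > 0 and depth / mu <= alpha_range_mm:
            hits += 1
    return hits / n_samples
```

For this idealized geometry the analytic answer is 1/4, a useful sanity check before layering on energy loss, critical-angle, and etching-efficiency models.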

  5. Model Predictive Control based on Finite Impulse Response Models

    DEFF Research Database (Denmark)

    Prasath, Guru; Jørgensen, John Bagterp

    2008-01-01

We develop a regularized l2 finite impulse response (FIR) predictive controller with input and input-rate constraints. Feedback is based on a simple constant output disturbance filter. The performance of the predictive controller in the face of plant-model mismatch is investigated by simulations...... and related to the uncertainty of the impulse response coefficients. The simulations can be used to benchmark l2 MPC against FIR-based robust MPC as well as to estimate the maximum performance improvements achievable by robust MPC....
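The FIR model at the heart of such a controller predicts outputs as a finite convolution of past inputs. A minimal simulation sketch (function and variable names are illustrative, not from the paper):

```python
def fir_predict(h, u):
    """Simulate an FIR model y_k = sum_{i=0}^{n-1} h[i] * u[k-i];
    inputs before k = 0 are taken as zero (system initially at rest)."""
    y = []
    for k in range(len(u)):
        y.append(sum(hi * u[k - i] for i, hi in enumerate(h) if k - i >= 0))
    return y
```

With impulse-response coefficients h = [0.5, 0.3, 0.2], a unit step settles at sum(h) = 1.0 after three samples; a predictive controller stacks such predictions over its horizon and solves a constrained least-squares problem for the input moves.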

  6. Tourists’ Environmentally Responsible Behavior in Response to Climate Change and Tourist Experiences in Nature-Based Tourism

    Directory of Open Access Journals (Sweden)

    Ju Hyoung Han

    2016-07-01

Full Text Available Nature-based tourism destinations (locations in which both economic viability and environmental responsibility are sought) are sensitive to climate change and its effects on important environmental components of the tourism areas. To meet these dual roles, it is important for destination marketers and resource managers to provide quality experiences for tourists and to induce tourists' environmentally responsible behavior in such destinations. This study documents the importance of perceptions toward climate change and tourist experiences in determining tourists' environmentally responsible behavior while on holiday at nature-based tourism destinations on Jeju Island, South Korea. Two hundred eleven Korean and 204 Chinese tourists, representing the dominant tourist arrivals to the island, responded to the survey questionnaire. Results showed that perceptions toward climate change and tourist experiences affect Korean tourists' environmentally responsible behavior intentions, whereas only tourist experiences, not perceptions toward climate change, significantly affect Chinese tourists' behavior intentions. In a nature-based tourism context under the pressure of climate change and of adverse environmental effects from tourism activities, resource managers and destination marketers need to develop environmental campaigns or informative tourist programs that foster environmentally responsible behavior and enhance quality tourist experiences among domestic and international tourists.

  7. High surface adsorption properties of carbon-based nanomaterials are responsible for mortality, swimming inhibition, and biochemical responses in Artemia salina larvae.

    Science.gov (United States)

    Mesarič, Tina; Gambardella, Chiara; Milivojević, Tamara; Faimali, Marco; Drobne, Damjana; Falugi, Carla; Makovec, Darko; Jemec, Anita; Sepčić, Kristina

    2015-06-01

We investigated the effects of three different carbon-based nanomaterials on brine shrimp (Artemia salina) larvae. The larvae were exposed to different concentrations of carbon black, graphene oxide, and multiwall carbon nanotubes for 48 h, and observed using phase contrast and scanning electron microscopy. Acute (mortality) and behavioural (swimming speed alteration) responses and cholinesterase, glutathione-S-transferase and catalase enzyme activities were evaluated. These nanomaterials were ingested and concentrated in the gut, and attached onto the body surface of the A. salina larvae. This attachment was responsible for concentration-dependent inhibition of larval swimming, and partly for alterations in the enzyme activities, which differed according to the type of nanomaterial tested. No lethal effects were observed up to 0.5 mg/mL carbon black and 0.1 mg/mL multiwall carbon nanotubes, while graphene oxide showed a threshold: no effects at 0.6 mg/mL, and more than 90% mortality at 0.7 mg/mL. Risk quotients calculated on the basis of predicted environmental concentrations indicate that carbon black and multiwall carbon nanotubes currently do not pose a serious risk to the marine environment; however, if uncontrolled release of nanomaterials continues, this scenario could change rapidly. Copyright © 2015 Elsevier B.V. All rights reserved.
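The risk quotients mentioned here follow the standard environmental screening ratio RQ = PEC/PNEC. A minimal sketch with illustrative numbers (not the study's data; the assessment factor is a conventional default, an assumption here):

```python
def pnec(effect_concentration, assessment_factor=1000.0):
    """Predicted no-effect concentration: an ecotoxicity endpoint (e.g. an
    EC50) divided by a safety/assessment factor."""
    return effect_concentration / assessment_factor

def risk_quotient(pec, pnec_value):
    """RQ = predicted environmental concentration / PNEC.
    RQ >= 1 flags potential environmental risk; RQ < 1 suggests none."""
    return pec / pnec_value
```

For example, a hypothetical PEC of 0.0001 mg/mL against an EC50 of 0.5 mg/mL gives RQ = 0.2, below the risk threshold; rising environmental concentrations push RQ toward 1, which is the "scenario can rapidly change" caveat above.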

  8. Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations

    Science.gov (United States)

    Stefanski, Philip L.

    2014-01-01

    A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) by adjustments to the process settings. Burn rates were calculated from chamber pressures and these were then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed significant reduction in error when compared to results obtained from the Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
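Normalizing burn rates to a common pressure, as described above, is typically done with Saint-Robert's law r = a·P^n. A sketch of that normalization step (the exponent value is an assumption for illustration, not the report's value):

```python
def normalize_burn_rate(rate, pressure, p_ref, n=0.35):
    """Scale a burn rate measured at `pressure` to the reference pressure
    p_ref, assuming Saint-Robert's law r = a * P**n with pressure exponent n,
    so r_ref = r * (p_ref / P)**n."""
    return rate * (p_ref / pressure) ** n
```

Once referred to the same reference pressure, rates from motors that happened to run at slightly different chamber pressures become directly comparable, which is what makes the within-mix dispersion analysis possible.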

  9. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. This method takes advantage of the fact that the material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where we calculate stress in each material point using MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress in each material point is performed in GPU using CUDA to accelerate the

  10. Study on the acceleration of the neutronics calculation based on GPGPU

    International Nuclear Information System (INIS)

    Ohoka, Y.; Tatsumi, M.

    2007-01-01

The cost of reactor physics calculation tends to become higher with more detailed treatment of the physics models and computational algorithms. For example, SCOPE2 requires considerably high computational costs for multi-group transport calculation in 3-D pin-by-pin geometry. In this paper, the applicability of GPGPU to acceleration of neutronics calculation is discussed. At first, the performance and accuracy of basic matrix calculations with fundamental arithmetic operators and the exponential function are studied. The calculation was performed on a machine with a 3.2 GHz Pentium 4 and an nVIDIA GeForce7800GTX GPU, using a test program written in C++, OpenGL and GLSL on Linux. When the matrix size becomes large, the calculation on GPU is 10-50 times faster than that on CPU for fundamental arithmetic operators. For the exponential function, calculation on GPU is 270-370 times faster than that on CPU. The precision in all cases is equivalent to that on CPU, i.e., within the IEEE 754 single-precision criterion (10-6). Next, GPGPU is applied to a functional module in SCOPE2. In the present study, as the first step of GPGPU application, calculations in a small geometry are tested. The performance gain by GPGPU in this application was relatively modest, approximately 15%, compared to the feasibility study. This is because the part to which GPGPU was applied had an appropriate structure for GPGPU implementation but carried only a small fraction of the computational load. For more advanced acceleration, it is important to consider various factors such as ease of implementation, fraction of computational load, and bottlenecks in data transfer between GPU and CPU. (authors)

  11. PID Controller Settings Based on a Transient Response Experiment

    Science.gov (United States)

    Silva, Carlos M.; Lito, Patricia F.; Neves, Patricia S.; Da Silva, Francisco A.

    2008-01-01

An experimental work on controller tuning for chemical engineering undergraduate students is proposed using a small heat exchange unit. Based upon process reaction curves in open-loop configuration, the system gain and time constant are determined for a first-order model with time delay with excellent accuracy. Afterwards students calculate PID…
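Once the process reaction curve yields a first-order-plus-dead-time fit (gain K, time constant tau, dead time L), the classic Ziegler-Nichols open-loop rules give starting PID settings; the abstract does not say which tuning rule the students use, so this is one common choice:

```python
def zn_open_loop_pid(K, tau, L):
    """Ziegler-Nichols open-loop (process reaction curve) PID settings for a
    first-order-plus-dead-time model: returns (Kp, Ti, Td)."""
    Kp = 1.2 * tau / (K * L)   # proportional gain
    Ti = 2.0 * L               # integral (reset) time
    Td = 0.5 * L               # derivative time
    return Kp, Ti, Td
```

For example, K = 2.0, tau = 10.0, L = 1.0 gives Kp = 6.0, Ti = 2.0, Td = 0.5; students would then refine these settings on the closed-loop heat-exchange unit.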

  12. The calculation of the chemical exergies of coal-based fuels by using the higher heating values

    International Nuclear Information System (INIS)

    Bilgen, Selcuk; Kaygusuz, Kamil

    2008-01-01

This paper demonstrates the application of exergy to gain a better understanding of coal properties, especially chemical exergy and specific chemical exergy. In this study, a BASIC computer program was used to calculate the chemical exergies of the coal-based fuels. Calculations showed that the chemical composition of the coal strongly influences the values of the chemical exergy. The exergy value of a coal is closely related to the H:C and O:C ratios. High proportions of hydrogen and/or oxygen, compared to carbon, generally reduce the exergy value of the coal. High contents of moisture and/or ash lead to low values of the chemical exergy. The aim of this paper is to calculate the chemical exergy of coals by using equations given in the literature, and to detect and quantitatively evaluate the effect of irreversible phenomena that increase the thermodynamic imperfection of the processes. The calculated exergy values of the fuels will be useful for energy experts working in coal mining and coal-fired power plants
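Correlations of the kind referred to here typically express chemical exergy as the heating value times a factor phi built from the H/C, O/C, and N/C elemental mass ratios. A sketch with Szargut-style coefficients (the numbers are illustrative assumptions reproduced from the common solid-fuel form; use the coefficients of the specific correlation in the source literature for real work):

```python
def chemical_exergy(lhv, hc, oc, nc):
    """Estimate the chemical exergy of a solid fuel (same units as lhv) from
    its lower heating value and elemental mass ratios H/C, O/C, N/C, using a
    Szargut-style ratio correlation phi = exergy / LHV."""
    phi = 1.0437 + 0.1882 * hc + 0.0610 * oc + 0.0404 * nc
    return phi * lhv
```

Since phi is only slightly above 1, chemical exergy tracks the heating value closely, consistent with the composition trends described in the abstract.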

  13. FragIt: a tool to prepare input files for fragment based quantum chemical calculations.

    Directory of Open Access Journals (Sweden)

    Casper Steinmann

Full Text Available Near linear scaling fragment based quantum chemical calculations are becoming increasingly popular for treating large systems with high accuracy, and are an active field of research. However, it remains difficult to set up these calculations without expert knowledge. To facilitate the use of such methods, software tools need to be available to support them and to help set up reasonable input files, which will lower the barrier of entry for non-experts. Previous tools rely on specific annotations in structure files, such as residues in PDB files, for automatic and successful fragmentation. We present a general fragmentation methodology and accompanying tools called FragIt to help set up these calculations. FragIt uses the SMARTS language to locate chemically appropriate fragments in large structures and is applicable to fragmentation of any molecular system given suitable SMARTS patterns. We present SMARTS patterns of fragmentation for proteins, DNA and polysaccharides, specifically for D-galactopyranose for use in cyclodextrins. FragIt is used to prepare input files for the Fragment Molecular Orbital method in the GAMESS program package, but can easily be extended to other computational methods.

  14. Machine learning assisted first-principles calculation of multicomponent solid solutions: estimation of interface energy in Ni-based superalloys

    Science.gov (United States)

    Chandran, Mahesh; Lee, S. C.; Shim, Jae-Hyeok

    2018-02-01

A disordered configuration of atoms in a multicomponent solid solution presents a computational challenge for first-principles calculations using density functional theory (DFT). The challenge is in identifying the few probable (low-energy) configurations from a large configurational space before DFT calculation can be performed. The search for these probable configurations is possible if the configurational energy E(σ) can be calculated accurately and rapidly (with a negligibly small computational cost). In this paper, we demonstrate such a possibility by constructing a machine learning (ML) model for E(σ) trained with DFT-calculated energies. The feature vector for the ML model is formed by concatenating histograms of pair and triplet (only equilateral triangle) correlation functions, g(2)(r) and g(3)(r,r,r), respectively. These functions are a quantitative 'fingerprint' of the spatial arrangement of atoms, familiar in the field of amorphous materials and liquids. The ML model is used to generate an accurate distribution P(E(σ)) by rapidly spanning a large number of configurations. The P(E) contains full configurational information of the solid solution and can be selectively sampled to choose a few configurations for targeted DFT calculations. This new framework is employed to estimate the (100) interface energy σIE between γ and γ′ at 700 °C in Alloy 617, a Ni-based superalloy, with composition reduced to five components. The estimated σIE ≈ 25.95 mJ/m2 is in good agreement with the value inferred from the precipitation model fit to experimental data. The proposed ML-based ab initio framework can be applied to calculate the parameters and properties of alloys with any number of components, thus widening the reach of first-principles calculation to realistic compositions of industrially relevant materials and alloys.
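The feature vector described, histograms of correlation functions, can be sketched with a plain pairwise-distance histogram: a simplified stand-in for the paper's g(2)(r) fingerprint (function and variable names are illustrative):

```python
import itertools
import math

def pair_distance_histogram(coords, r_max, n_bins):
    """Histogram of pairwise interatomic distances below r_max: a simple
    'fingerprint' of the spatial arrangement of atoms, usable as a
    fixed-length ML feature vector in the spirit of g(2)(r)."""
    hist = [0] * n_bins
    for a, b in itertools.combinations(coords, 2):
        r = math.dist(a, b)
        if r < r_max:
            hist[int(r / r_max * n_bins)] += 1
    return hist
```

Concatenating such histograms for pairs and triplets gives a fixed-length descriptor of any configuration σ, which a regressor trained on DFT energies can map to E(σ).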

  15. Localized-overlap approach to calculations of intermolecular interactions

    Science.gov (United States)

    Rob, Fazle

Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be treated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are applied only to the regions where they are necessary; for other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. The main benefit of this method is that the calculation scales linearly with the size of the system, and therefore this approach is denoted local-overlap SAPT(DFT), or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning the distributed molecular response were solved; in particular, the unphysical charge-flow terms were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), for which excellent agreement with the experimental data has been found.

  16. Cooling tower calculations

    International Nuclear Information System (INIS)

    Simonkova, J.

    1988-01-01

The problems of the dynamic calculation of cooling towers with forced and natural air draft are summarized. The quantities and relations characterizing the simultaneous exchange of momentum, heat and mass in evaporative water cooling by atmospheric air in the packings of cooling towers are given. The method of solution is clarified for the calculation of evaporation criteria and thermal characteristics of countercurrent and cross-current cooling systems. The procedure for calculating cooling towers and their correction curves is demonstrated, and the effect of the operating mode at constant air number or constant outlet air volume flow on the course of these curves in ventilator cooling towers is assessed. In cooling towers with natural air draft, the unevenness of the water and air flow is assessed with respect to its effect on the resulting cooling efficiency of the towers. The calculation of thermal and resistance response curves and cooling curves is demonstrated for hydraulically unevenly loaded towers, with the water flow rate parameter graded radially by 20% across the cross-section of the packing. Flow rate unevenness of air due to wind impact on the outlet air flow from the tower significantly affects the temperatures of cooled water in natural draft cooling towers of designs with lower aerodynamic demands, at wind velocities as low as 2 m/s, as was demonstrated on a concrete example. (author). 11 figs., 10 refs

  17. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    International Nuclear Information System (INIS)

    Chen, C.L.; Wu, T.H.; Cheng, M.C.; Huang, Y.H.; Sheu, C.Y.; Hsieh, J.C.; Lee, J.S.

    2006-01-01

Abacus-based mental calculation is a technique unique to Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using the SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on a virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, greater involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and lower use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, were noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts can exhibit exceptional computational skills compared to non-experts after intensive training

  18. Prospective demonstration of brain plasticity after intensive abacus-based mental calculation training: An fMRI study

    Science.gov (United States)

    Chen, C. L.; Wu, T. H.; Cheng, M. C.; Huang, Y. H.; Sheu, C. Y.; Hsieh, J. C.; Lee, J. S.

    2006-12-01

Abacus-based mental calculation is a technique unique to Chinese culture. Abacus experts can perform complex computations mentally with exceptionally fast speed and high accuracy. However, the neural bases of this computation processing are not yet clearly known. This study used a BOLD contrast 3T fMRI system to explore the differences in brain activation between abacus experts and non-expert subjects. All the acquired data were analyzed using the SPM99 software. The results revealed different ways of performing calculations between the two groups. The experts tended to adopt an efficient visuospatial/visuomotor strategy (bilateral parietal/frontal network) to process and retrieve all the intermediate and final results on a virtual abacus during calculation. By contrast, coordination of several networks (verbal, visuospatial processing and executive function) was required in the normal group to carry out arithmetic operations. Furthermore, greater involvement of visuomotor imagery processing (right dorsal premotor area) for imagining bead manipulation, and lower use of the executive function (frontal-subcortical area) for launching the relatively time-consuming, sequentially organized process, were noted in the abacus expert group compared with the non-expert group. We suggest that these findings may explain why abacus experts can exhibit exceptional computational skills compared to non-experts after intensive training.

  19. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-01-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example that shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation

  20. Importance iteration in MORSE Monte Carlo calculations

    International Nuclear Information System (INIS)

    Kloosterman, J.L.; Hoogenboom, J.E.

    1994-02-01

    An expression to calculate point values (the expected detector response of a particle emerging from a collision or the source) is derived and implemented in the MORSE-SGC/S Monte Carlo code. It is outlined how these point values can be smoothed as a function of energy and as a function of the optical thickness between the detector and the source. The smoothed point values are subsequently used to calculate the biasing parameters of the Monte Carlo runs to follow. The method is illustrated by an example, which shows that the obtained biasing parameters lead to a more efficient Monte Carlo calculation. (orig.)

  1. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Science.gov (United States)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel
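The projection onto an active subspace that ROM relies on is often computed from simulation snapshots; the leading direction can be found, for instance, by power iteration on the snapshot covariance. A dependency-free sketch (all names are illustrative, and this shows only the subspace-extraction step, not the dissertation's error-bounding algorithm):

```python
import math

def dominant_pod_mode(snapshots, iters=100):
    """Leading POD/active-subspace direction of a set of state snapshots,
    via power iteration on S S^T, where the columns of S are the snapshots."""
    m = len(snapshots[0])
    v = [1.0] * m
    for _ in range(iters):
        w = [0.0] * m
        for s in snapshots:                      # w = S S^T v
            c = sum(si * vi for si, vi in zip(s, v))
            for i, si in enumerate(s):
                w[i] += c * si
        norm = math.sqrt(sum(x * x for x in w)) or 1.0
        v = [x / norm for x in w]                # renormalize each sweep
    return v
```

Projecting states onto the few dominant modes yields the reduced model; the dissertation's contribution is bounding the error that this truncation introduces against a user-defined tolerance.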

  2. Quantum Monte Carlo calculation of neutral-current ν -12C inclusive quasielastic scattering

    Science.gov (United States)

    Lovato, A.; Gandolfi, S.; Carlson, J.; Lusk, Ewing; Pieper, Steven C.; Schiavilla, R.

    2018-02-01

Quasielastic neutrino scattering is an important aspect of the experimental program to study fundamental neutrino properties including neutrino masses, mixing angles, mass hierarchy, and the charge-conjugation parity (CP)-violating phase. Proper interpretation of the experiments requires reliable theoretical calculations of neutrino-nucleus scattering. In this paper we present calculations of response functions and cross sections by neutral-current scattering of neutrinos off 12C. These calculations are based on realistic treatments of nuclear interactions and currents, the latter including the axial, vector, and vector-axial interference terms crucial for determining the difference between neutrino and antineutrino scattering and the CP-violating phase. We find that the strength and energy dependence of two-nucleon processes induced by correlation effects and interaction currents are crucial in providing the most accurate description of neutrino-nucleus scattering in the quasielastic regime.

  3. Providing frequency regulation reserve services using demand response scheduling

    International Nuclear Information System (INIS)

    Motalleb, Mahdi; Thornton, Matsu; Reihani, Ehsan; Ghorbani, Reza

    2016-01-01

    Highlights: • Proposing a market model for contingency reserve services using demand response. • Considering transient limitations of grid frequency for inverter-based generations. • Price-sensitive scheduling of residential batteries and water heaters using dynamic programming. • Calculating the profits of both generation companies and demand response aggregators. - Abstract: During power grid contingencies, frequency regulation is a primary concern. Historically, frequency regulation during contingency events has been the sole responsibility of the power utility. We present a practical method of using distributed demand response scheduling to provide frequency regulation during contingency events. This paper discusses the implementation of a control system model for the use of distributed energy storage systems such as battery banks and electric water heaters as a source of ancillary services. We present an algorithm which handles the optimization of demand response scheduling for normal operation and during contingency events. We use dynamic programming as an optimization tool. A price signal is developed using optimal power flow calculations to determine the locational marginal price of electricity, while sensor data for water usage is also collected. Using these inputs to dynamic programming, the optimal control signals are given as output. We assume a market model in which distributed demand response resources are sold as a commodity on the open market and profits from demand response aggregators as brokers of distributed demand response resources can be calculated. In considering control decisions for regulation of transient changes in frequency, we focus on IEEE standard 1547 in order to prevent the safety shut-off of inverter-based generation and further exacerbation of frequency droop. This method is applied to IEEE case 118 as a demonstration of the method in practice.
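The dynamic-programming scheduling step can be illustrated with a deferrable load (e.g. a water heater) that must run for a fixed number of slots at minimum cost against a price signal. A toy sketch (hypothetical names; a real formulation would carry tank temperature, battery state of charge, and frequency constraints in the state):

```python
def cheapest_schedule(prices, required_slots):
    """DP over (time slot, number of on-slots used): returns (total_cost,
    on/off schedule) that runs the load exactly `required_slots` times."""
    INF = float("inf")
    # best[j] = (cost, schedule) over the slots processed so far, j slots on
    best = [(0.0, [])] + [(INF, [])] * required_slots
    for price in prices:
        new_best = []
        for j in range(required_slots + 1):
            off_cost, off_sched = best[j]
            candidates = [(off_cost, off_sched + [0])]     # stay off this slot
            if j > 0:
                on_cost, on_sched = best[j - 1]
                candidates.append((on_cost + price, on_sched + [1]))  # run now
            new_best.append(min(candidates, key=lambda c: c[0]))
        best = new_best
    return best[required_slots]
```

For a flat run-count constraint this reduces to picking the cheapest slots, but the DP formulation extends naturally to ramp limits, comfort bounds, and storage dynamics, which is why the paper uses dynamic programming rather than a greedy rule.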

  4. Bending Moment Calculations for Piles Based on the Finite Element Method

    Directory of Open Access Journals (Sweden)

    Yu-xin Jie

    2013-01-01

Full Text Available Using the finite element analysis program ABAQUS, a series of calculations on a cantilever beam, a pile, and a sheet pile wall were made to investigate bending moment computational methods. The analyses demonstrated that shear locking is not significant for a passive pile embedded in soil; therefore, higher-order elements are not always necessary in the computation. The number of grids across the pile section is important for the bending moment calculated from stress, and less significant for that calculated from displacement. Although computing the bending moment from displacement requires fewer grids across the pile section, it sometimes leads to scatter in the results. For displacement calculation, a pile row can be suitably represented by an equivalent sheet pile wall, although the resulting bending moments may differ. Calculated bending moments may differ greatly with different grid partitions and computational methods; therefore, a comparison of results is necessary when performing the analysis.
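The displacement-based route mentioned above amounts to differentiating the deflection curve twice, M = EI·w''(x). A finite-difference sketch, checked against the analytic cantilever solution (all numbers are illustrative; the sign convention is omitted):

```python
def moment_from_deflection(w, x, EI, h=1e-4):
    """Bending moment via M = EI * w''(x), with w'' approximated by a
    central finite difference (exact, up to roundoff, for the cubic
    deflection curve used below)."""
    return EI * (w(x + h) - 2.0 * w(x) + w(x - h)) / (h * h)

# Cantilever of length L with tip load P: w(x) = P x^2 (3L - x) / (6 EI),
# for which the moment magnitude is M(x) = P (L - x).
EI, P, L = 2.0e6, 1.0e3, 3.0
w = lambda x: P * x * x * (3.0 * L - x) / (6.0 * EI)
M_fd = moment_from_deflection(w, 1.0, EI)   # analytic value: P*(L-1) = 2000
```

With real FE output, w is known only at discrete nodes, so the second difference amplifies nodal noise; this is one reason the abstract finds the displacement route sensitive to grid partitioning.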

  5. Dynamic calculation of structures in seismic zones. 2. ed.

    International Nuclear Information System (INIS)

    Capra, Alain; Davidovici, Victor

    1982-01-01

    The aims of this book are both didactic and practical. It is therefore addressed to both experienced engineers and students. Some general information about earthquakes and their occurrence is first given. The problem of a simple oscillator is then presented. In this way, the reader is provided with an insight into understanding the dynamic phenomena taking place and is introduced to the concept of response spectra and to an intuitive comprehension of the behavior of structures during earthquakes. The next chapter is devoted to the cases most frequently encountered with multiple oscillator structures. Theoretical studies are based on the usual modal decomposition method. The various practical methods of calculation employed are then examined, emphasis being given to the different stages involved and to which of them is best suited for a particular type of structure. Advice is given on how to select the model whose behavior best describes the real structure, both manual and computer methods of calculation being envisaged [fr
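
    The simple-oscillator and response-spectrum concepts the book introduces can be sketched as follows (a toy harmonic motion stands in for an accelerogram; all parameters are illustrative): integrate a damped single-degree-of-freedom oscillator over a ground-acceleration record for several natural periods and keep the peak response of each.

    ```python
    # Illustrative sketch (not from the book): displacement response spectrum of a
    # damped single-degree-of-freedom oscillator, u'' + 2*z*w*u' + w^2*u = -ag(t),
    # integrated with the unconditionally stable Newmark average-acceleration scheme.
    import numpy as np

    def sdof_peak_disp(ag, dt, period, zeta=0.05):
        w = 2.0 * np.pi / period
        u = v = 0.0
        a = -ag[0]                       # starts at rest
        peak = 0.0
        for agi in ag[1:]:
            u_pred = u + dt * v + 0.25 * dt**2 * a
            v_pred = v + 0.5 * dt * a
            a = (-agi - 2 * zeta * w * v_pred - w**2 * u_pred) / (
                1.0 + zeta * w * dt + 0.25 * (w * dt)**2)
            u = u_pred + 0.25 * dt**2 * a
            v = v_pred + 0.5 * dt * a
            peak = max(peak, abs(u))
        return peak

    dt = 0.01
    t = np.arange(0.0, 10.0, dt)
    ag = 0.3 * 9.81 * np.sin(2.0 * np.pi * 1.0 * t)   # toy 1 Hz "ground motion"
    spectrum = [sdof_peak_disp(ag, dt, T) for T in (0.2, 0.5, 1.0, 2.0)]
    # the oscillator tuned to the excitation period (T = 1.0 s) responds most
    ```

    With a real accelerogram the same loop, swept over many periods, produces the response spectrum used in design.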

  6. CO2 calculator

    DEFF Research Database (Denmark)

    Nielsen, Claus Werner; Nielsen, Ole-Kenneth

    2009-01-01

    Many countries are in the process of mapping their national CO2 emissions, but only a few have yet managed to produce an overall report at the municipal level. Denmark, however, has succeeded in such a project. Using a new national IT-based calculation model, municipalities can calculate the extent...

  7. Thermal neutron dose calculations in a brain phantom from 7Li(p,n) reaction based BNCT setup

    International Nuclear Information System (INIS)

    Elshahat, B.A.; Naqvi, A.A.; Maalej, N.; Abdallah, Khalid

    2006-01-01

    Monte Carlo simulations were carried out to calculate the neutron dose in a brain phantom from a 7Li(p,n) reaction based setup utilizing a high-density polyethylene moderator with a graphite reflector. The dimensions of the moderator and the reflector were optimized through optimization of the epithermal/(fast + thermal) neutron intensity ratio as a function of the geometric parameters of the setup. The results of our calculation showed the capability of the setup to treat tumors within 4 cm of the head surface. The calculated peak therapeutic ratio for the setup was found to be 2.15. With further improvement in the moderator design and the brain phantom irradiation arrangement, the setup's capabilities can be extended to reach deeper-seated tumors. (author)

  8. Cardiovascular risk calculation

    African Journals Online (AJOL)

    James A. Ker

    2014-08-20

    Aug 20, 2014 ... smoking and elevated blood sugar levels (diabetes mellitus). These risk ... These are risk charts, e.g. FRS, a non-laboratory-based risk calculation, and ... for hard cardiovascular end-points, such as coronary death, myocardial ...

  9. Correction of the calculation of beam loading based in the RF power diffusion equation

    International Nuclear Information System (INIS)

    Silva, R. da.

    1980-01-01

    An empirical correction to the calculation of the energy loss due to the beam loading effect, as given by the RF power diffusion equation theory for an accelerating structure, is described, based upon experimental data from other authors at the ORELA, GELINA and SLAC accelerators. The correction is found to depend on the electron pulse full width at half maximum, but to be independent of the electron energy. (author) [pt

  10. Calibration of thermoluminescence skin dosemeter response to beta emitters found in Ontario Hydro nuclear power stations

    International Nuclear Information System (INIS)

    Walsh, M.L.; Agnew, D.A.; Donnelly, K.E.

    1984-01-01

    The response of the Ontario Hydro Thermoluminescence Dosimetry System to beta radiation in nuclear power station environments was evaluated. Synthetic beta spectra were constructed, based on activity samples from heat transport systems and fuelling machine contamination smears at nuclear power stations. Using these spectra and dosemeter energy response functions, an overall response factor for the skin dosemeter relative to skin dose at 7 mg·cm⁻² was calculated. This calculation was done assuming three specific geometries: (1) an infinite uniformly contaminated plane source at a distance of 33 cm (50 mg·cm⁻² total shielding) from the receptor; (2) an infinite cloud surrounding the receptor; (3) a point source at 33 cm. Based on these calculations, a conservative response factor of 0.7 has been chosen. This provides an equation for skin dose assignment, i.e. Skin Dose = 1.4 × Skin Dosemeter Reading, when the skin dosemeter is directly calibrated in mGy(gamma). (author)
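
    The spectrum-weighting idea can be illustrated with a toy mix (the nuclide fractions and per-component response factors below are invented, not the paper's data): the overall factor is the activity-weighted sum of component factors, and its reciprocal is the reading-to-dose multiplier, mirroring the paper's 0.7 → 1.4 relationship.

    ```python
    # Hypothetical illustration (nuclide mix and per-component factors invented):
    # the overall skin-dosemeter response factor is the activity-weighted sum of
    # per-component response factors; its reciprocal converts reading to dose.
    fractions = {"Co-60": 0.5, "Cs-137": 0.3, "Sr-90/Y-90": 0.2}  # activity shares
    response = {"Co-60": 0.4, "Cs-137": 0.8, "Sr-90/Y-90": 1.0}   # assumed factors
    overall = sum(fractions[k] * response[k] for k in fractions)   # -> 0.64
    dose_factor = 1.0 / overall    # reading-to-skin-dose multiplier
    ```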

  11. Equipment response spectra for base-isolated shear beam structures

    International Nuclear Information System (INIS)

    Ahmadi, G.; Su, L.

    1992-01-01

    Equipment response spectra in base-isolated structures under seismic ground excitations are studied. The equipment is treated as a single-degree-of-freedom system attached to a nonuniform elastic shear beam structural model. Several leading base isolation systems are considered, including the laminated rubber bearing, the resilient-friction base isolator with and without a sliding upper plate, and the EDF system. Deflection and acceleration response spectra for the equipment and the shear beam structure subject to sinusoidal excitation and to the accelerogram of the N00W component of the El Centro 1940 earthquake are evaluated. Primary-secondary interaction effects are included in the analysis. Several numerical parametric studies are carried out, and the effectiveness of different base isolation systems in protecting the nonstructural components is studied. It is shown that use of properly designed base isolation systems provides considerable protection for secondary systems, as well as the structure, against severe seismic loadings. (orig.)

  12. Comparing Four Touch-Based Interaction Techniques for an Image-Based Audience Response System

    NARCIS (Netherlands)

    Jorritsma, Wiard; Prins, Jonatan T.; van Ooijen, Peter M. A.

    2015-01-01

    This study aimed to determine the most appropriate touch-based interaction technique for I2Vote, an image-based audience response system for radiology education in which users need to accurately mark a target on a medical image. Four plausible techniques were identified: land-on, take-off,

  13. Uncertainty calculations made easier

    International Nuclear Information System (INIS)

    Hogenbirk, A.

    1994-07-01

    The results are presented of a neutron cross section sensitivity/uncertainty analysis performed in a complicated 2D model of the NET shielding blanket design inside the ITER torus design, surrounded by the cryostat/biological shield as planned for ITER. The calculations were performed with a code system developed at ECN Petten, with which sensitivity/uncertainty calculations become relatively simple. In order to check the deterministic neutron transport calculations (performed with DORT), calculations were also performed with the Monte Carlo code MCNP. Care was taken to model the 2.0 cm wide gaps between two blanket segments, as the neutron flux behind the vacuum vessel is largely determined by neutrons streaming through these gaps. The resulting neutron flux spectra are in excellent agreement up to the end of the cryostat. It is noted that at this position the attenuation of the neutron flux is about 11 orders of magnitude. The uncertainty in the energy-integrated flux at the beginning of the vacuum vessel and at the beginning of the cryostat was determined in the calculations. The uncertainty appears to be strongly dependent on the exact geometry: if the gaps are filled with stainless steel, the neutron spectrum changes strongly, which results in an uncertainty of 70% in the energy-integrated flux at the beginning of the cryostat in the no-gap geometry, compared to an uncertainty of only 5% in the gap geometry. It is therefore essential to take the exact geometry into account in sensitivity/uncertainty calculations. Furthermore, this study shows that an improvement of the covariance data is urgently needed in order to obtain reliable estimates of the uncertainties in response parameters in neutron transport calculations. (orig./GL)
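
    Sensitivity/uncertainty propagation of this kind typically follows the "sandwich rule": the relative variance of a response R is S·C·S, where S holds the relative sensitivities per energy group and C the relative covariance matrix of the cross sections. A minimal numeric sketch (all values invented; not the ECN code system):

    ```python
    # Minimal "sandwich rule" sketch (all numbers invented, not the ECN system):
    # the relative variance of a response R is S^T C S, where S holds the
    # relative sensitivities per energy group and C the relative covariances.
    import numpy as np

    S = np.array([0.8, 0.3, -0.1])           # relative sensitivity profile
    C = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])       # relative covariance matrix
    var = S @ C @ S                          # (dR/R)^2
    rel_unc = float(np.sqrt(var))
    ```

    The quality of `C` directly bounds the quality of `rel_unc`, which is why the abstract stresses the need for better covariance data.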

  14. Calculations of risk: regulation and responsibility for asbestos in social housing.

    Science.gov (United States)

    Waldman, Linda; Williams, Heather

    2013-01-01

    This paper examines questions of risk, regulation, and responsibility in relation to asbestos lodged in UK social housing. Despite extensive health and safety legislation protecting against industrial exposure, very little regulatory attention is given to asbestos present in domestic homes. The paper argues that this lack of regulatory oversight, combined with the informal, contractual, and small-scale work undertaken in domestic homes, weakens the basic premise of occupational health and safety, namely that rational decision-making, technical measures, and individual safety behavior lead concerned parties (workers, employers, and others) to minimize risk and exposure. The paper focuses on UK council or social housing, examining how local housing authorities - as landlords - have a duty to provide housing and to protect and care for residents, but points out that these obligations do not extend to health and safety legislation in relation to DIY undertaken by residents. At the same time, only conventional occupational health and safety, based on rationality, identification, containment, and protective measures, covers itinerant workmen entering these homes. Focusing on asbestos and the way things work in reality, this paper thus explores the degree to which official health and safety regulation can safeguard maintenance and other workers in council homes. It simultaneously examines how councils advise and protect tenants as they occupy and shape their homes. In so doing, this paper challenges the notion of risk as an objective, scientific, and effective measure. In contrast, it demonstrates the ways in which occupational risk - and the choice of appropriate response - is more likely situational, determined by wide-ranging and often contradictory factors.

  15. Complex reactor cell calculation by means of consecutive use of the one-dimensional algorithms based on the DSsub(n)-method

    International Nuclear Information System (INIS)

    Kalashnikov, A.G.; Elovskaya, L.F.; Glebov, A.P.; Kuznetsova, L.I.

    1981-01-01

    The technique for approximate calculation of a water-cooled and water-moderated reactor cell based on the DSn method, and the TESI-2S program for the BESM-6 computer in which the proposed technique is realized, are described. The calculational technique is based on division of the complex reactor cell into simple one-dimensional cylindrical cells. The series of cells obtained in this way is calculated beginning from the first one. After each cell calculation the macroscopic cross sections are averaged over the cell volume using the neutron spatial and energy distribution. Approximate account can be taken of neutron transport between cells of the same rank by equating neutron fluxes on the cell boundary. The spatial and energy neutron flux distribution over the cells is calculated using the condition of isotropic neutron reflection on the cell boundary. The results of testing the proposed technique on the example of an ABV-1.5 reactor fuel assembly demonstrate the high accuracy and reliability of the employed algorithm [ru

  16. Calculation of the yearly energy performance of heating systems based on the European Building Energy Directive and related CEN Standards

    DEFF Research Database (Denmark)

    Olesen, Bjarne W.; de Carli, Michele

    2011-01-01

    According to the Energy Performance of Buildings Directive (EPBD), all new European buildings (residential, commercial, industrial, etc.) must since 2006 have an energy declaration based on the calculated energy performance of the building, including heating, ventilating, cooling and lighting systems. This energy declaration must refer to the primary energy or CO2 emissions. The European Organization for Standardization (CEN) has prepared a series of standards for energy performance calculations for buildings and systems. This paper presents the related standards for heating systems. …–20% of the building energy demand. The additional loss depends on the type of heat emitter, type of control, pump and boiler. Keywords: Heating systems; CEN standards; Energy performance; Calculation methods

  17. Verification of EPA's ''Preliminary Remediation Goals for radionuclides'' (PRG) electronic calculator

    Energy Technology Data Exchange (ETDEWEB)

    Jannik, Tim [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Stagich, Brooke [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-08-28

    The U.S. Environmental Protection Agency (EPA) requested an external, independent verification study of their updated “Preliminary Remediation Goals for Radionuclides” (PRG) electronic calculator. The calculator provides PRGs for radionuclides that are used as a screening tool at Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) and Resource Conservation and Recovery Act (RCRA) sites. These risk-based PRGs establish concentration limits under specific exposure scenarios. The purpose of this verification study is to determine that the calculator has no inherent numerical problems with obtaining solutions and to ensure that the equations are programmed correctly. There are 167 equations used in the calculator. To verify the calculator, all equations for each of seven receptor types (resident, construction worker, outdoor and indoor worker, recreator, farmer, and composite worker) were hand calculated using the default parameters. The same four radionuclides (Am-241, Co-60, H-3, and Pu-238) were used for each calculation for consistency throughout.

  18. An analytical method for calculating stresses and strains of ATF cladding based on thick walled theory

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Dong Hyun; Kim, Hak Sung [Hanyang University, Seoul (Korea, Republic of); Kim, Hyo Chan; Yang, Yong Sik; In, Wang kee [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In this paper, an analytical method based on thick-walled theory has been studied to calculate the stress and strain of ATF cladding. In order to prescribe boundary conditions for the analytical method, two algorithms were employed, the subroutines 'Cladf' and 'Couple' of FRACAS. To evaluate the developed method, an equivalent model using the finite element method was established and the stress components of the method were compared with those of the equivalent FE model. One promising ATF concept is the coated cladding, which offers advantages such as a high melting point, a high neutron economy, and a low tritium permeation rate. To evaluate the mechanical behavior and performance of the coated cladding, a dedicated model is needed to simulate ATF behavior in the reactor. In particular, a model for the simulation of stress and strain for the coated cladding should be developed because the previous model, FRACAS, is a one-body model. The FRACAS module employs an analytical method based on thin-walled theory. According to thin-walled theory, the radial stress is taken as zero, but this assumption is not suitable for ATF cladding because the radial stress is not negligible in that case. Recently, a structural model for multi-layered ceramic cylinders based on thick-walled theory was developed. Also, FE-based numerical simulations such as BISON have been developed to evaluate fuel performance. An analytical method that calculates the stress components of ATF cladding was developed in this study. Thick-walled theory was used to derive equations for calculating stress and strain. To solve these equations, boundary and loading conditions were obtained by the subroutines 'Cladf' and 'Couple' and applied to the analytical method. To evaluate the developed method, an equivalent FE model was established and its results were compared to those of the analytical model. Based on the
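
    The thick-walled theory referred to is the classical Lamé solution for a pressurized cylinder; a minimal sketch (illustrative radii and pressures, not the paper's coupled FRACAS boundary conditions) shows that the radial stress is non-zero through the wall, unlike the thin-walled assumption:

    ```python
    # Classical Lame thick-walled cylinder solution (illustrative radii and
    # pressures; not the paper's coupled FRACAS boundary conditions): unlike the
    # thin-walled assumption, the radial stress is non-zero through the wall.
    def lame_stresses(r, a, b, p_in, p_out):
        """Radial and hoop stress at radius r (inner radius a, outer radius b)."""
        A = (p_in * a**2 - p_out * b**2) / (b**2 - a**2)
        B = (p_in - p_out) * a**2 * b**2 / (b**2 - a**2)
        return A - B / r**2, A + B / r**2    # sigma_r, sigma_theta

    a, b = 4.18e-3, 4.75e-3       # assumed cladding inner/outer radii (m)
    p_in, p_out = 5.0e6, 15.5e6   # assumed internal gas / coolant pressures (Pa)
    sr_in, st_in = lame_stresses(a, a, b, p_in, p_out)    # sigma_r(a) = -p_in
    sr_out, st_out = lame_stresses(b, a, b, p_in, p_out)  # sigma_r(b) = -p_out
    ```

    The boundary values recover the applied pressures exactly, which is a convenient sanity check for any thick-walled implementation.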

  19. Technical manual for calculating cooling pond performance

    International Nuclear Information System (INIS)

    Krstulovich, S.F.

    1988-01-01

    This manual is produced in response to a growing number of requests for a technical aid to explain methods for simulating cooling pond performance. As such, it is a compilation of reports, charts and graphs developed through the years for use in analyzing situations. Section II contains a report summarizing the factors affecting cooling pond performance and lists statistical parameters used in developing performance simulations. Section III contains the graphs of simulated cooling pond performance on an hourly basis for various combinations of criteria (wind, solar, depth, air temperature and humidity) developed from the report in Section II. Section IV contains correspondence describing how to develop further data from the graphs in Section III, as well as mathematical models for the system of performance calculation. Section V contains the formulas used to simulate cooling pond performances in a cascade arrangement, such as the Fermilab Main Ring ponds. Section VI contains the calculations currently in use to evaluate the Main Ring pond performance based on current flows and Watts loadings. Section VII contains the overall site drawing of the Main Ring cooling ponds with thermal analysis and physical data

  20. Calculation of effect of burnup history on spent fuel reactivity based on CASMO5

    International Nuclear Information System (INIS)

    Li Xiaobo; Xia Zhaodong; Zhu Qingfu

    2015-01-01

    Based on the burnup credit of actinides + fission products (APU-2), which is usually considered for spent fuel packages, the effect of power density and operating history on k_∞ was studied. All the burnup calculations are based on the two-dimensional fuel assembly burnup program CASMO5. The results show that taking the core-average power density of the specified power plant plus a bounding margin of 0.0023 on k_∞, and taking an operating history at the specified power without shutdowns during or between cycles plus a bounding margin of 0.0045 on k_∞, meets the bounding principle of burnup credit. (authors)

  1. Study of cosmic ray interaction model based on atmospheric muons for the neutrino flux calculation

    International Nuclear Information System (INIS)

    Sanuki, T.; Honda, M.; Kajita, T.; Kasahara, K.; Midorikawa, S.

    2007-01-01

    We have studied the hadronic interaction for the calculation of the atmospheric neutrino flux by summarizing the accurately measured atmospheric muon flux data and comparing them with simulations. We find that the atmospheric muon and neutrino fluxes respond similarly to errors in the π-production of the hadronic interaction, and compare the atmospheric muon flux calculated using the HKKM04 [M. Honda, T. Kajita, K. Kasahara, and S. Midorikawa, Phys. Rev. D 70, 043008 (2004).] code with experimental measurements. The μ+ + μ- data show good agreement in the 1∼30 GeV/c range, but a large disagreement above 30 GeV/c. The μ+/μ- ratio shows sizable differences, in opposite directions, at lower and higher momenta. As the disagreements are considered to be due to assumptions in the hadronic interaction model, we try to improve it phenomenologically based on the quark parton model. The improved interaction model reproduces the observed muon flux data well. The calculation of the atmospheric neutrino flux will be reported in the following paper [M. Honda et al., Phys. Rev. D 75, 043006 (2007).]

  2. Inverse boundary element calculations based on structural modes

    DEFF Research Database (Denmark)

    Juhl, Peter Møller

    2007-01-01

    The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...

  3. Workload Capacity: A Response Time-Based Measure of Automation Dependence.

    Science.gov (United States)

    Yamani, Yusuke; McCarley, Jason S

    2016-05-01

    An experiment used the workload capacity measure C(t) to quantify the processing efficiency of human-automation teams and identify operators' automation usage strategies in a speeded decision task. Although response accuracy rates and related measures are often used to measure the influence of an automated decision aid on human performance, aids can also influence response speed. Mean response times (RTs), however, conflate the influence of the human operator and the automated aid on team performance and may mask changes in the operator's performance strategy under aided conditions. The present study used a measure of parallel processing efficiency, or workload capacity, derived from empirical RT distributions as a novel gauge of human-automation performance and automation dependence in a speeded task. Participants performed a speeded probabilistic decision task with and without the assistance of an automated aid. RT distributions were used to calculate two variants of a workload capacity measure, COR(t) and CAND(t). Capacity measures gave evidence that a diagnosis from the automated aid speeded human participants' responses, and that participants did not moderate their own decision times in anticipation of diagnoses from the aid. Workload capacity provides a sensitive and informative measure of human-automation performance and operators' automation dependence in speeded tasks. © 2016, Human Factors and Ergonomics Society.
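
    The capacity coefficient can be sketched directly from empirical RT distributions (simulated data below, following Townsend and Nozawa's OR definition C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)), with cumulative hazard H(t) = -ln S(t) estimated from each condition's survivor function):

    ```python
    # Hypothetical sketch of the OR workload-capacity coefficient,
    # C_OR(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = -ln S(t) is the
    # cumulative hazard estimated from empirical RT survivor functions.
    import numpy as np

    def cumulative_hazard(rts, t):
        rts = np.asarray(rts)
        surv = (rts[None, :] > t[:, None]).mean(axis=1)  # empirical S(t)
        surv = np.clip(surv, 1e-12, 1.0)                 # guard against log(0)
        return -np.log(surv)

    # simulated RTs (s): single-cue conditions A, B and redundant condition AB,
    # generated from an unlimited-capacity parallel (race) model
    rng = np.random.default_rng(0)
    rt_a = rng.exponential(0.40, 500) + 0.2
    rt_b = rng.exponential(0.40, 500) + 0.2
    rt_ab = np.minimum(rng.exponential(0.40, 500),
                       rng.exponential(0.40, 500)) + 0.2

    t = np.linspace(0.25, 1.0, 50)
    c_or = cumulative_hazard(rt_ab, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t))
    # for an unlimited-capacity parallel model, C_OR(t) hovers around 1
    ```

    Values reliably below 1 indicate limited capacity (e.g. operators waiting on the aid), values above 1 super-capacity; real analyses add confidence bands around the curve.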

  4. A generalized approach for the calculation and automation of potentiometric titrations Part 1. Acid-Base Titrations

    NARCIS (Netherlands)

    Stur, J.; Bos, M.; van der Linden, W.E.

    1984-01-01

    Fast and accurate calculation procedures for pH and redox potentials are required for optimum control of automatic titrations. The procedure suggested is based on a three-dimensional titration curve V = f(pH, redox potential). All possible interactions between species in the solution, e.g., changes

  5. Electric field calculations in brain stimulation based on finite elements

    DEFF Research Database (Denmark)

    Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel

    2013-01-01

    The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed to more accurately determine the affected brain areas. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, starting with the creation...... of accurate head models to the integration of the models in the numerical calculations. These problems substantially limit a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized...... the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh...

  6. Identifying the Interaction of Vancomycin With Novel pH-Responsive Lipids as Antibacterial Biomaterials Via Accelerated Molecular Dynamics and Binding Free Energy Calculations.

    Science.gov (United States)

    Ahmed, Shaimaa; Vepuri, Suresh B; Jadhav, Mahantesh; Kalhapure, Rahul S; Govender, Thirumala

    2018-06-01

    Nano-drug delivery systems have proven to be an efficient formulation tool to overcome the challenges with current antibiotics therapy and resistance. A series of pH-responsive lipid molecules were designed and synthesized for a future liposomal formulation as a nano-drug delivery system for vancomycin at the infection site. The structures of these lipids differ from each other in their hydrocarbon tails: Lipid1, 2, 3 and 4 have stearic, oleic, linoleic, and linolenic acid hydrocarbon chains, respectively. The impact of variation in the hydrocarbon chain of the lipid structure on drug encapsulation and release profile, as well as the mode of drug interaction, was investigated using molecular modeling analyses. A wide range of computational tools, including accelerated molecular dynamics, normal molecular dynamics, binding free energy calculations and principal component analysis, were applied to provide comprehensive insight into the interaction landscape between vancomycin and the designed lipid molecules. Interestingly, both MM-GBSA and MM-PBSA binding affinity calculations using normal molecular dynamics and accelerated molecular dynamics trajectories showed a very consistent trend, where the order of binding affinity towards vancomycin was lipid4 > lipid1 > lipid2 > lipid3. From both normal molecular dynamics and accelerated molecular dynamics, the interaction of lipid3 with vancomycin is demonstrated to be the weakest (ΔGbinding = -2.17 and -11.57, for normal molecular dynamics and accelerated molecular dynamics, respectively) when compared to the other complexes. We believe that the degree of unsaturation of the hydrocarbon chain in the lipid molecules may impact the overall conformational behavior, interaction mode and encapsulation (wrapping) of the lipid molecules around the vancomycin molecule. This thorough computational analysis prior to experimental investigation is a valuable approach to guide prediction of the encapsulation

  7. Predicting response to incretin-based therapy

    Directory of Open Access Journals (Sweden)

    Agrawal N

    2011-04-01

    Full Text Available Sanjay Kalra1, Bharti Kalra2, Rakesh Sahay3, Navneet Agrawal4; 1Department of Endocrinology, 2Department of Diabetology, Bharti Hospital, Karnal, India; 3Department of Endocrinology, Osmania Medical College, Hyderabad, India; 4Department of Medicine, GR Medical College, Gwalior, India. Abstract: There are two important incretin hormones, glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1). The biological activities of GLP-1 include stimulation of glucose-dependent insulin secretion and insulin biosynthesis, inhibition of glucagon secretion and gastric emptying, and inhibition of food intake. GLP-1 appears to have a number of additional effects in the gastrointestinal tract and central nervous system. Incretin-based therapy includes GLP-1 receptor agonists such as human GLP-1 analogs (liraglutide) and exendin-4 based molecules (exenatide), as well as DPP-4 inhibitors like sitagliptin, vildagliptin and saxagliptin. Most of the published studies showed a significant reduction in HbA1c using these drugs. A critical analysis of reported data shows that the response rate of these drugs, in terms of patients achieving targets, is average. One of the first actions identified for GLP-1 was the glucose-dependent stimulation of insulin secretion from islet cell lines. Following the detection of GLP-1 receptors on islet β cells, a large body of evidence has accumulated illustrating that GLP-1 exerts multiple actions on various signaling pathways and gene products in the β cell. GLP-1 controls glucose homeostasis through well-defined actions on the islet β cell via stimulation of insulin secretion and preservation and expansion of β cell mass. In summary, there are several factors determining the response rate to incretin therapy. Currently minimal clinical data are available from which to draw conclusions. Key factors appear to be duration of diabetes, obesity, presence of autonomic neuropathy, resting energy expenditure, plasma glucagon levels and

  8. Time Analysis of Building Dynamic Response Under Seismic Action. Part 1: Theoretical Propositions

    Science.gov (United States)

    Ufimtcev, E. M.

    2017-11-01

    The first part of the article presents the main provisions of the analytical approach - the time analysis method (TAM) - developed for the calculation of the elastic dynamic response of rod structures treated as discrete dissipative systems (DDS) and based on the investigation of the characteristic matrix quadratic equation. The assumptions adopted in the construction of the mathematical model of structural oscillations, as well as the features of calculating and recording seismic forces based on earthquake accelerogram data, are given. A system of resolving equations is given to determine the nodal (kinematic and force) response parameters as well as the stress-strain state (SSS) parameters of the system's rods.

  9. One-dimensional thermal evolution calculation based on a mixing length theory: Application to Saturnian icy satellites

    Science.gov (United States)

    Kamata, S.

    2017-12-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
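
    The MLT idea can be caricatured in a few lines (a toy 1-D scheme, not Kamata's: convection is mimicked by boosting the effective conductivity wherever the temperature gradient exceeds an assumed critical value; all material numbers are invented):

    ```python
    # Toy 1-D spherically symmetric thermal-conduction step with an MLT-like
    # closure (illustrative only, not Kamata's scheme): the effective
    # conductivity is boosted wherever the gradient is "super-critical",
    # mimicking convective heat transport at a 1-D calculation cost.
    import numpy as np

    def step_temperature(T, r, dt, k0=3.0, rho=1000.0, cp=2000.0,
                         grad_crit=5e-4, k_conv=300.0):
        dTdr = np.gradient(T, r)
        # MLT-like closure: strong effective conductivity in "convective" zones
        k_eff = np.where(-dTdr > grad_crit, k_conv, k0)
        flux = -k_eff * dTdr
        # divergence of flux in spherical geometry: (1/r^2) d(r^2 F)/dr
        div = np.gradient(r**2 * flux, r) / np.maximum(r**2, r[1]**2)
        T_new = T - dt * div / (rho * cp)
        T_new[0] = T_new[1]    # insulated center
        T_new[-1] = T[-1]      # fixed surface temperature
        return T_new

    r = np.linspace(1.0, 560e3, 200)          # radius grid (m), ~Dione-sized
    T = 90.0 + 160.0 * (1.0 - r / r[-1])      # warm interior, 90 K surface
    for _ in range(100):
        T = step_temperature(T, r, dt=1e9)    # ~3 kyr of toy evolution
    ```

    A real MLT scheme derives the convective flux from the mixing length l rather than a fixed `k_conv`, which is exactly the parameter whose definition the abstract proposes to improve.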

  10. Correlation between calculated molecular descriptors of excipient amino acids and experimentally observed thermal stability of lysozyme

    DEFF Research Database (Denmark)

    Meng-Lund, Helena; Friis, Natascha; van de Weert, Marco

    2017-01-01

    for lysozyme in combination with 13 different amino acids using high throughput fluorescence spectroscopy and kinetic static light scattering measurements. On the theoretical side, around 200 2D and 3D molecular descriptors were calculated based on the amino acids' chemical structure. Multivariate data...... prominent stabilizing factor for both responses, whereas hydrophilic surface properties and high molecular mass density mostly had a positive influence on the unfolding temperature. A high partition coefficient (logP(o/w)) was identified as the most prominent destabilizing factor for both responses...

  11. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B.; Jia, Xun

    2015-09-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, a lot of effort has been made to realize fast MC dose calculation on graphic processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogenous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by
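
    A drastically simplified sketch of the MC idea (not goMC: no electron transport, energy is deposited locally at the first interaction, and a single attenuation coefficient is assumed): sample photon interaction depths in a water slab and score a depth-dose histogram.

    ```python
    # Toy Monte Carlo depth-dose in a 1-D water slab (illustrative only; real
    # engines such as goMC track coupled photon-electron physics on the GPU).
    import numpy as np

    def mc_depth_dose(n_photons=20000, mu=0.07, depth=30.0, nbins=30, seed=1):
        """mu: assumed attenuation coefficient (1/cm) for MV photons in water."""
        rng = np.random.default_rng(seed)
        edges = np.linspace(0.0, depth, nbins + 1)
        dose = np.zeros(nbins)
        # first-interaction depths follow the exponential attenuation law
        z = rng.exponential(1.0 / mu, n_photons)
        inside = z < depth
        idx = np.digitize(z[inside], edges) - 1
        np.add.at(dose, idx, 1.0)   # deposit unit energy at the interaction bin
        return edges, dose

    edges, dose = mc_depth_dose()   # dose falls off roughly as exp(-mu * z)
    ```

    The statistical nature of the score explains why production engines need enormous histories, and hence the GPU acceleration the abstract discusses.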

  12. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC).

    Science.gov (United States)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-10-07

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, much effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia's CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon-electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783-97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE's random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, goMC was ~4-16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was validated by

  13. A GPU OpenCL based cross-platform Monte Carlo dose calculation engine (goMC)

    International Nuclear Information System (INIS)

    Tian, Zhen; Shi, Feng; Folkerts, Michael; Qin, Nan; Jiang, Steve B; Jia, Xun

    2015-01-01

    Monte Carlo (MC) simulation has been recognized as the most accurate dose calculation method for radiotherapy. However, the extremely long computation time impedes its clinical application. Recently, much effort has been made to realize fast MC dose calculation on graphics processing units (GPUs). However, most of the GPU-based MC dose engines have been developed under NVidia’s CUDA environment. This limits the code portability to other platforms, hindering the introduction of GPU-based MC simulations to clinical practice. The objective of this paper is to develop a GPU OpenCL based cross-platform MC dose engine named goMC with coupled photon–electron simulation for external photon and electron radiotherapy in the MeV energy range. Compared to our previously developed GPU-based MC code named gDPM (Jia et al 2012 Phys. Med. Biol. 57 7783–97), goMC has two major differences. First, it was developed under the OpenCL environment for high code portability and hence could be run not only on different GPU cards but also on CPU platforms. Second, we adopted the electron transport model used in the EGSnrc MC package and PENELOPE’s random hinge method in our new dose engine, instead of the dose planning method employed in gDPM. Dose distributions were calculated for a 15 MeV electron beam and a 6 MV photon beam in a homogeneous water phantom, a water-bone-lung-water slab phantom and a half-slab phantom. Satisfactory agreement between the two MC dose engines goMC and gDPM was observed in all cases. The average dose differences in the regions that received a dose higher than 10% of the maximum dose were 0.48–0.53% for the electron beam cases and 0.15–0.17% for the photon beam cases. In terms of efficiency, goMC was ∼4–16% slower than gDPM when running on the same NVidia TITAN card for all the cases we tested, due to both the different electron transport models and the different development environments. The code portability of our new dose engine goMC was
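The agreement metric quoted above (the average dose difference over the region receiving more than 10% of the maximum dose) is straightforward to reproduce for any pair of dose grids. A minimal NumPy sketch, with toy dose arrays rather than the paper's data:

```python
import numpy as np

def avg_dose_diff(dose_a, dose_b, threshold=0.10):
    """Mean dose difference (in % of D_max) over the region where the
    reference dose exceeds `threshold` times its maximum."""
    dose_a = np.asarray(dose_a, dtype=float)
    dose_b = np.asarray(dose_b, dtype=float)
    d_max = dose_a.max()
    mask = dose_a > threshold * d_max          # high-dose region only
    diff = np.abs(dose_a[mask] - dose_b[mask]) / d_max
    return 100.0 * diff.mean()

# two toy dose grids differing by 0.5% of D_max everywhere
ref = np.linspace(0.0, 2.0, 101)
test = ref + 0.01
print(round(avg_dose_diff(ref, test), 2))      # → 0.5
```

Restricting the comparison to the high-dose region avoids inflating the metric with relative differences in near-zero-dose voxels.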

  14. Δg: The new aromaticity index based on g-factor calculation applied for polycyclic benzene rings

    Science.gov (United States)

    Ucun, Fatih; Tokatlı, Ahmet

    2015-02-01

    In this work, the aromaticity of polycyclic benzene rings was evaluated by calculating the g-factor for a hydrogen atom placed perpendicularly at the geometrical center of the related ring plane at a distance of 1.2 Å. The results have been compared with other commonly used aromaticity indices, such as HOMA, NICSs, PDI, FLU, MCI, and CTED, and have generally been found to be in agreement with them. It was therefore proposed that the calculation of the average g-factor, Δg, could be applied to study the aromaticity of polycyclic benzene rings, without any restriction on the number of benzene rings, as a new magnetic-based aromaticity index.

  15. Gibbs Sampler-Based λ-Dynamics and Rao-Blackwell Estimator for Alchemical Free Energy Calculation.

    Science.gov (United States)

    Ding, Xinqiang; Vilseck, Jonah Z; Hayes, Ryan L; Brooks, Charles L

    2017-06-13

    λ-dynamics is a generalized ensemble method for alchemical free energy calculations. In traditional λ-dynamics, the alchemical switch variable λ is treated as a continuous variable ranging from 0 to 1 and an empirical estimator is utilized to approximate the free energy. In the present article, we describe an alternative formulation of λ-dynamics that utilizes the Gibbs sampler framework, which we call Gibbs sampler-based λ-dynamics (GSLD). GSLD, like traditional λ-dynamics, can be readily extended to calculate free energy differences between multiple ligands in one simulation. We also introduce a new free energy estimator, the Rao-Blackwell estimator (RBE), for use in conjunction with GSLD. The advantage of the RBE over the current empirical estimator is that the RBE is unbiased and usually has a smaller variance. We also show that the multistate Bennett acceptance ratio equation or the unbinned weighted histogram analysis method equation can be derived using the RBE. We illustrate the use and performance of this new free energy computational framework by application to a simple harmonic system as well as relevant calculations of small molecule relative free energies of solvation and binding to a protein receptor. Our findings demonstrate consistent and improved performance compared with conventional alchemical free energy methods.
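The Rao-Blackwell idea (averaging the conditional probabilities p(λ|x) rather than binary visit counts) can be illustrated on a toy system with a discrete alchemical variable λ ∈ {0, 1} and harmonic end-state potentials. Everything below (spring constants, sample count) is invented for illustration and is far simpler than the actual GSLD formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0
k = {0: 1.0, 1: 4.0}                 # spring constants of the end states
# exact result for U_l(x) = k_l * x**2 / 2:  dF = 0.5 * ln(k1 / k0)
exact_dF = 0.5 * np.log(k[1] / k[0])

lam = 0
p0_sum = p1_sum = 0.0
for _ in range(50_000):
    # Gibbs step 1: x | lam is Gaussian with variance 1/(beta * k_lam)
    x = rng.normal(0.0, 1.0 / np.sqrt(beta * k[lam]))
    # Gibbs step 2: lam | x from the Boltzmann weights of both states
    w0 = np.exp(-0.5 * beta * k[0] * x * x)
    w1 = np.exp(-0.5 * beta * k[1] * x * x)
    p1 = w1 / (w0 + w1)
    lam = int(rng.random() < p1)
    # Rao-Blackwell: accumulate conditional probabilities, not 0/1 visits
    p0_sum += 1.0 - p1
    p1_sum += p1

dF_est = -np.log(p1_sum / p0_sum) / beta
print(round(exact_dF, 3))            # → 0.693; dF_est should land nearby
```

Because the toy system has a closed-form answer, ΔF = ½ ln(k₁/k₀), the estimator can be checked directly.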

  16. Important parameters in ORIGEN2 calculations of spent fuel compositions

    International Nuclear Information System (INIS)

    Welch, T.D.; Notz, K.J.; Andermann, R.J. Jr.

    1990-01-01

    The Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is responsible for implementing federal policy for the management and permanent disposal of spent nuclear fuel from civilian nuclear power reactors and of high-level radioactive waste. The Characteristics Data Base (CDB) provides an extensive collection of data on the four waste streams that may require long-term isolation: LWR spent fuel, high-level waste, non-LWR spent fuel, and miscellaneous wastes (such as greater-than-class-C). The eight-volume report and the five supplemental menu-driven PC data bases encompass radiological characteristics, chemical compositions, physical descriptions, inventories, and projections. An overview of these data bases, which are available through the Oak Ridge National Laboratory, is provided by Notz. This paper reports that the radiological characteristics in the CDB are calculated using ORIGEN2

  17. DP-THOT - a calculational tool for bundle-specific decay power based on actual irradiation history

    International Nuclear Information System (INIS)

    Johnston, S.; Morrison, C.A.; Albasha, H.; Arguner, D.

    2005-01-01

    A tool has been created for calculating the decay power of an individual fuel bundle to take account of its actual irradiation history, as tracked by the fuel management code SORO. The DP-THOT tool was developed in two phases: first as a standalone executable code for decay power calculation, which could accept as input an entirely arbitrary irradiation history; then as a module integrated with SORO auxiliary codes, which directly accesses SORO history files to retrieve the operating power history of the bundle since it first entered the core. The methodology implemented in the standalone code is based on the ANSI/ANS-5.1-1994 formulation, which has been specifically adapted for calculating decay power in irradiated CANDU reactor fuel, by making use of fuel type specific parameters derived from WIMS lattice cell simulations for both 37 element and 28 element CANDU fuel bundle types. The approach also yields estimates of uncertainty in the calculated decay power quantities, based on the evaluated error in the decay heat correlations built-in for each fissile isotope, in combination with the estimated uncertainty in user-supplied inputs. The method was first implemented in the form of a spreadsheet, and following successful testing against decay powers estimated using the code ORIGEN-S, the algorithm was coded in FORTRAN to create an executable program. The resulting standalone code, DP-THOT, accepts an arbitrary irradiation history and provides the calculated decay power and estimated uncertainty over any user-specified range of cooling times, for either 37 element or 28 element fuel bundles. The overall objective was to produce an integrated tool which could be used to find the decay power associated with any identified fuel bundle or channel in the core, taking into account the actual operating history of the bundles involved. The benefit is that the tool would allow a more realistic calculation of bundle and channel decay powers for outage heat sink planning
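The ANS-5.1-style superposition over an arbitrary stepwise power history can be sketched as follows. The two-group exponential kernel here is invented (the standard uses 23 evaluated groups per fissile isotope); only the bookkeeping over history segments is the point:

```python
import numpy as np

# Invented 2-group decay-heat kernel (the real ANS-5.1 correlations use
# 23 exponential groups per fissile isotope with evaluated coefficients).
ALPHA = np.array([5.0e-4, 2.0e-7])   # amplitudes (1/s)
LAM = np.array([1.0e-2, 1.0e-5])     # decay constants (1/s)

def decay_power(history, t_cool):
    """Decay power (W) at `t_cool` seconds after shutdown.
    `history` is a list of (power_W, duration_s) segments, oldest first;
    the contribution of each constant-power segment is superposed."""
    total, t_end = 0.0, t_cool       # cooling time since the last segment
    for power, duration in reversed(history):
        seg = np.sum(ALPHA / LAM * (np.exp(-LAM * t_end)
                                    - np.exp(-LAM * (t_end + duration))))
        total += power * seg
        t_end += duration            # earlier segments have cooled longer
    return float(total)

# one year at 3000 MW(th), one hour after shutdown
p_1h = decay_power([(3000e6, 3.15e7)], 3600.0)
print(round(p_1h / 1e6, 1), "MW")    # → 57.9 MW
```

With these toy coefficients the decay power at shutdown after a long irradiation is about 7% of operating power, and it falls monotonically with cooling time, as the real correlations do.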

  18. Wave kinematics and response of slender offshore structures. Vol 5: Wave forces and responses

    Energy Technology Data Exchange (ETDEWEB)

    Pedersen, L.M.; Riber, H.J.

    1999-08-01

    A load measuring system (LMS) and a wave measuring system (WMS) have been used on the North Sea platform Tyra. The LMS consists of an instrumented pipe placed vertically in the crest zone of high and steep waves. The WMS consists of a unique sonar system placed on the sea floor. Simultaneous measurements of the kinematics of waves and currents and of the response of the instrumented pipe were carried out during a period of five months in the winter of 1994/95. Numerical calculations of the response of the LMS were carried out with LIC22, applying the measured wave and current kinematics. The responses are compared to the measured responses of the LMS. The comparison is based on the statistical main properties of the calculated and measured responses, as the kinematic field is measured 150 metres away from the instrumented pipe. From the analyses, the main parameters (reduced velocity V{sub R} and correlation length l{sub c}) for vortex induced vibrations (VIV) are calibrated and the main environmental conditions for VIV are determined. The hydrodynamic coefficients determining the wave and current forces on slender structures are studied (drag coefficient C{sub D} and added mass coefficient C{sub M}). Further, the effect on the drag coefficient due to air blending in the upper part of the wave is determined. (au)
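Of the two calibrated VIV parameters, the reduced velocity is a one-line calculation. A sketch, where the lock-in band limits are common textbook values for cross-flow VIV of circular cylinders, not the ones calibrated in this study:

```python
def reduced_velocity(u_flow, f_n, diameter):
    """V_R = U / (f_n * D): flow travel per oscillation cycle, in diameters."""
    return u_flow / (f_n * diameter)

def in_lockin_band(v_r, low=4.0, high=8.0):
    """Rough textbook cross-flow lock-in band, not the calibrated values."""
    return low <= v_r <= high

# example: 1.2 m/s current, 0.6 Hz natural frequency, 0.5 m pipe diameter
vr = reduced_velocity(1.2, 0.6, 0.5)
print(vr, in_lockin_band(vr))   # → 4.0 True
```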

  19. Extension of the COSYMA-ECONOMICS module - cost calculations based on different economic sectors

    International Nuclear Information System (INIS)

    Faude, D.

    1994-12-01

    The COSYMA program system for evaluating the off-site consequences of accidental releases of radioactive material to the atmosphere includes an ECONOMICS module for assessing economic consequences. The aim of this module is to convert various consequences (radiation-induced health effects and impacts resulting from countermeasures) caused by an accident into the common framework of economic costs; this allows different effects to be expressed in the same terms and thus to make these effects comparable. With respect to the countermeasure 'movement of people', the dominant cost categories are 'loss-of-income costs' and 'costs of lost capital services'. In the original version of the ECONOMICS module these costs are calculated on the basis of the total number of people moved. In order to take into account also regional or local economic peculiarities of a nuclear site, the ECONOMICS module has been extended: Calculation of the above mentioned cost categories is now based on the number of employees in different economic sectors in the affected area. This extension of the COSYMA ECONOMICS module is described in more detail. (orig.)
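The extension described amounts to replacing a per-capita cost with a weighted sum over economic sectors. A hedged sketch with invented sector data (the real module's cost categories and rates are more detailed):

```python
# Employees and annual cost rates per employee by economic sector in the
# affected area (all figures invented for illustration).
sectors = {
    "agriculture": {"employees": 1200, "income": 30_000, "capital": 12_000},
    "industry":    {"employees": 4500, "income": 45_000, "capital": 25_000},
    "services":    {"employees": 8300, "income": 40_000, "capital":  8_000},
}

def movement_costs(sectors, duration_years):
    """Loss-of-income and lost-capital-services costs of relocating the
    workforce for `duration_years`, summed over economic sectors."""
    income = sum(s["employees"] * s["income"] for s in sectors.values())
    capital = sum(s["employees"] * s["capital"] for s in sectors.values())
    return duration_years * income, duration_years * capital

income_cost, capital_cost = movement_costs(sectors, duration_years=2.0)
print(income_cost, capital_cost)   # → 1141000000.0 386600000.0
```

Using sector-resolved employment instead of the total number of people moved is exactly what lets the module reflect regional economic peculiarities of a site.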

  20. Slope excavation quality assessment and excavated volume calculation in hydraulic projects based on laser scanning technology

    Directory of Open Access Journals (Sweden)

    Chao Hu

    2015-04-01

    Full Text Available Slope excavation is one of the most crucial steps in the construction of a hydraulic project. Excavation project quality assessment and excavated volume calculation are critical in construction management. The positioning of excavation projects using traditional instruments is inefficient and may cause error. To improve the efficiency and precision of calculation and assessment, three-dimensional laser scanning technology was used for slope excavation quality assessment. An efficient data acquisition, processing, and management workflow was presented in this study. Based on the quality control indices, including the average gradient, slope toe elevation, and overbreak and underbreak, cross-sectional quality assessment and holistic quality assessment methods were proposed to assess the slope excavation quality with laser-scanned data. An algorithm was also presented to calculate the excavated volume with laser-scanned data. A field application and a laboratory experiment were carried out to verify the feasibility of these methods for excavation quality assessment and excavated volume calculation. The results show that the quality assessment indices can be obtained rapidly and accurately with design parameters and scanned data, and the results of holistic quality assessment are consistent with those of cross-sectional quality assessment. In addition, the time consumption in excavation quality assessment with the laser scanning technology can be reduced by 70%–90%, as compared with the traditional method. The excavated volume calculated with the scanned data only slightly differs from measured data, demonstrating the applicability of the excavated volume calculation method presented in this study.
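One common way to compute excavated volume from two laser-scan epochs (assumed here, since the abstract does not reproduce the paper's own algorithm) is to grid both point clouds into DEMs and sum the per-cell cut depths:

```python
import numpy as np

def excavated_volume(z_before, z_after, cell_size):
    """Volume removed between two gridded surfaces (DEMs) by summing
    per-cell cut depths; fill (z_after above z_before) is ignored."""
    cut = np.clip(np.asarray(z_before) - np.asarray(z_after), 0.0, None)
    return float(cut.sum() * cell_size ** 2)

# toy check: a flat 10 m x 10 m area lowered uniformly by 2 m
n, cell = 20, 0.5                       # 20 x 20 cells of 0.5 m
z_pre = np.full((n, n), 5.0)
z_post = np.full((n, n), 3.0)
print(excavated_volume(z_pre, z_post, cell))   # → 200.0
```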

  1. Determination of the spatial response of neutron based analysers using a Monte Carlo based method

    International Nuclear Information System (INIS)

    Tickner, James

    2000-01-01

    One of the principal advantages of using thermal neutron capture (TNC, also called prompt gamma neutron activation analysis or PGNAA) or neutron inelastic scattering (NIS) techniques for measuring elemental composition is the high penetrating power of both the incident neutrons and the resultant gamma-rays, which means that large sample volumes can be interrogated. Gauges based on these techniques are widely used in the mineral industry for on-line determination of the composition of bulk samples. However, attenuation of both neutrons and gamma-rays in the sample and geometric (source/detector distance) effects typically result in certain parts of the sample contributing more to the measured composition than others. In turn, this introduces errors in the determination of the composition of inhomogeneous samples. This paper discusses a combined Monte Carlo/analytical method for estimating the spatial response of a neutron gauge. Neutron propagation is handled using a Monte Carlo technique which allows an arbitrarily complex neutron source and gauge geometry to be specified. Gamma-ray production and detection is calculated analytically which leads to a dramatic increase in the efficiency of the method. As an example, the method is used to study ways of reducing the spatial sensitivity of on-belt composition measurements of cement raw meal

  2. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
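The L²/D scaling at the end of the abstract can be checked numerically. The sketch below is not the authors' moment-based method; it simply times an explicit finite-difference solution of 1-D transient flow until it is within 1% of steady state, and shows that doubling L quadruples that time:

```python
import numpy as np

def response_time(L, D, n=50, tol=0.01):
    """Time for u_t = D u_xx on [0, L], u(0)=1, u(L)=0, u(x,0)=0,
    to come within `tol` of the linear steady state (explicit FTCS)."""
    dx = L / n
    dt = 0.4 * dx * dx / D                 # stable explicit time step
    x = np.linspace(0.0, L, n + 1)
    u = np.zeros(n + 1)
    u[0] = 1.0                             # fixed-head boundary
    u_ss = 1.0 - x / L                     # steady state
    t = 0.0
    while np.max(np.abs(u - u_ss)) > tol:
        u[1:-1] += dt * D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
        t += dt
    return t

t1, t2 = response_time(1.0, 1.0), response_time(2.0, 1.0)
print(t2 / t1)   # ≈ 4: doubling L quadruples the response time (t ∝ L²/D)
```

This brute-force timing is exactly the expensive computation the paper's moment-based estimate is designed to avoid.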

  3. Calculating Student Grades.

    Science.gov (United States)

    Allswang, John M.

    1986-01-01

    This article provides two short microcomputer gradebook programs. The programs, written in BASIC for the IBM-PC and Apple II, provide statistical information about class performance and calculate grades either on a normal distribution or based on teacher-defined break points. (JDH)
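The break-point grading described translates directly into a few lines of modern code (the cut-offs below are arbitrary examples, not taken from the article):

```python
def letter_grade(score, breaks=((90, "A"), (80, "B"), (70, "C"), (60, "D"))):
    """Grade from teacher-defined break points (highest cutoff first)."""
    for cutoff, letter in breaks:
        if score >= cutoff:
            return letter
    return "F"

def class_stats(scores):
    """Class mean and the letter grade for each student."""
    return sum(scores) / len(scores), [letter_grade(s) for s in scores]

mean, grades = class_stats([95, 84, 71, 58])
print(mean, grades)    # → 77.0 ['A', 'B', 'C', 'F']
```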

  4. Calculation of intercepted runoff depth based on stormwater quality and environmental capacity of receiving waters for initial stormwater pollution management.

    Science.gov (United States)

    Peng, Hai-Qin; Liu, Yan; Gao, Xue-Long; Wang, Hong-Wu; Chen, Yi; Cai, Hui-Yi

    2017-11-01

    While point source pollution has gradually been brought under control in recent years, non-point source pollution has become increasingly prominent. The receiving waters are frequently polluted by the initial stormwater from the separate stormwater system and the wastewater from sewage pipes through stormwater pipes. Consequently, calculating the intercepted runoff depth has become a problem that must be resolved immediately for initial stormwater pollution management. The accurate calculation of intercepted runoff depth provides a solid foundation for selecting the appropriate size of intercepting facilities in drainage and interception projects. This study establishes a separate stormwater system for the Yishan Building watershed of Fuzhou City using the InfoWorks Integrated Catchment Management (InfoWorks ICM), which can predict the stormwater flow velocity and the flow of discharge outlet after each rainfall. The intercepted runoff depth is calculated from the stormwater quality and environmental capacity of the receiving waters. The average intercepted runoff depth from six rainfall events is calculated as 4.1 mm based on stormwater quality. The average intercepted runoff depth from six rainfall events is calculated as 4.4 mm based on the environmental capacity of the receiving waters. The intercepted runoff depth differs when calculated from various aspects. The selection of the intercepted runoff depth depends on the goal of water quality control, the self-purification capacity of the water bodies, and other factors of the region.

  5. Dynamic Response of a Floating Bridge Structure

    OpenAIRE

    Viuff, Thomas; Leira, Bernt Johan; Øiseth, Ole; Xiang, Xu

    2016-01-01

    A theoretical overview of the stochastic dynamic analysis of a floating bridge structure is presented. Emphasis is on the wave-induced response and the waves on the sea surface are idealized as a zero mean stationary Gaussian process. The first-order wave load processes are derived using linear potential theory and the structural idealization is based on the Finite Element Method. A frequency response calculation is presented for a simplified floating bridge structure example emphasising the ...
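After FE discretization, a frequency-response calculation of this kind reduces to solving (K + iωC − ω²M) H(ω) = F over a frequency grid. A minimal 2-DOF sketch with invented matrices:

```python
import numpy as np

M = np.diag([2.0, 1.0])                     # toy mass matrix
K = np.array([[6.0, -2.0], [-2.0, 2.0]])    # toy stiffness matrix
C = 0.02 * K                                # stiffness-proportional damping
F = np.array([1.0, 0.0])                    # unit load on the first DOF

omegas = np.linspace(0.1, 4.0, 400)
H = np.array([np.linalg.solve(K + 1j * w * C - w * w * M, F)
              for w in omegas])             # complex response per frequency

# peaks of |H| sit near the undamped natural frequencies of (M, K)
wn = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.inv(M) @ K).real))
print(wn)                                   # → [1. 2.]
```

For a stationary Gaussian wave load, the response spectrum then follows from |H(ω)|² times the load spectrum, which is the core of the stochastic analysis sketched in the abstract.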

  6. SU-E-T-37: A GPU-Based Pencil Beam Algorithm for Dose Calculations in Proton Radiation Therapy

    International Nuclear Information System (INIS)

    Kalantzis, G; Leventouri, T; Tachibana, H; Shang, C

    2015-01-01

    Purpose: Recent developments in radiation therapy have been focused on applications of charged particles, especially protons. Over the years several dose calculation methods have been proposed in proton therapy. A common characteristic of all these methods is their extensive computational burden. In the current study we present for the first time, to the best of our knowledge, a GPU-based PBA for proton dose calculations in Matlab. Methods: In the current study we employed an analytical expression for the proton depth-dose distribution. The central-axis term is taken from the broad-beam central-axis depth dose in water modified by an inverse square correction, while the distribution of the off-axis term was considered Gaussian. The serial code was implemented in MATLAB and was launched on a desktop with a quad core Intel Xeon X5550 at 2.67GHz with 8 GB of RAM. For the parallelization on the GPU, the parallel computing toolbox was employed and the code was launched on a GTX 770 with Kepler architecture. The performance comparison was established on the speedup factors. Results: The performance of the GPU code was evaluated for three different energies: low (50 MeV), medium (100 MeV) and high (150 MeV). Four square fields were selected for each energy, and the dose calculations were performed with both the serial and parallel codes for a homogeneous water phantom with size 300×300×300 mm³. The resolution of the PBs was set to 1.0 mm. The maximum speedup of ∼127 was achieved for the highest energy and the largest field size. Conclusion: A GPU-based PB algorithm for proton dose calculations in Matlab was presented. A maximum speedup of ∼127 was achieved. Future directions of the current work include extension of our method for dose calculation in heterogeneous phantoms
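The dose model described (a broad-beam central-axis depth-dose term with an inverse-square correction, multiplied by a Gaussian off-axis term) can be sketched directly. Every shape parameter below is invented for illustration, not fitted to measured data:

```python
import numpy as np

def pencil_beam_dose(z, r, ssd=1000.0):
    """Toy dose model: central-axis depth dose with an inverse-square
    correction, times a Gaussian off-axis term. Depth z and off-axis
    distance r in mm; all shape parameters are invented."""
    dd = 1.0 + 3.0 * np.exp(-((z - 150.0) / 15.0) ** 2)   # pseudo Bragg peak
    invsq = (ssd / (ssd + z)) ** 2                        # beam divergence
    sigma = 2.0 + 0.02 * z                                # lateral spreading
    return dd * invsq * np.exp(-r ** 2 / (2.0 * sigma ** 2))

z, r = np.meshgrid(np.linspace(0.0, 200.0, 201),
                   np.linspace(0.0, 20.0, 41), indexing="ij")
dose = pencil_beam_dose(z, r)
peak_depth = z[np.unravel_index(dose.argmax(), dose.shape)]
print(peak_depth)    # → 150.0, near the pseudo Bragg peak
```

Because each pencil beam is an independent analytical evaluation over a grid, the calculation is embarrassingly parallel, which is what makes the GPU speedups reported above possible.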

  7. A fast dose calculation method based on table lookup for IMRT optimization

    International Nuclear Information System (INIS)

    Wu Qiuwen; Djajaputra, David; Lauterbach, Marc; Wu Yan; Mohan, Radhe

    2003-01-01

    This note describes a fast dose calculation method that can be used to speed up the optimization process in intensity-modulated radiotherapy (IMRT). Most iterative optimization algorithms in IMRT require a large number of dose calculations to achieve convergence and therefore the total amount of time needed for the IMRT planning can be substantially reduced by using a faster dose calculation method. The method that is described in this note relies on an accurate dose calculation engine that is used to calculate an approximate dose kernel for each beam used in the treatment plan. Once the kernel is computed and saved, subsequent dose calculations can be done rapidly by looking up this kernel. Inaccuracies due to the approximate nature of the kernel in this method can be reduced by performing scheduled kernel updates. This fast dose calculation method can be performed more than two orders of magnitude faster than the typical superposition/convolution methods and therefore is suitable for applications in which speed is critical, e.g., in an IMRT optimization that requires a simulated annealing optimization algorithm or in a practical IMRT beam-angle optimization system. (note)
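The caching idea (run the accurate engine once per unit beamlet to build a kernel, then answer later dose queries by lookup) can be sketched as follows. Here `accurate_engine` is a hypothetical stand-in for the slow superposition/convolution calculation:

```python
import numpy as np

def accurate_engine(beamlet_weights):
    """Stand-in for a slow, accurate dose engine: dose = A @ w."""
    A = np.array([[0.9, 0.1, 0.0],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.1, 0.9]])     # invented beamlet-to-voxel kernel
    return A @ beamlet_weights

class CachedDose:
    """Compute the per-beamlet kernel once, then answer subsequent dose
    queries by lookup (a matrix-vector product)."""
    def __init__(self, n_beamlets):
        eye = np.eye(n_beamlets)
        # one accurate calculation per unit beamlet builds the kernel
        self.kernel = np.column_stack([accurate_engine(eye[:, j])
                                       for j in range(n_beamlets)])

    def dose(self, weights):
        return self.kernel @ weights

cached = CachedDose(3)
w = np.array([1.0, 2.0, 0.5])
print(np.allclose(cached.dose(w), accurate_engine(w)))   # → True
```

In this linear toy the lookup is exact; in the real method the cached kernel is approximate, which is why the note schedules periodic kernel updates during optimization.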

  8. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    International Nuclear Information System (INIS)

    Su Xiaoxing; Wang Yuesheng

    2010-01-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.

  9. A postprocessing method based on chirp Z transform for FDTD calculation of point defect states in two-dimensional phononic crystals

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-09-01

    In this paper, a new postprocessing method for the finite difference time domain (FDTD) calculation of the point defect states in two-dimensional (2D) phononic crystals (PNCs) is developed based on the chirp Z transform (CZT), one of the frequency zooming techniques. The numerical results for the defect states in 2D solid/liquid PNCs with single or double point defects show that compared with the fast Fourier transform (FFT)-based postprocessing method, the method can improve the estimation accuracy of the eigenfrequencies of the point defect states significantly when the FDTD calculation is run with relatively few iterations; and furthermore it can yield the point defect bands without calculating all eigenfrequencies outside the band gaps. The efficiency and accuracy of the FDTD method can be improved significantly with this new postprocessing method.
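The frequency-zooming idea behind the CZT can be illustrated by evaluating the spectrum of a short time series on a dense grid inside a narrow band, which is the set of samples the CZT computes efficiently (recent SciPy versions expose this as `scipy.signal.czt`). The direct NumPy evaluation below is O(N·M) but dependency-free, and the signal parameters are invented:

```python
import numpy as np

fs, n = 1000.0, 256
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.4 * t)        # invented resonance at 123.4 Hz

# coarse FFT bins are fs/n ≈ 3.9 Hz apart
f_fft = np.fft.rfftfreq(n, 1 / fs)[np.argmax(np.abs(np.fft.rfft(x)))]

# zoom into 110-140 Hz with 0.1 Hz spacing: evaluate the spectrum there
zoom = np.arange(110.0, 140.0, 0.1)
dtft = np.exp(-2j * np.pi * np.outer(zoom, t)) @ x
f_zoom = zoom[np.argmax(np.abs(dtft))]
print(f_fft, round(f_zoom, 1))           # zoomed estimate is much closer
```

The analogous gain is what the paper exploits: with few FDTD iterations the FFT bins are too coarse to locate the defect-state eigenfrequencies, while zoomed evaluation inside the band gap resolves them.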

  10. Monte Carlo based electron treatment planning and cutout output factor calculations

    Science.gov (United States)

    Mitrou, Ellis

    Electron radiotherapy (RT) offers a number of advantages over photons. The high surface dose, combined with a rapid dose fall-off beyond the target volume, presents a net increase in tumor control probability and decreases normal tissue complications for superficial tumors. Electron treatments are normally delivered clinically without previously calculated dose distributions due to the complexity of the electron transport involved and the greater error in planning accuracy. This research uses Monte Carlo (MC) methods to model clinical electron beams in order to accurately calculate electron beam dose distributions in patients as well as calculate cutout output factors, reducing the need for a clinical measurement. The present work is incorporated into a research MC calculation system: the McGill Monte Carlo Treatment Planning (MMCTP) system. Measurements of PDDs, profiles and output factors, in addition to 2D GAFCHROMIC™ EBT2 film measurements in heterogeneous phantoms, were obtained to commission the electron beam model. The use of MC for electron TP will provide more accurate treatments and yield greater knowledge of the electron dose distribution within the patient. The calculation of output factors could provide a clinical time saving of up to 1 hour per patient.

  11. Mode Calculation and Testing of a Car Body in White

    Directory of Open Access Journals (Sweden)

    Ying Yang

    2011-01-01

    Full Text Available The dynamic parameters of a car body in white (BIW) are important during the development of a new car. Based on the finite element method, a model of the BIW is developed in which the welding points are treated specially as a new element type, and its vibration modes are calculated. In modal testing, a fixed sine-sweeping exciter is used to apply a single-point input force to the structure, whereas the output responses are picked up at different points to identify the modes. The obtained modes coincide with both the FE results and the practical testing.
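The mode calculation itself reduces to the generalized eigenproblem K·φ = ω²·M·φ on the assembled FE matrices. A minimal sketch on a 3-DOF spring-mass chain standing in for the BIW model (the matrices are invented):

```python
import numpy as np
from scipy.linalg import eigh

# 3-DOF spring-mass chain as a stand-in for the assembled FE model
M = np.eye(3)
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

w2, modes = eigh(K, M)               # solves K @ phi = w**2 * M @ phi
freqs = np.sqrt(w2)                  # natural angular frequencies (rad/s)
print(np.round(freqs, 3))            # → [0.765 1.414 1.848]
```

For this chain the exact eigenvalues are 2 − √2, 2 and 2 + √2, so the numerical result can be verified directly; a real BIW model only differs in scale and in the special weld-point elements.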

  12. Hybrid Electric Vehicle Control Strategy Based on Power Loss Calculations

    OpenAIRE

    Boyd, Steven J

    2006-01-01

    Defining an operation strategy for a Split Parallel Architecture (SPA) Hybrid Electric Vehicle (HEV) is accomplished through calculating powertrain component losses. The results of these calculations define how the vehicle can decrease fuel consumption while maintaining low vehicle emissions. For a HEV, simply operating the vehicle's engine in its regions of high efficiency does not guarantee the most efficient vehicle operation. The results presented are meant only to define a literal str...

  13. Microscopic calculation of friction coefficients for use in heavy-ion reaction

    International Nuclear Information System (INIS)

    Iwamoto, A.; Harada, K.; Yoshida, S.

    1981-01-01

    A microscopic calculation has been done of the friction coefficient for use in deep-inelastic collisions of heavy nuclei. We adopted the formalism of linear response theory as a basis and used the adiabatic basis of the two-center shell model. Several reaction channels in systems with total mass numbers of 236 and 260 were investigated. The friction coefficients for the radial and deforming motions, including the coupling term, were calculated as functions of the distance between the two nuclei and of their deformation for each channel. The general features of the friction coefficient, its strength and form factor, were clarified in this model, and comparisons with the results of other models were made. It was found that our model gives a physically plausible value for the friction coefficient as a whole. (orig.)

  14. Electromagnetic response in kinetic energy driven cuprate superconductors: Linear response approach

    International Nuclear Information System (INIS)

    Krzyzosiak, Mateusz; Huang, Zheyu; Feng, Shiping; Gonczarek, Ryszard

    2010-01-01

    Within the framework of the kinetic energy driven superconductivity, the electromagnetic response in cuprate superconductors is studied in the linear response approach. The kernel of the response function is evaluated and employed to calculate the local magnetic field profile, the magnetic field penetration depth, and the superfluid density, based on the specular reflection model for a purely transverse vector potential. It is shown that the low temperature magnetic field profile follows an exponential decay at the surface, while the magnetic field penetration depth depends linearly on temperature, except for the strong deviation from the linear characteristics at extremely low temperatures. The superfluid density is found to decrease linearly with decreasing doping concentration in the underdoped regime. The problem of gauge invariance is addressed and an approximation for the dressed current vertex, which does not violate local charge conservation is proposed and discussed.

  15. Dementia caregivers' responses to 2 Internet-based intervention programs.

    Science.gov (United States)

    Marziali, Elsa; Garcia, Linda J

    2011-02-01

    The aim of this study was to examine the impact of 2 Internet-based intervention programs on dementia caregivers' experienced stress and health status. Ninety-one dementia caregivers were given the choice of joining either an Internet-based chat support group or an Internet-based video conferencing support group. Pre-post outcome measures focused on distress, health status, social support, and service utilization. In contrast to the Chat Group, the Video Group showed significantly greater improvement in mental health status. Also, for the Video Group, improvements in self-efficacy, neuroticism, and social support were associated with a lower stress response to coping with the care recipient's cognitive impairment and decline in function. The results show that, of the 2 Internet-based intervention programs for dementia caregivers, the video conferencing program was more effective in improving mental health status, and improvements in personal characteristics were associated with a lower caregiver stress response.

  16. Selection of logging-based TOC calculation methods for shale reservoirs: A case study of the Jiaoshiba shale gas field in the Sichuan Basin

    Directory of Open Access Journals (Sweden)

    Renchun Huang

    2015-03-01

    Full Text Available Various methods are available for calculating the TOC of shale reservoirs from logging data, and each has its own applicability and accuracy, so it is especially important to establish a regional experimental calculation model based on a thorough analysis of their applicability. With the Upper Ordovician Wufeng Fm-Lower Silurian Longmaxi Fm shale reservoirs as an example, TOC calculation models were built using the improved ΔlgR, bulk density, natural gamma spectroscopy, multi-fitting and volume model methods respectively, taking into account previous research results and the geologic features of the area. These models were compared against the core data. Finally, the bulk density method was selected as the regional experimental calculation model. Field practice demonstrated that the improved ΔlgR and natural gamma spectroscopy methods are poor in accuracy; although the multi-fitting and bulk density methods both have relatively high accuracy, the bulk density method is simpler and more widely applicable. To further verify its applicability, the bulk density method was applied to calculate the TOC of shale reservoirs in several key wells in the Jiaoshiba shale gas field, Sichuan Basin, and its accuracy was checked against measured data from core samples, showing that the coincidence rate of logging-based TOC calculation is up to 90.5%–91.0%.
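    The bulk-density method described above can be sketched as a simple calibration problem: TOC is assumed to vary linearly with bulk density, with coefficients fitted against core-measured TOC. The sketch below illustrates the idea only; the sample data and coefficients are invented, not the paper's regional model.

    ```python
    # Illustrative sketch of a bulk-density TOC model (hypothetical data,
    # not the Jiaoshiba calibration): TOC = a * rho_b + b, fitted by
    # ordinary least squares against core measurements.

    def fit_linear(x, y):
        """Ordinary least-squares fit y = a*x + b (pure Python)."""
        n = len(x)
        mx = sum(x) / n
        my = sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        a = sxy / sxx
        b = my - a * mx
        return a, b

    # Hypothetical core calibration: bulk density (g/cm3) vs. measured TOC (%).
    rho = [2.70, 2.62, 2.55, 2.48, 2.40]
    toc = [0.5, 1.4, 2.2, 3.0, 3.9]

    a, b = fit_linear(rho, toc)

    def toc_from_density(rho_b):
        """Log-derived TOC estimate; organic-rich rock is less dense, so a < 0."""
        return a * rho_b + b
    ```

    In practice a coincidence rate such as the 90.5%–91.0% quoted above would then be assessed by comparing these log-derived TOC values with core-sample measurements well by well.
    
    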

  17. Monte Carlo perturbation theory in neutron transport calculations

    International Nuclear Information System (INIS)

    Hall, M.C.G.

    1980-01-01

    The need to obtain sensitivities in complicated geometrical configurations has resulted in the development of Monte Carlo sensitivity estimation. A new method has been developed to calculate energy-dependent sensitivities of any number of responses in a single Monte Carlo calculation with a very small time penalty. This estimation typically increases the tracking time per source particle by about 30%. The method of estimation is explained. Sensitivities obtained are compared with those calculated by discrete ordinates methods. Further theoretical developments, such as second-order perturbation theory and application to k/sub eff/ calculations, are discussed. The application of the method to uncertainty analysis and to the analysis of benchmark experiments is illustrated. 5 figures
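    The idea of obtaining a response and its sensitivity from a single Monte Carlo run can be illustrated with a deliberately simple likelihood-ratio (differential-sampling) toy problem, not the estimator of the paper: the transmission of uncollided particles through a slab, where each transmitting history also scores the derivative of its log-likelihood with respect to the cross section.

    ```python
    # Toy single-run sensitivity estimation (likelihood-ratio method).
    # Response: uncollided transmission T = exp(-sigma*t) through a slab of
    # thickness t. A transmitting history has likelihood exp(-sigma*t), so
    # d ln p / d sigma = -t, and scoring it alongside T gives dT/dsigma
    # from the same histories, with negligible extra cost.

    import math
    import random

    def transmission_and_sensitivity(sigma, t, n, seed=1):
        rng = random.Random(seed)
        score_t = 0.0
        score_dt = 0.0
        for _ in range(n):
            # Sample the free path; the history transmits if it exceeds t.
            path = -math.log(1.0 - rng.random()) / sigma
            if path > t:
                score_t += 1.0
                score_dt += -t  # d ln(exp(-sigma*t)) / d sigma
        return score_t / n, score_dt / n

    sigma, t = 1.0, 2.0
    T, dT = transmission_and_sensitivity(sigma, t, 200000)
    # Analytic values for comparison: T = e^-2, dT/dsigma = -t * e^-2.
    ```

    Real implementations score such derivatives for every collision and track-length event, which is why the tracking-time penalty quoted above stays small.
    
    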

  18. Parallel computational in nuclear group constant calculation

    International Nuclear Information System (INIS)

    Su'ud, Zaki; Rustandi, Yaddi K.; Kurniadi, Rizal

    2002-01-01

    In this paper, a parallel computational method for nuclear group constant calculation using the collision probability method is discussed. The main focus is the calculation of the collision matrix, which requires a large amount of computational time. The geometry treated here is a set of concentric cylinders. The collision probability matrix is calculated semi-analytically using Bickley-Naylor functions. To accelerate the computation, several computers were used in parallel. Under Linux, we used PVM-based parallelization with C or Fortran; under Windows, we used socket programming with Delphi or C++ Builder. The results show the importance of assigning an optimal weight to each processor when processors of different speeds are used.
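    The work-distribution pattern described above, rows of the collision matrix farmed out to workers, can be sketched in a few lines. This is a stand-in for the PVM/socket parallelization of the paper, and the "collision kernel" below is an invented placeholder, not the Bickley-Naylor integrand.

    ```python
    # Minimal sketch of distributing collision-matrix rows over workers.
    # The kernel here is a toy symmetric function, purely illustrative.

    import math
    from concurrent.futures import ThreadPoolExecutor

    N = 8  # number of concentric regions (illustrative)

    def kernel(i, j):
        # Placeholder for the semi-analytic collision-probability integrand.
        return math.exp(-abs(i - j) / 2.0)

    def compute_row(i):
        return [kernel(i, j) for j in range(N)]

    # Each worker computes whole rows; map preserves row order.
    with ThreadPoolExecutor(max_workers=4) as pool:
        matrix = list(pool.map(compute_row, range(N)))
    ```

    On a heterogeneous cluster the rows would instead be assigned in proportion to each processor's speed, which is the "optimal weight" effect the abstract notes.
    
    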

  19. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    Energy Technology Data Exchange (ETDEWEB)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H., E-mail: mbellezzo@gmail.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Monte Carlo photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of homogeneous water, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were performed for a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at a voxel boundary is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)
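    The Woodcock (delta-tracking) method mentioned above can be sketched in one dimension: the photon is stepped with a majorant cross section, and at each tentative collision site a rejection test with the local cross section decides whether the collision is real or "virtual", so voxel boundaries never have to be computed explicitly. This is a hedged toy illustration, not the CUBMC implementation.

    ```python
    # Woodcock (delta) tracking sketch in a 1-D two-material phantom.
    # mu_max is the majorant attenuation coefficient; collisions sampled
    # with it are accepted with probability mu(x)/mu_max, which reproduces
    # the correct free-path distribution without boundary crossings.

    import math
    import random

    def woodcock_track(mu_of_x, mu_max, x0, x_end, rng):
        """Return the position of the first real collision,
        or None if the photon escapes past x_end."""
        x = x0
        while True:
            x += -math.log(1.0 - rng.random()) / mu_max  # step with majorant
            if x >= x_end:
                return None                              # escaped the phantom
            if rng.random() < mu_of_x(x) / mu_max:
                return x                                 # real collision

    # Toy slab: a "water-like" region for x < 5, a "lung-like" one beyond
    # (invented coefficients, units of 1/cm).
    mu = lambda x: 0.5 if x < 5.0 else 0.05
    rng = random.Random(42)
    hits = [woodcock_track(mu, 0.5, 0.0, 10.0, rng) for _ in range(50000)]
    collided = [h for h in hits if h is not None]
    ```

    The fraction of histories colliding in the first region converges to 1 - exp(-0.5*5), exactly as analog tracking would give, which is the point of the method.
    
    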

  20. A GPU-based Monte Carlo dose calculation code for photon transport in a voxel phantom

    International Nuclear Information System (INIS)

    Bellezzo, M.; Do Nascimento, E.; Yoriyaz, H.

    2014-08-01

    As the most accurate method for estimating absorbed dose in radiotherapy, the Monte Carlo method has been widely used in radiotherapy treatment planning. Nevertheless, its efficiency can be improved for clinical routine applications. In this paper, we present the CUBMC code, a GPU-based Monte Carlo photon transport algorithm for dose calculation under the Compute Unified Device Architecture platform. The simulation of physical events is based on the algorithm used in Penelope, and the cross section table used is the one generated by the Material routine, also present in the Penelope code. Photons are transported in voxel-based geometries with different compositions. To demonstrate the capabilities of the algorithm developed in the present work, four 128 x 128 x 128 voxel phantoms have been considered: one composed of homogeneous water, the second of bone, the third of lung, and the fourth of a heterogeneous bone-and-vacuum geometry. Simulations were performed for a 6 MeV monoenergetic photon point source. Two distinct approaches were used for transport simulation. The first forces the photon to stop at every voxel boundary; the second is the Woodcock method, in which a stop at a voxel boundary is considered only when the material changes along the photon's travel line. Dose calculations using these methods are compared for validation with the Penelope and MCNP5 codes. Speed-up factors are compared using an NVidia GTX 560-Ti GPU card against a 2.27 GHz Intel Xeon CPU processor. (Author)

  1. Coil protection calculator for TFTR

    International Nuclear Information System (INIS)

    Marsala, R.J.; Lawson, J.E.; Persing, R.G.; Senko, T.R.; Woolley, R.D.

    1989-01-01

    A new coil protection system (CPS) is being developed to replace the existing TFTR magnetic coil fault detector. The existing fault detector sacrifices TFTR operating capability for simplicity. The new CPS, when installed in October of 1988, will permit operation up to the actual coil stress limits by computing the relevant parameters in real time. The computation will be done in a microprocessor-based Coil Protection Calculator (CPC) currently under construction at PPL. The new CPC will allow TFTR to operate with higher plasma currents and will permit the optimization of pulse repetition rates. The CPC will provide real-time estimates of critical coil and bus temperatures and stresses based on real-time redundant measurements of coil currents, coil cooling water inlet temperature, and plasma current. The critical parameter calculations are compared with prespecified limits; if these limits are reached or exceeded, protective action is initiated through a hard-wired control system (HCS), which will shut down the power supplies. The CPC consists of a redundant VME-based microprocessor system which will sample all input data and compute all stress quantities every 10 ms. Thermal calculations will be approximated every 10 ms, with an exact solution computed every second. The CPC features continuous cross-checking of redundant input signals, automatic detection of internal failure modes, monitoring and recording of calculated results, and a quick functional verification of performance via an internal test system. (author)
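    The flavor of the 10 ms thermal approximation described above can be conveyed with a toy adiabatic update: each cycle adds the resistive heat deposited during the step to a lumped thermal mass. All constants below are invented, and the real CPC also accounts for cooling-water heat removal and performs an exact solution once per second.

    ```python
    # Toy 10 ms adiabatic temperature update for a coil conductor
    # (illustrative only; resistance and heat capacity are hypothetical).

    def update_temperature(T, current, dt=0.01,
                           resistance=1.0e-3,    # ohm (hypothetical)
                           heat_capacity=5.0e3): # J/K (hypothetical)
        """One explicit step: dT = I^2 * R * dt / (m*c)."""
        return T + current**2 * resistance * dt / heat_capacity

    T = 20.0  # deg C, taken equal to the inlet water temperature
    for _ in range(1000):              # 10 s of operation at 10 ms steps
        T = update_temperature(T, current=5.0e3)

    LIMIT = 120.0                      # hypothetical trip threshold
    trip = T >= LIMIT                  # compare against prespecified limit
    ```

    With these invented numbers the coil warms by 0.05 K per step, ending 50 K above the inlet temperature and well below the trip threshold.
    
    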

  2. Calculations of atomic magnetic nuclear shielding constants based on the two-component normalized elimination of the small component method

    Science.gov (United States)

    Yoshizawa, Terutaka; Zou, Wenli; Cremer, Dieter

    2017-04-01

    A new method for calculating nuclear magnetic resonance shielding constants of relativistic atoms based on the two-component (2c), spin-orbit coupling including Dirac-exact NESC (Normalized Elimination of the Small Component) approach is developed, in which each term of the diamagnetic and paramagnetic contributions to the isotropic shielding constant σiso is expressed in terms of analytical energy derivatives with respect to the magnetic field B and the nuclear magnetic moment μ. The picture change caused by renormalization of the wave function is correctly described. 2c-NESC/HF (Hartree-Fock) results for the σiso values of 13 atoms with a closed-shell ground state deviate from 4c-DHF (Dirac-HF) values by 0.01%-0.76%. Since the 2-electron part is effectively calculated using a modified screened nuclear shielding approach, the calculation is efficient and based on a series of matrix manipulations scaling as (2M)³ (M: number of basis functions).

  3. Antenna modeling considerations for accurate SAR calculations in human phantoms in close proximity to GSM cellular base station antennas.

    Science.gov (United States)

    van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C

    2005-09-01

    International bodies such as the International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute of Electrical and Electronics Engineers (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure, this is mostly applicable to occupational exposure scenarios in the very near field of these antennas, where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna to accurately represent the real-world scenario. This is accomplished by comparing numerical results with measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbing effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines. (c) 2005 Wiley-Liss, Inc.

  4. Absorbed fractions in a voxel-based phantom calculated with the MCNP-4B code.

    Science.gov (United States)

    Yoriyaz, H; dos Santos, A; Stabin, M G; Cabezas, R

    2000-07-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. MCNP-4B absorbed fractions for photons in the mathematical phantom of Snyder et al. agreed well with reference values. Results obtained through radiation transport simulation in the voxel-based phantom, in general, agreed well with reference values. Considerable discrepancies, however, were found in some cases due to two major causes: differences in the organ masses between the phantoms and the occurrence of organ overlap in the voxel-based phantom, which is not considered in the mathematical phantom.

  5. MVP/GMVP 2: general purpose Monte Carlo codes for neutron and photon transport calculations based on continuous energy and multigroup methods

    International Nuclear Information System (INIS)

    Nagaya, Yasunobu; Okumura, Keisuke; Mori, Takamasa; Nakagawa, Masayuki

    2005-06-01

    In order to realize fast and accurate Monte Carlo simulation of neutron and photon transport problems, two vectorized Monte Carlo codes MVP and GMVP have been developed at JAERI. MVP is based on the continuous energy model and GMVP is on the multigroup model. Compared with conventional scalar codes, these codes achieve higher computation speed by a factor of 10 or more on vector super-computers. Both codes have sufficient functions for production use by adopting accurate physics model, geometry description capability and variance reduction techniques. The first version of the codes was released in 1994. They have been extensively improved and new functions have been implemented. The major improvements and new functions are (1) capability to treat the scattering model expressed with File 6 of the ENDF-6 format, (2) time-dependent tallies, (3) reaction rate calculation with the pointwise response function, (4) flexible source specification, (5) continuous-energy calculation at arbitrary temperatures, (6) estimation of real variances in eigenvalue problems, (7) point detector and surface crossing estimators, (8) statistical geometry model, (9) function of reactor noise analysis (simulation of the Feynman-α experiment), (10) arbitrary shaped lattice boundary, (11) periodic boundary condition, (12) parallelization with standard libraries (MPI, PVM), (13) supporting many platforms, etc. This report describes the physical model, geometry description method used in the codes, new functions and how to use them. (author)

  6. A cultural study of a science classroom and graphing calculator-based technology

    Science.gov (United States)

    Casey, Dennis Alan

    Social, political, and technological events of the past two decades have had considerable bearing on science education. While sociological studies of scientists at work have seriously questioned traditional histories of science, national and state educational systemic reform initiatives have been enacted, stressing standards and accountability. Recently, powerful instructional technologies have become part of the landscape of the classroom. One example, graphing calculator-based technology, has found its way from commercial and domestic applications into the pedagogy of science and math education. The purpose of this study was to investigate the culture of an "alternative" science classroom and how it functions with graphing calculator-based technology. Using ethnographic methods, a case study of one secondary, team-taught, Environmental/Physical Science (EPS) classroom was conducted. Nearly half of the 23 students were identified as students with special education needs. Over a four-month period, field data was gathered from written observations, videotaped interactions, audio taped interviews, and document analyses to determine how technology was used and what meaning it had for the participants. Analysis indicated that the technology helped to keep students from getting frustrated with handling data and graphs. In a relatively short period of time, students were able to gather data, produce graphs, and to use inscriptions in meaningful classroom discussions. In addition, teachers used the technology as a means to involve and motivate students to want to learn science. By employing pedagogical skills and by utilizing a technology that might not otherwise be readily available to these students, an environment of appreciation, trust, and respect was fostered. Further, the use of technology by these teachers served to expand students' social capital---the benefits that come from an individual's social contacts, social skills, and social resources.

  7. Calculating Quenching Weights

    CERN Document Server

    Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim

    2003-01-01

    We calculate the probability (``quenching weight'') that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov ``BDMPS-Z''-formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus collisions.

  8. Multi-Agent System-Based Microgrid Operation Strategy for Demand Response

    Directory of Open Access Journals (Sweden)

    Hee-Jun Cha

    2015-12-01

    Full Text Available The microgrid and demand response (DR) are important technologies for future power grids. Among the variety of microgrid operation schemes, the multi-agent system (MAS) has attracted considerable attention. In a microgrid with a MAS, agents installed on the microgrid components operate optimally by communicating with each other. This paper proposes an operation algorithm for the individual agents of a test microgrid that consists of a battery energy storage system (BESS) and an intelligent load. A microgrid central controller manages the microgrid and can exchange information with each agent. The BESS agent performs scheduling for maximum benefit in response to the electricity price and the BESS state of charge (SOC) through a fuzzy system. The intelligent load agent assumes an industrial load that schedules for maximum benefit by calculating the hourly production cost. The agent operation algorithm includes a scheduling algorithm using day-ahead pricing in the DR program and a real-time operation algorithm for emergency situations using emergency demand response (EDR). The proposed algorithm and operation strategy were validated both by a hardware-in-the-loop simulation test using OPAL-RT and by an actual hardware test on a new distribution simulator.

  9. Groebner bases in perturbative calculations

    Energy Technology Data Exchange (ETDEWEB)

    Gerdt, Vladimir P. [Laboratory of Information Technologies, Joint Institute for Nuclear Research, 141980 Dubna (Russian Federation)

    2004-10-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations.
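    As a standard illustration of the kind of recurrence relations such Groebner-basis reductions operate on (a textbook example, not taken from the paper), consider the one-loop massive tadpole $I(a) = \int d^dk\,(k^2+m^2)^{-a}$; integration by parts in $k$ gives:

    ```latex
    0 = \int d^{d}k\;\frac{\partial}{\partial k^{\mu}}
        \left[\frac{k^{\mu}}{(k^{2}+m^{2})^{a}}\right]
      = (d-2a)\,I(a) + 2a\,m^{2}\,I(a+1)
    \quad\Longrightarrow\quad
    I(a+1) = \frac{2a-d}{2a\,m^{2}}\,I(a)
    ```

    so every $I(a)$ with integer $a>1$ reduces to the single basic integral $I(1)$; in multi-loop problems the analogous relations couple many indices, which is where the Groebner-basis machinery becomes necessary.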

  10. Groebner bases in perturbative calculations

    International Nuclear Information System (INIS)

    Gerdt, Vladimir P.

    2004-01-01

    In this paper we outline the most general and universal algorithmic approach to reduction of loop integrals to basic integrals. The approach is based on computation of Groebner bases for recurrence relations derived from the integration by parts method. In doing so we consider generic recurrence relations when propagators have arbitrary integer powers treated as symbolic variables (indices) for the relations

  11. Applicability of coupled code RELAP5/GOTHIC to NPP Krsko MSLB calculation

    International Nuclear Information System (INIS)

    Keco, M.; Debrecin, N.; Grgic, D.

    2005-01-01

    The usual way to analyze a Main Steam Line Break (MSLB) accident in PWR plants is to calculate the core and containment responses in two separate calculations. In the first calculation, a system code is used to address the behaviour of the nuclear steam supply system, and the containment is modelled mainly as a boundary condition. In the second calculation, the mass and energy release data are used to perform the containment analysis. The coupled code R5G, realized by direct explicit coupling of the system code RELAP5/MOD3.3 and the containment code GOTHIC, is able to perform both calculations simultaneously. In this paper, R5G is applied to the calculation of an MSLB accident in the large dry containment of NPP Krsko. The standard separate calculation is performed first, and then both the core and containment responses are compared against the corresponding coupled code results. Two versions of the GOTHIC code are used, the old version 3.4e and the latest version 7.2. As expected, the differences between the standard procedure and the coupled calculations are small. The performed analyses showed that the classical uncoupled approach is applicable in the case of large dry containment calculations, but that the new approach can bring additional insight into the understanding of the transient and that it can be used as a simple and reliable procedure for performing MSLB calculations without any significant computational overhead. (author)

  12. Determination of structural fluctuations of proteins from structure-based calculations of residual dipolar couplings

    International Nuclear Information System (INIS)

    Montalvao, Rinaldo W.; De Simone, Alfonso; Vendruscolo, Michele

    2012-01-01

    Residual dipolar couplings (RDCs) have the potential of providing detailed information about the conformational fluctuations of proteins. It is very challenging, however, to extract such information because of the complex relationship between RDCs and protein structures. A promising approach to decode this relationship involves structure-based calculations of the alignment tensors of protein conformations. By implementing this strategy to generate structural restraints in molecular dynamics simulations we show that it is possible to extract effectively the information provided by RDCs about the conformational fluctuations in the native states of proteins. The approach that we present can be used in a wide range of alignment media, including Pf1, charged bicelles and gels. The accuracy of the method is demonstrated by the analysis of the Q factors for RDCs not used as restraints in the calculations, which are significantly lower than those corresponding to existing high-resolution structures and structural ensembles, hence showing that we capture effectively the contributions to RDCs from conformational fluctuations.
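    For reference, the Q factor used above to compare calculated and experimental RDCs is commonly defined as follows (this is one standard convention; the paper may use a closely related variant):

    ```latex
    Q = \sqrt{\frac{\sum_i \bigl(D_i^{\mathrm{calc}} - D_i^{\mathrm{exp}}\bigr)^{2}}
                   {\sum_i \bigl(D_i^{\mathrm{exp}}\bigr)^{2}}}
    ```

    Lower Q means the back-calculated couplings $D_i^{\mathrm{calc}}$ reproduce the measured ones more closely, which is why cross-validated Q values for RDCs withheld from the restraints are the natural accuracy test.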

  13. Penerapan Corporate Social Responsibility dengan Konsep Community Based Tourism

    Directory of Open Access Journals (Sweden)

    Linda Suriany

    2013-12-01

    Full Text Available Abstract: Business is not only an economic institution but also a social institution. As a social institution, business has a responsibility to help society solve social problems. This responsibility is called Corporate Social Responsibility (CSR). CSR attends to social and environmental problems and thus supports sustainable development, complementing the government's role. Our government currently has a national development agenda, one item of which is the tourism sector (the Visit Indonesia Year 2008 program). The tourism sector, however, faces a challenge in human resources. Here, the role of business in practicing CSR is needed to help the tourism sector. Through CSR activities, which include training grounded in research, the capability of the local community to participate in tourism activities will increase. As the local community's capability grows, it can put the concept of community based tourism (CBT) into practice. In the future, this will strengthen Indonesia's ability to compete with other countries.

  14. The Biological Responses to Magnesium-Based Biodegradable Medical Devices

    Directory of Open Access Journals (Sweden)

    Lumei Liu

    2017-11-01

    Full Text Available The biocompatibility of magnesium-based materials (MBMs) is critical to the safety of biodegradable medical devices. For this promising class of metallic biomaterials, the issue of greatest concern is device safety, as degradation products may interact with local tissue throughout the course of complete degradation. The aim of this review is to summarize the biological responses to MBMs at the cellular/molecular level, including cell adhesion, transport signaling, immune response, and tissue growth during the complex degradation process. We review the influence of MBMs on gene/protein biosynthesis and expression at the site of implantation, as well as throughout the body. This paper provides a systematic review of the cellular/molecular behavior of local tissue in response to Mg degradation, which may facilitate better prediction of long-term degradation and the safe use of magnesium-based implants through metal innovation.

  15. JNC results of BN-600 benchmark calculation (phase 4)

    International Nuclear Information System (INIS)

    Ishikawa, Makoto

    2003-01-01

    The present work reports the results of JNC, Japan, for Phase 4 of the BN-600 core benchmark problem (Hex-Z fully MOX fuelled core model) organized by the IAEA. The benchmark specification is based on the RCM report of the IAEA CRP on 'Updated Codes and Methods to Reduce the Calculational Uncertainties of LMFR Reactivity Effects, Action 3.12' (calculations for the BN-600 fully fuelled MOX core for subsequent transient analyses). The JENDL-3.2 nuclear data library was used for calculating 70-group ABBN-type group constants. Two cell models were applied for the fuel assembly and control rod calculations: a homogeneous and a heterogeneous (cylindrical supercell) model. The basic diffusion calculation was a three-dimensional Hex-Z, 18-group model (CITATION code). Transport calculations were 18-group, three-dimensional (NSHEC code), based on the Sn-transport nodal method developed at JNC. The thermal power generated per fission was based on Sher's data corrected on the basis of the ENDF/B-IV data library. Calculation results are presented in tables for intercomparison.

  16. Numerical calculation of the Fresnel transform.

    Science.gov (United States)

    Kelly, Damien P

    2014-04-01

    In this paper, we address the problem of calculating Fresnel diffraction integrals using a finite number of uniformly spaced samples. General and simple sampling rules of thumb are derived that allow the user to calculate the distribution for any propagation distance. It is shown how these rules can be extended to fast-Fourier-transform-based algorithms to increase calculation efficiency. A comparison with other theoretical approaches is made.
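    The sampling rules discussed above apply to discretizations of the Fresnel diffraction integral; in its standard one-dimensional single-Fourier-transform form (a textbook expression, not reproduced from the paper) it reads:

    ```latex
    U(x,z) = \frac{e^{\,ikz}}{\sqrt{i\lambda z}}\;
             e^{\,i\pi x^{2}/\lambda z}
             \int u(\xi)\;
             e^{\,i\pi \xi^{2}/\lambda z}\;
             e^{-2\pi i x \xi/\lambda z}\, d\xi
    ```

    Once the chirped integrand $u(\xi)\,e^{\,i\pi\xi^{2}/\lambda z}$ is sampled on $N$ uniform points, the remaining integral is a discrete Fourier transform and can be evaluated with an FFT; the sampling rules bound the sample spacing so that the quadratic chirp phase stays adequately sampled at the chosen propagation distance $z$.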

  17. Semiclassical theory for the nuclear response function

    International Nuclear Information System (INIS)

    Stroth, U.

    1986-01-01

    In the first part of this thesis it was demonstrated how an RPA theory is developed on a semiclassical basis and applied to electron scattering. It was shown in which fields of nuclear physics this semiclassical theory can be applied and how it is to be understood. In this connection we devoted an extensive discussion to the Fermi gas model. From the free response function we calculated the RPA response with a finite-range residual interaction which we completely antisymmetrize. In the second part of this thesis we studied (e,e') data for the separated response functions with our theory. (orig./HSI) [de

  18. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

    A nodal method based on the matrix-response method is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  19. Development of 3-D FBR heterogeneous core calculation method based on characteristics method

    International Nuclear Information System (INIS)

    Takeda, Toshikazu; Maruyama, Manabu; Hamada, Yuzuru; Nishi, Hiroshi; Ishibashi, Junichi; Kitano, Akihiro

    2002-01-01

    A new 3-D transport calculation method taking into account the heterogeneity of fuel assemblies has been developed by combining the characteristics method and the nodal transport method. The nodal transport method is applied in the axial direction, while the characteristics method is applied to take into account the radial heterogeneity of the fuel assemblies. Numerical calculations have been performed to verify the method for 2-D radial calculations of FBR assemblies and for partial core calculations. The results are compared with reference Monte Carlo calculations, and good agreement has been achieved. It is shown that the present method has an advantage in calculating reaction rates in a small region

  20. Prenatal radiation exposure. Dose calculation

    International Nuclear Information System (INIS)

    Scharwaechter, C.; Schwartz, C.A.; Haage, P.; Roeser, A.

    2015-01-01

    The unborn child requires special protection. In this context, the indication for an X-ray examination must be reviewed critically. If irradiation of the lower abdomen, including the uterus, cannot be avoided, the examination should be postponed until the end of pregnancy or alternative examination techniques should be considered. Under certain circumstances, either accidentally or in unavoidable cases after a thorough risk assessment, radiation exposure of the unborn child may take place. In some of these cases an expert radiation hygiene consultation may be required. This consultation should convey the expected risks for the unborn child without unduly alarming the mother or the medical staff involved. For the risk assessment of in-utero X-ray exposure, deterministic damage with a defined threshold dose is distinguished from stochastic damage without a definable threshold dose. The occurrence of deterministic damage depends on the dose and the developmental stage of the unborn child at the time of irradiation. To calculate the risks of in-utero radiation exposure, a three-stage concept is commonly applied. Depending on the level of exposure, the radiation dose is either estimated, roughly calculated using standard tables or, in critical cases, accurately calculated on the basis of the individual event. The complexity of the calculation increases from stage to stage. An estimate based on stage one is easily feasible, whereas calculations based on stage two and especially stage three are more complex and often need to be carried out by specialists. This article presents in detail the risks for the unborn child according to developmental phase and explains the three-stage concept as an evaluation scheme. It should be noted that all risk estimates are subject to considerable uncertainty.

  1. MARIOLA: A model for calculating the response of mediterranean bush ecosystem to climatic variations

    Energy Technology Data Exchange (ETDEWEB)

    Uso-Domenech, J.L.; Ramo, M.P. [Department of Mathematics, Campus de Penyeta Roja, University Jaume I, Castellon (Spain); Villacampa-Esteve, Y. [Department of Analysis and Applied Mathematics, University of Alicante (Spain); Stuebing-Martinez, G. [Department of Botany, University of Valencia (Spain); Karjalainen, T. [Faculty of Forestry, University of Joensuu (Finland)

    1995-07-01

The paper summarizes a bush ecosystem model developed for assessing the effects of climatic change on the behaviour of Mediterranean bushes, assuming that temperature, humidity, and rainfall are the basic dimensions of the niche occupied by shrub species. In this context, changes in the monthly weather pattern serve only to outline the growth conditions, owing to the nonlinearity of the shrubs' response to climatic factors. The plant-soil-atmosphere system is described by ordinary nonlinear differential equations for the state variables: green biomass, woody biomass, the residues of green and woody biomass, faecal detritus of mammals on the soil, and the total organic matter of the soil. The behaviour of the flow variables is described by equations obtained from nonlinear multiple regressions on the state variables and the input variables. The model has been applied successfully to the behaviour of Cistus albidus in two zones of the Province of Alicante (Spain). The database for parameterization (zone 1) and validation (zone 2) is based on measurements taken weekly over a 2-year period. The model is used to simulate the response of this shrub to a decreasing trend in precipitation combined with a simultaneous rise in temperature. A period of 10 years is simulated, and it is observed that plants with woody biomass smaller than 85 g die between the first and the third month, while the biomass of the other plants decreases during this period and strongly thereafter.
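The abstract describes the model's structure (nonlinear ODEs for biomass state variables driven by climate) but not its actual equations. The toy Euler sketch below only illustrates the general form of such a state update; every coefficient and forcing term is an invented placeholder, not taken from the paper:

```python
# Illustrative MARIOLA-style state update; all numeric values are
# invented placeholders, not parameters from the paper.

def step(green, woody, temp, rain, dt=1.0):
    """One Euler step (dt in months) for green/woody biomass in grams."""
    # Nonlinear climate response: growth peaks near a favourable temperature.
    growth = 0.005 * rain / (1.0 + abs(temp - 18.0) / 10.0)
    g_dot = growth * green - 0.02 * green      # growth minus senescence
    w_dot = 0.01 * green - 0.005 * woody       # lignification minus decay
    return green + dt * g_dot, woody + dt * w_dot

g, w = 50.0, 100.0                             # initial biomasses (g)
for month in range(12):                        # one simulated year
    g, w = step(g, w, temp=20.0 + 0.5 * month, rain=40.0 - 2.5 * month)
```

In the paper, ten years are simulated under a drying, warming trend; a run like the one above would simply be extended and its forcing trends adjusted.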

  2. Action video games and improved attentional control: Disentangling selection- and response-based processes.

    Science.gov (United States)

    Chisholm, Joseph D; Kingstone, Alan

    2015-10-01

    Research has demonstrated that experience with action video games is associated with improvements in a host of cognitive tasks. Evidence from paradigms that assess aspects of attention has suggested that action video game players (AVGPs) possess greater control over the allocation of attentional resources than do non-video-game players (NVGPs). Using a compound search task that teased apart selection- and response-based processes (Duncan, 1985), we required participants to perform an oculomotor capture task in which they made saccades to a uniquely colored target (selection-based process) and then produced a manual directional response based on information within the target (response-based process). We replicated the finding that AVGPs are less susceptible to attentional distraction and, critically, revealed that AVGPs outperform NVGPs on both selection-based and response-based processes. These results not only are consistent with the improved-attentional-control account of AVGP benefits, but they suggest that the benefit of action video game playing extends across the full breadth of attention-mediated stimulus-response processes that impact human performance.

  3. Film based verification of calculation algorithms used for brachytherapy planning-getting ready for upcoming challenges of MBDCA

    Directory of Open Access Journals (Sweden)

    Grzegorz Zwierzchowski

    2016-08-01

Purpose: A well-known defect of the TG-43-based algorithms used in brachytherapy is the lack of information about interaction cross-sections, which are determined not only by electron density but also by atomic number. The TG-186 recommendations, which call for a model-based dose calculation algorithm (MBDCA), accurate tissue segmentation, and each structure's elemental composition, continue to pose difficulties in brachytherapy dosimetry. For the clinical use of the new algorithms, reliable and repeatable methods for verifying treatment planning systems (TPS) must be introduced. The aim of this study is the verification of the calculation algorithm used in the TPS for shielded vaginal applicators, as well as the development of verification procedures for current and future use, based on the film dosimetry method. Material and methods: Calibration data were collected by separately irradiating 14 sheets of Gafchromic® EBT film with doses from 0.25 Gy to 8.0 Gy using an HDR 192Ir source. Standard vaginal cylinders of three diameters were used in a water phantom. Measurements were performed without any shields and with three shield combinations. Gamma analyses were performed using the VeriSoft® package. Results: The calibration curve was determined to be a third-degree polynomial. For all cylinder diameters without shielding and for all shield combinations, gamma analysis showed that over 90% of the analyzed points met the gamma criteria (3%, 3 mm). Conclusions: Gamma analysis showed good agreement between the dose distributions calculated by the TPS and those measured with Gafchromic films, demonstrating the viability of film dosimetry in brachytherapy.
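As a sketch of the gamma criterion quoted in this record (3% dose difference, 3 mm distance-to-agreement), a minimal 1D global gamma index can be written as follows. The dose profiles are invented sample data, not the paper's measurements:

```python
import math

def gamma_1d(ref, eval_, spacing, dose_tol=0.03, dist_tol=3.0):
    """1D global gamma index.

    ref / eval_: dose samples on a common grid with `spacing` mm pitch.
    dose_tol: dose criterion as a fraction of the maximum reference dose.
    dist_tol: distance-to-agreement criterion in mm.
    """
    d_norm = dose_tol * max(ref)
    gammas = []
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(eval_):
            dist_mm = (i - j) * spacing
            g2 = (dist_mm / dist_tol) ** 2 + ((de - dr) / d_norm) ** 2
            best = min(best, g2)
        gammas.append(math.sqrt(best))
    return gammas

# Invented sample profiles (arbitrary dose units), 1 mm pitch.
ref = [0.0, 0.5, 1.0, 0.5, 0.0]
ev = [0.0, 0.52, 0.99, 0.49, 0.0]
passing = sum(g <= 1.0 for g in gamma_1d(ref, ev, spacing=1.0)) / len(ref)
```

A point passes when its gamma value is at most 1; the paper's "over 90% of points" figure is the fraction `passing` computed over the full 2D film plane rather than this toy 1D profile.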

  4. Emergency response strategies

    International Nuclear Information System (INIS)

    Carrilo, D.; Dias de la Cruz, F.

    1984-01-01

The present study estimates, on the basis of a release category (PWR4) and several previously defined accident scenarios, the efficacy of the emergency response obtained by applying different response strategies to each of these scenarios. The strategies studied comprise the following protective measures: evacuation, sheltering, and relocation. The radiological response was obtained with the CRAC2 (Calculation of Reactor Accident Consequences) code and is expressed in terms of absorbed dose equivalent (whole body and thyroid) as well as early and latent biological effects. (author)

  5. Fast, large-scale hologram calculation in wavelet domain

    Science.gov (United States)

    Shimobaba, Tomoyoshi; Matsushima, Kyoji; Takahashi, Takayuki; Nagahama, Yuki; Hasegawa, Satoki; Sano, Marie; Hirayama, Ryuji; Kakue, Takashi; Ito, Tomoyoshi

    2018-04-01

We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of 65,536 × 65,536 pixels and a pixel pitch of 1 μm. The hologram calculation time is approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.

  6. SU-C-204-03: DFT Calculations of the Stability of DOTA-Based-Radiopharmaceuticals

    Energy Technology Data Exchange (ETDEWEB)

    Khabibullin, A.R.; Woods, L.M. [University of South Florida, Tampa, Florida (United States); Karolak, A.; Budzevich, M.M.; Martinez, M.V. [H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States); McLaughlin, M.L.; Morse, D.L. [University of South Florida, Tampa, Florida (United States); H. Lee Moffitt Cancer Center and Research Institute, Tampa, Florida (United States)

    2016-06-15

Purpose: To apply density functional theory (DFT) to investigate the structural stability of complexes used in cancer therapy, consisting of 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelated to Ac225, Fr221, At217, Bi213, and Gd68 radionuclides. Methods: The possibility of delivering a toxic payload directly to tumor cells is a highly desirable aim in targeted alpha-particle therapy. Estimating the bond stability between the radioactive atoms and the DOTA chelating agent is the key to understanding the foundations of this delivery process. We therefore employed the Vienna Ab-initio Simulation Package (VASP) with the projector-augmented wave method and a plane-wave basis set to study the stability and electronic properties of the DOTA ligand chelated to radioactive isotopes. To account for relativistic effects in the radioactive isotopes, we included spin-orbit coupling (SOC) in the DFT calculations. Five DOTA complex structures were represented as unit cells, each containing 58 atoms. Energy optimization was performed for all structures prior to the calculation of electronic properties. Binding energies, electron localization functions, and bond lengths between atoms were estimated. Results: The calculated binding energies for the DOTA-radionuclide systems were −17.792, −5.784, −8.872, −13.305, and −18.467 eV for the Ac, Fr, At, Bi, and Gd complexes, respectively. The displacements of the isotopes in the DOTA cages were estimated from the variations in bond lengths, which were within 2.32-3.75 angstroms. A detailed representation of the chemical bonding in all complexes was obtained with the electron localization function (ELF). Conclusion: DOTA-Gd, DOTA-Ac, and DOTA-Bi were the most stable structures in the group. Inclusion of SOC played a significant role in improving the accuracy of the DFT calculations for heavy radioactive atoms. Our approach is found to be suitable for the investigation of structures with DOTA-based
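The binding energies quoted here are presumably differences of DFT total energies; a trivial sketch of that bookkeeping, with invented placeholder totals rather than the paper's VASP results:

```python
def binding_energy(e_complex, e_dota, e_ion):
    """E_b = E(DOTA-M) - E(DOTA) - E(M), all DFT total energies in eV."""
    return e_complex - e_dota - e_ion

# Placeholder totals, NOT the paper's VASP numbers; chosen only so that
# E_b lands near the reported DOTA-Ac value of about -17.8 eV.
e_b = binding_energy(e_complex=-1500.0, e_dota=-1470.0, e_ion=-12.2)
```

Under this sign convention, a more negative E_b means a more strongly bound complex, which is how the ordering DOTA-Gd, DOTA-Ac, DOTA-Bi as "most stable" follows from the reported values.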

  7. User interface tool based on the MCCM for the calculation of dpa distributions

    International Nuclear Information System (INIS)

    Pinnera, I.; Cruz, C.; Abreu, Y.; Leyva, A.

    2009-01-01

The Monte Carlo assisted Classical Method (MCCM) was introduced by the authors to calculate displacements-per-atom (dpa) distributions in solid materials, making use of the standard outputs of the MCNP simulation code system and the classical theories of electron elastic scattering. Based on this method, a new DLL with several user-interface functions was implemented. An application running on Windows systems was then developed to allow easy handling of the different functionalities included in it. In the present work this application is presented, and some examples of its successful use on different materials of interest are given. (Author)

  8. Soil-structure interaction - a general method to calculate soil impedance

    International Nuclear Information System (INIS)

    Farvacque, M.; Gantenbein, F.

    1983-01-01

A correct analysis of the seismic response of nuclear power plant buildings needs to take the soil-structure interaction into account. The most classical and simple method consists of characterizing the soil by a stiffness and a damping function for each component of the translation and rotation of the foundation. More exactly, an impedance as a function of frequency may be introduced. The literature provides data to estimate these coefficients for simple soil and foundation configurations under linear hypotheses. This paper presents a general method for calculating soil impedances, based on computing the impulse response of the soil using an axisymmetric 2D finite element code (INCA). The Fourier transform of this response is taken over the time interval before the return of the waves reflected at the boundaries of the F.E. domain. This procedure, which limits the perturbing effects of the reflections, is improved by introducing absorbing boundary elements. A parametric study for homogeneous and layered soils has been carried out using this method. (orig.)

  9. Measurement and communication of greenhouse gas emissions from U.S. food consumption via carbon calculators

    International Nuclear Information System (INIS)

    Kim, Brent; Neff, Roni

    2009-01-01

    Food consumption may account for upwards of 15% of U.S. per capita greenhouse gas emissions. Online carbon calculators can help consumers prioritize among dietary behaviors to minimize personal 'carbon footprints', leveraging against emissions-intensive industry practices. We reviewed the fitness of selected carbon calculators for measuring and communicating indirect GHG emissions from food consumption. Calculators were evaluated based on the scope of user behaviors accounted for, data sources, transparency of methods, consistency with prior data and effectiveness of communication. We found food consumption was under-represented (25%) among general environmental impact calculators (n = 83). We identified eight carbon calculators that accounted for food consumption and included U.S. users among the target audience. Among these, meat and dairy consumption was appropriately highlighted as the primary diet-related contributor to emissions. Opportunities exist to improve upon these tools, including: expanding the scope of behaviors included under calculations; improving communication, in part by emphasizing the ecological and public health co-benefits of less emissions-intensive diets; and adopting more robust, transparent methodologies, particularly where calculators produce questionable emissions estimates. Further, all calculators could benefit from more comprehensive data on the U.S. food system. These advancements may better equip these tools for effectively guiding audiences toward ecologically responsible dietary choices. (author)

  10. A photon source model based on particle transport in a parameterized accelerator structure for Monte Carlo dose calculations.

    Science.gov (United States)

    Ishizawa, Yoshiki; Dobashi, Suguru; Kadoya, Noriyuki; Ito, Kengo; Chiba, Takahito; Takayama, Yoshiki; Sato, Kiyokazu; Takeda, Ken

    2018-05-17

    An accurate source model of a medical linear accelerator is essential for Monte Carlo (MC) dose calculations. This study aims to propose an analytical photon source model based on particle transport in parameterized accelerator structures, focusing on a more realistic determination of linac photon spectra compared to existing approaches. We designed the primary and secondary photon sources based on the photons attenuated and scattered by a parameterized flattening filter. The primary photons were derived by attenuating bremsstrahlung photons based on the path length in the filter. Conversely, the secondary photons were derived from the decrement of the primary photons in the attenuation process. This design facilitates these sources to share the free parameters of the filter shape and be related to each other through the photon interaction in the filter. We introduced two other parameters of the primary photon source to describe the particle fluence in penumbral regions. All the parameters are optimized based on calculated dose curves in water using the pencil-beam-based algorithm. To verify the modeling accuracy, we compared the proposed model with the phase space data (PSD) of the Varian TrueBeam 6 and 15 MV accelerators in terms of the beam characteristics and the dose distributions. The EGS5 Monte Carlo code was used to calculate the dose distributions associated with the optimized model and reference PSD in a homogeneous water phantom and a heterogeneous lung phantom. We calculated the percentage of points passing 1D and 2D gamma analysis with 1%/1 mm criteria for the dose curves and lateral dose distributions, respectively. The optimized model accurately reproduced the spectral curves of the reference PSD both on- and off-axis. The depth dose and lateral dose profiles of the optimized model also showed good agreement with those of the reference PSD. The passing rates of the 1D gamma analysis with 1%/1 mm criteria between the model and PSD were 100% for 4

  11. Improvement of calculation method for temperature coefficient of HTTR by neutronics calculation code based on diffusion theory. Analysis for temperature coefficient by SRAC code system

    International Nuclear Information System (INIS)

    Goto, Minoru; Takamatsu, Kuniyoshi

    2007-03-01

The HTTR temperature coefficients required for the core dynamics calculations had previously been obtained from diffusion-code core calculations corrected with results from the Monte Carlo code MVP. This calculation method for the temperature coefficients was considered to have some issues to be improved. The method was therefore improved so that the temperature coefficients could be obtained without corrections from the Monte Carlo code. Specifically, the lattice model used for the temperature-coefficient calculations was revised from the point of view of the neutron spectrum obtained in the lattice calculations. The HTTR core calculations were performed with the diffusion code using group constants generated by lattice calculations with the improved lattice model. Both the core calculations and the lattice calculations were performed with the SRAC code system. The HTTR core dynamics calculation was then performed with the temperature coefficient obtained from the core calculation results. The core dynamics result showed good agreement with the experimental data, and a valid temperature coefficient could be calculated with the diffusion code alone, without Monte Carlo corrections. (author)
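The abstract does not state how the temperature coefficients are formed from the core calculation results; the usual textbook definition, a reactivity difference per unit temperature change, can be sketched as follows (the k_eff values are illustrative, not HTTR results):

```python
def temperature_coefficient(k1, k2, t1, t2):
    """Isothermal temperature coefficient of reactivity (per kelvin).

    rho = (k_eff - 1) / k_eff at each of two core temperatures; the
    coefficient is the reactivity difference over the temperature step.
    """
    rho1 = (k1 - 1.0) / k1
    rho2 = (k2 - 1.0) / k2
    return (rho2 - rho1) / (t2 - t1)

# Illustrative k_eff values only: a negative coefficient means reactivity
# falls as the core heats up, the self-stabilizing behaviour expected here.
alpha = temperature_coefficient(k1=1.0250, k2=1.0235, t1=400.0, t2=450.0)
```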

  12. The Fundamentals of a Business Model Based on Responsible Investments

    Directory of Open Access Journals (Sweden)

    Vadim Dumitrascu

    2016-03-01

The harmonization of profitability and social responsibility is possible when companies adopt and practice adequate business models. "Responsible profitability" must also benefit from management tools that guide the business step by step, based on objective decision-making criteria, toward sustainable economic behaviour. The simultaneous increase of the specific economic surplus value generated by a socially responsible investment (SRI) project and of the responsible intensity of economic employment reflects the company's firm commitment to an authentically sustainable development path.

  13. Actinide-lanthanide separation by bipyridyl-based ligands. DFT calculations and experimental results

    International Nuclear Information System (INIS)

    Borisova, Nataliya E.; Eroshkina, Elizaveta A.; Korotkov, Leonid A.; Ustynyuk, Yuri A.; Alyapyshev, Mikhail Yu.; Eliseev, Ivan I.; Babain, Vasily A.

    2011-01-01

To gain insight into the effect of substituents on the selectivity of Am/Eu separation, syntheses and extraction tests were undertaken on a series of bipyridyl-based ligands (amides of 2,2'-bipyridyl-6,6'-dicarboxylic acid: L Ph - N,N'-diethyl-N,N'-diphenyl amide; L Bu2 - tetrabutyl amide; L Oct2 - tetraoctyl amide; L 3FPh - N,N'-diethyl-N,N'-bis-(3-fluorophenyl) amide; as well as the N,N'-diethyl-N,N'-diphenyl amides of 4,4'-dibromo-2,2'-bipyridyl-6,6'-dicarboxylic acid and of 4,4'-dinitro-2,2'-bipyridyl-6,6'-dicarboxylic acid), and the structure and stability of their complexes with lanthanides and actinides were studied. The extraction tests were performed for Am, the lanthanide series, and transition metals in polar diluents in the presence of chlorinated cobalt dicarbollide and showed high distribution coefficients for Am. It was also found that the type of substituent on the amidic nitrogen exerts a great influence on the extraction of the light lanthanides. To understand the nature of this effect, we carried out quantum-chemical calculations at the DFT level, binding-constant determinations, and X-ray structure determinations of the complexes. UV/VIS titrations showed that the composition of all the amide-lanthanide complexes in solution is 1:1. Although the binding constants are high (log β about 6-7 in acetonitrile solution), lanthanide ions show binding constants of the same order of magnitude for the dialkyl-substituted extractants. The X-ray structures of the complexes of the bipyridyl-based amides show a 1:1 composition and a coordination number of 10 for the ions. The DFT-optimized structures of the compounds are in good agreement with those obtained by X-ray diffraction. The gas-phase affinity of the amides for lanthanides correlates strongly with the distribution ratios. We can infer that the bipyridyl-based amides form complexes with metal nitrates that have similar structures in the solid and gas phases and in solution, and the DFT

  14. Ab-initio study on the absorption spectrum of color change sapphire based on first-principles calculations with considering lattice relaxation-effect

    Science.gov (United States)

    Novita, Mega; Nagoshi, Hikari; Sudo, Akiho; Ogasawara, Kazuyoshi

    2018-01-01

In this study, we investigated the α-Al2O3:V3+ material, the so-called color-change sapphire, based on first-principles calculations without reference to any experimental parameter. The molecular orbital (MO) structure was estimated by one-electron MO calculations using the discrete variational-Xα (DV-Xα) method. Next, the absorption spectra were estimated by many-electron calculations using the discrete variational multi-electron (DVME) method. The effect of lattice relaxation on the crystal structures was estimated from first-principles band-structure calculations. We performed geometry optimizations on pure α-Al2O3 and on α-Al2O3 with the impurity V3+ ion using the Cambridge Serial Total Energy Package (CASTEP) code. The effect of energy corrections, such as the configuration-dependence correction and the correlation correction, was also investigated in detail. The results revealed that the structural change in α-Al2O3:V3+ resulting from the geometry optimization improved the calculated absorption spectra. Combining the lattice-relaxation effect with the energy corrections further improves the agreement with experiment.

  15. 24 CFR 982.515 - Family share: Family responsibility.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT SECTION 8 TENANT BASED ASSISTANCE: HOUSING CHOICE VOUCHER PROGRAM Rent and Housing Assistance Payment § 982.515 Family share: Family responsibility. (a) The family share is calculated by subtracting the amount of the housing assistance payment from the gross rent. (b) The family rent to owner is...
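The family-share rule in paragraph (a) is simple arithmetic and can be sketched directly (the dollar amounts below are illustrative, not taken from the regulation):

```python
def family_share(gross_rent, hap):
    """24 CFR 982.515(a): family share = gross rent minus the housing
    assistance payment (HAP)."""
    return gross_rent - hap

# Illustrative monthly figures only.
share = family_share(gross_rent=1200, hap=900)
```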

  16. GTV-based prescription in SBRT for lung lesions using advanced dose calculation algorithms

    International Nuclear Information System (INIS)

    Lacornerie, Thomas; Lisbona, Albert; Mirabel, Xavier; Lartigau, Eric; Reynaert, Nick

    2014-01-01

The aim of the current study was to investigate how dose is prescribed to lung lesions during SBRT when using advanced dose calculation algorithms that take electron transport into account (type B algorithms). As type A algorithms do not account for secondary electron transport, they overestimate the dose to lung lesions. Type B algorithms are more accurate, but no consensus has yet been reached regarding dose prescription. The positive clinical results obtained using type A algorithms should be used as a starting point. In the current work a dose-calculation experiment is performed, comparing different prescription methods. Three cases with three different sizes of peripheral lung lesions were planned on three different treatment platforms. For each case, 60 Gy to the PTV was prescribed using a type A algorithm, and the dose distribution was recalculated using a type B algorithm to evaluate the impact of secondary electron transport. Secondly, for each case a type B algorithm was used to prescribe 48 Gy to the PTV, and the resulting doses to the GTV were analyzed. Finally, prescriptions based on specific GTV dose volumes were evaluated. When a type A algorithm was used to prescribe the same dose to the PTV, the differences in median GTV dose among platforms and cases were always less than 10% of the prescription dose. Prescription to the PTV based on type B algorithms leads to greater variability of the median GTV dose among cases and among platforms (24% and 28%, respectively). However, when 54 Gy was prescribed as the median GTV dose using a type B algorithm, the observed variability was minimal. Normalizing the prescription dose to the median GTV dose for lung lesions avoids variability among cases and treatment platforms in SBRT when type B algorithms are used to calculate the dose. The combination of using a type A algorithm to optimize a homogeneous dose in the PTV and using a type B algorithm to prescribe the

  17. Energetics and performance of a microscopic heat engine based on exact calculations of work and heat distributions

    International Nuclear Information System (INIS)

    Chvosta, Petr; Holubec, Viktor; Ryabov, Artem; Einax, Mario; Maass, Philipp

    2010-01-01

    We investigate a microscopic motor based on an externally controlled two-level system. One cycle of the motor operation consists of two strokes. Within each stroke, the two-level system is in contact with a given thermal bath and its energy levels are driven at a constant rate. The time evolutions of the occupation probabilities of the two states are controlled by one rate equation and represent the system's response with respect to the external driving. We give the exact solution of the rate equation for the limit cycle and discuss the emerging thermodynamics: the work done on the environment, the heat exchanged with the baths, the entropy production, the motor's efficiency, and the power output. Furthermore we introduce an augmented stochastic process which reflects, at a given time, both the occupation probabilities for the two states and the time spent in the individual states during the previous evolution. The exact calculation of the evolution operator for the augmented process allows us to discuss in detail the probability density for the work performed during the limit cycle. In the strongly irreversible regime, the density exhibits important qualitative differences with respect to the more common Gaussian shape in the regime of weak irreversibility
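A minimal numerical sketch of one such stroke, assuming a single relaxation rate obeying detailed balance and a linearly driven level; all parameters are illustrative, and the paper itself works with the exact solution of the rate equation rather than this Euler discretization:

```python
import math

def stroke_work(E0, rate, beta, duration, gamma=1.0, n=1000):
    """Work done on the two-level system during one stroke.

    The excited level is driven linearly, E(t) = E0 + rate * t, while the
    occupation p(t) relaxes toward the instantaneous Gibbs value at inverse
    temperature beta with relaxation rate gamma (one rate equation).
    """
    dt = duration / n
    p = 1.0 / (1.0 + math.exp(beta * E0))   # start in equilibrium
    work = 0.0
    for k in range(n):
        E = E0 + rate * k * dt
        p_eq = 1.0 / (1.0 + math.exp(beta * E))
        p += dt * gamma * (p_eq - p)        # rate-equation relaxation step
        work += p * rate * dt               # dW = p * dE for the driven level
    return work

w_stroke = stroke_work(E0=1.0, rate=0.5, beta=1.0, duration=2.0)
```

Because the occupation lags behind the instantaneous equilibrium during finite-rate driving, the stroke is irreversible; repeating the computation over many stochastic trajectories instead of the mean occupation would yield the work distribution the paper analyzes.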

  18. Electronic, Magnetic, and Transport Properties of Polyacrylonitrile-Based Carbon Nanofibers of Various Widths: Density-Functional Theory Calculations

    Science.gov (United States)

    Partovi-Azar, P.; Panahian Jand, S.; Kaghazchi, P.

    2018-01-01

    Edge termination of graphene nanoribbons is a key factor in determination of their physical and chemical properties. Here, we focus on nitrogen-terminated zigzag graphene nanoribbons resembling polyacrylonitrile-based carbon nanofibers (CNFs) which are widely studied in energy research. In particular, we investigate magnetic, electronic, and transport properties of these CNFs as functions of their widths using density-functional theory calculations together with the nonequilibrium Green's function method. We report on metallic behavior of all the CNFs considered in this study and demonstrate that the narrow CNFs show finite magnetic moments. The spin-polarized electronic states in these fibers exhibit similar spin configurations on both edges and result in spin-dependent transport channels in the narrow CNFs. We show that the partially filled nitrogen dangling-bond bands are mainly responsible for the ferromagnetic spin ordering in the narrow samples. However, the magnetic moment becomes vanishingly small in the case of wide CNFs where the dangling-bond bands fall below the Fermi level and graphenelike transport properties arising from the π orbitals are recovered. The magnetic properties of the CNFs as well as their stability have also been discussed in the presence of water molecules and the hexagonal boron nitride substrate.

  19. Calculation methods for determining dose equivalent

    International Nuclear Information System (INIS)

    Endres, G.W.R.; Tanner, J.E.; Scherpelz, R.I.; Hadlock, D.E.

    1988-01-01

A series of calculations of neutron fluence as a function of energy in an anthropomorphic phantom was performed to develop a system for determining the effective dose equivalent for external radiation sources. Critical-organ dose equivalents are calculated, and effective dose equivalents are determined using ICRP-26 methods. Quality factors based on both the present definitions and the ICRP-40 definitions are used in the analysis. The results of these calculations are presented and discussed.
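The ICRP-26 effective dose equivalent is the tissue-weighted sum of the organ dose equivalents; a sketch using the published ICRP-26 weighting factors (the organ doses themselves are sample numbers, not results from this report):

```python
# ICRP-26 tissue weighting factors (they sum to 1.0).
W_T = {
    "gonads": 0.25, "breast": 0.15, "red bone marrow": 0.12, "lung": 0.12,
    "thyroid": 0.03, "bone surfaces": 0.03, "remainder": 0.30,
}

def effective_dose_equivalent(organ_doses):
    """H_E = sum over tissues of w_T * H_T (organ dose equivalents in Sv)."""
    return sum(W_T[t] * h for t, h in organ_doses.items())

# Sanity check: uniform whole-body irradiation of 1 Sv must give H_E = 1 Sv.
h_e = effective_dose_equivalent({t: 1.0 for t in W_T})
```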

  20. Review of theoretical calculations of hydrogen storage in carbon-based materials

    Energy Technology Data Exchange (ETDEWEB)

    Meregalli, V.; Parrinello, M. [Max-Planck-Institut fuer Festkoerperforschung, Stuttgart (Germany)

    2001-02-01

    In this paper we review the existing theoretical literature on hydrogen storage in single-walled nanotubes and carbon nanofibers. The reported calculations indicate a hydrogen uptake smaller than some of the more optimistic experimental results. Furthermore the calculations suggest that a variety of complex chemical processes could accompany hydrogen storage and release. (orig.)